
A PAPER PRESENTATION ON

ARTIFICIAL NEURAL NETWORKS FOR


e-NOSE
BY



K.S.THANUJA                     Y.SARIKA
III/IV B-TECH III/IV B-TECH



E-mail id: thanu_ks@yahoo.com
sarika.2757@gmail.com



SIDDHARTH INSTITUTE OF ENGG. & TECH.
PUTTUR








ARTIFICIAL NEURAL NETWORKS FOR
e-NOSE


ABSTRACT:

Neural networks have seen an explosion of
interest over the last few years. Their primary
appeal is their ability to emulate the brain's
pattern-recognition skills, and their sweeping
success can be attributed to a few key factors,
chiefly their ability to learn by example. This
paper explains the main features of neural
networks and shows how they are being
successfully applied across an extraordinary
range of problem domains, in areas as diverse
as finance, medicine, engineering and physics.

The electronic nose is a new and promising
technology that is rapidly becoming a valuable
tool for the organoleptic evaluation of food
parameters related to taste and smell, and could
replace human sensory panels in quality control
applications where an objective, rapid and
synthetic evaluation of the aroma of many
specimens is required. An electronic nose is
generally composed of a chemical sensing system
(e.g., a sensor array or spectrometer) and a
pattern recognition system (e.g., an artificial
neural network). We are developing electronic
noses for the automated identification of volatile
chemicals for environmental and medical
applications. In this paper, we briefly describe
neural networks and electronic noses, and show
some results from a prototype electronic nose.

1.INTRODUCTION:
All the electronic noses developed so far are
based on the same working principle: an array of
chemical sensors mimicking the olfactory
receptors, matched with a suitable data
processing method, makes it possible to retrieve
quantitative and qualitative information about the
chemical environment. A sensor comprises a
material whose physical properties vary
according to the concentration of some chemical
species. These changes are then translated into
an electrical or optical signal which is recorded
by a device. Unlike the other physical senses,
some aspects of the physiological working
principles of human taste and olfaction are still
unclear. Because of these intrinsic difficulties in
understanding the nature of these senses,
only sporadic research on the possibility of
designing artificial olfactory systems was
carried out until the end of the 1980s.

1.1 Why use neural networks?

Neural networks, with their remarkable ability to
derive meaning from complicated or imprecise
data, can be used to extract patterns and detect
trends that are too complex to be noticed by
either humans or other computer techniques. A
trained neural network can be thought of as an
"expert" in the category of information it has
been given to analyze. This expert can then be
used to provide projections for new situations
of interest and to answer "what if" questions.

Other advantages include:
1. Adaptive learning: An ability to learn how
to do tasks based on the data given for training
or initial experience.
2. Self-Organization: An ANN can create its
own organization or representation of the
information it receives during learning time.
3. Real Time Operation: ANN computations
may be carried out in parallel, and special
hardware devices are being designed and
manufactured which take advantage of this
capability.

1.2 Architecture of neural
networks:

The most common type of artificial neural network
consists of three groups, or layers, of units: a
layer of "input" units is connected to a layer of
"hidden" units, which is connected to a layer of
"output" units (see Figure). The activity of the input
units represents the raw information that is
fed into the network. The activity of each
hidden unit is determined by the activities of
the input units and the weights on the
connections between the input and the
hidden units. The behavior of the output
units depends on the activity of the hidden
units and the weights between the hidden
and output units. This simple type of
network is interesting because the hidden
units are free to construct their own
representations of the input. The weights
between the input and hidden units
determine when each hidden unit is active,
and so by modifying these weights, a hidden
unit can choose what it represents. We also
distinguish single-layer and multi-layer
architectures. The single-layer organization,
in which all units are connected to one
another, constitutes the most general case
and has more potential computational
power than hierarchically structured multi-
layer organizations. In multi-layer networks,
units are often numbered by layer, instead of
following a global numbering.
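To make this concrete, the following minimal sketch (in Python, with illustrative layer sizes and random weights rather than the network described here) propagates a vector of sensor readings through one hidden layer to the output layer.

```python
# Minimal sketch of the three-layer architecture described above
# (illustrative sizes and random weights; not the network used in this paper).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_inputs, n_hidden, n_outputs = 9, 6, 3          # e.g. 9 sensor inputs, 3 vapor classes

W_ih = rng.normal(scale=0.5, size=(n_hidden, n_inputs))   # input -> hidden weights
W_ho = rng.normal(scale=0.5, size=(n_outputs, n_hidden))  # hidden -> output weights

x = rng.random(n_inputs)          # raw sensor readings (the "input activity")
h = sigmoid(W_ih @ x)             # hidden activity: set by inputs and their weights
y = sigmoid(W_ho @ h)             # output activity: set by hidden units and their weights
print("hidden activity:", h)
print("output activity:", y)
```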

1.3 Pattern Recognition:

An important application of neural networks is
pattern recognition. Pattern recognition can be
implemented by using a feed-forward neural
network that has been trained accordingly.
During training, the network is trained to
associate outputs with input patterns. When the
network is used, it identifies the input pattern
and tries to output the associated output pattern.
The power of neural networks comes to life
when a pattern that has no associated output
is given as an input. In this case, the
network gives the output that corresponds to the
taught input pattern that is least different from
the given pattern.

2. How does an electronic nose
work?
The two main components of an electronic nose
are the sensing system and the automated
pattern recognition system. The sensing system
can be an array of several different sensing
elements (e.g., chemical sensors), where each
element measures a different property of the
sensed chemical, or it can be a single sensing
device (e.g., spectrometer) that produces an
array of measurements for each chemical, or it
can be a combination. Each chemical vapor
presented to the sensor array produces a
signature or pattern characteristic of the vapor.
By presenting many different chemicals to the
sensor array, a database of signatures is built
up. This database of labeled signatures is used
to train the pattern recognition system. The goal
of this training process is to configure the
recognition system to produce unique
classifications of each chemical so that an
automated identification can be implemented.
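As an illustration of this train-then-identify workflow, the sketch below builds a small synthetic database of labelled sensor-array signatures and fits a feed-forward classifier to it; the chemicals, array size and the use of scikit-learn's MLPClassifier are assumptions made only for the example.

```python
# Toy illustration of the signature-database workflow: label each presented
# vapor, collect the sensor-array response, then train a classifier on the set.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
chemicals = ["acetone", "ethanol", "ammonia"]     # hypothetical vapors
n_sensors, n_samples = 9, 30

bases = {chem: rng.random(n_sensors) for chem in chemicals}   # idealized signatures
X, y = [], []
for label, chem in enumerate(chemicals):
    for _ in range(n_samples):
        # each presentation yields a noisy copy of the vapor's signature
        X.append(bases[chem] + rng.normal(scale=0.05, size=n_sensors))
        y.append(label)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(np.array(X), np.array(y))                 # train on the labelled signature database

unknown = bases["ethanol"] + rng.normal(scale=0.05, size=n_sensors)
print("identified as:", chemicals[clf.predict(np.array([unknown]))[0]])
```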


The quantity and complexity of the data
collected by a sensor array can make automated
conventional chemical analysis of the data
difficult. One approach to
chemical vapor identification is to build an array
of sensors, where each sensor in the array is
designed to respond to a specific chemical. With
this approach, the number of unique sensors
must be at least as great as the number of
chemicals being monitored. It is both expensive
and difficult to build highly selective chemical

sensors. A chemical compound is instead identified
by the pattern of outputs given by the different
sensors, using pattern recognition methods.

Artificial neural networks (ANNs), which have
been used to analyze complex data and to
recognize patterns, are showing promising
results in chemical vapor recognition. When an
ANN is combined with a sensor array, the
number of detectable chemicals is generally
greater than the number of sensors. Also, less
selective sensors which are generally less
expensive can be used with this approach. Once
the ANN is trained for chemical vapor
recognition, operation consists of propagating
the sensor data through the network. Since this
is simply a series of vector- matrix
multiplications, unknown chemicals can be
rapidly identified in the field. Electronic noses
that incorporate ANNs have been demonstrated
in various applications. Some of these
applications will be discussed later in the paper.
Many ANN configurations and training
algorithms have been used to build electronic
noses, including backpropagation-trained feed-
forward networks; fuzzy ARTMAPs; Kohonen's
self-organizing maps (SOMs); learning
vector quantizers (LVQs); Hamming networks;
Boltzmann machines; and Hopfield networks.

Figure 1 illustrates the basic schematic of an
electronic nose.


The sensors used in an electronic nose can be
either mass transducers (such as the quartz
microbalance, QMB) or chemo-resistors (based
on metal oxides or conducting polymers); some
arrays comprise both types of sensors.

Currently, extensive research is being carried out
on the use of metallo-porphyrins as
coating materials for QMBs: the main feature of
such sensors is the dependence of the sensing
properties (selectivity and sensitivity) on the
nature of the substituents of the porphyrin. This
flexibility makes this class of compounds of
interest for electronic nose applications.


2.1 PROTOTYPE ELECTRONIC
NOSE:

One of our prototype electronic noses, shown in
Figure 2, is composed of an array of nine tin
oxide vapor sensors, a humidity sensor, and a
temperature sensor coupled with an ANN. Two
types of ANNs were constructed for this
prototype: the standard multilayer feed-forward
network trained with the back propagation
algorithm and the fuzzy ART map algorithm.
During operation, a chemical vapor is blown
across the array; the sensor signals are
digitized and fed into the computer, and the
ANN (implemented in software) then identifies
the chemical. This identification time is limited
only by the response time of the chemical
sensors, which is on the order of seconds. This
prototype nose has been used to identify
common household chemicals by their odor.

Figure 3 illustrates the structure of the ANN. The
nine tin-oxide sensors are commercially
available, Taguchi-type gas sensors. (Sensor 1,
TGS 109; Sensors 2 and 3, TGS 822; Sensor 4,
TGS 813; Sensor 5, TGS 821; Sensor 6, TGS
824; Sensor 7, TGS 825; Sensor 8, TGS 842;
and Sensor 9, TGS 880). Exposure of a tin-
oxide sensor to a vapor produces a large
change in its electrical resistance. The humidity
sensor (Sensor 10: NH-02) and the temperature
sensor (Sensor 11: 5KD-5) are used to monitor
the conditions of the experiment and are also
fed into the ANN.

Although each sensor is designed for a specific
chemical, each responds to a wide variety of
chemicals. Collectively, these sensors respond
with unique signatures (patterns) to different
chemicals. During the training process, various
known chemicals and mixtures are presented to
the system. By training on samples of various
chemicals, the ANN learns to recognize the
different chemicals.
3. Data Analysis:







3.1 Principal component analysis:
The data of an electronic nose experiment are
represented in a multidimensional space (the
sensor space), whose dimension is equal to the
number of sensors in the array. A single
measurement is an n-dimensional vector.




[Figure: Data analysis methods for electronic nose data: principal component analysis, self-organizing map and other neural models]

Let X be the data matrix, and let us consider
the singular value decomposition of X,
X = U*S*V^T: the vectors v are the basis of the
PC space, the vectors u are the projections of
the experimental data in the PC space, and the
scalars s are the singular values; they can be
considered a measure of the contribution of the
respective principal component to the systematic
variance.
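For illustration, the principal component projection of a (synthetic) electronic-nose data matrix can be computed directly from the SVD, as in the following sketch.

```python
# Sketch of principal component analysis of an electronic-nose data matrix X
# via the singular value decomposition X = U S V^T (synthetic data for illustration).
import numpy as np

rng = np.random.default_rng(2)
n_measurements, n_sensors = 50, 9
X = rng.random((n_measurements, n_sensors))
X = X - X.mean(axis=0)                       # center the data before PCA

U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = U * s                               # projections of the measurements in PC space
explained = s**2 / np.sum(s**2)              # each component's share of systematic variance
print("first two PCs explain {:.1%} of the variance".format(explained[:2].sum()))
```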
3.2 Self Organizing Map and other
neural models:
The SOM belongs to the category of competitive
learning methods and is based on unsupervised
learning. This last aspect means that the SOM
algorithm does not require any information other
than the sensor outputs. A SOM is a
network formed by N neurons arranged as the
nodes of a planar grid.
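A minimal sketch of such a map, trained on unlabelled synthetic sensor vectors, is given below; the grid size and the learning-rate and neighbourhood schedules are illustrative choices only.

```python
# Minimal sketch of a self-organizing map on a planar grid, trained directly on
# (synthetic) sensor output vectors with no labels, i.e. fully unsupervised.
import numpy as np

rng = np.random.default_rng(3)
grid_w, grid_h, n_sensors = 5, 5, 9
weights = rng.random((grid_w, grid_h, n_sensors))          # one codebook vector per node
coords = np.stack(np.meshgrid(np.arange(grid_w), np.arange(grid_h), indexing="ij"), axis=-1)

data = rng.random((200, n_sensors))                        # unlabeled sensor measurements
for t, x in enumerate(data):
    lr = 0.5 * (1 - t / len(data))                         # decaying learning rate
    radius = 2.0 * (1 - t / len(data)) + 0.5               # decaying neighborhood radius
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)  # best-matching unit
    grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
    influence = np.exp(-(grid_dist**2) / (2 * radius**2))[..., None]
    weights += lr * influence * (x - weights)              # pull the neighborhood toward x
```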


4. Applications for Neural
Networks:

Neural networks are applicable in virtually every
situation in which a relationship between the
predictor variables (independents, inputs) and
predicted variables (dependents, outputs) exists,
even when that relationship is very complex and
not easy to articulate in the usual terms of
"correlations" or "differences between groups." A
few representative examples of problems to
which neural network analysis has been applied
successfully are:

* Detection of medical phenomena
* Stock market prediction
* Monitoring the condition of machinery
* Industrial process control
* Data validation
* Target marketing

4.1 Electronic Noses for Medicine:

Because the sense of smell is an important
sense to the physician, an electronic nose has
applicability as a diagnostic tool. An electronic
nose can examine odors from the body (e.g.,
breath, wounds, body fluids, etc.) and identify
possible problems. Odors in the breath can be
indicative of gastrointestinal problems, sinus
problems, infections, diabetes, and liver
problems. Infected wounds and tissues emit
distinctive odors that can be detected by an
electronic nose. Odors coming from body fluids
can indicate liver and bladder problems.
Currently, an electronic nose for examining
wound infections is being tested at South
Manchester University Hospital. A more futuristic
application of electronic noses has been recently
proposed for telesurgery. While the inclusion of
visual, aural, and tactile senses into telepresent
systems
is widespread, the sense of smell has been
largely ignored. An electronic nose will
potentially be a key component in an olfactory
input to telepresent virtual reality systems
including telesurgery. The electronic nose would
identify odors in the remote surgical
environment. These identified odors would then
be electronically transmitted to another site
where an odor generation system would
recreate them.

5. Conclusion:

The computing world has a lot to gain from
neural networks. Their ability to learn by
example makes them very flexible and powerful.
Furthermore there is no need to devise an
algorithm in order to perform a specific task; i.e.
there is no need to understand the internal
mechanisms of that task. Perhaps the most
exciting aspect of neural networks is the
possibility that some day 'conscious' networks
might be produced. There are a number of
scientists arguing that consciousness is a
'mechanical' property and that 'conscious' neural
networks are a realistic possibility.

ANNs are used experimentally to implement
electronic noses. Electronic noses have several
potential applications, for example in
telesurgery: the electronic nose would identify
odours in the remote surgical environment, and
these identified odours would then be
electronically transmitted to another site
where an odour generation system would recreate
them. Because the sense of smell can be an
important sense to the surgeon, telesmell would
enhance telepresent surgery. The major
differences between electronic noses and
standard analytical chemistry equipment are that
electronic noses

(1) Produce a qualitative output
(2) Can often be easier to automate
(3) Can be used in real-time analysis.

Further work involves comparing neural
network sensor analysis to more conventional
techniques, exploring other neural network
paradigms, and evolving the preliminary
prototypes into field systems. Finally, we would
like to state that even though neural networks
have a bright future, we will only get the best
out of them when they are integrated with
computing, AI, fuzzy logic and related subjects.

Neural networks do not perform
miracles. But if used sensibly they can
produce some amazing results.


A Smart Camera for Traffic Surveillance



PRESENTED BY

V.S.KASHYAP P.SURYA PRAKASH
vsk.kashyap@gmail.com prakash1431@gmail.com
ph no :9247562052 ph no:9885233780


Abstract The integration of advanced CMOS image sensors with high-
performance processors into an embedded system facilitates new application
classes such as smart cameras. A smart camera combines video sensing,
video processing and communication within a single device. This paper
reports on the prototype implementation of a smart camera for traffic
surveillance. It captures a video stream, computes traffic information and
transfers the compressed video stream and the traffic information to a
network node. The experimental results achieved with the implemented
stationary vehicle detection demonstrate the feasibility of our approach.


Introduction
Due to their logarithmic behavior, high dynamic range and high bit
resolution the low-cost and low-power CMOS sensors acquire images with
the necessary quality for further image processing under varying
illumination conditions. The integration of these advanced image sensors
with high-performance processors into an embedded system facilitates new
application classes such as smart cameras. Smart cameras not only capture
images or video sequences, they further perform high-level image
processing such as motion analysis and face recognition on-board and
transmit the (compressed) video data as well as the extracted video
information via a network. An important application area where smart
cameras can potentially and advantageously replace most known cameras,
frame grabbers and computer solutions is visual traffic surveillance [1].
CMOS image sensors can overcome problems like large intensity contrasts
due to weather conditions or road lights and further blooming, which is an
inherent weakness of existing CCD image sensors. Furthermore, noise in the
video data is reduced by the capability of video computation
close to the CMOS sensor. Thus, the smart camera delivers a new video
quality and better video analysis results compared to existing
solutions. Beside these qualitative arguments, and from a system architecture
point of view, the smart camera is an important concept in future digital and
heterogeneous third generation visual surveillance systems [2]. Not only
image enhancement and image compression but also video computing
algorithms for scene analysis and behavior understanding are becoming
increasingly important. These algorithms have a high demand for
real-time performance and memory. Fortunately, smart cameras can support
these demands as low-power, low-cost embedded systems with sufficient
computing performance and memory capacity. Furthermore, they offer
flexible video transmission and computing in scalable networks with
thousands of cameras through a fully digital interface. The purpose of this
paper is to present first results of an ongoing research project between
ARC Seibersdorf research, the Institute for Technical Informatics at Graz
University of Technology and the pattern recognition and image processing
group at Vienna University of Technology. The primary goal of this project
is the development of a smart camera for traffic surveillance. This paper
presents the camera's prototype implementation and a case study of our
smart camera concept with respect to stationary vehicle detection in tunnels.
We chose this application, because it is by far the most important application
(80 percent of all applications) in traffic surveillance. The remainder of the
paper is organized as follows: Section 2 presents the requirements of a smart
camera and lists the related work.
Requirements and Related Work:
Requirements of a Smart Camera:
In general, a smart camera is composed of a sensor unit, a processing unit and
a communication unit. In this section we briefly discuss the requirements for
each of these units as well as some system-wide requirements.
Sensor Requirements:
The image sensor is the prime input for a smart camera.
An appropriate image quality is, therefore, essential for the performance of
the entire system.
Dynamic Range:
Traffic surveillance applications enforce high demands
on the image sensor. Typical traffic situations may contain a high dynamic range,
e.g., when high-intensity areas, such as the high beam of a vehicle, appear
concurrently with low-intensity areas such as a car's silhouette at night.
Image sensors with high dynamic range and little blur are preferred for these
applications. Additionally, high dynamic-range sensors ease the design of
the camera control and the control of the lens aperture in changing light
conditions.
A SMART CAMERA FOR TRAFFIC SURVEILLANCE:
Resolution and Frame Rate:
Many available image sensors feature only small image
formats such as CIF and QCIF. These formats are acceptable for cell phones.
However, surveillance cameras require a larger resolution due to the
requirements of the image processing and the operators. Note that many
currently available surveillance systems deliver images in PAL resolution
(720x576 pixels). Most image processing algorithms for the smart camera
are based on monochrome input, however, the operators prefer color images
for manual surveillance. The maximum frame rate (in fps) is another
important parameter of the smart camera. It is determined by the image
sensor and the succeeding image processing stages. A frame rate
of 15 fps is targeted for live video and fast response times of the image
processing tasks.
Digital Interface :
In order to reduce the effect of temperature drift and
aging as well as to avoid glue logic the image sensor has to deliver digital
video output. Thus, the sensor has to include analog amplifiers and ADCs.
Processing Requirements:
There are various tasks that must be executed by the processing subsystem
including (i) video compression, (ii) video analysis, (iii) the computation of
traffic statistics, and finally (iv) the camera control and firmware.
Video Compression:
To reduce the required transmission bandwidth, the video stream must be
compressed. The corresponding video encoder is therefore executed on the
smart camera.
Video Analysis :
The on-board video analysis transforms the digital
network image sensor into a smart camera. The video analysis extracts an
abstract scene description out of the raw video data. This description is then
exploited, for example, to detect stationary vehicles, lost objects, or
wrong-way drivers. If an extraordinary situation has been detected
by the video analysis, an alarm is triggered and sent to the central
control station.
Computation of Traffic Statistics:
Finally, traffic statistics are computed out of the video
stream. These statistics include the number of vehicles per time interval, the
average speed of vehicles per vehicle class or the lane occupancy. These
parameters are transmitted on demand or continuously to a network node.
Camera Control and Firmware:
Other important tasks of the processing subsystem are
the camera control, i.e., aperture and flash control, and the firmware of the
embedded system. The firmware includes the control of all peripherals, the
management of all software tasks, and software reconfiguration via the
network interface.
Communication Requirements:
The compressed video stream and the output of the video analysis are
transferred to the control station via the communication unit. For flexible
and fault-tolerant communication, different network connections such as
Ethernet, wireless LAN and GSM/GPRS should be possible.
Beside the standard data upload, the smart camera must also support data
download to enable changing the configuration or firmware of the camera
via the network.
System Requirements:
Low-Power:
Power consumption is a major design constraint in recent
embedded systems. High power consumption reduces the operation time in
battery or solar-powered environments. Another important aspect is heat
dissipation, which must be low in order to avoid active cooling. Active
cooling, e.g., fans, increases size and cost as well as limiting the camera's
area of application.
Real-Time:
The smart camera has various requirements concerning its
firm and soft real-time performance. There are several timing constraints
concerning the camera control and the peripherals, e.g., the flash trigger.
There are also timing constraints for the image analysis algorithms, e.g., a
stationary car has to be detected within 6 seconds.
Architecture of the Smart Camera :
System Overview:
For traffic surveillance the entire smart camera is packed into a single
cabinet which is typically mounted in tunnels and alongside highways. The
electrical power is supplied either by a power socket or by solar panels.
Thus, our smart camera is exposed to harsh environmental influences such
as rapid changes in temperature and humidity as well as wind and rain. It
must be implemented as an embedded system with tight operating
constraints such as size, power consumption and temperature range.

Architecture:
As depicted in Figure 1, the smart camera is divided into three major parts:
(i) the video sensor, (ii) the processing unit, and (iii) the communication
unit.

Video Sensor:
The video sensor represents the first stage in the smart
camera's overall data flow. The sensor captures incoming light and
transforms it into electrical signals that can be transferred to the processing
unit. A CMOS sensor best fulfills the requirements for a video sensor. These
sensors feature a high dynamic range due to their logarithmic characteristics and
provide on-chip ADCs and amplifiers. Our first prototype of the smart
camera is equipped with the LM-9618 CMOS sensor from National
Semiconductor. Its specification is listed in Table 1.


Dynamic range | Type       | Resolution | Max. fps | ADC-Resolution | Sensor control
100 dB        | Monochrome | 640        |          |                |
Processing Unit :
The second stage in the overall data flow is the processing
unit. Due to the high-performance on-board image and video processing the
requirements on the computing performance are very high. A rough
estimation results in 10 GIPS computing performance. These performance
requirements together with the various constraints of the embedded system
solution are fulfilled with digital signal processors (DSP). The smart
camera is equipped with two TMS320DM642 DSPs from Texas Instruments
running at 600 MHz. Both DSPs are loosely coupled via the Multichannel
Buffered Serial Ports (McBSP), and each processor is connected to its own
local memory. The video sensor is connected via a FIFO memory with one
DSP to relax the timing between sensor and DSP. The image is then
transferred into the DSP's external memory, which has a capacity between
8 MB and 256 MB.
Communication Unit:
The final stage of the overall data flow in our smart
camera is the communication unit. The processing unit transfers the
data to the communication unit via a generic interface. This interface eases the
implementation of the different network connections such as Ethernet,
wireless LAN and GSM/GPRS. For the Ethernet network interface only the
physical-layer has to be added because the media-access control layer
is already implemented on the DSP. A second class of interfaces is also
managed by the communication unit. Flashes, pan-tilt-zoom heads (PTZ), and
domes are controlled using the communication unit. The moving parts (PTZ,
dome) are typically controlled using serial interfaces like RS232 and
RS422. Additional in/outputs are also provided, e.g., to trigger flashes or
snapshots.
Low-Power Considerations:
The key power saving strategy used in our smart camera is based on system-
level Dynamic Power Management (DPM) .
Dynamic Power Management:
The basic idea behind DPM is that individual components can be switched
to different power modes during runtime. Each power mode is characterized
by a different functionality/ performance of the component and the
corresponding power consumption. For instance, if a specific component is
not used during a certain time period it can be switched off. The commands
to change the components' power modes are issued by a central Power
Manager (PM). The commands are issued corresponding to a Power
Managing Policy (PMP). The PMP is usually implemented in the operating
system of the main processing component. In order to decide which
command to issue, the PM must have knowledge about the system's
workload. Note that switching a component's power mode also requires
some time. Thus, the PM must include these transition times in its PMP in
order to avoid malfunction of the system.
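The following sketch illustrates one possible (hypothetical) policy of this kind: a low-power mode is selected only if its wake-up time fits comfortably within the expected idle period. The mode names and numbers are assumptions, not the camera's actual PMP.

```python
# Illustrative sketch of a dynamic power management policy: pick the
# lowest-power mode whose transition (wake-up) time fits the expected idle window.
from dataclasses import dataclass

@dataclass
class PowerMode:
    name: str
    power_mw: float     # power drawn in this mode
    wakeup_ms: float    # time needed to return to full operation

MODES = [
    PowerMode("full", 1500.0, 0.0),
    PowerMode("idle", 600.0, 2.0),
    PowerMode("sleep", 50.0, 25.0),
]

def choose_mode(expected_idle_ms: float, margin: float = 2.0) -> PowerMode:
    """Select the lowest-power mode whose wake-up time still fits the idle period."""
    best = MODES[0]
    for mode in MODES:
        if mode.wakeup_ms * margin <= expected_idle_ms and mode.power_mw < best.power_mw:
            best = mode
    return best

print(choose_mode(1.0).name)    # short idle  -> stay in full power
print(choose_mode(100.0).name)  # long idle   -> sleep
```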


DPM in the Smart Camera
In the smart camera the PM is located in the OS kernel of the host DSP. The
power modes of the individual components are controlled by sending
individual commands via the I2C bus. Each component has its specific
power modes comprising different power consumption, speeds and wake-up
times. These characteristics are stored in look-up tables in the PM and are
used as input for the PMP. If the smart camera is running in Normal Mode
(Section 4.4), for instance, it is not necessary to run the corresponding DSPs
in full power mode. In this case, the PM sets the DSPs into a lower power
mode in a way that real-time requirements are still met. If the camera
changes its operating mode to the Alarm Mode, the PM sets all components
back to their high-performance modes.
Software:
Image Processing:
To demonstrate the video processing capabilities of the smart camera we
chose the area of stationary vehicle detection, which is an important
application in traffic surveillance. The qualitative decision is based on long-
term intensity changes of background pixels. We focused on the tunnel
environment, because background modelling is simpler compared to
an outdoor scene with, for example, swaying trees. Since more work will be
done, especially for outdoor scenes, we took care to design a smart
camera for future algorithms as well as to incorporate the video processing
algorithms for this application. It was assumed that the camera is static and
the ambient light conditions are constant. Thus, intensity changes
(foreground) are only caused by the motion of vehicles or by noise, e.g.
reflections or the lights of cars. Figure 3(c) shows a sample foreground. Intensity
values are grey values between 0 and 255. Pixels (x,y)^T of an image are
semantically background pixels if the difference I_{t-1}(x,y) - I_t(x,y) of two
consecutive images I_{t-1} and I_t is smaller than a threshold (stationary
case) or if the intensity value of a pixel is supported by a distribution over
background intensity values (statistical case). Because we assume a static
background, the intensity values of background pixels can be described
by a Gaussian distribution. Figure 2 sketches the stationary vehicle detection
algorithm. Each pixel's background model is initialized with a Gaussian,
which covers the whole intensity range. Then, an observation distribution is
updated for each pixel with every new available image of the video stream.
The mean and the variance of the observation distribution are estimated by
the sample mean and sample variance over the last images. To make the
parameters more stable over time, the image is spatially convolved with a
Gaussian filter with a standard deviation of 1.3 before parameter estimation
(see figure 3(a)). In each step, the observation and background distributions
are compared as shown in figure 3(e). If the significant parts, i.e.
between the 25%-quantile and the 75%-quantile, of both distributions do not
intersect each other, then a statistical long-term intensity change


in this particular pixel is detected. To make the algorithm robust, a further
morphological voting step in the 8x8 vicinity of this pixel is done. If the
majority of neighboring pixels do not show the same separation of
background and observation distribution, then the gap between both distributions
is closed by an updated, broader background model. Regions of pixels with
robust changes are selected by a connected component algorithm (see figure
3(b)). Stationary vehicles are detected if two events happen: (i) the area of
a region lies between a minimal and a maximal threshold, and (ii) each
intensity profile (see figure 3(d)) I_{t-k}(x,y), ..., I_t(x,y) over the last k images
of all pixels of a region shows a difference I_{i-k}(x,y) - I_i(x,y), for i in
[t-k+1, t], greater than a threshold. Besides, (ii) takes into account that the
statistical background
change was triggered by an abrupt intensity change, i.e. through a vehicle.
The adaptation of the background model is realized with respect to the
current observation distribution. Beside the case of the separation of both
distributions, further three cases can be distinguished for adaptation:
(1) The observation distribution is inside the background model
(2) The background model is inside the observation distribution
(3) Background model and observation distribution intersect each other
A SMART CAMERA FOR TRAFFIC SURVEILLANCE:
In (1), the background model is too broad. Therefore, the parameters of the
model are updated towards the parameters of the observation distribution.
This is realized by exponential averaging. The mean of the background
model is updated with the mean of the observation distribution
by defines how quickly the background model is
updated towards the current observation distribution. In (2) the background
model is only updated by exponential averaging, if the difference of the inter
quantile ranges of both distributions is small. This is defined by a new
adaptation factor defines the sensitivity with respect to
the inter-quantile range difference of In (3) the adaptation
rule is similar to the adaptation in (2). The background model intersects
the observation distribution from the left or from the right side. Accordingly,
the 75%-quantile or 25%-quantile of the background model is updated
towards the 75% quantile or 25%-quantile respectively of the observation
distribution. The mean of the background distribution remains the same.
Thus, the background model broadens and supports more of the current
observation intensities. Figure 3(f) shows the update factor over time. The
crosses show frames with background updates (AVG update) according to
adaptation case (1). In case of global intensity changes due to an ambient
light change, a significant number of pixel intensities will not be supported
by the current background models. Then, all pixel background models are
reset to the initialization models.
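The sketch below illustrates the underlying idea in simplified form: a per-pixel background Gaussian that is compared against the recent observation statistics and adapted by exponential averaging. The separation test and constants are stand-ins for the quantile-based rules described above, not the exact implementation.

```python
# Simplified sketch of per-pixel background modelling: an observation Gaussian,
# estimated over recent frames, is compared with a background Gaussian, which
# adapts by exponential averaging where no long-term change is detected.
import numpy as np

H, W = 576, 720
bg_mean = np.full((H, W), 128.0)      # background model mean (initially broad)
bg_var = np.full((H, W), 128.0 ** 2)  # background model variance
alpha = 0.02                          # adaptation factor for exponential averaging

def update_background(obs_mean, obs_var, sep_thresh=3.0):
    """Return a mask of long-term intensity changes and adapt the background model."""
    global bg_mean, bg_var
    # A pixel shows a long-term change if the observed mean lies far outside the
    # background distribution (a stand-in for the quantile-overlap test).
    changed = np.abs(obs_mean - bg_mean) > sep_thresh * np.sqrt(bg_var)
    # Where no change was detected, pull the background toward the observation.
    bg_mean = np.where(changed, bg_mean, (1 - alpha) * bg_mean + alpha * obs_mean)
    bg_var = np.where(changed, bg_var, (1 - alpha) * bg_var + alpha * obs_var)
    return changed

# Example with synthetic statistics over ten recent grey-value frames:
rng = np.random.default_rng(0)
frames = rng.normal(128.0, 5.0, size=(10, H, W))
mask = update_background(frames.mean(axis=0), frames.var(axis=0))
print(mask.sum(), "pixels flagged as long-term changes")
```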
Video Compression:
Video transmission at full PAL resolution and at 25 fps requires a bandwidth
of approximately 20 MB/s. State-of-the-art video compression reduces the
bandwidth needs by a factor of 100, down to about 1.5 Mb/s. The advanced simple
profile MPEG-4 encoding method is well-suited for traffic surveillance,
since the encoded quality, and therefore the required bandwidth can be
adapted to different needs. This MPEG-4 compression module is supplied by
a DSP- software provider. Performance data reports a required processing
power of approximately 4000 MIPS at full PAL resolution running at 25 fps.
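The figures quoted above can be checked with a short back-of-the-envelope calculation, assuming 2 bytes per pixel (e.g. YUV 4:2:2) for the raw PAL stream; the per-pixel format is an assumption of this sketch.

```python
# Rough check of the bandwidth figures quoted above, assuming PAL frames of
# 720x576 pixels, 2 bytes per pixel and 25 frames per second.
width, height, bytes_per_pixel, fps = 720, 576, 2, 25

raw_bytes_per_s = width * height * bytes_per_pixel * fps
print(f"raw stream: {raw_bytes_per_s / 1e6:.1f} MB/s")        # ~20.7 MB/s
compressed_bits_per_s = raw_bytes_per_s * 8 / 100             # ~100x compression
print(f"compressed: {compressed_bits_per_s / 1e6:.2f} Mb/s")  # ~1.7 Mb/s
```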
Network Connection:
Internet connections require the TCP/IP protocol implemented as an IP
stack. This software module manages the network traffic with its various
protocols like HTTP, FTP, and UDP. The smart camera uses HTTP to
provide a flexible and user-friendly user interface for operators to adapt
parameters, and to check the camera's vitality and log files. FTP is
used to download data stored in files to the camera like firmware updates or
new parameter sets. Finally, UDP is used for multicast streaming-video
transmission. The smart camera uses an implementation of an IP stack from
Windmill Innovations Traquair Data Systems, which is optimized for Texas
Instruments TMS320C6x series DSPs.




Firmware:
The smart camera's firmware controls the overall system behavior, and
provides interfaces and methods for different tasks like task management,
camera control, and control of peripherals. Four basic runlevels are defined
by the firmware: (i) The normal mode, (ii) the alarm mode, (iii) the full
update mode, and finally (iv) the partial update mode. In Normal Mode, the
video compression is running with low quality, and therefore with low
bandwidth requirements, while the video analysis tasks are running at full
rate. The second mode, the Alarm Mode, is entered, if an alarm situation has
been detected by the video analysis system. The quality of the video stream
is increased and, if necessary, the video analysis tasks are throttled to gain
more processing power for the network management. The calculation of the
traffic parameters remains at full rate. The Full-Update Mode is used if the
firmware is updated. Therefore, all analysis and compression tasks are
halted. Finally, the Partial-Update Mode is activated each time a task has to
be replaced or removed from the smart camera. The code for the new task is
downloaded to the camera, however, all other tasks remain running.
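As a rough illustration, the four run levels can be thought of as a small state machine; the configuration values below are assumptions that merely mirror the description above.

```python
# Hypothetical sketch of the four firmware run levels as a small state machine;
# the mode names follow the text, the task-throttling details are assumptions.
from enum import Enum, auto

class Mode(Enum):
    NORMAL = auto()          # low-quality compression, full-rate analysis
    ALARM = auto()           # high-quality compression, analysis may be throttled
    FULL_UPDATE = auto()     # firmware update: analysis and compression halted
    PARTIAL_UPDATE = auto()  # replace one task, others keep running

def configure(mode: Mode) -> dict:
    if mode is Mode.NORMAL:
        return {"compression_quality": "low", "analysis": "full", "statistics": "full"}
    if mode is Mode.ALARM:
        return {"compression_quality": "high", "analysis": "throttled", "statistics": "full"}
    if mode is Mode.FULL_UPDATE:
        return {"compression_quality": "off", "analysis": "off", "statistics": "off"}
    return {"compression_quality": "low", "analysis": "partial", "statistics": "full"}

print(configure(Mode.ALARM))
```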



Experiments:
Prototype Architecture:

In order to evaluate function and performance of our
smart camera as soon as possible we developed a prototype architecture.
This prototype is implemented as an embedded system and includes all
functional units of the final camera platform. The heart of our prototype is the
Network Video Development Kit (NVDK) from ATEME. The NVDK
includes a TMS320C6416 DSP with 264 MB SDRAM as well as an
Ethernet network extension card. We extended this prototype platform by an
additional extension board that connects the LM-9618 CMOS sensor with
the NVDK. Video data is transferred via FIFO memory. The architecture of
the prototype is very close to the final target platform of the smart
camera. Only a single DSP is available on the prototype, but separate test
runs deliver usable results concerning functionality and performance. Figure
5 presents a picture of our prototype.

Video Analysis:
Stationary vehicle detection as described in Section 4.1 was the first
algorithm implemented on our prototype. After porting the original Matlab
implementation to C++, we have done some performance experiments.
The unoptimized implementation required approximately 350 billion cycles
per frame. This corresponds to a processing time of 7 seconds per image at
full PAL resolution. The main bottlenecks of this implementation were
excessive memory thrashing and the intensive use of floating-point
operations. Floating-point arithmetic is rather expensive on the fixed-point
TMS320C64x DSP. Optimized floating-point runtime libraries improved
performance; however, we decided to implement a fixed-point numerical
format to represent fractional numbers in a 16-bit variable. This boosted the
performance to 1 fps by reducing memory thrashing and by implementing
the mentioned fixed-point number representation. The stationary vehicle
detection algorithm has been implemented by exploiting the compiler's class
template support, which made it easy to port the algorithm from Matlab to
C++. However, the DSP's computing performance suffers significantly from
this kind of implementation. A significant performance improvement is
expected from a comprehensive re-implementation of the algorithm.
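The following sketch shows the kind of 16-bit fixed-point representation meant here, using a hypothetical Q8.8 format (8 integer and 8 fractional bits); the actual format chosen for the DSP implementation is not specified in the text.

```python
# Sketch of a 16-bit fixed-point representation for fractional numbers,
# using a hypothetical Q8.8 format (scale factor 2^8 = 256).
def to_q8_8(x: float) -> int:
    """Encode a fractional number into a signed 16-bit Q8.8 value."""
    return max(-32768, min(32767, int(round(x * 256))))

def from_q8_8(q: int) -> float:
    return q / 256.0

def q_mul(a: int, b: int) -> int:
    """Multiply two Q8.8 numbers; the raw product has 16 fractional bits, so shift back."""
    return (a * b) >> 8

a, b = to_q8_8(1.3), to_q8_8(0.25)
print(from_q8_8(q_mul(a, b)))   # ~0.325, within Q8.8 precision
```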
Power Estimation:
The estimated power consumption of the smart camera is summarized in
Table 2. It lists the typical power consumption of the main components of
the camera. By applying DPM, a high power-saving potential can be
exploited.

6 Conclusion
A smart camera realized as an embedded system has been presented in this
paper. Our smart camera integrates a digital CMOS image sensor, a
processing unit featuring two high-performance DSPs and a network
interface and it clearly promotes the concept of Wolf et al. High-level video
analysis algorithms in combination with state-of-the-art video compression
transform this system from a network camera into a smart camera.
There is a rapidly increasing market for smart cameras. Advances in
performance and integration will enable new and richer functionality to be
implemented in smart cameras. The next steps in our research project
include (i) the development of the target architecture, (ii) the implementation
of further image processing algorithms, and (iii) the real-world
evaluation on highways.
References
[1] K. Borras. In life you must have vision. Traffic Technology International, pages 36-39, Oct/Nov 2002.
[2] C. S. Regazzoni, V. Ramesh, and G. L. Foresti. Introduction of the special issue. Proceedings of the IEEE, 89(10), Oct 2001.
[3] W. Wolf, B. Ozer, and T. Lv. Smart cameras as embedded systems. IEEE Computer, pages 48-53, 2002.
[4] Vision Components Germany. http://www.vision-comp.com, 2002.

ADDICTION AVOIDER

DETECTION OF ADDICTION ON AN INDIVIDUAL AND AVOIDING
ADDICTION USING EMBEDDED SYSTEMS

L. HARI HARASUDHAN
3rd YEAR
ELECTRONICS AND COMMUNICATIONS ENGINEERING
For Contact:
E-mail: lhhs_1986@yahoo.com

SRI SAIRAM ENGINEERING COLLEGE,
Sai Leo Nagar, West Tambaram,
Chennai 600 061,
Tamil Nadu State, India



ABSTRACT:
About half the people around the world
are addicted to one or more addictive
substances. Addiction is one of the chronic
disorders that are characterized by the repeated
use of substances or behaviors despite clear
evidence of morbidity secondary to such use.
It is a combination of genetic,
biological/pharmacological and social factors.
Examples include overeating, sex, gambling,
alcohol drinking, taking narcotic drugs and
certain mannerisms. In this paper we present
the design of a device that can help avoid
addiction. The device, the Addiction Avoider,
is based upon the principle of controlling
brainwaves.

I. INTRODUCTION
Before going into details, we should know
the basic terms that this paper is based upon:
the brain, brainwaves and addiction.

A. The Brain:

It is well known that the brain is an electro-
chemical organ. Brainwaves are



produced by the temporal lobe of the brain. It
processes auditory information from the ears


and relates it to Wernicke's area of the parietal
lobe and the motor cortex of the frontal lobe.
The amygdala is located within the temporal
lobe and controls social and sexual
behavior and other emotions. The limbic
system is important in emotional behavior and
controlling movements.


Fig. 1. Side and top view of the human brain with parts

Researchers have speculated that a fully
functional brain can generate as much as 10
watts of electrical power. Even though this
electrical power is very limited, it does occur in
very specific ways that are characteristic of
the human brain.

B. Brainwaves:

Electrical activity emanating from the
brain is displayed in the form of brainwaves.
There are four categories of these brainwaves,
ranging from most activity to least activity.
These are delta waves, theta waves, alpha
waves and beta waves. Delta waves are waves
with high amplitude and a frequency of
0.5-4 Hertz. They never go down to zero because
that would mean that you were brain dead, but
deep dreamless sleep would take you down to
the lowest frequency, typically 2 to 3 Hertz.
Theta waves have a lower amplitude than
delta waves and a greater frequency of
5-8 Hertz. A person who has taken time off
from a task and begins to daydream is often
in a theta brainwave state. Alpha waves have
a lower amplitude than theta waves and a
greater frequency of 9-14 Hertz. A person who
takes time out to reflect or meditate is usually
in an alpha state. Beta waves have the lowest
amplitude and the highest frequency,
15-40 Hertz. These waves are again classified
into low beta waves and high beta waves
according to their range of frequencies. The
low beta waves have a frequency of
15-32 Hertz. A person making active
conversation would be in the low beta state.
The high beta waves have a frequency of
33-40 Hertz. A person in stress, pain or
addiction would be in the high beta state.

TABLE 1
DIFFERENT BRAINWAVES AND THEIR FREQUENCIES

S.No.   Brainwave    Frequency range (Hertz)
1)      Delta        0.5 - 4
2)      Theta        5 - 8
3)      Alpha        9 - 14
4)      Low Beta     15 - 32
5)      High Beta    32 - 40


Fig. 2. Different brainwaves with their names and the situations
when it occurs.


Fig. 3. High beta waves and Low beta waves respectively

C. Addiction:
There are two types of addiction:
Physical dependency and Psychological
dependency.

1. Physical dependency :

Physical dependence on a substance is
defined by appearance of characteristic
withdrawal symptoms when the drug is
suddenly discontinued. Some drugs, such as
cortisone and beta blockers, are better known
as therapeutic drugs than as addictive
substances. Some drugs induce physical
dependence or physiological tolerance - but not
addiction - for example many laxatives, which
are not psychoactive; nasal decongestants,
which can cause rebound congestion if used for
more than a few days in a row; and some
antidepressants, most notably Effexor, Paxil
and Zoloft, as they have quite short half-lives,
so stopping them abruptly causes a more rapid
change in the neurotransmitter balance in the
brain than many other antidepressants. Many
non-addictive prescription drugs should not be
suddenly stopped, so a doctor should be
consulted before abruptly discontinuing them.

2. Psychological dependency:

Psychological addictions are a
dependency of the mind, and lead to
psychological withdrawal symptoms.
Addictions can theoretically form for any
rewarding behavior, or as a habitual means to
avoid undesired activity, but typically they
only do so to a clinical level in individuals who
have emotional, social, or psychological
dysfunctions, taking the place of normal
positive stimuli not otherwise attained.
Psychological addiction, as opposed to
physiological addiction, is a person's need to
use a drug or engage in a behavior despite the
harm caused out of desire for the effects it
produces, rather than to relieve withdrawal
symptoms. As the drug is indulged, it becomes
associated with the release of pleasure-
inducing endorphins, and a cycle is started that
is similar to physiological addiction. This cycle
is often very difficult to break.
We are going to solely consider the
psychological addictions in designing the
addiction avoider device.

D. Recovery Therapy from Addiction
Some medical systems, including those of at
least 15 states of the United States, refer to an
Addiction Severity Index to assess the severity
of problems related to substance use. The index
assesses problems in six areas: medical,
employment/support, alcohol and other drug
use, legal, family/social, and psychiatric. While
addiction or dependency is related to seemingly
uncontrollable urges, and has roots in genetic
predisposition, treatment of dependency is
conducted by a wide range of medical and
allied professionals, including Addiction
Medicine specialists, psychiatrists, and
appropriately trained nurses, social workers,
and counselors. Early treatment of acute
withdrawal often includes medical
detoxification, which can include doses of
anxiolytics or narcotics to reduce symptoms of
withdrawal. An experimental drug, ibogaine, is
also proposed to treat withdrawal and craving.
Alternatives to medical detoxification include
acupuncture detoxification. In chronic opiate
addiction, a surrogate drug such as methadone
is sometimes offered as a form of opiate
replacement therapy. But treatment approaches
universally focus on the individual's ultimate
choice to pursue an alternate course of action.
Anti-anxiety and anti-depressant SSRI drugs
such as Lexapro are also often prescribed to
help cut cravings, while addicts are often
encouraged by therapists to pursue practices
like yoga or exercise to decrease reliance on
the addictive substance or behavior as the only
way to feel good. Therapists often classify
patients with chemical dependencies as either
interested or not interested in changing.
Treatments usually involve planning for
specific ways to avoid the addictive stimulus,
and therapeutic interventions intended to help a
client learn healthier ways to find satisfaction.
Clinical leaders in recent years have attempted
to tailor intervention approaches to specific
influences that affect addictive behavior, using
therapeutic interviews in an effort to discover
factors that led a person to embrace unhealthy,
addictive sources of pleasure or relief from
pain.
THE ADDICTION AVOIDER:

II. PRINCIPLE

The principle behind this device is
Binaural Beats. Binaural beats or binaural
tones are auditory processing artifacts, which
are apparent sounds, the perception of which
arises in the brain independent of physical
stimuli. The brain produces a similar
phenomenon internally, resulting in low-
frequency pulsations in the loudness of a
perceived sound when two tones at slightly
different frequencies are presented separately,
one to each of a subject's ears, using stereo
headphones. A beating tone will be perceived,
as if the two tones mixed naturally, out of the
brain. The frequency of the tones must be
below about 1,000 to 1,500 hertz. The
difference between the two frequencies must be
small (below about 30 Hz) for the effect to
occur; otherwise the two tones will be
distinguishable and no beat will be perceived.
Binaural beats can influence functions
of the brain besides those related to hearing.
This phenomenon is called frequency
following response. The concept is that if one
receives a stimulus with a frequency in the
range of brain waves, the predominant
brain wave frequency is said to be likely to
move towards the frequency of the stimulus
(a process called entrainment). Directly using
an infrasonic auditory stimulus is impossible,
since the ears cannot hear sounds low enough
to be useful for brain stimulation. Human
hearing is limited to the range of frequencies
from 20 Hz to 20,000 Hz, while the frequencies
of human brain waves are below about 40 Hz.
To account for this, binaural beat frequencies
must be used.
According to this view, when the
perceived beat frequency corresponds to the
delta, theta, alpha or beta range of brainwave
frequencies, the brainwaves entrain to or move
towards the beat frequency. For example, if a
315 Hz sine wave is played into the right ear
and a 325 Hz one into the left ear, the brain is
supposed to be entrained towards the beat
frequency (10 Hz, in the alpha range). Since
alpha range is usually associated with
relaxation, this is supposed to have a relaxing
effect. Some people find pure sine waves
unpleasant, so a pink noise or another
background (e.g. natural sounds such as river
noises) can also be mixed with them.
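For illustration, such a stereo signal can be generated digitally as in the sketch below; the 315 Hz and 325 Hz tones follow the example above, while the sample rate, duration and WAV output are assumptions made only for the example.

```python
# Sketch of generating a binaural-beat stereo signal: a 315 Hz tone in the right
# ear and a 325 Hz tone in the left ear give a perceived 10 Hz beat (alpha range).
import numpy as np
from scipy.io import wavfile

fs, duration = 44100, 10.0                       # sample rate (Hz), length (s)
t = np.arange(int(fs * duration)) / fs
right = np.sin(2 * np.pi * 315 * t)              # right-ear tone
left = np.sin(2 * np.pi * 325 * t)               # left-ear tone, 10 Hz higher
stereo = np.stack([left, right], axis=1)
wavfile.write("binaural_10hz.wav", fs, (stereo * 0.3 * 32767).astype(np.int16))
```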

BLOCK DIAGRAM:

[Figure: three EEG sensors on a head band feed three amplifiers into an Atmel 8515 microcontroller, which triggers Oscillator 1 (1000 Hz) and Oscillator 2 (1010 Hz) driving the left and right sides of a stereo headphone]

Fig. 4. Block diagram of the device Addiction Avoider

III. WORKING

The block diagram consists of the
following parts whose operation is as below:

A. SENSORS
These sensors consist of a 0.7 inch
diameter hard plastic outer disc housing with a
pre-jelled Silver chloride snap style post pellet
insert. These sensors do not contain any latex
and don't need any conductive gel.


Fig. 5. Electroencephalography (EEG) sensors

The sensor sends the analog brainwave signal
into the 8515 microcontroller.

B. AMPLIFIERS


Fig. 6. Circuit diagram of a basic Inverting amplifier using
Operational amplifier.
Basically the amplitude of analog
brainwaves is of the order of 10-15 microvolts,
but the Atmel 8515 microcontroller has an
operating voltage of about 2.7 V to 6.0 V. So we
are using amplifiers.

Gain (A) = -R2/R1    (1)

where the negative sign represents a phase
shift of 180 degrees.



Fig. 7. Circuit diagram of the cascaded inverting amplifier with a
gain of 200,000.

It is designed in such a way that it
amplifies 15 microvolts to about 3 V. Here
we are using a basic cascaded inverting amplifier
built from operational amplifiers with a total
gain of about 200,000, embedded in a small
Printed Circuit Board (PCB).
Four inverting amplifiers are cascaded with
each other. Let the gain of each inverting
amplifier from left to right be A1, A2, A3 and
A4, and let Vi and Vo be the input and output
voltages of the amplifier.
Now,

A1 = -R2/R1 = -2/1 = -2
A2 = -R4/R3 = -10/1 = -10
A3 = -R6/R5 = -100/1 = -100
A4 = -R8/R7 = -100/1 = -100

The total gain of the amplifier (A) is

A = A1 * A2 * A3 * A4    (2)
  = (-2) * (-10) * (-100) * (-100)
  = 200,000

Therefore,

Vo = Vi * A
   = 15 * 10^-6 * 200,000 V
   = 3 V

Here we have amplified a 15 uV signal to a
3 V signal so that the signal is in the operating
range of the microcontroller. The four negative
signs in the gain equation correspond to four
180-degree inversions, so the overall output is
in phase with the input.

D. MICROCONTROLLER

The Atmel 8515 microcontroller is a 40-pin,
4 MHz, 8-bit microcontroller with 8 KB of
Flash, 512 bytes of EEPROM and 512 bytes of
SRAM. The AT90S8515 is a low-power CMOS
8-bit microcontroller. It has an internal
analog-to-digital converter (ADC) and an
internal battery. The signal that is sent by the
sensors is converted from an analog to a digital
signal. The microcontroller has a pre-defined
program which analyses the digital signal and
compares it with the digital equivalent of an
analog signal in the frequency range of
32-40 Hz, which is already stored in the memory
of the microcontroller. If, on comparison, the
two signals are nearly the same, the
microcontroller acknowledges this and
triggers oscillator 1 and oscillator 2.
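The comparison step can be illustrated with the following sketch, which estimates the dominant frequency of a digitized signal and checks whether it lies in the 32-40 Hz high-beta band; the sampling rate and the synthetic test signal are assumptions, not part of the actual firmware.

```python
# Illustrative sketch: estimate the dominant frequency of a digitized brainwave
# signal with an FFT and check whether it falls in the 32-40 Hz high-beta band.
import numpy as np

def dominant_frequency(signal: np.ndarray, fs: float) -> float:
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

fs = 256.0                                        # assumed EEG sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 36 * t) + 0.2 * np.random.default_rng(4).normal(size=t.size)

f = dominant_frequency(eeg, fs)
trigger_oscillators = 32.0 <= f <= 40.0           # high beta detected -> trigger oscillators
print(f, trigger_oscillators)
```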


E. OSCILLATORS

The oscillator is basically a Wien
bridge audio oscillator. Each oscillator is
designed in such a way that it produces a
particular audio tone below 1500 Hz.
Oscillator 2 is designed such that it has a
10-13 Hz difference in frequency from
oscillator 1. This difference in frequency
creates binaural beats. Thus, if the brain of an
individual produces 32-40 Hz waves (high beta
waves), i.e. if he/she is in stress or addicted to
some substance, the binaural beats, having a
frequency of about 10-13 Hz, create a stimulus
that makes the brain move towards the
stimulus frequency.

F. STEREO HEADPHONE


Fig. 8. Basic Stereo Headphone

This is done by sending the audio wave
from one oscillator to one side of the
headphone and the wave from the other
oscillator to the other side of the headphone.

IV. CONCLUSION

The Addiction Avoider is a safe and
simple device to use in the prevention of
addiction. It can be used for any type of
addiction, such as addiction to narcotic drugs
or alcohol, and for simpler addictions like
overeating, sexual compulsion and mannerisms.

V. FUTURE PROSPECTS

The Addiction Avoider can also be used to
relieve stress or tension in an individual. The
concept of binaural beats can be further
researched and used to develop a device for
communication with hearing- and speech-
impaired individuals. It can further be used to
study the resonance of the brain during brain
diseases.

VI. MERITS

1) The headband used is made of rubber or
any cloth (preferably an insulator),
provided it is designed such that the
sensors touch the skin.
2) The whole device is lightweight and can be
carried anywhere.
3) The whole device, including sensors,
microcontroller and headphone, is cheap
and costs only about Rs. 3000.

VII. DEMERITS
A. Those meeting any of the following
criteria/conditions should not use
binaural beats:
a) Epileptics
b) Pregnant women
c) People susceptible to seizures
d) Pacemaker users
e) Photosensitive people.



Emerging Trends in Bio-Medical Instrumentation







G.P. Sai Pranith
III B.Tech [EEE]
SKIT-Srikalahasthi
saipranith@yahoo.co.in

Abstract
This paper caters to the basic stages of measurement
and instrumentation, i.e. the generalized biomedical
instrumentation system, the basis of bio-potential electrodes and
transducers for biomedical applications. Special techniques
for the measurement of non-electrical biological parameters, like the
diagnostic and therapeutic aspects of imaging systems such
as X-ray computed tomography (CT) and magnetic resonance
imaging (MRI), are discussed in detail, along with their
biomedical applications.
This paper also highlights the importance of instruments
which affect the human body. The use of different stimulators
and the new advances that have taken place in using
pacemakers and defibrillators are emphasized in detail. Also
the latest developments in this field, including the Mind's eye
discovery, are mentioned and our own reasons for such
phenomena are given in detail. This paper also emphasizes
how these bio-medical instruments are not only helpful in
identifying diseases but also in identifying culprits and
getting the facts from them with narco tests.


1. Introduction

A biomedical instrument performs a specific function on a biological system. The function may be the exact measurement of physiological parameters like blood pressure, velocity of blood flow, action potentials of heart muscles, temperature, pH value of the blood, and rates of change of these parameters. The specification must meet the requirements of the living system. The design must be sufficiently flexible to accommodate the factor of 'biological variability'. Biomedical measuring devices should cause minimal disturbance to normal physical function and are to be used with safety instrumentation.
Biomedical instrumentation can generally be classified into two major types:
Clinical instrumentation
Research instrumentation
Clinical instrumentation is basically devoted to the diagnosis, care, and treatment of patients.
Research instrumentation is used primarily in the search for new knowledge pertaining to the various systems that compose the human organism.








BM instruments - life savers in hospitals

2. Man-Instrument System
In the man-instrument system, data is obtained from living organisms, especially humans, and there is a large amount of interaction between the instrumentation system and the subject being measured. So it is essential that the person on whom measurements are made be considered an integral part of the instrumentation system. Consequently, the overall system, which includes both the human organism and the instrumentation required for measurement, is called the man-instrument system.
An instrumentation system is defined as a set of instruments and equipment utilized in the measurement of one or more characteristics or phenomena, plus the presentation of the information obtained from those measurements in a form that can be read and interpreted by a man.
3. Generalized Biomedical Instrumentation
The sensor converts energy or information from the measurand to another form (usually electric). This signal is then processed and displayed so that humans can perceive the information. The major difference between medical instrumentation and conventional instrumentation systems is that the source of signals is living tissue or energy applied to living tissue. A generalized biomedical instrument consists of:
Measurand
Sensors
Signal Conditioning
Output Display













Generalized biomedical instrument system

4. Basis of bio-potential electrodes
A. Recording electrodes
Electrodes make a transfer from the ionic conduction in the tissue to the electronic conduction which is necessary for making measurements. Electrodes are employed to pick up the electric signals of the body. Since the electrodes transfer the bioelectric event to the input of the amplifier, the amplifier should be designed in such a way that it accommodates the characteristics of the electrodes.
To record the ECG, EEG, EMG, etc., electrodes must be used as transducers to convert an ionic flow of current in the body to an electronic flow along a wire. Two important characteristics of electrodes are electrode potential and contact impedance. Good electrodes will have low, stable figures for both of the above characteristics.










EEG and ECG
B. Types of electrodes
Many types of recording electrodes exist, including metal discs, needles, suction electrodes, glass microelectrodes, fetal scalp clips or screws, etc. The most widely used electrodes for biomedical applications are silver electrodes, which have been coated with silver chloride by electrolyzing them for a short time in a sodium chloride solution. When chlorided, the surface is black and has a very large surface area. A pair of such electrodes might have a combined electrode potential below 5 mV.
5. Transducers for biomedical applications
In biomedical applications various parameters are obtained from the patient's body by using various transducers. These parameters include blood pressure (arterial, direct), blood flow (aortic, venous, cardiac output), heart rate, phonocardiogram, ballistocardiogram, oximetry, respiration rate, pneumotachogram, tidal volume, pulmonary diffusing capacity, pH, partial pressures of oxygen and carbon dioxide, temperature, etc.
A. Pressure Transducers
The basic principle behind all these pressure transducers is that the pressure to be measured is applied to a flexible diaphragm, which gets deformed by the action exerted on it. This motion of the diaphragm is then measured in terms of electrical signals. In its simplest form, a diaphragm is a thin flat plate of circular shape, attached firmly by its edge to the wall of a containing vessel. Typical diaphragm materials are stainless steel, phosphor bronze and beryllium copper.
Other transducers used are temperature measurement transducers, flow transducers, and displacement, motion and position transducers.
6. Measurement of Non-Electrical Biological Parameters
A. Computed Tomography (CT)
CT or "CAT" scans are special X-ray tests that produce cross-sectional images of the body using X-rays and a computer. These images allow the radiologist to look at the inside of the body just as one would look at the inside of a loaf of bread by slicing it.
a) Principle
The basic physical principle involved in CT is that the structures of a 2D (two-dimensional) object can be reconstructed from multiple projections of the slice. Measurements are taken from the X-rays transmitted through the body and contain information on all the constituents of the body in the path of the X-ray beam. By using multidirectional scanning of the object, multiple data are collected.
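The reconstruction-from-projections principle can be illustrated with a small Python sketch: a toy 2D phantom is projected at many angles and then rebuilt by simple unfiltered backprojection. Real CT systems use filtered backprojection or iterative methods; the phantom, image size and number of angles below are assumptions made purely for illustration.

import numpy as np
from scipy.ndimage import rotate

# Toy CT sketch: forward-project a phantom, then backproject the sinogram.
phantom = np.zeros((64, 64))
phantom[20:44, 28:36] = 1.0             # a simple rectangular "organ"

angles = np.linspace(0.0, 180.0, 60, endpoint=False)

# Forward step: each projection is the sum of attenuation along parallel rays.
sinogram = [rotate(phantom, a, reshape=False, order=1).sum(axis=0)
            for a in angles]

# Inverse step: smear each 1D projection back across the image plane,
# rotate it back to its acquisition angle and accumulate.
recon = np.zeros_like(phantom)
for a, proj in zip(angles, sinogram):
    smear = np.tile(proj, (phantom.shape[0], 1))
    recon += rotate(smear, -a, reshape=False, order=1)
recon /= len(angles)

print(recon.shape)  # (64, 64); the bright region roughly matches the phantom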
b) Applications of Computed Tomography
CT scans are frequently used to evaluate the brain, neck, spine, chest, abdomen, pelvis and sinuses. CT is also used in the assessment of the coronary arteries and in musculoskeletal investigations.













CT Scan System
B. Magnetic Resonance Imaging (MRI)
Magnetic Resonance Imaging (MRI) is a method of looking inside the body without using surgery, harmful dyes or radiation. The method uses magnetism and radio waves to produce clear pictures of the human anatomy.
In the last three to four years, improved computer technology in hardware and software has allowed MRI to obtain better quality images in most of the body. MRI has proven to be unusually capable in the detection, localization, and assessment of the extent and character of disease in the central nervous, musculoskeletal, and cardiovascular systems. In the brain, for example, MRI has a proven ability to define some tumors and the plaques of multiple sclerosis better than any other technique.


MRI Scan




Latest Discovery:

Scientists have successfully discovered the mind's eye with the MRI scan system. With this mind's eye, man can visualize his future and remember his past perfectly. This was published in the Eenadu daily dated 25-1-07, and the experiments were performed on 21 persons by scanning their minds with MRI and studying their behavior while they remembered the past.
Our view:
As per Einstein's theory of relativity, any particle which travels with a velocity equal to or greater than that of light can enter both past and future. It is clearly evident that our mind travels with a velocity faster than light; as an example, we can visualize the sun in a fraction of a second, whereas it takes 8 minutes for light to reach the earth from the sun. As the mind travels faster than light, the mind can visualize the future, and we may be able to invent time machines in the future with the help of this discovery, thereby alerting the human race to calamities by knowing of their occurrence in advance.
Other imaging systems like positron emission tomography (PET), the gamma camera and single photon emission computed tomography (SPECT) are further improvements that are used to diagnose different modalities in the human body.








PET Scanner

7. Electronic Instruments for Affecting the Human Body
A. Electrical Stimulators
Nerves and muscles in the body produce electric potentials when they operate, and conversely they can be made to operate by electrical stimulation. In the physiotherapy department, stimulators exist which may provide direct, alternating, pulsating, or pulsed waveforms and are used to exercise the muscles by stimulation through electrodes placed on the skin. Stimulators now also exist which are used for the relief of pain. These Transcutaneous Electrical Nerve Stimulators (TENS) appear to suppress other pains in the same general area where the stimulation is applied. Electrical stimulation of trigger points is also claimed to be effective.
B. Bladder stimulator
This is a general term applied to electrical stimulators of the bladder or urethra. In some disorders of the urinary bladder, normal emptying is impossible, usually due to disruption of the nerve supply. An electrical stimulator (usually implanted) can assist emptying of the bladder by stimulating the bladder muscle directly or by stimulating the nerves where they leave the spinal column. Only a small number of such devices have been implanted. Sometimes external stimulators are used to test the likely effect of an implanted stimulator or as a substitute for it. These may use electrodes mounted on a plug which fits into the rectum or vagina.
C. Pacemakers
The pacemaker is an electrical stimulator with electrodes usually applied directly to the heart, providing pulses at a fixed rate (asynchronous pacemaker) or only when the natural pulse fails to appear (demand pacemaker).

D. Implantable Cardioverter Defibrillators (ICDs)
An implantable cardioverter-defibrillator is a device that resuscitates the heart; it attempts to restore a normal heartbeat in a patient whose heart is beating rapidly and in a disorganized way. Implantable cardioverter-defibrillators are placed to shock the heart and resuscitate the patient. The device is hooked up to a wire placed in the right side of the heart, much like a pacemaker. An implantable cardioverter-defibrillator has the capacity to pace the heart like a pacemaker, but it also has the capability to shock the heart and resuscitate the patient or abort a cardiac arrest. The battery is placed under the skin, a vein located underneath the left or right clavicle is isolated, and the wires are placed through that vein to allow entry to the right side of the heart. The wires are attached to the device and the wound is closed. When a patient undergoes implantation of an implantable cardioverter-defibrillator, he or she also needs testing of his or her normal rhythm. An overnight stay in the hospital is expected for all of this.
8. New Advances
The technical advances of newer ICDs have significantly modified patient follow-up procedures over the last several years. The multiple functions of the new systems, which include high-energy defibrillation therapy, low-energy cardioversion, antitachycardia pacing, permanent and post-therapy bradycardia pacing, diagnostic counters, and device status parameters, require ever-increasing technical expertise from the physician. Improvements in newer devices include multiple therapy modes, more patient diagnostic information, and accurate device status information. Currently, there is a focus on device down-sizing. This can lead to an inevitable search for lower defibrillation energy levels and newer models of energy storage and delivery. The goal of a 'pacemaker-like' device of similar size and implantation technique will be pursued with vigor in future years. New features in ICDs have enhanced the ease and safety of implantation, the comfort of the patient, the clinical information provided to the clinician and patients by the device, and the appropriateness and efficiency of these devices.
Age is no barrier!
The biomedical instruments that are being invented today are suitable for the age group 0-100 years. These instruments can be used on a baby yet to be born or on a person on his deathbed. The latest microscopes developed are able to see the tiniest among tiny cells, so new diseases and their characteristics can be studied and proper medication provided.
Narco Tests
These narco analysis tests have become very crucial in our judiciary, where in many cases the reports from these tests have influenced the final verdict in a profound way.
During narco analysis tests, the person on whom the test is being performed is subjected to drugs which nullify all his defenses and make him unable to hide any facts during questioning. But before a person is subjected to these tests, a team of doctors first checks whether the person is healthy enough to undergo narco analysis tests. This is done by the use of advanced biomedical instruments.






Stamped! Telgi and the Noida killers
Identification
These biomedical instruments also find use in forensic laboratories to identify dead bodies or the remains of putrefied body organs and bones. Thus, these instruments are helpful not only in identifying culprits but also in recognizing dead bodies, and this helps both the judiciary and the relatives of the dead.
9. Conclusion
Since biomedical instrumentation is a field where the technology has undergone a great deal of advancement and sophistication, it is difficult for active medical personnel to know the full spectrum of instruments available for various diagnoses and therapies. After the introduction of computers and microprocessors, biomedical instrumentation has improved further in all respects. New advances in instrumentation include the design of devices which have greater convenience, higher quality and finer resolution, which will improve their standards.

*****
An Electronic Nose with Artificial Intelligence






N.B.K.R. INSTITUTE OF SCIENCE AND
TECHNOLOGY

(Approved by AICTE: Accredited by NBA: Affiliated to S.V.University, Tirupathi.)

VIDYANAGAR-524413 NELLORE (DT), A.P., INDIA



DEPARTMENT OF ELECTRONIC INSTRUMENTATION &CONTROL
ENGINEERING






P.VENKATA NAGARJUN S.SANDEEP
III B TECH, EICE. III B TECH, EICE.
EMAIL:ponnurunagarjun@hotmail.com EMAIL:sanisettysandeep@yahoo.co.in
Cell: 9866208102






ABSTRACT:

Electronic/artificial noses are being
developed as systems for the automated
detection and classification of odors,
vapors, and gases. An electronic nose is
generally composed of a chemical
sensing system (e.g., sensor array or
spectrometer) and a pattern recognition
system (e.g., artificial neural network).
We are developing electronic noses for
the automated identification of volatile
chemicals for environmental and
medical applications. In this paper, we
briefly describe an electronic nose, show
some results from a prototype electronic
nose, and discuss applications of
electronic noses in the environmental,
medical, and food industries.

INTRODUCTION:

The two main components of an
electronic nose are the sensing system
and the automated pattern recognition
system. The sensing system can be an
array of several different sensing
elements (e.g., chemical sensors), where
each element measures a different
property of the sensed chemical, or it can
be a single sensing device (e.g.,
spectrometer) that produces an array of
measurements for each chemical, or it
can be a combination.

Each chemical vapor presented to the
sensor array produces a signature or
pattern characteristic of the vapor. By
presenting many different chemicals to
the sensor array, a database of signatures
is built up. This database of labeled
signatures is used to train the pattern
recognition system. The goal of this
training process is to configure the
recognition system to produce unique
classifications of each chemical so that
an automated identification can be
implemented.
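A minimal sketch of this training step is shown below, assuming a hypothetical 8-sensor array, three vapor classes and synthetic signatures in place of real measurements; a scikit-learn multi-layer perceptron stands in for the pattern recognition system.

import numpy as np
from sklearn.neural_network import MLPClassifier

# Sketch: fit a neural-network classifier on a database of labeled
# sensor-array signatures. Array size, classes and data are illustrative.
rng = np.random.default_rng(0)
n_sensors, n_classes, n_samples = 8, 3, 300

# Pretend each vapor produces a characteristic mean response pattern plus
# noise (a stand-in for the measured signature database).
prototypes = rng.uniform(0.2, 1.0, size=(n_classes, n_sensors))
labels = rng.integers(0, n_classes, size=n_samples)
signatures = prototypes[labels] + 0.05 * rng.standard_normal((n_samples, n_sensors))

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(signatures, labels)

# A new, unlabeled signature is then classified by the trained network.
unknown = prototypes[1] + 0.05 * rng.standard_normal(n_sensors)
print(clf.predict(unknown.reshape(1, -1)))  # expected: [1]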

The quantity and complexity of the data collected by a sensor array can make conventional chemical analysis of the data in an automated fashion difficult. One
approach to chemical vapor
identification is to build an array of
sensors, where each sensor in the array is
designed to respond to a specific
chemical. With this approach, the
number of unique sensors must be at
least as great as the number of chemicals
being monitored. It is both expensive
and difficult to build highly selective
chemical sensors.


Fig. Scientific Team that worked on the
first instrumental version of the E-nose;
from left to right are: Dr. S. Zaromb, Mel
Findlay, Dr. W. R. Penrose and Dr. J. R.
Stetter of ANL (~1982).
Artificial neural networks (ANNs),
which have been used to analyze
complex data and to recognize patterns,
are showing promising results in
chemical vapor recognition. When an
ANN is combined with a sensor array, the number of detectable chemicals is generally greater than the number of sensors. Also, less selective sensors, which are generally less expensive, can be used with this approach.

Fig. Early versions of the E-nose built at ANL and TRI.

Once the ANN is trained for chemical vapor recognition, operation consists of propagating the sensor data through the network. Since this is simply a series of vector-matrix multiplications, unknown chemicals can be rapidly identified in the field.
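The identification step can be sketched as follows. The layer sizes and random weights are placeholders standing in for a trained network; the point is only that classification reduces to a few vector-matrix products and nonlinearities, which is why it can run quickly in the field.

import numpy as np

# Sketch of running a trained network: weights here are random stand-ins.
rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((8, 16)), np.zeros(16)   # sensor layer -> hidden
W2, b2 = rng.standard_normal((16, 3)), np.zeros(3)    # hidden -> vapor classes

def identify(sensor_reading):
    """Propagate one sensor-array reading through the (toy) network."""
    hidden = np.tanh(sensor_reading @ W1 + b1)
    scores = hidden @ W2 + b2
    return int(np.argmax(scores))      # index of the most likely vapor

print(identify(rng.uniform(0.0, 1.0, size=8)))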

Electronic noses that incorporate ANNs
have been demonstrated in various
applications. Some of these applications
will be discussed later in the paper.
Many ANN configurations and training algorithms have been used to build electronic noses, including back-propagation-trained feed-forward networks, fuzzy ARTMAPs, Kohonen's self-organizing maps (SOMs), learning vector quantizers (LVQs), Hamming networks, Boltzmann machines, and Hopfield networks. Figure 1 illustrates the basic schematic of an electronic nose.

Fig. A view inside the 1st-generation E-nose (size reference is 10 cm) and the 2nd-generation E-nose (15 cm ruler).

Figure: Schematic diagram of an electronic nose.



NASA scientists are expanding the sensitivity of an electronic nose while shrinking its size to make it more compact for future space missions, following a Space Shuttle flight that successfully demonstrated the technology.

Enose is a new, miniature environmental
monitoring instrument that detects and
identifies a wide range of organic and
inorganic molecules down to the parts-
per-million level. The objective on STS-
95 is to flight-test Enose and assess its
ability to monitor changes in Discovery's
middeck atmosphere.

The 'Nose' Knows a Sweet Smell of
Success:
Have you ever wondered why natural
gas has such a distinctive odor? Because
it's added purposely before being piped
into our homes! The smell gives us a
warning that there may be a leak.

What about detecting chemical leaks in
enclosed spaces, like the International
Space Station or Space Shuttle? NASA
built "E-Nose" to come to the rescue!

The Agency's Jet Propulsion Laboratory
in California and the California Institute
of Technology jointly developed a
method for a machine to "smell." Given
the catchy name E-Nose, the device is an
electronic nose that uses computers and
special sensing film to work much like a
human nose.

E-Nose technology has the ability to
send a signal to an environmental control
system where a central computer decides
how to handle the problem, without
human interaction. The device also can
be "trained" in one session to detect
many specific contaminants.

"The more automated you can make this
kind of thing, the better off the crew is,"
said Amy Ryan, principal investigator
for E-Nose at JPL.

The 3-pound, paperback-book sized E-
Nose took a ride on a Space Shuttle
flight and successfully proved its value
in identifying contaminants in the air.
After that flight, JPL began working on a
way to expand its capability and reduce
its size.

Commercial companies were quick to
see E-Nose's potential. In March 1997,
JPL licensed the technology to Cyrano
Sciences, Inc., of Pasadena, Calif. The
company renamed the device "Cyranose
320" and put it to work in the food
industry, testing for spoilage. The
technology is also being tested to detect
toxic materials, water pollutants and
chemical leaks.

Several medical institutions are
determining how well the electronic
nose can be adapted to provide
physicians with a quicker and more
accurate way to diagnose health issues,
perhaps eliminating the need for
invasive testing and unpleasant
procedures.

Image Right: The Cyranose 320 is a
lightweight, portable device, used for
quality control purposes in the food
and chemical industries. Image credit:
NASA










Fig: The Cyranose 320.
Cyrano Sciences joined Smiths
Detection in March and researchers are
working on early-stage technology, with
a grant from the U.S. Department of
Defense, to develop miniature sensors
used principally for homeland security.

In the meantime, E-Nose technology will
continue to "sniff out" ways to keep us
safe here on Earth and in space as we go
forward with America's Vision for Space
Exploration.
Electronic noses sniff out new markets
Initially developed as laboratory
instruments, electronic noses that mimic
the human sense of smell are moving
into food, beverage, medical, and
environmental applications. By Jennifer Ouellette. Researchers and
manufacturers alike have long
envisioned creating devices that can
'smell' odors in many different
applications. Thanks to recent advances
in organic chemistry, sensor technology,
electronics, and artificial intelligence,
the measurement and characterization of
aromas by electronic noses (or e-noses)
is on the verge of becoming a
commercial reality."


Electronic nose sniffs out TB: "The
scientists at Cranfield took ideas used by
makers of food flavorings, among
others, to create sensors that can
recognize smells. The sensors use
artificial intelligence to identify bacteria
in TB cases, as well as other respiratory
diseases. The technique analyses sputum
- saliva and mucus - converted into gas
form."

The term, "electronic nose" or "E-nose"
has come into common usage as a
generic term for an array of chemical gas
sensors incorporated into an artificial

Fig.The "Biological Nose" (by Mother
Nature, patent pending).
olfaction device, after its introduction in
the title of a landmark conference on this
subject in Iceland in 1991. The term E-
nose is not pejorative. There are striking
analogies between the artificial noses of
man and the "Bio-nose" constructed by
Nature. Figure 1 illustrates a biological
nose and points out the important
features of this "instrument". Figure
illustrates the artificial electronic nose.
Comparing the two is instructive. The
human nose uses the lungs to bring the
odor to the epithelium layer; the
electronic nose has a pump. The human
nose has mucous, hairs, and membranes
to act as filters and concentrators, while
the E-nose has an inlet sampling system
that provides sample filtration and
conditioning to protect the sensors and
enhance selectivity. The human
epithelium contains the olfactory
epithelium, which contains millions of
sensing cells, selected from 100-200
different genotypes that interact with the
odorous molecules in unique ways. The
E-nose has a variety of sensors that
interact differently with the sample.
Let's Get Nosy!, the E-Nose Smell-a-thon project from NASA's Space Place: "E-Nose uses 32 tiny sensors, which together are about the size of the nose on your face. They are connected to a small computer that takes the information from the sensors and figures out just what the smell is, similar to how your brain, through experience, learns to identify what your nose is smelling."

Fig. The basic design of the "Electronic Nose".


APPLICATIONS OF
ELECTRONIC NOSES AND
TONGUES IN FOOD
ANALYSIS:

"The applications of electronic noses and tongues in food analysis. A brief history of the development of sensors is included and this is illustrated by descriptions of the different types of sensors utilized in these devices. As pattern recognition techniques are widely used to analyse the data obtained from these multisensor arrays, a discussion of principal components analysis and artificial neural networks is essential."

Development of a Personal Robot with
a Sense of Taste: "Using its sensor, the
robot is capable of examining the taste
of food and giving the name of the food
as well as its ingredients. Furthermore, it
can give advice on the food and on
health issues based on the information
gathered. The robot is called 'Health and
Food Advice Robot,' the world's first
partner robot with a sense of taste. ...
The achievement was made possible by
combining NEC System Technologies'
robot technology and pattern recognition
technology with the analytical
techniques of the infrared spectroscopic
technology of the BIFE Laboratory."

Mind Over Matter: "Sometime in the
not-too-distant future, the worlds of
people and robots will merge. Humans
already are heading in artificial
directions. We have false teeth and hair,
plastic limbs, intraocular lenses,
mechanical organs and drug- dispensing
implants. Robots are becoming more like
us in facial expression, voice
recognition, and ability to walk, talk and
make decisions. ... The future of this
technology, Perkowitz says, 'is the
formation of direct connections between
living organic systems and nonliving
ones at the neural and brain levels.''
Highly developed humanoid robots
aren't far behind. At the Massachusetts
Institute of Technology, researchers
have dedicated laboratories to Cog and
Kismet, two groundbreaking projects in
which the robots mimic human actions
and reactions. Another innovative
project, Cyrano 320, is an electronic
nose that can smell a spectrum of odors."

On Board Signal Processing - The
Brain of the Cyranose 320: "Unlike
the conventional analytical techniques,
the electronic nose takes full advantage
of the techniques in mathematics,
statistics, and computer science to
extract valuable, but often hidden,
information directly from the
measurement. Pattern recognition
algorithms are a very powerful tool to deal
with a large set of data collected from
the electronic nose."

Sniffing Robot (Robotic Odor Perception): "Until now, one of the
human senses not yet implemented in a
robot was the sense of smell. This article
presents a possible application of odor
sensing in mobile robots. The robot
estimates the maximum gas
concentration point and moves toward it
or can also follow a smelling path on the
ground."

Sensing danger: Artificial eyes, ears,
and noses for stronger, safer troops. By
Kenneth Terrell. "The nose doesn't have
a specific receptor for the smell of roses;
instead it detects a particular mixture of
sweet, sour, and floral, which the brain
recognizes as a rose. Similarly, the Tufts
artificial nose has 16 fluorescent sensor
strips, each sensitive to a different range
of molecules, and a computer that
interprets their response pattern to
determine whether or not they have
sniffed a mine. While this method can be
better at filtering out false alarms than
the Fido approach, it may not be quite as
sensitive to explosives-related
chemicals."

Clearing the air - New device sniffs
offices: "Pro Services calls the $25,000
device the IAQ 4000 system. The
acronym stands for indoor air quality.
The company that makes it, Aircuity Inc.
of Newton, Mass., refers to that
equipment as the Aircuity Building
Performance and Indoor Air Evaluation
System, said spokesman Robert Skinner.
The system consists of a portable air
sampling device, a Web-based data
collection and reporting tool and an
artificial intelligence-based diagnostic
program. It monitors and analyzes
temperature, relative humidity, carbon
dioxide, carbon monoxide, airborne
particles, total volatile organic
compounds, mold and pollen, ozone and
radon. While the news is replete with
stories about building security these
days, the Aircuity system would be
ineffective against a major act of
sabotage or terrorism, Skinner said. 'The
portable technology is not designed to
detect biological agents,' he said.
'However, Aircuity is developing
technology to address biological
detection.'"

Robo Lobster to Sniff Out Mines:
"Teams of sniffer robots may someday
scour land and sea, using their artificial
snouts to root out mines in places and
situations humans would rather avoid."
E-Nose Sniffs Out Nasty Bugs:
"Scientists are working on an electronic
nose that sniffs out nasty bacteria in
blood samples. The e-nose could halve
the time for blood work and may one
day be used in smart fridges that warn
you when food has gone bad."

Electronic/Artificial Noses: A
technology brief from Pacific Northwest
National Laboratory. "An electronic
nose is generally composed of a
chemical sensing system (e.g., sensor
array or spectrometer) and a pattern
recognition system (e.g., artificial neural
network). At Pacific Northwest National
Laboratory, we are developing electronic
noses based on artificial neural network
technology for the automated
identification of volatile chemicals for
environmental and medical applications.
... Currently, the biggest market for
electronic noses is the food industry."




Fig. Modern electronic noses.


Scientists sniff out useful applications
for robotic nose: Dick Wilson,
"Scientists have endowed computers
with eyes to see, thanks to digital
cameras, and ears to hear, via
microphones and sophisticated
recognition software. Now they're taking
computers further into the realm of the
senses with the development of an
artificial nose. ... 'We have tested 40
distinct materials, some of which are
complex mixtures, things like cologne,'
said David Walt, a professor at Tufts
University near Boston. And the man-
made nose is right about 97 percent of
the time, he said. ... Developers of the
nose say its real-world applications
might include medical diagnostics and
monitoring for household fumes or
environmental contamination."

Electronic Noses Sniffing Out Wine
Niche: Perfecting Sensory Evaluation of
Smells by Imitating Canine "Noses With
Legs". By Abby Sawyer. Wine Business
Monthly, J uly 1997. "J ust as a police
dog comes to recognize cocaine by
repeated exposure to that smell, an
electronic nose must be 'trained' to
recognize an odor abnormality.
Electronic noses combined with artificial
neural network (ANN) technology allow
these instruments to be trained. ANNs
are artificial intelligence networks
which, like humans, can "learn" through
exposure to stimuli."




Fig. Discrimination of beers using the E-
nose!
An important analysis is illustrated in the
non-medical sport of beer sniffing,
where the E-nose can tell a light from a
regular beer (Figure 11). This illustrates
the potential of applications in the food
and beverage industry and puts to rest
forever the notion that all beers are alike.
So the science of the E-nose presses on.
Problems being addressed range from
determination of the quality of perfumes,
wines, and apple juice, to the detection
of infectious diseases. New technology
is inspiring devices that produce the
visual imaging of smells of various
chemicals and mixtures. Some day we will see the "Tricorder" of the popular TV show Star Trek. It may be a version inspired by the modern E-nose!
For Insight Into Finding Land Mines,
Scientists Have Gone to the Dogs:
"Designing sensors that approximate a
dog's ability to smell explosives is one of
several approaches researchers in the
United States are pursuing to help
uncover what the International Red
Cross estimates to be 120 million mines
worldwide. ... The Defense Advance
Research Projects Agency alone
reportedly is spending $25 million over
three years to advance the sensor and
computing technologies needed to mimic
a dog's sense of smell. The Defense
Department's Humanitarian Demining Technology Development Program is spending nearly $15 million this year on
new technology and on finding ways to
apply existing technologies to
demining."

Artificial nose sees smell:
US researchers have developed a small
slip of paper that can sniff out food
contaminants, counterfeit perfumes,
banned drugs and toxic gases simply by
changing colour.
A report on the artificial nose that can
visualise odours developed by chemists
at the University of Illinois.
Called "smell-seeing" by its inventors,
the technique is based on colour changes
that occur in an array of vapour-sensitive
dyes known as metalloporphyrins -
doughnut-shaped molecules closely
related to the red haemoglobin pigment
in blood and the green chlorophyll
pigment in plants.

"The human nose is generally
sensitive to most compounds at a level
of a few parts per million," Suslick said.
"The sensitivity of our artificial nose is
10 to 100 times better than that for many
compounds."
And unlike other potential artificial nose
systems, smell-seeing is not affected by
changes in relative humidity.
Smell-seeing arrays have many potential
applications, such as in the food and
beverage industry to detect the presence
of flavourings, additives or spoilage; in
the perfume industry to identify
counterfeit products; at customs
checkpoints to detect banned plant
materials, fruits and vegetables; and in
the chemical workplace to detect and
monitor poisons or toxins.
"Scientists have endowed computers
with eyes to see, thanks to digital
cameras, and ears to hear, via
microphones and sophisticated
recognition software. Now they're
taking computers further into the realm
of the senses with the development of
an artificial nose."





Electronic Noses for
Environmental Monitoring:
Enormous amounts of hazardous waste
(nuclear, chemical, and mixed wastes)
were generated by more than 40 years of
weapons production in the U.S.
Department of Energy's weapons
complex. The Pacific Northwest
National Laboratory is exploring the
technologies required to perform
environmental restoration and waste
management in a cost effective manner.
This effort includes the development of
portable, inexpensive systems capable of
real-time identification of contaminants
in the field. Electronic noses fit this
category.
Environmental applications of electronic
noses include analysis of fuel mixtures
[4], detection of oil leaks [5], testing
ground water for odors, and
identification of household odors [3].
Potential applications include
identification of toxic wastes, air quality
monitoring, and monitoring factory
emissions.

Electronic Noses for Medicine:
Because the sense of smell is an
important sense to the physician, an
electronic nose has applicability as a
diagnostic tool. An electronic nose can
examine odors from the body (e.g.,
breath, wounds, body fluids, etc.) and
identify possible problems. Odors in the
breath can be indicative of
gastrointestinal problems, sinus
problems, infections, diabetes, and liver
problems. Infected wounds and tissues
emit distinctive odors that can be
detected by an electronic nose. Odors
coming from body fluids can indicate
liver and bladder problems. Currently,
an electronic nose for examining wound
infections is being tested at South Manchester University Hospital.
A more futuristic application of electronic noses has recently been proposed for telesurgery. While the
inclusion of visual, aural, and tactile
senses into telepresent systems is
widespread, the sense of smell has been
largely ignored. An electronic nose will
potentially be a key component in an
olfactory input to telepresent virtual
reality systems including telesurgery.
The electronic nose would identify odors
in the remote surgical environment.
These identified odors would then be
electronically transmitted to another site
where an odor generation system would
recreate them.

Electronic Noses for the Food
Industry:
Currently, the biggest market for electronic noses is the food industry. Applications of electronic noses in the food industry include quality assessment in food production, inspection of food quality by odor, control of food cooking processes, inspection of fish, monitoring the fermentation process, checking rancidity of mayonnaise, verifying whether orange juice is natural, monitoring food and beverage odors, grading whiskey, inspection of beverage containers, checking plastic wrap for containment of onion odor, and automated flavor control, to name a few. In some instances
electronic noses can be used to augment
or replace panels of human experts. In
other cases, electronic noses can be used
to reduce the amount of analytical
chemistry that is performed in food
production especially when qualitative
results will do.




DISCUSSION:
In this paper we discussed electronic
noses, a prototype system that identifies
common household chemicals, and
applications of electronic noses in the
environmental, medical, and food
industries. The major differences
between electronic noses and standard
analytical chemistry equipment are that
electronic noses (1) produce a qualitative
output, (2) can often be easier to
automate, and (3) can be used in real-
time analysis. These results from the
prototype electronic nose demonstrate
the pattern recognition capabilities of the
neural network paradigm in sensor
analysis, especially when the individual
sensors are not highly selective. In
addition, the prototype presented here
has several advantages for real-world
applications including compactness,
portability, real-time analysis, and
automation. Further work will involve
comparing neural network sensor
analysis to more conventional
techniques, exploring other neural
network paradigms, and evolving the
preliminary prototypes to field systems.


CONCLUSIONS:
E-noses are already available in the market. The two main manufacturers are EST and Cyrano Sciences, both USA-based companies. Taking e-senses a step further, Illumine, another USA-based company, is working on an optical nose. Meanwhile, researchers at Caltech's ERC are working on the computer design of the nose. The completed sniffer will be made up of 10,000 sensors and fit on a 1 cm2 chip, says Rodney Goodman, director of the ERC.
Eventually, many questions concerning the human olfactory system will be answered, and perhaps artificial solutions will be found for people with olfactory problems.

REFERENCES:
[1] B.S. Hoffheins, Using Sensor Arrays
and Pattern Recognition to Identify
Organic Compounds. MS-Thesis, The
University of Tennessee, Knoxville, TN,
1989.

[2] G.A. Carpenter, S. Grossberg, N.
Markuzon, J.H. Reynolds, and D.B.
Rosen, "Fuzzy ARTMAP: A Neural
Network Architecture for Incremental
Supervised Learning of Analog
Multidimensional Maps," IEEE
Transactions on Neural Networks, vol.
3, pp. 698-713.

[3] P.E. Keller, R.T. Kouzes, and L.J.
Kangas, "Three Neural Network Based
Sensor Systems for Environmental
Monitoring," IEEE Electro 94
Conference Proceedings, Boston, MA,
1994, pp. 377-382.

[4] H.V. Shurmur, "The fifth sense: on
the scent of the electronic nose," IEE
Review, pp. 95-58, March 1990.

[5] K. Pope, "Technology Improves on
the Nose As Science Tries to Imitate
Smell," Wall Street Journal, pp. B1-2, 1
March 1995.

[6] P.E. Keller, R.T. Kouzes, L.J.
Kangas, and S. Hashem, "Transmission
of Olfactory Information for
Telemedicine," Interactive Technology
and the New Paradigm for Healthcare,
R.M. Satava, K. Morgan, H.B. Sieburg,
R. Mattheus, and J.P. Christensen (ed.s),
IOS Press, Amsterdam, 1995, pp. 168-
172.


























(Application to INDIAN RAILWAYS)



PRESENTED BY:


V.V.S.S.SHANMUKHA M.DEEPAK
EMAIL-ID:
Shannu333001@yahoo.co.in
Deepak_vits428@yahoo.com
B.TECH III/IV YEAR (E.C.E)

PBR VISHVODAYA INSTITUTE OF TECHNOLOGY
KAVALI
NELLORE (DT)
PINCODE: 524201













Abstract:


Search for knowledge is the beginning of wisdom. Technology varies exponentially with the centuries. As the clock ticks, technology picks up. In the world arena, GPS technology is moving to its peak position.

In this paper we reveal how the latest developments and applications of fields like GPS (Global Positioning System), sensors, fibre-optic networking systems, software and database management systems can be combined in a proper manner to serve the purpose of the safety of the Indian Railways (in particular). The paper will show how technology that was initially developed for defense purposes and various specialized requirements can be networked together to contribute towards the safety of the world's largest railway network. The method used here can be simply stated as checking the vibrations produced by the train, in accordance with the distance from the sensors and the speed of the train, using the various components mentioned earlier to tackle the problems ahead (if any, in advance) and avoid mishaps.

The infrastructural and research needs of this requirement already exist in India, and thus it can be indigenously developed at a very low cost. It will be a much better method of safety and communication, possibly even better than methods proposed for the future.



















Applications of Neural Networks to
Telecommunications Systems


Author Info Sheet:

Author: N. Ramaswamy Reddy
Branch: Electrical and Electronics Engineering
Topic Name: APPLICATIONS OF SOLAR
E-Mail ID ramaswamyneelapu@yahoo.com
Contact Phone No: 9948011515

COLLEGE OF ENGINEERING, GITAM
VISAKHAPATNAM

ABSTRACT:

This paper gives an overview of the
application of neural networks to
telecommunication Systems. Five
application areas are discussed,
including cloned software identification
and the detection of fraudulent use of
cellular phones. The systems are
summarized and the general results are
presented. The conclusions highlight the
difficulties involved in using this
technology as well as the potential
benefits.

In this paper we report on a variety of
neural computational systems that have
been applied in the telecommunications
industry. All the systems described here
were developed in collaboration between
NORTEL UK and the University of
Hertfordshire, UK. In all, five
application areas were investigated,
resulting in two fully functioning
systems, which are incorporated in
NORTEL products, two successful
prototypes and one application area for which we did not find a suitable neural computational solution. In the
paper we briefly describe the five
applications, evaluate our resulting
solutions and conclude by reflecting
upon the lessons learnt.

1 INTRODUCTION:

The study of the human brain is
thousands of years old. With the advent
of modern electronics, it was only
natural to try to harness this thinking
process. The first step toward artificial
neural networks came in 1943 when
Warren McCulloch, a neurophysiologist,
and a young mathematician, Walter
Pitts, wrote a paper on how neurons
might work. They modeled a simple
neural network with electrical circuits.
In 1959, Bernard Widrow and Marcian Hoff of Stanford developed models they called ADALINE and MADALINE. These models were named for their use of Multiple ADAptive LINear Elements. MADALINE was the first neural network to be applied to a real-world problem: an adaptive filter that eliminates echoes on phone lines, which is still in commercial use.
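ADALINE-style elements are trained with the least-mean-squares (LMS) rule, and the echo-cancelling idea can be sketched in a few lines. The signals, filter length and step size below are invented for illustration and are not taken from the original Widrow-Hoff system.

import numpy as np

# Toy LMS/ADALINE sketch: adapt a short FIR filter so it predicts (and can
# therefore subtract) the echo of a far-end signal on a phone line.
rng = np.random.default_rng(0)
n, taps, mu = 5000, 8, 0.01

far_end = rng.standard_normal(n)                   # signal that causes the echo
true_echo_path = np.array([0.6, 0.3, 0.1] + [0.0] * (taps - 3))
echo = np.convolve(far_end, true_echo_path)[:n]    # echo heard on the line

w = np.zeros(taps)                                 # adaptive filter weights
for i in range(taps - 1, n):
    x = far_end[i - taps + 1:i + 1][::-1]          # most recent samples first
    y = w @ x                                      # ADALINE output (echo estimate)
    e = echo[i] - y                                # residual echo
    w += mu * e * x                                # LMS weight update

print(np.round(w[:3], 2))  # converges towards [0.6, 0.3, 0.1]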
Hopfield's approach was not to simply
model brains but to create useful
devices.
Artificial neural networks are loosely
based on biology. Current research into
the brain's physiology has
unlocked only a limited understanding of
how neurons work or even what
constitutes intelligence in general.
Researchers are working in both the
biological and engineering fields to
further decipher the key mechanisms for
how man learns and reacts to everyday
experiences. Improved knowledge in
neural processing helps create better,
more succinct artificial networks.
For parts not yet clear, however, we
construct a hypothesis and build a model
that follows that hypothesis. We then
analyze or simulate the behavior of the
model and compare it with that of the
brain. If we find any discrepancy in the
behavior between the model and the
brain, we change the initial hypothesis
and modify the model. We repeat this
procedure until the model behaves in the
same way as the brain. This common
process has created thousands of
network topologies.

2 SOFTWARE ANALYSIS TOOLS

Our first two applications were both
designed to help with the analysis of
large software systems, which are
typically found in telecommunication
systems - today a digital telephone
exchange incorporates tens of millions
of lines of code. Such systems have
evolved over the years, with the current
systems incorporating code from the
very first system. The telecommunications industry average error rate is about 25 errors per 1,000 lines of code (Wayt Gibbs, 1994). It is
apparent, therefore, that maintaining
these systems presents a problem of
considerable difficulty.

2.1 COMPLEXITY ANALYSIS

This investigated the possibility that
units of software fall into natural clusters
when represented by a set of complexity
measures. Two data sets were examined.
The first data set was a set of 2,236
procedures drawn from a single software
product written in the proprietary
NORTEL language PROTEL, a block
structured language designed for the
control of telecommunication systems.
The second set was of 4,456 PROTEL procedures drawn from a variety of software products. Twelve standard measures of software complexity were used, so that each procedure was represented as a 12-ary real-valued vector. Each data set was then presented to three neural network clusterers: a simple competitive network, a self-organizing feature map (SOM) and Fuzzy ART. The principal clustering that we found, using these networks, suggested that the procedures could be grouped primarily according to size, with one group of very small procedures, with fewer than 10 lines of code, and another group of very large procedures. Another major grouping clustered procedures with a large language content compared to their size.
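A minimal sketch of the simple competitive network used as one of the clusterers is given below; the random 12-dimensional vectors, the number of clusters and the learning rate are illustrative assumptions rather than the actual study settings.

import numpy as np

# Competitive-learning sketch: each input moves only its winning prototype.
rng = np.random.default_rng(0)
vectors = rng.random((2236, 12))        # stand-in for the complexity measures
n_clusters, lr, epochs = 5, 0.1, 20

centres = vectors[rng.choice(len(vectors), n_clusters, replace=False)].copy()
for _ in range(epochs):
    for v in vectors:
        winner = np.argmin(np.linalg.norm(centres - v, axis=1))
        centres[winner] += lr * (v - centres[winner])   # move winner toward input

# Assign each procedure to its nearest prototype (its cluster).
assignments = np.argmin(
    np.linalg.norm(vectors[:, None, :] - centres[None, :, :], axis=2), axis=1)
print(np.bincount(assignments, minlength=n_clusters))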

2.2 CLONE DETECTION

After our first general investigation of
complexity we were presented with a
more concrete problem, namely to
produce a system that could identify
copied and modified software units.
Such cloned software is very common
in large software systems that have
evolved over several years, and it
presents a problem in software
maintenance. For example if it is
necessary to change a faulty block of
code it will probably be important to
modify direct and modified copies of
that code as well. To use a neural
network on this problem it was first
necessary to find a representation of a
block of code as a fixed length, numeric
feature vector. Such an encoding is
central to the success of an application
such as this, since it is imperative that
similar blocks of code have similar
representational vectors. It is important
to note here that the notion of similar is
different in either case; for the source
code similarity relates to the probability
of the code being cloned, and for the
corresponding vectors that they are close
in Euclidean space. The representation
we use is based on three measures.

Firstly the frequencies of keywords in
the unit are accumulated, secondly the
length of each line is recorded and lastly
the indentation pattern is represented.
The latter parameter is important, as
each unit of code is automatically
indented in a manner that captures its
syntactic structure. To map indentation
(and line length) onto a fixed length
vector, we first took the raw indentation
values and viewed them as ordinates on
a graph; we then sampled one hundred
points from this graph, using linear
extrapolation where necessary. This
coding method is relatively stable
against minor modifications to the
source code, such as the addition or
removal of a line. The final vector
representation of a unit of code is made
up of 100 frequency samples, 100
corresponding line length samples, and
96 keyword frequencies, to give a 296-
ary feature vector, see Figure 1.
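A rough sketch of this encoding is given below. The keyword list is a tiny placeholder (the real system used 96 PROTEL keywords), and numpy interpolation stands in for the resampling onto one hundred points.

import numpy as np

# Sketch of the clone-detection encoding: resampled indentation, resampled
# line lengths and keyword frequencies concatenated into one vector.
KEYWORDS = ["if", "then", "else", "while", "proc", "end"]   # placeholder set

def resample(values, n_points=100):
    """Resample an arbitrary-length sequence onto n_points by interpolation."""
    if len(values) < 2:
        values = list(values) + [0.0, 0.0]
    old_x = np.linspace(0.0, 1.0, len(values))
    new_x = np.linspace(0.0, 1.0, n_points)
    return np.interp(new_x, old_x, values)

def encode(source_lines):
    indent = [len(l) - len(l.lstrip()) for l in source_lines]
    length = [len(l) for l in source_lines]
    tokens = " ".join(source_lines).split()
    keyword_freq = [tokens.count(k) for k in KEYWORDS]
    return np.concatenate([resample(indent), resample(length), keyword_freq])

unit = ["proc demo", "  if x then", "    y := 1", "  end", "end"]
print(encode(unit).shape)   # (206,) here; (296,) with the full 96-keyword set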
























A 55 by 55 (3,025 neurons) SOM was trained on 10,257 software-unit feature vectors, and the resulting map gave a view of the data representing the relative similarity of the input vectors. The SOM
was then integrated into a larger system
so that the most similar procedures of a
given procedure could be identified. This
then formed the basis of a completed
clone detection tool, which proved
useful in identifying many examples of
cloned software in the PROTEL systems
examined. A fuller description of this
system and its development can be found
in Davey (1995).
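A much smaller SOM than the 55 by 55 map described above can be sketched as follows. The 10 by 10 grid, decay schedules and random 296-dimensional vectors are illustrative assumptions, and the query at the end is only a rough stand-in for the similarity lookup used by the clone detection tool.

import numpy as np

# Minimal SOM sketch: train a small map, then use best-matching units (BMUs)
# to find the vectors that lie closest to a query on the map.
rng = np.random.default_rng(0)
data = rng.random((500, 296))
rows = cols = 10
weights = rng.random((rows, cols, 296))
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

n_iter = 2000
for t in range(n_iter):
    x = data[rng.integers(len(data))]
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)     # best matching unit
    sigma = 3.0 * (1 - t / n_iter) + 0.5                      # shrinking neighbourhood
    lr = 0.5 * (1 - t / n_iter) + 0.01                        # decaying learning rate
    influence = np.exp(-np.sum((grid - np.array(bmu)) ** 2, axis=2) / (2 * sigma ** 2))
    weights += lr * influence[..., None] * (x - weights)

def most_similar(query, k=5):
    """Return indices of the k vectors whose BMUs lie nearest the query's BMU."""
    d = np.linalg.norm(weights - query, axis=2)
    bmu = np.array(np.unravel_index(np.argmin(d), d.shape))
    bmus = np.array([np.unravel_index(
        np.argmin(np.linalg.norm(weights - v, axis=2)), d.shape) for v in data])
    return np.argsort(np.linalg.norm(bmus - bmu, axis=1))[:k]

print(most_similar(data[0]))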

4 CALL ROUTING

The aim of this project was to
investigate the possibility of using
Hopfield nets to find optimal call routes.
Essentially this is equivalent to using a
Hopfield network to solve the traveling
salesman, optimization problem. To use
the technique on a particular problem it
is necessary to find an appropriate set of
parameters for the network. Despite
considerable investigation we were
unable to find an acceptable set, and this
project was abandoned.

5 TRAFFIC TRENDS PREDICTION

To make maximum usage of network
capacity it is useful to be able to predict
traffic demand. In this study we
investigated how a neural network
forecaster could be used to predict voice
traffic demand, over an ATM network.
The network architecture employed was
the standard sliding window, feed-
forward network, trained with the
conjugate gradient algorithm.

In addition to the sliding window input,
chronological context was explicitly
represented, as shown in figure 2. This
information was added as much
telecoms traffic is periodic, with weekly,
daily and hourly trends superimposed. A
critical parameter of this type of model
is the size of the sliding window. We used a technique from dynamical systems theory, the false nearest neighbour heuristic (Frank, 1999). This method suggested that a window of four steps back would give the best performance. Good
predictive performance was observed,
but this system was not taken any
further. More detail can be found in
Edwards (1997).
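The input construction can be sketched as below: a window of the last four readings plus scaled hour-of-day and day-of-week context, in the spirit of Figure 2. The synthetic hourly traffic series is invented purely for illustration.

import numpy as np

# Sketch: build (sliding window + chronological context) -> next-value pairs.
rng = np.random.default_rng(0)
hours = np.arange(24 * 7 * 8)                          # 8 weeks of hourly data
traffic = (10 + 5 * np.sin(2 * np.pi * hours / 24)     # daily cycle
           + 2 * np.sin(2 * np.pi * hours / (24 * 7))  # weekly cycle
           + rng.normal(0, 0.5, hours.size))

window = 4                                             # from the FNN heuristic
X, y = [], []
for t in range(window, len(traffic)):
    context = [(hours[t] % 24) / 23.0,                 # hour of day, scaled
               ((hours[t] // 24) % 7) / 6.0]           # day of week, scaled
    X.append(np.concatenate([traffic[t - window:t], context]))
    y.append(traffic[t])                               # next value to predict

X, y = np.array(X), np.array(y)
print(X.shape, y.shape)   # (n_samples, 6) inputs and (n_samples,) targets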











Figure 2: The modified sliding window network configuration used for predicting traffic on an ATM
network.

6 FRAUDULENT USE OF
CELLULAR PHONE DETECTION

Fraudulent use of cellular phones is a
huge problem; for example, in 1994 the estimated cost to the US industry was $482 million, representing 3.7% of
revenue. In this project we investigated
whether a neural network could be
trained to give an indication of whether a
pattern of phone usage was indicative of
fraud.

Whenever a completed phone call is
made a call detail record (CDR) is
created. Depending on the operation
currently being performed the structure
of these will vary, however for our
investigations we have produced a
generic record which encapsulates all the
salient features which are required.
These include: account number,
telephone number, date and time of call,
the duration, the originating area and
receiving area, as well as a number of
other fields. These records therefore
constitute an enormous database within
which anomalous use must be detected.

The type of problem here is unusual and
difficult, as it mixes both static
classification and temporal prediction.
Anomalous use has to be classified as
such, but only in relation to an emerging
temporal pattern. Over a period of time
an individual phone will generate a
macroscopic pattern of use, in which, for
example, intercontinental calls may be
rare; however within this overall pattern
there will inevitably be violations: on a
particular day the phone may be used for
several intercontinental calls.

Against this background, anomalous use can be identified as belonging to one of two types:
1. The pattern is intrinsically fraudulent - it will almost never occur in normal use. This type is relatively easy to detect.
2. The pattern is anomalous only relative to the historical pattern established for that phone.

In order to detect fraud of the second
type it is necessary for a neural network
to have knowledge of the historical,
macro, behaviour of the phone and the
recent micro behaviour. We have chosen
to present both of these pieces of
information as input vectors to the net.
The output then is a two bit
representation of the credibility of these
two inputs when taken together. Note
that this method also copes quite
adequately with fraud of the first type
since this should be evident regardless of
the historical behaviour.

For each user within the network, the details gathered from the CDRs are recorded as a statistical representation: a user profile. This includes the proportion of local, national and international calls, the number of units used, the number of calls and the average duration for that user, as well as other details. These are collated over two time periods, historical and recent. The historical profile must be periodically updated to reflect gradual change in the use pattern of the unit. Several of the fields are subjected to differing normalisation techniques, in order to maintain no a-priori preference for one field over another, before the profiles are presented to the network. A multi-layered perceptron network, with 18 input units, various hidden-unit configurations and two output units, one representing valid use and one fraudulent use, was trained using conjugate gradient training.
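A minimal sketch of such a classifier is given below, with synthetic 18-field profiles in place of real CDR-derived data and a scikit-learn MLP (trained with its default optimizer rather than conjugate gradients) standing in for the original network; the confidence figure at the end mirrors the measure used in Figure 3.

import numpy as np
from sklearn.neural_network import MLPClassifier

# Sketch: an 18-input MLP separating valid from fraudulent usage profiles.
rng = np.random.default_rng(0)
n = 2000
valid = rng.normal(0.3, 0.1, size=(n, 18))            # stand-in normal profiles
fraud = rng.normal(0.6, 0.15, size=(n // 10, 18))     # rarer anomalous profiles
X = np.vstack([valid, fraud])
y = np.array([0] * len(valid) + [1] * len(fraud))     # 0 = valid, 1 = fraudulent

net = MLPClassifier(hidden_layer_sizes=(12,), max_iter=1000, random_state=0)
net.fit(X, y)

# Confidence in the spirit of Figure 3: fraud-class score minus valid-class
# score for an unseen profile (positive values indicate suspected fraud).
profile = rng.normal(0.6, 0.15, size=(1, 18))
p_valid, p_fraud = net.predict_proba(profile)[0]
print(round(p_fraud - p_valid, 3))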

Figure 3 shows the network's performance for unseen fraudulent profiles. A similar graph is produced which represents normal behaviour. The confidence figure represents the value of the output node representing valid use subtracted from the value of the node representing fraudulent use.
The accounts in the lower part of the graph are divided between borderline cases and misclassified results. For example, the misclassifications on the left of the figure represent low-usage accounts, where it is difficult to detect anomalies due to the paucity of data. Some of the misclassifications on the right are made up of accounts where heavy use of intercontinental calls is made; again it is hard for the net to pick out some forms of fraud against this historical background. Both of these scenarios correspond with common-sense expectations.


Figure 3: A visual representation of unseen fraudulent profiles. The confidence value
measures the degree to which the network identifies the profile as fraudulent.

This prototype system has now been
developed and integrated into a full scale
fraud detector. Further details can be
found in Barson (1996).

7 CONCLUSIONS

Over the period of the collaboration we
were able to investigate five application
areas for neural computational methods,
and we achieved some success in four
areas, with one failure. Four prototypes
were developed and two were further
developed into products.
Since neural networks are data-driven models, one key requirement of using them is that real-world data is needed
early in the project. Difficulties may
arise for a number of reasons, such as
data confidentiality and sensitivity,
reluctance of customers to release data,
and the size of the data sets involved.

It has also become apparent that the task
of moving from a successful, neural net
based prototype to a full system should
not be underestimated. All neural
network applications depend heavily on
the appropriate pre-processing of the
input data and post-processing of the
output data. A major part of our work
has been concerned with the pre-
processor, the user interface and the
overall quality of the system.

However, the two products developed are
successful and show that neural
networks provide a powerful method for
data-driven computation in the
telecommunications industry.

8 REFERENCES

Barson, P., Davey, N., Field, S.D.H., Frank, R.J. &
McAskey, G. (1996). The Detection of Fraud in
Mobile Phone Networks. Neural Network World,
Vol. 6, No. 4, 477-484.

Davey, N., Barson, P., Field, S.D.H., Frank, R.J.
& Tansley, D.S.W. (1995). The Development of a
Software Clone Detector. The International
Journal of Applied Software Technology.

Davey, N., Field, S.D.H., Frank, R.J., Barson, P. &
McAskey, G. (1996). The Detection of Fraud in
Mobile Phone Networks. Neural Network World,
Vol. 6, No. 4, 477-484.

Edwards, T., Tansley, D.S.W., Frank, R.J. & Davey,
N. (1997). Traffic Trends Analysis Using Neural
Networks. Proceedings of the International
Workshop on Applications of Neural Networks to
Telecommunications 3 (IWANNT'3), 157-164.























































ARTIFICIAL MECHANICAL RED BLOOD CELLS BY
NANO TECHNOLOGY

A Technical Paper submitted by





ANEM REKHA S.SAI SMRUTHI VELU
II/IV B.Tech. (Bio-Tech.) II/IV B.Tech. (CSIT)
CBIT J BIET
Roll No: 0205115 Roll No: 05671A1229
E Mail: anem_rekha@yahoo.co.in E Mail: smruthi_doll4u@yahoo.com
Mobile No: 93911 45320 Mobile No: 98661 19308





ABSTRACT
Molecular manufacturing promises
precise control of matter at the atomic
and molecular level, allowing the
construction of micron-scale machines
comprised of nanometer-scale
components. Medical nanorobots are
still only a theory, but scientists are
working to develop them.
According to the World Health
Organization, today's world is suffering
an extreme shortage of donor blood, even
with the Red Cross receiving 36,000 units
a day; this does not satisfy the 80,000
units that are needed. People who have
anemia also run into a blood problem
when the hemoglobin concentration in
their red blood cells falls below normal,
which can cause severe
tissue damage. The root of the problem
lies in hemoglobin because it delivers
oxygen from the lungs to the body
tissue.
Of the many conditions which can do
harm to the human body, one of the most
fundamental and fast-acting is a lack of
perfusion of oxygen to the tissue.
Insufficient oxygenation can be caused
by problems with blood flow in the
arteries due to obstruction or
exsanguinations, or problems with
oxygen transportation, as with anemia.
Advances in nanotechnology have
suggested a possible treatment for these
conditions in the form of micro-
electromechanical red blood cell analogs
called RESPIROCYTES.
The artificial red blood cell or
"respirocyte" proposed here is a blood
borne spherical 1-micron diamondoid
1000-atm pressure vessel with active
pumping powered by endogenous serum

glucose, able to deliver 236 times more
oxygen to the tissues per unit volume
than natural red cells and to manage
carbonic acidity. An onboard
nanocomputer and numerous chemical
and pressure sensors enable complex
device behaviors remotely
reprogrammable by the physician via
externally applied acoustic signals.
Primary applications will include
transfusable blood substitution; partial
treatment for anemia, perinatal/neonatal
and lung disorders; enhancement of
cardiovascular/neurovascular procedure,
tumor therapies and diagnostics;
prevention of asphyxia; artificial
breathing; and a variety of sports,
veterinary, battlefield and other uses.













Introduction:
1. What are respirocytes?
The respirocyte is a blood borne 1-
micron-diameter spherical nanomedical
device designed by Robert A. Freitas
Jr. The device acts as an artificial
mechanical red blood cell. It is designed
as a diamondoid 1000-atmosphere
pressure vessel with active pumping
powered by endogenous serum glucose,
and can deliver 236 times more oxygen
to the tissues per unit volume than
natural red cells while simultaneously
managing carbonic acidity. An
individual respirocyte consists of 18
billion precisely arranged structural
atoms plus 9 billion temporarily resident
molecules when fully loaded. An
onboard nanocomputer and numerous
chemical and pressure sensors allow
the device to exhibit behaviors of modest
complexity, remotely reprogrammable
by the physician via externally applied
acoustic signals.
Twelve pumping stations are spaced
evenly along an equatorial circle. Each
station has its own independent glucose-
metabolizing power plant, glucose
tank, environmental glucose sensors,
and glucose sorting rotors. Each station
alone can generate sufficient energy to
power the entire respirocyte, and has an
array of 3-stage molecular sorting
rotor assemblies for pumping O2, CO2,
and H2O from the ambient medium into
an interior chamber, and vice versa. The
number of rotor sorters in each array is
determined both by performance
requirements and by the anticipated
concentration of each target molecule in
the bloodstream. The equatorial
pumping station network occupies ~50%
of respirocyte surface. On the remaining
surface, a universal "bar code"
consisting of concentric circular patterns

of shallow rounded ridges is embossed
on each side, centered on the "north
pole" and "south pole" of the device.
This coding permits easy product
identification by an attending physician
with a small blood sample and access to
an electron microscope.
Equatorial Cutaway View of
Respirocyte

2. Preliminary Design Issues:
The biochemistry of respiratory gas
transport in the blood, oxygen and
carbon dioxide (the chief byproduct of
the combustion of foodstuffs) are carried
between the lungs and the other tissues,
mostly within the red blood cells.
Hemoglobin, the principal protein in the
red blood cell, combines reversibly with
oxygen, forming oxyhemoglobin. About
95% of the O2 is carried in this form, the
rest being dissolved in the blood. At
human body temperature, the
hemoglobin in 1 liter of blood holds 200
cm^3 of oxygen, 87 times more than
plasma alone (2.3 cm^3) can carry.
Carbon dioxide also combines reversibly
with the amino groups of hemoglobin,
forming carbamino hemoglobin. About
25% of the CO2 produced during cellular
metabolism is carried in this form, with
another 10% dissolved in blood plasma
and the remaining 65% transported
inside the red cells after hydration of
CO2 to bicarbonate ion. The creation of
carbamino hemoglobin and bicarbonate
ion releases hydrogen ions, which, in the
absence of hemoglobin, would make
venous blood 800 times more acidic than
the arterial. This does not happen
because buffering action and isohydric
carriage by hemoglobin reversibly
absorbs the excess hydrogen ions,
mostly within the red blood cells.
Respiratory gases are taken up or
released by hemoglobin according to
their local partial pressure. There is a
reciprocal relation between hemoglobin's
affinity for oxygen and carbon dioxide.
The relatively high level of O2 in the
lungs aids the release of CO2, which is to
be expired, and the high CO2 level in
other tissues aids the release of O2 for
use by those tissues.
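A quick arithmetic check of the figures quoted above, as a minimal Python sketch using only the numbers given in the text:

    # Oxygen carried by hemoglobin versus plasma, per litre of blood.
    o2_with_hemoglobin_cm3 = 200.0
    o2_plasma_only_cm3 = 2.3
    print(round(o2_with_hemoglobin_cm3 / o2_plasma_only_cm3))   # ~87 times more

    # The three CO2 transport routes quoted above should account for all of it.
    co2_routes = {"carbamino hemoglobin": 25, "dissolved in plasma": 10,
                  "bicarbonate in red cells": 65}
    print(sum(co2_routes.values()))                             # 100 (percent)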
Existing Artificial Respiratory Gas
Carriers
Possible artificial oxygen carriers have
been investigated for eight decades,
starting with the first documented use of
hemoglobin solutions in humans in
1916. The commercial potential for
successful blood substitutes has been
estimated at between $5-10 billion/year,
so the field is quite active. Current blood
substitutes are either hemoglobin
formulations or fluorocarbon emulsions.
Shortcomings of Current Technology
At least four hemoglobin formulations
and one fluorocarbon are in Phase I
safety trials, and one company has filed
an application to conduct an efficacy
trial. Most of the red cell substitutes
under trial at present have far too short a
survival time in the circulation to be
useful in the treatment of chronic
anemia, and are not specifically designed
to regulate carbon dioxide or to
participate in acid/base buffering.
Several cell-free hemoglobin
preparations evidently cause
vasoconstriction, decreasing tissue
oxygenation, and there are reports of
increased susceptibility to bacterial
infection due to blockade of the
monocyte-macrophage system,
complement activation, free-radical
induction, and nephrotoxicity.
The greatest physiological limitation is
that oxygen solvates linearly with partial
pressure, so clinically significant
amounts of oxygen can only be carried
by prolonged breathing of potentially
toxic high oxygen concentrations.
3. Nanotechnological Design of Respiratory
Gas Carriers - Respirocytes
Development successes leading towards
molecular nanotechnology have been
achieved along many independent paths.
(a) Pressure Vessel
The simplest possible design for an
artificial respirocyte is a microscopic
pressure vessel, spherical in shape for
maximum compactness. Most proposals
for durable nanostructures employ the
strongest materials, such as flawless
diamond or sapphire constructed atom
by atom. Tank storage capacity is given
by Van der Waals equation, which takes
account of the finite size of tightly
packed molecules and the intermolecular
forces at higher packing densities.
Rupture risk and explosive energy rise
with pressure, so a standard 1000 atm
peak operating pressure appears
optimum, providing high packing
density with an extremely conservative
100-fold structural safety margin. (By
comparison, natural red blood cells store
oxygen at an equivalent 0.51 atm
pressure, of which only 0.13 atm is
deliverable to tissues.)
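For reference, the Van der Waals equation of state mentioned above can be written as

    \left( P + \frac{a\,n^{2}}{V^{2}} \right)\left( V - n\,b \right) = n R T

where P is the operating pressure (up to the 1000 atm quoted above), V the internal volume of the vessel, T the temperature, R the gas constant, n the number of moles of gas stored, and a and b the gas-specific Van der Waals constants. Solving this for n at a chosen pressure and volume gives the tank storage capacity.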
In the simplest case, oxygen release
could be continuous throughout the
body. Slightly more sophisticated is a
system responsive to local O2 partial
pressure, with gas released either
through a needle valve (as in aqualung
regulators) controlled by a heme protein
that changes conformation in response to
hypoxia, or by diffusion via low pressure
chamber into a densely packed
aggregation of heme-like molecules
trapped in an external fullerene cage
porous to environmental gas and water
molecules, or by molecular sorting
rotors.
(b) Molecular Sorting Rotors

The key to successful respirocyte
function is an active means of conveying
gas molecules into, and out of,
pressurized microvessels. Molecular
sorting rotors have been proposed that
would be ideal for this task. Each rotor
has binding site "pockets" along the rim
exposed alternately to the blood plasma
and interior chamber by the rotation of
the disk. Each pocket selectively binds a
specific molecule when exposed to the
plasma. Once the binding site rotates to
expose it to the interior chamber, the
bound molecules are forcibly ejected by
rods thrust outward by the cam surface.


Rotors are fully reversible, so they can
be used to load or unload gas storage
tanks, depending upon the direction of
rotor rotation. Typical molecular
concentrations in the blood for target
molecules of interest (O2, CO2, N2 and
glucose) are ~10^-4, which should be
sufficient to ensure at least 90%
occupancy of rotor binding sites at the
stated rotor speed. Each stage can
conservatively provide a concentration
factor of 1000, so a multi-stage cascade
should ensure storage of virtually pure
gases. Since each 12-arm outbound rotor
can contain binding sites for 12 different
impurity molecules, the number of
outbound rotors in the entire system can
probably be reduced to a small fraction
of the number of inbound rotors.
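One way to read these cascade numbers is sketched below; it assumes the factor of 1000 per stage applies to the target-to-impurity ratio, which is an interpretation rather than something stated in the text.

    # Ambient target fraction ~10^-4; each stage is assumed to enrich the
    # target-to-impurity ratio by a conservative factor of 1000.
    ratio = 1e-4 / (1 - 1e-4)
    for stage in range(1, 4):
        ratio *= 1000.0
        purity = ratio / (1.0 + ratio)
        print("after stage", stage, "purity ~", round(purity, 6))
    # by the third stage the stored gas is essentially pure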
Sorting Rotor Cascade
Sorting Rotor Binding Sites
Receptors with binding sites for specific
molecules must be extremely reliable
(high affinity and specificity) and
survive long exposures to the aqueous
media of the blood. Oxygen transport
pigments are conjugated proteins, that is,
proteins complexed with another organic
molecule or with one or more metal
atoms. Transport pigments contain metal
atoms such as Cu
2+
or Fe
3+
making
binding sites to which oxygen can
reversibly attached. Many proteins and
enzymes have binding sites for carbon
dioxide. For example, hemoglobin
reversibly binds CO2, forming
carbamino hemoglobin. A zinc enzyme
present in red blood cells, carbonic
anhydrase, catalyzes the hydration of
dissolved carbon dioxide to bicarbonate
ion, so this enzyme has receptors for
both CO2 and H2O.
Binding sites for glucose are common in
nature. For example, cellular energy
metabolism starts with the conversion of
the 6-carbon glucose to two 3-carbon
fragments (pyruvate or lactate), the first
step in glycolysis. This is catalyzed by
the enzyme hexokinase, which has
binding sites for both glucose and ADP.
Another common cellular mechanism is
the glucose transporter molecule, which
carries glucose across cell membranes
and contains several binding sites.
(c) Sensors

Various sensors are needed to acquire
external data essential in regulating gas
loading and unloading operations, tank
volume management, and other special
protocols. For instance, sorting rotors
can be used to construct quantitative
concentration sensors for any molecular
species desired. One simple two-
chamber design is synchronized with a
counting rotor (linked by rods and
ratchets to the computer) to assay the
number of molecules of the desired type
that are present in a known volume of
fluid. The fluid sample is drawn from the
environment into a fixed-volume
reservoir with 10^4 refills/sec using two
paddlewheel pumps. At typical blood
concentrations, this sensor, which
measures 45 nm x 45 nm x 10 nm
comprising ~500,000 atoms (~10^-20 kg),
should count ~100,000 molecules/sec of
glucose, ~30,000 molecules/sec of
arterial or venous CO2, or ~2000
molecules/sec of arterial or venous O2. It
is also convenient to include internal
pressure sensors to monitor O2 and CO2
gas tank loading, ullage (container
fullness) sensors for ballast and glucose
fuel tanks, and internal/external
temperature sensors to help monitor and
regulate total system energy output.

Molecular Concentration Sensor

(d) Pumping Station Layout
Twelve pumping stations are spaced
evenly along an equatorial circle. Each
station has its own independent glucose
engine, glucose tank, environmental
glucose sensors, and glucose sorting
rotors. Each station alone can generate
sufficient energy to power the entire
respirocyte.
Each pumping station has an array of 3-
stage molecular sorting rotor assemblies
for pumping O2, CO2, and H2O into and
out of the ambient medium. The number
of rotor sorters in each array is
determined both by performance
requirements and by the anticipated
concentration of each target molecule in
the bloodstream. Any one pumping
station, acting alone, can load or
discharge any storage tank in ~10 sec
(typical capillary transit time in tissues),
whether gas, ballast water, or glucose.
Gas pumping rotors are arrayed in a
noncompact geometry to minimize the
possibility of local molecule exhaustion
during loading. Each station includes
three glucose engine flues for discharge
of CO2 and H2O combustion waste
products, 10 environmental oxygen
pressure sensors distributed throughout
the O2 sorting rotor array to provide fine
control if unusual concentration
gradients are encountered, 10 similar
CO2 pressure sensors on the opposite
side, 2 external environment temperature
sensors (one on each side located as far
as possible from the glucose engine to
ensure true readings), and 2 fluid
pressure transducers for receiving
command signals from medical
personnel.

The equatorial pumping station network
occupies ~50% of respirocyte surface.
On the remaining surface, a universal
"bar code" consisting of concentric
circular patterns of shallow rounded
ridges is embossed on each side,
centered on the "north pole" and "south
pole" of the device. This coding permits
easy product identification by an
attending physician with a small blood
sample and access to an electron
microscope, and may also allow rapid
reading by other more sophisticated
medical nanorobots.
4. Applications
The artificial respirocyte is a simple
nanotechnological device whose primary
applications include transfusable blood
substitution; treatment for anemia,
perinatal and neonatal disorders, and a
variety of lung diseases and conditions;
contribution to the success of certain
aggressive cardiovascular and
neurovascular procedures, tumor
therapies and diagnostics; prevention of
asphyxia; maintenance of artificial
breathing in adverse environments; and
a variety of sports, veterinary, battlefield
and other applications.
(a) Treatment of Anemia
Oxygenating respirocytes offer complete
or partial symptomatic treatment for
virtually all forms of anemia, including
acute anemia caused by a sudden loss of
blood after injury or surgical
intervention; secondary anemias caused
by bleeding typhoid, duodenal or gastric
ulcers; chronic, gradual, or post-
hemorrhagic anemias from bleeding
gastric ulcers (including ulcers caused
by hookworm), hemorrhoids, excessive
menstrual bleeding, or battle injuries in
war zones; hereditary anemias including
hemophilia, leptocytosis and sicklemia,
thalassemia, hemolytic jaundice and
congenital methemoglobinemia;
chlorosis and hypochromic anemia,
endocrine deficiency anemia, pernicious
and other nutritional anemias; anemias
resulting from infectious diseases
including rheumatism, scarlet fever,
tuberculosis, syphilis, chronic renal
failure and cancer, or from hemoglobin
poisoning such as by carbon monoxide
inhalation; hemolytic anemias including
chemical hemolysis (including malarial,
snake bite, etc.), paroxysmal
hemoglobinuria, and chronic hemolytic
anemia from hypersplenism due to
cirrhosis of the liver; leukemia and other
idiopathic or toxic aplastic anemias
caused by chemicals, radiation, or
various antimetabolic agents; and
diseases involving excessive red cell
production such as polycythemia.
(b) Respiratory Diseases
Current treatments for a variety of
respiratory viruses and diseases,
including pneumonia, bronchopneumonia
and pleuropneumonia; pneumoconiosis
including asbestosis, silicosis and
berylliosis; emphysema, empyema,
abscess, pulmonary edema and pleurisy;
epidemic pleurodynia; diaphragm
diseases such as diaphragmatic hernia,
tetanus, and hiccups; blood flooding in
lungs (hemoptysis, tuberculosis, chronic
histoplasmosis, and bronchial tube
rupture); bronchitis and bronchiectasis;
atelectasis and pneumothorax; chronic
obstructive lung disease; arterial chest
aneurysm; influenza, dyspneas, and even
laryngitis, snoring, pharyngitis, hay
fever and colds could be improved using
respirocytes to reduce the need for
strong, regular breathing.
The devices could provide an effective
long-term drug-free symptomatic
treatment for asthma, and could assist in
the treatment of hemotoxic (pit viper)
and neurotoxic (coral) snake bites;
hypoxia, stress polycythemia and lung
disorders resulting from cigarette
smoking and alcoholism; neck goiter and
cancer of the lungs, pharynx, or thyroid;
pericarditis, coronary thrombosis,
hypertension, and even cardiac neurosis;
obesity, quinsy, botulism, diphtheria,
tertiary syphilis, amyotrophic lateral
sclerosis, uremia, coccidioidomycosis
(valley fever), and anaphylactic shock;
and Alzheimer's disease where hypoxia
is speeding the development of the
condition.
Respirocytes could also be used to treat
conditions of low oxygen availability to
nerve tissue, as occurs in advanced
atherosclerotic narrowing of arteries,
strokes, diseased or injured reticular
formation in the medulla oblongata
(controlling autonomic respiration), birth
traumas leading to cerebral palsy, and
low blood-flow conditions seen in most
organs of people as they age. Even
poliomyelitis, which still occurs in
unvaccinated Third World populations,
could be treated with respirocytes and a
diaphragmatic pacemaker.
(c) Cardiovascular and Neurovascular
Applications
Respirocyte perfusion could be useful in
maintaining tissue oxygenation during
anesthesia, coronary angioplasty, organ
transplantation, siamese-twin separation,
other aggressive heart and brain surgical
procedures, in postsurgical cardiac
function recovery, and in
cardiopulmonary bypass solutions. The
device could help prevent gangrene and
cyanosis, for example, during treatment
of Raynaud's Disease, a condition in
which spasms in the superficial blood
vessels of the extremities cause fingers
and toes to become cyanotic, then white
and numb. Therapeutic respirocyte
dosages can delay brain ischemia under
conditions of heart or lung failure, and
might be useful in treating senility,
which has apparently been temporarily
reversed in patients treated with
hyperbaric oxygen.
(d) Asphyxia

Respirocytes make breathing possible in
oxygen-poor environments, or in cases
where normal breathing is physically
impossible. Prompt injection with a
therapeutic dose, or advance infusion
with an augmentation dose, could greatly
reduce the number of choking deaths
(~3200 deaths/yr in U.S.) and the use of
emergency tracheostomies, artificial
respiration in first aid, and mechanical
ventilators. The device provides an
excellent prophylactic treatment for most
forms of asphyxia, including drowning,
strangling, electric shock (respirocytes
are purely mechanical), nerve-blocking
paralytic agents, carbon monoxide
poisoning, underwater rescue operations,
smoke inhalation or firefighting
activities, anaesthetic/barbiturate
overdose, confinement in airtight spaces
(refrigerators, closets, bank vaults,
mines, submarines), and obstruction of
breathing by a chunk of meat or a plug
of chewing tobacco lodged in the larynx,
by inhalation of vomitus, or by a plastic
bag pulled over the head of a child.
Respirocytes augment the normal
physiological responses to hypoxia,
which may be mediated by pulmonary
neuroepithelial oxygen sensors in the
airway mucosa of human and animal
lungs.
A design alternative to augmentation
infusions is a therapeutic population of
respirocytes that loads and unloads at an
artificial nanolung, implanted in the
chest, which exchanges gases directly
with the natural lungs or with exogenous
gas supplies. (An intravascular
oxygenator using a bundle of hollow
fiber membranes inserted into the vena
caval bloodstream, which functions as an
"artificial lung", is in clinical trials.)
(e) Underwater Breathing

Respirocytes could serve as an in vivo
SCUBA (Self-Contained Underwater
Breathing Apparatus) device. With an
augmentation dose or nanolung, the
diver holds his breath for 0.2-4 hours,
goes about his business underwater, then
surfaces, hyperventilates for 6-12
minutes to recharge, and returns to work
below. (Similar considerations apply in
space exploration scenarios.)
Respirocytes can relieve the most
dangerous hazard of deep sea diving --
decompression sickness ("the bends") or
caisson disease, the formation of
nitrogen bubbles in blood as a diver rises
to the surface, from gas previously
dissolved in the blood at higher pressure
at greater depths. Safe decompression
procedures normally require up to
several hours. At full saturation, a
human diver breathing pressurized air
contains about ~(d - d0) x 10^21 molecules
of N2, where d is diving depth in meters
and d0 is the maximum safe diving depth
for which decompression is not required,
~10 meters. A therapeutic dose of
respirocytes reconfigured to absorb N2
instead of O2/CO2 could allow complete
decompression of an N2-saturated human
body from a depth of 26 meters (86 feet)
in as little as 1 second, although in
practice full relief will require ~60 sec,
approximating the circulation time of the
blood. Each additional therapeutic dose
relieves excess N2 accumulated from
another 16 meters of depth. Since full
saturation requires 6-24 hours at depth,
normal decompression illness cases
present tissues far from saturation; hence
relief will normally be achieved with
much smaller dosages. The same device
can be used for temporary relief from
nitrogen narcosis while diving, since N2
has an anesthetic effect beyond 100 feet
of depth.
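A worked example of the estimate above, as a small Python sketch using only the figures quoted in the text:

    # Excess dissolved nitrogen ~ (d - d0) x 10^21 molecules, with d0 ~ 10 m.
    def excess_n2_molecules(depth_m, d0_m=10.0):
        return max(depth_m - d0_m, 0.0) * 1e21

    print(excess_n2_molecules(26))   # ~1.6e22 molecules for a dive to 26 m (86 ft)
    # One therapeutic dose relieves the N2 from about another 16 m of depth,
    # i.e. roughly 1.6e22 molecules per dose, consistent with the text.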
Direct water-breathing, even with the
help of respirocytes, is problematic for
several reasons: (1) Seawater contains at
most one-thirtieth of the oxygen per
lungful as air, so a person must breathe
at least 30 times more lungfuls of water
than air to absorb the same volume of
respiratory oxygen; lungs full of water
weigh nearly three times more than
lungs full of air, so a person could
hyperventilate water only about one-
third as fast as the same volume of air.
As a result, a water-breathing human can
absorb at most 1%-10% of the oxygen
needed to sustain life and physical
activity. (2) Deep bodies of water may
have low oxygen concentrations because
oxygen is only slowly distributed by
diffusion; in swamps or below the
thermocline of lakes, circulation is poor
and oxygen concentrations are low, a
situation aggravated by the presence of
any oxygen-consuming bottom dwellers
or by oxidative processes involving
bottom detritus, pollution, or algal
growth. (3) Both the diving reflex and
the presence of fluids in the larynx
inhibit respiration and cause closure of
the glottis, and inhaled waterborne
microflora and microfauna such as
protozoa, diatoms, dinoflagellates,
zooplankton and larvae could establish
(harmful) residence in lung tissue.

5. Conclusions
The respirocyte is constructed of tough
diamondoid material, employs a variety
of chemical, thermal and pressure
sensors, has an onboard nanocomputer
which enables the device to display
many complex responses and behaviors,
can be remotely reprogrammed via
external acoustic signals to modify
existing or to install new protocols, and
draws power from abundant natural
serum glucose supplies, thus is capable
of operating intelligently and virtually
indefinitely, unlike red cells which have
a natural lifespan of 4 months. Although
still only a theory, respirocyte could
become a reality when future advances
in the engineering of molecular machine
systems permit its construction. Within
the next twenty years nanotechnology
will advance greatly, and may be fully
capable of producing tiny complex
machines. The development of
nanodevices that assemble other
nanomachines will allow for
massive, cheap production. Thus
respirocytes could be manufactured
economically and abundantly.

6. References

Drexler KE, Peterson C, Pergamit G.
Unbounding the Future: The Nanotechnology
Revolution. New York: William Morrow, 1991.

Foresight Update, No. 24, 1996: 1-2.

Jones JA. Red blood cell substitutes:
current status. Brit J Anaesthes 1995;
74: 697-703.

Drexler KE. Nanosystems: Molecular
Machinery, Manufacturing, and
Computation. New York: John Wiley &
Sons, 1992.













Artificial Intelligence And Expert Systems



Submitted

By





K.PALLAVI CH.PRATHIBHA
III-BTECH III-BTECH
e-mail:pallavi_karna09@yahoo.com email:btech_prathibha@yahoo.co.in


NARAYANA ENGINEERING COLLEGE.

NELLORE.
Artificial Intelligence And Expert
Systems



Abstract:-

Artificial Intelligence is a branch of
science which deals with helping machines
find solutions to complex problems in a more
human-like fashion. Artificial Intelligence's
scientific goal is to understand intelligence by
building computer programs that exhibit
intelligent behavior.

This paper presents some background
on Artificial Intelligence, its potential,
and its implementation in various fields. We
discuss issues that have not been studied in
detail within the expert systems setting, yet are
crucial for developing theoretical methods and
computational architectures for automated
reasoning. The tools that are required to construct
expert systems are discussed in detail.




Introduction:-

Artificial intelligence is the study of
ideas to bring into being machines that respond
to stimulation consistent with traditional
responses from humans, given the human
capacity for contemplation, judgment and
intention. Each such machine should engage in
critical appraisal and selection of differing
opinions within itself. Produced by human skill
and labor, these machines should conduct
themselves in agreement with life, spirit and
sensitivity, though in reality, they are imitations.

An AI program that models the
nuances of the human thought process or solves
complicated real-world problems can be
complex. Languages for building AI programs
have capabilities that take care of low-level
computing processes, thus allowing the
developer to focus on the requisite complexity. AI
programs work with concepts expressed in
words, phrases, or sentences; therefore, the
ability to handle symbolic data is an important
feature. To develop an AI system, a programmer
tries numerous ways of implementing each
constituent function, rather than going through a
lengthy edit-compile-debug cycle for each attempt.
The type or quantity of data that flows through a
program cannot always be fixed in advance, so the
program must adapt on the fly; a language that
incorporates flexible data structures allows this to
happen in an easy, natural way. Many types of
inference processes recur throughout an AI program.


Expert systems

An expert system is an interactive
computer-based decision tool that uses both
facts and heuristics to solve difficult decision
problems based on knowledge acquired from an
expert. By definition, an expert system is a
computer program that simulates the thought
process of a human expert to solve complex
decision problems in a specific domain. The
growth of expert systems is expected to
continue for several years. With the continuing
growth, many new and exciting applications
will emerge. An expert system operates as an
interactive system that responds to questions,
asks for clarification, makes recommendations,
and generally aids the decision-making process.
Expert systems provide expert advice and
guidance in a wide variety of activities, from
computer diagnosis to delicate medical surgery.


The Architecture of Expert systems

Complex decisions involve intricate
combination of factual and heuristic knowledge.
In order for the computer to be able to retrieve
and effectively use heuristic knowledge, the
knowledge must be organized in an easily
accessible format that distinguishes among data,
knowledge, and control structures. For this
reason, expert systems are organized in four
distinct levels:
1. Knowledge base- consists of problem solving
rules, procedures, and intrinsic data relevant to
the problem domain.
2. Working memory- refers to task-specific data
for the problem under consideration.
3. Inference engine- is a generic control
mechanism that applies the axiomatic
knowledge in the knowledge base to the task-
specific data to arrive at some solution or
conclusion.
4.User interface - the code that controls the
dialog between the user and the system.
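As a minimal illustration of how these four levels fit together, the following Python sketch shows one possible arrangement; the rules and questions are hypothetical and are not taken from any real system.

    # 1. Knowledge base: problem-solving if-then rules (hypothetical examples).
    knowledge_base = [
        {"if": {"fever", "rash"}, "then": "suspect_measles"},
        {"if": {"suspect_measles"}, "then": "recommend_isolation"},
    ]
    working_memory = set()               # 2. Working memory: task-specific facts

    def inference_engine(rules, facts):  # 3. Inference engine
        changed = True
        while changed:
            changed = False
            for rule in rules:
                if rule["if"] <= facts and rule["then"] not in facts:
                    facts.add(rule["then"])
                    changed = True
        return facts

    def user_interface():                # 4. User interface: dialog with the user
        for symptom in ("fever", "rash"):
            if input("Does the patient have " + symptom + "? (y/n) ") == "y":
                working_memory.add(symptom)
        print("Conclusions:", inference_engine(knowledge_base, working_memory))

    # user_interface()   # uncomment to run an interactive consultation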


Knowledge base
A knowledge base is the nucleus of
the expert system structure. The knowledge base
may be a specific diagnostic knowledge base
compiled by a consulting firm, and the end user
may supply the problem data. Knowledge base
is not a database. The traditional data base
environment deals with data that have a static
relationship between the elements in the
problem domain. Knowledge engineers, who
translate the knowledge of real human experts
into rules and strategies, create it. These rules
and strategies can change depending on the
prevailing problem scenario. The knowledge
base provides the expert system with the
capability to recommend directions for user
inquiry. It is usually stored in terms of if-then
rules.
The knowledge base of
expert systems contains both factual and
heuristic knowledge. Factual knowledge is that
knowledge of the task domain that is widely
shared, typically found in textbooks or journals,
and commonly agreed upon by those
knowledgeable in the particular field.
Heuristic knowledge is
the less rigorous, more experiential, more
judgmental knowledge of performance. In
contrast to factual knowledge, heuristic
knowledge is rarely discussed, and is largely
individualistic. It is the knowledge of good
practice, good judgment, and plausible
reasoning in the field. It is the knowledge that
underlies the "art of good guessing."


Inference engine
The basic functions of inference engine are:
1. Match the premise patterns of the rules
against elements in the working memory.
Generally the rules will be domain knowledge
built into the system, and the working memory
will contain the case based facts entered into the
system, plus any new facts that have been
derived from them.
2. If there is more than one rule that can be
applied, use a conflict resolution strategy to
choose one to apply. Stop if no further rules are
applicable.
3. Activate the chosen rule, which generally
means adding/deleting an item to/from working
memory. Stop if a terminating condition is
reached, or return to step 1.
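A compact Python sketch of this match / resolve / act cycle is given below; the rules are hypothetical, and the conflict-resolution strategy shown (preferring the most specific rule) is just one common choice.

    rules = [
        {"name": "R1", "if": {"engine_wont_start"}, "then": "check_battery"},
        {"name": "R2", "if": {"engine_wont_start", "lights_dim"}, "then": "battery_flat"},
    ]
    working_memory = {"engine_wont_start", "lights_dim"}

    while True:
        # 1. Match rule premises against the working memory.
        applicable = [r for r in rules
                      if r["if"] <= working_memory and r["then"] not in working_memory]
        if not applicable:
            break                        # stop: no further rules are applicable
        # 2. Conflict resolution: pick the most specific applicable rule.
        chosen = max(applicable, key=lambda r: len(r["if"]))
        # 3. Activate the chosen rule by adding its conclusion to working memory.
        working_memory.add(chosen["then"])

    print(working_memory)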


User interface
The Expert System user interface
usually comprises two basic components:
. The Interviewer Component
This controls the dialog with the user and/or
allows any measured data to be read into the
system. For example, it might ask the user a
series of questions, or it might read a file
containing a series of test results.
. The Explanation Component
This gives the system's solution, and also makes
the system's operation transparent by providing
the user with information about its reasoning
process. For example, it might output the
conclusion, and also the sequence of rules that
was used to come to that conclusion. It might
instead explain why it could not reach a
conclusion. Expert systems built this way can
also be extended to handle uncertainty and be
improved by incorporating machine learning.

Explanation system
Almost all expert systems also have an
explanation subsystem, which allows the
program to explain its reasoning to the user.

Knowledge base editor
Some systems also have a knowledge base
editor, which help the expert or knowledge
engineer to easily update and check the
knowledge base.

Working memory
The working memory represents relevant data
for the current problem being solved.


Knowledge engineering

It is the art of designing and building
expert systems, and knowledge engineers are its
practitioners. As stated earlier, knowledge
engineering is an applied part of the science of
artificial intelligence, which, in turn, is a part of
computer science. Today there are two ways to
build an expert system. They can be built from
scratch, or built using a piece of development
software known as a "tool" or a "shell." Before
we discuss these tools, let's briefly discuss what
knowledge engineers do. Though different
styles and methods of knowledge engineering
exist, the basic approach is the same: a
knowledge engineer interviews and observes a
human expert or a group of experts and learns
what the experts know, and how they reason
with their knowledge. The engineer then
translates the knowledge into a computer-usable
language, and designs an inference engine, a
reasoning structure, that uses the knowledge
appropriately. He also determines how to
integrate the use of uncertain knowledge in the
reasoning process, and what kinds of
explanation would be useful to the end user.
Next, the inference engine and
facilities for representing knowledge and for
explaining are programmed, and the domain
knowledge is entered into the program piece by
piece. It may be that the inference engine is not
just right; the form of knowledge representation
is awkward for the kind of knowledge needed
for the task; and the expert might decide the
pieces of knowledge are wrong. All these are
discovered and modified as the expert system
gradually gains competence.

Programming Languages

Expert systems are typically written
in special programming languages. The use of
languages like LISP and PROLOG in the
development of an expert system simplifies the
coding process. The major advantage of these
languages, as compared to conventional
programming languages, is the simplicity of the
addition, elimination, or substitution of new
rules and memory management capabilities. The
programming languages used for expert systems
tend to operate in a manner similar to ordinary
conversation. We usually state the premise of a
problem in the form of a question; with actions
being stated much as when we verbally answer
the question, that is, in a natural language
format. If, during or after a consultation, an
expert system determines that a piece of its data
or knowledge base is incorrect or is no longer
applicable because the problem environment has
changed, it should be able to update the
knowledge base accordingly. This capability
would allow the expert system to converse in a
natural language format with either the
developers or users. Some of the distinguishing
characteristics of programming languages
needed for expert systems work are:
Efficient mix of integer and real variables
Good memory-management procedures
Extensive data-manipulation routines
Incremental compilation
Tagged memory architecture
Optimization of the systems environment
Efficient search procedures

Expert System Shell

An expert system shell is a program
that provides the framework required for an
Expert System, but with no knowledge Base.
The shell provides an inference engine, perhaps
a user interface for providing knowledge or
some means of reading data in from files. Some
shells are self-contained, while others can be
extended by using other programming
languages. Indeed, some are effectively
programming languages in their own right!
The advantage of using a shell is that you can
focus on solving the problem at hand rather than
trying to build an Expert System from scratch. The
disadvantage is that you're stuck with the guts
of the shell.

Types of expert systems

There are various types of expert
system technology available. Which to use
depends upon the nature of the problem, the
available software, and what is easiest! Seriously,
in most simple applications of Expert Systems the
choice of technology is not as important as one
might think.
The basic types of system are:
Decision trees
Forward chaining
Backward chaining
State machines
Bayesian networks
Black board systems
Case based reasoning
Any of these basic technologies can be
implemented from a suitable shell or from
scratch. In some other cases technologies such
as fuzzy logic or Neural networks can be
integrated in to the Expert system to create
amore efficient Expert.

Decision tree
This technology is best suited in
creating Expert systems whose primary role is
in diagnosis of problems. In a tree system the
user is prompted with a question, each
question having a number of responses which
indicate the next question to ask. Eventually the
tree is traversed to a point where an answer is
given by the system or it gives up and
effectively says "I don't know". Behind the
scenes the knowledge base is a series of
questions, responses and questions to ask based
on those responses and the end points where
the answer is represented.
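A small Python sketch of such a question/response tree follows; the questions and answers here are invented purely for illustration.

    tree = {
        "question": "Does the device power on?",
        "responses": {
            "no": "Answer: check the power supply.",
            "yes": {
                "question": "Is the screen blank?",
                "responses": {
                    "yes": "Answer: check the display cable.",
                    "no": "I don't know.",
                },
            },
        },
    }

    def consult(node):
        if isinstance(node, str):        # leaf: an answer or "I don't know"
            print(node)
            return
        reply = input(node["question"] + " (yes/no) ").strip().lower()
        consult(node["responses"].get(reply, "I don't know."))

    # consult(tree)   # uncomment for an interactive session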

Forward chaining
A forward chaining expert system
starts from the symptoms and then runs through
its knowledge base until it gets to an answer.
The difference between this system and the
decision tree method above is that in forward
chaining the input data is presented at the
start, and the inference engine runs through its
rules in an arbitrary order until no further
changes take place in the internal variables of
the system, i.e. it has reached a steady state.
This may take several runs through the rules, as
it is possible that one rule firing may influence
the behavior of another rule that has already
been run, so the rules are all run again until no
more rules are executed.

Backward chaining
A back ward chaining expert system
starts with a possible result and then reviews the
inputs available to see if the evidence matches.
As you find an input that doesn't fit the
proposed result, that result is disregarded and a
different potential solution selected. If your
knowledge base is large, taking this approach
may be faster than the sequential examination of
all inputs and rules as required by the tree or
forward chaining mechanism
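A minimal Python sketch of this goal-driven search is shown below; the candidate results, required evidence and observed inputs are hypothetical.

    # Each candidate result lists the evidence it requires.
    rules = {
        "battery_flat": {"engine_wont_start", "lights_dim"},
        "out_of_fuel": {"engine_wont_start", "fuel_gauge_empty"},
    }
    observed = {"engine_wont_start", "lights_dim"}

    for candidate, evidence_needed in rules.items():
        if evidence_needed <= observed:      # all required inputs fit
            print("accepted:", candidate)
            break
        print("discarded:", candidate, "- some required evidence does not fit")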

State machine
A state machine is a software object
that can hold a number of different states, each
state being represented by a particular
combination of values stored within the object.
How the object behaves, and switches between
the states, depends upon the current state it is in.
Each state can have different input and output
values from another state, so one state may take
into account a value that another state disregards.

Case based reasoning
Here the system requires a knowledge
base made up of previous cases or instances of
the problem, together with the solution that was
found and the result that took place. Rather than
creating a set of rules, you just write an inference
engine that can look for similarities between
previous situations and the current one.

Bayesian networks
A problem with simpler Expert
Systems is that if they cannot find an answer that
matches a set of circumstances in the inference
engine and the Knowledge Base then they give
no answer. A system built around a Bayesian
Network will give a best-fit answer with
probabilities attached. The knowledge base for
such a system consists of a table of inputs and
outputs with the probability that a particular
input will contribute to a particular output.
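One simple way to realise this table-driven best-fit behaviour is a naive-Bayes style scoring over such a table, sketched below with invented probabilities; this is a simplification rather than a full Bayesian network.

    priors = {"flu": 0.3, "cold": 0.7}
    # P(input is observed | output); the figures are hypothetical.
    likelihood = {
        "flu": {"fever": 0.9, "cough": 0.8},
        "cold": {"fever": 0.2, "cough": 0.7},
    }
    observed = ["fever", "cough"]

    scores = {}
    for output, prior in priors.items():
        p = prior
        for inp in observed:
            p *= likelihood[output].get(inp, 0.01)   # small default if missing
        scores[output] = p

    total = sum(scores.values())
    for output in sorted(scores, key=scores.get, reverse=True):
        print(output, round(scores[output] / total, 2))   # best-fit answer first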

Black board systems
Black board systems work by
having lots of small components that identify
particular events or results based on specific
inputs. Each component then communicates its
result to a controlling system, which is also
receiving data from other components. The
individual components might be fully fledged
Expert Systems in their own right, or other
technologies such as Neural Networks.








Transition from Data Processing to
Knowledge Processing

What data has been to the previous
generations of computing, knowledge is to the
present generation of computing. Expert
systems represent a revolutionary transition
from the traditional data processing to
knowledge processing. Figure illustrates the
relationships between the procedures for data
processing and knowledge processing to make
decisions. In traditional data processing the
decision maker obtains the information
generated and performs an explicit analysis of
the information before making his or her
decision. In an expert system knowledge is
processed by using available data as the
processing fuel. Conclusions are reached and
recommendations are derived implicitly. The
expert system offers the recommendation to the
decision maker, who makes the final decision
and implements it as appropriate. Conventional
data can now be manipulated to work with
durable knowledge, which can be processed to
generate timely information, which is then used
to enhance human decisions.


The Need for Expert Systems

Expert systems are necessitated by the
limitations associated with conventional human
decision-making processes, including:
1. Human expertise is very scarce.
2. Humans get tired from physical or mental
workload.
3. Humans forget crucial details of a problem.
4. Humans are inconsistent in their day-to-day
decisions.
5. Humans have limited working memory.
6. Humans are unable to comprehend large
amounts of data quickly.
7. Humans are unable to retain large amounts of
data in memory.
8. Humans are slow in recalling information
stored in memory.
9. Humans are subject to deliberate or
inadvertent bias in their actions.
10. Humans can deliberately avoid decision
responsibilities.
11. Humans lie, hide, and die.
Coupled with these human limitations
are the weaknesses inherent in conventional
programming and traditional decision-support
tools. Despite the mechanistic power of
computers, they have certain limitations that
impair their effectiveness in implementing
human-like decision processes. Conventional
programs:
1. Are algorithmic in nature and depend only on
raw machine power
2. Depend on facts that may be difficult to
obtain
3. Do not make use of the effective heuristic
approaches used by human experts
4. Are not easily adaptable to changing problem
environments
5. Seek explicit and factual solutions that may
not be possible




The Applications of Expert Systems

An expert system may be viewed as a
computer simulation of a human expert. Expert
systems are an emerging technology with many
areas for potential applications. Past
applications range from MYCIN, used in the
medical field to diagnose infectious blood
diseases, to XCON, used to configure computer
systems. These expert systems have proven to
be quite successful. Most applications of expert
systems will fall into one of the following
categories:

Diagnosis and Troubleshooting of Devices
and Systems of All Kinds
This class comprises systems that
deduce faults and suggest corrective actions for
a malfunctioning device or process. Medical
diagnosis was one of the first knowledge areas
to which ES technology was applied (for
example, see Shortliffe 1976), but diagnosis of
engineered systems quickly surpassed medical
diagnosis. There are probably more diagnostic
applications of ES than any other type. The
diagnostic problem can be stated in the abstract
as: given the evidence presenting itself, what is
the underlying problem/reason/cause?

Planning and Scheduling
Systems that fall into this class analyze
a set of one or more potentially complex and
interacting goals in order to determine a set of
actions to achieve those goals, and/or provide a
detailed temporal ordering of those actions,
taking into account personnel, materiel, and
other constraints. This class has great
commercial potential, which has been
recognized. Examples involve airline scheduling
of flights, personnel, and gates; manufacturing
job-shop scheduling; and manufacturing process
planning.


Configuration of Manufactured Objects from
Subassemblies
Configuration, whereby a solution to a problem
is synthesized from a given set of elements
related by a set of constraints, is historically one
of the most important of expert system
applications. Configuration applications were
pioneered by computer companies as a means of
facilitating the manufacture of semi-custom
minicomputers (McDermott 1981). The
technique has found its way into use in many
different industries, for example, modular home
building, manufacturing, and other problems
involving complex engineering design and
manufacturing.

Financial Decision Making
The financial services industry has
been a vigorous user of expert system
techniques. Advisory programs have been
created to assist bankers in determining whether
to make loans to businesses and individuals.
Insurance companies have used expert systems
to assess the risk presented by the customer and
to determine a price for the insurance. A typical
application in the financial markets is in foreign
exchange trading.

Knowledge Publishing
This is a relatively new, but also
potentially explosive area. The primary function
of the expert system is to deliver knowledge that
is relevant to the user's problem, in the context
of the user's problem. The two most widely
distributed expert systems in the world are in
this category. The first is an advisor, which
counsels a user on appropriate grammatical
usage in a text. The second is a tax advisor that
accompanies a tax preparation program and
advises the user on tax strategy, tactics, and
individual tax policy.

Process Monitoring and Control
Systems falling in this class analyze
real-time data from physical devices with the
goal of noticing anomalies, predicting trends,
and controlling for both optimality and failure
correction. Examples of real-time systems that
actively monitor processes can be found in the
steel making and oil refining industries.

Design and Manufacturing
These systems assist in the design of
physical devices and processes, ranging from
high-level conceptual design of abstract entities
all the way to factory floor configuration of
manufacturing processes.

Applications that are computational or
deterministic in nature are not good candidates
for expert systems. Traditional decision support
systems such as spreadsheets are very
mechanistic in the way they solve problems.
They operate under mathematical and Boolean
operators in their execution and arrive at one
and only one static solution for a given set of
data. Calculation intensive applications with
very exacting requirements are better handled
by traditional decision support tools or
conventional programming. The best application
candidates for expert systems are those dealing
with expert heuristics for solving problems.


Application Roadmap
The symbolic processing capabilities
of AI technology lead to many potential
applications in engineering and manufacturing.
With the increasing sophistication of AI
techniques, analysts are now able to use
innovative methods to provide viable solutions
to complex problems in everyday applications.
Figure presents a structural representation of the
application paths for artificial intelligence and
expert systems.


Benefits of Expert Systems
Expert systems offer an environment
where the good capabilities of humans and the
power of computers can be incorporated to
overcome many of the limitations discussed in
the previous section. Expert systems:
1. Increase the probability, frequency, and
consistency of making good decisions
2. Help distribute human expertise
3. Facilitate real-time, low-cost expert-level
decisions by the non-expert
4. Enhance the utilization of most of the
available data
5. Permit objectivity by weighing evidence
without bias and without regard for the user's
personal and emotional reactions
6. Permit dynamism through modularity of
structure
7. Free up the mind and time of the human
expert to enable him or her to concentrate on
more creative activities
8. Encourage investigations into the subtle areas
of a problem
Conclusions

A good expert system is
expected to grow as it learns from user
feedback. Feedback is incorporated into the
knowledge base as appropriate to make the
expert system smarter. The dynamism of the
application environment for expert systems is
based on the individual dynamism of the
components. This can be classified as follows:
Most dynamic: Working memory. The contents
of the working memory, sometimes called the
data structure, changes with each problem
situation. Consequently, it is the most dynamic
component of an expert system, assuming, of
course, that it is kept current.
Moderately dynamic: Knowledge base. The
knowledge base need not change unless a new
piece of information arises that indicates a
change in the problem solution procedure.
Changes in the knowledge base should be
carefully evaluated before being implemented.
In effect, changes should not be based on just
one consultation experience. For example, a rule
that is found to be irrelevant in one
problem situation may turn out to be crucial in
solving other problems.
Least dynamic: Inference engine. Because of
the strict control and coding structure of an
inference engine, changes are made only if
absolutely necessary to correct a bug or enhance
the inferential process. Commercial inference
engines, in particular, change only at the
discretion of the developer. Since frequent
updates can be disruptive and costly to clients,
most commercial software developers try to
minimize the frequency of updates.

Artificial intelligence has a long way
to go yet before it can achieve its goals, but the
discoveries made by research in this area justify
its continuation. There are intelligent techniques
that have been developed though and it is
doubtless that these will continue to be
developed and similar new discoveries made.
However, true intelligence in machines appears
to be, for now at least, beyond our reach. Only
time will tell whether this remains to be so.



References:

www.pcai.com
www.acm.org
www.alanturing.net
www.oniformation.com
www.exampleessays.com


ASR

PANHINI SWAROOP. S
N. MAHESH NARAYAN
SREE KALAHASTHEESWARA INSTITUTE OF TECHNOLOGY
SREEKALAHASTHI

Abstract


Artificial Silicon Retina (ASR) microchip is
designed to stimulate damaged retinal cells, allowing them to
send visual signals again to the brain. The ASR microchip is a
silicon chip 2mm in diameter and 25 microns thick, less than the
thickness of a human hair. It contains approximately 5,000
microscopic solar cells called micro photodiodes, each with
its own stimulating electrode. These micro photodiodes are
designed to convert the light energy from images into electrical
chemical impulses that stimulate the remaining functional cells
of the retina in patients with age-related macular degeneration
(AMD) and retinitis pigmentosa (RP) types of conditions. The
ASR microchip is powered solely by incident light and does not
require the use of external wires or batteries.


1. Introduction

The retina is a thin layer of neural tissue that lines the
back wall inside the eye. Some of these cells act to
receive light, while others interpret the information and
send messages to the brain through the optic nerve. This
is part of the process that enables you to see. Here is a
simple explanation of what happens when you look at an
object:
Scattered light from the object enters through the
cornea.
The light is projected onto the retina.
The retina sends messages to the brain through
the optic nerve.
The brain interprets what the object is.

The anatomy of the eye
The retina is complex in itself. This thin membrane at the
back of the eye is a vital part of your ability to see. Its
main function is to receive and transmit images to the
brain. These are the three main types of cells in the eye
that help perform this function:
Rods
Cones
Ganglion Cells
There are about 125 million rods and cones
within the retina that act as the eye's photoreceptors. Rods
are the more numerous of the two photoreceptors,
outnumbering cones 18 to 1. Rods are able to function in
low light (they can detect a single photon) and can create
black and white images without much light. Once enough
light is available (for example, daylight or artificial light
in a room), cones give us the ability to see color and detail
of objects. Cones are responsible for allowing you to read
this article, because they allow us to see at a high
resolution.
The information received by the rods and cones
is then transmitted to the nearly 1 million ganglion cells
in the retina. These ganglion cells interpret the messages
from the rods and cones and send the information on to
the brain by way of the optic nerve.
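A quick check of the figures above, as a small Python sketch using only the numbers quoted in the text:

    total_photoreceptors = 125e6              # rods + cones in the retina
    rods = total_photoreceptors * 18 / 19     # rods outnumber cones 18 to 1
    cones = total_photoreceptors * 1 / 19
    print(round(rods / 1e6, 1), "million rods,", round(cones / 1e6, 1), "million cones")
    # roughly 118 million rods and 7 million cones, feeding ~1 million ganglion cells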
There are a number of retinal diseases that attack these
cells, which can lead to blindness. The most notable of
these diseases are retinitis pigmentosa and age-related
macular degeneration. Both of these diseases attack the
retina, rendering the rods and cones inoperative, causing
either loss of peripheral vision or total blindness.
However, it's been found that neither of these retinal
diseases affect the ganglion cells or the optic nerve. This
means that if scientists can develop artificial cones and
rods, information could still be sent to the brain for
interpretation.

2. Architecture & working
The current path that scientists are
taking to create artificial vision received a jolt in 1988,
when Dr. Mark Humayun demonstrated that a blind
person could be made to see light by stimulating the nerve
ganglia behind the retina with an electrical current. This
test proved that the nerves behind the retina still
functioned even when the retina had degenerated. Based
on this information, scientists set out to create a device
that could translate images and electrical pulses that could
restore vision. Today, such a device is very close to
becoming available to the millions of people who have
lost their vision to retinal disease. In fact, there are at least
two silicon microchip devices that are being developed,
and one has already been implanted in the eyes of three
blind patients. The concept for both devices is similar,
with each being:
Small enough to be implanted in the eye
Supplied with a continuous source of power
Biocompatible with the surrounding eye tissue
Perhaps the most promising of these two silicon devices is
the artificial silicon retina (ASR) developed by
Optobionics. The ASR is an extremely tiny device, smaller
than the surface of a pencil eraser. It has a diameter of just
2 mm (.078 inch) and is thinner than a human hair. There
is good reason for its microscopic size. In order for an
artificial retina to work it has to be small enough so that
doctors can transplant it in the eye without damaging the
other structures within the eye.
The ASR contains about 3,500 microscopic solar cells
that are able to convert light into electrical pulses,
mimicking the function of cones and rods. To implant this
device into the eye, surgeons make three tiny incisions no
larger than the diameter of a needle in the white part of
the eye. Through these incisions, the surgeons introduce a
miniature cutting and vacuuming device that removes the
gel in the middle of the eye and replaces it with saline.
Next, a pinpoint opening is made in the retina through
which they inject fluid to lift up a portion of the retina
from the back of the eye, which creates a small pocket in
the subretinal space for the device to fit in. The retina is
then resealed over the ASR.

Photo courtesy Optobionics.
Here you can see where the ASR is placed between the
outer and inner retinal layers.
For any microchip to work it needs power, and the
amazing thing about the ASR is that it receives all of its
needed power from the light entering the eye. As you
learned before, light that enters the eye is directed at the
retina. This means that with the ASR implant in place
behind the retina, it receives all of the light entering the
eye. This solar energy eliminates the need for any wires,
batteries or other secondary devices to supply power.
Another microchip device that would restore partial
vision is currently in development by a team of
researchers from Johns Hopkins University, North
Carolina State University and the University of North
Carolina-Chapel Hill. Called the artificial retina
component chip (ARCC), this device is quite similar to
the ASR. Both are made of silicon and both are powered
by solar energy. The ARCC is also a very small device
measuring 2 mm square with a thickness of .02 millimeters
(.00078 inch). There are significant differences between
the devices, however.
Unlike the ASR which is placed between layers of retinal
tissue, the ARCC is placed on top of the retina. Because it
is so thin, light entering the eye is allowed to pass through
the device to strike the photosensors on the back of the
chip. However, this light is not the power source for the
ARCC. Instead, a secondary device attached to a pair of
common eyeglasses directs a laser at the chip's solar cells
to provide power. The laser would have to be powered by
a small battery pack.
According to researchers, the ARCC will give blind
patients the ability to see 10 by 10 pixel images, which is
about the size of a single letter on this page. However,
researchers have said that they could eventually develop a
version of the chip that would allow a 250 by 250 pixel
array, which would enable those who were once blind to
read a newspaper.




3. Clinical Trials

The microsurgical procedure consists of a
standard ophthalmic operation called a vitrectomy and a
retinotomy, plus the implantation of the chip, itself.
The surgeon starts by making three tiny incisions in
the white part of the subject's eye, through which
instruments are inserted. Through these incisions, the
surgeon replaces the vitreous gel in the middle of the eye
with saline.
The surgeon then makes an opening in the retina
through which fluid is injected: the fluid lifts up a portion
of the retina from the back of the eye and creates a small
pocket in the subretinal space just wide enough to
accommodate the ASR microchip.
The surgeon then slides the implant into the subretinal
space, much as one might slip a tiny coin into a pocket.
Finally, the surgeon introduces air into the middle of the
eye to gently push the retina back down over the implant.
Over a period of one or two weeks, the air bubble is
resorbed and replaced by fluids created within the eye.
The procedure typically takes about 2 hours.
In January 2000, the US government's Food and Drug
Administration (FDA) authorized Optobionics to implant
its Artificial Silicon Retina device in up to ten retinitis
pigmentosa patients in a two-year safety and feasibility
study.
In two-year clinical trials that began in June 2000,
Optobionics implanted its microchip into the subretinal
space of ten patients with RP, to study its safety and
feasibility in treating retinal vision loss.
At this time, Optobionics is correlating and assessing
clinical data from all these patients. No patient has shown
signs of implant rejection, infection, inflammation, erosion
or retinal detachment related to the implanted microchip.
The durability of the ASR chip in this location and the
long-term safety, feasibility, and suitability of this
procedure, however, are yet to be determined.



4. Advantages
Vision lost to retinal disease could be restored
permanently.
Even a person blind from birth could potentially be
enabled to see.


5. Future

Intensive research is going on in this field to improve the
quality of the restored vision, and this technology gives
scientists great hope of building an artificial nervous system.

6. Conclusion

As already said, together, AMD and RP affect at least
30 million people in the world. They are the most
common causes of untreatable blindness in developed
countries and, currently, there is no effective means of
restoring vision. So, in the near future, by improving this
technology further, we may be able to eliminate the darkness
in so many eyes.


7.References

1. Howstuffworks, "How Artificial Vision Works"
2. Optobionics, "ASR Device"
































BIO-MEDICAL SMART SENSOR FOR VISION IMPAIRED

V.M. PRAVEEN KUMAR                              B. BALA SAI BABU
3rd Year, E.E.E.                                3rd Year, E.E.E.
Sir C.R.R. College of Engineering,              Sir C.R.R. College of Engineering,
Eluru.                                          Eluru.
praveen_eee46@yahoo.co.in                       saibabu_dec31@yahoo.co.in

Abstract
In this paper, we describe the current version of the
artificial retina prosthesis and cortical implant. This
work will require significant advances in a variety
of disciplines to develop the novel solutions needed
to make artificial vision for the visually-impaired a
reality. This paper describes the novel approach that
we have adopted to providing a complete system for
restoring vision to visually-impaired persons, from
the signals generated by an external camera to an
array of sensors that electrically stimulate the retina
via a wireless interface.
Keywords
Sensor systems, biomedical sensors.
1. Introduction
In this paper, we describe the current version of the
artificial retina prosthesis and cortical implant. This
research is a multidisciplinary project. Restoring vision
to the blind and visually impaired is possible only
through significant progress in all these research areas.
In the future, artificial retina prostheses may be used to
restore visual perception to persons suffering from
retinitis pigmentosa, macula degeneration, or other
diseases of the retina. In patients with these diseases,
most of the rods and cones are destroyed, but the other
cells of the retina are largely intact. It is well known
that the application of electrical charges to the retina
can elicit the perception of spots of light. By coupling
novel sensing materials with the recent advances in
VLSI technology and wireless communication, it is
now feasible to develop biomedical smart sensors that
can support chronic implantation of a significant
number of stimulation points. Although the
development and use of artificial retina prostheses is
still in the early stages, the potential benefits of such
technology are immense. Similarly, the use of cortical
implants has promise for the visually impaired. Unlike
the retina prosthesis, a cortical implant bypasses most
of the visual system, including the eye and the optic
nerve, and directly stimulates the visual cortex, where
information from the eyes is processed. Therefore, in
addition to overcoming the effects of diseased or
damaged retina tissue, a cortical implant could
circumvent many other problems in the visual system,
including the loss of an eye. The smart sensor package
is created through the backside bonding of an array of
sensing elements, each of which is a set of microbumps
that operate at an extremely low voltage, to an integrated
circuit for a corresponding multiplexed grid of
transistors that allows individual voltage control of each
microbump sensor. The next generation design supports
a 16X16 array of sensors. An earlier circuit design,
which has been fabricated and tested, supports a 10X10
array of sensors. The package is encapsulated in inert
material except for the microbumps, which must be in
contact with the retina. The long-term operation of the
device, as well as the difficulty of physically accessing
a biomedical device implanted in the eye, precludes the
use of a battery-powered smart sensor. Because of the
high volume of data that must be transmitted, the power
consumption of an implanted retinal chip is much
greater than that of, for example, a pacemaker. Instead, the
device can be powered using RF inductance. Because
of the difficulties of aligning the two coils (one being
within the body and the other one outside the body) for
RF power transmission, a low frequency is required to
tolerate misalignment of the coils. On the other hand, a
relatively high frequency is required to operate in the
unlicensed ISM band. For this reason, the novel
approach of using two frequencies has been adopted: RF
inductance at a frequency of 5 MHz and RF data
transmission at a frequency of either 900 MHz or 2.4
GHz.

2. Retinal and Cortical Implant
Proposed retina implants fall into two general
categories:
Epiretinal, which are placed on the surface of
the retina.
Subretinal, which are placed under the surface
of the retina.
Both approaches have advantages and disadvantages.
The main advantages of the sub-retinal implant are that
the implant is easily fixed in place and that the processing
involved is simplified, since the signals that are
generated replace only the rods and cones, with the other
layers of the retina processing the data from the
implant. The main advantage of the epi-retinal implant
is the greater ability to dissipate heat because it is not
embedded under tissue. This is a significant
consideration in the retina. The normal temperature
inside the eye is less than the normal body temperature
of 98.6° Fahrenheit. Besides the possibility that heat
build-up from the sensor electronics could jeopardize
the chronic implantation of the sensor, there is also the
concern that the elevated temperature produced by the
sensor could lead to infection, especially since the
implanted device could become a haven for bacteria.
There are also two options for a cortical implant. One
option is to place the sensors on the surface of the
visual cortex. At this time, it is unknown whether the
signals produced by this type of sensor can produce
stimuli that are sufficiently localized to generate the
desired visual perception. The other option is to use
electrodes that extend into the visual cortex. This
allows more localized control of the stimulation, but
also presents the possibility of long-term damage to the
brain cells during chronic use. It should be noted,
however, that although heat dissipation remains a
concern with a cortical implant, the natural heat
dissipation within the skull is greater than within the
eye. Given the current state of the research, it is unclear
which of these disadvantages will be most difficult to
overcome for a chronically implanted device. An
implantable version of the current ex-vivo microsensor
array, along with its location within the eye, is shown in
Figure 1. The microbumps rest on the surface of the
retina rather than embedding themselves into the retina.
Unlike some other systems that have been proposed,
these smart sensors are placed upon the retina and are
small enough and light enough to be held in place with
relatively little force. These sensors produce electrical
signals that are converted by the underlying tissue into
a chemical response, mimicking the normal operating
behavior of the retina from light stimulation. The
chemical response is digital (binary), essentially
producing chemical serial communication. A similar
design is being used for a cortical implant, although the
spacing between the microbumps is larger to match the
increased spacing between ganglia in the visual cortex.
As shown in Figure 1, the front side of the retina is in
contact with the microsensor array. This is an example
of an epi-retinal implant. Transmission into the eye
works as follows. The surface of the retina is stimulated
electrically, via an artificial retina prosthesis, by the
sensors on the smart sensor chip. These electrical
signals are converted into chemical signals by the
ganglia and other underlying tissue structures and the
response is carried via the optic nerve to the brain.
Signal transmission from the smart sensors implanted in
the eye works in a similar manner, only in the reverse
direction. The resulting neurological signals from the
ganglia are picked up by the microsensors and the
signal and relative intensity can be transmitted out of
the smart sensor. Eventually, the sensor array will be
used for both reception and transmission in a feedback
system and chronically implanted within the eye.
Although the microsensor array and associated
electronics have been developed, they have not yet been
tested as a chronic implant. Another challenge at this
point is the wireless networking of these microsensors
with an external processing unit in order to process the
complex signals to be transmitted to the array.

Figure 1. Location of the Smart Sensor within the Eye
3. Smart Sensor Chip Design
Figure 2 shows a close-up of the smart sensor shown in
figure 1. Each microbump array consists of a cluster of
extrusions that will rest on the surface of the retina. The
small size of the microbumps allows them to rest on the
surface of the retina without perforating the retina. In
addition, the slight spacing among the extrusions in
each micro- bump array provides some additional heat
dissipation capability. Note that the distance between
adjacent sets of microbumps is approximately 70
microns. These sensors are bonded to an integrated
circuit. The integrated circuit is a multiplexing chip,
operating at 40KHz, with on-chip switches and pads to
support a grid of connections. Figure 1 shows a 4X4
grid for illustrative purposes, although the next
generation of sensor chip has a 16X16 array. The circuit
has the ability to transmit and receive, although not
simultaneously. Each connection has an aluminum
probe surface where the micromachined sensor is
bonded. This is accomplished by using a technique
called backside bonding, which places an adhesive on
the chip and allows the sensors to be bonded to the
chip, with each sensor located on a probe surface.
Before the bonding is done, the entire IC, except the
probe areas, is coated with a biologically inert
substance. The neural probe array is a user-configured
1:100 demultiplexer/ 100:1 multiplexer, where an
external switch controls the configuration. The neural
array is a matrix of 100 microelectrodes constructed as
bi-directional switched-probe units that will stimulate
or monitor the response state of an aggregate of
neurons, more specifically, bipolar cells, which are two-
poled nerve cells. When the array is configured as a
demultiplexer, the switched-probe units serve to
stimulate the corresponding aggregate of neurons; thus,
the array functions as a neurostimulator. When the
array is configured as a multiplexer, the

Figure 2. Illustration of the Microbump Array
units serve to monitor the evoked response of the
aggregate of neurons in the visual cortex; thus, the array
functions as a neural response monitor. The array has
an additional bi-directional port called the signal
carrier, where the direction of the signal flow to and
from this port depends on the configuration of the array.
As a neuro-response monitor, the neural signals from
each aggregate will be relayed through the signal carrier
port on a single line. As a neurostimulator, the external
signal, whose magnitude will depend on the intensity of
the signal required to revive the degenerate neurons,
will be injected into the circuit through the signal
carrier (bypassing the amplifier) to be distributed to
each aggregate through the corresponding unit. Each
switched-probe unit consists of a neural probe and two
n-channel MOSFETs, whose W/L ratio is 6/2. The W/L
ratio defines the behavior of the transistor, where W is
the width of the active area of the transistor and L is the
length of the polysilicon used for the gate channel. For
each unit, the probe is a passive element that is used to
interface each aggregate of neurons to the electronic
system and the transistors are the active elements that
are used to activate the units. The second-generation
prototype adds the decoder with its outputs connected
to the row and to the column ports of the array. The
addition of the decoder reduces the number of required
contact pads from 22 to 5 (Set, master clock, VDD,
VSS, and the signal carrier port) and enhances the
reliability of the Scanner with the configuration of 2
inputs, rather than the external connections of 20 inputs.
The configuration of the Set and clock cycles will
enable the decoder to sequentially activate each unit by
sending +5V pulses to the corresponding row and
column ports. To establish the required Set and clock
cycles, further neural analysis on the periodic
stimulation of the bipolar cells must be conducted. The
Set signal initiates the scanning of the probe from left to
right and from top to bottom.

4. Computer Communication
It is not feasible to do the processing internally using
the capabilities of only the sensor arrays. Thus, work on
interconnecting these smart sensors with an external
processing system is a fundamental aspect of realizing
the potential of an artificial retina. On-going diagnostic
and maintenance operations will also require
transmission of data from the sensor array to an
external host computer. These requirements are in
addition to the normal functioning of the device, which
uses wireless communication from a camera embedded
in a pair of eyeglasses into the smart sensor arrays. The
processing steps from external image reception to
transmission to the retina prosthesis are as follows. A
camera mounted on an eyeglass frame could direct its
output to a real-time DSP for data reduction and
processing (e.g., Sobel edge detection). The camera
would be combined with a laser pointer for automatic
focusing. The DSP then encodes the resultant image
into a compact format for wireless transmission into (or
adjacent to) the eye for subsequent decoding by the
implanted chips. The setup could use a wireless
transceiver that is inside the body, but not within the
retina, and a wire to the retina chip. The ultimate
research goal is to support an array of 1600 smart
sensor chips, each with a 25X25 grid of electrodes. The
rods and cones fire at an approximate interval of 200-
250 ms. Therefore, the processing will be performed
periodically in a 200-250 ms processing loop. Hence,
data will be transmitted four or five times per second.
Although the actual rods and cones in the eye operate in
an analog manner (a variety of possible values), our initial
system will operate in a strictly on/off mode. In other
words, one bit of data per sensor every 200-250 ms.
We plan on eventually moving to multiple-level
stimulation. The investigations into understanding the
visual processing of the brain will indicate whether or
not the sensor arrays will be implanted with uniform
distribution. Functionally, electrode arrays within the
center of the macula (the central retina) will have to
stimulate the retina differently than peripherally placed
electrode arrays, since the functions of these various
parts of the retina are very different. Centrally, in the
macula, we perceive our high-resolution detail vision,
while in the periphery, the retina is better at detecting
motion or illumination transients. (For example, most
persons can perceive their computer monitor's vertical
refresh when looking at the monitor using peripheral
vision, since the peripheral retina has better temporal
resolution, but poorer spatial resolution than the
macula.) Thus, a multi-electrode array visual prosthesis
will have to encode the visual scene slightly differently,
depending upon where on the retina each electrode
array is placed. The peripherally placed electrodes need
to generate signals based on lower spatial resolution
with greater emphasis on temporal events, while
centrally placed sensor arrays upon the macula need to
encode more spatially oriented information. Each array
will have to transmit some common information such as
the overall luminosity of the visual scene. So, each
smart sensor will have to be coordinated with other
smart sensors based on an image processing algorithm
designed to control a set of smart sensor arrays, each
separate, sending input to functionally different retinal
areas. In order to achieve the envisioned functionality,
two-way communication will be needed between an
external computer and cortical implant so that we can
provide input to the cortical implant and determine if
the desired image is seen. We also need two-way
communication with the retinal implant so that we can
determine that the sensors in the retina are operating as
expected. Besides input from the camera, we also need
the ability to provide direct input to the retinal implant
to determine if the patient sees what is expected from
that input pattern. This will validate our understanding
of the signaling between the camera and the smart
sensor array as well as the operation of the wireless
communication protocols. Our main objective is to
design a communication system that is energy efficient
and performs satisfactorily under interfering sources.
For very low power transmitter applications, reducing
the power consumed in the transmitter architecture and
choosing an ideal modulation technique together produce
the best energy efficiency. Many energy efficient transmitter
architectures have been developed and can be used for
low power applications. Comparisons of various digital
modulation techniques have been made in terms of
SNR per bit and bandwidth efficiency for a known BER
and a fixed data rate.
The power must be carefully controlled to avoid
damage to the retina and surrounding tissue. Each
sensor array operates with less than one microampere of
current. The power can be provided in different ways.
One option is to use wires to provide the power,
although we would still require wireless data
communication to limit the number of wires.
Implanting a battery near the eye could provide the
power. A second option is to use inductance, provided
by RF or IR signals. A third option is a photo-diode
array, which converts light to power. It is important to
note that even if the power source is wired, the data
communication needs to be wireless in order to
minimize the number of wires and improve the
flexibility of the system. After considering all factors,
the decision has been made to use radio frequencies for
both power inductance and data transmission.
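
To make the frame-processing and data-rate figures described in this section concrete, the following
Python sketch (not the project's actual software; the function names, the 16X16 grid and the edge
threshold are illustrative assumptions) reduces one edge-detected camera frame to a grid of on/off
stimulation bits, which at 4-5 frames per second amounts to roughly 1.0-1.3 kbit/s per sensor chip in
the initial one-bit-per-sensor mode.

import numpy as np

# Sobel kernels for horizontal and vertical gradients (edge detection).
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def filter2d(image, kernel):
    # Naive same-size 2-D filtering; adequate for a sketch.
    kh, kw = kernel.shape
    padded = np.pad(image, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def frame_to_stimulation_bits(frame, grid=16, threshold=0.25):
    # Edge-detect the frame, then reduce it to a grid x grid array of 0/1 bits,
    # one bit per electrode, as in the initial strictly on/off mode.
    gx, gy = filter2d(frame, SOBEL_X), filter2d(frame, SOBEL_Y)
    edges = np.hypot(gx, gy)
    edges = edges / edges.max() if edges.max() > 0 else edges
    h, w = edges.shape
    bits = np.zeros((grid, grid), dtype=np.uint8)
    for r in range(grid):
        for c in range(grid):
            block = edges[r * h // grid:(r + 1) * h // grid,
                          c * w // grid:(c + 1) * w // grid]
            bits[r, c] = 1 if block.mean() > threshold else 0
    return bits

frame = np.random.rand(128, 128)        # stand-in for one grayscale camera image
bits = frame_to_stimulation_bits(frame)
# 16 x 16 = 256 bits per frame, sent every 200-250 ms -> about 1.0-1.3 kbit/s per chip.
print(bits.size, "bits per frame,", bits.sum(), "electrodes switched on")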

5. Related Work
The goal of artificially stimulating the retina to produce
vision is currently being investigated by seven large,
multidisciplinary research teams worldwide, including
four groups in the United States, two in Germany, and
one in Japan. Table 1 describes the location of the other
six groups and the design approach used.

Each group has directed their efforts toward the design
of an implantable device. The Massachusetts Eye and
Ear/MIT program, as well as the North Carolina State
groups have been independently working on an
epiretinal, electrically based retinal stimulator. Groups
designing an electrical subretinal device include
Chow/Peyman, Zrenner, and Eckmiller. The Yagi
group at the University of Nagoya is attempting to
hybridize cultured neurons with a silicon-based
stimulator, borrowing from work pioneered by the Pine
lab at Cal-Tech. The North Carolina State group and the
Massachusetts Eye and Ear/MIT group share common
design approaches. Both implants are intended to
stimulate the retina using electrical current, applied to
the inner retina by a two-dimensional, multiple-electrode
array. Although the intended target cell was
believed to be the retinal ganglion cells, the retinal
bipolar cells were the predominant cell population
stimulated. These data were derived through an analysis
of the temporal dynamics of neuron responses after
epiretinal electrical stimulation in the frog. In addition
to an epiretinal electrode array, both groups have
designed VLSI (very large scale integration) chips
intended for ocular implantation. Both of these
integrated circuits are designed to accept an electrical
signal that encodes visual information. This signal is
formatted as an electrical representation of a visual
scene, provided by an external solid-state camera, or
computer. The VLSI chip is designed to decode this
signal and produce a graded electrical stimulation in the
appropriate electrodes, re-creating a spatially structured
electrical stimulus to the retina. A radio-frequency (RF)
receiver has been integrated into the VLSI design to
permit the RF transmission of the visual signal into the
chip. The MARC IV is an improvement over previous
versions because it is capable of measuring the
electrode contact impedance with the retina,
automatically compensating for fluctuations by altering
the stimulus voltage. The Massachusetts Eye and
Ear/MIT group in Boston powers their VLSI
implantable silicon chip using a specially designed
silicon photocell array. The array is capable of
delivering sufficient energy to power the VLSI chip.
This photocell array is mounted within a posterior
chamber intraocular lens. It consists of an array of
sixteen parallel sets of twelve linear photodiodes. A
two-watt, 820 nanometer laser is used to power the
photodiode array. By modulating the laser output
energy according to the pulse stream of a CCD sensor,
visual information may be transmitted into the VLSI
chip, digitally. This information is then decoded in a
similar manner to the MARC IV to generate a stimulus
voltage, corresponding to the level of illumination for a
given pixel at the appropriate stimulus electrode site.
For patients with hereditodegenerative retinal disease,
the outer retina is most commonly affected. Rods and
cones are dysfunctional or missing. These disease states
often leave the inner retina somewhat less affected.
Therefore, the true goal of a retinal implant in these
cases is to replace the missing functionality of rods and
cones. This involves stimulating the bipolar/
horizontal/amacrine systems of the retina. Those groups
working with subretinal electrode arrays hypothesize
that these cell populations are most accessible from the
subretinal space. Although this may be true, the bipolar
cells are stimulated by an epiretinal implant. The
inherent design of the subretinal electrical implants is
vastly different from those groups working on the
epiretinal approach. The epiretinal implants are
designed to derive their power from a source that is
independent from the electrode array. The NC State
group derives their power from an inductive
transformer. Energy from this source is then switched
to the appropriate electrodes, according to the visual
data input signal. The Massachusetts Eye and Ear/MIT
group uses a silicon photodiode array placed within a
posterior chamber intraocular lens for power. Energy
from this array is then switched by the VLSI implant to
the appropriate epiretinal electrode, based upon the
input of a CCD camera. Subretinal implants are
inherently simpler by design and mimic the modular
organization of individual rods and cones. Power is
generated at the site of sub-retinal electrical stimulation,
using photosensitive micro photodiodes (MPDs). When
arranged two-dimensionally, these micro photodiode
arrays (MPDAs) provide spatially organized electrical
stimulation to the retina. Thus, their design is inherently
much simpler. Incident light falls upon the MPDA,
generating an electrical stimulus with identical spatial
organization. No cameras or encoding/decoding
circuitry is needed. In addition, the power supply is
integrated within the implant. Although these are great
benefits in design simplicity, other factors may
complicate their use. These include the need for
optically clear media and also the assumption that an adequate
amount of stimulation current can be generated by an
MPD for each stimulation point. Further, since the
implant is within the subretinal space, metabolic issues
concerning adequacy of oxygen and nutrient delivery to
the outer retina become considerations. Both epiretinal
and subretinal approaches have thus far been based
upon electrical stimulation of the retina. Electrical
stimulation of the retina through injection of current
dissipates power and heat. In patients with degenerative
retinal disorders, the choriocapillaris, which normally
provides heat dissipation in the retina, is markedly
pathologic. Therefore, any study regarding the energy
dissipation requirements must be performed to account
for the compromised outer retinal blood flow/heat
dissipation system. In addition to the thermodynamic
issues introduced by electrical stimulation, ionization of
electrodes does occur at physiologic pH and
temperature within saline media. Thus, chronically
implanted electrodes oxidize, diminishing their
effectiveness over time.

6. Conclusion

In this paper, we have described our initial
approach to an artificial retina prosthesis and cortical
implant, which will be refined further as testing and
development continue. The creation of a smart sensor
implant to restore vision to persons with diseased
retinas or suffering from other damage to the visual
system has tremendous potential for improving the
quality of life for millions. It also presents a number
of challenging research problems that require the
involvement of a multidisciplinary research team. The
eventual goal of this research is a chronically implanted
visual prosthesis that provides significant visual
functionality.

7. References
[1] R.S. Khandpur, Handbook of Biomedical Instrumentation
[2] Leslie Cromwell, Biomedical Instrumentation


BIOMETRICS


A.RAJU NIRAJ JOSHI
raju_bi30@yahoo.com nirajjoshi529@yahoo.co.in



BALAJI INSTITUTE OF TECHNOLOGY & SCIENCE



ABSTRACT

In Information Technology (IT), biometrics authentication (shortened to biometrics) refers to technologies
for measuring and analyzing human physiological or behavioral characteristics for authentication purposes
including fingerprints, retinal and iris scanning, hand and finger geometry, voice patterns, facial
recognition, and other techniques.
In a typical IT biometric system, a person registers with the system when one or more of his physiological
characteristics are obtained, processed by a numerical algorithm, and entered into a database. Similarly
there are also other unique characteristics like speech, habits etc which are also used by biometrics.
We provide biometric systems and sub-system components that are used by the system integrators, subject
to open standards, to build total solutions by stringing these components together.
Biometrics is expected to be incorporated in solutions to provide for Homeland Security including
applications for improving airport security, strengthening our national borders, in travel documents, visas
and in preventing ID theft. Now, more than ever, there is a wide range of interest in biometrics across
federal, state, and local governments. Congressional offices and a large number of organizations involved
in many markets are addressing the important role that biometrics will play in identifying and verifying the
identity of individuals and protecting national assets.

INTRODUCTION:
Nowadays we see improvement in every field. Consider the Internet, for instance: it is growing rapidly, and
almost everybody knows about it. With such a widely used technology, many people carry out very important
tasks via the Internet, and almost everyone in developed countries now has a mail account. To access a mail
account, or any other service such as an automated teller machine (ATM), one requires some kind of personal
identification number (PIN). A PIN, however, can be lost or stolen. With everyone so busy nowadays in their
respective work, who would want to bear the trouble of a PIN getting lost or stolen, or a password stolen or
broken? Many people carry out millions of transactions via email, and many governments also use computers
with passwords to protect their secret files. Nobody wants illegal users to access their private data, or hackers
to break passwords and access their accounts. Biometric identification evolved in order to provide good
security to each one of us. Now let us take a deeper look at what biometrics is.

Biometrics is a way with which one can improve security. With the development of biometrics there are now
different ways of providing additional security. No two individuals are exactly alike: if we take into account
eyes, fingerprints, face, etc., we find that there are always slight differences. It means that no two individuals
will have the same fingerprints; cases of near-identical prints are very rare, and then only in twins. Biometrics
uses these kinds of unique characteristics of an individual. Similarly, there are other unique characteristics,
like speech and habits, which are also used by biometrics. Now that we know what biometrics is and why it
came into existence, let us look into some of its features. Biometric features can be divided into two groups:
1- Physiological features and

2- Behavioral features.

1.1) PHYSIOLOGICAL FEATURES OF BIOMETRICS:

This includes features like the eye (normally iris or retinal patterns), fingerprints, palm topology, hand
geometry, wrist veins and thermal images. From research and practical implementations of biometrics it has
become clear that physiological features are more successful in providing reliable security when used. The
reason for this is that these features do not vary with time but remain constant. Even if they change, the
change is very small and does not affect the performance of the security system's algorithms. Hence they are
more successful and widely implemented. Though this is a major advantage, there are also some shortcomings
which are important and worth mentioning.
For the implementation of biometrics based on physiological features one requires specific tools like
cameras, sensors, etc.; without these we cannot implement this kind of biometric interpretation. It is worth
mentioning that, in the case of access control to computers, the need for additional sampling tools limits the
possibility of applying the technique. We need additional sampling tools because we are adding hardware to
detect features like the face, eyes, etc., and this adds directly to the cost. This is an important issue, as
everybody wants to save money, not spend it. It is also difficult to implement in remote areas. The same is
the case with other biometrics, such as speech.
1.2) BEHAVIORAL FEATURES:

This includes features like speech, signatures and keystrokes. From the name itself one can tell that these
features are different from the physiological ones discussed above. These features vary with time, so they are
less reliable than physiological features. Keystrokes and signatures can vary considerably because of the
intrinsic variability of typing dynamics and also the possibility of typing errors, but the same cannot be said
about the speech feature. Keystrokes and signatures can vary greatly even between two consecutive samples.
Unlike other biometrics, including physiological features and even speech, which require specific tools,
keystroke analysis does not require any equipment except a system with a keyboard. Hence it is very cheap
compared to other biometrics, and identification is based on rhythm patterns: in this approach, not what you
type but how you type is important. It is not easy to be aware of the timing you maintain between consecutive
keys; therefore it is difficult to maintain a consistent level of uniformity in keystroke analysis. Since this paper
focuses on keystroke analysis, let us look at keystrokes in more detail. How is a keystroke measured? It is
usually done in two ways:

a) Keystroke duration (how long a key is held down) and
b) Keystroke latency (elapsed time)
Let us talk about them in detail.
a) Keystroke duration: When we type on the keyboard, we press the key of the character we want to type.
This requires some force from our side, and when we press the key we hold it down for some number of
milliseconds. This duration may vary from user to user, because not everyone types at the same speed. The
duration for which the key remains pressed is called the keystroke duration. It depends on different things,
such as the physiological and psychological factors of an individual. It also depends on whether a person is
right-handed or left-handed. For instance, if a person is left-handed, then his duration of pressing the key 'A'
may be less than that of a right-hander, because he can react faster with his dominant hand, whereas the
right-hander may be slower to react and might press the key harder, which results in a longer duration. From
this it can be assumed that the faster the typing speed, the shorter the duration.
b) Keystroke latencies: It is the time interval between two consecutive keys. There are four types of
latencies.
1) P-P (Press-Press): It is the time interval between two consecutive key presses. Physically it represents
how fast the person types.
2) P-R (Press-Release): It is the time interval between pressing a key and releasing the same key. This
shows, or is analogous to, how hard a person types. This type of latency reflects how much pressure is
applied to the keys.
3) R-P (Release-Press): It is the time interval between the release of one key and the pressing of the next key.
4) R-R (Release-Release): It is the time interval between the releases of two successive keys.
Keystroke latencies and keystroke duration form the basic parameters used by many classification and
authentication algorithms for keystroke interpretation. The user is identified in keystroke analysis on the
basis of these parameters. It is these parameters (keystroke duration and keystroke latencies) which form the
basis for many algorithms, such as digraph, trigraph and neural network methods, to identify the user via
keystrokes. One more important thing to be noted is that latencies and durations vary from keyboard to
keyboard of different types. For instance, a numerical keypad has only numbers, where only two fingers can
be used, but in the case of normal keyboards many typists use four fingers for typing. If a legal user who
works daily on one type of keyboard is given a keyboard of a different type, then the durations and latencies
of this legal user may vary drastically from his stored sample of latencies and durations, which were
calculated on the previous keyboard.
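
As an illustration of these parameters, the short Python sketch below (a simplified illustration, not taken
from any particular keystroke system; the event format and the example timings are assumptions) derives
the keystroke duration and the four latency types from a list of timed key events.

def keystroke_features(events):
    # events: list of (key, press_time_ms, release_time_ms) in typing order.
    durations = [(key, release - press) for key, press, release in events]
    latencies = []
    for (k1, p1, r1), (k2, p2, r2) in zip(events, events[1:]):
        latencies.append({
            "keys": k1 + k2,
            "P-P": p2 - p1,   # press-to-press: how fast the person types
            "P-R": r1 - p1,   # press-to-release of the first key (its duration)
            "R-P": p2 - r1,   # release-to-press: can be negative if keys overlap
            "R-R": r2 - r1,   # release-to-release of the two keys
        })
    return durations, latencies

# Hypothetical timings (in milliseconds) for typing the two keys "a" then "b".
durations, latencies = keystroke_features([("a", 0, 95), ("b", 140, 230)])
print(durations)   # [('a', 95), ('b', 90)]
print(latencies)   # [{'keys': 'ab', 'P-P': 140, 'P-R': 95, 'R-P': 45, 'R-R': 135}]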
As we have gone through some of the properties of keystroke helpful in keystroke analysis, let us go
through one more basic property called the disorder, which is used extensively in calculating the distance
between samples.
2) DEGREE OF DISORDER:

As we move on to the later section on user authentication and classification, we will understand why we are
discussing the degree of disorder here and how it can be used in keystroke analysis. To begin with, let us see
what the degree of disorder is. The degree of disorder is a distance measured between two samples. Suppose
two samples are provided by a user and we want to find the variation between those samples; then we use
the degree of disorder. When two samples are provided and we want to measure the degree of disorder of
one sample with respect to the other, this is how we proceed.

Given an array V of N elements, a simple degree of disorder of V with respect to its sorted counterpart V'
can be computed as the sum of the distances between the position of each element in V and the position of
the same element in V'. For example, consider two arrays A and A' (the sorted array), as shown below.

The array has elements H, W, C, Q, M. We can see that some elements of the two arrays have shifted from
their positions. For instance, take the element C: in array A it is in the third slot, whereas in the sorted array
A' it is in the first slot, so the distance by which it shifted is 2. Similarly, H has shifted by 1 slot, M has
travelled a distance of 2 slots, W has travelled a distance of 3 slots (because in array A it was in the second
slot but in the sorted array A' it is in the fifth slot), and lastly Q has not travelled any distance because it is in
the same slot as it was in array A.
Therefore the degree of disorder is (1+2+3+0+2) = 8.
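
A minimal Python sketch of this measure (the function name is our own; the computation follows the
description above) reproduces the value 8 for the example array:

def degree_of_disorder(v, v_sorted=None):
    # Sum, over all elements, of the distance between an element's position in v
    # and its position in the sorted counterpart v'.
    if v_sorted is None:
        v_sorted = sorted(v)
    position = {element: i for i, element in enumerate(v_sorted)}
    return sum(abs(i - position[element]) for i, element in enumerate(v))

# A = [H, W, C, Q, M]; its sorted counterpart A' = [C, H, M, Q, W]
print(degree_of_disorder(["H", "W", "C", "Q", "M"]))   # prints 8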


2.1) NORMALIZED DEGREE OF DISORDER:

Now consider an array V of N elements and let V' be its sorted counterpart. The maximum disorder of an
array occurs when its elements are in reverse order; for an array of N elements it equals N*N/2 when N is
even and (N*N - 1)/2 when N is odd. It is convenient to normalize the degree of disorder of an array by
dividing it by the value of the maximum disorder of an array of N elements. In this way it is possible to
compare the disorder of arrays of different sizes. After this normalization, it is clear that, for any array V, its
degree of disorder falls between 0 (if V is ordered) and 1 (if V is in reverse order). The distribution of all the
possible arrays of N different elements with respect to their (normalized) disorder is not uniform. From the
studies of Francesco Bergadano, Daniele Gunetti and Claudia Picardi regarding the degree of disorder, it is
known that arrays accumulate mainly in the region (0.5-1), and especially in the region (0.5-0.75). It can be
said that the ratio between the number of arrays with disorder higher than 0.5 and the number with disorder
less than 0.5 increases as the number n of array elements increases. Bergadano, Gunetti and Picardi found
that for 9 different elements they could make 9! different arrays, and for these the ratio of the number of
arrays with disorder higher than 0.5 to the number of arrays with disorder lower than 0.5 was 4.478. When
they increased the number of elements to 13 they found that the ratio had risen from 4.478 to 8.075. (In the
corresponding figures for the disorder of arrays of 9 and 13 elements, the X-axis shows the values of
normalized disorder and the Y-axis is a logarithmic scale.)

Bergadano, Gunetti and Picardi generated 10 million arrays of 100 elements and found the following results:
only 573 had a disorder in the interval [0.44-0.5]; the remaining had a disorder in the interval [0.5-0.9]. This
property is important, since it can be used to compare two typing samples in order to decide whether they
have been provided by the same user or by different individuals.
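
The normalization and the distribution property described above can be checked with a small simulation;
the sketch below (our own illustration, using 10,000 random shuffles rather than the 10 million arrays of
the original study) estimates how often a random permutation of 100 elements has a normalized disorder
above 0.5.

import random

def degree_of_disorder(v, v_sorted=None):
    v_sorted = sorted(v) if v_sorted is None else v_sorted
    position = {element: i for i, element in enumerate(v_sorted)}
    return sum(abs(i - position[element]) for i, element in enumerate(v))

def max_disorder(n):
    # Maximum disorder: elements in reverse order -> n*n/2 (even n) or (n*n - 1)/2 (odd n).
    return n * n // 2 if n % 2 == 0 else (n * n - 1) // 2

def normalized_disorder(v, v_sorted=None):
    return degree_of_disorder(v, v_sorted) / max_disorder(len(v))

random.seed(0)
elements = list(range(100))
trials, above_half = 10_000, 0
for _ in range(trials):
    random.shuffle(elements)
    if normalized_disorder(elements) > 0.5:
        above_half += 1
print(above_half / trials)   # expected to be very close to 1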
3) USER AUTHENTICATION AND CLASSIFICATION:

This is the most important task in any analysis, i.e. to identify the legal user and give him access. The other
things these systems, which give access to the user through keystroke interpretation, should do are to prevent
an illegal user from getting access and to avoid a legal user being denied access. Let us see how a system can
classify and authenticate a legal user versus an illegal user. At times it does happen that when the legal user
is very tired or sick, his behavioural characteristics, such as typing style, may change. This might trigger the
alarm, which is provided to alert the security people that an impostor has gained access. Since the basic
concept of employing keystroke biometrics is to harden the password and improve security, it is required
that even at the slightest hint of an impostor the alarm should ring. This is good for security enhancement,
but we don't want the alarm to ring when the legal user is trying to log in. Similarly, we also don't want the
system to accept an illegal user as a legal user and then give him access. For that we first need to understand
the false acceptance rate (FAR) and the false rejection rate (FRR). For the system to work efficiently we
should see to it that both FAR and FRR are low.


3.1) FALSE REJECTION RATE AND FALSE ACCEPTANCE RATE:

When discussing the accuracy and the performance of biometric systems, it is very beneficial to find a
suitable measure in order to compare different systems. There are two such measures: the False Acceptance
Rate (FAR) and the False Rejection Rate (FRR). FAR is a measure of the likelihood that the access system
will wrongly accept an access attempt; that is, will allow an access attempt by an unauthorized user. For
many systems, the threshold can be adjusted to ensure that virtually no impostors are accepted.
Unfortunately, this often means that an unreasonably high number of authorized users are rejected, which
can be measured by FRR (the rate that a given system will falsely reject an authorized user). Since both
FAR and FRR are inversely proportional to each other there exists a certain trade off between them which
is explained below.

The trade-off relationship of FAR/FRR is shown in the figure below. As we can see, the lower the FAR we
demand of our system, the more the FRR will increase. It is necessary to find a balance between these two
types of errors, so that both security and user friendliness are preserved.
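
As a simple illustration of these two measures (our own sketch; the attempt format is an assumption), FAR
and FRR can be computed from a list of labelled authentication attempts as follows:

def far_frr(attempts):
    # attempts: list of (is_genuine_user, was_accepted) pairs.
    # FAR = accepted impostor attempts / all impostor attempts
    # FRR = rejected genuine attempts  / all genuine attempts
    impostor = [accepted for genuine, accepted in attempts if not genuine]
    legal = [accepted for genuine, accepted in attempts if genuine]
    far = sum(impostor) / len(impostor) if impostor else 0.0
    frr = sum(1 for accepted in legal if not accepted) / len(legal) if legal else 0.0
    return far, frr

# 3 impostor attempts (1 wrongly accepted) and 4 genuine attempts (1 wrongly rejected):
print(far_frr([(False, True), (False, False), (False, False),
               (True, True), (True, True), (True, False), (True, True)]))
# prints (0.3333333333333333, 0.25)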


3.2) PROCEDURE FOR USER CLASSIFICATION:

This is a very simple approach, as it doesn't involve any complex mathematical formulae or equations.
Suppose we are given a set of users and a set of typing samples of the same text from those users. Given a
new sample from one of the users, we want to determine who typed it. It is done on the basis of a measure of
distance between digraphs or trigraphs. A digraph is the combination of two consecutive keys, whereas a
trigraph is the combination of three consecutive keys. In this classification procedure we collect some
samples of data from the users and then find the distance between the samples provided by the users, if only
two samples are taken; with more than two samples, the mean distance is usually taken. On average, we may
expect the distance between two samples of the same user to be smaller than the distance between two
samples of different users. This is an important assumption. In a "perfect world," the distance d(S1, S2)
between two different samples S1 and S2 of the same user would always be smaller than the distance between
samples of two different users, whereas in reality it may also happen that the distance between samples
provided by the same user is greater than the distance between samples provided by different users, i.e.
d(S1,S2) > d(X1,Y2), where S1 and S2 belong to the same user while X1 and Y2 come from different users.
This happens because the way an individual types on the keyboard may vary according to different
psychological and physiological conditions, such as stress and tiredness. If we allow the users to make typing
errors, then this is even more likely.

As a consequence, the classification can be more accurate if more typing samples of each user are available
to classify a new incoming sample. For instance, suppose three users A, B, and C provide, respectively, 4,
3, and 5 typing samples of the same text, and let X be a new sample provided by one of the three users. The
samples of each user constitute what can be called a model (or profile or signature) of that user: the way
he/she types on the keyboard. We compute the mean distance of sample X from the model of each user as
the mean of the distances of X from each sample of that user.

In other words, we find the distance between the new sample and each of a user's stored signature samples,
sum these distances, and divide by the number of signature samples of that user. This gives us the mean
distance between that user's signatures and the new sample. We calculate this mean for every user's model,
and the new sample X is classified as belonging to the user whose model has the smallest mean distance
from X.
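
The classification procedure just described can be sketched in a few lines of Python (an illustration under
our own assumptions: each stored sample is an array of trigraphs sorted by duration, and the distance used
is the normalized degree of disorder introduced in Section 2):

def classify(new_sample, models, distance):
    # models: {user: [sample1, sample2, ...]}. The new sample is attributed to the
    # user whose model has the smallest mean distance from it.
    def mean_distance(samples):
        return sum(distance(stored, new_sample) for stored in samples) / len(samples)
    return min(models, key=lambda user: mean_distance(models[user]))

def disorder_distance(s1, s2):
    # Normalized degree of disorder of s2 with respect to s1 (see Section 2).
    position = {trigraph: i for i, trigraph in enumerate(s1)}
    n = len(s1)
    maximum = n * n // 2 if n % 2 == 0 else (n * n - 1) // 2
    return sum(abs(i - position[t]) for i, t in enumerate(s2)) / maximum

# Hypothetical models: each sample is a list of trigraphs sorted by duration.
models = {
    "A": [["SCO", "COT", "OTL", "TLA", "LAN", "AND"],
          ["COT", "SCO", "OTL", "TLA", "LAN", "AND"]],
    "B": [["AND", "LAN", "TLA", "OTL", "COT", "SCO"]],
}
new_sample = ["SCO", "COT", "TLA", "OTL", "LAN", "AND"]
print(classify(new_sample, models, disorder_distance))   # prints A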


3.3) PROCEDURE FOR USER AUTHENTICATION:

An Access Control system based on any biometric measure has to perform a task that is much more
difficult than a classification task such as the one described in the previous section. There, a person who
provides a new sample has to be correctly identified among a finite set of known users. By contrast, an
Access Control system must be able to deal with incoming samples that may belong to one of the legal
users or to impostors, who have completely unknown biometric features. Of course, for practical
applications we expect the FAR of the system to be small and, above all, the FRR to be negligible. In order
to deal with unknown impostors, the system must be able to accept/reject a new sample X, provided by
someone who claims to be the legal user U, not only if X is closer to U's model than to any other model:
after all, this could happen by chance. A sample X must be sufficiently close to U's typing model in order
to be classified as a sample of U.
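
Building on the classification sketch above, authentication can then be expressed (again as our own
illustrative sketch; the threshold value is an assumption that would have to be tuned to balance FAR and
FRR) as the extra requirement that X be both closest to the claimed user's model and within a fixed
distance of it:

def authenticate(new_sample, claimed_user, models, distance, threshold=0.35):
    # Accept only if the new sample is closest to the claimed user's model AND
    # its mean distance from that model is below the chosen threshold.
    def mean_distance(samples):
        return sum(distance(stored, new_sample) for stored in samples) / len(samples)
    closest = min(models, key=lambda user: mean_distance(models[user]))
    return closest == claimed_user and mean_distance(models[claimed_user]) < threshold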


3.4) TESTING OF SYSTEM:

To test a system we need many samples so that many tests can be carried out. For that we need samples
from volunteers. Later in this paper the importance of volunteers will become clear, as some examples from
the literature are cited. How the samples are gathered, whether professional typists are used, what text is
taken as samples, and so on, are described in detail in the section on algorithms and related work.


4. TECHNIQUES OR ALGORITHMS USED IN CLASSIFICATION AND AUTHENTICATION:

As mentioned above, classification and authentication can be carried out using a number of available
algorithms. Those available algorithms are:
4.1) DIGRAPH,

4.2) TRIGRAPH

4.3) N-GRAPHS

4.1) DIGRAPH:
A digraph is the combination of two consecutive keys, formed when a user types a certain phrase or his/her
login and password. This algorithm takes into account the time period between two consecutive keys to
measure the distance and then compares this distance with the profile or signature stored in the system. If the
distance is small, then this user is classified and authenticated as the user whose stored profile or signature
has a mean value very close to the calculated value. The distance is actually the shift in position of the
digraphs. However, a high level of accuracy cannot be guaranteed due to the variable nature of typing
dynamics unless some thresholds are set. If thresholds are set, efficiency will increase, resulting in a lower
FAR and FRR. One of the drawbacks of considering digraphs (since a digraph is the combination of two
keys) is that sometimes a user may release a key only after pressing the next key. A naive implementation
would only measure the time from one Key Down event to the next Key Up event. This created problems
with making digraphs for the phrase, which at this stage relied on the user pressing and releasing keys in
exactly the same order at all times. While this should be the case, users, as they become more efficient at
typing, will often depress more than one key at a time. By blindly measuring the digraphs these changes
could not be accommodated.
Suppose there is a word SCOTLAND.

The digraphs of this word would be SC; CO; OT; TL; LA; AN; ND. Consider this to be an array A which is
a signature stored in the system. The same user later provides a new sample of this word. Then what the
system will do is break this new sample into digraphs and calculate the distance between these two samples,
i.e. the signature and the new sample. If the distance is very small, then this user is given access. A time
duration is recorded for each digraph.
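
A short sketch of how the digraphs and their timings could be extracted (the press times below are
made-up values; only the press-to-press latency is recorded here for simplicity):

def digraphs(word, press_times):
    # press_times[i] is the key-down time (in ms) of word[i];
    # each digraph is stored with its press-to-press latency.
    return [(word[i] + word[i + 1], press_times[i + 1] - press_times[i])
            for i in range(len(word) - 1)]

print(digraphs("SCOTLAND", [0, 180, 340, 530, 700, 880, 1050, 1230]))
# [('SC', 180), ('CO', 160), ('OT', 190), ('TL', 170), ('LA', 180), ('AN', 170), ('ND', 180)]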


RELATED WORK IN THIS FIELD:

One of the earlier works in this area was undertaken by Umphress and Williams [9] in 1985. It used the
delays between keystrokes, also known as digraph latencies, as the captured keystroke biometric. They had two sets
of inputs required in their process. The first reference profile consisted of 1400 characters of prose while
the test profile had 300 characters. This study proved that keystroke biometrics is a valid method for
identity verification. However, the study was limited by the fact that it required a large amount of input text and
despite the amount of text it was only able to achieve a False Acceptance Rate (FAR) of 6%. A FAR is the
probability of an impostor posing as a valid user being able to successfully gain access to a secured system.
Ideally for a verification method to be useful its FAR should be <1%. Williams and Leggett further
extended this study in 1988. By increasing the number of users in the study, reducing experimental
variables and discarding inappropriate di-graphs according to latency and frequency, it was possible to
reduce the FAR to 5%. While the FAR was still not acceptable to use keystroke dynamics as the sole means
for identity verification, it showed that there was the potential to use it as a static identity verifier at login.

Additional research was conducted in 1990 by Leggett et al. While the static procedure of entering reference
and test profiles achieved the same result of a 5% FAR, the experiment took the concept of keystroke
dynamics into verification in a dynamic environment. It was the first time that this had been attempted.
Basically it means that verification of the user occurs while they type the test profile, which allows for
continuous verification of identity in real time. This could be applied to verify identity throughout a login
session and avoids the problem of time of check to time of use (TOCTTOU). This is described as the
problem that occurs when a user's identity is checked only once, at login, even though the system uses the
same identity to make access control decisions later in the session, when someone else may be using the
terminal. Using sequential statistical theory they were able to achieve a FAR of 12.8% and a
False Reject Rate (FRR) of 11.1%. These experiments proved that dynamically identifying a user was
possible and further refinement of the statistical analysis would achieve more accurate results.
4.2) TRIGRAPH:
A trigraph is the combination of three consecutive keys. It is similar to the digraph but has been found, in
experiments conducted by several researchers, to give better results. Just like the digraph, it is widely used.
The main difference is the time span covered: the difference in time between pressing the first key and
releasing the third key is known as the duration of the trigraph. Just as with digraphs, a time duration is
recorded for every trigraph. Then the distance between the stored sample of trigraphs (the signature) and a
new sample provided by the user is measured. If the user is the same, the shift in positions of trigraphs
between the two arrays or samples gives the distance. To keep the values less than 1, the degree of disorder
is normalized. Let us take a look at how to make trigraphs of a given word. For instance, suppose the word
SCOTLAND is given; making trigraphs of it is very simple.

SCO; COT; OTL; TLA; LAN; AND. As we know, the time duration is also recorded for each trigraph. Let
us assume some timings for all these trigraphs: take SCO 230; COT 225; OTL 250; TLA 215; LAN 240;
AND 235; as a signature. Let us arrange the above signature in ascending order of time, giving TLA 215;
COT 225; SCO 230; AND 235; LAN 240; OTL 250. Now suppose that we are provided with a sample to
test against the signature, and the trigraphs for the sample in ascending order of duration are SCO 220;
COT 235; OTL 240; TLA 270; LAN 276; AND 290. The degree of disorder will be (3+0+2+2+0+3) = 10
(TLA has moved 3 positions, COT 0, SCO 2, AND 2, LAN 0 and OTL 3). Maximum disorder is (6*6)/2
= 18.
Normalized degree of disorder is (3+0+2+2+0+3)/18 = 0.556.
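
The calculation above can be reproduced with a short sketch (our own illustration; the timings are the
ones assumed in the example):

def normalized_trigraph_distance(reference, sample):
    # reference, sample: lists of (trigraph, duration_ms). Both are sorted by
    # duration, and the degree of disorder of the sample w.r.t. the reference
    # is divided by the maximum disorder of an array of that size.
    ref_order = [t for t, _ in sorted(reference, key=lambda pair: pair[1])]
    sample_order = [t for t, _ in sorted(sample, key=lambda pair: pair[1])]
    position = {t: i for i, t in enumerate(ref_order)}
    disorder = sum(abs(i - position[t]) for i, t in enumerate(sample_order))
    n = len(ref_order)
    maximum = n * n // 2 if n % 2 == 0 else (n * n - 1) // 2
    return disorder / maximum

signature = [("SCO", 230), ("COT", 225), ("OTL", 250), ("TLA", 215), ("LAN", 240), ("AND", 235)]
sample = [("SCO", 220), ("COT", 235), ("OTL", 240), ("TLA", 270), ("LAN", 276), ("AND", 290)]
print(normalized_trigraph_distance(signature, sample))   # 10/18 = 0.555...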

The distribution of all the possible arrays of N different elements with respect to their (normalized) disorder
is not uniform. We know that the ratio of the number of arrays with disorder greater than 0.5 to the number
of arrays with disorder less than 0.5 increases as the number of elements increases. Arrays usually accumulate
in the region between the values 0.5-0.75 of the degree of disorder, where the cumulative distribution
function adds them up. This is how trigraphs can be used for authentication and classification purposes.


RELATED WORK IN THIS FIELD:

Important work in this field has been done by FRANCESCO BERGADANO, DANIELE GUNETTI, and CLAUDIA
PICARDI, and has yielded very good results.
In their approach they have made use of trigraphs in conjunction with latency and duration of keystrokes.
They took into consideration the following values
S1: Ica 235; mer 255; ame 277; eri 297; ric 326
From now on, when speaking of a typing sample we mean an array of trigraphs
sorted with respect to their duration. Now, suppose a second sample S2 of the
same text is provided:
S2: mer 215; Ica 258; ame 298; ric 306; eri 315
We may consider S1 as the referring sorted array, and we may compute the distance of S2 with respect to
S1 (in short: d (S1, S2)), as the degree of disorder of S2 with respect to S1. In other words, the distance of
S2 from S1 is the sum of the distances of each trigraph of S2 with respect to the position of the same
trigraph in S1. It is clear that d(S1,S2) = d(S2,S1). The absolute (degree of) disorder of S2 with respect to
S1 is (1+1+0+1+1) = 4, and the maximum disorder of an array of 5 elements is 12. Hence, the normalized
distance of S2 from S1 is 4/12, approximately 0.33.
For longer texts, and for a timing resolution larger than one millisecond (which is the case for our
experiments, as described in the next section), it may happen that different trigraphs have the same
duration. In this case, the trigraphs are sorted in alphabetical order.
In their research, the experimental setup had 44 volunteers who were asked to submit a sample text of 683
characters, typed five times, giving a total of 220 samples. The text given to them was from an Italian
novel. All the volunteers in this research were native Italians, familiar with the English language as
well. They were also professionals with typing skills. The samples were gathered as
follows.
The samples were collected on the basis of the availability and willingness of people over a period of one
month. No one provided two samples in the same day and, normally, a few days passed between two
samples given by the same user. No volunteer was trained to type the sample text before the gathering of
his/her five samples. All the samples were provided on the same keyboard of the same notebook, in the
same office and with the same artificial light conditions. Each volunteer was left free to adjust the position
of the notebook on the desk, position of the screen and chair height as preferred. The keyboard used in the
experiments had the characters placed in the usual positions, and with a key size similar to those of normal
desktop keyboards. However, that specific keyboard had never been used before by any of the volunteers.
We are well aware that the gathering of all samples on the same keyboard is a rather artificial situation,
since, in real applications, users (and impostors) will very likely use different computers, especially in case
of remote connections to servers from different workstations. It is, however, unclear if and how this choice
may have affected the outcomes of our experiments.
When an individual was providing a sample, the sample text to be reproduced was displayed on the top of
the notebook screen, with the typed text appearing just below the referring text. Echoing was handled
remotely by the machine collecting the timing data, but perceived as instantaneous by the volunteers. They
also allowed typing errors: volunteers were free to correct or not correct mistakes while typing, free to
rest if they got tired, and under no kind of pressure that might have had an effect on their typing. In many
approaches the trigraphs are filtered before any computation, in order to remove errors and keep only the
shared trigraphs; computation then takes place only if the number of shared graphs is sufficiently large.
In this approach, however, no sample was thrown away even if it contained errors. Of course, this had
consequences on the number of trigraphs actually involved in the
comparison of two samples: though the text used in the experiments is made of about 350 different
trigraphs, the number of trigraphs shared by two samples was 272 on the average. In the whole set of
samples used in their experiments, there is virtually no one pair of samples containing the same set of
trigraphs. It must be noted that most of the experiments found in the literature reject any sample containing
typing errors (e.g., Bleha et al. [1990], Brown and Rogers [1993], and Obaidat and Sadoun [1997b]).
However, in Leggett and Williams [1988], samples are kept even if they contain typing errors, while no
information is available for the experiment described in Joyce and Gupta [1990].
In their experiments they achieved an FAR of 0%, but an IPR of 2.3077%; that is, roughly one attack out
of 44 succeeds. These values were already excellent, but they found they could be improved further by a
small observation: applying thresholds. In the rule they used, a threshold k controls how much closer
md(A,X) must be to m(A) than to any md(B,X), where B ranges over the other users in the system, and
they obtained the values discussed below.
As just observed, if k = 0.5, we simply ask md(A,X) to be closer to m(A) than to any other md(B,X). A
value for k such as k = 0.66 would be a slightly weaker requirement, and k = 0.33 would ask for stronger
evidence of X being a sample of A (in fact, with k = 0.33 we ask the distance md(A,X) to be twice as close
to m(A) as to md(B,X)).
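Their exact formula is not quoted in this text, so the sketch below is only one interpretation of the description of k above; md() and m() are placeholder functions standing, in our reading, for the mean distance of a sample from a user's profile and the mean distance among a user's own samples:

def accept(A, X, users, md, m, k=0.5):
    # classify sample X as user A only if md(A, X) is close enough to m(A)
    # compared with every other user B; k = 0.5 asks md(A, X) to be closer
    # to m(A) than to md(B, X), and k = 0.33 asks it to be twice as close
    d_A = md(A, X)
    for B in users:
        if B == A:
            continue
        if d_A >= m(A) + k * (md(B, X) - m(A)):
            return False
    return True

# toy example with made-up distances
md = lambda user, sample: {"A": 0.35, "B": 0.55}[user]
m  = lambda user: 0.30
print(accept("A", "X", ["A", "B"], md, m, k=0.5))   # True: 0.35 < 0.30 + 0.5*0.25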
Hence, from their research it can be said that the system's efficiency increases when thresholds are used: a
good trade-off between FAR and FRR can be achieved, and using filters can enhance the performance
further if the filter coefficients are selected properly.
They also performed their experiment in a homogeneous environment, avoiding the earlier, less realistic
practice in which each legitimate user types, many times, a string that is already very familiar to him/her,
whereas the impostors type the same string (presumably quite unfamiliar to them) a much smaller
number of times. Of course, that setting makes it easier to spot different typists of the same string.

The table of results referred to above shows the FAR and FRR values for different values of the threshold
and different numbers of shared trigraphs. As the threshold value decreases, the impostor pass rate
decreases considerably but the false alarm rate increases, owing to the trade-off between them. It also
shows that when the number of shared trigraphs is high, the performance of the system increases. From
the table one observation can be drawn: the impostor pass rate is directly related to the number of shared
trigraphs, and no trade-off exists between them.

4.3) N-GRAPHS:
As many studies have shown, the larger the mean difference, the better the distance measures
discriminate among users, and the larger the number of n-graphs used, the better the authentication
outcomes, whereas the presence of n-graphs occurring more than once is apparently not useful. One thing
that worsens the accuracy of the distance measure, however, is long n-graphs. When we move towards
longer n-graphs, two factors degrade the accuracy of the measurement used to authenticate the user:
a) the number of shared graphs is smaller. Typing errors cause a loss of synchronism in the patterns and
hence produce fewer shared n-graphs, so the user might not be able to access his/her own computer or any
other keyboard device that has implemented password hardening via keystrokes; the longer the n-graphs,
the fewer the shared n-graphs. b) The duration of longer n-graphs is less stable, since they are made of
more keystrokes. As a consequence, comparing the duration of the same n-graph occurring in two
different samples gives less accurate outcomes.
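For illustration, n-graphs and their durations can be extracted from a keystroke log as in the sketch below; this is only an illustration, and the (key, press time, release time) event format is an assumption about how the raw timing data might be recorded:

def ngraphs(events, n=3):
    # events: list of (key, press_ms, release_ms); the duration of an n-graph
    # is the time from pressing its first key to releasing its last key
    out = {}
    for i in range(len(events) - n + 1):
        keys = "".join(e[0] for e in events[i:i + n])
        duration = events[i + n - 1][2] - events[i][1]
        out.setdefault(keys, duration)   # keep the first occurrence only
    return out

sample = [("S", 0, 80), ("C", 110, 190), ("O", 230, 310), ("T", 340, 420)]
print(ngraphs(sample, n=3))   # {'SCO': 310, 'COT': 310}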
RELATED WORK IN THIS FIELD:
Results obtained by FRANCESCO BERGADANO, DANIELE GUNETTI, and CLAUDIA PICARDI
suggest that when the n-graph length approaches the length of the sample text being tested against the
signature sample, there is only one n-graph available, and discrimination between users is reduced to
computing the time required to type the whole sample. This can confuse the system for users with similar
typing speeds, and if a user stops and corrects an error, the confusion for the system only increases.
Face Recognition:


Throughout the nation and the world, the debate on the privacy implications of face recognition
and other surveillance technologies is heating up. In January 2001, the city of Tampa, Florida used
the technology to scan the faces of people in crowds at the Super Bowl, comparing them with
images in a database of digital mug shots. Privacy International subsequently gave the 2001 Big
Brother Award for "Worst Public Official" to the City of Tampa for spying on Super Bowl
attendees. Tampa then installed cameras equipped with face recognition technology in its Ybor
City nightlife district, where they have encountered opposition from people wearing masks and
making obscene gestures at the cameras. In late August 2001, a member of the Jacksonville,
Florida City Council proposed legislation to keep the technology out of Jacksonville.

The Virginia Department of Criminal Justice Services gave a $150,000 grant to the city of
Virginia Beach in July 2001, to help the city obtain face recognition cameras to look for criminal
suspects and missing children. Although officials had initially expressed mixed feelings about the
technology, the city council voted on November 13 to install the software at the oceanfront. To
fully fund the system, the city must pay an additional $50,000.

In the wake of the September 2001 terrorist attacks on the U.S., privacy advocates, citizen groups,
political leaders, and the manufacturers of the technology itself are debating whether these
technologies should be more widely used, and if so, how they should be regulated to protect the
privacy of the public. Some airports are considering installing face recognition cameras as a
security measure. However, T.F. Green International Airport in Providence, Rhode Island, one of
the first airports to consider it, decided in January 2002 that they would not install it after all,
citing the possibility of false matches and other technological shortcomings of facial recognition
systems.




EXPLANATION:






One of the fastest growing areas of advanced security involves biometric face recognition technologies.
The art of picking a face out of a crowd is a time-honored skill. Applying technology to such a pursuit has
to date proven both fruitful and frustrating. Biometric face recognition technology offers great promise in
its ability to identify a single face, from multiple lookout points, from a sea of hundreds of thousands of
other faces. In addition to serving as an information access control tool, biometric face recognition
technologies are being used to safeguard international borders and financial ATM transactions, prevent
benefits and identity fraud, and help combat terrorism.


FACIAL RECOGNITION
Basics: Facial recognition analyzes the characteristics of a person's face images
input through a digital video camera. It measures the overall facial
structure, including distances between eyes, nose, mouth, and jaw edges.
These measurements are retained in a database and used as a comparison
when a user stands before the camera. This biometric has been widely, and
perhaps wildly, touted as a fantastic system for recognizing potential
threats (whether terrorist, scam artist, or known criminal) but so far has
been unproven in high-level usage. It is currently used in verification only
systems with a good deal of success.


How it Works: User faces the camera, standing about two feet from it. The system will
locate the user's face and perform matches against the claimed identity or
the facial database. It is possible that the user may need to move and
reattempt the verification based on his facial position. The system usually
comes to a decision in less than 5 seconds.

To prevent a fake face or mold from faking out the system, many systems
now require the user to smile, blink, or otherwise move in a way that is
human before verifying.


History: The development stage for facial recognition began in the late 1980s and
commercially available systems were made available in the 1990s. While
many people first heard about facial recognition after September 11th,
2001, football fans were introduced to it at the Super Bowl several months
earlier.


Use: Currently gaining support as a potential tool for averting terrorist crimes,
facial recognition is already in use in many law enforcement areas.
Software has also been developed for computer networks and automated
bank tellers that use facial recognition for user verification purposes.


Evaluation: One of the strongest positive aspects of facial recognition is that it is non-
intrusive. Verification or identification can be accomplished from two feet
away or more, and without requiring the user to wait for long periods of
time or do anything more than look at the camera.

That said, this non-intrusiveness is one of its drawbacks when it comes to
public opinion. Many people have expressed concern over the potential use
of facial recognition cameras placed inconspicuously around cities that
would attempt to identify passers-by without their knowledge or consent.
However, the inherent difficulties in making a positive identification
(lighting requirements, facial position, etc.) are larger than most people
realize, and seem to make this biometric a better choice for verification
systems, rather than identification.









The Face
Your face is an important part of who you are and how people identify you. Imagine how hard it
would be to recognize an individual if all faces looked the same. Except in the case of identical
twins, the face is arguably a person's most unique physical characteristic. While humans have
had the innate ability to recognize and distinguish different faces for millions of years, computers
are just now catching up.






Visionics, a company based in New Jersey, is one of many developers of facial recognition technology.
The twist to its particular software, FaceIt, is that it can pick someone's face out of a crowd, extract that
face from the rest of the scene and compare it to a database full of stored images. In order for this software
to work, it has to know what a basic face looks like. Facial recognition software is based on the ability to
first recognize faces, which is a technological feat in itself, and then measure the various features of each
face.
If you look in the mirror, you can see that your face has certain distinguishable landmarks. These are the
peaks and valleys that make up the different facial features. Visionics defines these landmarks as nodal
points. There are about 80 nodal points on a human face. Here are a few of the nodal points that are
measured by the software:
Distance between eyes
Width of nose
Depth of eye sockets
Cheekbones
Jaw line
Chin
These nodal points are measured to create a numerical code, a string of numbers, that represents the face
in a database. This code is called a faceprint. Only 14 to 22 nodal points are needed for the FaceIt
software to complete the recognition process. In the next section, we'll look at how the system goes about

detecting, capturing and storing faces.






The Software
Facial recognition software falls into a larger group of technologies known as biometrics.
Biometrics uses biological information to verify identity. The basic idea behind biometrics is that
our bodies contain unique properties that can be used to distinguish us from others. Besides
facial recognition, biometric authentication methods also include:
Fingerprint scan
Retina scan
Voice identification
Facial recognition methods may vary, but they generally involve a series of steps that serve to capture,
analyze and compare your face to a database of stored images. Here is the basic process that is used by the
FaceIt system to capture and compare images:

1. Detection - When the system is attached to a video surveillance system, the recognition software
searches the field of view of a video camera for faces. If there is a face in the view, it is detected
within a fraction of a second. A multi-scale algorithm is used to search for faces in low
resolution. (An algorithm is a program that provides a set of instructions to accomplish a specific
task). The system switches to a high-resolution search only after a head-like shape is detected.
2. Alignment - Once a face is detected, the system determines the head's position, size and pose. A
face needs to be turned at least 35 degrees toward the camera for the system to register it.
3. Normalization -The image of the head is scaled and rotated so that it can be registered and
mapped into an appropriate size and pose. Normalization is performed regardless of the head's
location and distance from the camera. Light does not impact the normalization process.
4. Representation - The system translates the facial data into a unique code. This coding process
allows for easier comparison of the newly acquired facial data to stored facial data.
5. Matching - The newly acquired facial data is compared to the stored data and (ideally) linked to at
least one stored facial representation.
The heart of the FaceIt facial recognition system is the Local Feature Analysis (LFA) algorithm. This is the
mathematical technique the system uses to encode faces. The system maps the face and creates a face
print, a unique numerical code for that face. Once the system has stored a face print, it can compare it to
the thousands or millions of face prints stored in a database. Each face print is stored as an 84-byte file.
The system can match multiple face prints at a rate of 60 million per minute from memory or 15 million per
minute from hard disk. As comparisons are made, the system assigns a value to the comparison using a
scale of one to 10. If a score is above a predetermined threshold, a match is declared. The operator then
views the two photos that have been declared a match to be certain that the computer is accurate.
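The matching step can be pictured with a small sketch. The Local Feature Analysis encoding itself is proprietary, so the score below is a made-up byte-wise similarity mapped onto the 1-to-10 scale mentioned above; only the threshold logic reflects the description in the text, and all names here are hypothetical:

def score(print_a, print_b):
    # map byte-wise similarity of two equal-length faceprints onto a 1-10 scale
    diff = sum(abs(a - b) for a, b in zip(print_a, print_b))
    similarity = 1.0 - diff / (255 * len(print_a))   # 1.0 means identical
    return 1.0 + 9.0 * similarity

def find_matches(probe, database, threshold=8.5):
    # return every stored faceprint whose score exceeds the threshold,
    # so an operator can review the candidate matches
    matches = []
    for name, faceprint in database.items():
        s = score(probe, faceprint)
        if s >= threshold:
            matches.append((name, s))
    return matches

database = {"subject_001": bytes(84), "subject_002": bytes([200] * 84)}
print(find_matches(bytes(84), database))   # [('subject_001', 10.0)]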
Facial recognition, like other forms of biometrics, is considered a technology that will have many uses in
the near future. In the next section, we will look at how it is being used right now.








BIONIC EYE
-A Novel Technique














Presented By,

PRABAL SEN B AVINASH REDDY
E.C.E II/IV B.E E.C.E II/IV B.E
sbrprabal7@gmail.com
Phone No:9949259605







DEPARTMENT OF E.C.E.
G.M.R INSTITUTE OF TECHNOLOGY
RAJAM
SRIKAKULAM(DIST.),A.P



ABSTRACT:


In a healthy eye, the rods and
cones on the retina convert light
into tiny electrochemical impulses
that are sent through the optic
nerve and into the brain, where
they're decoded into images. If the
retina no longer functions
correctly, due to conditions such
as Retinitis pigmentosa (RP) or
Age-related Macular
degeneration (AMD), the optic
nerve can be given information
from an artificial source bypassing
the photoreceptor mechanism in
the path. Capturing images and
converting them into electrical
signals is the easy part. The much
trickier part is wiring the input into
a person's nervous system. Retinal
implants currently being tested
pick up radio signals from a
camera mounted on a pair of
glasses and then directly stimulate
the nerve cells behind the
malfunctioning rods and cones.
Subretinal implantation proves
to be a better solution than
Epiretinal implantation in
leading the blind into light.

What is Bionics?
The field of bionics concerns the
systematic technical
implementation of solutions nature
has found for particular problems.
Today, there are many new,
fascinating approaches for
developing bionic innovations due
to recent dynamic advances in
biological research and technology
especially at the molecular level.
While biotechnology addresses the
scientific-technical realm lying
between biology and chemistry,
bionics closes the gaps separating
the fields of biology, physics and
engineering. Bionics pursues an
interdisciplinary approach to
solving application oriented
problems. The results of bionic
research and development are,
however, never reducible to a one
to one copy of the models in nature
which provided the original
inspiration.






The Bionic eye:
Bionics has already opened the
door for replacing lenses and
corneas and is focusing on
understanding how to engineer a
new eye for those who have a
retinal disease, which would
enable 10 million people to regain
a sense of sight. The idea for
sending an electrical current to the
nerve ganglia behind the retina
started in 1988 when a blind
person demonstrated that he could
see points of light by the ARCC
(artificial retina component chip).
This tested method proved that the
nerves behind the retina could still
possibly function even though the
retina degenerated. Scientists
believe that if they could replace
the retina with a device that could
translate images to electrical
impulses then vision could be
restored. But the salty conditions
of the eye could encourage
corrosion in the delicate
electronics required for this
technology. Researchers, however,
have designed a chip that could
possibly work because it would
use an external laser to power the
chip. This should eliminate the
problem of how to keep a
battery working in the wet, salty
environment of the eye. The power
source would have to be able to
pass through the cornea without
damaging the corneal tissue,
though.

Fig:
Smart EyeBand

A light tap on the side of your
head could one day restore your
eyesight, believe scientists. The
tap would tighten a band of
artificial muscle wrapped round
your eyeballs, changing their shape
and bringing blurry images into
focus. While the idea has a high
'yuk' factor, the people behind it
are confident it will be a safe and
effective way to improve vision.
The device is a "smart eye band"
made of artificial muscle. It
will be stitched to the
sclera, the tough white outer part
of the eyeball, and activated by an
electromagnet in a hearing-aid-
sized unit fitted behind one ear.
Most of the eye's focusing is done
by the cornea, the hard transparent
surface that covers both the pupil
and the iris; the lens is responsible
only for fine-tuning. Light travels
through the cornea and lens to
focus on the retina at the back of
the eyeball. The closer an object is,
the farther back in the eye it will
be focused.
The lens compensates by adjusting
its strength to bring the focus back
onto the retina. If the cornea or
lens do not focus strongly enough
or the eyeball is too short, the light
will focus behind the retina,
blurring images of close-up
objects. This is long-sightedness.
Conversely, if the eyeball is too
long, the light will focus in front of
the retina, yielding the blurry
images of far-off objects
characteristic of short-sightedness.
Tightening the
smart eye band causes the eyeball
to elongate, just as squeezing the
middle of a peeled hard-boiled egg
causes the egg to lengthen. In
long-sighted people this pushes the
retina backwards, bringing close-
up objects back into focus.
Expanding the eye band causes the
eyeball to shorten. In short-sighted
people this will bring the retina
forward to intersect with the
focused light, making far-off
images sharp and clear again.
Stitching a band of artificial
muscle to your eyeball sounds
drastic, but the necessary surgical
techniques are already commonly
used for treating detached retinas.
This smart eye band is far more
flexible than laser surgery, in
which a laser flattens the cornea by
eroding part of it.
Retinal prosthesis:

Researchers published
a design of an optoelectronic
retinal prosthesis system that can
stimulate the retina with resolution
corresponding to a visual acuity of
20/80--sharp enough to orient
yourself toward objects, recognize
faces, read large fonts, watch TV
and, perhaps most important, lead
an independent life. The
researchers hope their device may
someday bring artificial vision to
those blind due to retinal
degeneration.
Degenerative retinal
diseases result in death of
photoreceptors--rod-shaped cells at
the retina's periphery responsible
for night vision and cone-shaped
cells at its center responsible for
color vision. Worldwide, 1.5
million people suffer from retinitis
pigmentosa (RP), the leading cause
of inherited blindness. In the
Western world, age-related
macular degeneration (AMD) is
the major cause of vision loss in
people over age 65, and the issue
is becoming more critical as the
population ages. Each year,
700,000 people are diagnosed with
AMD, with 10 percent becoming
legally blind, defined by 20/400
vision. Many AMD patients retain
some degree of peripheral vision.
If one could bypass the
photoreceptors and directly
stimulate the inner retina with
visual signals, one might be able to
restore some degree of sight.
To that end, the
researchers plan to directly
stimulate the layer underneath the
dead photoreceptors using a
system that consists of a tiny video
camera mounted on transparent
"virtual reality" style goggles.
There's also a wallet-sized
computer processor, a solar-
powered battery implanted in the
iris and a light-sensing chip
implanted in the retina.


Fig: Anti-blindness Goggles
The chip is the size of
half a rice grain--3 millimeters--
and allows users to perceive 10
degrees of visual field at a time.
One design includes an orchard of
pillars: One side of each pillar is a
light-sensing pixel and the other
side is a cell-stimulating electrode.
Pillar density dictates image
resolution, or visual acuity. The
strip of orchard across the top third
of the chip is densely planted. The
strip in the middle is moderately
dense, and the strip at the bottom
is sparser still. Dense electrodes
lead to better image resolution but
may inhibit the desirable migration
of retinal cells into voids near
electrodes, so the different
electrode densities of a current
chip design allow the researchers
to explore parameters and come up
with a chip that performs
optimally. Another design--pore
electrodes--involves an array of
cavities
with
stimulating electrodes located
inside each of them.



Fig: Orchard of Pillars
WORKING:
How does the
system work when viewing,
say, a flower? First, light from
the flower enters the video
camera. The video camera then
sends the image of the flower
to the wallet-sized computer for
complex processing. The
processor then wirelessly
sends its image of the flower to
an infrared LED-LCD screen
mounted on the goggles. The
transparent goggles reflect an
infrared image into the eye and
onto the retinal chip. Just as a
person with normal vision
cannot see the infrared signal
coming out of a TV remote
control, this infrared flower
image is also invisible to normal
photoreceptors. But for those
sporting retinal implants, the
infrared flower electrically
stimulates the implant's array of
photodiodes. The result? They
may not have to settle for
merely smelling the roses.


Fig:
Working of a Retinal Implant
Complex processing: The eyes
have it

The eye is a complex machine. It
has more than 120 million
photoreceptors, i.e., the equivalent
of more than 100 megapixels. And if electronic
cameras do a good job of image
processing, the eye does a
spectacular job, compressing
information before sending it to
the brain through the 1 million
axons that make up the optic
nerve. We have a built-in
processor in the eye. Before it goes
into the brain, the image is
significantly processed.
The bottom layer of
photoreceptors is where rhodopsin-
-a protein pigment that converts
light into an electrical signal--
exists. But as far as signal
processing is concerned, the signal
enters the inner nuclear layer,
populated with bipolar, amacrine
and horizontal cells. These three
cellular workhorses process the
signals and transfer them to the
ganglion cell layer, or "output
cascade" of nerves that deliver
signal pulses to the brain.
It's best to place an
implant at the earliest accessible
level of image processing. The
earliest accessible level in
degenerated retina is in the nuclear
layer, and the more you go along
the chain of image processing, the
more complex the signals become.
Researchers try to
utilize most of the processing
power remaining in the retina after
retinal degeneration by placing
their implant on the side of the
retina facing the interior of the eye
("subretinal" placement), as
opposed to the idea of placing
retinal implants on the side of the
retina facing the outside of the
eyeball ("epiretinal" placement).
A crucial aspect of
visual perception is eye motion.
Because subretinal placement
preserves the eye's natural
image-processing strengths, the
system tracks the rapid,
intermittent eye movements
required for natural image perception.
In the subretinal
system, image amplification and
other processing occur in the
hardware, outside the eye. If
amplification occurred inside the
implant's pixels, as it does in
epiretinal, there'd be no way short
of surgery to make adjustments.


Bionics and Physics:

The new design answers
major questions about what's
feasible for bionic devices.
Biology imposes limitations, such
as the needs for a system that will
not heat cells by more than 1
degree Celsius and for
electrochemical interfaces that
aren't corrosive.
Current retinal implants
provide very low resolution--just a
few hundred pixels. But several
thousand pixels would be required
for the restoration of functional
sight. The Stanford design
employs a pixel density of up to
2,500 pixels per millimeter,
corresponding to a visual acuity of
20/80, which could provide
functional vision for reading books
and using the computer.
A major limiting factor
in achieving high resolution
concerns the proximity of
electrodes to target cells. A pixel
density of 2,500 pixels per square
millimeter corresponds to a pixel
size of only 20 micrometers. But
for effective stimulation, the target
cell should not be more than 10
micrometers from the electrode. It
is practically impossible to place
thousands of electrodes so close to
cells. With subretinal implants but
not epiretinal ones, researchers
discovered a phenomenon--retinal
migration--that they now rely on to
encourage retinal cells to move
near electrodes--within 7 to 10
microns. Within three days, cells
migrate to fill the spaces between
pillars and pores.
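The quoted pixel density and pixel size are consistent with each other, as a quick calculation (ours, not taken from the source) shows:

\[
\frac{1\,\mathrm{mm}^2}{2500\ \text{pixels}} = 400\,\mu\mathrm{m}^2\ \text{per pixel},
\qquad
\sqrt{400\,\mu\mathrm{m}^2} = 20\,\mu\mathrm{m}\ \text{pixel pitch}.
\]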

Conclusion:
In a more specific meaning,
bionics is a creativity technique
that tries to use biological
prototypes to get ideas for
engineering solutions. This
approach is motivated by the fact
that biological organisms and their
organs have been well optimized
by evolution.

A less
common and maybe more
recent meaning of the term
"bionics" refers to merging
organism and machine. This
approach results in hybrid
systems combining biological
and engineering parts, which
can also be referred to as
cybernetic organism (the
cyborg).








REFERENCES:
1.Integrated Circuit
Research Vol-3 published by
University Of Florida
2. Advancements in Bionics
www.abc.net.au
3. www.news.stanford.edu
4. Body Atlas Illustrated by
Giuliano Fornari and Steve Parker
NARAYANA ENGINEERING COLLEGE
GUDUR-524101

A TECHNICAL PAPER PRESENTATION ON
CRYONICS
by
CH.MUKESH & P.CHANDRAKANTH

MUKESHCHEEDELLA@YAHOO.COM CHANDU_POKURU@YAHOO.COM
MUKESH_CHEEDELLA@YAHOO.COM

PH:9989607170 PH:9949887968


INDEX

INTRODUCTION

PROCEDURE

4 STEPS

DAMAGE FROM ICE FORMATION AND ISCHEMIA

REVIVAL

CONCLUSION















INTRODUCTION

India had lost many of its eminent personalities at considerably young age to
diseases that did not have a cure in those days. These days we have medicines for many
diseases. If those great personalities could some how be preserved till now, we could
have saved them by treating them with the latest technology. This paper deals with a
technology, named cryonics, that can preserve people for 33,000 years after their death.
We also discuss a technology that may one day have the capability to treat these people:
nanotechnology. Cryonics is thus a prerequisite for the effective utilization of
nanotechnology. Patients can be preserved until nanotechnology is properly
developed and then treated.
Cryonics is a technique designed to save lives and greatly extend lifespan. It

involves cooling legally-dead people to liquid nitrogen temperature where physical decay

essentially stops, in the hope that future technologically advanced scientific procedures

will someday be able to revive them and restore them to youth and good health. The

process is not currently reversible, and by law can only be performed on humans after

legal death in anticipation that the early stages of clinical death may be reversible in the

future. Some scientists believe that future medicine will enable molecular-level repair and

regeneration of damaged tissues and organs decades or centuries in the future. Disease

and aging are also assumed to be reversible.

The central premise of cryonics is that memory, personality, and identities are
stored in the structure and chemistry of the brain. While this view is widely accepted in
medicine, and brain activity is known to stop and later resume under certain conditions, it
is not generally accepted that current methods preserve the brain well enough to permit
revival in the future. Cryonics advocates point to studies showing that high
concentrations of cryoprotectant circulated through the brain before cooling can mostly
prevent freezing injury, preserving the fine cell structures of the brain in which memory
and identity presumably reside.
PROCEDURE:
Cryonicists try to minimize ischemic and reperfusion injury by beginning Cardio-

pulmonary resuscitation (much like CPR) and cooling as soon as possible after

pronouncement of death. Anti-clotting agents like heparin and antioxidants may be

administered.

Figure1: alcor operation theater

Below we outline the major procedures used to place a patient into cryonic suspension.
There are four main steps:
1. Stabilize, cool, and transport the patient.
2. Perfuse the patient with cryoprotective solutions.
3. Lower the patient's temperature to -79C.
4. Lower the patient's temperature to -196C.

4 MAIN STEPS
Step 1 :
Follow these guidelines when the patient is pronounced dead:

Figure 2: cryonists at work
Maintain blood flow and respiration of the patient (with caution).
Cool the patient by surrounding with ice bags, especially the head
Inject 500 IU/kg of heparin.
Use sterile technique if possible. This procedure should be performed in conjunction with
a physician, nurse, or paramedic.
1) At the time of death maintain blood flow and oxygenation to limit ischemic injury.
Administer the oxygen through a facemask, or preferably an endotracheal tube. Avoid
mouth-to-mouth resuscitation, because of the danger of infection. Do cardiopulmonary
resuscitation manually until a mechanical heart-lung resuscitator (with 100% O2) can be
employed.
2) Establish venous cannulation in the forearm, employ a 3-way stopcock and tape
securely, before the time of death if possible, for the administration of pharmacological
agents.
3) Place the patient on a cooling blanket, if available, and circulate coolant. Surround the
patient with Ziploc ice bags, paying particular attention to cooling the head. Lower the
body temperature toward 0C.
4) Insert thermocouple probes in the esophagus and in the rectum, and monitor
temperature throughout the protocol.
5) Tape the eyelids closed to prevent dehydration.
6) Inject 300 mg Tagamet (cimetidine HCl), or administer 20 ml Maalox through a
gastric tube, to prevent HCl production by the gastro-intestinal tract.
7) When suitable, use a Foley catheter to drain the bladder.
Step 2 :
This perfusion step should be performed with the guidance of a surgeon,
perfusionist, and medical technician. Expose and cannulate the carotid artery and jugular
vein. Secure the cannulas and attach them to the tubing of the bypass circuit.
Figure 3: cryogenic equipment
l artery. These catheters should be coupled to pressure sensors. Monitor pH, O2, CO2, and
cryo-protectant concentration by using a refractometer.
Begin total body washout and replace the blood with 4 to 6 liters of cryoprotective
solution (one blood volume or 5 L / 70 kg). Discard the venous effluent into containers
holding Clorox bleach. After perfusion is complete, decannulate and suture the surgical
wounds.
Step 3 :
We have to place thermocouples on the surface of the skin, in the esophagus and
rectum. Monitor the patient's temperature and freeze gradually. Temperature lowering
should ideally be between 0.01 and 0.1 degrees C per minute, with slower preferred
especially after the patient has solidified.
Figure 4: patient shifted into a container
Step 4 :
Place the patient in a container, and suspend the container above the (low) level of
liquid nitrogen in a Dewar, to begin vapor phase cooling to -196C. Cooling should
continue slowly at about 0.01C per minute if possible. Rapid cooling may cause stress
fractures.
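The slow rates quoted in steps 3 and 4 imply cool-down times of days rather than hours; for example, at the slowest recommended rate, step 4 alone takes:

\[
\frac{196\,^{\circ}\mathrm{C} - 79\,^{\circ}\mathrm{C}}{0.01\,^{\circ}\mathrm{C}/\mathrm{min}}
= 11{,}700\ \text{minutes} \approx 8\ \text{days}.
\]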

DAMAGE FROM ICE FORMATION AND ISCHEMIA
The freezing process creates ice crystals, which some scientists have claimed
damage cells and cellular structures so as to render any future repair impossible.
Cryonicists have long argued, however, that the extent of this damage was greatly
exaggerated by the critics, presuming that some reasonable attempt is made to perfuse the
body with cryo-protectant chemicals (traditionally glycerol) that inhibit ice crystal
formation.
SOLUTION TO THIS PROBLEM:
Vitrification preserves tissue in a glassy rather than frozen state. In glass,
molecules do not rearrange themselves into grainy crystals as they are cooled, but instead
become locked together while still randomly arranged as in a fluid, forming a "solid
liquid" as the temperature falls below the glass transition temperature. The Cryonics
Institute developed computer-controlled cooling boxes to ensure that cooling is rapid
above Tg (the glass transition temperature, i.e. the solidification temperature) and slow below Tg (to
reduce fracturing due to thermal stress). If the circulation of the brain is compromised,
protective chemicals may not be able to reach all parts of the brain, and freezing may
occur either during cooling or during rewarming. Cryonicists argue, however, that injury
caused during cooling might, in the future, be repairable before the vitrified brain is
warmed back up, and that damage during rewarming might be prevented by adding more
cryo-protectant in the solid state, or by improving rewarming methods. Again, however,
Cryonicists counter that future technology might be able to overcome this difficulty, and
find a way to combat the toxicity after rewarming. Some critics have speculated that
because a cryonics patient has been declared legally dead, their organs must be dead, and
thus unable to allow cryo-protectants to reach the majority of cells. Cryonicists respond
that it has been empirically demonstrated that, so long as the cryopreservation process
begins immediately after legal death is declared, the individual organs remain
biologically alive, and vitrification (particularly of the brain) is quite feasible. This
same principle is what allows organs, such as hearts, to be transplanted, even though they
come from dead donors.
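As a rough illustration of the "rapid above Tg, slow below Tg" schedule mentioned above, a cooling-box set-point could be generated as in the sketch below; the glass-transition temperature and the two rates used here are assumptions for illustration only, not the Cryonics Institute's actual parameters:

T_GLASS = -123.0        # assumed glass-transition temperature, degrees C
RATE_ABOVE_TG = 0.1     # degrees C per minute: faster, to reach the glassy state
RATE_BELOW_TG = 0.01    # degrees C per minute: slower, to limit thermal-stress fracturing

def target_temperature(t_minutes, start=-79.0, end=-196.0):
    # piecewise-linear set-point for the cooling box at time t (in minutes)
    minutes_to_tg = (start - T_GLASS) / RATE_ABOVE_TG
    if t_minutes <= minutes_to_tg:
        temperature = start - RATE_ABOVE_TG * t_minutes
    else:
        temperature = T_GLASS - RATE_BELOW_TG * (t_minutes - minutes_to_tg)
    return max(temperature, end)

for t in (0, 220, 440, 5000):
    print(t, round(target_temperature(t), 1))   # -79.0, -101.0, -123.0, -168.6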

REVIVAL
Revival requires repairing damage from lack of oxygen, cryo-protectant toxicity,
thermal stress (fracturing), and freezing in tissues that do not successfully vitrify. In
many cases extensive tissue regeneration will be necessary. Hypothetical revival
scenarios generally envision repairs being performed by vast numbers of microscopic
organisms or devices. More radically, mind transfer has also been suggested as a possible
revival approach if and when technology is ever developed to scan the memory contents
of a preserved brain.
It has been claimed that if technologies for general molecular analysis and repair
are ever developed, then theoretically any damaged body could be revived. Survival
would then depend on whether preserved brain information was sufficient to permit
restoration of all or part of the personal identity of the original person, with amnesia
being the final dividing line between life and death. Nanotechnology is capable of
delivering medication to the exact location where it is needed. Organic dendrimers, a type
of artificial molecule roughly the size of a protein, would be ideal for the job of
delivering a medicine. They are more durable than proteins, as they have stronger
bonds.
They contain voids inside of them, giving them large internal surface areas; just
what is needed for delivering a medicine. There is a possibility of designing dendrimers
that swell and release cargo when the required target molecules are around- allowing
stuff intended for tissue A to reach tissue A and not somewhere else.
Figure 5: A dendrimer with a target cell.
Heart attacks kill more people a year in the United States than anything else. A
heart attack is the clogging of key arteries that support the heart. A nanorobot can prevent
these clots by acting as a kind of shepherd: they can clear clots that start to form and
move the material along. Heart attacks, strokes and blood clots can be effectively
prevented by this method.
Figure 6: A nano robot at work
As seen in figure 6 above, the nanorobots could be sent into the blood stream, where they
repair the damaged cells. These robots are programmed to deliver drugs only to particular
cells that are damaged; the healthy cells would not be touched. The cell responds to
external triggers during gestation and development. Instructive apoptosis is triggered by
certain ligands coming into contact with certain receptors on the cell's surface.
A Nanoprobe has the capability of carrying these ligands. Another relatively simple way
of triggering apoptosis is to breach the cell membrane of a target cell with a tube and
drain the cytoplasm. Getting rid of cells using apoptosis is advantageous since no damage
to other cells in the vicinity occurs. The contents of the cell are neatly packed and
discarded as the cell dies. Cancer, molds and other Eukaryotes can effectively be killed
using this method. Nanoprobes can be designed to target those organisms and effectively
destroy them.

CONCLUSION:
So you can see that people can be kept in a suspended state and later on cured.
Cryonics thus helps nanotechnology to prove itself. Cryonics and nanotechnology form a
useful pair. Nanotechnology is an infant science. But it has the potential to cure almost
every disease. Once the scientists succeed in reviving a person, people shall believe in it.
Let us use this technology for constructive purposes and bring back to life those who died
very young, as well as those who, though old in years, still have much to contribute to
science and humanity.

REFERENCES:
Ettinger, Robert C.W. (1964). The Prospect of Immortality, First, Doubleday.
Alcor Life Extension Foundation.
American Cryonics Society.
Cryonics Society - Resources and Advocacy.
The Prospect of Immortality, Free download of the book that started the cryonics
movement.
Cryonics, Volume 6 Issue 61, Alcor Life Extension Foundation.
APPLICATION FORM


Name _____K. Janaki, K. Soujanya
Branch of Study ______EEE__________________
Address for Communication___K. Janaki________
_Room # 25, Nalanda Hostel, JNTU Campus____ _
Kakinada___- 533003 ________________________
_________________________________________
Phone _____9440652695_______________
Email__janakikolli@yahoo.co.in, kesiraju_souju@yahoo.co.in

Accommodation Required: Yes




















A
Paper
On
Design of Sophisticated Fuzzy Logic
Controllers Using Genetic Algorithms







BY,
K.JANAKI (III B.Tech-EEE)
K.SOUJANYA (III B.Tech-EEE).







ABSTRACT

Design of fuzzy logic controllers encounters difficulties in the selection of
optimized membership functions and fuzzy rule base, which is traditionally achieved by a
tedious trial-and-error process. This paper develops genetic algorithms for automatic design
of high performance fuzzy logic controllers using sophisticated membership functions that
intrinsically reflect the nonlinearities encountered in many engineering control applications.
The controller design space is coded in base-7 strings (chromosomes), where each bit
(gene) takes one of the 7 discrete fuzzy values. The developed approach is subsequently applied
to the design of a proportional plus integral type fuzzy controller for a nonlinear water level
control system. The performance of this control system is demonstrated higher than that of
a conventional PID controller. For further comparison, a fuzzy proportional plus derivative
controller is also developed using this approach, the response of which is shown to present
no steady-state error.













Introduction
Modern control theory has been successful for well defined, either deterministically
or stochastically, systems. This approach, however, encounters problems in many
engineering applications where systems to be controlled are difficult to model, have a
strong nonlinearity or are embedded in a changing environment with uncertainty. With the
development of modern information processing technology and computational intelligence,
an alternative solution to these problems has been to incorporate human intelligence
directly into automatic control systems. These intelligent control schemes tend to imitate
the way of human decision making and knowledge representation and have received
increasing attention widely across the control community in the world. It has been shown,
in applications such as robot control, automotive systems, aircraft, spacecraft and process
control, that they offer potential advantages over conventional control schemes in less
dependency on quantitative models, natural decision making, learning capability, a greater
degree of autonomy, ease of implementation and friendly user interface.
The crux of designing an FLC lies, however, in the selection of high-performance
membership functions that represents the human expert's interpretation of the linguistic
variables, because different membership functions determine different extent to which the
rules affect the action and hence the performance. The existing iterative approaches for
choosing the membership functions are basically a manual trial-and-error process and lack
learning capability and autonomy. Therefore, the more efficient and systematic genetic
algorithm (GA), which acts on the survival-of-the-fittest Darwinian principle for
reproduction and mutation, has been applied to FLC design for searching the poorly
understood, irregular and complex membership function space with improved performance.
Successful application of this approach has been demonstrated in spacecraft rendezvous,
cart-pole balancing, linear motion of cart, three-term control and pH value control.
However, the design scope of these FLC membership functions is limited by internally
linear triangular/trapezoidal shapes and by binary encoding.
This paper develops genetic algorithms for designing fuzzy logic controllers using
sophisticated membership functions that intrinsically reflect the non linearity encountered
in many engineering control problems. Since the coding parameters are increased in these
FLCs and decision making of most FLCs are based on seven discrete values in one
dimension, the base-7 coding is used for the coding process.

Fuzzy Logic Controller
A schematic of a fuzzy control system is shown in Fig. 1. The FLC relates the
control variables (error and change_in_error) being translated by a component known as a
condition interface into fuzzy linguistic terms which are specified by the membership
functions of the fuzzy sets. In order to reflect the fuzzy nature of the seven linguistic
classifications and to allow for the best options for high performance design, this paper
develops the symmetrical exponential membership functions given by equations (1a), (1b) and (1c),
where in (1a) the index i ranges over {zero, small, medium, large}.

Fig 1 A Fuzzy Controller


Example shapes of the membership functions of the error and change_in_error are
shown in Fig 2. Here, the position parameter describes the centre point of the membership function along
the universe of discourse; the "shape parameter", which lies in the range [1.5, 5.0], resembles evolutionary
shapes, including triangles and trapezoids; and the "scale parameter", which lies in the range [0.1, 3.0],
modifies the base-length of the membership functions and determines the amount of overlapping. Note
that a small overlapping is necessary for a distinctive fuzzy decision making. Note also that in (1a) the
scale parameter is not included in the power of the shape parameter, and this allows for a gradual and
consistent change in the base-length.
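Since equations (1a)-(1c) themselves are not reproduced in this copy, the exact expression cannot be quoted here; the parameter description above, however, suggests a form such as exp(-|x - c|^beta / sigma), with the scale parameter kept outside the power of the shape exponent. The sketch below uses that assumed form purely for illustration:

import math

def membership(x, c, beta, sigma):
    # assumed symmetrical exponential membership function: position c (centre),
    # shape exponent beta in [1.5, 5.0], scale sigma in [0.1, 3.0]
    return math.exp(-abs(x - c) ** beta / sigma)

# e.g. the membership of an error of 1.2 in a fuzzy set centred at 1.0
print(round(membership(1.2, c=1.0, beta=2.0, sigma=0.5), 3))   # 0.923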

The essential steps in designing fuzzy controllers include:
Defining input and output variables;
Specifying all the fuzzy sets and their membership functions defined for each
input and output variable;
Converting the input variables to fuzzy sets;
Compilation of an appropriate and complete set of heuristic control rules that
operate on these fuzzy sets, i.e. formulating the fuzzy rule-base;
Designing the computational unit that accesses the fuzzy rules and computes for
fuzzy control action; and
Devising a transformation method for converting fuzzy control action into crisp
value.

Fig. 2 Symmetrical exponential membership functions
The major task in the design of a fuzzy controller lies in the optimal choice of the
membership functions, or the position, shape and scale parameters in the case of the membership functions
given in (1). In manual design, these functions and the rule-sets are usually obtained by a
tedious trial-and-error process which does not search through the entire possible solution
space and therefore does not result in an optimal design. The difficulties in manual design
are also the reason that existing FLCs compromise accuracy with simplicity by using
pure triangular and trapezoidal membership functions.




Design of Sophisticated FLCs using GAs
The genetic algorithms developed by Holland simulate the natural evolution
process that operates on chromosomes. Simple genetic algorithms that yield satisfactory
results in many practical problems consist of three operations:
Reproduction;
Crossover; and
Mutation in the following process:
Make initial population
REPEAT
    Choose parents from the population;
    Selected parents produce children with the number weighted by their individual fitness;
    Extend the population with the children;
    Select fittest elements of the extended population to survive for the next cycle
UNTIL satisfactory generation found
Decode the optimum population (and form the final FLC)
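The cycle listed above can be sketched in a few lines of Python (a generic illustration, not the authors' Pascal implementation; the chromosome length and the fitness function here are placeholders):

import random

GENES, LENGTH, POP, GENERATIONS = 7, 80, 30, 50   # placeholder sizes

def fitness(chromosome):
    # placeholder: in the real design this decodes the chromosome into an FLC,
    # simulates the controlled plant and returns a performance score
    return -sum(chromosome)

def crossover(a, b):
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

def mutate(chromosome, rate=0.03):
    # the adaptive scheme described below varies this rate between 0.03 and 0.2
    return [random.randrange(GENES) if random.random() < rate else g for g in chromosome]

population = [[random.randrange(GENES) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    parents = sorted(population, key=fitness, reverse=True)[:POP // 2]
    children = [mutate(crossover(*random.sample(parents, 2))) for _ in range(POP)]
    # the fittest elements of the old and new generations survive to the next cycle
    population = sorted(population + children, key=fitness, reverse=True)[:POP]

best = max(population, key=fitness)   # decode `best` to obtain the final FLC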
By coding the coefficients (the position, shape and scale parameters) of the membership functions, the fuzzy logic
rule-set and the gains of error and change_in_error into an n-bit string, the entire
"population" of the strings termed "chromosomes", each of which is randomly generated
from its n bits termed "genes", form the FLC solution space. Then the GA process is used
to reproduce and select the "fittest" individual, i.e., the "optimal" solution to designing
FLCs. An FLC design process is usually complicated, nonlinear and poorly understood,
and GAs have been proven to be an extremely efficient searching tool for such a process. It
is also shown that such a searching technique is robust and converges intelligently and
faster than conventional searching algorithms.
Small alterations had been made to Goldberg's general-purpose GA to include an
adaptive mutation method and the selection of the fittest elements from the new and
previous generations to survive.

In the adaptive mutation method, identical chromosome strings are prevented from appearing
in the new generation by increasing the mutation rate according to how similar the children are to
their parents after the process of crossover. With this scheme, the maximum mutation
rate is limited to 0.2 and the lowest to 0.03.
With a concatenated, mapped, unsigned coding method, the coefficients (the position, shape and scale
parameters) and the gains of the error and change_in_error (K1 and K2) are coded with values
mapped from a minimum value Cmin to a maximum value Cmax using an n-bit, unsigned
base-7 integer starting from 0. The decoding mapping is given by:

C = Cmin + (Cmax - Cmin) * string_val / (7^n - 1) (2)

where the string_val is the base-7 value represented by an n-bit string, and C is the
decimal (real) value being coded. The choice of the specific number of bits n used to
represent each sub-string variable is dependent on the resolution required in its variation.
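A direct transcription of the decoding mapping in equation (2), using the scale-parameter range [0.1, 3.0] as an example, would look like the following sketch (illustrative only):

def decode(genes, c_min, c_max):
    # genes: list of base-7 digits, most significant first (equation (2))
    n = len(genes)
    string_val = sum(g * 7 ** (n - 1 - i) for i, g in enumerate(genes))
    return c_min + (c_max - c_min) * string_val / (7 ** n - 1)

print(decode([0, 0], 0.1, 3.0))   # 0.1  (minimum of the range)
print(decode([6, 6], 0.1, 3.0))   # 3.0  (maximum of the range)
print(decode([3, 3], 0.1, 3.0))   # 1.55 (string_val 24 out of 48)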
An example of a complete chromosome string is shown in Fig 3, where substring groups
A, B, C, D and E represent the rule-set for the fuzzy rule-base, the scaling parameters, the
position parameters, the gains (K) and the shape parameters, respectively.


Fig. 3 A coded chromosome string and its partitioned sub-strings

The fuzzy rule-set comprises 7x7 possible control actions corresponding to the values of the
input error and change_in_error, and therefore 49 bits are used in sub-string A to form the
look-up table, where a single bit represents each control action. This is illustrated in
Fig. 4.

Fig. 4 Fuzzy rule-set look-up table for control actions



In sub-string B, each 2-bit group represents the scaling-factor value for the fuzzy sets Big, Medium,
Small and Zero of the error and change_in_error membership functions, respectively. The next
sub-string of 8 integers is coded for defining the positions of the fuzzy sets "Small" and "Medium"
along the universe of discourse, whilst the positions of "Big" and "Zero" are fixed at 3.0 and 0.0,
respectively. Again, each
parameter requires two bits. Sub-string D represents K1 and K2 used as gains of the
error and change_in_error, with three bits assigned to each. The final group of 8 integer
characters is coded for the shape coefficients of both the error and change_in_error
variables, requiring one bit for each parameter. For each individual chromosome (a
complete string) in the population, it is necessary to establish a measure of its fitness, f(x),
which is often used to accurately evaluate the performance of the controller and will be
used to generate a probability according to which the individual in question will be
selected for reproduction. However, the task of defining a fitness function is always
application specific.


In this paper, the objective of the controller is to drive the output of the process to
the desired set- point in the shortest time possible and to maintain the output at the
desired set-point, which is evaluated by:




----(3)





where n is the time index, e the error, e the change_in_error and the finish_time in the
following implementation is 300.
For a given set of membership functions, error energies were calculated with the
intent of using the GA to minimize it. This fitness function provides a means for
evaluating the performance of each FLC using different fuzzy membership functions and
rule-base being selected, so that an optimized FLC would be developed upon it.

Implementation on a Nonlinear System
The approach developed in the previous section is programmed in Pascal and is
then used to the design of a proportional plus integral type FLC for a nonlinear twin-tank
coupled water level control system. The membership functions designed by the GA are
shown in Fig. 2, whose parameters are given below:


It can be inferred that manually designed membership functions could not be as
sophisticated as these and would not ultimately lead to optimized results by trial-and-
error. The response of the water level of one tank to a step with amplitude 75mm is given
by curve (1) in Fig. 5, using this PI type FLC automatically designed by a genetic
algorithm. For comparison, the performance of a conventional PID controller is shown in
curve (2), whose gains were initially determined by the Ziegler-Nichols rule and further
manually tuned to their best performance. As can be seen, the performance achieved by
the genetic FLC is apparently superior to that obtained from this PID controller, with
both the overshoot and delay rate being improved.



Fig. 5 Performance comparison between GA designed PI and PD FLCs and manually
designed PID controller
The GA approach developed in the previous section can also be extended to other
designs. For comparison purposes, it has been used to design an FLC in the traditional and
simplest way, where the control action generated obeys a PD law rather than a PI law.
The performance is shown in curve (3) of Fig. 5, which is slightly better than that of the
manually tuned PID controller. It is interesting to note that there is no steady-state error
resulting from this fuzzy PD controller, whilst this is not the case for a conventional PD
controller (results not shown in Fig. 5).

Conclusion and Future Work
GAs have been proven to be an extremely efficient and robust searching tool for
complicated and poorly understood processes. It is also shown in the literature that such a
searching technique converges intelligently and much faster than conventional learning
means.
Genetic algorithms for automatic design of fuzzy logic controllers have been
developed in this paper, using sophisticated membership functions that intrinsically
reflect the nonlinearity encountered in many engineering applications. Utilizing these
sophisticated membership functions, the control laws can be implemented in a simple
scheme, such as in the PI or PD form. The sophistication obtained by the machine based
automatic design could not be reached by manual design which is exclusively based on a
painstaking trial-and-error process. The genetic design approach discussed in this paper
offers a convenient and complete way to design a fuzzy controller in the shortest time.
Further work underway includes the on-line design of adaptive FLCs. For such
genetic-fuzzy controllers, parallel architectures are currently being studied in order to
provide a high throughput rate for the control signals with short system latency, whilst
performing the adaptation tasks.


References
[1] K.C. Nag and Y. Li, Application of Genetic Algorithms to Design of Fuzzy Logic
Controllers, Internal Report, Department of Electronics and Electrical Engineering,
University of Glasgow, Aug. 1993.
[2] E. Rogers and Y. Li., Eds., Parallel Processing in a Control Systems Environment,
London: Prentice Hall International, May 1993.
[3] J.H. Holland, "Genetic algorithms," Scientific American, pp.44-50, July 1992.
[4] L. Karr, "Design of an adaptive fuzzy logic controller using a genetic algorithm,"
Proc. 4th Int Conf. on Genetic Algorithms, 1991, pp.450-457.
[5] P. Wang and D.P. Kwok, "Optimal fuzzy PID control based on genetic algorithm,"
Proc. 1992 Int Conf. on Industrial Electronics, Control, Instrumentation and Automation,
1992, ch286, vol.3, pp.977-981.
[6] www.ncst.ernet.in, Basics of Fuzzy Logic and Fuzzy Set Theory.
[7] www.austinlinks.com/Fuzzy/Basics.htm
[8] chemdiv-www.nrl.navy.mil, Basics of Genetic Algorithms.
DETECTION FOR CONCEALED WEAPONS
USING IMAGE PROCESSING

Paper presented by:
T.Nineetha A.S.V.Madhuri
ECE-3/4 ECE-3/4
GNITS GNITS
(G.Narayanamma Institute of Technology and Sciences)
Hyderabad.
Email:nineetha_13@yahoo.com
madhuri_6891@yahoo.com
ABSTRACT

The detection of weapons concealed underneath a person's clothing is very important
to the improvement of security in public places. Manual screening procedures for detecting
concealed weapons such as handguns, knives and explosives are common in controlled
access settings like airports and entrances to sensitive buildings. It is sometimes highly
desirable to detect concealed weapons from stand-off distances, especially when it is
impossible to arrange the flow of people through a controlled access point.

The paper "Detection for Concealed Weapons Using Image Processing" aims mainly
at the eventual deployment of automatic detection and recognition of concealed
weapons where manual screening is difficult or not possible. The weapon detection is
achieved by using digital image processing techniques coupled with sensor
techniques. This detection process uses infrared imaging sensors to obtain images of the
target. The picture obtained from the sensors undergoes image processing so that the
hidden objects can be traced. The processing involves the following stages:
Preprocessing,
Image denoising and enhancement,
Clutter filtering,
Image fusion,
Processing towards automatic weapon detection,
Segmentation for object extraction,
Shape description and finally detection.

J.N.T.U COLLEGE OF ENGINEERING,

ANANTAPUR.

Paper by:
K. Sagar Reddy (04001A0529)
e-mail id:sagar_jn529@yahoo.co.in
V. Anil Kumar (04001A0536)
e-mail id:anilkumar_141@yahoo.com






Index

1. Introduction
2. Structure of DNA
3. Computer in a test tube
4. A successor to silicon
5. Scope and recent updates
6. Applications
7. Advantages & disadvantages
8. Conclusion
9. References





DNA COMPUTING




Abstract

Silicon microprocessors have been the heart of the computing world for more than forty
years. Computer chip manufacturers are furiously racing to make the next microprocessor
that will topple speed records, and in the process are cramming more and more electronic
devices onto the microprocessor. Sooner or later, the physical speed and miniaturization
limits of silicon microprocessors are bound to hit a wall.

Chipmakers need a new material to produce faster computing speeds with fewer
complexities. You won't believe where scientists have found this new material: DNA, the
material our genes are made of, is being used to build the next generation of
microprocessors. Scientists are using this genetic material to create nano-computers that
might take the place of silicon computers in the next decade.

A nascent technology that uses DNA molecules to build computers faster than the
world's most powerful human-built computers is called DNA computing. Molecular
biologists are beginning to unravel the information-processing tools, such as enzymes,
copying tools and proofreading mechanisms, that evolution has spent millions of years
refining. Now we are taking those tools in large numbers of molecules and using them
as biological computer processors.

DNA computing has significant advantages over conventional silicon-based
computing. DNA computers can store billions of times more data than your personal
computer. DNA computers have the ability to work in a massively parallel fashion,
performing many calculations simultaneously. DNA molecules that provide the input can
also provide all the necessary operational energy.

DNA computing has made a remarkable progress in almost every field. It has found
application in fields like biomedical, pharmaceutical, information security, cracking
secret codes, etc.

Scientists and researchers believe that in the foreseeable future DNA computing could
scale up to great heights!











Introduction


Man's thirst for knowledge has driven the information revolution. The human
brain, a master processor, processes information about the internal and external
environment and sends signals to take appropriate actions. In nature, such controls exist
at every level. Even the smallest of cells has a nucleus, which controls the cell. Where
does this power actually come from? It lies in the DNA. The ability to harness this
computational power will determine the fate of the next generation of computing.

DNA computing is a novel technology that seeks to capitalize on the
enormous informational capacity of DNA, biological molecules that can store huge
amounts of information and are able to perform operations similar to that of a computer,
through the deployment of enzymes, biological catalysts that act like software to execute
desired operations. The appeal of DNA computing lies in the fact that DNA molecules
can store far more information than any existing conventional computer chip. Also,
utilizing DNA for complex computation can be much faster than utilizing a conventional
computer, for which massive parallelism would require large amounts of hardware, not
simply more DNA.

Structure of DNA


All organisms on this planet are made from the same type of genetic blueprint, which
binds us together. Within the cells of any organism is a substance called Deoxyribonucleic
Acid (DNA), a double-stranded helix of nucleotides which carries the genetic
information of a cell. The data density of DNA is impressive. Just as a string of binary
data is encoded with ones and zeros, a strand of DNA is encoded with four bases,
represented by the letters A (Adenine), T (Thymine), C (Cytosine) and G (Guanine).



Graphical representation of inherent bonding
properties of DNA
Illustration of double helix shape of DNA.


The bases (nucleotides) are spaced every 0.35 nanometers along the DNA molecule,
giving it a remarkable data density of nearly 18Mbits per inch. These nucleotides will
only combine in such a way that C always pairs with G and T always pairs with A. This
complementarity makes DNA a unique data structure for computation and can be
exploited in many ways.
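The pairing rule can be stated in a couple of lines of code; the sketch below simply computes the complementary strand of a given sequence and is included only to make the rule concrete.

PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    # Watson-Crick complement of a strand, read in the same direction.
    return "".join(PAIR[base] for base in strand)

print(complement("ATTACG"))   # prints TAATGC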

Computer in a test tube


The idea of using DNA to store and process information took off in the year
1994, when Leonard Adleman, a computer scientist at the University of Southern
California, came to the conclusion that DNA had computational potential. Adleman
caused an avalanche in the fields of biology, mathematics and computer science by solving
a problem called the Directed Hamiltonian Path problem, a close relative of the
Traveling Salesman Problem. The salesman in this problem has a map of several cities
that he must visit to sell his wares, where these cities are connected by one-way streets
between some, but not all, of them. The crux of the problem is that the salesman must find
a route that passes through each city (A through G) exactly once, with a designated
beginning and end. The salesman does not want to backtrack or travel any street more
than once. This is a classic NP-complete problem.


Basic outline of Traveling Salesman Problem


Adleman used a basic seven-city, thirteen-street model of the Traveling Salesman Problem
and created randomly sequenced DNA strands 20 bases long to chemically represent each
city, plus a complementary 20-base strand overlapping each city's strand halfway to
represent each street. This representation allowed each multi-city tour to become a piece
of double-stranded DNA with the cities linked in some order by the streets.




Representation of 20 bases DNA strand representing a city showing the bonding tendencies of
nucleotides to DNA strands representing pathways between the cities

By placing a few grams of every DNA city and street in a test tube and allowing the
natural bonding tendencies of the DNA building blocks to occur, the DNA bonding
created over 10^9 candidate answers in less than one second. From these candidates, the
correct answers were determined by requiring that the correct path must start at A and
end at G, must pass through all cities, and must contain each city exactly once.

The correct answer was determined by filtering the strands of DNA according to their
end-bases, to find which strands began at city A and ended at city G. The remaining
strands were then measured through electrophoretic techniques to determine whether the
path they represent passes through all seven cities. Finally, the resulting sets of DNA were
examined individually to determine whether they contain each city in turn. The strand(s)
that remained were then determined to be the answer(s). This process took Adleman about
a week. A conventional computer is better suited for deterministic computation, permitting
at most one next move at any step in the computation. The inherent parallel computing
ability of DNA, however, is perfectly suited for solving such non-deterministic types of
problems.
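The filtering logic of the experiment can be mimicked in silico: generate every candidate tour (the massively parallel step the DNA performs chemically) and keep only those that satisfy the conditions above. The seven-city street map used below is illustrative only; the paper does not reproduce Adleman's actual graph.

from itertools import permutations

cities = "ABCDEFG"
streets = {("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("E", "F"), ("F", "G"),
           ("A", "D"), ("B", "E"), ("C", "F"), ("D", "G"), ("B", "G"), ("A", "C"), ("E", "G")}

def valid(tour):
    # Filters used on the DNA strands: start at A, end at G, use only existing
    # one-way streets; visiting each city exactly once is guaranteed by permutations().
    return (tour[0] == "A" and tour[-1] == "G"
            and all((tour[i], tour[i + 1]) in streets for i in range(len(tour) - 1)))

answers = ["".join(tour) for tour in permutations(cities) if valid(tour)]
print(answers)   # e.g. ['ABCDEFG'] for this particular street map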

A Successor to Silicon
Silicon microprocessors have been the heart of the computing world for more than forty
years. Computer chip manufacturers are furiously racing to make the next microprocessor
that will topple speed records, and in the process are cramming more and more electronic
devices onto the microprocessor. Many have predicted that Moore's law (which states
that microprocessors double in complexity every two years) will soon reach its
end, because of the physical speed and miniaturization limits of silicon microprocessors.

DNA computers have the potential to take computing to new levels, picking up where
Moore's law leaves off. DNA computers could surpass their silicon-based predecessors.
The main advantages of DNA over silicon are described below.

As long as there are cellular organisms, there will be a supply of DNA. This large supply
makes DNA a cheap resource. Unlike the toxic materials used to make traditional
microprocessors, DNA biochips can be made cleanly. DNA computers are also many times
smaller than today's computers.

DNA molecules have the potential to store extremely large amounts of information. It has
been estimated that a gram of dried DNA can hold as much information as a trillion
CDs. More than 10 trillion DNA molecules can fit into an area of 1 cubic centimeter.
With this small amount of DNA, a computer would be able to hold 10 terabytes of data
and perform 10 trillion calculations at a time.

In a biochemical reaction taking place in a tiny surface area, a very large number of DNA
molecules can operate in concert, creating a parallel processing system that mimics the
ability of the most powerful supercomputer. DNA computers have the ability to perform
many calculations simultaneously; specifically, on the order of 10^9 calculations per ml
of DNA per second. A calculation that would take 10^22 modern computers working in
parallel to complete in the span of one human's life would take one DNA computer only
a year to polish off!



Scope and recent updates


Scientists have taken DNA from the free-floating world of the test tube and anchored it
securely to a surface of glass and gold. University of Wisconsin-Madison researchers
have developed a thin, gold-coated plate of glass about an inch square. They believe it is
the optimum working surface on which they can attach trillions of strands of DNA.
Putting DNA computing on a solid surface greatly simplifies the complex and repetitive steps
previously used in rudimentary DNA computers. Importantly it takes DNA out of the test
tube and puts it on a solid surface, making the technology simpler, more accessible and
more amenable to the development of large DNA computers capable of tackling the kind
of complex problems that conventional computers now handle routinely. Researchers
believe that by the year 2010 the first DNA chip will be commercially available.

Applications
DNA logic gates are the first step towards creating a computer that has a structure similar
to that of an electronic PC. Instead of using electrical signals to perform logical
operations, these DNA logic gates rely on DNA code. They detect fragments of genetic
material as input, splice these fragments together and form a single output. Recent work
has shown how these gates can be employed to carry out fundamental computational
operations, such as the addition of two numbers expressed in binary. This invention of DNA
logic gates and their uses is a breakthrough in DNA computing.
A group of researchers at Princeton University in early 2000 demonstrated an RNA
computer similar to Adleman's, which had the ability to solve a chess problem involving
how many ways there are to place knights on a chessboard so that none can take the
others.
While a desktop PC is designed to perform one calculation very fast, DNA strands
produce billions of potential answers simultaneously. This makes the DNA computer
suitable for solving "fuzzy logic" problems that have many possible solutions rather than
the either/or logic of binary computers. In the future, some speculate, there may be hybrid
machines that use traditional silicon for normal processing tasks but have DNA co-
processors that can take over specific tasks they would be more suitable for.
DNA computing is in its infancy, and its implications are only beginning to be explored.
But DNA computing devices could revolutionize the pharmaceutical and biomedical
fields. Some scientists predict a future where our bodies are patrolled by tiny DNA
computers that monitor our well-being and release the right drugs to repair damaged or
unhealthy tissue. They could act as Doctors in a cell. DNA computing research is going
so fast that its potential is still emerging.
DNA computing can be used by national governments for cracking secret codes, or by
airlines wanting to map more efficient routes. The concept of using DNA computing in
the fields of cryptography, steganography and authentication has been identified as a
possible technology that may bring forward a new hope for unbreakable algorithms in the
world of information security.



Advantages
The advantage of DNA approach is that it works in parallel, processing all
possible answers simultaneously.
DNA computing is an example of computing at a molecular level, potentially a size
limit that may never be reached by the semiconductor industry.
It can be used to solve a class of problems that are difficult or impossible to solve
using traditional computing methods.
There is no power required for DNA computing while the computation is taking
place. The chemical bonds that are the building blocks of DNA happen without
any outside power source. Its energy-efficiency is more than a million times that
of a PC.
DNA computing is a cost-effective method for solving complex computational
problems.

Disadvantages

DNA computers require human assistance.
Technological challenges remain before DNA computing becomes practical. Researchers
need to develop techniques to reduce the number of computational errors produced by
unwanted chemical reactions with the DNA strands. They also need to eliminate,
combine, or accelerate the steps in processing the DNA.
The practical computational environment required is daunting; the test-tube
environment used for DNA computing is far from practical for everyday use.
To the naked eye, a DNA computer looks like a clear water solution in a test tube.
There is no mechanical device, so to make the output visible, human
manipulation is needed.

Conclusion

The beauty of DNA research is found in the possibility of mankind's use of its very
building blocks of life to solve its most difficult problems. DNA computing research is
moving so fast that its potential is still emerging. Scientists and mathematicians around the
world are now looking at the application of DNA computers to a whole range of
intractable computing problems. In any case, we will not be tossing out our PCs for
test tubes of DNA anytime soon, and the use of DNA computing in every walk of life is a
long way off!













References

Websites:
computer.howstuffworks.com
users.aol.com/ibrandt/dna_computer.html
arstechnica.com/reviews/2q00/dna/dna-1.html
nationalgeographic.com
hypography.com
cis.udel.edu
house.gov/science/landweber
whyfiles.org/shorties/dna_computer.html
www4.tpgi.com.au/users/aoaug/dna_comp.html
newsscientist.com
iturls.com/English/TechHotspot
theindianprogrammer.com
news.bbc.co.uk/hi/english/sci/tech
chronicle.com/data/articles.dir
olympus.co.jp/en/magazine/TecZone













ELECTROOCULOGRAPHIC GUIDANCE OF A
WHEELCHAIR USING EYE MOVEMENTS
CODIFICATION

N.B.K.R. INSTITUTE OF SCIENCE AND TECHNOLOGY
(AFFILIATED TO S.V.U, TIRUPATI)
NELLORE (DT), A.P., INDIA










DEPARTMENT OF ELECTRONICS INSTRUMENTATION &CONTROL
ENGINEERING



D.YASWANTH SAI K.NAGENDRA GOKUL
III B.TECH EICE, III B.TECH EICE,
EMAIL: yaswanthsai@gmail.com EMAIL: kngokul@hotmail.com
















Abstract:
This paper presents a new method to
guide mobile robots. An eye-control
device based on electrooculography
(EOG) is designed to develop a system
for assisted mobility. Control is achieved
by means of eye movements detected
using the electrooculographic potential.
Using an inverse eye model, saccadic
eye movements can be detected and the
direction of the user's gaze determined.
This control technique can be
useful in multiple applications, but in
this work it is used to guide a
wheelchair for helping people with
severe disabilities. The system consists
of a standard electric wheelchair, an on-
board computer, sensors and a graphical
user interface. Finally, some
experimental results and conclusions
about electrooculographic guidance
using ocular commands are presented.
INTRODUCTION
Assistive robotics can improve the quality of life of disabled people. Nowadays, there
are many help systems to control and guide autonomous mobile robots, and all of these
systems allow their users to travel more efficiently and with greater ease. In recent years,
the number of applications developing help systems for people with severe disabilities has
increased, improving on the traditional systems. Among these new methods we can find
videooculography (VOG) and infrared oculography (IROG) systems, based on detecting
the eye position using a camera; there are also several techniques based on voice
recognition for detecting basic commands to control instruments or robots. The joystick
(or sometimes a tactile screen) is the most popular technique used to control different
applications by people with limited upper-body mobility, but it requires fine control that
the person may have difficulty accomplishing. All these techniques can be applied to
different people according to their degree of disability, always using the technique or
techniques that are most effective for each person. This work is included in a
general-purpose navigational assistant operating in environments with accessible features
that allow a wheelchair to pass. This project is known as the SIAMO project. A complete
sensory system has been designed, made up of ultrasonic and infrared sensors and cameras,
in order to allow the detection of obstacles and dangerous situations and to generate a map
of the environment. The control and navigation module then has to guarantee comfortable
path tracking. Experimental results with users have shown that they demand interaction
with the system, making the robotic system semi-autonomous rather than completely
autonomous. Our goal is the development of a robotic wheelchair system based on
electrooculography. Our system must allow the users to tell the robot where to move in
gross terms; it will then carry out that navigational task using common-sense constraints,
such as avoiding collisions.
ELECTROOCULOGRAPHIC
POTENTIAL (EOG)
A survey of eye-movement recording methods, describing the main advantages and
drawbacks of each one, can be found in the literature. In this work, the goal is to sense
the electrooculographic potential (EOG), because it offers easy access to the face, good
accuracy and resolution, a large range of measurable eye displacements, real-time
operation and low cost. Our discrete electrooculographic control system (DECS) is based
on recording the polarization potential, or corneal-retinal potential (CRP). This potential
is commonly known as the electrooculogram.
The EOG ranges from 0.05 to 3.5 mV in
humans and is linearly proportional to
eye displacement. The human eye is an
electrical dipole with a negative pole at
the fundus and a positive pole at the
cornea. Figure 1 shows the ocular
dipole.


Figure 1. Ocular dipole.
This system may be used for increasing communication and/or control. The analog
signal from the oculographic measurements is turned into a signal suitable for control
purposes. The derivation of the EOG is achieved by placing two electrodes on the outer
sides of the eyes to detect horizontal movement, and another pair above and below the
eye to detect vertical movement. A reference electrode is placed on the forehead. Figure 2
shows the electrode placement.


Figure 2. Electrodes placement.
The EOG signal changes approximately 20 microvolts for each degree of eye movement.
In our system, the signals are sampled 10 times per second. Recording the EOG signal
presents several problems. Firstly, this signal is seldom deterministic, even for the same
person in different experiments. The EOG signal is the result of a number of factors such
as eyeball rotation, eye movement, eyelid movement, different sources of artifact such as
the EEG, electrode placement, head movements, the influence of luminance, etc. For these
reasons, it is necessary to eliminate the shifting resting potential (mean value), because
this value changes. To avoid this problem, an AC differential amplifier is used, with a
high-pass filter with a cutoff at 0.05 Hz and a relatively long time constant. The amplifier
used has a programmable gain of 500, 1000, 2000 or 5000.
EYE MODEL BASED ON
EOG (BIDIM-EOG)
Our aim is to design a system capable of obtaining the gaze direction by detecting eye
movements. In view of the physiology of the oculomotor system, its modelling could be
tackled from two main viewpoints: a) anatomical modelling of the gaze-fixing system,
describing its spatial configuration and the ways the visual information is transmitted and
processed; b) modelling of the eye movements, studying the different types of movements
and the way they are made. On the basis of the physiological and morphological data of
the EOG, a model of the ocular motor system based on electrooculography, the
bidimensional dipolar model, is proposed (figure 3).


Figure 3. Bidimensional bipolar
model (BiDiM-EOG).

The Filter block's mission is to eliminate the problems associated with the
electrooculographic signal. These problems are due mainly to its variability, as well as to
possible interferences on the signal, such as the blinking effect, facial movements,
interference produced by other biopotentials (EMG, EEG), and electrode displacements on
the skin. It should also eliminate possible power-supply interference (at a frequency of
50 Hz). At the same time, it is convenient to develop algorithms for improving the
signal-to-noise ratio. Keeping this in mind, the Filter block should be composed of a
high-pass filter with a cutoff frequency of 0.05 Hz, which eliminates the DC component
and, therefore, its drift. To eliminate high-frequency interference on the EOG, a low-pass
filter with a cutoff frequency of 35 Hz is used. Finally, to enhance the signal and improve
its characteristics, an ALE-LMS (Adaptive Line Enhancer using the Least Mean Squares
algorithm) based on the Wiener filter is used. The internal structure of the Filter block is
shown in figure 4.
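A minimal sketch of this filter chain, written with standard SciPy filters, is shown below. The 250 samples-per-second acquisition rate is an assumption (the hardware rate is not specified here), and the adaptive ALE-LMS stage is omitted.

import numpy as np
from scipy import signal

def filter_block(eog, fs=250.0):
    # Band-pass 0.05-35 Hz: removes the drifting resting potential and
    # high-frequency noise in one IIR design.
    b_bp, a_bp = signal.butter(2, [0.05, 35.0], btype="bandpass", fs=fs)
    x = signal.filtfilt(b_bp, a_bp, eog)
    # 50 Hz notch for power-supply interference.
    b_n, a_n = signal.iirnotch(50.0, Q=30.0, fs=fs)
    return signal.filtfilt(b_n, a_n, x)

# Example: clean one second of a synthetic, drifting, mains-contaminated trace.
t = np.arange(0, 1.0, 1 / 250.0)
raw = 0.5 * t + 0.02 * np.sin(2 * np.pi * 50 * t) + 0.1 * np.sin(2 * np.pi * 2 * t)
clean = filter_block(raw)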

The ocular movements detector block allows saccadic and smooth eye movements to be
separated using the EOG derivative, as can be seen in figure 5.

Figure 5. Ocular movements detector
block.
The ocular movement model block models the eye movement. For simplicity, a linear
saccadic model is chosen. This model obtains good accuracy in the detection of saccadic
movements (error of less than 2 degrees). The smooth movements model block corresponds
to a foveal pursuit model. The final calculation of the ocular position inside its orbit is
carried out in the "ocular position" block. If a saccadic movement is detected, a position
control is used; that is, the displacement angle corresponding to this movement is
determined and added to the angle corresponding to the previous position of the eye. If
instead a smooth movement takes place, a speed control is used; that is, the speed, duration
and direction of the movement are calculated, making it possible to calculate the
displacement angle corresponding to the smooth movement. The final position (angle) is
calculated as the sum of the saccadic and smooth movements. Figure 6 shows this process.

Figure 6. Ocular position block.
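A rough sketch of the saccadic (position-control) branch is given below, using the 20 microvolts-per-degree sensitivity and the 10 samples-per-second rate quoted earlier; the derivative threshold is an illustrative assumption, not a value taken from the system.

import numpy as np

UV_PER_DEGREE = 20.0    # EOG sensitivity quoted in the text
FS = 10.0               # samples per second, as stated earlier

def saccadic_angle(eog_uv, derivative_threshold=100.0):
    # Estimated gaze angle (degrees) after each sample: jumps are detected from
    # the EOG derivative and accumulated onto the previous eye position.
    d = np.diff(eog_uv, prepend=eog_uv[0]) * FS      # derivative in microvolts/s
    angle, angles = 0.0, []
    for k in range(len(eog_uv)):
        if abs(d[k]) > derivative_threshold:          # sudden change: a saccade
            angle += (eog_uv[k] - eog_uv[k - 1]) / UV_PER_DEGREE
        angles.append(angle)                          # otherwise hold the position
    return np.array(angles)

# Example: a 200-microvolt step corresponds to a 10-degree saccade.
print(saccadic_angle(np.array([0.0, 0.0, 200.0, 200.0, 200.0]))[-1])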
The security block detects when the eyes are closed, and in this case the output is disabled.
Besides, the model has to adapt itself to possible variations in the acquisition conditions.
To do this, the model parameters are adjusted in accordance with the angle detected. A
person can voluntarily make only saccadic movements, unless he tries to follow a moving
object. Therefore, to control an interface by means of eye movements codification, it is
convenient to focus the study on the detection of saccadic movements. This way, the
BiDiM-EOG can be simplified by eliminating the part that detects smooth eye movements.
Figure 7 shows the new eye model for this case.

Figure 7. Ocular movements model for
eye movements codification.
The process followed (using a linear saccadic eye model) can be observed in figure 8,
which shows the results of a trial in which the user made a sequence of saccadic
movements from 10 to 40 degrees in the horizontal derivation. It is possible to see that the
derivative of the electrooculographic signal allows us to determine when a sudden
movement of the eye gaze is made. This variation can easily be translated to angles
(figure 8.d). The results obtained have demonstrated that this model achieves an error of
less than 2 degrees on saccadic movements in tests carried out over more than a working
hour. These results become more accurate the better the initial calibration process of the
system, which should adapt to each user.

Figure 8. Process results for gaze angle
detection.
GUIDANCE OF A WHEELCHAIR
USING COMMANDS
GENERATED BY EYE
MOVEMENTS DETECTED USING
ELECTROOCULOGRAPHY
The aim of this control system is to guide an autonomous mobile robot using the
positioning of the eye in its orbit by means of the EOG signal. In this case, the autonomous
vehicle is a wheelchair for disabled people. The EOG signal is recorded using Ag-AgCl
electrodes, and these data are sent by means of an acquisition system to an onboard PC, in
which they are processed to calculate the gaze direction or eye movements. Then, in
accordance with the guidance control strategy and the ocular movement detected, the
corresponding control command is activated. This generates the linear and angular speeds,
which are sent to the low-level control of the wheelchair. There is visual feedback in the
system by means of a tactile screen that the user has in front of him.

Figure 9. Wheelchair.


Figure 10. Guidance system.
The mechanical structure of the wheelchair consists of a platform (measuring
100x80x58 cm and weighing approximately 35 kg) on two motor wheels and two idle
wheels. The motor wheels, with a radius Rd = 16 cm and separated by a distance
D = 54 cm, have independent traction provided by two DC motors. There is a distributed
control system based on the LonWorks technology of ECHELON. In this way, the
communication between the PC and the motor drivers of the right and left wheels is
carried out by means of a PCLTA through a DDE connection. The low-level control of the
electronic system driving the DC traction motors is implemented with a PID (programmed
into the Neuron Chip), and its mission is to ensure that the linear speeds of the right and
left-hand wheels are approximately those indicated by the electronic control cards. Given
that this control loop may not be sufficient in itself to ensure reliability in the wheelchair
movements, another external loop can be implemented based on a neural control.


Figure 11. Low level control

Figure 12. User interface
To control the robot movements there are multiple options: direct access, semiautomatic
and automatic sweep (scan), and eye movements codification. In former works we studied
direct-access guidance and automatic and semiautomatic scan. In direct-access guidance,
the user can see the different guidance commands on a screen and select them directly by
gaze direction. In this way, when the user looks somewhere on the screen, the cursor is
positioned where he is looking; the user can then select the action to control the wheelchair
movements. The actions are validated by time; that is, when a command is selected, it is
necessary to keep looking at it for a period of time to validate the action. In scan guidance,
it is necessary to generate an ocular tick to select among the different commands presented
on the screen. The actions are validated by time; that is, when a command is selected, if
another tick is not generated during a time interval, the command is validated and the
guidance action is executed. The guidance based on eye movements codification has
different options, such as continuous guidance and on-off activation commands. The
on-off activation consists of detecting an ocular action and executing the associated
guidance command. The guidance commands are effected by means of the following
ocular actions: UP - the wheelchair moves forward; DOWN - the wheelchair moves
backwards; RIGHT - the wheelchair moves to the right; LEFT - the wheelchair moves to
the left. Fixed speeds per event are used. To finish the execution of a command, it is
enough to generate another ocular action, and the system returns to the rest state.
In this paper we focus our work on the continuous control technique, because it allows us
to generate a simple code for controlling the wheelchair. This control attempts to emulate
the intuitive control that a non-handicapped person exercises when driving a vehicle: the
system controls the linear speed like the accelerator of a car, and the angular speed like the
steering wheel. For this, we have implemented the following movement commands:
UP: linear speed increase (V++).
DOWN: linear speed decrease (V--).
RIGHT: angular speed increase (W++).
LEFT: angular speed decrease (W--).
Command generation is coded using a state machine that establishes the state in which the
system is working (figure 13). As can be seen, when the user looks up (Mov_UP) the linear
speed increases (V++); when the user looks down (Mov_DO) the linear speed decreases
(V--). These increases and decreases in linear and angular speed must be adjusted to the
user's possibilities. Thus, there are three speed-control modes: a) the increment or
decrement is fixed; b) the increment is proportional to the number of identical commands
issued during an interval of time, which gives a greater acceleration and deceleration of
the system; c) a fuzzy control that selects the linear and angular speeds as a function of the
V and W chosen by the user and of the trajectory that must be followed. This fuzzy control
reduces the linear speed when the angular speed is large, so that tight curved trajectories,
which could be dangerous for the user, are not allowed. Besides, there are alarm and stop
commands for dangerous situations, which permit stopping (Rep) and switching off (Off)
the system. The linear and angular speeds selected by the user are converted to the linear
speeds of the right and left-hand wheels (wr, wl) in accordance with the following
equation:



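Because the equation itself is not reproduced above, the sketch below combines the command handling of the state machine with an assumed standard differential-drive conversion to wheel speeds, using the wheel radius Rd = 16 cm and wheel separation D = 54 cm given earlier; the fixed speed increments correspond to option (a) and their values are purely illustrative.

RD = 0.16              # motor wheel radius [m]
D = 0.54               # distance between the motor wheels [m]
DV, DW = 0.05, 0.10    # assumed fixed increments [m/s] and [rad/s]

def apply_command(state, command):
    # Update (v, w) from one ocular command of the state machine.
    v, w = state
    if command == "Mov_UP":
        v += DV            # linear speed increase (V++)
    elif command == "Mov_DO":
        v -= DV            # linear speed decrease (V--)
    elif command == "Mov_RI":
        w += DW            # angular speed increase (W++)
    elif command == "Mov_LE":
        w -= DW            # angular speed decrease (W--)
    elif command in ("Rep", "Off"):
        v, w = 0.0, 0.0    # stop / switch off
    return v, w

def wheel_speeds(v, w):
    # Assumed differential-drive conversion to right/left wheel rates [rad/s].
    return (v + w * D / 2.0) / RD, (v - w * D / 2.0) / RD

state = (0.0, 0.0)
for cmd in ("Mov_UP", "Mov_UP", "Mov_RI"):
    state = apply_command(state, cmd)
print(wheel_speeds(*state))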
Figure 13. State machine.
Conclusions
This research project is aimed at developing a usable, low-cost assistive robotic
wheelchair system for disabled people. In this work, we present a system that can be used
as a means of control, allowing the handicapped, especially those with only eye-motor
coordination, to live more independent lives. Eye movements require minimum effort and
allow direct selection techniques, and this improves the response time and the rate of
information flow. Some of the previous wheelchair robotics research is restricted to a
particular location, and in many areas of robotics environmental assumptions can be made
that simplify the navigation problem. However, a person using a wheelchair and the EOG
technique should not be limited by the device intended to assist them, provided the
environment is accessible.










References
[Akay, 94] M. Akay, Biomedical Signal Processing. Academic Press, 1994.
[Lahoud & Cleveland, 94] Joseph A. Lahoud and Dixon Cleveland, "The Eyegaze Eyetracking System," LC Technologies, Inc., 4th Annual IEEE Dual-Use Technologies and Applications Conference.
[Echelon, 95] LonManager DDE Server user's guide, Echelon Corporation, USA, 1995.
[Glenstrup & Engell, 95] A. J. Glenstrup and T. Engell, Eye Controlled Media: Present and Future State. PhD thesis, DIKU (Institute of Computer Science), University of Copenhagen, Denmark, 1995.
[Nicolau et al., 95] M. C. Nicolau, J. Burcet, R. V. Rial, "Manual de técnicas de Electrofisiología clínica," University of Islas Baleares.
[Jacob, 96] Robert J. K. Jacob, "Eye Movement-Based Human-Computer Interaction Techniques: Toward Non-Command Interfaces," Human-Computer


A
PAPER PRESENTATION
ON

Embedded Image Coding Using Zerotrees
of Wavelet Coefficients


PRESENTED BY:

CH.GANGA BHAVANI R.CHANDANA
bhavani_chandu02@yahoo.com sai_chandana19@yahoo.co.in
04121A0418 04121A0407
Ph.no: 9908308418 Ph.no: 9866964746


E.C.E (III/IV) B.TECH


SREE VIDYANIKETHAN ENGINEERING COLLEGE
A.RANGAMPET, TIRUPATI








Abstract

The embedded zerotree
wavelet algorithm (EZW) is a
simple, yet remarkably
effective, image compression
algorithm, having the property
that the bits in the bit stream
are generated in order of
importance, yielding a fully
embedded code. The embedded
code represents a sequence of
binary decisions that
distinguish an image from the
null image. Using an
embedded coding algorithm, an
encoder can terminate the
encoding at any point thereby
allowing a target rate or target
distortion metric to be met
exactly. Also, given a bit
stream, the decoder can cease
decoding at any point in the bit
stream and still produce exactly
the same image that would
have been encoded at the bit
rate corresponding to the
truncated bit stream. In addition
to producing a fully embedded
bit stream, EZW consistently
produces compression results
that are competitive with
virtually all known
compression algorithms on
standard test images. Yet this
performance is achieved with a
technique that requires
absolutely no training, no
prestored tables or codebooks,
and requires no prior
knowledge of the image source.
The EZW algorithm is based on
four key concepts:
1) a discrete wavelet transform
or hierarchical subband
Decomposition,
2) prediction of the absence of
significant information across
scales by
exploiting the self-similarity
inherent in images,
3) entropy-coded successive-
approximation quantization,
4) universal lossless data
compression which is achieved
via adaptive
arithmetic coding.
EZW encoding does not
really compress anything, it
only reorders wavelet
coefficients in such a way that
they can be compressed very
efficiently. An EZW encoder
should therefore always be
followed by a symbol encoder,
for instance an arithmetic
encoder.




Introduction:
The EZW encoder is a special type of encoder used to encode signals of any dimension,
such as images and sound. It cannot, of course, beat the popular encoders in the
market in every respect, but it has its own applications in embedded encoding systems,
where it can also help restrict illegal access to the data. The EZW encoder additionally
offers the advantages of denoising, lower bandwidth and simply implemented algorithms,
which are explained in the following sections; it thereby provides a lot of room for the
embedded type of encoding.
What is Wavelet transform?
A wavelet is a waveform of effectively limited duration that has an average value of zero.
A wavelet transform transforms a signal from the time domain to the joint time-scale
domain, which means that the wavelet coefficients are two-dimensional. After wavelet
transforming an image we can represent it using trees, because of the subsampling that is
performed in the transform.

Fourier analysis consists of breaking up a signal into sine waves of various frequencies.
Similarly, wavelet analysis is the breaking up of a signal into shifted and scaled versions
of the original (or mother) wavelet.

Why Wavelets?
With traditional DCT and subband coding, trends obscure the anomalies that carry
information; for example, edges get spread, yielding many non-zero coefficients to be
coded. Wavelets are better at localizing edges and other anomalies, yielding a few non-zero
coefficients and many zero coefficients. The difficulty is telling the decoder where the few
non-zeros are. A significance map (SM) is a binary array indicating the locations of the
zero and non-zero coefficients; it typically requires a large fraction of the bit budget to
specify. Wavelets provide a structure (zerotrees) to the SM that yields efficient coding.

Zerotree Coding
Every wavelet coefficient at a given scale can be related to a set of coefficients at the next
finer scale of similar orientation. A zerotree root (ZTR) is a low-scale zero-valued
coefficient for which all the related higher-scale coefficients are also zero-valued.
Specifying a ZTR allows the decoder to track down and zero out all the related
higher-scale coefficients.

Steps involved in EZW Encoding
When an image is wavelet transformed, the energy in the subbands decreases as the scale
decreases. The wavelet coefficients are therefore encoded in decreasing order, in several
passes. For every pass a threshold is chosen against which all the wavelet coefficients are
measured. When all the wavelet coefficients have been visited, the threshold is lowered
and the image is scanned again to add more detail to the already encoded image.
The dependency between wavelet coefficients across different scales is used to efficiently
encode large parts of the image which are below the current threshold. It is here that the
zerotree enters: a zerotree is a quad-tree of which all nodes are equal to or smaller than
the root.
EZW Coding Algorithm:
STEP 1: Determine the initial threshold T0 for bit-plane coding, chosen so that every
coefficient magnitude is smaller than 2T0. For successive iterations the threshold Ti is
halved, and each pass codes as newly significant only those coefficients whose magnitudes
lie in [Ti, 2Ti).
STEP 2: Maintain two separate lists:
Dominant List: coordinates of those coefficients not yet found to be significant.
Subordinate List: magnitudes of those coefficients already found to be significant.
For each threshold, perform two passes: a Dominant Pass followed by a Subordinate Pass.
STEP 3: Dominant Pass (Significance Pass): the wavelet coefficients on the Dominant List
are compared to Ti to determine their significance and, if significant, their sign. The
resulting significance map is zerotree coded and sent.
STEP 4: Code the significance map using four symbols: zerotree root, isolated zero,
positive significant, and negative significant.
STEP 5: Entropy code the symbols using adaptive AC, and send. For each coefficient
coded as significant (positive or negative), put its magnitude on the Subordinate List and
remove it from the Dominant List.
STEP 6: Subordinate Pass (Refinement Pass): provide one more bit on each of the
magnitudes on the Subordinate List, as follows: halve the quantizer cells; if the magnitude
is in the upper half of its old cell, provide 1; if the magnitude is in the lower half of its old
cell, provide 0.
STEP 7: Entropy code the resulting sequence of 1s and 0s using adaptive AC, and send.
Stop when the bit budget is exhausted. The encoded stream has embedded in it all
lower-rate encoded versions; thus, encoding/decoding can be terminated prior to reaching
the full-rate version.

EZW Encoder Pseudocode
(Note: stop at any point where the bit budget is exceeded.)

Initialize:
    T0 = 2^floor(log2(max coeff))
    k = 0
    Dominant List = All Coefficients
    Subordinate List = Empty

Significance Pass:
    For each entry w(m) in the Dominant List (note: scan using any appropriate order)
        If |w(m)| >= Tk [i.e. w(m) is significant]
            If w(m) is positive
                Output symbol sp
            Else [i.e., w(m) is negative]
                Output symbol sn
            Endif on sign
            Put w(m) on the Subordinate List
            Remove w(m) from the Dominant List
        Else [i.e., |w(m)| < Tk; insignificant]
            Case #1: w(m) is a non-root part of a zerotree
                Don't code it; it is predictably insignificant
            Case #2: w(m) is a zerotree root
                Output symbol zr
            Case #3: w(m) is an isolated zero
                Output symbol iz
        Endif on significance
    End loop through Dominant List
    Entropy code symbols using adaptive AC (optional, but recommended)
    Send bits

Refinement Pass:
    For each entry w(m) in the Subordinate List
        If w(m) is in the Bottom Half of [Tk, 2Tk]
            Output L (L for low)
        Else [i.e., w(m) is in the Top Half of [Tk, 2Tk]]
            Output H (H for high)
        Endif on bottom/top
    End loop through Subordinate List
    Entropy code the Hs and Ls using adaptive AC (optional, but recommended)
    Send bits

Update:
    Tk+1 = Tk / 2
    k = k + 1
    Go to Significance Pass
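The sketch below mirrors the two-list structure of this pseudocode in plain Python, using the names P, N, ZTR and IZ for the four symbols of STEP 4. The raster scan (instead of a Morton scan), the simplified parent-child relation, the bit-plane refinement rule and the absence of an arithmetic coder are all simplifications made for illustration, and the coefficient block at the end is an arbitrary example rather than the one used in this paper.

import numpy as np

def children(i, j, n):
    # Simplified parent-child relation for an n x n dyadic layout; the (0, 0)
    # coefficient is treated specially so that the tree has no cycles.
    if i == 0 and j == 0:
        return [(0, 1), (1, 0), (1, 1)]
    if 2 * i >= n or 2 * j >= n:          # finest scale: no children
        return []
    return [(2 * i, 2 * j), (2 * i, 2 * j + 1), (2 * i + 1, 2 * j), (2 * i + 1, 2 * j + 1)]

def descendants(i, j, n):
    out, stack = [], children(i, j, n)
    while stack:
        c = stack.pop()
        out.append(c)
        stack.extend(children(c[0], c[1], n))
    return out

def ezw_encode(coeffs, passes=3):
    # Returns one (dominant_symbols, refinement_bits) pair per pass.
    n = coeffs.shape[0]
    T = 2 ** int(np.floor(np.log2(np.abs(coeffs).max())))   # initial threshold
    significant = {}          # (i, j) -> magnitude: the subordinate list
    result = []
    for _ in range(passes):
        refine = list(significant)     # coefficients found significant in earlier passes
        dominant, skipped = [], set()
        for i in range(n):             # dominant (significance) pass, raster scan
            for j in range(n):
                if (i, j) in significant or (i, j) in skipped:
                    continue
                c = coeffs[i, j]
                desc = descendants(i, j, n)
                if abs(c) >= T:
                    dominant.append('P' if c >= 0 else 'N')
                    significant[(i, j)] = abs(c)
                elif all(abs(coeffs[x, y]) < T for x, y in desc):
                    dominant.append('ZTR')   # zerotree root (leaves are coded this way too)
                    skipped.update(desc)     # descendants are not coded at all
                else:
                    dominant.append('IZ')    # isolated zero
        # Subordinate (refinement) pass: emit the current bit-plane bit of each magnitude.
        refinement = [(int(significant[p]) // T) % 2 for p in refine]
        result.append((dominant, refinement))
        T //= 2
    return result

coeffs = np.array([[63, -34, 49, 10], [-31, 23, 14, -13], [15, 14, 3, -12], [-9, -7, -14, 8]])
for k, (dom, ref) in enumerate(ezw_encode(coeffs)):
    print(k, ' '.join(dom), ref)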
EXAMPLE

Applying the EZW algorithm with a Morton scan to this example yields six dominant-pass
symbol streams D1-D6 (composed of the symbols p, n, z and t of STEP 4) and five
subordinate-pass bit streams S1-S5; the first pass begins D1: pnztp..., S1: 1010, and the
final dominant pass is D6: zzzttztttztttttnnttt.
Denoising using EZW:
Lossy EZW compression/decompression is very similar to wavelet shrinkage. Wavelet
shrinkage is a way of noise reduction that tries to remove the wavelet coefficients that
correspond to noise. Since noise is uncorrelated and usually small with respect to the
signal of interest, the wavelet coefficients that result from it will be uncorrelated too and
probably be small as well. The idea is therefore to remove the small coefficients before
reconstruction and so remove the noise. Of course this method is not perfect, because
some parts of the signal of interest will also result in small coefficients, indistinguishable
from the noisy coefficients, that are thrown away. But if the selection criteria are well
chosen, based on statistical properties of the input signal and the required reconstruction
quality, the result is quite remarkable.

There are two popular kinds of wavelet shrinkage: one using hard thresholding and one
using soft thresholding. For hard thresholding you simply throw away all wavelet
coefficients smaller than a certain threshold. For soft thresholding you subtract a constant
value from all wavelet coefficients and you throw away everything smaller than zero (if
we consider only the positive values).

In the following, when we use EZW, we mean the reorganisation of the wavelet
coefficients as described above only, not the compression part. The compression doesn't
change anything, it just makes it less clear what happens.

When we do lossy EZW, we throw away the coefficients that are not needed to meet the
quality criteria set for the reconstruction. These coefficients are the small ones, because
these are the ones that add the extra detail. Due to the way EZW works, these are also the
coefficients that are encoded at the end of the EZW stream. The threshold for lossy EZW
is often based on visual criteria and acceptable PSNR after reconstruction.

Since you simply throw away the small coefficients, lossy EZW looks a lot like hard
thresholding, but it isn't, because the small coefficients at the end of the stream are also
used to lift the already decoded coefficients to their original level. So if you do not have
these small coefficients, all the other coefficients will be too small. Furthermore, this
difference is the same for all coefficients. This means that lossy EZW is identical to soft
thresholding.

Conclusion 1: Lossy EZW is identical to wavelet shrinkage using soft thresholding.
Conclusion 2: You can use lossy EZW to do noise reduction and data compression in one
go!
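For concreteness, the two shrinkage rules discussed above can be written in a few lines of NumPy; the transform itself and the choice of threshold are outside the scope of this sketch.

import numpy as np

def hard_threshold(coeffs, t):
    # Keep coefficients whose magnitude reaches the threshold, zero out the rest.
    return np.where(np.abs(coeffs) >= t, coeffs, 0.0)

def soft_threshold(coeffs, t):
    # Shrink every coefficient towards zero by t; anything that crosses zero is dropped.
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

c = np.array([63.0, -34.0, 10.0, -3.0, 1.0])
print(hard_threshold(c, 8.0))   # small coefficients removed, large ones untouched
print(soft_threshold(c, 8.0))   # every surviving coefficient reduced by the threshold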
References
[Sha93] Shapiro, J. M., "Embedded Image Coding Using Zerotrees of Wavelet Coefficients," IEEE Transactions on Signal Processing, Vol. 41, No. 12 (1993), p. 3445-3462.
[Cre97] Creusere, C. D., "A New Method of Robust Image Compression Based on the Embedded Zerotree Wavelet Algorithm," IEEE Transactions on Image Processing, Vol. 6, No. 10 (1997), p. 1436-1442.
[Alg95] Algazi, V. R. and R. R. Estes, "Analysis Based Coding of Image Transform and Subband Coefficients," Proceedings of the SPIE, Vol. 2564 (1995), p. 11-21.


A Paper Presentation on

Embedded Security
Using Public Key Cryptography
In Mobile Phones


Submitted by

M.V.N.SIVA SAIRAM

Reg.No : 04711A0459
Email : sivasairam.mvn@gmail.com

Department of Electronics and Communication Engineering

NARAYANA ENGINEERING COLLEGE

NELLORE







ABSTRACT
As mobile networks expand their
bandwidth, mobile phones, as with any
other Internet device, become substantially
exposed to Internet security
vulnerabilities. Since mobile phones are
becoming popular and widely distributed,
they are increasingly used for financial
transactions and related electronic
commerce. Consequently, they will feature
applications that also demand adequate
security functions. In this regard, the
prevailing security system in wired
networks could be extended to wireless
networks as well. There are many
schemes for enforcing security, of which,
the most efficient is the public key
infrastructure (PKI), employing public
key cryptography. Extension of PKI to
wireless networks demands for a
modification of the existing technologies.
In this paper, we propose an idea for
implementing public key cryptography in
mobile phones, by means of a
comprehensive design, with due
consideration for the hardware aspects as
well. Public key cryptography deals with
a secure way of encrypting documents,
by the use of public and private keys.
PKI, of which public key cryptography
forms the essential part, ensures more
protection and privacy than the existing
methods like IDs and passwords. A
mobile phone with public key
cryptography capabilities can also act
as an authentication device for access-
control systems, based on the
challenge-response mechanism.
Introducing a highly advanced security
concept such as PKI to the wireless
Internet will facilitate the rapid market
adoption of secure, web-based
transaction and authentication services
such as mobile banking, mobile
brokerage and mobile payment. The
freedom of the wire free world
combined with the security and
authentication made possible by PKI
will change the face of commerce for
business and consumers alike.
INTRODUCTION

In today's world, where the Internet has become a way of life, transactions in commerce
are extensively carried out over the Internet. These include banking, payments, financial
transactions and other commerce-related operations. This has led to the introduction of a
new concept known as e-commerce. E-commerce is an electronic way of conducting
transactions in finance, banking and payments across the Internet. This rapidly expanding
system is plagued by the crippling problem of network security. Security forms the
backbone of the world of commerce. A data transfer session across the network may be
interfered with in the ways described below.
Eavesdropping: the information privacy is compromised without altering the information
itself. Eavesdropping may imply that someone has recorded or intercepted sensitive
information (e.g. credit card numbers, confidential business negotiations).
Tampering: the information is altered or replaced and then sent on to the recipient (e.g. a
change to an order or commercial contract in transit).
Impersonation: the information is passed from or to a person pretending to be someone
else (this is called spoofing, e.g. by using a false e-mail address or web site), or a person
who misrepresents himself (e.g. a site pretends to be a book store, while it really just
collects payments without providing the goods).
This situation is being tackled, and many solutions have been proposed and implemented
for ensuring
security in networks. All these methods are based on the idea of cryptography, which is a
special branch of applied mathematics. Cryptography deals with the manipulation of data
to produce a garbled or scrambled message, which is then sent across the network as
encrypted data. The data may then be received by the correct person and decrypted to
produce the original message. The most widely sought-after method is PKI (Public Key
Infrastructure), which is implemented using public key cryptography.
A PKI is expected to offer its
users the following benefits:

Authentication: allows a receiver of information to verify the origin of the information.
Encryption: allows concealing information transmitted between two parties. The sender
encrypts the information and then sends it, and the receiver decrypts the information
before reading it. The information in transit is unintelligible to an eavesdropper.
Integrity (tamper detection): allows the recipient of information to verify that a third party
has not altered the information in transit.
Non-repudiation: prevents the sender of information from claiming at a later time that
he/she never sent the information.
PKI (PUBLIC KEY
INFRASTRUCTURE)
As explained above, PKI
is a security architecture that has been
introduced to provide an increased level
of confidence for exchanging information
over an increasingly insecure Internet,
where such features cannot otherwise be
readily provided. PKI facilities can,
however be used just as easily for
information exchanged over private
networks, including corporate internal
networks. PKI can also be used to
deliver cryptographic keys between users
(including devices such as servers)
securely, and to facilitate other
cryptographically delivered security
services.
Public key cryptography is the concept used in PKI. This concept makes use of what is
known as a public key and a private key. The public key is made public, i.e. it is freely
available and known to all who wish to send a message to the recipient. The private key is
a key which is maintained secretly by the recipient. The concept of public key
cryptography is elucidated in the following section.

A PKI consists of the following sections:

Certificate:
A public key certificate is
used for authentication and secure
exchange of information on the
Internet, extranets and intranets. The
issuer and signer of the certificate are
known as a certification authority (CA),
described below. The entity being
issued the certificate is the subject of
the certificate.
A public key certificate is a
digitally signed statement that binds the
value of a public key to the identity
of the subject (person, device, or
service) that holds the corresponding
private key. By signing the certificate,
the CA attests that the private key
associated with the public key in the
certificate is in the possession of the
subject named in the certificate.
Certificates are issued for a
variety of functions, including Web user
authentication, Web server
authentication, secure e-mail, IP Security
and code signing. They can also be
issued from one CA to another in order
to establish a certification hierarchy.
The common format for certificates in use today is defined by the ITU-T X.509 version 3
international standard.
Certification Authority:
A certification authority (CA) is an entity that issues certificates to an individual, a
computer, or any other requesting entity. A CA accepts a certificate request, verifies the
requester's information, and then uses its private key to apply its digital signature to the
certificate. The CA then issues the certificate to the subject of the certificate for use as a
security credential within a PKI.
CA Policy :
A CA issues certificates to requesters based on a set of established criteria. The set of
criteria that a CA uses when processing certificate requests (and issuing certificates,
revoking certificates, and publishing CRLs) is referred to as the CA policy.
Rooted CA Hierarchies :
A hierarchy of CAs is built
using a root CA certificate, and then
intermediate CAs with each CA issuing
certificates to subordinate CAs. The chain
terminates when a CA issues a certificate
to an end entity (a user).
Registration :
Registration is the process
by which subjects make themselves
known to a CA. Registration can be
implicit in the act of making the request
for a certificate, or accomplished through
another trusted entity.
Certificate Revocation :
Certificates have a specified lifetime, but they can be revoked for a number of reasons,
such as key compromise or CA compromise.
PUBLIC KEY
CRYPTOGRAPHY

Cryptography is a special
branch of applied mathematics which
deals with the procedure for securing
data and protecting it by means of
converting it into a format which is
scrambled to anyone who intercepts it
midway. The original data is converted
to unintelligible data by means of a
code or a key and is transmitted across
the network. At the receiving end,
however, the recipient has the key to
decode the garbled message and
retrieve its contents. The methods in use today include public key cryptography,
symmetric cryptography and hash algorithms. The most effective of these is public key
cryptography, which uses two separate keys, namely the public key and the private key.
In public-key encryption, the public
key can be passed openly between the
parties or published in a public
repository, but the related private key
remains private. Data encrypted with
the public key can be decrypted only
using the private key. Data encrypted
with the private key can be decrypted
only using the public key.

Like symmetric-key cryptography, public-
key cryptography also has a number of
types of algorithms. However, symmetric
key and public-key algorithms are not
designed in similar ways. Different
public-key algorithms, on the other hand,
work in very dissimilar ways and are
therefore not interchangeable.

Generally public key
cryptography is not used as the lone
method for encryption. Public-key
algorithms are complex mathematical
equations using very large numbers.
Their primary limitation is that they
provide relatively slow forms of
cryptography. In practice, they are
typically used only at critical points,
such as for exchanging a symmetric key
between entities or for signing a hash of
a message (a hash is a fixed-size result
obtained by applying a one-way
mathematical function, called a hash
algorithm, to data). Using other forms of
cryptography, such as symmetric-key
cryptography, in combination with public-
key cryptography optimizes
performance. Public-key encryption
provides an efficient method to send
someone the secret key that was used
when a symmetric encryption operation
was performed on a large amount of
data. Public-key algorithms can be
combined with hash algorithms to
produce digital signatures.

A digital signature is a
means for the originator of a message,
file, or other digitally encoded
information to bind their identity to,
that is, to provide a signature for, the
information. The process of digitally
signing information entails transforming
the information, together with some
secret information held by the sender,
into a tag called a signature. Digital
signatures are used in public key
environments to help secure electronic
commerce transactions by verifying
that the individual sending the message
really is who he or she claims to be,
and by confirming that the message
received is identical to the message
sent. The most commonly used
public-key algorithm is the RSA
algorithm.
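As an illustration of the signing process just described, the following is a minimal, hedged sketch in Python using the third-party 'cryptography' package; the choice of library and of RSA with SHA-256 are assumptions, since the paper does not prescribe a particular implementation.

```python
# A minimal sketch of producing and checking a digital signature, assuming
# the third-party 'cryptography' package is available (RSA with SHA-256).
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

message = b"order: transfer 100 units to account 42"

# Sender: generate a key pair and sign a hash of the message with the private key.
sender_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
signature = sender_private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# Receiver: verify the signature with the sender's public key.
sender_public_key = sender_private_key.public_key()
try:
    sender_public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
    print("signature valid: message is authentic and unmodified")
except InvalidSignature:
    print("signature invalid: message was altered or not signed by this key")
```

Changing even one byte of the message before verification makes the check fail, which is exactly the integrity property described above.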
The public-key exchange of a
symmetric key takes place as described
below (a code sketch of this exchange
follows the list).
1. The sender obtains the public key of the recipient.
2. The sender creates a random secret key (the single key used in symmetric-key encryption).
3. The sender uses the secret key with a symmetric algorithm to convert the plaintext data into ciphertext data.
4. The sender uses the recipient's public key to transform the secret key into a ciphertext secret key.
5. The sender sends the ciphertext data and the ciphertext secret key to the recipient.
6. The recipient converts the ciphertext secret key into plaintext using the private key of the recipient.
7. The recipient converts the ciphertext data into plaintext using the plaintext secret key.
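The exchange just listed can be sketched in a few lines of Python, again assuming the third-party 'cryptography' package; Fernet stands in for the symmetric algorithm and RSA-OAEP for the public-key wrapping, both of which are illustrative assumptions since the paper does not fix particular algorithms.

```python
# A minimal sketch of the hybrid exchange listed above, assuming the
# third-party 'cryptography' package (Fernet = symmetric, RSA-OAEP = wrapping).
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Step 1: the recipient's key pair; the public key is obtained by the sender.
recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public = recipient_private.public_key()

# Steps 2-3: the sender creates a random secret key and encrypts the bulk data with it.
secret_key = Fernet.generate_key()
ciphertext_data = Fernet(secret_key).encrypt(b"a large block of plaintext data")

# Step 4: the sender wraps the secret key with the recipient's public key.
ciphertext_key = recipient_public.encrypt(secret_key, oaep)

# Steps 6-7: the recipient unwraps the secret key, then decrypts the data.
recovered_key = recipient_private.decrypt(ciphertext_key, oaep)
assert Fernet(recovered_key).decrypt(ciphertext_data) == b"a large block of plaintext data"
```

Only the short secret key passes through the slow public-key operation, which is the performance optimisation the paragraph above describes.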
OUR PROPOSAL :
EXTENSION OF PKI TO
WIRELESS ENVIRONMENT
Public key techniques have been
adopted in many areas of information
technology, including network security,
operating systems security, application
data security and Digital Rights
Management (DRM). The benefits of
extending PKI to mobile phones are
secure browsing, mobile payment
authentication, access control, digital
signatures on mobile transactions to
name a few.

The access medium/carrier
to the mobile device changes in the
case of wireless PKI, which calls for
some changes to be made to the
existing PKI. A PKI is considered
wireless when the front-end devices are
wireless; the back end of these
connections feeds into wired networks such as
the Internet. The basic structure of a
wireless PKI is given below.


PROPOSED WIRELESS PKI
ARCHITECTURE

The wireless network
shown above provides a solution for
integrating PKI with existing wireless
networks. The network is shown in
greater detail in the following pages. A
trusted third party, or certification authority,
is brought into the network, and
certification, key validation and related tasks are
accomplished by the CA. Theoretically the
mobile network, the service/content provider and
the trusted third party are enough to
complete the process, but in practice there
are many obstacles to overcome.

Mobile devices are
constrained in that they have less
powerful CPUs, less memory, restricted
power consumption, smaller displays
and diverse input devices compared to
their fixed counterparts in the
network. They must nevertheless be able to:
1. Generate and register keys
2. Manage end-user mobile identities
3. Encrypt and decrypt messages
4. Receive, verify, store, and send certificates
5. Receive, verify, store, and send digitally signed data
6. Create and sign data
Ordinary mobile phones do not have
sufficient memory and processing power
to perform the above-mentioned
functions. Because of this, the need arises to
include network agents that take care of
some of these tasks. The verification of
certificates must be done by the network
agents. Keys can be generated and
maintained by the network agents. They must
also be capable of encrypting and decrypting
messages and transmitting them across the
network. The whole solution boils down to
the idea that the mobile device must at least
be able to perform a digital signature
function in order to permit the establishment
of a wireless PKI. Network agents can
perform all other PKI-related tasks (e.g. data
validation, archiving, or certificate delivery).
A definite standard must be followed
while distributing PKI-related tasks
among mobile devices and network
agents. This standard must be followed
by all mobile device providers and the
software providers.
There are two methods for
implementing wireless PKI in mobile
phones. One way is to design new
mobile phones with the following
capabilities.
o Hardware modules that accelerate
asymmetric and symmetric
encryption algorithms.
o Cryptographic firmware library
that provides access to the
asymmetric, symmetric and
hash algorithms; the
implementation should be
standards compliant.
o A software package that adds
support for a wide range of
applications, such as digital
signature and certificate verification.
o A set of software modules for
handling all complementary
aspects of PKI, such as
certificate handling, PIN
handling, and secure key
storage.
o Software implementation of the
most commonly used security
protocols, making use of the
hardware-accelerated encryption
algorithms.
o High-level PKI-based
applications (e.g. challenge-
response token application).

A hierarchical diagram of the above-mentioned architecture is given below.



The other method is to introduce
external devices which do not change
the existing hardware of the mobile
phones but which provide the above-
mentioned capabilities. A removable
smart card can incorporate this
functionality and should also support
specific predefined functions.
Alternatively, software-based
packages functioning on the limited
processing power and system
resources of the mobile device can be
used to provide security in wireless
networks.


The second method is
the most suitable for existing mobile
technology and adapts to the
limitations of today's mobile phones.
New software packages are being
developed to incorporate the above-
mentioned features.



The network structure explained previously is depicted below with more details.

PROPOSED ARCHITECTURE


The setting up of the network is given
below.

1. The card manufacturer creates
PKI enabled SIM cards. In addition to
the normal process, public and private
key pairs have to be generated.
2. The private key is securely
stored away on the SIM card and the
public key has to be forwarded to the
Certification Authority.
3. The SIMs are distributed to the
customers.



4. When the customer finally applies
for digital signature, a registration
process is started and a notification is
sent to the CA, to create a certificate
from the pre-stored public key
associated with the user's SIM.
5. The new certificate is published
through the directory of the
Certification Authority.
6. In order to complete the
registration, the signing functionality on
the SIM is unblocked and a signing
PIN (S-PIN) is assigned to the
customer.
Now the network incorporates all
the required functions and applications for
PKI. When a mobile user wishes to
access the internet and conduct some
transactions across the Internet, the
system goes through the following steps.

1. The application sends a request
to sign certain data to the security
gateway at the operator.
2. The request is converted into
executable SMS byte code that can be
interpreted by a plug-in on the handset
and is sent to the user.
3. The user gets a request on his
handset and now must enter his S-PIN in
order to allow the phone to digitally
sign the incoming data.
4. The signed data is returned to
the verification system at the operator.
This requests the matching public key
certificate of the SIM that should have
signed the data and uses it to verify the
signature.
5. Finally, a response indicating
whether the signing was
successful or not is created and returned
to the application server.
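The setup and run-time steps above can be condensed into a runnable toy model. Everything below (the SimCard class, the ca_directory dictionary, the S-PIN value and phone number) is a hypothetical illustration built on the third-party 'cryptography' package, not an existing operator or SIM-toolkit API.

```python
# A runnable toy model of the SIM-based signing flow described above, assuming
# the third-party 'cryptography' package. All names are hypothetical.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature


class SimCard:
    """PKI-enabled SIM: holds the private key; signing is unlocked by an S-PIN."""
    def __init__(self, s_pin: str):
        self._s_pin = s_pin
        self._private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        self.public_key = self._private_key.public_key()   # forwarded to the CA at personalisation

    def sign(self, data: bytes, s_pin: str) -> bytes:
        if s_pin != self._s_pin:
            raise PermissionError("wrong S-PIN: signing stays blocked")
        return self._private_key.sign(data, padding.PKCS1v15(), hashes.SHA256())


# Card personalisation and registration: the CA directory stores the public key.
sim = SimCard(s_pin="1234")
ca_directory = {"+910000000000": sim.public_key}

# Steps 1-3: the application asks for a signature; the user enters the S-PIN.
transaction = b"pay 500 INR to merchant 77"
signature = sim.sign(transaction, s_pin="1234")

# Steps 4-5: the operator's verification system fetches the matching public key
# from the CA directory and checks the signature before answering the server.
try:
    ca_directory["+910000000000"].verify(signature, transaction,
                                         padding.PKCS1v15(), hashes.SHA256())
    print("signing successful: transaction accepted")
except InvalidSignature:
    print("signing failed: transaction rejected")
```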

CONCLUSION:

In this paper the concepts
of PKI and public key cryptography
were discussed. A novel way of
incorporating PKI in the wireless
network was proposed, together with a design
of the architecture. Various aspects
were discussed and a description of the
working of such a network was given
without delving deeply into the
hardware details. The opportunities of a
secure, wire-free environment, where all
kinds of transactions can take place, are many
and varied. The most important aspect is
that of security. If implemented, this
technology could open the doors to a
world where protection and privacy
could be reinstated in a way never
imagined before.


REFERENCES:

1. Mobile cellular telecommunications by William C.Y. Lee
2. Mobile communication engineering by William C.Y. Lee
3. Cryptography and its applications by Peter Gutmann.








SRI VENKATESWARA UNIVERSITY
COLLEGE OF ENGINEERING
TIRUPATI.



DEPARTMENT OF ELECTRICAL
ENGINEERING

This paper is presented by

1.P.ABIDA MUBEEN 2.T.VAGDEVI
s.v.u.college of engineering s.v.u.college of engg
D.NO:6-1-2l(6), Roomno507
Chennareddy colony engg block,svuce hostels
Tirupati Tirupati
Ph:08772230725 ph:08772230725
email:p_abidamubeen@yahoo.co.in
email:t_vagdevi@yahoo.co.uk













Name of the Engineering Stream :ECE Subject Code : ECP



EMBEDDED SYSTEMS & VLSI

ABSTRACT:

"Any sufficiently advanced technology is indistinguishable
from magic," wrote Arthur C. Clarke. Thousands of people have
embedded processors beneath their skin, as pacemakers or hearing
aids. It is embedded technology that makes all this possible.
And the day is not far off when you'll have a car similar to what
James Bond drives. An embedded system thus simply refers to a
system that is controlled by a computer that resides within the
system.
EMBEDDED SYSTEM APPLICATIONS

(Figure: embedded controller application domains include government/military, medical equipment, automotive/transportation, aerospace electronics, communication/office automation and consumer electronics.)
This paper particularly deals with Embedded Systems. The
main topics discussed in this paper are
1. Flying Robotics
2. Security and protection of Embedded Systems
FLYING ROBOTICS

Introduction :

The flying robot we develop here is a hovering robot that
possesses the capability to remain directly over an area of interest
until a designated mission is complete. The goal of this project was
to design and build an autonomous hovering robot system based on
the 68HCS12 microcontroller. The most challenging aspect of the
system is the inherent stability problem of the aircraft, which must
constantly be managed by the controller. The figure shows a photo of
the hovering robot frame. To meet the project objectives, we must
first identify all system requirements. There are a number of
challenging problems that must be solved:

1. Creation of a sturdy, yet light, airframe that meets the lift
requirements.
2. Sensor integration
3. Actuator selection
4. Design of the power system and
5. Stability control.
The hovering robot must meet the following system requirements
1. Be autonomous
2. Fit within a 16 x 16 in (40.6x40.6cm) box
3. Weigh no more than 2.5lb (1.14kg)
4. Take off and land on its own
5. Use DC motors.
6. Lift up to 4.5lb (2.05kg) including its own weight.
7. Maintain level hover
8. Have the ability to move in all directions
9. Avoid obstacles.
10. Have room for upgrades
11. Cost no more than $500.00.

The following HCS12 subsystems will be employed in this project:
Input capture system
ATD converter
Pulse width modulation system

Background Theory:

For the hardware frame, a single air frame was designed,
similar to a pre-existing, commercially available, radio-controlled
product. It is made of four rods that span out from the center of the
frame and form an X-Shaped air frame. Attached to each rod is a
DC motor and gear assembly for a propeller. The pitch of adjacent
propellers is reversed to compensate for counter rotation, making
two propellers rotate clockwise and the other two rotate counter
clock wise. The counter rotation is controlled by a tail rotor in
conventional helicopters.











For the microcontroller, a 68HCS12 T-Board from Image
Craft was used. The controller has a 25 MHz clock, 256 Kbyte of on-
chip flash memory, 12 Kbyte of RAM, and 4 Kbyte of EEPROM. The
compact size of the board is highly desirable for the on-board
circuitry. The built-in pulse width modulation module is used to
control the DC motors. For sensors, piezo gyros placed on the three
axes of rotation provide the yaw, pitch and roll angles of the
hovering robot for control. The gyro sensor outputs are fed to the
timer input capture port of the controller and are used to adjust the
speed of the four motors to control the flying robot. In addition to
the gyro sensors, four infrared sensors are also placed on the robot.
These sensors detect obstructions when and if the robot approaches
walls or obstacles. The sensors' outputs are fed to the ATD
converter input port to trigger the flight control algorithm to avoid
crashing into walls or obstacles.
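The paper does not spell out the stabilisation algorithm itself, so the following is only a hedged sketch, written in Python for readability rather than the HCS12's firmware language, of how gyro-derived corrections might be mixed into the four motor duty cycles. All gains, signs and helper names are illustrative assumptions.

```python
# A hedged sketch of one possible stabilisation step for the four-rotor hovering
# robot described above; gains and mixing signs are illustrative assumptions.

def pd_correction(angle_error: float, angle_rate: float,
                  kp: float = 0.8, kd: float = 0.15) -> float:
    """Simple proportional-derivative term for one axis (roll, pitch or yaw)."""
    return kp * angle_error + kd * angle_rate


def motor_duties(base, roll, pitch, yaw):
    """Mix per-axis corrections into four PWM duty cycles (front, back, left, right).
    Opposite rotors share a spin direction, so yaw is corrected by speeding one
    pair and slowing the other, as the counter-rotating propeller layout suggests."""
    duties = [
        base + pitch + yaw,   # front rotor
        base - pitch + yaw,   # back rotor
        base + roll - yaw,    # left rotor
        base - roll - yaw,    # right rotor
    ]
    # PWM hardware accepts duty cycles in [0, 1]; clamp to that range.
    return [min(max(d, 0.0), 1.0) for d in duties]


# One iteration with made-up gyro readings (error, rate) and a 55% base throttle.
roll_cmd = pd_correction(angle_error=0.05, angle_rate=-0.2)
pitch_cmd = pd_correction(angle_error=-0.02, angle_rate=0.1)
yaw_cmd = pd_correction(angle_error=0.00, angle_rate=0.05)
print(motor_duties(0.55, roll_cmd, pitch_cmd, yaw_cmd))
```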

For the gyro sensors, three Futaba GYA350 piezo gyros
were used. We selected this specific gyro since it is specially made
for model airplanes. The gyro weighs 26g and fits in a 27mm x
27mm x 20mm box. The sensors provide PWM signals at 55Hz.

The sensor is light and can fit in a 45mm x 14mm x 20mm
box. Four Graupner speed 300 6 V DC motors were used to propel
the hovering robot off the ground. Each motor weighs 50 g and has
a 2mm shaft diameter. Each motor can draw up to 5A.

SECURING THE EMBEDDED SYSTEMS

Embedded systems often have no real system
administrator. There is nobody to ensure that only strong passwords
are used. So anyone could take over your Internet-connected
washing machine and use it as a platform to launch distributed
denial of service attacks against a government agency.

Classification of attacks on Embedded System
Hackers may measure electromagnetic radiation or power
consumption to get clues about the concealed information.

Securing Against Software Attacks:

Compared to other attacks, software attacks typically
require infrastructure that is substantially cheaper and easily
available to most hackers, making them a serious challenge to
secure embedded system design. A vulnerability allows the attacker
to gain direct access to the end system, while an exposure is an
entry point that an attacker may indirectly exploit to gain access.
Debugging is especially difficult in the embedded world; in the best
of cases, there's little visibility into the working of the code.















Common security requirements of embedded systems
from end-user perspective

The best practices for software security apply at various
levels.
1. The requirements level: Security requirements must cover both
overt functional security (e.g., the use of applied cryptography)
and emergent characteristics.
2. The design and architecture level: A system must be coherent
and present a unified security architecture that takes into
account security principles (such as the principle of least
privilege).
3. The code level: Static analysis tools, which scan source
code for common vulnerabilities, can discover
implementation bugs at the code level.



Tamper Mechanisms:

The goal of tamper mechanisms is to prevent any attempt
by an attacker to perform an unauthorised physical or electronic
action against the device. Tamper mechanisms are divided into four
groups: prevention, evidence, detection and response/recovery.

Implementing Countermeasures:

The TrustZone security technology from ARM is a good
example of how countermeasures against software attacks (and
limited physical attack protection) are implemented for an
embedded system-on-chip. The primary objective of TrustZone is to
clearly separate access to sensitive information from the other
hardware/software portions of an ARM-based system-on-chip
architecture.

The use of TrustZone to secure a typical embedded
system-on-chip is shown in the figure, wherein the security perimeter of
the system extends beyond the processor core to the memory
hierarchy and peripherals. The overall SoC architecture is divided
into secure and non-secure regions. The S-bit and the monitor mode
are used to protect the secure area. Exception handling is also
partitioned into normal and secure areas.

In summary, the TrustZone technology provides an
architecture-level security solution to enforce a trusted code base,
enable certification of trusted software independent of the operating
system, and provide protection against malicious software attacks.















Components of an embedded system on chip architecture
demarcated into secure and non-secure areas
CONCLUSIONS:
In order to secure Embedded Systems the following design
guidelines have to be followed.
1. For a device storing sensitive or private data, the design must
provide protection during normal operation, during attack
through a network connection or during electronic probing in
the attacker's laboratory.
2. Embedded device security threats force designers to include
physical packaging protection in addition to traditional
software safeguards.
3. Debugging is especially difficult in the embedded world. One
needs to understand in circuit emulators, background
debuggers, scopes and logic analysers.
4. Security testing must encompass testing of the security
functionality with standard functional testing techniques, and
risk-based security testing based on attack patterns and
threat models.
5. It is important to differentiate between the public and the
private data that an embedded device stores or displays because
you may be able to reduce or even eliminate the sensitive data
to minimise the security effort.


The good news is that, unlike the problem of providing
security in cyberspace (where the scope is very large), securing the
application-limited world of embedded systems is more likely to
succeed in the near term. We don't often hear of a security breach
in critical life support systems where embedded systems are used.
In other words, we are living in a fairly safe, controlled environment
driven by embedded technology. However, security is a challenge
that should not be overshadowed by other parameters like
performance, area, energy consumption, cost and usability.

Multiprocessors and multiple processors per device will be
the rule. Today, designers fill the rest of their chips with
peripherals, caches, memory and more microprocessors. There's
plenty of room to make microprocessors fabulously complex
without taking up too much silicon.


The task of securing the future lies in the hands of the
engineer at each level, demanding a high degree of perfection. For now, go
ahead and be a part of the future as it lies right at your fingertips,
just waiting to be tapped.

REFERENCES


1. EMBEDDED SYSTEMS DESIGN & APPLICATIONS WITH THE 68HC12 & HCS12 - STEVEN F. BARRETT & DANIEL J. PACK
2. EMBEDDED SYSTEMS - JOHN PEATMAN
3. EFY

SRI VENKATESWARA
UNIVERSITY
TIRUPATHI

EVIDOTS
(NANOTECHNOLOGY)






AUTHORS
L.SINDHURI K.PUJA
(III-Btech-ECE) (III-Btech-ECE)
E-mail:janu986@gmail.com
pujasree84@gmail.com


(SRI VENKATESHWARA COLLEGE OF ENGINEERING AND TECHNOLOGY,
CHITTOOR-517127,ANDHRA PRADESH)






CONTENTS

1. ABSTRACT
2. HISTORY OF EVIDOTS
3. WORKING
4. ADVANTAGES
5. APPLICATIONS
6. CONCLUSION
7. REFERENCES












ABSTRACT

Nanotechnology is the engineering of functional systems at the molecular
scale. It involves the manipulation of matter at nanometer length (one-billionth of a
meter) scales to produce new materials, structures and devices. This paper deals with a
narrow application of nanotechnology called EVIDOTS.
EviDots are an exciting new semiconductor nanocrystal quantum dot
technology developed by Evident Technologies. These are manufactured quantum dots
that range in size from 2-10 nm and contain less than 1,000 atoms. Each EviDot type is
precisely composed of the same material but exhibits different emission properties based
upon the size of the quantum dots. Quantum dots are novel semiconductor crystals so
small their fundamental properties can be tuned or tailored to fit an application,
overcoming the limitations of many existing materials.
EviDots produce high quantum yields with intense fluorescent brightness at
targeted peak wavelengths available in a wide range of colors. They exhibit a broad
excitation spectrum and can be excited by light at any wavelength shorter than the EviDot's
stated emission peak. This makes possible simultaneous detection of nanocrystals with
different emission peaks, imaging and quantification. EviDots are available for applications
ranging in wavelength from 490-2000 nm.These semiconductor nanocrystals can be used to
enhance and expand applications in photonics, biological(life sciences) and
nanotechnology optics.Ideal for nanotechnology researchers and product developers,
EviDots provide scientists with a variety of nanocrystals suspended in a solution for
experimentation and applications.

THE THRUST OF DEVELOPMENT OF NEW TECHNOLOGY NEVER ENDS.


HISTORY

EviDots are available as core quantum dots in their fundamental state, or enhanced with
our proprietary coating technologies as core-shell semiconductor nanocrystal quantum
dots. Core quantum dots are often used for research or in applications where the novel
semiconductor properties are exploited. Core EviDots are the underlying technology for
our proprietary core-shell quantum dots. EviDots quantum dots in a core-shell
configuration have improved stability and fluorescent brightness. Evident shell
techniques remove surface defects, adding strength to the quantum dots and preventing
natural degradation.
Quantum dots, from Evident Technologies, are tuned by selecting a semiconductor
composition and chemically growing it to the size that yields the desired properties.
Evident Technologies is the premier provider of the widest breadth of quantum dot
technologies. We are the industry leader in producing commercial quantities of these
nanomaterials, as well as incorporating quantum dots into a wide variety of polymers and
material forms. Unique to quantum dots is the ability to engineer the optoelectronic
properties by changing the size and composition of the nanomaterials. We can tune the
bandgap, photoluminescent, and electroluminescent properties of EviDots.
We have hundreds of customers around the world and a growing range of quantum dot
materials and products.
EviDot quantum dot products are set to exacting quality standards, narrow
photoluminescence line widths, high quantum yields, and precise peak wavelength
emissions which can be integrated into applications that range from biology to photonics,
lighting to displays, or security inks to energy applications.


WORKING
EviDot semiconductor nanocrystals can be used to enhance and expand many
applications in photonics and nanotechnology optics. For photonics applications such as
biological warfare agent detection, for example, nanocrystals can be used as genetic
markers when coupled to secondary molecules including proteins or nucleic acids.
Additionally, nanocrystals can be separated from the solvent to form self-assembled thin
films or combined with polymers and cast
into films for chemical sensing applications.
EviDots produce high quantum yields with
intense fluorescent brightness at targeted
peak wavelengths. Available in a wide range
of colors, EviDots exhibit a broad excitation
spectrum and can be excited by light at any
wavelength shorter than the EviDot's stated
emission peak. EviDot nanocrystals can be
combined with Ocean Optics spectrometers,
excitation sources and filters to create complete spectrophotometric systems for
fluorescence applications.
EviDots core-shell nanocrystals have a unique zinc sulfide shell coating that stabilizes the
core, improves quantum yield and reduces photo-degradation. Ocean Optics offers four
such core-shell products: cadmium selenide with fluorescing wavelengths of 520 nm, 540
nm, 568 nm, 598 nm and 620 nm; cadmium selenide with a fluorescing wavelength of
490 nm; cadmium selenide with fluorescing wavelengths of 535 nm, 560 nm, 585 nm,
610 nm and 640 nm; and lead selenide with fluorescing wavelengths of 1100 nm, 1310
nm, 1550 nm and 2000 nm.
Each Evidot test kit comes with 4-mL vials of the various nanocrystal compounds, stored
in a toluene solvent.

ADVANTAGES
Evident is the leading developer and commercial source of a wide range of quantum dot
semiconductor nanomaterials and the partner-of-choice for research institutions,
universities and companies in a growing number of markets. In the life science fields, our
proprietary EviFluor technologies enable researchers and product developers to:
Conduct research using an easily-multiplexed, bright probe that has a
fluorescence that lasts orders of magnitude longer than current fluors.
Pioneer new ways to investigate cell physiology, perform high through-put
screening; develop multi-color Western blots; explore highly sensitive FRET
assays; or carry out flow-cytometry with faster, better results.
Create and innovate new methods of stem and cancer cell in vivo imaging.
Explore and create new diagnostic assays.
Many other applications could be enabled by and benefit from the use of EviFluors.
EviFluor quantum dot based fluorescent probes are engineered to avoid the many
shortcomings of organic dye based fluorophores.
Quantum dots in PC3 Cells Stained for Tubulin, DAPI Nuclear Stain
Photostability - EviFluors have greater photostability than traditional dyes; and
will still fluoresce after one hour of continuous excitation;
Signal to Noise Ratio - Easily excited using excitation sources hundreds of nanometers away
from the emission wavelength, improving the signal to noise ratio in many forms
of assays;
Narrow Emission - Very narrow emissions enable multiplexing assays.
Brightness - Have bright fluorescence with larger excitation coefficients at lower
wavelengths.
Fluorescent Lifetimes - Are highly photo-resistant with significantly longer
fluorescence lifetimes. Researchers can use their intense fluorescence to track
individual molecules.
Single excitation source possible - Essentially every quantum dot can be excited
by the same source. Just use a shorter wavelength light source (blue or UV
preferable) than your EviTag's emission.
Sensitive and precise - Due to their large Stokes Shift and sharp emission
spectra, our conjugates have high signal intensity with minimal background
interference.
Easily excitable - Their broad excitation spectra allow the use of existing excitation
sources, making imaging easier.




APPLICATIONS
Specialty Color LEDs

Evident has developed color targeting nanophosphor technology comprising quantum
dots dispersed in thermal curable silicone and epoxies with emission wavelengths ranging
from 520nm to 620nm. LEDs made from these products can reach precise specialty
colors not possible to achieve with conventional phosphor technology and are ideally
suited for corporate branded colors, signage, channel, architectural and seasonal lighting.
Evident LED quantum dot materials can be combined and applied onto "blue emitting"
460-470nm InGaN LEDs, 405-410nm violet LEDs, 390nm UV LEDs, UV emitting black
lights and mercury vapor lamps to achieve nearly any color coordinate within the
chromaticity diagram. Evident's EviMitter nanophosphors are very efficient and have
been shown to convert greater than 70% of the light generated by the underlying LED to
the desired colors. In fact, green LEDs made with 520 and 540nm emitting quantum dots
on blue and violet LEDs have efficiencies that exceed commercial green emitting LED
chips alone.
White Light LEDs

Evident has developed white light EviMitter nanophosphors that offer alternatives to
traditional conversion technologies and phosphor suppliers. Red/green, and
red/green/yellow quantum dot nanophosphor mixtures applied to blue InGaN LED can
achieve high quality white light with CRI greater than 90. Quantum dot mixtures can be
formulated to achieve any color point on the Planckian locus from 4000K warm whites
(red hued early morning and late afternoon sunlight) to 6500K pure white (midday in the
summer), to 8000K (blue hued) cool white.

Evident's quantum dot nanophosphors are compatible with conventional phosphors. Red
620nm quantum dots are readily mixed with broadband yellow phosphors such as
Ce:YAG and Ce:TAG to improve color rendering index (from 70 to 88) and "warm" the
white (reducing color temperature from 8200K to 5500K).
Molded & Printed Parts

Quantum dot-in-polymer composites are injection moldable into any shape or size.
Quantum dot inks are formulated for thermal ink jet, piezo ink jet, screen printing, and
flexographic processes. Illuminating the molded or printed parts with a short wavelength
LED or UV source results in bright and colored light emission.

Quantum Dot Fluorescent Arrays

Polychromatic printed quantum dot arrays are in early research stages and are being
designed for high-energy-efficiency LCD applications. Quantum dot arrays are designed
to replace the color filter, with the white light backlight replaced by a less expensive
and simpler blue LED. Evident Technologies expects that this design will yield a 60%
increase in the efficiency of the display. These arrays are also being investigated for uses
in laser micro displays and in projector applications.
Quantum Dot Electroluminescent Devices

Evident Technologies is collaborating with partners to develop quantum dot
electroluminescent displays similar to Organic Light Emitting Diodes (OLEDs).
Conventional polymer and small-molecule based OLEDs suffer from inherently low
efficiency because, as a fundamental limit, only 25% of the charge injected into the organic
dye emitters within the OLED device can be converted into light. Additionally, many of
the organic emitters are unstable and result in relatively short device lifetimes. Quantum
dots do not suffer from this limitation and can theoretically achieve greater efficiency.
Quantum dots are color tunable throughout the visible portion of the spectrum and as
such can achieve more vibrant and saturated colors than achievable by other means.

Optical Imaging Agents for Cancer

Non-targeted near infrared emitting InP quantum dot core T2 EviTags were tested in
tumor bearing mice. Optical image was acquired after intravenous injection of 100pmol
of T2 EviTags (left) or of physiological buffer as a control (right) into the tail vein of
tumor bearing mice. In this preliminary experiment, the T2 EviTags were shown to be
capable of generating a reasonable signal-to-noise image when compared to the control.
Further, the biodistribution pattern as determined from the optical image shows favorable
clearance of the untargeted T2 EviTags through the lymphatics and the kidneys and
bladder. No uptake in the tumor was observed, suggesting that the next round of imaging,
to be done with tumor-targeted T2 EviTags, will have minimal background signal within the
probes will have a great impact on the early detection, diagnosis and treatment
monitoring of cancer.




CONCLUSION


Evident is the leading developer and commercial source of a wide range of quantum dot
semiconductor nanomaterials and the partner-of-choice for research institutions,
universities and companies in a growing number of markets. The US Department of
Defense is very active in funding nanotechnology research and major weapons system
contractors are monitoring developments in nanotechnology research and development
with the expectation that as nanotechnologies continue to mature they will find a wide
variety of applications in military systems. Currently most nanotechnology-related
materials and processes are too new for adoption into robust and reliable military
hardware; even so, there is widespread expectation of great promise in new
technologies enabled by nanoscale engineering.







REFERENCES
1. www.evidots.com
2. www.spie.org
3. www.oceanoptics.com
4. www.siena.edu



THE THRUST OF DEVELOPMENT OF NEW TECHNOLOGY NEVER ENDS.


PMDC Micro Motor Speed Control Using Fuzzy Logic Controller

Radhesham G. and A.B.Kulkarni
Department of Applied Electronics, Gulbarga University, Gulbarga- 585 106 (INDIA)
E-mail: radheshamgg@gmail.com

Abstract
PMDC motors find many applications in
robotics, hard disc drives, in computers and
industrial automation. This paper describes
the speed control of a Swiss PMDC motor
(model 2230 U015S) using PWM technique.
The IC UC 3637 provides PWM signal, which
can rotate the PMDC motor, either in
forward/reverse direction. The H-bridge
MOSFET driver has been used to provide
the requisite armature current. Using an
optical encoder and an F-to-V converter, the
feedback loop is closed. PID, auto-tuned PID, FLC
and IFLC have been implemented using
LabVIEW6i, an SCXI 1122 analog input card,
an SCXI 1124 analog output card and a DAQ card.
The experimental results for the PID, auto-
tuned PID, FLC and IFLC are compared. The
IFLC gives the shortest settling time with no
overshoots/undershoots and no steady-state
error. Thus the IFLC is observed to be faster,
more robust, more flexible, more accurate and easier to
configure and implement than the remaining
controllers.

1. Introduction:
The PMDC micro motors are ideal for
servo and positioning systems in all fields.
Their main applications are in the factory
automation, robotics, optical, audio and
video instrumentation, office security
systems, analytical and bio-medical
instrumentation, instrumentation in
electronic communication, defence and
aerospace system/industries.

The Texas Instruments data book [1] reported the
control of a PMDC motor manufactured by
EG&G Torque Systems (model MT-2605-
102CE) using the IC UC3637 as a switched-
mode controller based on the PWM technique,
but succeeded in rotating the motor only at 100
rpm using a manual input velocity command.
F.M. El-Khouly et al [2-3] reported the speed
control of a PMDC motor (ratings: 36W, 24V
and 1400 rpm) at 1400 rpm and 1300 rpm
with settling times of 2.0 and 12 seconds
respectively, and reported the superiority of the
fuzzy logic controller over the conventional
PID controller.




The PMDC micro motor (model: 2230
U015S, rating: 15V, 7mA, 8400 rpm), based
on self-supporting patented skew-wound coil
technology [4], has not so far been subjected
to speed control using the IC UC3637 with an
H-bridge MOSFET driver circuit for the PID
(proportional + integral + derivative), auto-tuned
PID, FLC (fuzzy logic controller) and IFLC
(integrated fuzzy logic controller) techniques at
3000 rpm. Hence we have taken up this detailed
comparative study.
The PWM technique, most often
used, is found most suitable for the speed
control of the PMDC micro motor along with
the controlling circuit IC UC3637 and the H-bridge
MOSFET driver circuit. Hence this
technique and control circuit, along with the
driver circuit, are used in the present work.

2. System configurations:
The block diagram for the speed
control of a PMDC micro motor using SCXI
input/output card is shown in fig.1.




Fig. 1: Block diagram for speed control of the PMDC micro motor (closed loop through the optical encoder)
The PWM signal generated by the UC3637
switched-mode controller IC is fed to the H-
bridge MOSFET driver circuit, which in
turn drives the motor; the optical encoder
assembly attached to the motor shaft
generates TTL pulses. These pulses are
converted into analog voltage using F/V
converter LM331. This analog voltage is fed
to the PC, which is equipped with
LabVIEW6i software and interfaced with
SCXI 1122 analog input card and the DAQ
(PCI MOI 6028E) board.
LabVIEW (Laboratory Virtual
Instrumentation Engineering Workbench) is
a graphical programming language, which
uses icons instead of text to create
applications. LabVIEW program facilitates
virtual instrumentation (VI), which imitates
the appearance and operation of physical
instruments; a VI is defined as a combination of
hardware (DAQ board, SCXI cards, etc.) and
software (PID control toolkit, etc.) with
industry-standard computer technology to
create a user-defined instrumentation solution
[6-10].
The process variable (speed of the
motor) is measured and compared with the
set point to obtain the error e(k). The e(k) is fed
to the controller (either the PID, auto-
tuned PID, FLC or IFLC). Depending on
e(k) and the previous error e(k-1), the
controller gives out a control signal. This
control voltage is fed to the switched-mode
PWM controller IC UC3637 through the DAQ
board and the SCXI 1124 analog output card
[5].
The switched-mode PWM controller
IC (UC3637) generates the pulse width
modulated signal. These signals are
amplified using the H-bridge MOSFET driver
circuit, which in turn drives the
PMDC micro motor. The whole cycle is
repeated till the set point is achieved. This
configuration, with the optical encoder, forms the
closed-loop feedback control system for
controlling the speed of the PMDC micro
motor.
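The closed loop just described (measure the speed, form the error e(k), let the controller act on e(k) and e(k-1), and write the control voltage back out) can be sketched in a few lines of Python. The helper functions standing in for the SCXI input/output path and the gain values are illustrative assumptions, not the paper's actual LabVIEW implementation.

```python
# A minimal discrete PID sketch of the closed loop described above. The I/O
# helpers and the gain values are illustrative assumptions only.
import random
import time

KP, KI, KD = 0.0001, 0.02, 0.00005   # illustrative gains, not the paper's values
SET_POINT = 3000.0                   # rpm
DT = 0.01                            # control period in seconds

def read_speed_rpm() -> float:
    """Stand-in for the F/V converter voltage read through the SCXI 1122 input."""
    return 2900.0 + random.uniform(-50, 50)

def write_control_voltage(volts: float) -> None:
    """Stand-in for the SCXI 1124 analog output feeding the UC3637 PWM controller."""
    pass

integral, previous_error = 0.0, 0.0
for _ in range(5):                   # a few iterations of the while loop
    error = SET_POINT - read_speed_rpm()          # e(k)
    integral += error * DT
    derivative = (error - previous_error) / DT    # uses e(k) and e(k-1)
    control = KP * error + KI * integral + KD * derivative
    write_control_voltage(control)
    previous_error = error
    time.sleep(DT)
```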
3.1 Auto Tuned PID Controller:
Auto tuning is used to improve
performance of controllers. Often, many
controllers are poorly tuned; some are too
aggressive, some are too sluggish. When one
is not sure about the disturbance or process
dynamic characteristics, tuning of PID
controller is difficult; therefore, the need for
auto tuning arises. For the present
application the auto-tuned PID controller is
implemented practically using LabVIEW
and the PID control toolkit. This controller
uses the Ziegler-Nichols heuristic method
for determining the parameters of a PID
controller.
3.2 Fuzzy Logic Controller (FLC):
Fuzzy logic controllers have some
advantages compared to classical controllers,
such as simplicity of control, handling the
complexity of non-linear systems and the ability
to design without an accurate and precise
mathematical model. Motor speed error and
armature voltages are often taken together
with the change of errors as the fuzzy input
to determine the suitable gains (PID or auto-
tuned PID); in order to achieve fast speed
regulation while maintaining inrush voltage
at a preferred level.

In the present application we have
used a 7-member triangular membership
function, viz. positive large (PL), positive
medium (PM), positive small (PS), zero
(ZE), negative small (NS), negative medium
(NM) and negative large (NL). The error e(k)
(= set value - present value) and the change in
error ce(k) (= present error - previous error)
are the two conditions usually monitored by the
fuzzy logic controller. The fuzzy logic controller
produces values of a control variable cu(k)
(= present control - previous control), which
represents the control action; fig.2 shows the
triangular membership function for cu(k). The
rule base is generated accordingly. A typical
rule is as follows:
IF e is PL and ce is NL THEN cu is ZE
Fig. 2: Triangular membership function for the fuzzy-controlled output cu(k), on a universe from -3 to 3.

The 49-rule base editor used for the 7-member triangular membership
function applied for this application is
shown in fig.3. Defuzzification is the
conversion of a fuzzy quantity to a crisp
(analog or digital) quantity, as practical
applications need a crisp control action [11].
In the present application, we have used the
center of gravity (COG) method for
defuzzification, as this method is found most
suitable for the present work.
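A compact sketch of the fuzzy steps just described (triangular membership functions for the seven terms, rule firing, and centre-of-gravity defuzzification) is given below. The universe scaling and the two sample rules are illustrative; the full 49-rule base is the one shown in fig.3.

```python
# A hedged sketch of the fuzzy steps described above: triangular membership
# functions on a normalised [-3, 3] universe, Mamdani-style rule firing and
# centre-of-gravity (COG) defuzzification.
import numpy as np

TERMS = {"NL": -3, "NM": -2, "NS": -1, "ZE": 0, "PS": 1, "PM": 2, "PL": 3}
universe = np.linspace(-3, 3, 601)

def tri(x, centre, width=1.0):
    """Triangular membership centred at 'centre' with unit half-width."""
    return np.maximum(0.0, 1.0 - np.abs(x - centre) / width)

def membership(value):
    """Fuzzification: degree of membership of a crisp value in each term."""
    return {name: float(tri(np.array([value]), c)[0]) for name, c in TERMS.items()}

def fuzzy_output(e, ce, rules):
    """Fire each (e-term, ce-term) -> cu-term rule with min() and aggregate with max()."""
    mu_e, mu_ce = membership(e), membership(ce)
    aggregated = np.zeros_like(universe)
    for e_term, ce_term, cu_term in rules:
        strength = min(mu_e[e_term], mu_ce[ce_term])
        aggregated = np.maximum(aggregated, np.minimum(strength, tri(universe, TERMS[cu_term])))
    if aggregated.sum() == 0:
        return 0.0
    # COG defuzzification: centroid of the aggregated output membership.
    return float(np.sum(universe * aggregated) / np.sum(aggregated))

# Two rules taken from the table in fig.3, e.g. "IF e is PL and ce is NL THEN cu is ZE".
sample_rules = [("PL", "NL", "ZE"), ("PS", "NS", "ZE")]
print(fuzzy_output(e=1.8, ce=-0.7, rules=sample_rules))
```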

Fig. 3: Rule base editor for the two-input FLC (rows: error e; columns: change in error ce)

         ce: NL   NM   NS   ZE   PS   PM   PL
 e = PL      ZE   PS   PM   PL   PL   PL   PL
 e = PM      NS   ZE   PS   PM   PM   PL   PL
 e = PS      NM   NS   ZE   PS   PS   PM   PL
 e = ZE      NM   NM   NS   ZE   PS   PM   PM
 e = NS      NL   NM   NS   NS   ZE   PS   PM
 e = NM      NL   NL   NM   NM   NS   ZE   PS
 e = NL      NL   NL   NL   NL   NM   NS   ZE

3.3 Integrated Fuzzy Logic Controller (IFLC):
In order to enhance the fuzzy logic controller performance, in the present application we have used an FLC plus a PID controller as the IFLC. The IFLC shows considerable improvement over the PID, auto-tuned PID and FLC techniques.
4. VI Block Diagram for IFLC using
LabVIEW:
The VI block diagram of IFLC for
the speed control of PMDC micro motor is
shown in fig.4.

Fig.4: VI block diagram for the speed control of
PMDC micro motor using LabVIEW
As the IFLC is a suitable cascade combination
of the FLC and PID controller, the VI block
diagram has both FLC and PID controllers.
We have used both the fuzzy logic toolkit and the
PID control toolkit with LabVIEW for this
application. First the input voltage from the F/V
converter is accessed by an Analog Input (AI)
channel. This analog voltage is converted
into the corresponding speed by using adders
and multipliers in LabVIEW. The first DBL
(double precision floating point) constant represents
the set value of 3000 rpm. The error and
change in error are found by using two
subtractors. These two signals are fed to the
FLC. The FLC gives the control action. This
is added to the set point, which in turn acts
as a new set point for the PID controller. The
PID controller then gives the final control
voltage to be given to the PMDC micro
motor. This voltage is sent to the PMDC
motor through an Analog Output (AO) channel,
which in turn controls the desired speed
through the switched-mode PWM-based
controller IC UC3637 and the H-bridge MOSFET
driver circuit. The measured speed status is
displayed on the waveform chart monitor.
All these events are carried out in a feedback
loop (labeled as the while loop).
The VI diagrams for the FLC, auto-
tuned PID and PID controllers are also
constructed using the above procedure, but
are not shown due to lack of space.
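A minimal sketch of the cascade just described, in which the FLC's control action shifts the set point seen by the PID controller, is given below. The placeholder fuzzy_control() function and the numeric gains are illustrative assumptions rather than the toolkit code used in the paper.

```python
# A hedged sketch of the IFLC cascade described above: the FLC output shifts
# the set point, and a PID controller then produces the control voltage.

def fuzzy_control(error: float, change_in_error: float) -> float:
    """Placeholder FLC: a bounded correction derived from e(k) and ce(k)."""
    correction = 0.6 * error + 0.3 * change_in_error
    return max(-200.0, min(200.0, correction))      # rpm-scale correction

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measured: float) -> float:
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.0001, ki=0.02, kd=0.00005, dt=0.01)
set_point, speed, prev_error = 3000.0, 2800.0, 0.0

# One pass of the while loop: the FLC shifts the set point, the PID acts on it.
error = set_point - speed
flc_action = fuzzy_control(error, error - prev_error)
control_voltage = pid.update(set_point + flc_action, speed)
print(control_voltage)
```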

5. Experimental Results:
The software (front panel and VI
diagram) and hardware are designed for the
PID, auto-tuned PID, FLC and IFLC for the
speed control of PMDC micro motor using
virtual instrumentation LabVIEW platform.
The experimental transient responses for the
said four studies are shown in fig.5.

(Axes: speed of the PMDC motor in rpm versus time in seconds; traces: IFLC, FLC, AutoPID and PID.)
Fig.5: Experimental transient responses of
PID, auto-PID, FLC and IFLC for PMDC
micro motor

The PID parameters (Kp = 0.0001, Ti = 0.0053
and Td = 0.0032) are given to the auto-
tuned PID controller. The auto-tuned PID
controller automatically tunes the PID
parameters using the Ziegler-Nichols
heuristic method. This results in an adaptive
PID controller.
In the FLC (or IFLC) controller,
more than one rule may be fired at the same
time, but with varying strengths. This leads
to a crisp control action through the process
of defuzzification. This results in a better
transient response compared to the PID
and auto-tuned PID controllers.
We have obtained the following
settling times for the different controllers:
1. PID controller = 4.2 sec
2. Auto-tuned PID controller = 2.8 sec
3. FLC controller = 2.0 sec
4. IFLC controller = 1.0 sec
The IFLC has no overshoots/undershoots
and no steady-state error. The set point
is 3000 rpm. The settling time of the IFLC
is the lowest.
Conclusion:
For the first time, we have
controlled the speed of Swiss PMDC micro
motor (2230 U015S) at 3000 rpm using
the switched-mode PWM-based controller IC
UC3637 with an H-bridge MOSFET driver
circuit. The PID, auto-tuned PID, FLC and
IFLC have been designed and practically
verified for the speed control of the PMDC
micro motor. The IFLC (PID + FLC) gives the
shortest settling time (1 sec), no
overshoots/undershoots and no steady-state
errors. This comparative study reveals that
the IFLC is superior to the PID, auto-tuned PID
and FLC, and is observed to be
faster, more robust, more flexible and more accurate.

References:
1. "UC 3637 Switched Mode Controller for DC Motor Drive", Unitrode Application Note U-102, Power Supply Control Products, Texas Instruments data book, section 8, pp 126-135 (2000)
2. F.M. El-Khouly et al, "A Rule Based Fuzzy Logic Speed Tracking Regulator for Permanent Magnet Chopper Fed DC Motor Drive", Electrical and Computer Engineering, 1993 Canadian Conference on, 14-17 September 1993, IEEE, Vol. 2, pp 1061-1064 (1993)
3. F.M. El-Khouly et al, "Artificial Intelligent Speed Control Strategies for Permanent Magnet DC Motor Drives", Industry Applications Society Annual Meeting, 1994, 2-6 October 1994, IEEE, Vol. 1, pp 377-385 (1994)
4. Faulhaber DC Motors MINIMOTORS, Minimotors SA, 6980 Croglio, Switzerland (2000)
5. Sen P.C., "Principles of Electric Machines and Power Electronics", John Wiley and Sons, 2/e, N.Y. (2001)
6. National Instruments, Texas, USA, SCXI "Getting Started with SCXI" (2000)
7. National Instruments, Texas, USA, DAQ, PCI series User Manual (1999)
8. National Instruments, Texas, USA, LabVIEW User Manual (2000)
9. National Instruments, Texas, USA, "PID Control Toolkit for G" reference manual (1998)
10. National Instruments, Texas, USA, "Fuzzy Logic for G Toolkit" reference manual (1997)
11. Klir G.J. and Yuan B., "Fuzzy Sets and Fuzzy Logic: Theory and Applications", Prentice-Hall, Englewood Cliffs, N.J. (1995)


FUZZY LOGIC BASED OVER CURRENT PROTECTION FOR
MV NETWORKS
Presented by,
P.srinivas,
ece,
Email:sreenu_e432@yahoo.co.in
Phone:9885013292.

Amit kumar mahato,
ece,
Email:mahato_amit@rediffmail.com
Phone:9441127410.
SISTAM college of engg.,
Srikakulam.

Abstract - In this paper a novel approach to over current protection
stabilization based on the fuzzy logic technique is presented. The scheme is intended
for MV overhead lines for which the autoreclosure function is applied to enhance
reliability of supply, in particular for feeders supplying a part of the network
with numerous loaded MV/LV transformers that may face inrush conditions after
re-energizing the line. Traditional harmonic blocking is not fully effective for line
over current protection because of the different current spectrum as measured at the
MV busbars. The new FL protection developed takes both current and voltage signals
for further processing, and the final decision is issued as a result of partial decision
support coefficients. Application of signal fuzzification and fuzzy settings enabled
obtaining a relay with increased sensitivity and reliability that is much faster than
commonly available solutions. The developed fuzzy protection scheme has been tested
with EMTP signals and compared with other standard protection approaches.
Introduction to fuzzy control:
Fuzzy logic imitates the logic of human thought, which is much less rigid than
the calculations computers generally perform. Consider the task of driving a car. As
you drive along, you notice that the stoplight ahead is red and the car in front of you is
braking. Your (very rapid) thought process might be something like this: I see that I
need to stop. The road is wet because it's raining. The car is only a short distance in
front of me. Therefore, I need to apply significant pressure to the brake pedal
immediately. This reasoning takes place subconsciously, of course, but that's the
way our brains work: in fuzzy terms.
Human brains do not base such decisions on the precise distance to the car ahead
or the exact coefficient of friction between the tires and the road, as an embedded
computer might. Likewise, our brains do not use a Kalman filter to derive the optimal
pressure that should be applied to the brakes at a given moment. Our brains use
common-sense rules, which seem to work pretty well.
But when we finally get around to pressing the brake pedal, we apply an
exact force, let's say 23.26 pounds. So although we reason in fuzzy terms, our final
actions are considerably less so. The process of translating the results of fuzzy
reasoning to a nonfuzzy action is called defuzzification.
Let's think about how a fuzzy cruise control system might work. The cruise
controller maintains a constant vehicle speed in spite of never-ending changes in road
grade, wind resistance, and other variables. The controller does this by comparing the
commanded speed with the actual speed. We can call the difference between
commanded and actual speed the current error. The error change is the difference in error
from one sample period to the next.
If the current error is a small positive number (vehicle speed is slightly slower than
commanded), the controller needs to slightly increase the throttle angle in order to
speed up the vehicle appropriately.
If both the current error and the error change are positive, the vehicle is going too
slowly and decelerating. In this case, the controller needs to increase the throttle angle
by a larger amount to achieve the desired speed.
In the paper the OC protection operation in distribution networks is studied,
with particular consideration of the above-mentioned effects of ARC on line-transformer
feeders. An EMTP-ATP model of a fragment of a typical Polish MV network, as well as the
simulation results of a few ARC-cleared fault cases, are described. Furthermore, a
new algorithm for OC relay stabilization with application of fuzzy logic is
described and its performance is compared to traditional stabilization versions with
harmonic blocking (second, or second, third and fifth harmonics). The FL reasoning
unit developed takes advantage of parallel fuzzy processing of numerous criteria
(current fundamental component amplitude, harmonic content, etc.) and their fuzzy
aggregation, leading to the final protection decision.
The proposed fuzzy OC protection scheme has been tested with signals
generated with use of the EMTP-ATP programme as well as field recordings; the designed
protection proved to be reliable and much more sensitive than the traditionally used
OC relays, assuring sufficient stabilization for the cases of transformer inrush currents
evoked by autoreclosure operation.
The over current (OC) principle belongs to the oldest and most widely used
criteria in power system protection. It is often applied as primary protection for
distribution lines, small transformers and motors, as well as backup protection of
transmission lines, transformers, generators and motors. The idea of the method is
simple; however, it also has numerous drawbacks. Over currents are characteristic of
phase and ground faults, but they also occur during normal operation, e.g. when energizing
power transformers, induction motors, etc. The over current criterion can thus be used
for detecting faults, providing over currents of the kind mentioned above either cannot
occur in the particular system or are prevented from causing tripping by functions
included in the protection (e.g. time delay, blocking by harmonic detectors,
discrimination by pick-up setting, etc.).
Distribution networks in Poland are mainly of a radial structure. The overhead lines,
cables and transformers in such MV networks are usually protected with time-delayed OC
relays, so that over currents which are higher than the pickup setting but last shorter than
the time delay (fixed or depending on the current level) cannot cause tripping of the circuit
breaker. Since the majority of faults in MV networks are of a transient (arcing) nature,
i.e. they disappear either spontaneously or as a result of tripping the faulty feeder, the
autoreclosure (ARC) relays usually supplement the main OC protection. Excluding persistent
fault cases, the ARC schemes enable fast re-establishment of the normal state of the power
system, thus restoring power supply to the customers. Re-energizing the feeder after a
defined dead time may be successful after a single shot or, sometimes, after two or more
attempts. In the case of MV feeders supplying a part of the network with numerous MV/LV
transformers, certain problems with the OC relay setting may arise after a successful ARC
cycle. Restoring the power supply may result in temporarily increased currents measured at
the relay location, which can be interpreted as over current and consequently lead to feeder
tripping (Fig.1).











Fuzzy Logic Based Protection:
Fuzzy Systems in power system protection:
The fuzzy signal processing and fuzzy reasoning techniques (belonging to the
family of Artificial Intelligence) have gained remarkable attention for at least 15 years
and numerous studies have been performed in world-leading research centers with
regard to their application also for power system protection and control tasks. Fuzzy
Logic systems (FL) are well suited for solving various decision-making problems,
especially when the precise analytical model of the process/object to be tracked is not
known or is very complicated (e.g. non-linear) [2]. Analysis of power system faults
and other abnormal phenomena belongs to the family of tasks that can be quite well
carried out with use of FL-based decision modules or classifiers.
Fuzzy inference Systems (FIS) employ the theory of fuzzy sets and fuzzy if-then
rules to derive an output. Various types of FIS are often used either for fuzzy
modeling or fuzzy classification purposes. Typically a FIS scheme performs its action
in several steps including (Fig.2):

Fuzzification (comparing the input values with membership functions to
obtain membership values of each linguistic term),
Fuzzy reasoning (firing the rules and generating their fuzzy or crisp con
sequents),
Defuzzification (aggregating rule consequents to produce a crisp output).






The theory of fuzzy sets has met considerable approval owing to its ability
to describe quantitatively the uncertainties appearing during the operation of a
protective relay. Implementation of fuzzy criteria signals together with fuzzy settings
brings an antidote to uncertainties caused by dynamic measurement errors and may
constitute a remedy against problems related to sharp boundaries, in the universe of
criteria signals, between areas of faulty and failure-free operation of the protected plant.
Sample applications of this approach to power system protection include fault type
identification [3] and multi-criteria protection of power transformers [4]. An idea of
FL-based generator protection against out-of-step conditions was developed by the
authors of this paper and published at the IEEE PES Summer Meeting in Vancouver
in 2001 [5].
In the literature one can also find protection or control schemes employing a
fuzzy processing module supplemented with other techniques, e.g. neural networks,
wavelet transformation, etc. The resulting hybrid structures combine the strengths and
eliminate the weaknesses of the particular techniques, which brings about increased
efficiency and reliability of the scheme. Applications of fuzzy hybrid solutions include,
e.g., a fuzzy-wavelet scheme for fault classification [6] and location [7] as well as
fuzzy-neural distance protection [8].
Fuzzy Logic over current protection developed:
In this paper a fuzzy approach to overcurrent protection of MV feeders is
studied. The protection solution developed (Fig.3) takes advantage of fuzzy signal
processing and fuzzy comparison to issue the trip decision.







The first three blocks in Fig.3, i.e. analogue filters, A/D converters and digital
signal processing unit, are typically applied in contemporary digital protection relays.
At the output of this path certain criterion values are issued, based on which the
protection decision is to be worked out, usually by their comparison with pre-set
thresholds or characteristics. Here, additional signal processing is performed to obtain
fuzzified criterion signals (Fig.4).








Signal fuzzification is made according to the following
formula:
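The formula itself appears only as a figure in the source; from the textual description that follows, a plausible reconstruction (an assumption, not a verbatim copy of the original equation) is

$$
X_{\min}(k)=\min_{0\le i<N_1/4} X(k-i),\qquad
X_{\max}(k)=\max_{0\le i<N_1/4} X(k-i),\qquad
X_{\mathrm{av}}(k)=\frac{4}{N_1}\sum_{i=0}^{N_1/4-1} X(k-i).
$$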




Which can be interpreted as finding minimum, maximum and average values
of the criterion signal X over a time period corresponding to a quarter of fundamental
frequency cycle, with N1 being number of samples within 20ms. The criterion signals
taken into account were: amplitude of fundamental frequency phase current11,
relative level of second to fundamental harmonic h2 (a ratio of amplitudes 12/11) and
line-to-line voltage amplitude UN. All the amplitudes were calculated with
application of full cycle Fourier algorithm (sine-cosine filters plus determination of
complex vector norm). the latter two variables are intended to serve as protection
stabilization criteria. High values of coefficient h2 indicate temporary increase of
second harmonic current I2, which may take place either due to MV/LV transformers
inrush phenomenon after successful ARC cycle or as an effect of current transformer
saturation during switching the feeder on a persistent fault (unsuccessful ARC). To
discriminate between the two cases the voltage amplitude is additionally measure,
with decreased values in dicating a persistent fault on the line.
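Since the fuzzification formula itself is not reproduced above, the following C sketch only illustrates the interpretation given in the text: the minimum, maximum and average values of a criterion signal over a quarter-cycle window are computed and grouped into a fuzzy number. The number of samples, the sample data and the triangular representation are assumptions made for the example.

/* Fuzzification of a criterion signal as described above: min, max and
 * average over a quarter of the fundamental-frequency cycle form a fuzzy
 * number (Xmin, Xavg, Xmax).  N1 samples per 20 ms cycle and the sample
 * values are assumed for illustration. */
#include <stdio.h>

#define N1 20                         /* samples per 20 ms cycle (assumed)  */

struct fuzzy_num { double lo, peak, hi; };

static struct fuzzy_num fuzzify(const double x[], int n_now)
{
    struct fuzzy_num f;
    int k, win = N1 / 4;              /* quarter-cycle window               */
    double sum = 0.0;
    f.lo = f.hi = x[n_now];
    for (k = 0; k < win; k++) {
        double v = x[n_now - k];
        if (v < f.lo) f.lo = v;
        if (v > f.hi) f.hi = v;
        sum += v;
    }
    f.peak = sum / win;               /* average value taken as the peak    */
    return f;
}

int main(void)
{
    double X[N1] = { 98, 101, 99, 103, 100, 102, 97, 100, 99, 101,
                     100, 98, 102, 101, 99, 100, 103, 98, 100, 101 };
    struct fuzzy_num f = fuzzify(X, N1 - 1);
    printf("fuzzified criterion signal: (%.1f, %.1f, %.1f)\n",
           f.lo, f.peak, f.hi);
    return 0;
}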
The criterion signals mentioned above are fuzzified in order to obtain a relative
measure of their membership to a given group of cases. The fuzzy criterion signals are
compared with fuzzy settings that have been chosen as shown in Fig.5. The fuzzy settings
S for the current amplitude I1 and the voltage amplitude UN are crisp (a sudden change from
0 to 1, or from 1 to 0, after exceeding a certain threshold value), while the function S(h2)
is a kind of saturable curve, changing gradually from 0 to 1. Its shape and
parameters have been found after analysis of the level of I2 for numerous cases of
faults with successful and unsuccessful
ARC (Fig.6). The result of the fuzzy
comparison (see Fig.7, here for the variable
h2) is defined as the ratio of the area F under
both the setting and the signal membership
functions to the area F1 under the criterion
signal membership function:

d(X) = F / F1

As a result of the fuzzy comparison the non-fuzzy coefficients d(I1), d(UN), d(h2)
are determined. The calculated values of d specify the degree of satisfaction of a given
criterion (degree of confidence that the fuzzy threshold is exceeded) and may take values
between 0 and 1. The calculated variables are interpreted as follows:
1) if d(I1) = 1 & d(h2) = 0, a clear indication of a persistent fault is issued, which
implies the necessity of immediate line tripping;
2) if d(I1) = 1 & d(h2) = 1 & d(UN) = 0, a situation of magnetizing inrush in
MV/LV transformers in the depth of the network (supplied from the feeder to be
protected) is confirmed, i.e. the feeder should not be tripped;
3) if d(I1) = 1 & d(h2) = 1 & d(UN) = 1, the second harmonic current I2 is
evoked due to saturation of current transformers, which, simultaneously with full
support for exceeding of the fundamental frequency current threshold I1, gives a reliable
indication of a line-to-line fault, i.e. the feeder should be tripped.
The situations listed above encompass only the most extreme and
unquestionable cases. The space in between cases 1) to 3) includes, of course, all
uncertain situations, which are also expected to be well handled by the scheme.
Thanks to the introduced fuzzy features the borders between the classes of events to be
discriminated become fuzzy and the decision, even in doubtful cases, may be taken
with higher confidence.
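The fuzzy comparison and the decision rules above can be sketched numerically as follows. Here d is simply the ratio F/F1 of the area under the minimum of the setting and signal membership functions to the area under the signal membership function; the sampled membership values and the crisp comparisons used in the example are illustrative assumptions.

/* Sketch of the fuzzy comparison d = F / F1 and of the three extreme decision
 * cases 1)-3).  The sampled membership functions and test values are assumed
 * for illustration. */
#include <stdio.h>

/* degree of exceeding the fuzzy setting: area ratio F / F1 */
static double fuzzy_compare(const double mu_sig[], const double mu_set[], int n)
{
    double F = 0.0, F1 = 0.0;
    int i;
    for (i = 0; i < n; i++) {
        double m = (mu_sig[i] < mu_set[i]) ? mu_sig[i] : mu_set[i];
        F  += m;                      /* area under min(setting, signal)    */
        F1 += mu_sig[i];              /* area under the signal membership   */
    }
    return (F1 > 0.0) ? F / F1 : 0.0;
}

/* decision rules for the extreme, unquestionable cases */
static const char *decide(double dI1, double dh2, double dUN)
{
    if (dI1 >= 1.0 && dh2 <= 0.0)               return "trip: persistent fault";
    if (dI1 >= 1.0 && dh2 >= 1.0 && dUN <= 0.0) return "block: transformer inrush";
    if (dI1 >= 1.0 && dh2 >= 1.0 && dUN >= 1.0) return "trip: fault with CT saturation";
    return "intermediate case: weigh the support coefficients";
}

int main(void)
{
    double mu_sig[5] = { 0.0, 0.5, 1.0, 0.5, 0.0 };   /* signal membership  */
    double mu_set[5] = { 0.0, 0.3, 0.7, 1.0, 1.0 };   /* fuzzy setting S    */
    double d_h2 = fuzzy_compare(mu_sig, mu_set, 5);
    printf("d(h2) = %.2f -> %s\n", d_h2, decide(1.0, d_h2, 0.0));
    return 0;
}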
The block scheme of reasoning with fuzzy variables is shown in Fig.8.
Depending on the value of the voltage-related support coefficient d(UN), the final
decision is made on the basis of either:

the single current support coefficient d(I1), for d(UN) greater than 0.5, or

the resulting support coefficient d, combining d(I1) and d(h2), for d(UN) not greater than 0.5.

High values of the resulting support coefficient d are observed for the cases of
persistent line-to-line faults with or without current transformer saturation, which
indubitably should lead to line tripping. Other cases, including magnetizing inrush
situations, imply relay stabilization and further uninterrupted supply of
electrical energy to the customers in the network.
Relay Testing and Comparative Analysis:
MV network structure and testing signals:

The new fuzzy logic based protection has been verified with use of signals
generated with EMTP-ATP software. To obtain data for scheme testing, a model of a
typical Polish radial MV network has been prepared (Fig.9). The network under study
(voltage level 20 kV) was operated with the neutral point grounded via a Petersen coil, i.e.
with compensation of the capacitive current, connected through a Zy grounding
transformer. The feeders outgoing from the MV bus bars were of mixed type, i.e.
overhead lines, cable sections or both. The MV lines were distributing power to
numerous domestic and industrial loads connected to the network through dedicated
20/0.4 kV transformers, with rated power from 63 kVA to 1.2 MVA. The models of the
transformers applied included details related to the parameters of the transformer
windings, active and reactive power losses as well as non-linear magnetizing
characteristics with a full hysteresis loop (ATP inductor model type 96). The simulation
results obtained from the model prepared were in good conformity with the signals
registered in a real MV network of similar structure [10]. It was also confirmed that
the network model was sufficiently good for accurate rendering of phenomena related
to faults and autoreclosure-induced transients.






Various line-to-ground and line-to-line
fault cases were simulated at the beginning and at the end of selected feeders in the
network. The faults were of both transient and persistent type, with three-phase
autoreclosure initiated for overhead lines. The protection has been tested with current
signals from cases with both successful and unsuccessful ARC. Such model
parameters as the feeder switching-on angle (after the ARC pause), the degree of MV/LV
transformer loading, the load type, etc. were varied to obtain a wide variety of
fault conditions. Feeder currents as well as MV bus bar voltages were registered in
ATP output files for further processing.
FL Protection testing:
The algorithms and logic of the proposed fuzzy overcurrent protection have been
implemented in MATLAB programming code. Numerous testing attempts have been
performed with the aim to optimize the relay fuzzy settings, threshold values and
fuzzification window lengths. The optimization goal was to obtain a protection
solution characterized by the shortest possible operating time for persistent faults, while at the
same time being absolutely stable for transformer inrush cases after a successful ARC cycle.
The operation of the proposed fuzzy overcurrent protection for MV lines has been
compared with other versions of available OC relays:
A - classical OC scheme without stabilization;
B - as A, supplemented with second harmonic restraint (set low or high) [9];
C - OC scheme with 2nd, 3rd and 5th harmonic stabilization, as described in [11].
The scheme B, with second harmonic restraint, was modeled on the simple
stabilization idea typically applied e.g. in transformer protection. A flag for exceeding a
certain amount of 2nd harmonic content was used for relay blocking purposes. The
problems with proper setting of such a restraint will be shown below, which confirm the
impossibility of ideally solving the sensitivity vs. stability dilemma with a single crisp
relay setting.
The solution C is based on correcting the OC relay input current by subtracting a
current term related to transient components evoked by switching on the line after the
ARC pause, according to:


where: ks - stabilization coefficient, I(n), I(n-k) - current samples before
correction, IN - nominal current of the protected overhead line, k2h, k3h, k5h -
coefficients of stabilization for the 2nd, 3rd and 5th harmonic, respectively, h2, h3 and
h5 - percentage content of the 2nd, 3rd and 5th harmonic currents. The corrected current
value was further compared with a typically set OC threshold.
Below, two cases of ARC attempts are shown: a single-phase transient fault (relay
blocking required, Fig.10) and a line-to-line persistent fault (fast tripping necessary,
Fig.11). Operation of the overcurrent protection solutions A and B is compared
with the new FL based relay developed. Direct comparison with version C was not
possible here due to the lack of all required parameters and relay settings.
It is seen (Fig.10b) that the classical OC scheme without stabilization - version A -
responded falsely to the situation of switching on a healthy line (overfunction). Numerous
simulation runs have shown, as anticipated, that avoiding unselective tripping for
magnetizing inrush conditions after a successful ARC is possible only at the cost of an increased
trip delay (not less than 100 ms for most cases) or a significant raising of the relay pick-up
setting (decreased sensitivity).
Application of relay blocking depending on the second harmonic level
(OC version B) enables a considerable increase of the resulting relay selectivity (Fig.10c).
Nevertheless, the restraint threshold should be chosen with care, since too
small values may imply an elongated operation time for persistent line-to-line faults
(Fig.11c), without complete elimination of possible unselective trippings for
transformer inrush cases. It is obvious that along with an increase of the 2nd harmonic
threshold faster relay operation is expected for persistent faults (Fig.11d), yet at the cost of a
considerable deterioration of relay selectivity (Fig.10d).
Comparison of the FL based scheme with the stabilization algorithm with
temporary current correction (version C) was performed indirectly, i.e. by using the
result statistics given in paper [11]. The tests have shown that stable and selective
operation of relay C was obtainable with a ca. 30 ms internal time delay. It should be
stated, however, that such operation parameters could be reached only with the introduction of
numerous coefficients and threshold values, which may unfavorably influence the
universality of the scheme and cause problems with relay setting. The new FL based
scheme turned out to be even faster than relay C, being able to respond to all cases
with a time delay of less than 20 ms. Such a high operation speed was obtained despite
the fact that for calculation of the criterion values (amplitudes of I1, I2 and UN) an
algorithm with full-cycle filtering was used. Relay C was a bit slower, although it
employed the much faster wavelet transformation for signal processing.
CONCLUSIONS
In the paper a new Fuzzy Logic based overcurrent protection for MV
overhead lines is proposed. The scheme designed is well suited for interaction with
standard autoreclosure relays. Its main feature is the improved stabilization for
transients evoked by switching on the feeder after a successful ARC cycle due to
possible inrush currents of the supplied MV/LV transformers.
The FL-based OC protection scheme has been thoroughly tested with ATP-
generated power system signals. The new relay was able to classify properly all the
considered simulated fault/ARC cases. No unselective responses to transients due to
transformer inrush have been observed. With the FL algorithm the decisions were taken
earlier and much more reliably than with the use of other OC protection systems. Wide
robustness of the scheme with respect to changes of both network
configuration and fault parameters has also been confirmed.




REFERENCES
[1] H. Ungrad, W. Winkler and A. Wiszniewski, Protection Techniques in Electrical
Energy Systems, Marcel Dekker, Inc., New York, 1995.
[2] J.M. Mendel, "Fuzzy Logic Systems for Engineering: A Tutorial", Proc. of the
IEEE, vol. 83, no. 3, pp. 345-377, 1995.
[3] A. Ferrero, S. Sangiovanni, E. Zappitelli, "A fuzzy-set approach to fault type
identification in digital relaying", IEEE Trans. on Power Delivery, vol. 13,
no. 4, pp. 169-175, 1995.














FUZZY LOGIC IN
AUTOMOTIVE ELECTRONICS






Presented by
V.V.S.SARATH P.NEETH PRAPUL

Sarath.vemuri@gmail.com prapul2u@yahoo.co.in

III YEAR ECE


NARAYANA ENGINEERING COLLEGE

NELLORE

ANDHRA PRADESH


ABSTRACT
Automobiles have become an
integral part of our daily life. The
development of technology has
improved the automobile industry in
both cost & efficiency. Still, accidents
remain a challenge to technology.
Highway accident news is frequently
found in the newspapers. Automobile
speeds have increased with
developments in technology over the
years, and the complexity of
accidents has also increased. At higher
speeds the accidents prove to be more
fatal. Man is intelligent, with reasoning
power, and can respond to any critical
situation. But under stress and tension
he falls prey to accidents. The
manual control of speed & braking of a
car fails during anxiety. Thus an
automated speed control & braking
system is required to prevent
accidents. This automation is possible
only with the help of Artificial
Intelligence (Fuzzy Logic).
In this paper, a Fuzzy Logic
control system is used to control the
speed of the car based on the obstacle
sensed. The obstacle sensor unit
senses the presence of the obstacle.
The sensing distance depends upon the
speed of the car. Within this distance,
the angle of the obstacle is sensed and
the speed is controlled according to the
angle subtended by the obstacle. If the
obstacle cannot be crossed by the car,
then the brakes are applied and the car
comes to rest before colliding with the
obstacle. Thus, this automated fuzzy
control unit can provide an accident
free journey.
INTRODUCTION
Fuzzy logic is best suited for
control applications, such as
temperature control, traffic control or
process control. Fuzzy logic seems to
be most successful in two kinds of
situations:
i) Very complex models
where understanding is
strictly limited, in fact,
quite judgmental.
ii) Processes where human
reasoning, human
perception, or human
decision making are
inextricably involved.
Our understanding of physical
processes is based largely on imprecise
human reasoning. This imprecision
(when compared to the precise
quantities required by computers) is
nonetheless a form of information that
can be quite useful to humans. The
ability to embed such reasoning in
complex problems is the criterion by
which the efficacy of fuzzy logic is
judged.
Undoubtedly this ability cannot
solve problems that require precision -
problems such as shooting a precision
laser beam over tens of kilometers in
space; milling machine components to
accuracies of parts per billion; or
focusing a microscopic electron beam
on a specimen the size of a nanometer.
The impact of fuzzy logic in these
areas might be years away if ever. But
not many human problems require
such precision - problems such as
parking a car, navigating a car among
others on a free way, washing clothes,
controlling traffic at intersections & so
on. Fuzzy logic is best suited for these
problems which do not require high
degree of precision.
Fuzzy Vs. Probability:
Fuzziness describes the
ambiguity of an event, whereas
randomness (probability) describes the
uncertainty in the occurrence of the
event.
An example involves a
personal choice. Suppose you are
seated at a table on which rest two
glasses of liquid. The liquid in the first
glass is described to you as having a
95 percent chance of being healthful
and good. The liquid in the second
glass is described as having a 0.95
membership in the class of healthful
& good liquids. Which glass would
you select, keeping in mind that the first glass
has a 5 percent chance of being filled
with non-healthful liquids, including
poisons?
What philosophical distinction
can be made regarding these two forms
of information? Suppose we are
allowed to test the liquids in the
glasses. The prior probability of 0.95
in each case becomes a posterior
probability of 1.0 or 0; i.e., the liquid
is either benign or not. However, the
membership value of 0.95, which
measures the degree to which the liquid
is healthful & good, remains 0.95
after measuring & testing. This
example illustrates very clearly the
difference in information content
between chance and ambiguous events.
Complexity of a System vs. Precision
in the model of the System:
For systems with little
complexity, hence little uncertainty,
closed-form mathematical expressions
provide precise descriptions of the
systems. For systems that are a little
more complex, but for which
significant data exist, model-free
methods, such as artificial neural
networks, provide a powerful and
robust means to reduce some
uncertainty through learning, based on
patterns in the available data. Finally,
for the most complex systems, where
few numerical data exist and where
only ambiguous or imprecise
information may be available, fuzzy
reasoning provides a way to
understand system behaviour by
allowing us to interpolate
approximately between observed input
and output situations.

Fuzzy Set vs. Crisp Set:
A classical set is defined by
crisp boundaries; i.e., there is no
uncertainty in the prescription or
location of the boundaries of the set.
A fuzzy set, on the other hand, is
prescribed by vague or ambiguous
properties; hence its boundaries are
ambiguous.

If complete membership in a
set is represented by the number 1, and
no membership is represented by 0,
then point C must have some
intermediate value of membership
(partial membership in the fuzzy set Ã)
on the interval [0, 1]. Presumably the
membership of point C in Ã
approaches a value of 1 as it moves
closer to the central (unshaded) region
of Ã, and the membership of point C in
Ã approaches a value of 0 as it moves
closer to leaving the boundary region
of Ã.
Membership Function:
A membership function
characterizes the fuzziness in a fuzzy
set - whether the elements in the set are
discrete or continuous - in a graphical
form for eventual use in the
mathematical formalisms of fuzzy set
theory. Just as there are an infinite
number of ways to characterize
fuzziness, there are an infinite number
of ways to graphically depict the
membership functions that describe
fuzziness.
Features of membership function:
The core of a membership
function for some fuzzy set Ã is
defined as that region of the universe
that is characterized by complete and
full membership in the set Ã, i.e. the
core comprises those elements x of the
universe such that μÃ(x) = 1.

The support of a membership
function for some fuzzy set Ã is
defined as the region of the universe
that is characterized by non-zero
membership in the set Ã. That is, the
support comprises those elements x of
the universe such that μÃ(x) > 0.

The boundaries of a
membership function for some fuzzy
set Ã are defined as the region of the
universe containing elements that have
non-zero membership, but not
complete membership. That is, the
boundaries comprise those elements x
of the universe such that 0 < μÃ(x) < 1.
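For illustration only, the short C sketch below classifies points of a universe into core, boundary and points outside the support, using an assumed triangular membership function (the function and its parameters are not taken from this paper).

/* Core, support and boundary of a fuzzy set A~, illustrated with an assumed
 * triangular membership function.  Support = core plus boundary. */
#include <stdio.h>

static double mu_A(double x)                 /* assumed membership function */
{
    if (x <= 2.0 || x >= 8.0) return 0.0;
    return (x < 5.0) ? (x - 2.0) / 3.0 : (8.0 - x) / 3.0;
}

int main(void)
{
    double x;
    for (x = 0.0; x <= 10.0; x += 1.0) {
        double m = mu_A(x);
        const char *region = (m >= 1.0) ? "core"
                           : (m >  0.0) ? "boundary"
                           : "outside the support";
        printf("x = %4.1f  mu(x) = %.2f  -> %s\n", x, m, region);
    }
    return 0;
}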
Fuzzification:
Fuzzification is the process of
making a crisp quantity fuzzy. We do
this by simply recognizing that many
of the quantities that we consider to be
crisp & deterministic are actually not
deterministic at all. They carry
considerable uncertainty. If the
uncertainty happens to arise because
of imprecision, ambiguity, or vagueness,
then the quantity is fuzzy
and can be represented by a
membership function.
In this paper the intuition method
is used for fuzzification of the input
variables, as it is very simple.
Defuzzification:
Defuzzification is the
conversion of a fuzzy quantity to a
precise quantity, just as fuzzification is
the conversion of precise quantity to a
fuzzy quantity.
Some of the defuzzification
techniques are:
1. Max-Membership Principle:
Also known as the height
method, this scheme is limited to
peaked output membership functions. This
method is given by the algebraic
expression:

μC(z*) ≥ μC(z)  for all z ∈ Z
2. Centroid Method:
This procedure (also called
center of area, center of gravity) is the
most prevalent & physically appealing
of all the defuzzification methods; it is
given by the algebraic expression:

z* = ∫ μC(z)·z dz / ∫ μC(z) dz
3. Weighted Average Method:
This method is only valid for
symmetrical output membership
functions. It is given by the algebraic
expression:

z* = Σ μC(z̄)·z̄ / Σ μC(z̄),  where Σ
denotes an algebraic sum.
4. Mean-Max Membership:
This method (also called
middle-of-maxima) is closely related
to the first method, except that the
locations of the maximum membership
can be non-unique (i.e., the maximum
membership can be a plateau rather
than a single point). This method is
given by the expression:

z* = (a + b) / 2, where a & b are
shown in the figure.

In this paper, the centroid method
is used for defuzzification of the output
variables.
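As an illustration, the following C sketch applies the max-membership, centroid and mean-max methods to an assumed sampled output membership function; the weighted average method is omitted because it works on the individual symmetric output sets of each rule. The shape of the sampled set is an assumption made only for the example.

/* Max-membership, centroid and mean-max defuzzification applied to a sampled
 * aggregate output membership function mu_C(z) on z in [0, 10].  The sampled
 * shape (a trapezoid with a plateau between z = 4 and z = 6) is assumed. */
#include <stdio.h>

#define N 101

int main(void)
{
    double z[N], mu[N], mu_max = 0.0, a = -1.0, b = -1.0, num = 0.0, den = 0.0;
    int i;

    for (i = 0; i < N; i++) {                 /* assumed output fuzzy set    */
        z[i]  = 0.1 * i;
        mu[i] = (z[i] < 4.0) ? z[i] / 4.0
              : (z[i] < 6.0) ? 1.0            /* plateau of maximum values   */
              : (10.0 - z[i]) / 4.0;
    }

    for (i = 0; i < N; i++)                   /* height of the plateau       */
        if (mu[i] > mu_max) mu_max = mu[i];
    for (i = 0; i < N; i++)                   /* end points a and b          */
        if (mu[i] >= mu_max - 1e-9) { if (a < 0.0) a = z[i]; b = z[i]; }
    for (i = 0; i < N; i++) {                 /* centroid sums               */
        num += mu[i] * z[i];
        den += mu[i];
    }

    printf("max-membership z* = %.2f (any point of the plateau)\n", a);
    printf("centroid       z* = %.2f\n", num / den);
    printf("mean-max       z* = %.2f  (a = %.1f, b = %.1f)\n",
           (a + b) / 2.0, a, b);
    return 0;
}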
FUZZY LOGIC CONTROL
SYSTEM:
Obstacle Sensor Unit:
The car consists of a sensor in
the front panel to sense the presence of
the obstacle.
Sensing Distance:
The sensing distance depends
upon the speed of the car. As the
speed increases the sensing distance
also gets increased, the obstacle can be
sensed at a large distance for higher
speed and the speed can be controlled
by gradual anti skid braking system.
The speed of the car is taken as
the input and the distance sensed by
the sensor is controlled.
Input Membership Function:

Output Membership Function:

The input & output are
properly related by "if input then output"
rules.
The defuzzified values are
obtained and the variation of sensing
distance with speed is plotted as a
surface graph using MATLAB.


From the graph it is clear that
the sensing distance varies almost
linearly with speed. The curve is
not very smooth because we deal with
fuzzy values.
Speed Control:
The speed of the car is
controlled according to the angle
subtended by the obstacle. The angle
subtended by the obstacle is sensed at
every instant.
Obstacles which the car can
overcome:
At any instant, if the obstacle
subtends an angle of less than 60°, then
the car can overcome the obstacle and
the speed of the car is reduced according
to the angle.
1. Speed breaker

2. Fly over

Obstacles which the car cannot
overcome:
At any instant, if the angle
subtended by the obstacle is greater
than 60°, then the car comes to rest
before colliding with the obstacle, as
the car cannot overcome the obstacle.
E.g. 1. Vehicles

The angle is taken as the i/p &
the o/p speed is controlled.
Input - Membership Function:

Output - Membership Function:

The input & the output
functions are related by "if input then
output" rules. Using MATLAB the surface
graph relating the speed and the angle is
obtained.
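A greatly simplified C sketch of this angle-to-speed control is given below; the membership functions, the rule consequents (SMALL angle leads to FAST, MEDIUM to SLOW, LARGE to STOP) and every numeric value are assumptions chosen only to illustrate rule evaluation and centroid defuzzification, not the actual rule base of the system described here. In practice the braking logic would force the speed fully to zero for angles above 60°.

/* Illustrative angle-to-speed fuzzy controller: fuzzify the obstacle angle,
 * fire three assumed rules, defuzzify the speed with the centroid method. */
#include <stdio.h>

static double tri(double x, double a, double b, double c)
{
    if (x <= a || x >= c) return 0.0;
    return (x < b) ? (x - a) / (b - a) : (c - x) / (c - b);
}

static double command_speed(double angle_deg)
{
    /* fuzzification of the obstacle angle (degrees) */
    double small  = tri(angle_deg, -10.0,  0.0,  30.0);
    double medium = tri(angle_deg,  20.0, 45.0,  60.0);
    double large  = tri(angle_deg,  55.0, 90.0, 125.0);

    /* rules: SMALL -> FAST, MEDIUM -> SLOW, LARGE -> STOP; centroid output */
    double num = 0.0, den = 0.0, v;
    for (v = 0.0; v <= 100.0; v += 0.5) {        /* speed universe in km/h  */
        double fast = tri(v,  50.0, 80.0, 110.0);
        double slow = tri(v,  10.0, 30.0,  50.0);
        double stop = tri(v, -10.0,  0.0,  10.0);
        if (fast > small)  fast = small;         /* clip by rule strengths  */
        if (slow > medium) slow = medium;
        if (stop > large)  stop = large;
        double mu = fast;
        if (slow > mu) mu = slow;
        if (stop > mu) mu = stop;
        num += mu * v;
        den += mu;
    }
    return (den > 0.0) ? num / den : 0.0;
}

int main(void)
{
    double angles[3] = { 10.0, 45.0, 75.0 };
    int i;
    for (i = 0; i < 3; i++)
        printf("obstacle angle %5.1f deg -> commanded speed %6.2f km/h\n",
               angles[i], command_speed(angles[i]));
    return 0;
}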
Simulation Results:


Some of the sample readings
obtained from the MATLAB simulations
are shown below:

From the graph it is clear that
the speed becomes zero when the
angle of the obstacle is greater than
60°.
Conclusion:
In this world of stress and
tension, where the driver's
concentration is distracted in many
ways, an automated accident
prevention system is necessary to
prevent accidents. The fuzzy logic
control system can relieve the driver
from tension and can prevent
accidents. This fuzzy control unit,
when fitted in all cars, can result in
an accident-free world.
The rules are applicable not
only to obstacles that have elevation
but also to depressions like a small pit,
subway, etc.
Rear Sensing:
This fuzzy control can be
extended to rear sensing by placing a
sensor at the back side of the car, and
can be used to control the motion of
the car when the wheels rotate in the
opposite direction or when the car is in
reverse gear.

HAPTIC TECHNOLOGY
IN SURGICAL SIMULATION AND MEDICAL TRAINING





DEPARTMENT OF INFORMATION TECHNOLOGY
V.R.SIDDHARTHA ENGINEERING COLLEGE
VIJAYAWADA


P.dinesh, Joydeepdas
3rd yr B.Tech IT                    3rd yr B.Tech IT
dineshpalakurthy@gmail.com joydeepcares@gmail.com
09989766567 09885329356

HAPTIC TECHNOLOGY
IN
SURGICAL SIMULATION & MEDICAL TRAINING
A touch revolution












ABSTRACT

Engineering finds a wide
range of applications in every field, and
the medical field is no exception. One of
the technologies which aid surgeons
in performing even the most complicated
surgeries successfully is Virtual Reality.
Even though virtual reality is
employed to carry out operations, the
surgeon's attention is one of the most
important parameters. If he commits any
mistake it may lead to a dangerous end.
So, one may think of a technology that
reduces the burden on the surgeon by
providing a more efficient interaction
than VR. This dream has now come
to reality by means of a technology
called HAPTIC TECHNOLOGY.
Haptic is the science of
applying tactile sensation to human
interaction with computers. In our
paper we discuss the basic
concepts behind haptic technology, along with
haptic devices and how these devices
are used to produce the sense of touch
and force feedback mechanisms. The
implementation of these mechanisms by
means of haptic rendering and contact
detection is also discussed.
We mainly focus on
Application of Haptic Technology in
Surgical Simulation and Medical
Training. Further, we explain the
storage and retrieval of haptic data
while working with haptic devices, and
the necessity of haptic data compression
is illustrated.




Haptic Technology

Introduction:
Haptic is a term derived from
the Greek word haptesthai, which
means "to touch". Haptic is defined as
the science of applying tactile
sensation to human interaction with
computers. It enables a manual
interaction with real, virtual and remote
environment. Haptic permits users to
sense (feel) and manipulate three-
dimensional virtual objects with respect
to such features as shape, weight, surface
textures, and temperature.
A Haptic Device is one that
involves physical contact between the
computer and the user. By using Haptic
devices, the user can not only feed
information to the computer but can
receive information from the computer
in the form of a felt sensation on some
part of the body. This is referred to as a
Haptic interface.
In our paper we explain the basic
concepts of Haptic Technology and its
Application in Surgical Simulation
and Medical Training.

Haptic Devices:
Force feedback is the area of
haptics that deals with devices that
interact with the muscles and tendons
to give the human a sensation of a
force being applied: hardware and
software that stimulate the human sense
of touch and feel through tactile
vibrations or force feedback.
These devices mainly consist of
robotic manipulators that push back
against a user with forces that
correspond to the environment that the
virtual effector is in. Tactile feedback
makes use of devices that interact with
the nerve endings in the skin to indicate
heat, pressure, and texture. These
devices typically have been used to
indicate whether or not the user is in
contact with a virtual object. Other
tactile feedback devices have been used
to simulate the texture of a virtual
object.
PHANToM and CyberGrasp
are some of the examples of Haptic
Devices
PHANToM:
The PHANToM is a small robot arm with
three revolute joints, each connected to a
computer-controlled electric DC motor.
The tip of the device is attached to a
stylus that is held by the user. By
sending appropriate voltages to the
motors, it is possible to exert up to 1.5
pounds of force at the tip of the stylus, in
any direction.


CYBER GRASP:
The CyberGlove is a
lightweight glove with flexible
sensors that accurately measure the
position and movement of the fingers
and wrist. The CyberGrasp, from
Immersion Corporation, is an
exoskeleton device that fits over a
22-DOF CyberGlove, providing force
feedback. The CyberGrasp is used in
conjunction with a position tracker to
measure the position and orientation
of the forearm in three-dimensional
space.



Haptic Rendering:
Haptic rendering is the process of applying forces
to the user through a force-feedback
device. Using haptic rendering, we can
enable a user to touch, feel and
manipulate virtual objects, and enhance a
user's experience in a virtual environment.
Haptic rendering is the process of
displaying synthetically generated
2D/3D haptic stimuli to the user. The
haptic interface acts as a two-port system
terminated on one side by the human
operator and on the other side by the
virtual environment.


Fig: Human operator - Haptic interface - Virtual Environment (two-port system)


Contact Detection
A fundamental problem in
haptics is to detect contact between the
virtual objects and the haptic device (a
PHANToM, a glove, etc.). Once this
contact is reliably detected, a force
corresponding to the interaction physics
is generated and rendered using the
probe. This process usually runs in a
tight servo loop within a haptic
rendering system.
Another technique for contact
detection is to generate the surface
contact point (SCP), which is the
closest point on the surface to the actual
tip of the probe. The force generation
can then happen as though the probe
were physically at this location rather
than within the object. Existing methods
in the literature generate the SCP by
using the notion of a god-object, which
forces the SCP to lie on the surface of
the virtual object.
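A minimal C sketch of SCP-based force generation for a point probe touching a rigid sphere is given below; the spherical object, the stiffness value and the linear spring law are illustrative assumptions rather than the method of any particular haptic rendering library.

/* Surface contact point (SCP) sketch: if the probe tip penetrates a rigid
 * sphere, the SCP is the closest surface point and the rendered force is a
 * spring pulling the tip back to the SCP.  Object, stiffness and positions
 * are assumed values. */
#include <stdio.h>
#include <math.h>

struct vec3 { double x, y, z; };

static struct vec3 render_force(struct vec3 tip, struct vec3 centre,
                                double radius, double stiffness)
{
    struct vec3 f = { 0.0, 0.0, 0.0 };
    double dx = tip.x - centre.x, dy = tip.y - centre.y, dz = tip.z - centre.z;
    double dist = sqrt(dx * dx + dy * dy + dz * dz);

    if (dist < radius && dist > 1e-9) {          /* tip is inside the object */
        double s = radius / dist;                /* project tip onto surface */
        struct vec3 scp = { centre.x + dx * s, centre.y + dy * s,
                            centre.z + dz * s };
        f.x = stiffness * (scp.x - tip.x);       /* spring toward the SCP    */
        f.y = stiffness * (scp.y - tip.y);
        f.z = stiffness * (scp.z - tip.z);
    }
    return f;                                    /* zero force: no contact   */
}

int main(void)
{
    struct vec3 tip    = { 0.0, 0.0, 0.045 };    /* 5 mm inside the surface  */
    struct vec3 centre = { 0.0, 0.0, 0.0 };      /* 50 mm radius sphere      */
    struct vec3 f = render_force(tip, centre, 0.05, 800.0);  /* k = 800 N/m  */
    printf("rendered force = (%.2f, %.2f, %.2f) N\n", f.x, f.y, f.z);
    return 0;
}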

APPLICATIONS OF HAPTIC
TECHNOLOGY:
Haptic Technology finds a
wide range of applications, some of
which are mentioned below:
1. Surgical simulation &
Medical training.
2. Physical rehabilitation.
3. Training and
education.
4. Museum display.
5. Painting, sculpting
and CAD
6. Scientific
Visualization.
7. Military application.
8. Entertainment.

The role of Haptic Technology in Surgical Simulation and Medical
Training is discussed in detail below.






SURGICAL SIMULATION AND MEDICAL TRAINING:

Haptics is usually classified as:
Human haptics: human touch perception and manipulation.
Machine haptics: concerned with robot arms and hands.
Computer haptics: concerned with computer-mediated touch of virtual objects.

A primary application area
for haptics has been in surgical
simulation and medical training. Haptic
rendering algorithms detect collisions
between surgical instruments and virtual
organs and render organ-force responses
to users through haptic interface devices.
For the purpose of haptic rendering,
we've conceptually divided minimally
invasive surgical tools into two generic
groups based on their functions.
1. Long, thin, straight probes
for palpating or puncturing the tissue and
for injection (puncture and injection
needles and palpation probes)
2. Articulated tools for pulling,
clamping, gripping, and cutting soft
tissues (such as biopsy and punch
forceps, hook scissors, and grasping
forceps).
A 3D computer model of an
instrument from each group (a probe
from the first group and a forceps from
the second) and their behavior in a
virtual environment is shown. During
real-time simulations, the 3D surface
models of the probe and forceps are used
to provide the user with realistic visual
cues. For the purposes of haptic
rendering of tool-tissue interactions, a
ray-based rendering approach is used, in which the probe
and forceps are modeled as connected
line segments. Modeling haptic
interactions between a probe and objects
using this line-object collision detection
and response has several advantages
over existing point-based techniques, in
which only the tip point of a haptic
device is considered for touch
interactions.

Grouping of surgical instruments for simulating tool-tissue interactions.
Group A includes long, thin, straight probes.
Group B includes tools for pulling, clamping, and cutting soft tissue.

Users feel torques if a proper
haptic device is used. For
example, the user can feel the
coupling moments generated by
the contact forces at the
instrument tip and forces at the
trocar pivot point.
Users can detect side collisions
between the simulated tool and
3D models of organs.
Users can feel multiple layers of
tissue if the ray representing the
simulated surgical probe is
virtually extended to detect
collisions with an organ's
internal layers. This is especially
useful because soft tissues are
typically layered, each layer has
different material properties, and
the forces/torques reflected to
the user depend on the
laparoscopic tool's orientation.
Users can touch and feel
multiple objects simultaneously.
Because laparoscopic
instruments are typically long
slender structures and interact
with multiple objects (organs,
blood vessels, surrounding tissue,
and so on) during a MIS
(Minimally Invasive Surgery),
ray-based rendering provides a
more natural way than a purely
point-based rendering of tool-
tissue interactions. To simulate
haptic interactions between
surgical material held by a
laparoscopic tool (for example, a
catheter, needle, or suture) and a
deformable body (such as an
organ or vessel), a combination
of point- and ray-based haptic
rendering methods are used
.

In the case of the catheter insertion
task shown above, the surgical tool is modeled
using line segments and the catheter
using a set of points uniformly
distributed along the catheter's center
line and connected with springs and
dampers. Using our point-based haptic
rendering method, the collisions between
the flexible catheter and the inner
surface of a flexible vessel are detected
to compute interaction forces.
The concept of distributed
particles can be used in haptic rendering
of organ-organ interactions (whereas a
single point is insufficient for simulating
organ-organ interactions, a group of
points, distributed around the contact
region, can be used) and in other minimally
invasive procedures, such as
bronchoscopy and colonoscopy,
involving the insertion of a flexible
material into a tubular body.
Deformable objects:
One of the most important
components of computer based surgical
simulation and training systems is the
development of realistic organ-force
models. A good organ-force model must
reflect stable forces to a user, display
smooth deformations, handle various
boundary conditions and constraints, and
show physics-based realistic behavior in
real time. Although the computer
graphics community has developed
sophisticated models for real-time
simulation of deformable objects,
integrating tissue properties into these
models has been difficult. Developing
real-time and realistic organ-force
models is challenging because of
viscoelasticity, anisotropy, nonlinearity,
rate, and time dependence in material
properties of organs. In addition, soft
organ tissues are layered and
nonhomogeneous.
Tool-tissue interactions generate
dynamic effects and cause nonlinear
contact interactions of one organ with
the others, which are quite difficult to
simulate in real time. Furthermore,
simulating surgical operations such as
cutting and coagulation requires
frequently updating the organ geometric
database and can cause force
singularities in the physics-based model
at the boundaries. There are currently
two main approaches for developing
force-reflecting organ models:
1. Particle-based methods.
2. Finite-element methods
(FEM).
In particle-based models, an
organ's nodes are connected to each
other with springs and dampers. Each
node (or particle) is represented by its
own position, velocity, and acceleration
and moves under the influence of forces
applied by the surgical instrument.
In finite-element modeling, the
geometric model of an organ is divided
into surface or volumetric elements,
properties of each element are
formulated, and the elements are
assembled together to compute the
deformation states of the organ for the
forces applied by the surgical
instruments.
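As a minimal illustration of the particle-based approach, the C sketch below integrates a single organ node modeled as a mass-spring-damper at an assumed 1 kHz haptic update rate; the mass, stiffness, damping and instrument force are invented values, and a real simulator would update many coupled nodes.

/* One node of a particle-based (mass-spring-damper) tissue model, driven by
 * an assumed constant instrument force and integrated with explicit Euler. */
#include <stdio.h>

int main(void)
{
    double x = 0.0, v = 0.0;          /* node displacement (m) and velocity  */
    double m = 0.01;                  /* nodal mass (kg), assumed            */
    double k = 200.0, c = 0.5;        /* spring stiffness (N/m), damping     */
    double f_tool = 0.5;              /* force applied by the instrument (N) */
    double dt = 0.001;                /* 1 kHz haptic update rate            */
    int step;

    for (step = 0; step < 50; step++) {
        double f = f_tool - k * x - c * v;   /* net force on the node        */
        double a = f / m;
        v += a * dt;                         /* explicit Euler integration   */
        x += v * dt;
    }
    printf("displacement after 50 steps: %.4f m\n", x);
    printf("spring force reflected to the user: %.3f N\n", k * x);
    return 0;
}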
Capture, Storage, and Retrieval of
Haptic Data:
The newest area in haptics is the
search for optimal methods for the
description, storage, and retrieval of
moving-sensor data of the type
generated by haptic devices. This
technique captures the hand or finger
movements of an expert performing a
skilled task and plays them back, so
that a novice can retrace the expert's
path with realistic touch sensation. The
INSITE system is capable of providing
instantaneous comparison of two users
with respect to duration, speed,
acceleration, and thumb and finger
forces.
Techniques for recording and
playing back raw haptic data have been
developed for the PHANToM and
CyberGrasp. Captured data include
movement in three dimensions,
orientation, and force (contact between
the probe and objects in the virtual
environment).
Haptic Data Compression:
Haptic data compression and
evaluation of the perceptual impact of
lossy compression of haptic data are
further examples of uncharted waters in
haptics research.
Data about the user's interaction with
objects in the virtual environment must
be continually refreshed if they are
manipulated or deformed by user input.
If data are too bulky relative to available
bandwidth and computational resources,
there will be improper registration
between what the user sees on screen
and what he feels.
Data obtained experimentally
from the PHANToM and the CyberGrasp
can be analyzed to explore compression
techniques, starting with simple
approaches (similar to those used in
speech coding) and continuing with
methods that are more specific to
haptic data. One of two lossy methods to
compress the data may be employed:
one approach is to use a lower sampling
rate; the other is to exploit the fact that
changes during movement are often small. For
example, for certain grasp motions not
all of the fingers are involved.
Further, during the approaching
and departing phase tracker data may be
more useful than the CyberGrasp data.
Vector coding may prove to be more
appropriate to encode the time evolution
of a multi-featured set of data such as
that provided by the CyberGrasp. For
cases where the user employs the haptic
device to manipulate a static object,
compression techniques that rely on
knowledge of the object may be more
useful than the coding of an arbitrary
trajectory in three-dimensional space.
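The two lossy ideas mentioned above, using a lower sampling rate and keeping only significant changes during movement, can be sketched for a one-dimensional position stream as follows; the sample data, the downsampling factor and the dead-band threshold are assumptions for illustration.

/* Two simple lossy reductions of a 1-D haptic position stream:
 * (a) downsampling by an integer factor, and
 * (b) dead-band coding that keeps a sample only when it differs from the
 *     last transmitted one by more than a threshold.  Data are assumed. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double pos[20] = { 0.00, 0.01, 0.01, 0.02, 0.05, 0.09, 0.14, 0.20, 0.26,
                       0.31, 0.35, 0.38, 0.40, 0.41, 0.41, 0.42, 0.42, 0.42,
                       0.43, 0.43 };
    int i, kept = 0, factor = 4;
    double last = pos[0], deadband = 0.03;

    printf("downsampled:");                      /* (a) lower sampling rate  */
    for (i = 0; i < 20; i += factor)
        printf(" %.2f", pos[i]);
    printf("\n");

    printf("dead-band coded:");                  /* (b) significant changes  */
    for (i = 0; i < 20; i++) {
        if (i == 0 || fabs(pos[i] - last) > deadband) {
            printf(" (%d, %.2f)", i, pos[i]);
            last = pos[i];
            kept++;
        }
    }
    printf("\nkept %d of 20 samples\n", kept);
    return 0;
}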
CONCLUSION:
We finally conclude that Haptic
Technology is the only solution which
provides the high degree of interaction
that cannot be provided by BMI or
virtual reality. Whatever technology
we employ, touch access had been
missing until now, but haptic technology
has totally changed this trend. We are
sure that this technology will make the
future world a sensible one.



















Heart Failure Alert System using Rfid Technology


Abstract

Nowadays, the deaths caused by heart
failure are of major concern. The majority of the deaths
caused by heart failures are due to the lack of timely medical
assistance. This paper gives an insight into a new
technology that relates directly to the exploding wireless
marketplace. This technology is a whole new wireless and
RFID (Radio Frequency Identification) enabled frontier in
which a victim's actual location is integral to providing
valuable medical services.
The paper will be demonstrating for the first time ever the
usage of wireless telecommunications systems and miniature
sensor devices like RFID passive tags, which are smaller than a
grain of rice and equipped with a tiny antenna that will
capture and wirelessly transmit a person's vital body-function
data, such as pulse or body temperature, to an integrated
ground station. In addition, the antenna will also receive
information regarding the location of the individual from the
GPS (Global Positioning Satellite) System. Both sets of data,
medical information and location, will then be wirelessly
transmitted to the ground station and made available to save
lives by remotely monitoring the medical conditions of at-risk
patients and providing emergency rescue units with the
person's exact location.






This paper gives a proposed general model for a Heart Failure
Alert System. It also discusses the algorithm for converting
the analog pulse to binary data in the tag and the algorithm
for alerting the Locating & Tracking Station. It discusses in
detail the various stages involved in tracking the exact location
of the victim using this technology.
Keywords:
RFID, RFID-passive tags, GPS, PAM.
1. Introduction:
It is tough to declare convincingly which is the most
important organ of our body. In fact, every organ has its own
importance, contributing and coordinating superbly to keep the
wonderful machine that is the human body functioning smoothly.
One of the primary organs which the body cannot do
without is the heart, beating 72 times a minute, or billions of
times in a lifetime. It is the pump house of our body, pumping
blood to every corner of the body every moment, thus sending oxygen
and nutrients to each and every cell. Over a period of time, the
heart muscles go weak, the arteries get blocked and sometimes,
because of a shock, a part of the heart stops functioning,
resulting in what is called a HEART ATTACK. Heart attack is
a major cause of death and in today's tension-filled world it has
become very common. Presently there is no mechanism by
which a device monitors a person's heart 24 hours a day, 7
days a week and gives him instant protection in case of a
problem. Our primary focus is on people with a history of
heart problems as they are more prone to death due to heart
failure. In the 1970s, a group of scientists at the Lawrence
Livermore Laboratory (LLL) realized that a handheld receiver
stimulated by RF power could send back a coded radio signal.
Such a system could be connected to a simple computer and
used to control access to a secure facility.









This system ultimately became one of the first building entry
systems based on the first commercial use of RFID.
RFID or Radio Frequency Identification is a technology that
enables the tracking or identification of objects using IC-based
tags with an RF circuit and antenna, and RF readers that
"read" and in some cases modify the information stored in the
IC memory. RFID is an automated data-capture technology
that can be used to electronically identify, track, and store
information about groups of products, individual items, or
product components. The technology consists of three key
pieces:
RFID tags;
RFID readers;
A data collection and management
system.
Rfid tags:
RFID tags are small or miniaturized computer chips
programmed with information about a product or with a number
that corresponds to information that is stored in a database. The
tags can be located inside or on the surface of the product, item,
or packing material.
RF tags can be divided into two major groups:
Passive, where the power to energize the tag's circuitry is
drawn from the reader-generated field.
Active, where the tag has an internal power source, in
general a battery that may or may not be replaceable; in some cases
this feature limits the tag lifetime, but for some applications
this is not important, or the tag is designed to live longer than
the typical time needed.
Rfid readers:
RFID readers are querying systems that interrogate
or send signals to the tags and receive the responses. These
responses can be stored within the reader for later transfer to a
data collection system or instantaneously transferred to the
data collection system. Like the tags themselves, RFID readers
come in many sizes. RFID readers are usually on, continually
transmitting radio energy and awaiting any tags that enter their
field of operation. However, for some applications, this is
unnecessary and could be undesirable in battery-powered
devices that need to conserve energy. Thus, it is possible to
configure an RFID reader so that it sends the radio pulse only
in response to an external event. For example, most electronic
toll collection systems have the reader constantly powered
up so that every passing car will be recorded. On the other
hand, RFID scanners used in veterinarians' offices are
frequently equipped with triggers and power up only when
the trigger is pulled. The largest readers might consist of a
desktop personal computer with a special card and multiple
antennas connected to the card through shielded cable. Such a
reader would typically have a network connection as well so
that it could report tags that it reads to other computers. The
smallest readers are the size of a postage stamp and are
designed to be embedded in mobile telephones.
2. General Model for Heart Failure Alert
System:

















The Heart Failure Alert System consists of:
RFID Tag (Implanted into Human body).
RFID Reader (Placed in a Cellular Phone).
Global Positioning Satellite System.
Locating & Tracking Station.
Mobile Rescue Units.
The grain-sized RFID Tag is implanted into the human body,
which keeps track of the heart pulse in the form of Voltage
levels. An RFID Reader is placed in the Cellular Phone. The
RFID Reader continuously sends a Command to the RFID Tag, which in
turn sends the Voltage pulses in the form of bits, using the
Embedded Software in the Tag, as the Response. This bit
sequence is then sent as input to a Software Program in the
Cellular Phone, which checks for the Condition of Heart Failure.
If any sign of Failure is sensed, then immediately an ALERT
Signal is generated, which in turn results in AUTODIALING
to the Locating & Tracking Station. This station, with the use
of the GPS System, comes to know the whereabouts of the
Victim. The Locating & Tracking Station also simultaneously
alerts the Rescue Units.
3. Working of implanted rfid tags:
Passive RFID systems typically couple the transmitter
to the receiver with either load modulation or backscatter,
depending on whether the tags are operating in the near or far
field of the reader, respectively. In the near field, a tag couples
with a reader via electromagnetic inductance. The antennas of
both the reader and the tag are formed as coils, using many
turns of small gauge wire. The reader communicates with the
tag by modulating a carrier wave, which it does by varying the
amplitude, phase, or frequency of the carrier, depending on the
design of the RFID system in question. The tag communicates
with the reader by varying how much it loads its antenna. This
in turn affects the voltage across the reader's antenna. By
switching the load on and off rapidly, the tag can establish its
own carrier frequency (really a subcarrier) that the tag can in
turn modulate to communicate its reply.
Fig: Grain sized RFID Tag
RFID Tags are smaller than a grain of rice and, equipped with
a tiny antenna, will capture and wirelessly transmit a person's
vital body-function data, such as pulse, and do not require line
of sight. These tags are capable of identifying the heart pulses
in the form of voltage levels and converting them into a bit sequence.
The first step in A-D conversion is Pulse Amplitude
Modulation (PAM). This takes an analog signal, samples it and
generates a sequence of pulses based on the results of the
sampling (measuring the amplitude at equal intervals). PCM
(Pulse Code Modulation) then quantizes the PAM pulses, i.e. it
assigns integral values in a specific range to the sampled
instances. The binary encoding of these integral values is done
based on the algorithm BIN_ENC, depending on the average
heart pulse voltage of the victim (Avg_pulse).
Alg BIN_ENC:
Step 1: Read the analog signals from the heart.
Step 2: Sample the analog signal and generate a series of pulses
based on the results of sampling, at the tag frequency.
Step 3: Assign integral values to each sampled instance generated.
Step 4: Consider every individual sampled unit and compare it
with the average voltage level of the heart.
Step 5: If the sampled instance value is within the avg_pulse band,
then assign BIT = 0; otherwise assign BIT = 1.
Step 6: Generate the bit sequence by considering all the
generated individual sample instances.








Alg ALERT:
Step 1: Read the bit sequence from the Reader.
Step 2: Count the number of zero bits in the data using a counter.
Step 3: If a one bit is encountered, then set the counter to zero.
Step 4: If the counter is equal to five, then go to Step 5;
else go to Step 1.
Step 5: Send an alert to the nearest Locating & Tracking Station.

Fig: Analog-Binary Digits Conversion in Tags

Working of rfid reader inside cellular phone:
The RFID reader sends a pulse of radio energy to the tag and
listens for the tag's response. The tag detects this energy and
sends back a response that contains the tag's serial number
and possibly other information as well. In simple RFID
systems, the reader's pulse of energy functions as an on-off
switch; in more sophisticated systems, the reader's RF signal
can contain commands to the tag, instructions to read or write
memory that the tag contains. Historically, RFID readers were
designed to read only a particular kind of tag. RFID readers
are usually on, continually transmitting radio energy and
awaiting any tags that enter their field of operation.










Fig: RFID Reader in cellular phone.
The Reader continuously sends the Command to the tags and in
turn receives the Voltage levels in the form of a bit sequence as
the Response from the tags, generated with the help of the BIN_ENC
algorithm. The reader passes the received bit sequence to software
embedded in the cellular phone; in case of detection of a weak
heart pulse this software automatically alerts the Locating &
Tracking Station. The software uses the ALERT algorithm given above.
4. Stages In Heart Failure Alert System:
Stage 1:
The Tag continuously senses the Heart Pulses; when the
Reader sends a Command, it sends the output of
BIN_ENC() as the Response to the Reader.
/* Module for the conversion of analog signals to binary digits.
   Avg_pulse is the victim's average heart-pulse voltage (global). */
int BIN_ENC(void)
{
    float Value;                       /* the generated sample             */
    int Bit;
    scanf("%f", &Value);               /* read the value of the sample     */
    if (Value < +Avg_pulse && Value > -Avg_pulse)
        Bit = 0;                       /* within the average pulse band    */
    else                               /* outside the band                 */
        Bit = 1;
    return Bit;
}
Stage 2:









The bits obtained are sent to the ALERT() program to check
whether the bit is BIT 0 or BIT 1. If a BIT 0 is
encountered, the counter is incremented and the program again checks
the next bit. If a BIT 1 is encountered, then the counter is set to zero
and the program again checks the next bit. If counter = 5, then it alerts the
Locating & Tracking Station.
/* Module for checking for a weak pulse */
void ALERT(int bit)
{
    if (bit == 0)
        counter++;                     /* one more consecutive zero bit    */
    else
        counter = 0;                   /* a one bit resets the count       */
    if (counter == 5) {                /* five consecutive weak samples    */
        printf("Report Weak Pulse Detected to Locating & "
               "Tracking Station\n");
        counter = 0;
    }
}
Stage 3:











A special ALERT message is sent to the Locating & Tracking
Station through the Cellular Phone by making use of features
like auto-messaging and auto-dialling, which are provided by
the Cellular Network Service Provider. Then the Locating &
Tracking Station simultaneously sends an ALERT to the
Mobile Rescue Unit and sends a request to the GPS System for
the proper location of the RFID Reader (or the Cellular
Phone). The Locating & Tracking Station sends a simultaneous
ALERT to both the GPS System and the Mobile Rescue Unit in
order to alert the Rescue team in the Mobile Rescue Unit to a
possible Heart Failure within the radius of the Unit. The GPS
System meanwhile tracks the exact location of the Victim and
guides the Mobile Rescue Unit to the destination in time so that
it can provide immediate medical assistance to the Victim.
5. Conclusion:
This new technology will open up a new era in the
field of Biomedical Engineering. The only drawback of this
technology is that it does not promise to save every
person who is implanted with the tag and using this
technology. In the near future, we would like to extend the
technology so that every person who is implanted with the
tags and using the technology can be saved. The world's
first GSM phone (Nokia 5140) offering RFID reading
capability has already come onto the market; in the near future
RFID readers may come in wrist watches, which would be
handier than cellular phones. This new technology will
probably become cheaper in the future, and we hope it will
reduce the deaths due to heart failures.
Fig: Nokia 5140 handset offering an RFID Reader


































swaps_tweety@yahoo.co.in




mega_agem2787@yahoo.co.in


INTELLIGENT SHOPPING
GUIDE



Abstract. This paper reports on extensions to
a decision-theoretic, location-aware
shopping guide and on the results of user studies
that have accompanied its development.
On the basis of the results of an earlier user
study in a mock-up of
a shopping mall, we implemented an improved
version of the shopping guide.
A new user study with the improved system in a
real shopping mall confirms in
a much more realistic setting the generally
positive user attitudes found in the earlier study.
The new study also sheds further light on the
usability issues raised by the system, some of
which can also arise with other mobile guides
and recommenders.
One such issue concerns the desire of users to be
able to understand and second-guess the
system's recommendations. This requirement led
to the development of an explanation component
for the decision-theoretic guide, which was
evaluated in a smaller follow-up study in the
shopping mall.

Keywords
Mobile commerce, navigation support,
decision-theoretic planning, user studies,
recommender systems, explanation

1 Introduction
A natural and popular application domain for
pervasive computing is assistance for
shoppers who are shopping in physical
environments such as grocery stores,
shopping centers, or larger areas such as
entire towns.
This paper describes the development of a
mobile shopping guide that focuses on one
of the many types of assistance that have
been explored to date: Given a shopper who
wants to find a particular set of products
within a limited time, how can the shopper
be guided to the possible locations of these
products in an order that tends to maximize
the likelihood of finding the products while
minimizing the time required to do so? What
makes this problem more than just a
navigation problem is the uncertainty that
pervades many aspects of it: In particular,
there is considerable uncertainty about
whether the shopper will find a desired
product at a given location and how long she
will have to spend looking for it. Despite
these uncertainties, the shopper may be
under pressure to complete the trip quickly,
perhaps even before a fixed deadline.
In the rest of this section, we summarize our
earlier work on a decision-theoretic
shopping guide that addresses this problem
and place it in the larger context of mobile
shopping assistants. In the remainder of the
paper, we discuss recent significant
enhancements to the system and how they
have been tested in a real shopping
environment.
1.1 A Decision-Theoretic Shopping
guide
Bohnenberger and Jameson [1] introduced
the basic idea of a decision-theoretic
shopping guide: Given a set of products that
a user wants to buy, a set of stores (e.g.,
within a shopping mall) where she may be
able to find them, and estimated times for
traveling among the stores, such a system
generates a policy for guiding the user
among the stores: At any given point in time,
given knowledge of where the user is and
what products she has already found, the
system recommends the next store to visit
and gives directions for getting to that store.
Decision-theoretic planning (cf. Section 2.2)
is generally well suited to this
recommendation task in that it can take into
account not only the costs of visiting
particular stores but also the uncertainty
about whether the desired product will be
found in a given store.
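As a greatly simplified illustration of this idea, and not the policy computation actually used by the guide, the C sketch below picks the next store for a single remaining product by minimizing travel time plus an expected replanning penalty when the product is not found there; the store names, probabilities, travel times and penalty value are invented for the example.

/* Pick the next store for one remaining product by minimizing
 * travel time + (1 - probability of finding it) * replanning penalty.
 * All data are illustrative assumptions. */
#include <stdio.h>

struct store {
    const char *name;
    double travel_min;     /* estimated time to walk there (minutes)        */
    double p_found;        /* estimated probability the product is in stock */
};

int main(void)
{
    struct store stores[3] = {
        { "Store A", 2.0, 0.40 },
        { "Store B", 5.0, 0.80 },
        { "Store C", 8.0, 0.95 },
    };
    double replan_penalty = 10.0;      /* assumed extra minutes if not found */
    double best_cost = 1e9;
    int i, best = 0;

    for (i = 0; i < 3; i++) {
        double expected = stores[i].travel_min
                        + (1.0 - stores[i].p_found) * replan_penalty;
        printf("%s: expected cost %.2f min\n", stores[i].name, expected);
        if (expected < best_cost) { best_cost = expected; best = i; }
    }
    printf("recommend visiting %s next\n", stores[best].name);
    return 0;
}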
To obtain some initial feedback about the
usability and acceptance of this type of
guide, we created an initial prototype for a
PDA1 and conducted a study in an artificial
mockup of a shopping mall (Bohnenberger
et al. [2]): A number of stores were crudely
simulated on two floors of a computer
science building. Localization was realized
with infrared beacons attached to the walls
at a number of points, which transmitted
identifying signals to the subject's PDA.
Each of 20 subjects performed two assigned
shopping tasks, with the role of the
shopkeeper being played in each case by
the experimenter, who followed the subject
around. Each subject performed one
shopping task with the PDA-based guide
and the other one with a conventional paper
map of the shopping mall that they were
allowed to study in advance. Shopping
performance with the shopping guide was
somewhat better than performance with the
paper map, illustrating the basic viability of
the method. Even the artificial setting
enabled the subjects to suggest a number of
improvements to the user interface, which
were taken into account in the next version
(see Section 2). On a more general level,
although the study did not itself create a
realistic shopping experience, it did make
the functionality clear to the subjects, so that
they could offer some speculative comments
about their willingness to use a system of
this type in a real shopping situation. Their
overall attitudes were almost uniformly
positive, and they showed appreciation for
the system's ability to save them time,
cognitive effort, and the frustration of having
to replan when the store did not have the
product that they had expected to find there.
On the negative side, a number of subjects
felt that they did not have the "big picture",
almost as if they were being led blindfolded
through a shopping mall.
On the basis of the user feedback from the
study, we developed a version of the
shopping guide that included a number of
improvements (see Section 2), ranging from
minor interface improvements to significant
enhancements that address fundamental
user requirements. This improved prototype
was then tested in a real shopping mall (see
Section 3).

1.2 Related Work
Shopping Assistant Prototypes A large
number of systems have been developed
since the early 1990s that offer various
types of assistance to a shopper. Asthana et
al. [3] presented an early portable shopping
assistant which, among other functions,
helped the shopper to locate particular
products within a large store and alerted the
shopper to potentially interesting special
offers.
ISHOPPER (Fano [4]), like our own guide,
assisted the user within a larger
geographical area, such as a shopping mall.
It alerted a shopper to desired products that
were available near his or her current
location. This approach is the opposite of
the one taken by our guide: Instead of taking
the shopper's location as given and
recommending products that can be
conveniently bought nearby, our guide starts
with a list of desired products and guides the
user to the locations at which she is most
likely to find the products in accordance with
an efficient policy.
The more recently developed shopping
assistant iGrocer (Shekar et al. [5]) offers
a variety of services to a shopper in a
grocery store via a mobile phone. In
particular, it shows the "quick shopper" the
fastest route within the grocery store for
picking up the products on the shopping list.
This function is similar to that of our
shopping guide, except that it does not take
into account uncertainty about whether a
given product will be found at a given location, a capability that requires the
probabilistic reasoning that is characteristic
of decision-theoretic planning.
Recently, Cumby et al. [6] have shown how
a shopping assistant can predict (and
therefore suggest) a shopper's current
shopping list on the basis of information
about past purchases. Again, this method is
complementary to our approach, which
presupposes that a shopping list already
exists (whether specified entirely by the
shopper or with external assistance).
On the whole, existing shopping assistants
like these offer functionality that is
complementary to that of our decision-
theoretic shopping guide. In the long run, it
should be possible to integrate various types
of functionality offered by different systems
into a more comprehensive shopping
assistant.

Shopper-Centered Studies Newcomb et
al. [7] have carefully examined the
requirements of grocery shoppers for mobile
shopping assistants. Some of their results
confirm the importance of the goals of our
shopping guide: Their survey of 46 diverse
shoppers showed that two of the three most
widely requested features of a grocery
shopping assistant concerned help in
arranging the items on the list and in
locating the items within the grocery store.
The authors also found that shoppers view
waiting in checkout lines as a major
nuisance in the shopping process; our guide
addresses this problem in that it takes into
account the expected length of time that a
shopper will have to spend in a given store.
In a test of their prototype with five shoppers
in a grocery store, the authors noted a
tendency that was commented on by some
of the subjects in our first study: the
tendency to focus on getting the products on
their list quickly, as opposed to exploring the
shopping area.
Taken together with our early results, these
results indicate that (a) a decision theoretic
shopping guide can fulfill important
requirements that shoppers often have; but
(b) it tends to discourage the kind of
recreational and exploratory shopping that is
in many cases desired by shoppers and
store owners. Since this latter fact is
sometimes claimed to limit the practical
deployability of our approach severely, we
should point out that the recreational and
convenience orientations to shopping are
not as contradictory as they may seem (cf.
Kim and LaRose [8]). For example, a
shopper who intends to do both some
convenience shopping and some
recreational shopping in a given visit to a
shopping mall may have more time and
energy for the recreational shopping if she
can finish the convenience shopping quickly
and with little effort. And the fact that a given
shopping mall has infrastructure that
supports efficient convenience shopping (as
well as recreational shopping) may
constitute a reason for visiting that mall in
the first place rather than another one.

Decision-Theoretic Planning for
Recommendation In addition to our own
work cited in Section 1.1, other researchers
have recently applied decision-theoretic
methods to recommendation problems in
shopping contexts. Plutowski [9] addresses
shopping scenarios very similar to those
addressed by our guide, offering new
solutions to the complexity issues raised by
decision-theoretic planning, which tend to
limit the scalability
of decision-theoretic methods. New
approaches to complexity issues are also
presented in the work of Brafman et al. [10]
(first introduced by Shani et al. [11]), whose
shopping domain is an on-line bookstore, in
which the authors were able to test their
decision theoretic recommender in real use.

2 Description of the New Prototype
On the basis of the results of our first user
study (Section 1.1) and the related research
just summarized, we developed an improved
version of the decision-theoretic shopping
guide.
2.1 User Interface
Before beginning with her actual shopping,
the user must specify to the system in one
way or another which products she would
like to buy in the shopping mall. In the
relatively simple solution that we adopted for
this prototype, the shopping list is specified
on a larger stationary computing device, on
which the computations necessary for the decision-theoretic planning are also carried out.2







Fig. 1. Three screen shots of the improved
shopping guide for the user study in the shopping
mall. (a: Animated arrows are used for user-
centered navigation recommendations.
b: Overview maps show the user's location, the
expected next store, and further information
about the environment. c: Shortly before the user
reaches the expected next store, a picture of the
store is shown on the display of the PDA to help
the user recognize the destination.)

The usability and acceptance of this
aspect of the prototype were not evaluated
in our study.
The main display on the PDA has two parts
(see Figure 1): On the bottom of the display,
the user can always see the shopping list
(which included 6 items in the current user
study) with check boxes for indicating to the
system which items have been purchased
so far. As is shown in screen shot (a), a
large animated arrow is used to indicate the
direction in which the user should walk
whenever she reaches a point at which a
choice is available. The animation shows
movement from the beginning of the arrow
to the end, so as to emphasize that the
recommended motion is relative to the
user's current orientation. Between one such
instruction and the next, an overview map
(b) is presented that shows a part of the
shopping mall that includes both the user's
current location and the next store to be
visited. These overview maps were
introduced in response to comments of
subjects in the first study that they lacked a
big picture of the shopping mall and wanted
to be able to prepare mentally for the next
store that they were to visit. When the user
approaches the next recommended store,
the system displays a photograph (c) of the
store, so as to reassure the user that the
intended store has been reached.
2 In a practical deployment, this device might be a
computer with touch screens mounted in the
walls of the shopping mall.

Table 1.
Overview of the types of data about the
shopping environment required by the
decision-theoretic planner.
Topography of the shopping mall that specifies:
1. The location of each shop
2. The location of each of a set of beacons, each
of which sends an infrared signal to the PDA of
the user as she approaches the beacon
3. For each beacon, the typical time required to
walk between it and each of the neighboring
beacons
For each shop, for each of a set of product
characterizations:
1. The probability of finding a product fitting
that characterization
2. The typical time spent in the store searching
for a product
3. The typical time spent waiting in line after a
product has been found
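To make the data requirements in Table 1 concrete, the following Python sketch shows one possible representation of the topography and product data; the type names, fields, and example numbers are illustrative assumptions, not part of the actual prototype.

from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class Beacon:
    beacon_id: str
    walk_times: Dict[str, float] = field(default_factory=dict)  # seconds to each neighboring beacon

@dataclass
class Shop:
    name: str
    beacon_id: str  # beacon nearest to the shop entrance
    # product characterization -> (find probability, typical search time, typical checkout time)
    products: Dict[str, Tuple[float, float, float]] = field(default_factory=dict)

# Tiny example environment with two beacons and one shop.
mall = {
    "beacons": {"b1": Beacon("b1", {"b2": 40.0}),
                "b2": Beacon("b2", {"b1": 40.0})},
    "shops": {"BookCorner": Shop("BookCorner", "b2",
                                 {"sports magazine": (0.7, 90.0, 60.0)})},
}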
The user is alerted to each change in the
display by an acoustic signal, a feature
suggested by users in the first study who
wanted to minimize the extent to which they
had to attend to the display.3
When the user has reached a store, the
items available in that store become active
in the shopping list, so that the user can
check them off if she finds them. Inside a
store, the user alone is responsible for
finding the right places to look for the
desired items.
The next navigation recommendation is
given to the user when she leaves the store.

2.2 Decision-Theoretic Planning
Data Required For the purpose of this
paper, the most important questions
concerning the decision-theoretic approach
concern (a) the data required by the system
and how they can be obtained and (b) the
nature of the result of the planning process
and the computational resources required.4
Table 1 gives an overview of the required
data. A product characterization can have
different degrees of specificity. For example,
for a bookstore, it might specify the name of
a particular magazine, in which case the
probability refers to the likelihood that a
shopper who enters the store looking for that
particular magazine will find it. Or the
specification might be more general, such as
"sports magazine", in which case the
probability refers to the likelihood that a
shopper who is looking for either a particular
sports magazine or simply some sports
magazine that she considers worth buying
will end up finding it in this store.
The question of where all of these quantitative estimates might come from in real-life applications of the technology is open-ended, because there are many possibilities (see Bohnenberger [12], chap. 6).
3 In the user study, users wore an ear set, which ensured that they heard the signals regardless of the level of ambient noise. It would also be possible to give complete navigation instructions via speech, as several subjects in the earlier study requested.
4 For information on technical aspects of the planning process, we refer the interested reader to the brief description given by Bohnenberger et al. [2] and the detailed account given by Bohnenberger [12]. General introductions to decision-theoretic planning are given by Boutilier et al. [13] and Russell and Norvig [14], among others.
Some solutions presuppose a certain
amount of cooperation by the vendors in the
shopping mall (e.g., some sort of access to
their databases).
Other solutions would make use of data
collected during the actual use of the system
to update parameters such as the probability
of finding a product fitting a given
characterization in a given store, provided
that at least some users allowed some data
concerning their shopping trips to be
transferred to a central computer. A simple,
low-tech method was applied for our user
study: To determine the time cost of walking
from one store to another, the experimenter
did the necessary walking and counted the
steps required. To estimate the probability of
finding a particular type of item in a given
store and the time required to find and buy
it, the experimenter looked at the number
and variety of products offered by the store.

Result of the Planning Process and
Computational Resources The result of
the planning process is not a single route
through the shopping mall but rather a
recommendation policy: For each possible
state of the shopping process, the policy
specifies a recommended user action, which
may involve either walking in a given
direction between stores or entering a
particular store. A state in the shopping
process is characterized by the user's
current location, the set of products that the
user has bought so far, and (if a time limit
has been specified), the amount of time left
until the deadline.
Note that, if the system generated a fixed
plan, it would be determined in advance, for
example, how many bookstores the user
would visit. With a policy, the user will visit
as many bookstores as is necessary to
obtain the desired book(s), unless time runs out first to the point where it is no longer worthwhile to look for books.
Although the resulting recommendation
policy covers a large number of possible
states, it can be represented and applied on
a resource-limited PDA, because the mere
application of an existing policy is not
computationally demanding. By contrast, the
planning process that generates the policy
can be highly resource-intensive; for this
reason, it may be necessary for this
computation to be performed on a larger
computing device, as was done for our user
study.
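To illustrate how a recommendation policy of this kind can be computed, the following Python sketch runs a tiny finite-horizon decision-theoretic planner over states of the form (location, products found, time left); the mall layout, probabilities, and utilities are invented for illustration and are far simpler than the model used in the actual system.

from functools import lru_cache

# Toy shopping MDP: states are (location, products_found, time_left); actions either
# walk to a neighboring location or search the store at the current location.
WALK = {("entrance", "books"): 2, ("books", "entrance"): 2,
        ("books", "gifts"): 3, ("gifts", "books"): 3,
        ("entrance", "gifts"): 5, ("gifts", "exit"): 1, ("books", "exit"): 4}
STORES = {"books": ("book", 0.8, 3),   # location: (product, find probability, search time)
          "gifts": ("gift", 0.6, 4)}
VALUE = {"book": 10.0, "gift": 8.0}    # utility of obtaining each product
EXIT_BONUS, MISS_EXIT_PENALTY = 5.0, -50.0

def neighbors(loc):
    return [b for (a, b) in WALK if a == loc]

@lru_cache(maxsize=None)
def best(loc, found, t):
    """Expected utility and best action for the state (loc, found, t)."""
    if t <= 0:
        return (EXIT_BONUS if loc == "exit" else MISS_EXIT_PENALTY), None
    options = []
    if loc == "exit":                              # terminating at the exit is always allowed
        options.append((EXIT_BONUS, ("stop",)))
    for nxt in neighbors(loc):                     # walking actions
        options.append((best(nxt, found, t - WALK[(loc, nxt)])[0], ("walk", nxt)))
    if loc in STORES and STORES[loc][0] not in found:   # searching action (uncertain outcome)
        prod, p, dt = STORES[loc]
        v_hit = best(loc, found | frozenset([prod]), t - dt)[0]
        v_miss = best(loc, found, t - dt)[0]
        options.append((p * (VALUE[prod] + v_hit) + (1 - p) * v_miss, ("search", prod)))
    return max(options)

# The policy is implicit: it can be queried for any reachable state.
print(best("entrance", frozenset(), 12))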

3 User Study in a Real Shopping Mall
By testing the use of the prototype in a real
shopping mall, we aimed to answer the
following questions:
1. Does the shopping guide work as
expected in effectively enabling users to find
the desired products in an unfamiliar
shopping mall and with limited time
available?
2. How do potential users evaluate the
shopping guide overall, and in what
situations would they be inclined to use such
a system for their own shopping?
3. What features of the system are well
received, and where do users see a need for
improvement?

3.1 Method
Realization of Localization A major
practical obstacle to a study in a real
shopping mall was the infeasibility of
installing localization infrastructure just for
the purpose of this study. Therefore, in our
study, the experimenter simulated the
localization infrastructure of the earlier user
study with a second PDA that
communicated with the device of the
subject. Whereas the subject's PDA would
normally receive infrared signals from
beacons mounted at various locations in the
shopping mall, in our study these signals
were transmitted from the experimenter's
device via wireless LAN (it is convenient to
speak of these signals as coming from
virtual beacons). The experimenter's device
ran a specially written program that
automated as much as possible of the task
of simulating the beacons.

Shopping Mall The study was conducted
in the Saarpark-Center shopping mall in the city of Neunkirchen, the largest shopping mall in the German state of Saarland. The mall hosts about 120 stores on 33,500 m² spread over two floors.
Shopping Task Instead of conventional
payment for participation in the study, each
subject was given a fixed budget of 25
Euros (about $30) to spend. With this
money, the subject could buy (and keep) 1
item from each of 6 categories (some bread,
a book, a gift item, some fruit, a magazine,
and some stationery). In order to recreate
the normal real life situation in which a
shopper has some particular ideas in mind
about the product being sought (as opposed
to looking for, say, just any magazine) the
subject was asked to specify in advance, for
each category, some specific characteristics
of the product sought (e.g., a particular
magazine or a particular type of bread). This
measure also introduced some (realistic)
uncertainty as to whether the desired
product would in fact be found in any given
store of the relevant type. Each subject was
allowed to shop for at most 20 minutes. This
restriction was explained to the subjects with
reference to the real-life situations of having
to finish shopping before the closing time of
the mall or before leaving the mall to go to
an important appointment.

Subjects We recruited 21 subjects (10
female, 11 male), aiming mainly to find
subjects who were not familiar with the
shopping mall in question (because subjects
who are thoroughly familiar with a shopping
mall can be expected to benefit relatively
little from a shopping guide). Of the 21
subjects, 14 had not been to the shopping
mall within the previous 12 months, and
none had been there more than 3 times
during that period. Only 6 of the subjects
had experience with navigation systems, in
most cases with car navigation systems.
Only 2 subjects had ever used a PDA. The
subjects' ages ranged from 21 to 73 years,
with a median of 27.
The experimenter's program for simulating the beacons exploited the fact that, given knowledge of the mall topology and the current shopping policy, the movements of a user were largely predictable, although on a few brief occasions subjects moved too unpredictably for the experimenter to keep up.
(Information about the mall is available at http://www.ece.de/de/shopping/center/spn/spn2.jsp.)
We did not include a control condition in which the subjects shopped without the shopping guide, because (a) we wanted to devote all available resources to the study of the shopping guide in a natural setting and (b) we assumed that subjects already had a great deal of experience in shopping in shopping malls in the normal way.
Since no reliable gender differences were found in the results, the gender variable will not be mentioned again.

Fig. 2. Time required to finish shopping (number of subjects per time interval) for subjects who (a) did and (b) did not find all 6 items.

Procedure Each subject performed the
shopping task individually, accompanied by
the experimenter. First, the experimenter
familiarized the subject with the use of the
shopping guide. He then explained the
shopping task described above and asked
the subject to characterize the specific
product to be sought in each category. The
subject began at the mall's main entrance
and was followed around by the
experimenter. Each time the subject's PDA
received a signal from a virtual beacon, the
shopping guide displayed the navigation
recommendation specified by the previously
computed policy.
When a subject found one of the desired
products in a store, he or she checked it off
of the list, and the experimenter paid for it.
When the 20-minute shopping period was
over, the subject filled in a questionnaire and
was subsequently asked in a verbal
debriefing to elaborate on her answers.
3.2 Results
The study proceeded without problems, and
the subjects supplied detailed comments
on the questionnaire and in the debriefing.


Subjects' Performance With the
Shopping Guide The recommendation
policy used by the shopping guide, which
took into account the time limit of 20
minutes, did in fact ensure that all subjects
reached the mall exit before the deadline
had elapsed. This result is not trivial,
because of course not all subjects
proceeded with the exact speed assumed in
the underlying model. Even when a subject
walked more slowly than predicted, spent a
relatively long time in stores, or simply
experienced bad luck in finding the particular
products that she had specified, the
recommendation policy was able to bring the
subject to the exit in time by in effect giving
up on finding one or more products.
Six of the subjects were guided to the exit
although they had managed to find only 5
of their 6 products (or only 4, in the case
of one of these subjects). In all of these
cases, the shopping guide recommended
skipping the (relatively expensive) gift item,
which would have been bought at a gift store
near the mall exit. When they arrived at the
gift store, these subjects had between 2 and
6 minutes left, less than the typical time
assumed in the system's model for finding
and paying for a gift item. Therefore, the
system acted appropriately in advising them
to bypass the gift store, given the higher
priority goal of reaching the mall exit in time.
But why didn't the policy recommend
skipping a less valuable item (such as the
magazine) at an earlier stage of the
shopping trip, freeing enough time to look for
a gift later? In fact, a thorough analysis of
the policy reveals that the system exhibits
exactly this behavior in some time windows.
But if a subject performs well in the early
stages of the shopping, the system in effect
does not consider it necessary to skip a less
important item, expecting that the subject
will manage to complete the entire shopping
task. If the subject shops more slowly than
expected later, the system has no choice but
to recommend skipping the last item, even if
it happens to be the least desirable one to skip.
Having to skip the gift store was a potentially
disappointing result for those subjects who
had almost enough time left when they
reached this store. If these subjects had
understood exactly what was going on,
instead of going straight to the exit they
could have taken an action that was not
represented in the system's model: They
could have entered the gift store, seen if
they could find a suitable gift especially
quickly (keeping an eye on the remaining
time) and if necessary left the store without
a gift just before the time ran out. It would be
possible to enrich the system's model so
that it could actually recommend this type of
action; but generally speaking situations can
often arise in which it is desirable for the
user to be able to second-guess the
system's recommendation. We will return to
this point in Section 3.3.

3.3 Discussion
Taken together, these results indicate three
directions in which the tested prototype still
calls for improvement, despite the generally
positive feedback:
Further Reduction of Distraction
From the Environment One way in
which all mobile and wearable systems can
hamper the natural performance of a user's
task is simply by unnecessarily consuming
physical or mental resources that could
otherwise be devoted to the task. Although
our efforts to minimize this type of distraction
(e.g., easy-to-perceive animated arrows;
acoustic alerts to changes in the display)
were partly successful, hardware
improvements that minimize the need to
hold the PDA in one's hand and to use a
wire with the earphone are still called for.
Allowing the User to Second-Guess
the System As we have seen, there are
various situations in which a user may have
good reason to deviate from the system's
recommendation, and several users
expressed a desire to do so. The system's
modeling of the user's shopping needs,
preferences, and capabilities can never be
more than a serviceable approximation. In
addition to the observed examples
mentioned above, the user may, for
example, have a strong liking or disliking
for a particular chain of stores, which the
system is unlikely to know about. In such
cases, the user should have the option of
deviating from the system's
recommendation without sacrificing the
benefits of using the system; but she should
also be able to judge whether deviation from
the recommendation will in fact lead to an
improvement or not. The results of the study
suggest several specific enhancements
along these lines:

1. One reason why subjects felt that they
could not second-guess the system is that
they had little idea of the consequences of
doing so. In reality, the system is quite
robust in this regard, much like a car
navigation system that computes a new
route if the user for some reason leaves the
recommended route: Even if the user were
to disregard the system entirely for a while
and wander around at will, as soon as she
began consulting the recommendations
again, they would once again constitute an
optimal policy in view of her current
situation, including the amount of time
remaining.9 In such a case the user's
sequence of actions before and including
the deviation may no longer be optimal
in any sense. On the other hand, if the
user's deviation is minor and well-founded
(e.g., going into a non-recommended store
because the user sees a desired product
displayed in the store window), it is likely
that the entire sequence of actions will be
more successful than it would have been if
the user had followed the recommendations
strictly. The system does not even need to
recompute its policy in this case, because the originally computed policy already takes into
account all possible states that the user might
reach, as long as they involve the modeled
locations and the relevant time interval.
Since these basic facts about the system
were not clear to the subjects in the study, it
is understandable that they were hesitant to
deviate from the recommendations even
when they saw the desirability of doing so.
One strategy for improvement, therefore, is
to convey to the users in some appropriate
way the necessary understanding of the
system's capabilities.
2. Even if users know that they can deviate
from the recommendations, it will be difficult
for them to do so if they have little
information on which to base such a
decision. The users in our study could see
the stores around them, and the overview
maps also showed stores that were not
immediately visible. But it was not made
explicit what alternative options the user had
for finding the desired products. A second
strategy for improvement, then, is to give
users some sort of preview of what will
happen if they take some action that was not
recommended by the system.
One possible way of realizing both of the
solutions just proposed is to follow a trend
that is becoming increasingly popular in the
area of recommender systems (see,
e.g., Herlocker et al. [16]): providing a
mechanism for explaining the system's
recommendations. The next section will
describe how we designed and implemented
a simple explanation component and how
users responded to it in a small follow-up
study.

4 Introduction of an Explanation
Component
A well-crafted explanation can convey
understanding not only of how a
recommendation
was arrived at by the system but also of the
basic way in which the system works
and of how much faith the user should place
in it (cf. Jameson [17], section 15.7).
It is not a priori obvious what content should
be presented in an explanation of a
recommendation
based on decision-theoretic planning or in
what form the content should
be presented. A particular challenge with
this type of recommendation is that an
individual
recommendation may be understandable
only as part of a larger recommendation
policy.
In order not to rely entirely on our own
intuitions, we followed the strategy of
Herlocker
et al. [16], generating a variety of different
types of content and forms of representation
and presenting them for feedback to a group
of potential users (see Bohnenberger
[12], section 7.4, for details). Although these
subjects (33 students in a computer science
lecture on Intelligent Environments) were not
representative of the entire population
of potential users and were not using the
recommendations while performing a real
shopping task, they did provide some
interpretable feedback on the naturalness
and perceived value of various elements of
explanations.

4.1 Initial Implementation
In order to see how explanations are used
and evaluated in a natural setting, we
prepared a relatively simple initial
implementation, making use of ideas that
had emerged from the questionnaire study.
What Is Presented to the User Figure 4
shows two examples of explanation. The
first one might be offered relatively early in
the shopping trip used for the study in the




Fig. 4. Screen shots from the initial
implementation of the explanation mechanism.
(Each store is represented by a rectangle that
shows the name of the store and the type of
product that it sells. For example, Wish and
Nanu-Nana are two alternative gift stores.)

mall, when the user has so far found just 2
of the 6 desired products. The (at most 3)
walking options with the highest expected
utility are always shown, the recommended
one being placed in the middle of the screen
and highlighted with a more salient
background. The information provided about
each option concerns two attributes that
were judged especially important in the
questionnaire study and that were fairly
straightforward to implement: the expected
length of time until completion of the
shopping task if the option in question is
followed and the next three (at most) stores
that the user can expect to encounter in each case, on the (optimistic) assumption that
the user will find each product in the next
possible store and will proceed with the
expected speed. In this example, the user
can recognize that the option on the right is
quite similar to the recommended option,
apparently involving only a different order of
visiting the first three stores and a slightly
longer expected duration. The user might
choose the right-hand option if for some
reason she wanted to go to the gift store
before the other two stores.
The second explanation shown in Figure 4
would be offered a bit later on, after the user
had found the bread. Note that these
displays explain the system's
recommendations only in a superficial way,
by showing the properties of alternative
options in such a way that the user may be
able to see why one of them is probably
preferable. More generally, we will see
shortly (Section 5.1) that the term
explanation may not be the most apt
characterization of the supplementary
information provided by our guide or by
other recommender systems.

Necessary Computations The type of
explanation just discussed is so simple that
the explanations can be computed on-line
on the PDA. The current implementation
computes an explanation each time the user
arrives at a beacon that corresponds to a
location at which the user could walk in
more than one direction. For each of the
available options, the system simulates the
application of the recommendation policy for
a user who goes in the direction in question,
assuming that every action is successful.
For more sophisticated explanations, which
took into account the inherent uncertainty
involved in these actions, more
sophisticated computations would be
required (see Bohnenberger [12], section
7.4.3, for a discussion of some possibilities).
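The following Python sketch illustrates this kind of optimistic roll-out; the interfaces named here (policy, walk_time, search_time) are assumptions made for illustration, not the prototype's actual API, and the sketch assumes that the policy eventually issues a stopping action.

def explain_option(first_step, loc, found, policy, walk_time, search_time, max_stores=3):
    """Simulate the precomputed policy after walking to `first_step`, assuming that
    every search succeeds; return the predicted duration and the next few stores."""
    elapsed = walk_time(loc, first_step)
    loc, found = first_step, set(found)
    upcoming = []
    while True:
        action = policy(loc, frozenset(found))   # look up the recommendation for this state
        if action is None or action[0] == "stop":
            break
        kind, target = action
        if kind == "walk":
            elapsed += walk_time(loc, target)
            loc = target
        else:                                    # "search": optimistically assume success
            elapsed += search_time(loc, target)
            found.add(target)
            if len(upcoming) < max_stores:
                upcoming.append(loc)
    return {"predicted_minutes": elapsed / 60.0, "next_stores": upcoming}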

Deciding How to Make an
Explanation Available
Aside from the question of how explanations
should look, a system that can offer
explanations must address the question
of when the explanations should be
presented to the user. A relatively simple
policy was implemented for the new
prototype:
- When the overall expected utility of the recommended option is only slightly greater than that of the second-best option, the user must tap a button labeled "Explanation" to see the explanation.
- When the difference is of moderate size, the system displays on the normal screen a text explicitly urging the user to consider tapping this button.
- When the difference is large, the explanation is presented immediately, instead of the simpler display with a single animated arrow.
The rationale is that it is especially important
for the user to see an explanation when the
recommended alternative is much better
than the alternatives, in case the user
should be considering deviating from the
recommendation. Although this rationale is
not necessarily the best one, it seemed well
suited to eliciting feedback in the study.
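A minimal sketch of this presentation policy is given below; the numeric thresholds are placeholders, since the paper does not state the values used in the prototype.

def presentation_mode(best_utility, second_best_utility, small_gap=0.5, large_gap=3.0):
    """Map the expected-utility gap between the top two options to a display mode."""
    gap = best_utility - second_best_utility
    if gap >= large_gap:
        return "show the explanation immediately"
    if gap >= small_gap:
        return "show the arrow plus a prompt to tap the Explanation button"
    return "show the arrow only; the explanation is available on request"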

4.2 Follow-Up Study in the Shopping Mall
Method

Five subjects were recruited, two of whom
had participated in the first study in the
shopping mall. The method was the same
as for the first study, except for the following
additions:
1. When introducing the system, the
experimenter demonstrated how an
explanation could be requested (and how it
might appear spontaneously) and he made
sure that the subject understood the
information presented in an explanation.
2. The questionnaire presented at the end
included several new questions about the
explanation mechanism, and the
experimenter followed up on them during the
debriefing.

Results
Despite the limited number of subjects, the
answers given are sufficiently consistent and
interpretable to yield a fairly clear picture.
Regarding the overall question of whether
the subject found the explanations helpful,
3 subjects answered positively and 2
negatively. The main objection (mentioned
even by one of the subjects with a positive
attitude) was that under heavy time
pressure, a user is not inclined to look at
relatively detailed new information but would
rather simply trust the system's
recommendations. One question concerned
the circumstances under which explanations
should be offered spontaneously: when the
difference between the recommended and
the second best alternative was especially
large or especially small. Of the 5 subjects,
4 chose the former alternative, in effect
endorsing the rationale explained above. But
one of the same subjects also noted the
potential utility of an explanation even in the
opposite case:
"If the difference [in the predicted remaining duration] is only one minute, the user can decide himself whether he wants to take the longer route, maybe because he considers the fruit more important than the book."
One subject liked the explanations simply
because they displayed information about
the recommended option that went beyond
what was usually displayed (i.e., the
predicted remaining duration of the shopping
trip and the next three upcoming stores).

Discussion
Despite the roughly even split between
positive and negative overall judgments, the
following summary statements (which take
into account individual comments not
mentioned above) seem to reflect the
general consensus among the 5 subjects:

1. The appeal of explanations is relatively
low in situations involving high time
pressure, in which users tend to prefer
simple displays that can be taken in at a
glance (or better yet, with ambient vision, as
is possible with the animated arrows). In
addition to the obvious reason that reading
explanations consumes scarce time, another
reason for this preference may be the
subjects' recognition (which is expressed in
other comments) that the shopping guide's
recommendations are relatively hard to
improve on when time pressure is involved:
Given time pressure, the system is
exploiting not only its knowledge of the
products available in various locations but
also its ability to adapt its recommendations
continually to the approaching deadline, a task that humans are not especially good at.
2. The subjects do perceive several potential
benefits of explanations, but these benefits
are due only in part to the explanations'
function of clarifying the reasons for the
system's recommendations: An explanation
can also help simply by conveying additional
information about the recommended option
and/or the available alternatives, thereby
allowing the user to make a well-founded
decision to deviate from the system's
recommendation and/or to prepare mentally
for what lies ahead.
3. Regarding the unrequested presentation
of explanations, the strategy implemented in
the current prototype is perceived as being
basically reasonable, but some subjects
would like to exert more control over the
presentation strategy (for example, reducing
or eliminating spontaneous explanations in
the case of extreme time pressure).

5 Conclusions
5.1 Conclusions Concerning the Shopping Guide
The fact that even the follow-up study with
the explanation component yielded
significant ideas about further improvement
of the shopping guide illustrates that the
optimal design of a system of this type is not
something that can be determined once and
for all. Each test involving a new version
and/or a new context may yield further ideas
for improvement.
But this fact is not surprising, since it also
applies even to widely accepted software
tools like calendar applications. Our detailed
discussion of results calling for
improvements should not obscure the
overall result that the majority of the subjects
found the shopping guide to be an attractive
tool for the support of shopping in some
types of circumstances, characterized
mainly by the unfamiliarity of the shopping
environment and the existence of some form
of time pressure. Both the basic functionality
of the system and many of the specific
interface features were found to be well
adapted to this type of shopping task.
Aside from some improvements that can be
realized straightforwardly, the necessary
improvements that came to light concern
mostly the need of users for certain types of
additional information, ranging from general
background information about the basic
capabilities and limitations of the system to
information that supports particular types of
thinking during the performance of the
shopping task (for example, deciding
whether to deviate from a recommendation
of the system). The results yield a good deal
of guidance as to how this information
should be presented; they also show that
the presentation must be selective and
subject to control by the user, since users
are highly sensitive to the presentation of
unnecessary information and since the
appropriate amount depends on individual
preferences and on situational factors such
as time pressure.
With regard to information that conveys a
basic understanding of how the system
works, one approach currently being
explored (see, e.g., Kroner et al. [18]) is to
create a special retrospective mode in which
the user can interact with the system outside
of the natural environment of use. For
example, at home at the end of the day, the
system might walk the user through some of
the day's events, presenting reminders of
what happened and explanations of its
actions. The idea is that when the user is
free of time pressure and attention-
consuming environmental events, she will
be better able to build up a mental model of
how the system works.
With regard to additional information about
specific user options and system actions,
the main challenge appears to be that of
ensuring that the timing and mode of
presentation are well adapted to the user
and the situation. A good solution will
probably involve some combination of
(a) Specification of long-term preferences by
the user,
(b) Requests by the user for specific
information during use, and
(c) Automatic situational adaptation by the
system.
5.2 A More General Lesson
One general theme illustrated by this
research concerns a fundamental tension
that arises with systems that are employed
while the user is interacting in a rich
environment.
On the one hand, users do not in general
want to receive and process much
information from a system of this type,
preferring instead messages that can be
perceived with minimal distraction from their
interaction with the environment. On the
other hand, users at least sometimes want
to use their own knowledge and
understanding to second guess and override
the information and advice provided by the
system; and doing so will sometimes require
getting more information from the system
than the user needs in order to take the
system's outputs at face value. Finding an
acceptable resolution of this tension may be
one of the trickiest challenges both for
designers of pervasive computing systems
and for their users.







References

1. Bohnenberger, T., Jameson, A.: When policies are better than plans: Decision-theoretic planning of recommendation sequences. In: 2001 International Conference on Intelligent User Interfaces. ACM, New York (2001) 21-24
2. Bohnenberger, T., Jameson, A., Kruger, A., Butz, A.: Location-aware shopping assistance: Evaluation of a decision-theoretic approach. In: Proceedings of the Fourth International Symposium on Human-Computer Interaction with Mobile Devices, Pisa (2002) 155-169
3. Bohnenberger, T.: Decision-Theoretic Planning for User-Adaptive Systems: Dealing With Multiple Goals and Resource Limitations. AKA, Berlin (2005). Dissertation version available from http://w5.cs.uni-sb.de/fi bohne/
4. Boutilier, C., Dean, T., Hanks, S.: Decision-theoretic planning: Structural assumptions and computational leverage. Journal of Artificial Intelligence Research 11 (1999) 1-94
5. Shekar, S., Nair, P., Helal, A.: iGrocer: A ubiquitous and pervasive smart grocery shopping system. In: Proceedings of the 2003 ACM Symposium on Applied Computing (2003) 645-652

















TSUNAMI AND EARTHQUAKE ALERT SYSTEM
THROUGH THE IRIDIUM SATELLITE SYSTEM (ISS)

THE ULTIMATE SOLUTION FOR NATURAL DISASTERS






By
R.VAMSI KRISHNA P.VENKATESWARA PRASAD
chinna_1325@yahoo.co.in venks06@gmail.com

II-B.Tech, E.C.E
N.B.K.R INSTITUTE OF SCIENCE AND TECHNOLOGY
VIDYANAGAR.


ABSTRACT: -
Earthquakes and
Tsunamis strike without warning. The
resulting damage can be minimized and
lives can be saved if the people living in
the earth quake-prone area are already
prepared to survive the strike. This
requires a warning before the strong
ground motion from the earthquake
arrives. Such a warning system is
possible because the energy wave
released at the epicenter travels slower
(at 3.5 to 8km/s) than light, which is the
principle behind developing this
application.
The warning signal from
the earthquake or tsunami epicenter can
be transmitted to different places using
the satellite communication network,
fiber-optics network, pager service,
cellphone service or a combination of
these. A satellite-based wireless network such as ISS is ideal if the system has to cover a large continent like Asia.
The central part of the paper lies in the applications section, where the use of ISS as an alert system for natural disasters such as earthquakes and tsunamis is described, with which casualties can be reduced drastically. For earthquake- and tsunami-prone countries like Indonesia and Japan, a seismic alert system using the ISS network spread throughout the earth is proposed here.
This paper unleashes the system
facts such as the network architecture
and coverage, satellite constellation,
Frequency plan and modulation of ISS
system and its operation along with its
advantages and applications. Last but not
least, the innovative application of ISS
as TSUNAMI, EARTHQUAKE alert
system is explained in brief.
PROLOGUE:-
Iridium is a satellite based
wireless personal communications
network designed to permit a wide range
of mobile telephone services including
voice, data, networking, facsimile,
geolocation, fax capabilities and paging.
With this system, a caller can reach any person, anywhere, at any time in the world.
IRIDIUM SYSTEM
ARCHITECTURE:-
The iridium uses GSM-based
telephony architecture to provide a
digitally switched telephone network and
global roaming feature is designed in to
the system.

Operation:
The 66-vehicle LEO inter-linked
satellite constellation can track the
location of a subscriber's telephone
handset, determine the best routing
through a network of ground-based
gateways and inter-satellite links,
establish the best path for the telephone
call, initiate all the necessary
connections, and terminate the call upon
completion. The unique feature of
iridium satellite system is its cross-links.

Its main intention is to provide the best service in the telephone world, allowing telecommunication anywhere, at any time, and in any place. To relay digital information around the globe, each satellite is cross-linked to four other satellites: two satellites in the same orbital plane and two in an adjacent plane. Feeder-link antennas relay information to the terrestrial gateways and to the system control segment located at earth stations.
IRIDIUM SATELLITE
CONSTELLATION:

The Iridium constellation
consists of 66 operational satellites and
14 spares orbiting in a constellation of
six polar planes. Each plane has 11
mission satellites performing as nodes in
the telephony network. The 14 additional
satellites orbit as spares ready to replace
any unserviceable satellite. This
constellation ensures that every region on
the globe is covered by at least one
satellite at all times.
Iridium uses 66 operational satellites configured at a mean elevation of 420 miles above the earth in six nearly polar orbits with periods of 100 min 28 sec. The first and last planes rotate in opposite directions, creating a virtual seam. The co-rotating planes are separated by 31.6 degrees, and the seam planes are 22 degrees apart.
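As a rough check on the quoted orbital period, Kepler's third law, T = 2*pi*sqrt(a^3/mu), gives the period of a circular orbit at the stated mean elevation; the short Python calculation below (using standard values for the Earth's radius and gravitational parameter) yields roughly 98 minutes, in the same range as the quoted 100 min 28 sec.

import math

MU = 3.986e14            # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6371e3         # mean Earth radius, m
altitude_m = 420 * 1609  # the stated mean elevation of 420 miles, in meters

a = R_EARTH + altitude_m                       # semi-major axis of the circular orbit
period_s = 2 * math.pi * math.sqrt(a**3 / MU)  # Kepler's third law
print(f"Orbital period: {period_s / 60:.1f} minutes")   # about 98 minutes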



Each satellite is equipped with 3 L-band antennas forming a honeycomb pattern that consists of 48 individual spot beams, with a total of 1628 cells aimed directly below the satellite. As the satellite moves in its orbit, the footprints move across the earth's surface, and subscriber signals are switched from one beam to the next or from one satellite to the next in a handoff process. Each cell has 174 full-duplex voice channels, for a total of 283,272 channels worldwide.
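The worldwide channel count quoted above follows directly from the per-cell figure, as this one-line check shows.

channels_per_cell = 174
active_cells = 1628
print(channels_per_cell * active_cells)   # 283272 full-duplex channels worldwide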
IRIDIUM SATELLITE NETWORK COVERAGE:

IRIDIUM SYSTEM SPOT BEAM FOOTPRINT PATTERN

As shown in the above figure, each of the 48 spot beams measures approximately 30 miles (about 50 km) across.
FREQUENCY PLAN AND MODULATION: -
All Ka-band up-links and cross-links are packetized TDM/FDMA using quadrature phase shift keying and rate-1/2 convolutional FEC coding with Viterbi decoding.
L-band subscriber-to-satellite voice links: 1.616 GHz to 1.6265 GHz
Ka-band gateway downlinks: 19.4 GHz to 19.6 GHz
Ka-band gateway up-links: 29.1 GHz to 29.3 GHz
Ka-band inter-satellite cross-links: 23.18 GHz to 23.38 GHz
Comparison between Iridium and traditional satellite systems: -
Using satellite cross-links is the unique key to the Iridium system and the primary differentiation between Iridium and the traditional satellite "bent pipe" system, where all transmissions follow a path from earth to satellite to earth.
- Iridium is the first mobile satellite system to incorporate sophisticated onboard digital processing on each satellite.
- It provides entire global coverage with a single wireless network system.
- It is the only provider of truly global voice and data solutions.
- With this system the subscriber will never hear a message saying "out of coverage area".








ADVANTAGES: -


DISADVANTAGES: -
- High risk associated with designing, building, and launching satellites.
- High cost for the terrestrial-based networking and interface infrastructure.
- Low-power, dual-mode transceivers are more cumbersome and expensive.



APPLICATIONS: -
- Fixed cellular telephone service
- Complementary and back-up telephone service in the fields of retail, manufacturing, military, government, transportation, and insurance
EARTHQUAKE AND TSUNAMI
ALERT THROUGH ISS :
Earthquakes and Tsunamis
strike without warning. The resulting
damage can be minimized and lives can
be saved if the people living in the earth
quake-prone area are already prepared to
survive the strike. This requires a
warning before the strong ground motion
from the earthquake arrives. Such a
warning system is possible because the
energy wave released at the epicenter
travels slower (at 3.5 to 8km/s) than
light.
The warning signal
from the earthquake or tsunami
epicenter can be transmitted to different
places using the satellite communication
network, fiber-optics network, pager
service, cell phone service or a
combination of these. A satellite-based wireless network such as ISS is ideal if the system has to cover a large continent like Asia.

For earthquake- and tsunami-prone countries like Indonesia and Japan, a seismic alert system using the ISS network spread throughout the earth is proposed here. This system does not try to find the epicenter or the fault line caused by the earthquake.


PRINCIPLE: Energy waves released at the epicenter travel slower than light waves. The system simply monitors earth vibrations and generates an alert signal when the level of earth vibrations crosses a threshold.
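A simple worked example shows how much warning this principle can buy; the distance, wave speed, and alert latency below are assumed values chosen only for illustration (the wave speed lies within the 3.5 to 8 km/s range quoted above).

distance_km = 200        # assumed distance from the epicenter to the town being warned
wave_speed_kms = 4.0     # assumed seismic energy-wave speed, within the quoted 3.5-8 km/s
alert_latency_s = 10.0   # assumed detection plus satellite/SMS transmission delay

arrival_time_s = distance_km / wave_speed_kms   # 50 s until the wave arrives
warning_s = arrival_time_s - alert_latency_s    # about 40 s of usable warning
print(f"Usable warning time: about {warning_s:.0f} seconds")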

COMMUNICATING THE DANGER:
This GSM-based ISS alert system
monitors the earth vibration using a strong-motion accelerometer in the earthquake-prone area and broadcasts an alert message to towns and villages through the cellphone network existing throughout the state. Here, wireless mobile phones (ISS phones) are used as transmitters and receivers.
The communication
system for earthquake alert comprises an
earthquake sensor and interface unit, a decision system, and an alert-dissemination network.

The block diagram shown here describes how the signal from the epicenter is detected and transduced by sensors and then transmitted to the network or SMS server, through which the message is transmitted to those areas where the earthquake is expected; from the mobile phones there, the signal is passed on to alarm units, where the alarm rings to indicate the problem that is about to occur.
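A minimal sketch of this chain, from accelerometer sample to broadcast message, is given below; the threshold value and the send_sms interface are assumptions made for illustration only.

ALERT_THRESHOLD_G = 0.05   # assumed peak-ground-acceleration threshold

def process_sample(acceleration_g, send_sms, towns):
    """Called for each accelerometer sample taken at the sensor site."""
    if abs(acceleration_g) >= ALERT_THRESHOLD_G:
        for town in towns:
            # In the proposed system this message would go out over the ISS/cellular network.
            send_sms(town, "Earthquake alert: strong ground motion expected. Move to safety.")
        return True
    return False

# Example use with a stand-in for the SMS server:
process_sample(0.08, lambda town, msg: print(f"[SMS to {town}] {msg}"), ["Town A", "Town B"])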



BLOCK DIAGRAM OF EARTH SENSOR NETWORK
(Blocks shown include the accelerometer, the interface unit, and the mobile/ISS phones.)

After receiving an alert, a middle-aged person takes 30 to 40 seconds to go down the stairs from the fifth floor and 65 to 80 seconds from the tenth floor. If it takes a minimum of 10 seconds to damage a poorly structured house, these 10 seconds too can be used for moving to a safer place. If we consider these points, giving an earthquake alert before the strong ground motion arrives can minimize casualties. Time to alert is critical. But in generating the alert quickly, there are possibilities of false alarms. In the system proposed here, an attempt has been made to reduce the possibility of false alarms. Still, the system needs to be simulated and validated before being put into practice.
EPILOGUE: -
Commercial point of view:

Availability of services and early
subscriber take-up will be the key to
survival for operators. Lower
infrastructure costs will further help in
early break-even and profitability for
network operators. Equipment vendors
should therefore focus on making
available cost effective solutions for
providing a wide range of services to
attract both business and non-business
users. Evolution, not revolution is the
only way to get to the market earl and
with the lowest cost.

Economic point of view:
Since the satellites have already been launched, it is important that this system be applied as widely as possible. Innovative applications like seismic alerts for earthquakes and tsunamis should be brought out, as they serve the real purpose of an engineering application. Government should also play a major role in bringing these services closer to the ordinary person and should play its part in providing its citizens with the best possible communication system in the world.

REGISTER FORMAT




Name of the college : Sir C. R. Reddy College of Engineering

Name of the Students : P AVINASH
3/4 B.E (E.I.E)
J RAJA PAVAN KUMAR
3/4 B.E (E.E.E)
Title of the paper : AUTONOMOUS CONTROL OF
MICRO ROVERS


Address of communication : J RAJA PAVAN KUMAR,
C/O Sir C. R. Reddy College of Engg,
Vatluru, Eluru.
E-mail : jrajapavan@gmail.com
iva-1278@yahoo.com











ABSTRACT:
This paper describes the end-to-
end control system used to program and
supervise autonomous operations of a
micro-rover placed on the Mars surface
by a lander spacecraft. This rover, named Nanokhod, gets power from the lander through a tether rather than from energy sources on the rover.
tether also carries data and commands
between the lander and the rover.
To accomplish the required
autonomy within the restrictions of the
mission scenario, a novel end-to-end
control system has been designed,
featuring a control station to program
high-level commands for the lander-
rover pair. These commands are up
linked to the space segment to be
executed autonomously. Data uploaded
to the space segment during a
communications window describes rover
and lander operations for an entire day.
This paper concludes with a glance at
the modern technology rovers, its
features.





INTRODUCTION:
The term `micro rover' is
currently used to designate small
automatic devices (often with a mass
significantly below 10 kg) used in
planetary exploration. Micro rover is
smaller than the large machines used in
the high cost missions of the recent past.
Robotic devices of this kind are not
limited to automatic exploration, as they
can be very useful in performing a
variety of tasks in scientific or
application missions. Many operations
related to the construction of a lunar
base, for example, or to the commercial
exploitation of space resources need the
use of automatic or semiautomatic
devices able to move objects around and
to perform different operations.
The mission has been defined in its goals, characteristics, and cost allowance. While a small size is beneficial in cutting costs, it reduces the mobility, the speed, and the capacity for carrying useful payloads and sophisticated control systems. The required autonomy can nevertheless be achieved by using an end-to-end control system.

THE END-TO-END CONTROL
SYSTEM:
The primary scientific objective
of the rover is to perform geo-science on
Mars.
Specifically, the rover must reach about
20 sites around the lander, where it
applies its instruments in order to
measure soil/rock characteristics. For this mission, communication with Earth is only possible once per day, via a data-
relay satellite. The satellite uses a store-
and-forward communication mechanism,
and has direct contact with Earth for less
than 8 hours per day. Due to this
restriction and the significant
communication delays, a high degree of
autonomy is required of the space
segment (the rover and the lander), in
order to achieve the mission objectives
in a reasonable time.
To accomplish this goal a novel
end-to-end control system has been
designed. This features a ground control
station to program high-level commands
for the lander-rover pair. These
commands are up linked to the space
segment to be executed autonomously.
Data uploaded to the space segment
during a communications window
describes rover and lander operations for
an entire day.
The ground system uses a digital
terrain model of the environment in
which the rover operates. The model is
constructed automatically from images
acquired by a stereoscopic camera
system placed on the lander. The model
includes the topography of the terrain
surrounding the lander as well as
estimated soil characteristics. The
ground system allows the mission
scientists to select sites in the terrain
where measurements must be performed.
Subsequently, a rover operator
interactively builds, explores and
evaluates the possible paths connecting
the selected sites. This exercise aims at
minimizing a weighted combination of
power consumption, risk of entangling
the tether, risk of slipping on slopes, etc.,
and is performed with the support of
various automation tools, including an
automatic path planner and a rover
simulator.
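The kind of weighted path cost that such tools could minimize can be sketched as follows; the weights and field names are illustrative assumptions, not the values used in the actual planner.

def path_cost(path, w_power=1.0, w_tether=5.0, w_slip=3.0):
    """Weighted combination of energy use and the two risk terms mentioned above."""
    return (w_power * path["energy_wh"]
            + w_tether * path["tether_entanglement_risk"]
            + w_slip * path["slope_slip_risk"])

candidates = [
    {"name": "direct", "energy_wh": 2.0, "tether_entanglement_risk": 0.4, "slope_slip_risk": 0.3},
    {"name": "detour", "energy_wh": 2.6, "tether_entanglement_risk": 0.1, "slope_slip_risk": 0.1},
]
print(min(candidates, key=path_cost)["name"])   # the detour wins once the risk terms are weighted in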
The rover operator also selects
which recovery action should be
automatically undertaken in case of
anomalies in the rover motion during the
autonomous operations on Mars. The set
of commands programmed in this way
implementing the daily mission of the
rover are then up linked to the space
segment, and executed autonomously by
the lander control computer. The rover is
steered along the defined paths and
driven to apply its instruments to the
desired sites in a completely autonomous
way. The lander computer tracks the
position of the rover using a computer-
vision localization system that uses the
stereoscopic cameras. Anomalies
detected via this means are addressed
using the pre-defined recovery methods.
However, when the anomalies cannot be
recovered autonomously, the rover is
automatically put into a safe mode,
allowing for ground-based operators to
define the recovery action during the
next communications window. In the
nominal case, the rover completes its
daily mission autonomously, and then
goes into standby, ready for a new set of
commands from the ground, describing
the next day's operations.
This end-to-end system comprises three
main components:

- The Nanokhod rover and its On-Rover Control System (ORCS);
- The On-Lander Control System (OLCS), accommodated on the lander, which deploys the rover, establishes the communications with Earth, and controls the Imaging Head used to make images of the Martian environment and to provide localization capabilities for the rover;
- The On-Ground Control System (OGCS), which is specifically implemented to allow a non-robotics expert to plan the rover activities and to consult the results of previous operations.


The following picture represents all
the elements of the end-to-end control
system:




THE GROUND CONTROL
SYSTEM:
The On-Ground Control System
software is based on the FAMOUS
robotics control station developed by
Space Applications Services (Flexible
Automation Monitoring and Operations
User Station). This station allows the control of the rover at six levels of abstraction. The six levels are (from the
highest layer down): Compound Task
level, Task level, Action level, Actuator
level, Device level, and Physical level.
The rover is normally controlled at the
highest level, i.e. by Compound Tasks,
which typically include instructions to
move the rover from a starting position
to a desired final position and perform a
series of measurements at that location.
The operator defines the day's rover operations by selecting the corresponding Compound Tasks. The
ground control station then prepares,
verifies and validates the Compound
Tasks and sends them to the lander for
execution. The operator may also choose
commands at lower levels as required,
typically down to level four only, i.e. at Task, Action and Actuator level, as
commanding at lower levels becomes
impractical due to the long end-to-end
delay times in the space-ground control
loop.
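As an illustrative aside, the sketch below shows one way the six command-abstraction levels could be represented in software. The enum names follow the levels listed above; the CompoundTask container, its fields and the uplink check are hypothetical, and the sketch is not the FAMOUS station's actual implementation.

```python
from dataclasses import dataclass, field
from enum import IntEnum
from typing import List


class CommandLevel(IntEnum):
    """The six abstraction levels named in the text, highest first."""
    COMPOUND_TASK = 6
    TASK = 5
    ACTION = 4
    ACTUATOR = 3
    DEVICE = 2
    PHYSICAL = 1


@dataclass
class Command:
    name: str
    level: CommandLevel


@dataclass
class CompoundTask:
    """Hypothetical container: a day's worth of high-level commands."""
    name: str
    steps: List[Command] = field(default_factory=list)

    def is_uplinkable(self) -> bool:
        # Commanding below the Actuator level (Device or Physical) is
        # impractical due to the long end-to-end space-ground delays.
        return all(cmd.level >= CommandLevel.ACTUATOR for cmd in self.steps)


if __name__ == "__main__":
    day_plan = CompoundTask(
        name="visit_site_07",
        steps=[
            Command("move_to_site", CommandLevel.TASK),
            Command("deploy_instrument", CommandLevel.TASK),
        ],
    )
    print(day_plan.is_uplinkable())  # True: all steps are Task level or higher
```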


THE LANDER AND THE IMAGING
HEAD:
For supporting the control and
navigation of the Nanokhod rover,
additional equipment is placed on the
lander. This comprises:
An Imaging Head placed on the
lander top and fitted with optical
means to allow the localization of the
Nanokhod and to generate images to
be transmitted to the ground for
generation of the model of the terrain
around the Lander;


A computer that runs the navigation software and controls the rover through the tether.
The Imaging Head enables the tracking of rover motion and supports the determination of the rover's position and attitude relative to the lander itself. It also captures the stereo images required for generating a map of the terrain around the lander. The Imaging Head is mounted on a rod approximately 1.5 m high on the lander.
The lander performs the rover localization: it moves the Imaging Head so that its two cameras follow the rover's position, activates the LEDs on the rover and evaluates the rover position from the stereo views, and acquires images of the environment around the lander to allow the Ground Station to reconstruct a terrain model.
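A minimal sketch of the kind of stereo computation involved is given below: it recovers the depth of an LED target from the disparity between two rectified camera images using the standard pinhole/stereo relation. The focal length, baseline and pixel coordinates are made-up illustrative values, not parameters of the actual Imaging Head.

```python
import numpy as np


def stereo_depth(x_left_px: float, x_right_px: float,
                 focal_px: float, baseline_m: float) -> float:
    """Depth from a rectified stereo pair: Z = f * B / disparity."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("Target must appear further left in the left image.")
    return focal_px * baseline_m / disparity


def localize(x_left_px: float, y_px: float, x_right_px: float,
             focal_px: float, baseline_m: float, cx: float, cy: float):
    """Back-project the LED pixel into camera-frame (X, Y, Z) coordinates."""
    z = stereo_depth(x_left_px, x_right_px, focal_px, baseline_m)
    x = (x_left_px - cx) * z / focal_px
    y = (y_px - cy) * z / focal_px
    return np.array([x, y, z])


if __name__ == "__main__":
    # Illustrative numbers only (not the real camera parameters).
    print(localize(x_left_px=660.0, y_px=512.0, x_right_px=640.0,
                   focal_px=800.0, baseline_m=0.2, cx=640.0, cy=512.0))
```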




The prototype Imaging Head
The lander also establishes and maintains the communication with the OGCS:
The up-link command channel
contains the desired compound
tasks, tasks or actuator commands,
etc.;
The downlink telemetry channel contains the environment data, the rover status and the sensor information requested by the OGCS.
THE NANOKHOD ROVER:
The Nanokhod is a rugged, simple, reliable yet effective rover designed to carry instruments in the immediate surroundings of a lander. In order to maximize locomotion efficiency, the Nanokhod carries only what is strictly needed for moving and deploying the instruments. No batteries or other power supply are carried on the rover; instead it is equipped with a tether cable providing power and a data connection to the lander. This results in a very high instrument-mass/rover-mass ratio.


The rover receives power and exchanges
data with the lander via the tether.
The actuators and sensors of the Nanokhod breadboard comprise:
The left-track and right-track locomotion motors, which allow the Nanokhod to move (each track: forward, backward, stop);
The payload cab (PLC) articulation motor (PLC up, down, stop);
The lever articulation motor (lever up, down, stop);
4 LEDs that can be individually activated, used for localization of the rover by the lander;
Odometers (2 magnetic encoders) to measure the rover motion;
Angular encoders: one measuring the PLC angle, the other measuring the lever angle;
6 contact switches at the front of the PLC.
PATH PLANNING FOR THE
ROVER:
The primary scientific objective behind the use of the Nanokhod rover is to achieve a geoscience mission by determining the composition of a series of rocks in particular areas. As a consequence, the corresponding broad concept will most likely be a kind of circular movement in an annular segment around the planetary lander, with local radial excursions. The rover follows trajectories composed of many straight lines, which roughly follow the chosen circular segment. Around this reference circular segment, the Nanokhod is expected to visit about 20 sites of scientific interest.
To execute the Rover movement
on Mars, a Path Planner on the ground
control station is used. The Path Planner
is capable of finding paths which
minimize: risk of tipping over, risk of
entangling the tether, length of tether
used, time to traverse, risk of sliding due
to slopes and poor soil contact, and risk
of getting stuck in loose soil.
The desired optimization criteria
can be selected by the OGCS operator. If
more than one criterion is selected, the
weighted sum of the corresponding cost
functions is minimized. To minimize the
risk of collision with obstacles, the Path
Planner takes into account localization
errors and avoids areas that are invisible
to the Imaging Head Cameras. The Path
Planner computes a route between the
current position and the desired site.
This route consists of a sequence of Path
Segments.
The Path Segments are identified
by applying a two-step A* algorithm. A
refinement of the path is performed with
a higher resolution map. The high-
resolution path is not directly usable and
needs to be decomposed into a collection
of segments and associated waypoints.
This leads to candidates for the Path
Segments. These candidates are validated through simulation, checking whether it is possible to traverse them with a single Piloting Action (i.e. the rover does not tip over, does not over-consume power, etc.).
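As a rough illustration of the planning idea described above, the sketch below runs a grid-based A* search whose cell cost is a weighted sum of several per-cell cost layers (e.g. slope risk and power). The grid, the cost layers and the weights are invented for the example; the real Path Planner's two-step search, tether model and map resolutions are not reproduced here.

```python
import heapq
import numpy as np


def weighted_astar(cost_layers, weights, start, goal):
    """A* on a 4-connected grid; cell cost = 1 + weighted sum of cost layers
    (the base cost of 1 per step keeps the Manhattan heuristic admissible)."""
    combined = 1.0 + sum(w * layer for w, layer in zip(weights, cost_layers))
    rows, cols = combined.shape

    def heuristic(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(heuristic(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols:
                ng = g + combined[nxt]
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(
                        open_set, (ng + heuristic(nxt), ng, nxt, path + [nxt]))
    return None  # no traversable route found


if __name__ == "__main__":
    slope_risk = np.random.rand(20, 20)  # illustrative per-cell cost layers
    power_cost = np.random.rand(20, 20)
    route = weighted_astar([slope_risk, power_cost], weights=[0.7, 0.3],
                           start=(0, 0), goal=(19, 19))
    print(len(route), "waypoints")
```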
In the case of the Nanokhod, the valid
Piloting Actions are:
Move to a position (composed of a
rotation on the spot and a move
straight)
Climb an obstacle
Overcome a trench.
Although a Path has been found, e.g. one minimizing the traverse time, it may not be suitable in terms of risks or resources consumed. Risks taken, resource requirements, etc. along a Path Segment are modelled by Profile curves (e.g. the power consumption and the slope encountered by the rover as functions of time). The maximum (or minimum) required values can also be modelled by such curves, and the feasibility of the traverse is finally assessed by comparing the simple/integrated Profile obtained with a feasible or recommended profile:
The peak currents for the most demanding drive torques must stay within a power envelope;
The slopes that the rover encounters must be kept within the capabilities of the rover;
The Nanokhod must remain sufficiently visible to the Imaging Head: this can be verified by comparing the achievable localization accuracy with a minimum recommended value.





The minimum and maximum limits are themselves Profile curves modelling the capabilities of the rover and its relevant parts (motors). The suitability of the chosen Piloting Action for the characteristics of the terrain along the Path Segment to be traversed can also be evaluated by such a comparison. For example, climbing over a small obstacle or moving to a point behind a 10 cm obstacle can be identified as questionable, at least for certain parts, and therefore suggest ways to further split the Path Segment.
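The sketch below illustrates this kind of Profile-curve check in the simplest possible form: a simulated power-consumption profile for a Path Segment is compared point-by-point against a maximum-allowed profile, and the segment is flagged as feasible or not. The numbers and the single "power" quantity are illustrative; the real system compares several such curves (power, slope, visibility, etc.).

```python
import numpy as np


def segment_feasible(simulated_profile, limit_profile, margin=0.0):
    """A Path Segment is feasible if its simulated profile never exceeds
    the limit profile (optionally by more than a safety margin)."""
    simulated = np.asarray(simulated_profile, dtype=float)
    limit = np.asarray(limit_profile, dtype=float)
    return bool(np.all(simulated <= limit - margin))


if __name__ == "__main__":
    # Illustrative power-consumption profile (W) sampled along the segment.
    simulated_power = np.array([3.0, 4.5, 6.0, 5.2, 3.8])
    max_power = np.full_like(simulated_power, 6.5)  # power envelope
    print(segment_feasible(simulated_power, max_power, margin=0.2))  # True
```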








ADDITIONAL ROVER
FEATURES:
The rover discussed above needs to be improved in some respects. Several features have been added to rovers to make them more flexible and more useful. These are discussed below.



NAVIGATION SENSORS:
Sensors, usually small cameras, are used for navigation. The newer Mars Exploration Rovers (MERs) have nine cameras. Mounted on the lower portions of the front and rear of the rover, black-and-white cameras use visible light to capture three-dimensional (3-D) imagery, which is used by the rover's navigation software, enabling it to "think on its own" and avoid unexpected obstacles.
The other type of camera sensor
on a rover is used to create colour
stereoscopic images. On the MER these
are mounted on a mast assembly which
lifts the cameras to a height of 1.4
meters off the ground, giving scientists
on Earth a human-like perspective of the
surface of Mars.
The cameras used for rover navigation are small CCD (charge-coupled device) cameras similar to web cams. Without cameras the rover is blind; rovers typically have more than one camera with which to navigate.



Pancam, Navcam and Hazcam (MER)
Hazcam and Navcams are both
used for navigation. They take black-
and-white stereoscopic images with a
broad field of view. Pancams take high-
resolution views of the surface and sky.
They capture images within the blue to infrared light spectrum by using eight different colored filters.




SOJOURNER: 3 cameras
MER: 9 cameras:
2 forward B&W (Hazcam)
2 rear B&W (Hazcam)
2 mast B&W (Navcam)
2 mast color (blue to IR)
1 arm-mounted B&W









POWER: Rovers require power
to operate. Without power they cannot
move, use science instruments, or
communicate with Earth. The main
source of power comes from a multi-panel solar array and onboard batteries. The power generated by the solar array and batteries is conditioned and distributed using a complex arrangement of power electronics.




Solar Panels provide power for
recharging the batteries, driving the
rover, running computers, lasers, motors,
radio, operating scientific instruments,
and heating the Warm Electronics Box
(WEB). During the times when there is
either too little or no sunlight for the
solar array, batteries are used to power the rover hardware. Battery power is used
cautiously since the batteries store only a
limited amount of energy and can only
be recharged when there is sunlight.
COMMUNICATIONS:
Rovers have to communicate
with their operators across great
distances. Ultra High Frequency (UHF)
radio waves are transmitted from Earth
into space. Radio Waves are used to
send commands to the Rover, while its
own radio transmitter sends data and
images back to Earth. The radio is located inside the rover's Warm Electronics Box (WEB), where it is protected from extreme environmental conditions. For communication, rovers are equipped with three antennas: a high-gain antenna, a UHF relay antenna and a small UHF antenna.
CONCLUSION:
An end-to-end control system has
been described to implement highly
autonomous operations of a rover on a
planetary surface. The control system
allows scientists and operators to select
sites for exploration by the rover in a 3-
D model of the terrain surrounding the
lander. The ground control system then
automatically determines the optimal
rover path to visit these sites in a safe
and efficient manner, and uplinks this
information in the form of high-level
commands (the so-called Compound
Tasks). The lander subsequently
executes these Compound Tasks and
ensures that the rover reaches its
destination as planned.
This technology has applications
on Earth, especially for control of
mobile robots or vehicles in harsh
environments. A practical application
which is currently being implemented is
to use the technology for the control of
vehicles and machines in underground
and surface mining in Canada. Other
potential applications include control of
robotics for clearance of anti-personnel
mines, and control of remotely operated
robots for sub-sea applications, arctic
exploration, etc.






(Nanotechnology secures the nation)




(World seems to be very small before you)




Name of the authors: M. DEEPAK
D. S. KARTHIK
EMAIL IDS: deepak_vits428@yahoo.com
dsk4986@yahoo.co.in
Branch: ECE

Name of College: VISHVODAYA INSTITUTE
OF TECHNOLOGICAL
SCIENCES
KAVALI, NELLORE (DT)-524201










ABSTRACT

In the present world, every country is thinking of making its nation secure and safe. Developed countries invest heavily in weapons, communication systems, etc. to strengthen their capabilities, but countries that are still developing are unable to make such large investments. Soldiers in war also carry heavy weapons and large loads. A solution to all these problems is needed, and for every problem there is a solution: here the solution is NANOTECHNOLOGY.
With the potential benefits of nanotechnology, both for social and national security reasons, technologies are in development that enable early detection of hazardous conditions, goods and human behaviors. Sensing and detection, surveillance, situational awareness, interpretation and automated scene understanding are crucial aspects. In this paper we present the concept of future war weapons, military vehicles, unmanned vehicles and robots, armaments, command and control, soldier warfare and human performance.



INTRODUCTION

Future military missions require their stakeholders to do more with less. To get more performance, functionality and lifespan, present technologies need more size, more space, more power and more maintenance. Nanotechnology offers these advantages to military stakeholders with less weight, size, maintenance, power and cost.



What is meant by NANOTECHNOLOGY?

Before going into nanotechnology, let us first understand the meaning of NANO: one billionth of a meter. NANOTECHNOLOGY is the technology which uses particles of such nano-size in real-time applications.




Particles which are used in NANOTECHNOLOGY

Quantum dots are nanoscale particles which change their properties in some useful way with the addition or removal of an electron.

Carbon nanotubes are stronger than steel, harder than diamond, lighter than aluminum, more conductive than copper and a good semiconductor (Ecole Polytechnique).

Smart materials change their shape when heated or automatically respond to a specific change in the environment (AFRL).



Properties of NANO materials

Nanometer-sized particles are smaller than the wavelengths of UV light and can completely block UVA and UVB when dispersed in sunscreen.












Nano composite materials provide higher strength, lower weight and new
functionality:
Super-tough nano composite coating withstands greater point load than
ceramics (AFRL).







Super-tough nanocomposite coating
Nanocomposite using carbon nanotubes to enhance the properties of polypyrrole
(Korean National Lab).

Boeing 7E7 features a composite fuselage and wings to reduce fuel consumption and support health-monitoring systems.

ADVANTAGES OF MINIATURISATION

Because nanomaterials are miniaturized in size, they have several advantages. They are as follows:

Miniaturization delivers better products at a lower price.

Research done in building-scale laboratories gives results in days; a lab-in-a-box gives results in hours, and a lab-on-a-chip gives accurate results in minutes.









Miniaturization speeds up testing and gives accurate results.

Nano materials are expected to greatly reduce the volume of
materials required by industry











Nanotechnology allows researchers to work on the scale of nature:







Nanotechnology and nature will bring the Convergence of multiple
roadmaps:


The pyramid of NANOTECHNOLOGY


NANO DEFENSE

NANODEFENSE means using NANOTECHNOLOGY in the systems used by the military, navy, air force, etc. to solve some previously unsolved problems and to minimize the problems of other technologies.

How Nanotechnology is impacting Military Technologies:

Nanotechnology and Microsystems are greatly improving the performance of military
end-products and reducing the cost.

Nano/Micro Materials and Devices

The devices designed using nanomaterials have the following properties:
1) Low mass and ultra-small size 2) Radiation shielding
3) High performance and functionality 4) longer life
5) Resistant to the environment 6) Lower risk of catastrophe
7) Stronger materials 8) Reduced material failure
9) Non-corrosive 10) Embedded stress detectors








Lighter aircraft with fewer moving parts (e.g. actuators) use less fuel and suffer less breakage, less downtime and reduced maintenance.

Uses of nanotechnology in the Military:-

The NANOTECHNOLOGY is used in military for the designing of

MILITARY VEHICLES
UNMANNED VEHICLES& ROBOTS
ARMAMENTS
COMMAND & CONTROL
SOLDIER WARFARE
HUMAN PERFORMANCE
The technology blocks used in the platforms, the platforms used in the systems, and the systems used in revolutionary military applications are shown in the block diagram below:






















Let us briefly discuss each application noted above:


Military Vehicles

The vehicles used in the military should be very fast and lightweight and have a long lifespan. NANOTECHNOLOGY can do justice to these needs and is also very cheap.

Operability:-
We get improved speed, maneuverability, range and fuel economy from:
Lightweight nanocomposite structures
Miniature devices / MEMS (e.g. actuators)
Nano-coatings and self morphing for reduced
drag
New propulsion materials and systems
Flexibility:-
Platforms for modular vehicle design and support for upgrades
Longer life for materials and systems
Survivability:-
Stealth materials and systems for reduced signature
Built-in sensor network for structural health monitoring
Sensors and systems for poor weather control
Enabling Technologies:-
Fly-by-light/Power-by-wire
Fly-by-thought (brain activity, head movement, eye movement, voice sensing)
Maintainability:-
Miniaturized COTS components (devices, actuators, sensors)
Replacing hydraulics with electrical & optical devices and power
Improved paints and coatings (including paint removal technologies)
Structural prognostics and health management

Unmanned Vehicles / Robots

Battlefield Robot:-
Friend or foe & reconnaissance
(Carnegie Mellon)



Micro-Unmanned Air Vehicle:-
With motors, linkages, a video camera,
an up/down-link RF communication receiver and antenna.
(Intelligent Automation)

Underwater Robot:-
for mine clearing in shallow water
(Northeastern University, DARPA, ONR)



Fractal Shape-Changing Robot:-
can effect a change in the shape of the robot's structure (Blue Yonder)



Armaments

Lethality:-

Smaller, lighter weapon systems that are more mobile, have longer range and increased velocity, enabled by:

High strength, lightweight structural materials with
reduced aero-thermal-dynamic erosion
More powerful energetic materials
Improved sensing / target detection capabilities
Sensor protection from military and environmental threats
Smart materials to control impact and weapon functions
New fabrication methods for unique capabilities
(E.g. controlled fragmentation Warheads)

Survivability:-

Reduced chance of accident from energetic materials that are less sensitive to
environmental conditions, handling and storage

Accuracy:-

Better computers and electronics to process more information in less time
Smart imaging sensors for very precise target selection
Environmental sensors for global weather data




Command and Control

Operability:-

Low-cost miniature devices and Systems
making command and Control More
technologically and economically efficient
Computers, electronics, imaging,
Wireless technologies
Unmanned vehicles / robotics
NanoPico Sats

Information:-

Low-cost real-time sensors operating 24/7 in remote locations
Highly sensitive, multiplexing chemical and biosensors for early warning of CB threats.
Acoustic sensors for tracking of noise-emitting targets
Miniature cameras
Electro-optical sensors for improved and smaller night vision devices

Data Security:-
Quantum computing for encryption

Soldier Warfare

Mobility
Light, multifunctional materials & systems for load reduction (from 100 lb to 30 lb)
Integrated power, data bus, antenna
Form fitting uniform
Waste elimination

Survivability
Multi-functional combat suit with devices and
Materials for:
Ballistic protection
NBC protection
Thermo baric protection
Laser light protection
Micro cooling and heating
Hydration system

Stealth:-
Materials for:
Reduced signature Active camouflage
Fight ability:-
Components for:
Multi-spectral vision system
Audio system and voice control
Actuators for increased human performance

Lethality:-
Components for:
Modular weapon system
Advanced laser system
Advanced sights

Human Performance & Well-being

Survivability:-

Low cost multiplexing sensors for the rapid detection of NBC threats
Low cost portable medical devices for field use
New drugs/drug delivery systems

Intelligence:-

Wearable devices & implants for:
Biomedical monitoring
Physiological monitoring
Automated self medicating
Genetic testing to determine disease and
Stress susceptibility factors

Performance Enhancement:-

Devices for improving vision, hearing and Other senses
MEMS devices and tools for increasing strength
Automated injection systems for the delivery of medications

Repair:-

Miniature implantable medical devices (e.g. defibrillators)
MEMS assisted prosthetics with improved biocompatibility
Improved technologies for restoring vision, hearing and other senses
Smart surgical tools and systems





APPLICATIONS OF NANOTECHNOLOGY IN OTHER AREAS

Nanotechnology opens up vast new capabilities

IN MEDICINE:

1) Systematically organize and manipulate matter from the bottom up and control every atom.
2) A camera that can be swallowed for medical imaging.
3) DNA sequencing (NASA)





IN INDUSTRIES:-











Fuel Cell Membrane (Celanese) for filtering

Nano scale electrodes provide ultra sensitive detection and
highly focused stimulation:-

Brain Implant Eye Chip











Electrochemical Biosensor (NASA)

Micro and Nano Systems-on-a-chip provide multiple
functionality in a tiny footprint and COTS pricing:-











MNT (micro nano technology) Platforms are the Lowest Common
Denominators of Aerospace End-Products:-

The areas where NANOTECHNOLOGY is applied:



















Implications of Nanotechnology to DRDC and Canada:-

1. Nanotechnology has the potential to develop next generation technological solutions
2. Canadian challenges such as Peacekeeping, Border Security, and Biothreat Early
Warning Systems can benefit from innovation employing nanotechnology and other
technologies
3. DRDC funding can be leveraged by additional funding from other federal and
provincial agencies, the private sector and allied countries through a CANEUS-type
initiative. This can reduce time-to-market for new technologies and stimulate economic
development in Canada.

Conclusion:-

NANO DEFENSE secures the country and increases its strength. It gives large results with less effort and fewer losses. Nanotechnology has the potential to provide the
aerospace industry a collective process for providing new platforms with predictable
release dates that have better performance, increased functionality, better reliability and
lower unit prices. Most industrial countries have created Top Down National
Nanotechnology Initiatives (NNI) for technological and economic development.

Nanotechnology has broad implications for the Military. By using
NANOTECHNOLOGY we can reduce the size of the devices, increase the speed,
decrease the power consumption etc.

So, the whole world seems to be very small before you. Let us wait for the
NANOWORLD.












Going small for big Advances

By
M.HARI KRISHNA &
R.SANDEEP









Sri Venkateswara University College of
Engineering,

Tirupati 517502.

Address:
M.Hari Krishna,
R.Sandeep,
Room No.:1305,
Visweswara Block,
SVUCE Hostels,
Tirupati,
PIN:517502.
Email: krrish_105410@yahoo.co.in
sandeep.eee.342@gmail.com









Abstract

At present there are wide varieties of Technologies, which are vastly being
used to analyze biological cells to diagnose diseases and develop methodologies to cure
diseases. One such technology is Nanotechnology.
A nanometer is a billionth of a meter. It's difficult to imagine anything so
small, but think of something only 1/80,000 the width of a human hair. Ten hydrogen
atoms could be laid side-by-side in a single nanometer. Nanotechnology is the creation of
useful materials, devices, and systems through the manipulation of matter on this
miniscule scale. The emerging field of nanotechnology involves scientists from many
different disciplines, including physicists, chemists, engineers, and biologists.
Nanotechnology will change the very foundations of cancer diagnosis,
treatment, and prevention.
Nanoscale devices used for treatment of Cancer are based on the constant study of
cancer cells and nanotechnology. Nanoscale devices which are smaller than 50
nanometers can easily enter most cells, while those smaller than 20 nanometers can
move out of blood vessels as they circulate through the body.
Because of their small size, nanoscale devices can readily interact with
biomolecules on both the surface of cells and inside of cells. By gaining access to so
many areas of the body, they have the potential to detect disease and deliver treatment in
ways unimagined before now. Since biological processes that lead to cancer occur at the
nanoscale at and inside cells, nanotechnology offers a wealth of tools with new and
innovative ways to diagnose and treat cancer.
In our paper we design a device that contains sensors, transceivers, motors
and a processor, which are made up of biodegradable compound. No more destruction of
healthy cells due to harmful toxins and radiations generated through chemotherapy and
radiation therapy.
INTRODUCTION:
The paper deals with the
eradication of cancer cells by providing
an efficient method of destroying and
curing the cancer so that healthy cells
are not affected in any manner. This
technology also focuses on a main idea
that the patient is not affected by cancer
again. The purpose of using the RF
signal is to save normal cells.

NANOTECHNOLOGY IN THIS
CONTEXT

Nanotechnology refers to the
interactions of cellular and molecular
components and engineered materials at
the most elemental level of biology. This
paper emphasizes the effective
utilization of Nanotechnology in the
treatment of cancer.

WHAT IS CANCER?

Cancer cells are different from healthy
cells because they divide more rapidly
than healthy cells. In addition, when
cells divide at an accelerated rate, they
form a mass of tissue called a tumor.
These excess cancerous cells cause many problems in the bodies of patients.
In general, the most common methods
used for the cancer treatment are
Chemotherapy, a treatment with
powerful medicines
Radiation therapy, a treatment
given through external high-
energy rays.


PROBLEM
Both the treatments mentioned
above are harmful. Healthy cells are
destroyed in the process. As a result, the patient is left very weak and unable to recover quickly from the treatment. It has been claimed that an individual who has cancer can survive on harsh chemotherapy for up to a maximum of five years, and after that it is anybody's guess.



PROPOSED SOLUTION
The nanodevices can be
programmed to destroy affected cells
and kill only them, thus ending the
problem of destroying any normally
functioning cells which are essential to
one's well-being. Thus treatment using nanotechnology will make the affected person perfectly normal.

Noninvasive access to the interior
of a living cell affords the
opportunity for unprecedented
gains on both clinical and basic
research frontiers.

NANOTECHNOLOGY AND
DIAGNOSTICS
Nanodevices can provide rapid and
sensitive detection of cancer-related
molecules by enabling scientists to
detect molecular changes even when
they occur only in a small percentage of
cells.
CANTILEVERS
Nanoscale cantilevers - microscopic,
flexible beams resembling a row of
diving boards - are built using
semiconductor lithographic techniques.
These can be coated with molecules
capable of binding specific substrates-
DNA complementary to a specific gene
sequence, for example. Such micron-
sized devices, comprising many
nanometer-sized cantilevers, can detect
single molecules of DNA or protein.
As a cancer cell secretes its
molecular products, the antibodies
coated on the cantilever fingers
selectively bind to these secreted
proteins. These antibodies have been
designed to pick up one or more
different, specific molecular expressions
from a cancer cell. The physical
properties of the cantilevers change as a
result of the binding event. This change
in real time can provide not only
information about the presence and the
absence but also the concentration of
different molecular expressions.
Nanoscale cantilevers, thus can provide
rapid and sensitive detection of cancer-
related molecules.
NANOTECHNOLOGY AND
CANCER THERAPY
Nanoscale devices have the
potential to radically change cancer
therapy for the better and to dramatically
increase the number of highly effective
therapeutic agents. Nanoscale constructs,
for example, should serve as
customizable, targeted drug delivery
vehicles capable of ferrying large doses
of chemotherapeutic agents or
therapeutic genes into malignant cells
while sparing healthy cells, which would
greatly reduce or eliminate the often
unpalatable side effects that accompany
many current cancer therapies.
NANOPARTICLES
Nanoscale devices have the
potential to radically change cancer
therapy for the better and to dramatically
increase the number of highly effective
therapeutic agents.

In this example, nanoparticles
are targeted to cancer cells for use in the
molecular imaging of a malignant lesion.
Large numbers of nanoparticles are
safely injected into the body and
preferentially bind to the cancer cell,
defining the anatomical contour of the
lesion and making it visible.

These nanoparticles give us the
ability to see cells and molecules that we
otherwise cannot detect through
conventional imaging. The ability to
pick up what happens in the cell - to
monitor therapeutic intervention and to
see when a cancer cell is mortally
wounded or is actually activated - is
critical to the successful diagnosis and
treatment of the disease.
Nanoparticulate technology can
prove to be very useful in cancer therapy
allowing for effective and targeted drug
delivery by overcoming the many
biological, biophysical and biomedical
barriers that the body stages against a
standard intervention such as the
administration of drugs or contrast
agents.
WORKING PROCEDURE:
The initial step of identifying the
cancer and the location can be done by
scanning. Once the location has been
identified through scanning, the task is
to position the nanodevice to the exact
location. We focus on the positioning of
the nanodevice into the required location
by itself. The nanodevice is allowed to
be placed into any part of the body (or)
the nano device is injected through the
blood vessel. The positioning is done
with the help of mathematical
calculations. External Control signals
could be used to avoid mishap or any
other errors.
The nanodevice is loaded with
a microchip. The device is also provided with its compounds concealed, so that it can be initiated externally through a computer. The nano device contains
sensors, motor, gene reader, processor,
transceiver, camera and power supply.
The location of the cancer cells is given
as coordinates in a 3-dimensional point
of view. This point is considered as the
reference and referred as (0, 0, 0).


POSITIONING


The nanodevice performs an internal calculation based on the difference between its current position and the reference. The mathematical computation is arranged so that only one axis is compared between the nanodevice and the reference at a time. The motor fan is oriented in a particular direction for each reference comparison. After one axis has been compared, the next axis is compared, followed by the third. The three-coordinate comparison thus resolves any 3-dimensional orientation of the nanodevice and results in exact positioning.
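A minimal sketch of this axis-by-axis comparison, under the simplifying assumption that the device can measure its own coordinates relative to the reference point (0, 0, 0), is shown below; the step size and coordinates are illustrative only.

```python
def axis_corrections(current, reference=(0.0, 0.0, 0.0), step=1.0):
    """Compare one axis at a time and return the number of motor steps
    (positive or negative) needed along x, y and z to reach the reference."""
    steps = []
    for cur, ref in zip(current, reference):
        delta = ref - cur
        steps.append(round(delta / step))  # one axis handled per comparison
    return tuple(steps)


if __name__ == "__main__":
    # Device currently at (3, -2, 5) relative to the target site at (0, 0, 0).
    print(axis_corrections((3.0, -2.0, 5.0)))  # (-3, 2, -5)
```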




NAVIGATION

The output of the mathematical operation is given to a driver circuit (motor). The driver helps the device to navigate through the blood with precision in direction and at the required speed. The device should therefore sample its new position against the reference at a certain sampling rate, chosen with respect to the velocity of blood flow.

The cancer killer could thus
determine that it was located in (say) the
big toe. If the objective were to kill a
colon cancer, the cancer killer in the big
toe would move to the colon and destroy
the cancer cells. Very precise control
over location of the cancer killer's
activities could thus be achieved. The
cancer killer could readily be
reprogrammed to attack different targets
using acoustic signals while it was in the
body.





ALGORITHM FOR NAVIGATION:
Step 1: Mark the co-ordinates.
Step 2: Initialize the start command.
Step 3: Feed the axis.
Step 4: Send the command to emit ultrasound.
Step 5: Wait for T seconds.
Step 6: If there is no reflected signal, or if the reflected signal is less than the threshold value, activate the stepper motor to rotate through a certain distance (note: the distance is proportional to one axis step).
Step 7: Subtract one from the axis value.
Step 8: Repeat steps 4 to 7 for both co-ordinates.
Step 9: If the reflected signal is greater than the threshold value, de-activate the motor.
Step 10: Activate motor 2 (perpendicular to motor 1); motor 2 moves through one step, making motor 1 change axis.
Step 11: Motor 1 is allowed to travel until the next change is required.
Step 12: Once the nanodevice reaches the required spot, the motor is deactivated through an external command.
Step 13: Receive the RF radiation for T seconds, a duration calculated in advance depending on the intensity of the tumor.
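A minimal sketch of how the step-by-step procedure above could be expressed in code is given below. The emit_ultrasound_and_read_echo and step_motor callables are hypothetical hardware hooks, and THRESHOLD, T_SECONDS and the axis counts are illustrative placeholders rather than values from the paper.

```python
import time

THRESHOLD = 0.5   # illustrative echo-strength threshold
T_SECONDS = 0.01  # illustrative wait between ultrasound pings


def navigate_axis(axis_steps, emit_ultrasound_and_read_echo, step_motor):
    """Steps 3-9: advance along one axis, one step per weak echo,
    until either the echo exceeds the threshold or the axis count runs out."""
    remaining = axis_steps
    while remaining > 0:
        echo = emit_ultrasound_and_read_echo()   # Step 4
        time.sleep(T_SECONDS)                    # Step 5
        if echo >= THRESHOLD:                    # Step 9: obstacle/target ahead
            return remaining
        step_motor()                             # Step 6: advance one step
        remaining -= 1                           # Step 7
    return 0


def navigate(coords, emit_echo, step_motor_1, step_motor_2):
    """Steps 8 and 10-11: handle both co-ordinates, switching axes
    with the perpendicular motor between them."""
    for axis_steps in coords:
        navigate_axis(axis_steps, emit_echo, step_motor_1)
        step_motor_2()  # Step 10: one step of motor 2 changes the axis


if __name__ == "__main__":
    # Dummy hardware hooks so the sketch runs stand-alone.
    navigate((3, 2),
             emit_echo=lambda: 0.1,
             step_motor_1=lambda: None,
             step_motor_2=lambda: None)
    print("navigation sketch finished")
```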
IMAGING
With the available technology, a
camera is inserted which helps us to
monitor the internal process. Whenever the blood vessel branches in multiple directions, the device is stopped through the external control signal and another signal is given to steer it in the right direction.

Current clinical ultrasound
scanners form images by transmitting
pulses of ultrasonic energy along various
beam lines in a scanning plane and
detecting and displaying the subsequent
echo signals. Our imaging is based on
the absolute scattering properties and in
the frequency dependence of scattering
in tissues, which will help to
differentiate between normal and
abnormal cells.

IDENTIFICATION
The nano device identifies the
cancer cells using a gene reader. A gene
reader is a sensor which contains ten to
fifty DNA probes or samples of cancer
cells that are complementary. The DNA
detection system generates an electronic
signal whenever a DNA match occurs or
when a virus causing cancer is present.
Whenever we get a signal indicating the
presence of cancer cells we go for
further processing. Once the cancer cells have been located, the next step is their destruction.
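As a toy illustration of the gene-reader idea, the sketch below checks whether a sampled DNA fragment is complementary to any of a small set of probe sequences and raises a "signal" when a match occurs. The probe sequences are invented examples, not real cancer-associated genes, and strand-orientation conventions are ignored for simplicity.

```python
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}


def complement(seq: str) -> str:
    """Return the base-pair complement of a DNA sequence
    (orientation conventions are ignored in this toy example)."""
    return "".join(COMPLEMENT[base] for base in seq)


def gene_reader(sample: str, probes) -> bool:
    """Signal True if the sample is complementary to any probe,
    i.e. the probe would bind to the sampled strand."""
    return any(complement(probe) == sample for probe in probes)


if __name__ == "__main__":
    probes = ["ATCGT", "GGCAT"]          # invented probe sequences
    sampled_fragment = "TAGCA"           # complementary to "ATCGT"
    print(gene_reader(sampled_fragment, probes))  # True -> electronic signal
```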
DESTRUCTION:
We can remotely control the
behavior of DNA using RF energy. An
electronic interface to the biomolecule
(DNA) can be created. RF magnetic
field should be inductively coupled to
nanocrystal antenna linked covalently to
a DNA molecule. The inductive
coupling results to the increase in the
local temperature of the bound DNA,
allowing the change of state to take
place, while leaving molecules
surrounding the DNA relatively
unaffected. The switching is fully
reversible, as dissolved molecules
dissipate the heat in less time duration.
Thus RF signal generated outside the
body can destroy the affected DNA.

RF HEATING
The treatment tip contains the
essential technology components that
transform RF to a volumetric tissue
heating source. The heat delivery surface
transmits RF energy to the cells. Tumors
that have little or no oxygen content (i.e.
hypoxia) also have increased resistance
to radiofrequency radiation. Thus, due to
high resistance to radio frequency
radiation the affected cells get heated
and hence destroyed. The RF carrier
frequency is in the biomedical range (174-216 MHz). A pair of RF pulses is transmitted at a repetition rate of about 1-2 Hz.
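A minimal sketch of how these pulse parameters could be represented and sanity-checked in code is given below; the class, its field names and the chosen values within the quoted ranges are illustrative assumptions, not part of any real treatment system.

```python
from dataclasses import dataclass


@dataclass
class RFPulseSettings:
    carrier_hz: float      # RF carrier frequency
    repetition_hz: float   # pulse-pair repetition rate

    def validate(self) -> bool:
        """Check the settings against the ranges quoted in the text:
        carrier 174-216 MHz, pulse repetition about 1-2 Hz."""
        carrier_ok = 174e6 <= self.carrier_hz <= 216e6
        repetition_ok = 1.0 <= self.repetition_hz <= 2.0
        return carrier_ok and repetition_ok


if __name__ == "__main__":
    settings = RFPulseSettings(carrier_hz=200e6, repetition_hz=1.5)
    print(settings.validate())                                    # True
    print(1.0 / settings.repetition_hz, "s between pulse pairs")  # ~0.67 s
```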



HOW NANO DEVICE ESCAPES
FROM IMMUNE SYSTEM?
Generally our immune system
attacks all the foreign particles entering
any part of our body. The problem has
been that such nano particles are similar
in size to viruses and bacteria, and the
body has developed very efficient
mechanisms to deal with these invaders.
It is known that bacteria with
hydrophilic surfaces can avoid being
destroyed by immune system and remain
circulating in the body for longer
periods. To emulate this effect, our nanodevice can be coated with a polymer such as polyethylene glycol (PEG), an approach that has been demonstrated in research.

CONCLUSION:
As per our aim we have proposed
the usage of nanotechnology and the RF
signal for the destruction of cancer cells.
This method does not affect the healthy cells, so the person affected by cancer is healthy after the treatment. The treatment does not involve critical operations and does not take as long as other treatments. With treatments other than nanotechnology, a treated patient may one day be affected again. The approach can also be used for other dangerous diseases.

Sri Kalaheeswara Institute of Tech.,
Sri Kalahasthi.

Topic:NANOTECHNOLOGY

Authors:
G.Balasubramanyam P.Kiran Kumar
Rollno:04381A0407 Rollno:04381A0421
Email:contact_subbu_407@yahoo.com Email:kiran_kumar.p@hotmail.co
vikymails2002@gmail.com
















NANOTECHNOLOGY

- Shaping the World Atom by Atom

The technology of 21st century,
Nanotechnology is the science of
developing materials at the atomic and
molecular level in order to imbue them
with special electrical and chemical
properties. Nanotechnology, which deals
with devices typically less than 100 billionths of a meter in size, is expected
to make a significant contribution to the
fields of computer storage,
semiconductors, biotechnology,
manufacturing and energy.
Nanotechnology involves such a steep
descent into the minuteness of matter
that it was considered science fiction
until a few years ago. A nanometer is 100 times thinner than the fine lines computer companies currently etch on silicon chips, and 50,000 times thinner than a human hair. In nanotechnology, scientists manipulate clumps of atoms and their arrangement. Since all products are made from atoms, their properties depend on how those atoms are arranged.
Abstract:
Nanotechnology research has already
been tagged as one of 11 critical areas.
With a major boost from President
Clinton, National Nanotechnology
initiative has been created. The
programme earmarked $495 million for
research projects. Larry Bock, CEO of
Nanosys, who helped to launch more
than a dozen successful biotech
companies in his career, believes that
nanotech will impact even more
industries than biotech. In an excerpted
article from the March 2003 Nanotech
Report, he compared nanotechnology
with the microelectronics industry. Bock
said that "a single chemistry graduate
student can create novel devices and
device architectures not even imaginable
or manufacturable by today's biggest
microprocessor companies".


Introduction:
The noun nanotechnology has one
meaning:
the branch of engineering that
deals with things smaller than 100
nanometers. (Especially with the
manipulation of individual molecules)
Nanotechnology comprises
technological developments on the
nanometer scale, usually 0.1 to 100 nm.
(One nanometer equals one thousandth
of a micrometer or one millionth of a
millimeter.) The term has sometimes
been applied to microscopic technology.
The term nanotechnology is often used
interchangeably with molecular
nanotechnology (also known as "MNT"),
a hypothetical, advanced form of
nanotechnology believed to be
achievable at some point in the future.
Molecular nanotechnology includes the
concept of mechanosynthesis. The term
nanoscience is used to describe the
interdisciplinary field of science devoted
to the advancement of nanotechnology.


The size scale of nanotechnology makes
it susceptible to quantum-based
phenomena, leading to often
counterintuitive results. These nanoscale
phenomena may include quantum size
effects and molecular forces such as
VanderWaals forces. Furthermore, the
vastly increased ratio of surface area to
volume opens new possibilities in
surface-based science, such as catalysis.
History
The first mention of
nanotechnology (not yet using that
name) occurred in a talk given by Nobel
Prize winner Richard Feynman in 1959
entitled There's Plenty of Room at the
Bottom in which he said, "The
principles of physics, as far as I can see,
do not speak against the possibility of
maneuvering things atom by atom. We
need to apply at the molecular scale the
concept that has demonstrated its
effectiveness".
The term Nanotechnology was
created by Tokyo Science University
professor Norio Taniguchi in 1974 to
describe the precision manufacture of
materials with nanometre tolerances.
Later, in the 1980s, Feynman's idea was championed most famously in K. Eric Drexler's book "Engines of Creation: The Coming Era of Nanotechnology". Feynman had also pointed out that some tools would need to be redesigned because the relative strength of various forces changes at small scales: gravity would become less important, while surface tension and van der Waals attraction would become more important. Drexler developed these ideas further in his MIT doctoral dissertation, later expanded into Nanosystems, a volume that summarizes years of research, assembles the conceptual and analytical tools needed to understand molecular machinery and manufacturing systems, and presents an analysis of their core capabilities. Nobody has yet effectively refuted the feasibility of his proposal. The goal of Drexler's
investigations is "building complex
structures with atom by atom control",
which is also the ultimate goal of
synthetic chemistry. Drexler's approach
is distinguished from conventional
chemistry in that complex structures are
to be made by using programmable
"nanoscale mechanical systems to guide
the placement of reactive molecules" to
about 0.1nm precision.




New materials, devices,
technologies
As science becomes more
sophisticated it naturally enters the realm
of what is arbitrarily labeled
nanotechnology. The essence of
nanotechnology is that as we scale things
down they start to take on extremely
novel properties. Nanoparticles (clusters at the nanometer scale), for example, have very interesting properties and are proving extremely useful as catalysts and in other uses. If we ever do make nanorobots, they will not be scaled-down versions of contemporary robots; it is the same scaling effects that make nanodevices so special that prevent this. Nano-scaled devices will bear much stronger resemblance to nature's nanodevices: proteins, DNA, membranes
etc. Supramolecular assemblies are a
good example of this.
One fundamental characteristic of
nanotechnology is that nanodevices self-
assemble. That is, they build themselves
from the bottom up. Scanning probe
microscopy is an important technique
both for characterization and synthesis
of nanomaterials. Atomic force
microscopes and scanning tunneling
microscopes can be used to look at
surfaces and to move atoms around. By
designing different tips for these
microscopes, they can be used for
carving out structures on surfaces and to
help guide self-assembling structures.
Atoms can be moved around on a
surface with scanning probe microscopy
techniques, but it is cumbersome,
expensive and very time-consuming, and
for these reasons it is quite simply not
feasible to construct nanoscaled devices
atom by atom. You don't want to
assemble a billion transistors into a
microchip by taking an hour to place
each transistor, but these techniques can
be used for things like helping guide
self-assembling systems.
One of the problems facing
nanotechnology is how to assemble
atoms and molecules into smart
materials and working devices.
Supramolecular chemistry is here a very
important tool. Supramolecular
chemistry is the chemistry beyond the
molecule, and molecules are being
designed to self-assemble into larger
structures. In this case, biology is a place
to find inspiration: cells and their pieces
are made from self-assembling
biopolymers such as proteins and protein
complexes. One of the things being
explored is synthesis of organic
molecules by adding them to the ends of
complementary DNA strands such as ---
A and ----B, with molecules A and B
attached to the end; when these are put
together, the complementary DNA
strands hydrogen bonds into a double
helix, AB, and the DNA molecule can be
removed to isolate the product AB.
Radical nanotechnology is a
term given to sophisticated nanoscale
machines operating on the molecular
scale. From the countless examples found in biology it is currently believed that radical nanotechnology would be possible to construct. Many scientists today think it likely that evolution has already produced biological nanomachines with close to optimal performance for nanoscale machines, and that radical nanotechnology would thus need to be made by biomimetic principles.
However, it has been suggested by
K.Eric Drexler that radical
nanotechnology can be made by
mechanical engineering like principles.
Drexler's idea of a diamondoid
molecular nanotechnology is currently
controversial and it remains to be seen
what future developments will bring.
Nanotechnology in
Industries
Nano-surgeons break the atomic
bond. The science of the small has
moved a huge step forward following
work in a subterranean Birmingham
laboratory, reports Roger Highfield. The
ultimate in surgery has been carried out
in a vibration-free bunker in deepest
Birmingham. Not only have scientists
working there managed to remove a
single atom of matter, measuring about a
tenth of a millionth of a millimetre
across, but they have achieved this feat
even though their subject was thrashing
around wildly. The feat is the ultimate in
the science of the small,
nanotechnology, that the practitioners
hope will one day help to remove
contaminants from the environment. One
can also see it as an extreme version of
precision chemistry, a far cry from what
usually happens in a laboratory.
Molecular Manufacturing
The proposal that advanced
nanotechnology will include artificial
molecular machine systems capable of
building complex systems to atomic
precision has been controversial within
the scientific community. In general,
proponents have argued from the
grounds of theoretical analysis coupled
with the existence of multiple plausible
implementation pathways from current
technology, while opponents have been
unimpressed with theoretical arguments.
Institute for Nanotechnology
At the forefront of this new
scientific frontier, the Institute for
Nanotechnology was established as an
umbrella organization for the
multimillion dollar nanotechnology
research efforts at Northwestern
University. The role of the Institute is to
support meaningful efforts in
nanotechnology, house state-of-the-art
nanomaterials characterization facilities,
and nucleate individual and group efforts
aimed at addressing and solving key
problems in nanotechnology.
As part of this effort, a $34 million,
40,000 square foot state-of-the-art
Center for Nanofabrication and
Molecular Self-Assembly was
constructed on the Evanston campus.
The new facility, which was anchored by
a $14 million grant from the Department
of Health and Human Services, is one of
the first federally funded facilities of its
kind in the United States and home to
the Institute headquarters.
Currently comprised of two major
interdisciplinary research centers and a
celebrated group of award-winning
faculty and students, the Institute
positions Northwestern University and
its partners in academia, industry, and
national labs as leaders in this exciting
field.
The multimillion dollar interdisciplinary
nanotechnology research efforts carried
out in the Institute are supported by the
NIH, NSF, ARO, ONR, AFOSR, DOE,
and many industrial and philanthropic
organizations.

International Association of
nanotechnology
The International Association of
Nanotechnology (IANT), is a non-profit
organization with the goals to foster
scientific research and business
development in the areas of Nanoscience
and Nanotechnology for the benefits of
society.
The annual meeting of the International
Congress of Nanotechnology (ICNT)
2005 will be held on October 31-
November 4, 2005 at the San Francisco
Airport Marriott Hotel.
ICNT 2005 Overall Program
ICNT is the premier international
conference, which covers a
comprehensive spectrum of the
emerging field of Nanotechnology: from
the latest research and development in
nanomaterials, nanoelectronics,
nanophotonics, nanobiotechnology,
nanomedicine, nanoethics, education,
environmental, societal and health and
safety implications, to nanotech venture
capital investment and technology joint
venture.
As of August 15, we are pleased to have
185 speakers and presenters.
A partial list of more than 100 speakers
and poster presenters from 33 countries
is given below. Additional speakers and
presenters will be posted online by end
of August.
CONFERENCE THEME:
Building Infrastructure for the Next
Frontier:
Presenting the ROAD MAP
for Nanotechnology Industry
Focusing on the converging
of Nano-Bio-Info
technologies
Presenting the state-of-the-art
Scientific and Technological
advancement
Highlight a wide spectrum of
its applications: from
electronics to chemical
industry, from semiconductor
to aerospace, from
automotive to textile, from
chemical to biotechnology
industry.
Working groups and forum
on developing Nomenclature
and Standards
Working groups and forum
on Nano Toxicology
Working groups and forum
on Education and Training
Investment Forum for
Emerging Nanotech
Companies
CONFERENCE TOPICS:
Nanomaterials
Nanodevices
Nanoelectronics
Nanobiotechnology
Nanomedicine
Nanotechnology in
Semiconductor Industry
Nanotechnology in Aerospace
Nanotechnology in
Biopharmaceutical Industry
Nanotechnology in Textile
industry
Nanotechnology in Energy
Industry
Nomenclature Terminology
International Standards
Nano Tools
Nanoparticle Toxicology
Societal & Environmental
Impacts
Health and Safety Implications
Research Collaboration
Education & Training
Capital Funding and Grants for
Start-up Ventures

USES
Nanotechnology kills cancer cells
Nanotechnology has been
harnessed to kill cancer cells without
harming healthy tissue.




The technique works by inserting
microscopic synthetic rods called carbon nanotubes into cancer cells. When the
rods are exposed to near-infra red light
from a laser they heat up, killing the cell,
while cells without rods are left
unscathed.
Details of the Stanford
University work are published by
Proceedings of the National Academy of
Sciences. Researcher Dr. Hongjie Dai said, "One of the longstanding problems
in medicine is how to cure cancer
without harming normal body tissue.
"Standard chemotherapy destroys cancer
cells and normal cells alike. That's why
patients often lose their hair and suffer
numerous other side effects."
"For us, the Holy Grail would be finding
a way to selectively kill cancer cells and
not damage healthy ones."
Nanotechnology: Hell or Heaven?
Perhaps a little bit of both. When it comes to the possibilities of nanotechnology, it can be hard to know what to expect: glittering visions of abundance and long, healthy life spans; fears of out-of-control world-destroying devices, pervasive surveillance tyrannies, and devastating nanotech wars; or maybe all of the above. The Foresight Institute's First Conference on Advanced Nanotechnology, held across the Potomac River from Washington, D.C., offered
hope, fear, and audacious scenarios for
the future.
To illustrate what nanotech progress
might do for the world's poor, Bruns
imagined a potential Whole Earth
Catalog for 2025, loaded with nanotech
devices. He found low energy ultra-
efficient water filtration systems that
could purify any contaminated or saline
water into fresh water suitable for
drinking or irrigation. (An earlier
presentation by Gayle Pergamit
described a water filtration system using
nanopore membranes now being
developed by Aguavia, in which a six-
inch cube of membranes could purify
100,000 gallons of water a day.)
Even the world's poorest could shop this
2025 catalog for cheap solar roofing
panels. The sturdy plastic panels are
composed of "failsoft" nanocells that
automatically reroute electric flows if
the panels are cut, nailed, or damaged.
As evidence that such panels are
possible, Bruns cited the work of
Konarka, a company developing cheap
solar panels that come in rolls like Saran
Wrap. This 2025 catalog would also
offer "comsets" extremely powerful
energy-sparing computers that would fit
entirely inside the frames of eyeglasses
or in jewelry.
Nanoclinics will be a popular
choice in 2025 for those living far from
hospitals and doctors. Nanoclinics the
size of suitcases, powered by those
cheap solar panels, will contain a full
range of diagnostics and therapeutics,
along with preventive and restorative
treatments. "Turn trash into treasure,"
reads the 2025 catalog copy for
nanorefineries that can break down any
unwanted consumer items, sewage
sludge, and any other waste. The
nanorefineries could be linked directly to
nanofabs to provide feedstocks for
producing new consumer goods.
In a presentation on "The Top
Ten Impacts of Molecular
Manufacturing," Phoenix predicted that
products made using a mature molecular
nanotechnology would cost $1 per
pound to make. After nanotech factories
hit their stride, molecular manufacturing
will provide more manufacturing
capacity than all the world's factories
offer today. We will see the advent of
cheap solar power and cheap energy
storage, and inconceivably cheap high-
powered computers the size of
wristwatches. The components needed to
put a kilogram of material into orbit
would fit inside of a suitcase.
Nanotechnology would make it possible
for 100 billion people to live sustainably
at a modern American standard of living,
while indoor agriculture using high-
efficiency inflatable ten-pound diamond
greenhouses would help restore the
world's ecology. The ultimate limit to
economic growth seems to be heat
pollution, the waste energy radiated
away from nanotech devices.
According to Robert Freitas, the medical
nanotech guru at the Institute for
Molecular Manufacturing, not only will
nanotechnology provide us with a lot of
cool stuff and eliminate global poverty,
it will also help us live a really long
time. In his lecture on "Nanomedicine
and Medical Nanorobotics," Freitas
predicted that we would see in the next
five years biologically active
nanoparticles used as diagnostic sensors.
He also described a project at the
University of Michigan to use tecto-
dendrimers, complex tree shaped
molecules that could be designed to
simultaneously sense and destroy cancer
cells.


But Freitas' vision and true passion is
medical nanorobots. He has designed
respirocytes composed of 18 billion
precisely arranged atoms, consisting of a
shell of sapphire with an onboard
computer. It will be embedded with
rotors to sort oxygen from carbon
dioxide molecules. These respirocytes
would be able to hold oxygen at 100,000
atmospheres of pressure. Just five cc's of
respirocytes, 1/1000th the volume of the
body's 30 trillion oxygen and carbon
dioxide carrying red blood cells, could
supply enough oxygen to keep alive for
four hours a person whose heart had
stopped.

The hopeful news is that while
technological advances could in fact
make humanity worse off, that has not
been our record so far. Every
technological advance has produced
downsides, but so far the benefits have
far outweighed the risks. It's my bet that
that will also be true of nanotechnology.
We are unlikely to descend to nanotech
hell. But it is probably inevitable that
some of us will be scorched by a bit of
nanotech hellfire as we ascend to
nanotech heaven.
Nanotechnology in fiction
In movies and TV series:
The X-Files episode 6x10 "S.R.
819", in which A.D. Skinner's blood
is infected with nanobots
the Borg, and also the race of
Nanites, in Star Trek
the Replicators in Stargate SG-1
the T-X (Terminatrix) in
Terminator 3: Rise of the
Machines
the Moonlight Butterfly system
in Turn A Gundam
In video games:
Deus Ex
Everything or Nothing
System Shock series
Neocron
Anarchy Online
In books:
Alexandr Lazarevich, The
NanoTech
Network (http://www.webcenter.
ru/~lazarevicha/ntn_toc.htm)
Greg Bear's Blood Music and
Slant
Stephen Euin Cobb's novel
Plague at Redhook
Michael Crichton's Prey
Neal Stephenson's The Diamond
Age and Snow Crash
Ben Bova's Grand Tour novels.
Robert Charles Wilson's A
Bridge of Years
Important people
Richard Feynman
Prof. Norio Taniguchi
K. Eric Drexler
Robert Freitas
Ralph Merkle
Sumio Iijima
Richard Smalley
Laboratories
The MEMS and Nanotechnology
Clearinghouse / The world's most
popular portal for
Nanotechnology information,
jobs, and events
The London Centre for
Nanotechnology / A research
centre jointly set up by
University College London and
Imperial College London
The MEMS and Nanotechnology
Exchange / A repository of
Nanotechnology fabrication
information
The Smalley Group / Carbon
Nanotechnology Laboratory
Center for Biological and
Environmental Nanotechnology
Center for Nano & Molecular
Science & Technology- CNM at
UT Austin
Center for Nanoscale Science
and Technology at Rice
University
Advanced Micro/Nanodevices
Lab at the University of
Waterloo
A PAPER PRESENTATION
On
NANOTECHNOLOGY AND CRYOGENS:
PACKAGE APPROACH TO RAISE THE DEAD.
By
P.SRAVANI
&
D.SHILPA

III / IV E.C.E,

SRIKALAHASTHEESWARA INSTITUTE OF
TECHNOLOGY
SRIKALAHASTI-517560


Email: p_sravani_ece@yahoo.com, d_shilpa_ece@yahoo.com








INDEX

INTRODUCTION

PROCEDURE

4 STEPS

DAMAGE FROM ICE FORMATION AND ISCHEMIA

REVIVAL

CONCLUSION










ABSTRACT:
India has lost many of its eminent personalities at a considerably young age to
diseases that did not have a cure in those days. Today we have medicines for many of those
diseases. If those great personalities could somehow have been preserved until now, we could
have saved them by treating them with the latest technology. This paper deals with a
technology, cryonics, that can preserve people for 33,000 years after their death.
We also discuss a technology that may have the capability to treat these people some day:
nanotechnology. Thus cryonics is a prerequisite for the effective utilization of
nanotechnology: patients can be preserved until nanotechnology is properly
developed, and then treated.
Cryonics (often mistakenly called "cryogenics") is the practice of cryopreserving
humans or animals that can no longer be sustained by contemporary medicine until
resuscitation may be possible in the future. The process is not currently reversible, and by
law can only be performed on humans after legal death in anticipation that the early
stages of clinical death may be reversible in the future. Some scientists believe that future
medicine will enable molecular-level repair and regeneration of damaged tissues and
organs decades or centuries in the future. Disease and aging are also assumed to be
reversible.
The central premise of cryonics is that memory, personality, and identities are
stored in the structure and chemistry of the brain. While this view is widely accepted in
medicine, and brain activity is known to stop and later resume under certain conditions, it
is not generally accepted that current methods preserve the brain well enough to permit
revival in the future. Cryonics advocates point to studies showing that high
concentrations of cryoprotectant circulated through the brain before cooling can mostly
prevent freezing injury, preserving the fine cell structures of the brain in which memory
and identity presumably reside.

PROCEDURE:
Cryonicists try to minimize ischemic and reperfusion injury by beginning cardio-
pulmonary support (much like CPR) and cooling as soon as possible after pronouncement
of death. Anti-clotting agents like heparin and antioxidants may be administered.
Figure 1: Alcor operating theater

Below we outline the major procedures used to place a patient into cryonic suspension.
There are four main steps:
1. Stabilize, cool, and transport the patient.
2. Perfuse the patient with cryoprotective solutions.
3. Lower the patient's temperature to -79C.
4. Lower the patient's temperature to -196C.

4 MAIN STEPS
Step 1 :
Follow these guidelines when the patient is pronounced dead:

Figure 2: cryonists at work

Maintain blood flow and respiration of the patient (with caution).
Cool the patient by surrounding with ice bags, especially the head
Inject 500 IU/kg of heparin.
Use sterile technique if possible. This procedure should be performed in conjunction with
a physician, nurse, or paramedic.
1) At the time of death maintain blood flow and oxygenation to limit ischemic injury.
Administer the oxygen through a facemask, or preferably an endotracheal tube. Avoid
mouth-to-mouth resuscitation, because of the danger of infection. Do cardiopulmonary
resuscitation manually until a mechanical heart-lung resuscitator (with 100% O2) can be
employed.
2) Establish venous cannulation in the forearm, employ a 3-way stopcock and tape
securely, before the time of death if possible, for the administration of pharmacological
agents.
3) Place the patient on a cooling blanket, if available, and circulate coolant. Surround the
patient with Ziploc ice bags, paying particular attention to cooling the head. Lower the
body temperature toward 0C.
4) Insert thermocouple probes in the esophagus and in the rectum, and monitor
temperature throughout the protocol.
5) Tape the eyelids closed to prevent dehydration.
6) Inject 300 mg Tagamet (cimetidine HCl), or administer 20 ml Maalox through a
gastric tube, to prevent HCl production by the gastro-intestinal tract.
7) When suitable, use a Foley catheter to drain the bladder.
Step 2 :
This perfusion step should be performed with the guidance of a surgeon,
perfusionist, and medical technician. Expose and cannulate the carotid artery and jugular
vein. Secure the cannulas and attach them to the tubing of the bypass circuit.
Figure 3: cryogenic equipment
These catheters should be coupled to pressure sensors. Monitor pH, O2, CO2, and
cryo-protectant concentration by using a refractometer.
Begin total body washout and replace the blood with 4 to 6 liters of cryoprotective
solution (one blood volume or 5 L / 70 kg). Discard the venous effluent into containers
holding Clorox bleach.
After perfusion is complete, decannulate and suture the surgical wounds.

Step 3 :
Place thermocouples on the surface of the skin, in the esophagus, and in the
rectum. Monitor the patient's temperature and freeze gradually. The rate of temperature
lowering should ideally be between 0.01 and 0.1 degrees C per minute, with slower rates
preferred, especially after the patient has solidified.
Figure 4: patient shifted into a container
Step 4 :
Place the patient in a container, and suspend the container above the (low) level of
liquid nitrogen in a Dewar, to begin vapor phase cooling to -196C. Cooling should
continue slowly at about 0.01C per minute if possible. Rapid cooling may cause stress
fractures.
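As a rough illustration of these cooling guidelines (the rates and temperatures come from the
steps above; the helper function and its name are our own, not part of any cryonics protocol),
a few lines of Python can estimate how long each cooling phase takes:

```python
def cooling_time_hours(start_c, end_c, rate_c_per_min):
    """Time needed to cool from start_c to end_c at a constant rate (deg C per minute)."""
    return (start_c - end_c) / rate_c_per_min / 60.0

# Step 3: cool from roughly 0 C to -79 C at 0.1 C/min (upper end of the guideline)
print(cooling_time_hours(0, -79, 0.1))     # about 13 hours

# Step 4: cool from -79 C to -196 C at about 0.01 C/min
print(cooling_time_hours(-79, -196, 0.01)) # about 195 hours, roughly 8 days
```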

DAMAGE FROM ICE FORMATION AND ISCHEMIA
The freezing process creates ice crystals, which some scientists have claimed
damage cells and cellular structures so as to render any future repair impossible.
Cryonicists have long argued, however, that the extent of this damage was greatly
exaggerated by the critics, presuming that some reasonable attempt is made to perfuse the
body with cryo-protectant chemicals (traditionally glycerol) that inhibit ice crystal
formation.
SOLUTION TO THIS PROBLEM:
Vitrification preserves tissue in a glassy rather than frozen state. In glass,
molecules do not rearrange themselves into grainy crystals as they are cooled, but instead
become locked together while still randomly arranged as in a fluid, forming a "solid
liquid" as the temperature falls below the glass transition temperature. The Cryonics
6
Institute developed computer-controlled cooling boxes to ensure that cooling is rapid
above T
g
(glass transition temperature, solidification temperature) and slow below T
g
(to
reduce fracturing due to thermal stress). If the circulation of the brain is compromised,
protective chemicals may not be able to reach all parts of the brain, and freezing may
occur either during cooling or during rewarming. Cryonicists argue, however, that injury
caused during cooling might, in the future, be repairable before the vitrified brain is
warmed back up, and that damage during rewarming might be prevented by adding more
cryo-protectant in the solid state, or by improving rewarming methods. A further criticism is
that the cryoprotectant chemicals themselves are toxic at the concentrations required; again,
however, cryonicists counter that future technology might be able to overcome this difficulty
and find a way to combat the toxicity after rewarming. Some critics have speculated that
because a cryonics patient has been declared legally dead, their organs must be dead, and
thus unable to allow cryo-protectants to reach the majority of cells. Cryonicists respond
that it has been empirically demonstrated that, so long as the cryopreservation process
begins immediately after legal death is declared, the individual organs remain
biologically alive, and vitrification (particularly of the brain) is quite feasible. This
same principle is what allows organs, such as hearts, to be transplanted, even though they
come from dead donors.

REVIVAL
Revival requires repairing damage from lack of oxygen, cryo-protectant toxicity,
thermal stress (fracturing), and freezing in tissues that do not successfully vitrify. In
many cases extensive tissue regeneration will be necessary. Hypothetical revival
scenarios generally envision repairs being performed by vast numbers of microscopic
organisms or devices. More radically, mind transfer has also been suggested as a possible
revival approach if and when technology is ever developed to scan the memory contents
of a preserved brain.
It has been claimed that if technologies for general molecular analysis and repair
are ever developed, then theoretically any damaged body could be revived. Survival
would then depend on whether preserved brain information was sufficient to permit
restoration of all or part of the personal identity of the original person, with amnesia
being the final dividing line between life and death. Nanotechnology is capable of
delivering medication to the exact location where it is needed. Organic dendrimers, a type
of artificial molecule roughly the size of a protein, would be ideal for the job of
delivering a medicine. They are more durable than proteins, as they have stronger
bonds.
They contain voids inside of them, giving them large internal surface areas; just
what is needed for delivering a medicine. There is a possibility of designing dendrimers
that swell and release cargo only when the required target molecules are present, allowing
a payload intended for tissue A to reach tissue A and not somewhere else.
Figure 5: A dendrimer with a target cell.
Heart attacks kill more people a year in the United States than anything else. A
heart attack is the clogging of key arteries that supply the heart. A nanorobot could prevent
these clots by acting as a kind of shepherd: it can clear clots that start to form and
move the material along. Heart attacks, strokes, and blood clots could be effectively
prevented by this method.
Figure 6: A nanorobot at work
As seen in figure 6 above, nanorobots could be sent into the blood stream
and there repair the damaged cells. These robots are programmed to deliver drugs to
particular cells that are damaged, while healthy cells are not touched. The cell
responds to external triggers during gestation and development. Instructive apoptosis is
triggered by certain ligands coming in contact with certain receptors on the cell's surface.
A nanoprobe has the capability of carrying these ligands. Another relatively simple way
of triggering apoptosis is to breach the cell membrane of a target cell with a tube and
drain the cytoplasm. Getting rid of cells using apoptosis is advantageous since no damage
to other cells in the vicinity occurs: the contents of the cell are neatly packaged and
discarded as the cell dies. Cancer cells, molds, and other eukaryotes can effectively be killed
using this method. Nanoprobes can be designed to target those organisms and effectively
destroy them.

CONCLUSION:
So we see that people can be kept in a suspended state and later cured.
Cryonics thus gives nanotechnology a chance to prove itself; cryonics and nanotechnology
form a useful pair. Nanotechnology is an infant science, but it has the potential to cure
almost every disease. Once scientists succeed in reviving a person, people will believe in it.
Let us use this technology for constructive purposes and bring back to life those who died
very young, and those who are old in years but still have much to contribute to science
and humanity.








NETWORK ON A CHIP


MODELING WIRELESS NETWORKS
WITH ASYNCHRONOUS VLSI

















Presented by
S.Nithya
sekarsaras@yahoo.com

S.Parveen Reshma
parveenreshma@yahoo.com


Mepco Schlenk Engineering College
Sivakasi
Virudhunagar District
Tamil Nadu.










TABLE OF CONTENTS

Abstract
Introduction
Background
Network simulation
Parallel Discrete event simulation
Conservative simulation
Optimistic simulation
Abstraction and Analytic Techniques
Asynchronous VLSI
Comparison of Asynchronous VLSI and
Networks
Network on a chip
Channel models
Data link and Medium access control
Routing
Evaluation
Asynchronous VLSI
Network configuration and traffic
Summary








ABSTRACT:
ALL NETWORKS ARE UNDER
CONTROL OF SINGLE CHIP
This paper describes the modeling of wireless
networks with asynchronous VLSI. Network on
a chip is a programmable, asynchronous VLSI
architecture for fast and efficient simulation of
wireless networks. The approach is based on the
similarity between networks and asynchronous
VLSI, and it results in simulators that can
evaluate networks much faster than real time.
Our architecture combines dedicated VLSI
components with software models that are used
to program the hardware, providing a flexible
simulation infrastructure.
The design of such chips for applications like
telecommunications faces demanding challenges
due to the large complexity of the systems, and
an improved design methodology is required
which allows reuse at all levels of design.
In this project we develop a new
architecture template, called network on a chip
(NOC), for future integrated telecommunication
systems. The NOC template provides vertical
integration of the physical and architectural levels
in system design. In the NOC template, a chip
consists of contiguous areas called regions, which
are physically isolated from each other but have
special mechanisms for communicating with one
another. A region of the NOC is composed of
computing resources in the form of processor
cores and field-programmable logic blocks,
distributed storage resources, and programmable
I/O, with all these resources interconnected by a
switching fabric that allows any resource to
communicate with any other resource.

INTRODUCTION:
The complexity of modeling the behavior
of networks has resulted in the widespread use
of network simulation as a technique to evaluate
network protocols and architectures. Network
simulators help in determining whether a particular
protocol change would improve the overall
behavior of the network. Research in the area of
network simulation has focused on developing
fast software simulation techniques for
Internet-scale networks.
A wireless channel is more complex than
a traditional wired transmission channel. For
instance, messages sent by different entities to
distinct destinations can interfere with one
another, something that is not a typical
occurrence in a switched wired network.

BACKGROUND:
Network Simulation:
A network consists of a number of
nodes that can communicate with one
another by exchanging messages.
In a wire line network, a node is
connected to a collection of neighboring
nodes by physical communication links.
In a wireless network, a node simply
broadcasts a message, and some subset of
nodes that are physically proximate to
the transmitting node will receive the
message.
A network simulator attempts to
simulate the behavior of the network.
The core of a network simulator is a
discrete event simulation engine.
Each entity in the network being
simulated can generate events, and these
events themselves trigger other events.
Each event occurs at some point in
time, known as its timestamp, and the
events are stored in an event queue that
is ordered in time.
The simulation proceeds by selecting
the first event and executing it. This
event will in turn add additional events
to the queue. If the simulator has
accurate time estimates for each event,
and every physical event is modeled,
the simulation correctly reproduces the
behavior of the complete network.
The drawback to using such a discrete
event simulation approach is that each
event in the network is executed
sequentially.
Simulating an hour of activity in a
complex network can take many hours,
if not days of simulation time using a
direct discrete event based approach



The problem grows as the number of active nodes
in the network increases, since this increases the
number of events that the network simulator
must execute in a given interval of time.
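To make the event-queue idea concrete, here is a minimal discrete event simulation loop in
Python; the event names and handlers are illustrative, not part of any particular simulator:

```python
import heapq, itertools

class Simulator:
    """Minimal discrete event engine: events are (timestamp, action) pairs
    kept in a priority queue ordered by time."""
    def __init__(self):
        self.queue = []
        self.now = 0.0
        self._tie = itertools.count()   # breaks timestamp ties so actions are never compared

    def schedule(self, delay, action):
        heapq.heappush(self.queue, (self.now + delay, next(self._tie), action))

    def run(self, until=float("inf")):
        while self.queue and self.queue[0][0] <= until:
            self.now, _, action = heapq.heappop(self.queue)
            action(self)   # executing an event may schedule further events

# Illustrative use: a node that sends a packet every 2 time units.
def send_packet(sim):
    print(f"t={sim.now:.1f}: packet sent")
    sim.schedule(2.0, send_packet)

sim = Simulator()
sim.schedule(0.0, send_packet)
sim.run(until=6.0)
```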

Several approaches to tackle the
problem of simulation speed are
The use of parallelism in the
simulator itself.
The use of abstraction and analytic
techniques.
Parallel Discrete event simulation:
This approach attempts to
run the discrete event
simulation engine on a
multiprocessor system.
There is one constraint that any discrete event simulator must maintain:
the simulator must produce a result that is the same as the result obtained
when each simulation event is executed in the order specified by its timestamp.
This constraint is easy to achieve when the simulator runs on one processor,
because all the events are kept in a single queue that is ordered in time.
The key problem here is
that an event received
from a remote processor
may arrive late, thus
violating our simulation
requirement. This is
sometimes referred to as
the event ordering
problem.

Conservative Techniques:
These techniques hold off local execution until the processor is certain
that no event received from a remote processor will arrive late.
Optimistic Techniques:
These techniques continue local execution, but provide a roll-back capability
in case an event from a remote processor arrives late. The additional
synchronization required among the processors introduces overhead in the
parallel simulation that limits speedup.
Abstraction and Analytic Techniques:
The abstraction technique ignores certain details of the underlying network
in the interest of improving the performance of the simulation. Since
abstracting away details might change the results of the simulation, care
must be taken to identify the parts of the simulation that can be safely
ignored.
This approach may be combined with analytic techniques, where multiple steps
of the simulation are mathematically analyzed and the results of the analysis
are used to reduce the number of events required by the discrete event
simulator.
Asynchronous VLSI:
Traditional circuit
design uses clock
signals to implement
sequencing.
If two operations have to be sequenced, the clock signal is used to indicate
when one operation is complete and the next can begin. The clock determines
when the circuit advances from one step to the next, and all parts of the
circuit must have completed their local computations in the time budget
provided by a clock cycle.
In contrast, one can
design circuits where
sequencing is not
governed by a global
clock signal, such
circuits are said to be
asynchronous.
Asynchronous design is
an appealing
methodology for large
scale systems.
The absence of a global clock means that local signals have to be used to
determine when the output of a function block is ready. Large asynchronous
circuits typically use delay-insensitive codes to represent data, although
other implementations are possible.
Asynchronous circuits are designed by describing the VLSI computation using a
programming notation. This notation is called Communicating Hardware
Processes (CHP).



Advantages of
asynchronous VLSI:
In this notation, an asynchronous VLSI computation is described as a
collection of sequential processes that communicate with one another by
exchanging messages.
This approach has been successfully applied to the design of asynchronous
microprocessors, as well as other custom asynchronous VLSI implementations.
The circuits in an asynchronous VLSI implementation are idle until they
receive input data. The arrival of an input is an event that triggers the
circuit, activates it, and makes it produce an output that in turn triggers
other events in the asynchronous computation.
It is this event-driven nature of asynchronous circuits that makes them
resemble networks.









Comparison of
Asynchronous VLSI
and Networks:

1. Asynchronous VLSI: Each elementary process in the circuit is sequential.
   Networks: An elementary component of a network node can only execute one
   action at a time.
2. Asynchronous VLSI: A collection of concurrently executing CHP processes
   can behave like one logical entity.
   Networks: A network node can consist of multiple, concurrently executing
   components that together provide the functionality of a single node.
3. Asynchronous VLSI: Different CHP processes exchange information by sending
   messages to each other.
   Networks: Network nodes also exchange information by message passing.
4. Asynchronous VLSI: In this computation, there is no centralized control
   that sequences all the events in the system.
   Networks: There is no centralized control that sequences all the events in
   the system.







It is this set of similarities that allows us to draw an analogy between
asynchronous VLSI and communication networks.
Network on a Chip:
Inspired by the similarity between networks and asynchronous VLSI systems, we
can model networks in silicon. Although asynchronous VLSI systems share many
properties with networks, there is a major difference: message-passing speeds
in silicon are many orders of magnitude faster than those in networks.
Our approach exploits this difference to construct network models in silicon
that can produce simulation results faster than the networks themselves, and
faster than real time.
Asynchronous VLSI approach
will be energy efficient
compared to a traditional
software based solution, and
the resulting implementation
could be integrated into
existing networking devices.
Modeling a wireless network
in silicon is similar in spirit to
existing component based
software models of wireless
networks. The main point of
departure is that several
components in the simulation
infrastructure are directly
modeled in VLSI. The NOC
architecture combines these
dedicated, VLSI components
with software models.



Our proposed architecture
includes, but is not limited to,
the following major
components:
1. Channel Model
2. Data Link Layer and
Medium access
control
3. Routing


1. Channel Model:
The channel model is responsible for determining the quality of the
communication links between network nodes. This model simulates effects such
as signal attenuation and fading. We require an explicit channel model
because the silicon implementation contains reliable communication links;
additional circuits are required to make these silicon communication channels
behave like wireless channels. The channel model is directly implemented in
hardware, with parameters that are software controlled, so as to avoid
incurring software overhead in channel modeling. Once a message arrives at a
node after passing through a channel model, it enters the hardware model of
the data link and medium access control layer.
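As a rough software analogue of such a programmable channel model (the path-loss form, the
parameter values, and the function name below are our own assumptions, not details given in
the paper), one might write:

```python
import math, random

def channel_delivers(distance_m, tx_power_dbm=0.0, path_loss_exp=3.0,
                     fade_sigma_db=4.0, rx_threshold_dbm=-85.0):
    """Decide whether a packet survives the wireless channel.

    Uses a log-distance path-loss model plus log-normal fading; every
    parameter is adjustable, mirroring the idea of a hardware channel model
    with software-controlled registers.
    """
    if distance_m <= 1.0:
        return True
    path_loss_db = 10.0 * path_loss_exp * math.log10(distance_m)
    fading_db = random.gauss(0.0, fade_sigma_db)
    rx_power_dbm = tx_power_dbm - path_loss_db + fading_db
    return rx_power_dbm >= rx_threshold_dbm

# Example: estimate the packet delivery ratio at 500 m.
trials = 10_000
delivered = sum(channel_delivers(500.0) for _ in range(trials))
print(f"estimated delivery ratio at 500 m: {delivered / trials:.2f}")
```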



2. Data Link Layer:
This layer models the link delay and bandwidth,
including potential overhead incurred due to
media access control algorithms. Like the channel model, this model contains
programmable registers that can be externally set to control its
behavior.
Hardware support for routing in the
NOC architecture is used for two
purposes:
To provide a fast routing
technique for multi hop routing
protocols
To provide a flexible mechanism by which a static hardware structure can be
programmed to support multiple network topologies.

Otherwise, we could only simulate one topology with a single chip.
Another way to
think about this
implementation
strategy is to treat
each piece of the
VLSI network model
as a component that
would traditionally
be an entity in a
parallel discrete
event simulation.
Indeed, the
similarities between
the strategy we have
described and
strategies for
component based
software network
simulation are
striking.
The key difference
between the VLSI
implementation
when compared with
parallel simulation
environment is that
the asynchronous
VLSI
implementation
mimics the network
itself and has
adjustable delay
parameters.

The net result is that the time at which an event occurs in the asynchronous
circuit will be a scaled version of the actual time at which the event occurs
in the network.
Therefore the VLSI
simulation does not
suffer from the
event ordering
problem because the
event ordering
problem is an
artifact of the
discrete event
simulation algorithm
and is not a physical
phenomenon.
Our simulation
architecture can also
take advantage of
the technological
scaling properties of
VLSI, sometimes referred to as Moore's Law.

MOORE'S LAW:
It states that the number of devices that fit on a single chip doubles every
eighteen months. As the speed of transistors on a chip increases, the
performance of our network simulator will improve; in addition, the number of
nodes we can simulate in a fixed amount of area will increase as well.
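As a back-of-the-envelope illustration of that doubling (the starting device count and the
six-year span are arbitrary values chosen only for the example):

```python
def devices_after(years, start_count=1_000_000, doubling_period_years=1.5):
    """Device count after `years`, doubling every 18 months per Moore's Law."""
    return start_count * 2 ** (years / doubling_period_years)

# Starting from one million devices, six years gives 2**4 = 16x as many.
print(f"{devices_after(6):,.0f}")   # 16,000,000
```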
Evaluation:
Asynchronous VLSI
model:
An important property of the
VLSI network simulator that we
must preserve is absence of
deadlock.
It is well known that any
hardware routing strategy that
permits routing cycles cannot be
deadlock free.
Networks, on the other hand,
could have cycles in their routing
protocols.
To avoid this problem we adopt the same
behavior as a network. When routing
resources are exhausted, network packets
are dropped.
Our hardware model uses a credit-based flow control system: every node begins
with a certain number of credits for each link, corresponding to the amount
of data buffering available for that link. Nodes drop packets when incoming
traffic is too high, just like a communication network. Whenever a node
receives a packet that completely fills its buffer for that port, the node
drops one packet in that buffer and returns a credit to the sender.
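A small software sketch of this credit-and-drop behavior (the class and field names are ours;
the hardware realizes the same idea with counters and buffers):

```python
from collections import deque

class Port:
    """One link endpoint with fixed buffering; when an arrival finds the
    buffer full, one packet is dropped and a credit is returned to the sender."""
    def __init__(self, buffer_size):
        self.buffer = deque()
        self.buffer_size = buffer_size
        self.credits_returned = 0
        self.dropped = 0

    def receive(self, packet):
        if len(self.buffer) >= self.buffer_size:
            self.buffer.popleft()        # buffer full: drop one packet
            self.dropped += 1
            self.credits_returned += 1   # give the credit back to the sender
        self.buffer.append(packet)

port = Port(buffer_size=4)
for i in range(10):
    port.receive(f"pkt{i}")
print(port.dropped, port.credits_returned)  # 6 packets dropped, 6 credits returned
```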

The combination of
credits and dropping
packets ensures the
VLSI circuit will be
deadlock free for
random routing
tables.
Different queue management strategies drop different packets when overflow
occurs, and sometimes even before overflow is observed.
The VLSI
implementation of a
network node
consists of a routing
table, message
buffers, and models
for local packet
generation.
When a message
arrives from the
network it passes
through the routing
table if necessary
and is either routed
to the next hop or is
locally processed.
If a link is congested, the message waits in a buffer until routing resources
are available. The local node model injects messages into the network for
different destination nodes.

Network Configuration:
We wrote several topology
generators that are capable of
generating ring, grid and
random network topologies.

Simulation:
1. Fixed traffic pattern
2. Random traffic
pattern
For ring topology each node
can be configured to send
packets to nodes a fixed
distance away, or to random
destinations.
For grids, each node can send
messages to a fixed distance
away in both dimensions.
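For illustration, software topology generators of this kind might look like the following
sketch (the paper's generators presumably emit hardware routing tables; this Python version
just builds adjacency lists):

```python
def ring_topology(n):
    """Each node i is linked to its two neighbours on a ring of n nodes."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def grid_topology(rows, cols):
    """Each node (r, c) is linked to its horizontal and vertical neighbours."""
    links = {}
    for r in range(rows):
        for c in range(cols):
            nbrs = []
            if r > 0:        nbrs.append((r - 1, c))
            if r < rows - 1: nbrs.append((r + 1, c))
            if c > 0:        nbrs.append((r, c - 1))
            if c < cols - 1: nbrs.append((r, c + 1))
            links[(r, c)] = nbrs
    return links

print(ring_topology(4))          # {0: [3, 1], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(len(grid_topology(3, 3)))  # 9 nodes
```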
SUMMARY:

We have taken the
first steps toward
designing a network on
a chip. Our
investigation to date
has produced
encouraging results,
showing that our
perception that
asynchronous VLSI
systems and
communication
networks behave in a
similar manner is
indeed correct. Our
simulation
demonstrated that it is
possible to predict
network latencies and
throughput using a
hardware simulation
approach, and that such an approach can lead to architectures that can
simulate network behavior faster than real time.







APPLICATIONS OF INTELLIGENT NEURAL NETWORKS IN THE EARTHQUAKE DIAGNOSIS









N.S.ANAND
D.SUNIL CHANDRA
EEE(4/4)
AURORAS ENG COLLEGE
Zeno_4u@yahoo.com
Sunilchandra4u@yahoo.co.in
Phone no: 9848356352
9849510985






ABSTRACT
An earthquake warning system has been developed to provide a time series profile
from which vital parameters such as the time until strong shaking begins, the intensity of
the shaking, and the duration of the shaking, can be derived. Interaction of different types
of ground motion and changes in the elastic properties of geological media throughout
the propagation path result in a highly nonlinear function. We use neural networks to
model these nonlinearities and develop learning techniques for the analysis of temporal
precursors occurring in the emerging earthquake seismic signal. The warning system is
designed to analyze the first-arrival from the three components of an earthquake signal
and instantaneously provide a profile of impending ground motion, in as little as 0.3 sec
after first ground motion is felt at the sensors. For each new data sample, at a rate of 25
samples per second, the complete profile of the earthquake is updated. The profile
consists of a magnitude-related estimate as well as an estimate of the envelope of the
complete earthquake signal. The envelope provides estimates of damage parameters,
such as time until peak ground acceleration (PGA) and duration. The neural network
based system is trained using seismogram data from more than 400 earthquakes
recorded in southern California. The system has been implemented in hardware using
silicon accelerometers and a standard microprocessor. The proposed warning units can
be used for site-specific applications, distributed networks, or to enhance existing
distributed networks. By producing accurate and informative warnings, the system has
the potential to significantly minimize the hazards of catastrophic ground motion.









DRAWBACKS OF THE PRESENT SYSTEM:
1) The regular network used in the present system poses a host of
developmental challenges, such as the transmission of large volumes of network
sensor information to a central computer, thus increasing the time of computation.
2) This increases the hazard level.
3) Instrumentation is inadequate.
4) Maintenance cost is very high.
5) The operation is not economically feasible.

INTRODUCTION TO THE METHOD OF INTELLIGENT NEURAL NETWORK APPLICATION FOR
EARTHQUAKE DIAGNOSIS:
Providing a reliable, selective, and informative warning even a few seconds to a few
minutes in advance definitely reduces the hazard level, the damage done to property,
and the lives lost.
The approach taken here is a real-time approach. The requirements for doing so are:
1) The system must provide useful estimates of any incoming hazardous earthquake
regardless of distance and direction.
2) Estimates must begin within 1 sec of the first-arrival signal.
3) The warning must be updated at least 10 times/sec as the earthquake emerges.
Here, we identify two major earthquake signals being used, namely the primary and
secondary arrival signals.
The figure below shows how warning time varies with distance.

Figure 1 Warning time varies with respect to distance. 434 earthquake recordings are plotted
illustrating the warning time measured from the P-wave arrival to the peak ground acceleration
(PGA) of each earthquake. For example, at a distance of 100 km, there is approximately 15 sec of
warning time between the earthquake onset and the arrival of the PGA.
The neural network concept used here exhibits nonlinear behavior and uses
parallel processing units. The artificial neural networks (ANNs) used here are primarily
for modeling the nonlinear behavior of the primary and secondary propagation paths.
The advantages of using ANNs are:
1) They are stable.
2) They are less sensitive to spurious input and to noise.
3) Once trained, they are compact and operate at speeds that make them ideal for real-time
applications.
4) They can be easily implemented in a compact IC.

DATABASE:
The database here is obtained from the Lawrence Livermore National Laboratory
(LLNL), for Landers, California, USA.


Figure2 Earthquakes that occurred from 1988 to 1992 surrounding Landers, CA.This study
focused on earthquakes at distances from 20 to 294 km from Landers recording station

TRAINING METHOD:
In the first of two phases, a neural network is trained to estimate earthquake scale
using the architecture shown in figure3a. Fifty randomly chosen earthquakes were
removed from the original 434 earthquakes in the data base. These 50 earthquakes were
not part of the training process and were set aside for neural network performance
measurement. The training input for each example consisted of 30 parameters. Each of the
384 earthquakes provided six training examples using different window sizes (Δt). The
total number of training examples was thus 2304.
In the second phase, a separate neural network was trained to estimate earthquake
profile using the architecture shown in figure3b. This network was trained with the 2304
examples described in the first phase, having removed the same 50 randomly chosen
earthquakes. The training output for each earthquake in the second phase was the
normalized earthquake profile. The training input for both phases was identical. Again,
once trained, unseen examples were fed through the network, but in this phase, neural
network profile estimates were produced.

Figure 3 Neural network architectures used in this study: (a) neural network used to estimate
scale of the earthquake; (b) neural network used to estimate earthquake profile. In both cases, a
3-layer conjugate-gradient back-propagation network was used.
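As a rough software sketch of this two-phase setup (the hidden-layer size, the solver, the
synthetic targets, and the library choice below are our assumptions; the paper used a 3-layer
conjugate-gradient back-propagation network with 30 input parameters):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-in data: 2304 training examples of 30 input parameters each, as
# described in the text; targets here are synthetic placeholders.
X_train = rng.normal(size=(2304, 30))
y_scale = rng.normal(size=2304)          # phase 1 target: earthquake scale

# Phase 1: network that estimates earthquake scale from the 30 inputs.
scale_net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=500, random_state=0)
scale_net.fit(X_train, y_scale)

# Phase 2: a separate network estimating the normalized earthquake profile
# (here a 50-point envelope) from the same inputs.
y_profile = rng.normal(size=(2304, 50))
profile_net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=500, random_state=0)
profile_net.fit(X_train, y_profile)

print(scale_net.predict(X_train[:1]), profile_net.predict(X_train[:1]).shape)
```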
RESULTS AND DISCUSSION: The earthquakes used in this study were, in
general, weak motion, regional events with distances ranging from 20 to 294 km.
Earthquake magnitudes varied from 2.5 to 4.6. We are currently increasing this range to
include strong motion earthquakes. Earthquake focal distance measures tended to
dominate the scale values due to the relatively limited range of magnitudes.

Figure 4 Scatter plots showing the scale of the output envelope against (a) linear regression fit of
the local PGA value, (b) a linear regression fit of the local CAV measurement, and (c) the neural
network estimate of the envelope scale. The lines were plotted to represent a perfect correlation or
correlation coefficient of 1.
Figure 4a shows the true scale values for the 50 removed earthquakes plotted against the
regression-produced PGA estimates. Figure 4b shows the true scale values again plotted
against the regression-produced CAV estimates. Figure 4c shows the true scale values
plotted against the neural network scale estimates with Delta t = 1. The PGA and CAV
generated estimates of figures 4a and 4b each in itself produces a scale estimate with a
positive correlation to true scale. However, the neural network is able to improve the
correlation. The network is using the PGA and CAV measures as input during training
and has the ability to use these parameters to form its estimate. The network results are
also consistent across the range of scale values. The regression estimates are less
accurate at lower scale values. This illustrates the tolerance of the network to noisy
input, as the lower scale values correspond to lower magnitudes, and large focal
distances correspond to poor signal to noise. In a second experiment, neural network
estimates of scale values as a function of Delta t were obtained to examine the emergent
properties of the scale estimator as well as the accuracy at widely different scale values.

Figure 5 Neural network scale estimates using Delta t values ranging from 0.2 to 10 sec for four
previously unseen earthquakes. The superimposed line plots indicate the target or true scale value
for each earthquake
.
To examine the emergent capabilities of the system (figure 5), each
earthquake was fed into the trained neural network scale estimator 15 times, using values
of Delta t ranging from 0.2 to 10 sec. In the second phase of the study, we trained a
second neural network to estimate the earthquake profile. The correlation coefficient
(Cc) between the true earthquake profile and the neural network profile estimate was
calculated. The correlation coefficient provides an indication of how well the shape of the
two profiles matches. The Cc for all 50 pairs of earthquake and neural network signals is
plotted in figure 6a.

Figure 6 The correlation coefficient Cc between the true earthquake profile and the neural
network profile estimate is shown in (a) for 50 previously unseen earthquakes. Likewise, the
correlation coefficient Cc between the true earthquake profile and the average earthquake signal
is shown in (b) for the same 50 earthquakes. Values were calculated using only the first second of
earthquake signal.
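The correlation coefficient used here is the standard Pearson coefficient between the two
time series; as a minimal sketch (the array values are illustrative, not data from the study):

```python
import numpy as np

true_profile = np.array([0.1, 0.4, 0.9, 1.0, 0.6, 0.3, 0.1])
estimated    = np.array([0.2, 0.5, 0.8, 0.9, 0.7, 0.2, 0.1])

# Pearson correlation coefficient Cc between true and estimated profiles.
cc = np.corrcoef(true_profile, estimated)[0, 1]
print(f"Cc = {cc:.3f}")
```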

ADVANTAGES OF USING INTELLIGENT NEURAL NETWORKS:
1) Development and maintenance cost is very low.
2) Easy implementation using a PC, or integration into an embedded system.
3) Each element in a regular network can be replaced to improve the warning system.
4) Analysis of ground shaking arriving at the detector regardless of the direction and
distance of the earthquake.
5) The system described here produces estimates within 0.3 sec after the arrival of the
primary signal.
6) Updates occur within 0.04 sec.
FUTURE ASPECTS:
Adding a distance estimate (from the seismograph) to each training point would improve
accuracy. This would improve the efficiency and effectiveness of the system, providing
scope for further research and enhancement.

CONCLUSION:
A neural network has been successfully trained with more than 400 recorded
earthquakes to produce a scale and profile. It has been implemented in hardware using
silicon accelerometers and a standard PC. The extensive use of the back-propagation method
also points to the wide range of applications of neural networks, such as pattern
recognition of explosives, etc.
Thus, implementation of intelligent neural networks for earthquake diagnosis proved
to be effective, efficient, and also economical.


NEUROMORPHIC VLSI DESIGN USING BAT
ECHOLOCATION



PRESENTED BY:

V.ASHALATHA
ETM 3/4
EMAIL ID:yours_ashalatha@yahoo.co.in
Phone no:9866917369

P.PADMAVATHHI
ECE 3/4
EMAIL ID:palla_padmavathi7@yahoo.co.in
Phone no:9866139231

From:
G.NARAYANAMMA INSTITUTE OF TECHNOLOGY AND
SCIENCE
SHAIKPET
HYDERABAD



CONTENTS:
1. ABSTRACT
2. INTRODUCTION
3. GOAL
4. AN ULTRASONIC COCHLEA
5. DELAY TUNED CELLS
6. COMMERCIAL APPLICATIONS
7. BIBLIOGRAPHY

















NEUROMORPHIC VLSI DESIGN USING
BAT ECHOLOCATION
1.ABSTRACT:
Birds and bats have long been
the envy of engineers,
demonstrating fast, accurate
sensing and agile flight control
in complex, confined 3D
spaces, all in a tiny package.
Their ability to fly rapidly
through cluttered forest
environments in search of food
far exceeds the capabilities of
any existing man-made system.
The technology that we are
developing and propose to
bring to this application domain
is neuromorphic VLSI. For
more than a decade, a growing
number of VLSI researchers
worldwide have been
developing a common toolbox
of hybrid analog and digital
VLSI techniques to mimic the
signal processing of neural
systems. This effort has
spawned many projects in smart
vision sensors and systems:
silicon cochleae, retinal and
cochlear prosthetics, neural
prosthetics, biologically
realistic legged robotics, on-
chip learning systems and many
more.

Using these design techniques,
our laboratory recently has
pursued the development of
echolocation circuits that mimic
the neural processing in the big
brown bat, Eptesicus fuscus.
One population of neurons that
we have designed is tuned to detect
the angle of echo arrival as
determined by the relative
loudness at two microphones
placed on a model bat head.
These biological algorithms are
implemented in commercially
available CMOS fabrication
processes (e.g., the MOSIS
service) and operate in real-
time with power consumption
in the range of milli watts.



2.INTRODUCTION:
From a computational
neuroscience perspective, bats
are remarkable because of the
very short timescale on which
they operate. The barrage of
returning sonar echoes from a
bat's near-environment lasts
approximately 30 milliseconds
following a sonar emission with
the echo from a specific target
lasting, at most, a few
milliseconds. At this timescale,
a particular neuron has the
opportunity to fire only one or
two spikes to represent the
echo. Unlike the traditional
view of cortical processing
where many spikes are
integrated over time to compute
an average rate, the bat must
rely on populations of neurons
that respond transiently but
selectively to different objects
in the environment. In these
neural circuits, the details of
spike timing, synaptic
dynamics, and neuron
biophysics become extremely
important. Flying at speeds
anywhere from 1m/s to 6 m/s, a
bats sensory world jumps from
pulse to pulse as it flies through
the world. Sensory prediction
is therefore likely to be very
important in this animal. In
spite of all this behavioral
specialization, the bat brain is
organized like most other
mammalian brains suggesting
that echolocation arises from
only small modifications of the
typical mammalian auditory
system.
3.GOAL:
Our goal is to construct a
flying bat-sized creature that
uses ultrasonic echolocation to
both navigate and scrutinize its
environment
sufficiently to distinguish
between obstacles and
"insects". The bat's sensory
and motor system will be
constructed from neural models
and implemented using
"neuromorphic" VLSI


techniques. Our intention is
two-fold: 1) to test these neural
algorithms in a real-time,
closed-loop behavioral
situation, and 2) to develop
useful sonar sensors for use in
miniature aircraft systems.




BAT HEAD:
We are working with two
different hardware systems: a
physically larger single-
frequency sonar system
("narrowband") and a tiny
broadband system. The
narrowband system is being
used to rapidly test concepts
following initial software tests.
Photos of these two systems are
found below:



In the photo to the left is our
narrowband sonar system that
operates only on a frequency of
40 kHz. The fixed arrangement
of the microphones was chosen
to produce a difference in echo
amplitude with azimuthal
direction. The current system
roughly extracts direction and
range and is capable of
servoing the head (which is
mounted on a model airplane
servo) to track moving targets
in real-time.
On the right, we have a
photo of our broadband system
using a baked polymer clay bat
head with a tiny Knowles
(FG3329) microphone soldered
to the end of a group of wires.
This system has two broadband
ultrasonic (and audio)
microphones that will feed our
silicon cochleae chips.
Both of these physical heads
produce intensity difference
cues at each microphone that
allows the system to determine
the angle of the arriving echo.
4.AN ULTRASONIC
COCHLEA:
Echo locating bats
specialize in high-frequency
hearing using echolocation
sounds that typically range in
frequency from 20 kHz to 100
kHz. While some bats are
specialized for specific
frequencies with cochlear
filtering at extremely high
Q10dB values, we are studying
bats that use a broadband
vocalization and are ultrasonic
frequency generalists (e.g.,
Myotis lucifugus) with Q10dB
values in the range of 10 to 30.
Good frequency resolution is
important for vertical
localization, discriminating
close objects as well as for prey
determination.
To support our ongoing
work in modeling bat
echolocation, a binaural,
ultrasonic cochlea-like filter
bank has been designed with
moderate quality (Q) factor (as
high as 65) with spiking
neurons that are driven by the
filter outputs. The neuron
addresses are reported off chip
at the time of the spike in an
un-arbitrated fashion and in
current-mode to reduce the
amount of capacitively-coupled
feedback into the filters. This
chip was fabricated in a
commercially available 0.5 um CMOS
process and consumes 0.425
milliwatts at 5 volts.
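A rough software analogue of such a constant-Q, cochlea-like filter bank (the center
frequencies, Q value, and sample rate below are illustrative choices of ours, not the chip's
actual parameters) could use resonant IIR filters:

```python
import numpy as np
from scipy.signal import iirpeak, lfilter

fs = 400_000                                 # sample rate in Hz, covering ultrasound
centers = np.geomspace(20_000, 100_000, 8)   # 8 channels across 20-100 kHz
q_factor = 30                                # moderate Q, in the range discussed above

# Test input: a tone at 40 kHz, fed to every channel of the bank.
t = np.arange(0, 0.01, 1 / fs)
signal = np.sin(2 * np.pi * 40_000 * t)

# One second-order resonator per channel; the channel nearest 40 kHz responds most.
for f0 in centers:
    b, a = iirpeak(f0, q_factor, fs=fs)
    out = lfilter(b, a, signal)
    print(f"{f0/1000:6.1f} kHz channel energy: {np.sum(out**2):.1f}")
```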

When echoes arrive from
different directions, the number
of spikes generated in the
auditory nerve and the cochlear
nucleus varies with the
intensity at each ear. Using
this information, the first
binaural nucleus in the
mammalian auditory system,
the lateral superior olive (or
LSO) becomes selective to the
direction of arrival. These cells are
excited by the intensity from one ear
and inhibited by the intensity from the
other ear.
The binaural LSO
response and the monaural response from
the cochlear nucleus are projected to the
inferior colliculus (IC) via the dorsal
nucleus of the lateral lemniscus (or
DNLL), resulting in very similar
responses in both DNLL and IC. With
similar responses in the LSO as in the IC,
one can ask the question, "What kind of
computation is going on here?" In the
figure above is a set of tuning curves for
three LSO cells that have different
synaptic weightings from the left and right
ears. By comparing the responses of the
population of LSO cells, each of which
have different synaptic weightings, we

can determine which direction an echo
is arriving from.
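To make the population-decoding idea concrete, here is a toy sketch of LSO-like cells; the
weights, the rectified-difference response, and the decoding rule are our own illustrative
assumptions, not the circuits described above:

```python
import numpy as np

# Each model LSO cell is excited by one ear and inhibited by the other,
# with a different excitatory/inhibitory weighting per cell.
excit_w = np.array([1.0, 1.0, 1.0, 1.0, 1.0])
inhib_w = np.array([0.2, 0.6, 1.0, 1.4, 1.8])
preferred_azimuth_deg = np.array([-60, -30, 0, 30, 60])  # label assigned per cell

def lso_population(left_intensity, right_intensity):
    """Rectified weighted difference of the two ear intensities."""
    return np.maximum(0.0, excit_w * left_intensity - inhib_w * right_intensity)

def decode_azimuth(left_intensity, right_intensity):
    """Read out direction as the preferred azimuth of the most active cell."""
    rates = lso_population(left_intensity, right_intensity)
    return preferred_azimuth_deg[np.argmax(rates)]

# Louder at the left ear -> the echo is decoded as coming from the left (-60 deg here).
print(decode_azimuth(left_intensity=1.0, right_intensity=0.4))
```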










5.DELAY TUNED
CELLS (RANGE
TUNING):
Information about target range
has many uses for bats during
both prey-capture and
navigation tasks. Beyond the
extraction of distance and
velocity, it may be important
for less obvious tasks, such as
optimizing the parameters of
the echolocation process. For
example, as a bat approaches a
target, it alters the repetition
rate, duration, spectral content,
and amplitude of it
vocalizations. Not only is
echolocation used for insect
capture, it provides to the bat
information about obstacles,
roosts, altitude, and other
flying creatures.
In the bats brainstem and
midbrain exist neural circuits
that are sensitive to the specific
difference in time between the
outgoing sonar vocalization
and the returning echo. While
some of the details of the
neural mechanisms are known
to be species-specific, a basic
model of reference-triggered,
post-inhibitory rebound timing
is reasonably well supported by
available data.
Neurons have been found in
bats that show a facilitated
response to paired sounds (a
simulated vocalization and an
echo) presented at particular
delays. The cells responses to
sounds presented at the
appropriate delays are much
greater than the sum of the
responses to the individual
sounds presented alone. These
cells are part of a larger class of
neurons called combination-
sensitive neurons, and are
specifically referred to as
delay-tuned cells. Delay-tuned
cells are found at many levels
in the bat auditory system.
They have been found in the
inferior colliculus (IC), the
medial geniculate body
(MGB), and the auditory
cortex. Disruption of cortical
delay-tuned cells has been
shown to impair a bats ability
to discriminate artificial pulse-
echo pair delays. It is likely
that delay- tuned neurons play
a role in forming the bats
perception of range, although
delay-tuned cells have also
been shown to respond to the
social calls of other bats.
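Although the text does not give the formula, the quantity these delay-tuned cells encode maps
onto target range through the standard sonar relation range = (speed of sound x delay) / 2; a
one-line check:

```python
SPEED_OF_SOUND_M_S = 343.0   # approximate speed of sound in air at room temperature

def range_from_delay(pulse_echo_delay_s):
    """Target range implied by the delay between vocalization and returning echo."""
    return SPEED_OF_SOUND_M_S * pulse_echo_delay_s / 2.0

print(range_from_delay(0.005))   # a 5 ms delay corresponds to roughly 0.86 m
```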


6.COMMERCIAL
APPLICATIONS:
There are many obvious
commercial and industrial
applications of integrated
sensory systems implemented
in low-power VLSI. The
development of a small,
sophisticated, power-efficient,
low-cost echolocation system
has many potential applications
beyond neural modeling. In the
biomedical realm, such devices
are beginning to be used as
another option for collision
avoidance and spatial sensing
for blind or low vision patients.
These devices when properly
scaled down could also be used
to guide endoscopic
instruments or provide
additional information about
distance to monocular, visually
guided surgical tools. Air-
coupled sonar, las a basic
sensor module for; mobile
robotics, has not advanced
significantly beyond a narrow-
beam, closest-target sensor,
despite decades of use, with
robotic vacuum cleaners finally
hitting the market, a low power
module with significantly more
sensing capability at low cost
could facilitate a new range of
commercial products and toys
that have the ability to sense
objects in the near-field like a
full set of whiskers.
From a micro-aerial vehicle (MAV) perspective, while GPS has successfully
enabled long-range navigation, the final leg of many desirable missions
occurs in locations where the lack of GPS signals and unmapped obstacles make
navigation untenable; such locations include inside buildings, under the
forest canopy, in canyons, and in caves. By obtaining the range to objects
directly while computing azimuth, sonar systems are a natural complement to
vision systems for these challenging environments. When combined with an
ornithopter airframe, a nearly silent device (to humans), the ability to fly
in darkness seems to be within reach.
7.CONCLUSION:
Overall, this work provides a wonderful framework in which to pursue
different types of scientific and engineering-oriented research and
education. Understanding bat echolocation involves many interesting problems
of signal processing within the context of biological data representations
and neural hardware.






BIBLIOGRAPHY:
1. Signal Processing journal (IEEE)
2. IEEE Spectrum




NOISE ELIMINATION IN ECG USING DSP


PRESENTED BY





Jahnavi.UL Naga Tulasi .M

III/IV year III/IV year





DEPARTMENT OF



ELECTRONICS AND COMMUNICATIONS
ENGINEERING


NARAYANA ENGINEERING COLLEGE

NELLORE



















ABSTRACT:


Digital signal processing (DSP)
involves the representation,
transmission, and manipulation of
signals using numerical techniques and
digital processors. It has been an
exciting and growing technology
during the past few years. In the initial
years of its development, due to high
cost and slow processors, its pace was
slow. But due to the development of
fast processors and computers it has
made a tremendous progress. More
and more DSP algorithms have been
developed better tools are also
developed to implement there
algorithms. Its applications have also
been expanded vigorously to
encompass not only the traditional
radar signal process but also the Bio
medical signal processing.
Biomedical signal processing
is one of the latest developing fields in
the area of digital signal processing;
it involves the processing of real
signals from the human body. The
electrocardiograph (ECG) signal obtained
from the heart has to be processed before
the physician gets a view of the graph. In
order to process this ECG signal, we
have a number of software packages. Here we
use new software, LabVIEW,
developed by National Instruments.
This software gained popularity due to
its greater flexibility, easy design, and
fast processing. The advantage of this
software is that there is no necessity to
write any programmable code, since all
the digital signal processing tools, like
Fourier transforms and digital filters, are
available as virtual instruments, i.e.,
built-in functions.





INTRODUCTION


By a signal we mean a mathematical
function that carries or contains some
kind of information that can be
conveyed, displayed, and manipulated,
like speech, biomedical signals, sound,
etc. Until the 1960s almost all
signal processing was analog. The
tremendous advances in both digital
computing technology and signal
processing theory led to a changeover to
digital signal processing (DSP).
Digital signal processing is
concerned with the digital
representation of signals and use of
digital processors to analyze, modify
or extract information from signals.
Most signals in nature are analog in
form, often meaning that they vary
continuously with time. The signals
used in most popular forms of DSP are
derived from analog signals which
have been sampled at regular intervals
and converted into a digital form. The
advantages of DSP are greater
flexibility, superior performance. DSP
is one of the fastest growing fields in
modern electronics, being used in any
area where information is handled in a
digital form or controlled by a digital
processor.
Application areas include
- image processing
- instrumentation/control
- speech/audio
- military
- telecommunication
- biomedical
- consumer applications
DSP technology is at the core
of many new emerging digital
information products and applications
that support the information society.
Such products and applications often
require the collection, processing,
analysis, transmission, display and/or
storage of real-world information,
sometimes in real time. The ability of
DSP technology to handle real-world,
signals digitally has made it possible to
create affordable, innovative, and high
quality products and applications for
large consumer markets (eg. digital
cellular mobile phones, digital
television and video games). The
impact of DSP is also evident in many
other areas, such as medicine and
healthcare, digital audio and personal
computer systems.

As DSP has its advantages, it also has its disadvantages. DSP designs can be
expensive, especially when large-bandwidth signals are involved. Furthermore,
most DSP devices are still not fast enough and can only process signals of
moderate bandwidths. Nevertheless, DSP devices are becoming faster and faster.
Unless we are knowledgeable in DSP techniques and have the necessary
resources in software packages, DSP designs can be time consuming and in
some cases almost impossible.
However, the situation is changing, as many new graduates now possess some
knowledge of digital techniques and much new software is coming into
existence.

CONTAMINATION OF ECG:
In the area of bio-medical signal processing, a common problem is the
requirement for signals that are free of contamination from a number of
sources. Such signals are commonly used, for example, for foetal monitoring,
to assess the effects of the measurement system on the electrical activity of
the foetal heart.
An ECG is a small electrical signal, which is produced due to the activity of
the heart. Its source can be considered as a dipole located in the partially
conducting medium of the thorax. This dipole gives rise to a body surface
potential, which can be measured and used for diagnostic purposes.
Alphabetic designations have been given to each of the prominent features,
which can be identified with events related to the action potential pattern.
To facilitate analysis, the horizontal segment of the waveform preceding the
P wave is designated as the isopotential line. The P wave represents
depolarization of the atrial musculature. The QRS complex is the combined
result of repolarization of the atria and depolarization of the ventricles,
which occur almost simultaneously.
The T wave is ventricular
repolarization. The P-Q interval
represents the time during which the
excitation wave is delayed in fibers
near the atrioventricular mode (AV
mode). The reciprocal of heart period
is time interval between R-to-R peaks
(in milliseconds) multiplied by 60000
gives the instantaneous heart rate. The
wave form, however, depends greatly
upon the lead configuration. In general,
the cardiologist looks critically at the
various time intervals, polarities, and amplitudes to arrive at his diagnosis.

The normal ECG is termed normal sinus rhythm. Impulses originate in
the SA node regularly at a rate of 60-100 per minute in adults, and at faster
rates in older children (90-110), small children (100-120), and infants
(120-160). The PR interval is 0.12 to 0.20 second and constant when A-V
conduction is normal; PR is prolonged and/or variable when A-V conduction is
blocked. Each P is followed by a QRS complex, with a resulting P:QRS ratio of
1:1. The QRS is normally less than 0.11 sec, but it may be wide and bizarre
when bundle branch block is present. The RR intervals may be slightly
irregular, especially in the young and elderly. The basic heart rate can be
calculated from:
calculated from.
Heart rate (beats min
-1
) = 1 x 60000
heart period (ms)
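A minimal Python check of this relation (the R-peak times below are made-up sample values):

```python
# R-peak times in milliseconds (illustrative values, roughly 75 beats/min).
r_peaks_ms = [0, 800, 1600, 2400, 3200]

# Heart period = R-to-R interval; heart rate = 60000 / heart period (ms).
rr_intervals = [b - a for a, b in zip(r_peaks_ms, r_peaks_ms[1:])]
rates = [60000 / rr for rr in rr_intervals]
print(rates)   # [75.0, 75.0, 75.0, 75.0]
```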

There are many factors that must be considered in the design of a system
that is capable of measuring these signals without introducing contamination
into the signal. Patients who are having their ECGs taken, on either a
clinical electrocardiograph or continuously on a cardiac monitor, are often
connected to other pieces of electrical apparatus. Each
electrical device has its own ground
connection either through the power
line or, some cases, through a heavy
ground wire attached to some point in
the room. A ground loop can exist
when two or more electrical
monitoring devices are connected to
the patient. Another problem caused
by the ground currents is related to the
fact that, because the ground leads of
the electro cardiograph usually runs
alongside the signal leads, magnetic
fields caused by the current in the
grounding circuit can induce small
voltages in the signal lead wires. This
can produce interference on the
recorded data.
A major source of interference
is the electric-power system. Besides
providing power to the ECG system
itself, power lines are connected to
other pieces of equipment in a typical
hospital or physician's office. Such
interference can appear on the
recorded data as a result of two
mechanisms, each operating singly or,
in some cases, both together. The first
is the electric-field coupling between
the power lines and the
electrocardiograph and/or patient and
is the result of the electric field
surrounding the main power lines. The
other source of interference from
power lines is magnetic induction.
Current in the power lines establishes a
magnetic field in the vicinity of the line. If
basic precautions are taken, a great deal
of this type of contamination can be
minimized. To reduce the above
contamination, simple signal
processing techniques are used.
As DSP has its advantages, it
also has its disadvantages. DSP
designs can be expensive, especially
when large-bandwidth signals are
involved. Furthermore, most DSP
devices are still not fast enough and
can only process signals of moderate
bandwidths, although DSP devices are
becoming faster and faster. Unless we
are knowledgeable in DSP techniques
and have the necessary resources in
software packages, DSP designs can be
time consuming and in some cases
almost impossible. However, the
situation is changing, as many new
graduates now possess some knowledge
of digital techniques and many new
software packages are coming into
existence.

ORIGIN OF LABVIEW:
A computer scientist relates this
Pseudo biblical history of the
computer programming world.
In the beginning, there was
only machine language, and all was
darkness. But soon, assembly
language was invented and there was a
glimmer of light in the programming
world. Then came FORTRAN, and the
lights went out.
In the early 1980s it was a lot
of work to write hideously long
programs to do even simple
measurements. Scientists and
engineers automated their operations
only when it was absolutely necessary,
and casual users and operators
wouldn't tamper with the software
because of its complexity. For
scientists and engineers, the classical
battles with programming languages
have been most counterproductive.
Well, times have changed in a
big way because LabVIEW has
arrived. At last, working troops have a
programming language that eliminates
that arcane syntax, hides the compiler,
and builds the graphical user interface
right in. And yet, the thought of
actually programming with pictures is
so incredible when contrasted with
ordinary computer languages.
LabVIEW stands for Laboratory
Virtual Instrument Engineering
Workbench. A product of National
Instruments corporation (Austin,
Texas), it is a purely graphical, general
purpose programming language with
extensive libraries of functions, an
integral compiler and an application
builder for stand alone applications.
The LabVIEW programs are portable
among various platforms. The concept
of virtual instruments (VIs), pioneered
by LabVIEW, permits us to transform
a real instrument into another, software
based instrument, thus
increasing the versatility of available
hardware. All programming is done
via a block diagram, consisting of
icons and wires, which is directly
compiled to executable code; there is
no underlying procedural language or
menu-driven system.
LabVIEW made the concept of
the virtual instrument (VI) a practical
reality. The objective in virtual
instrumentation is to use a general-
purpose computer to mimic real
instruments with their dedicated
controls and displays, but with the
added versatility that comes with
software. For example, instead of
buying a strip-chart recorder, an
oscilloscope, and a spectrum analyzer,
we can buy one high-performance
analog-to-digital converter and use a
computer running LabVIEW to
simulate all of these instruments and
more. The virtual instrument (VI)
concept is so fundamental to the way
that LabVIEW works that the
programs we write in LabVIEW are in
fact called VIs. Virtual instrumentation
offers the greatest benefit over real
instruments in the areas of
price/performance, flexibility, and
customization. For the price of a
dedicated high performance
instrument, we can assemble a
personal computer based system with
the fundamental hardware and
software components to design virtual
instruments targeted for specific
applications.
Among the success stories of
LabVIEW: it was aboard the
mid-October 1993 Columbia space
shuttle mission, and it was used to
analyse data from the Mars Pathfinder.

LabVIEW Signal Processing VIs

LabVIEW contains the following VIs
for signal processing.
+ Signal generation
contains VIs for
generating signals such
as sine wave, square
wave, white noise, etc.
+ Digital signal
processing contains VIs
for computing the fast
Fourier transform (FFT),
auto-correlation, cross-
correlation, power
spectrum and other
functions.
+ Windows contain VIs
for implementing
windows such as
Hanning, Hamming,
Blackman, exponential,
and the flat top window.
Furthermore, it also contains
specialized toolkits such as the Digital
Filter Design Toolkit.
DIGITAL FILTERS

A filter is essentially a system or
network that selectively changes the
wave shape, amplitude-frequency
and/or phase-frequency characteristics
of a signal in a desired manner.
Common filtering objectives are to
improve the quality of a signal (for
example, to remove or reduce noise),
to extract information from signals, or
to separate two or more signals
previously combined to make, for
example, efficient use of an available
communication channel.
A digital filter, as we shall see
later, is a mathematical algorithm
implemented in hardware and/or
software that operates on a digital
input signal to produce a digital output
signal for the purpose of achieving a
filtering objective. The term digital
filter refers to the specific hardware or
software routine that performs the
filtering algorithm. Digital filters often
operate on digitized analog signals or
just numbers, representing some
variable, stored in a computer memory.
A simplified block diagram of a
real-time digital filter, with analog
input and output signals, is given in
Figure 6.1. The bandlimited analog
signal is sampled periodically and
converted into a series of digital
samples, x(n), n = 0, 1, 2, .... The
digital processor implements the
filtering operation, mapping the input
sequence, x(n), into the output
sequence, y(n), in accordance with a
computational algorithm for the filter.
The DAC converts the digitally filtered
output into analog values which are
then analog filtered to smooth and
remove unwanted high frequency
components.
Digital filters play very
important roles in DSP. Compared
with analog filters, they are preferred in
a number of applications (for example
data compression, biomedical signal
processing, speech processing, image
processing, data transmission, digital
audio and telephone echo cancellation)
because of one or more of the
following advantages.
Digital filters can have
characteristics which are not
possible with analog filters,
such as a truly linear phase
response.
Unlike analog filters, the
performance of digital filters
does not vary with
environmental changes, for
example thermal variations.
This eliminates the need to
calibrate periodically.
The frequency response of a
digital filter can be
automatically adjusted if it is
implemented using a
programmable processor,
which is why they are widely
used in adaptive filters.
Several input signals or
channels can be filtered by one
digital filter without the need to
replicate the hardware.
Digital filters can be used at
very low frequencies, found in
many biomedical applications
for example, where the use of
analog filters is impractical.
Also, digital filters can be made
to work over a wide range of
frequencies by a mere change
to the sampling frequency.
TYPES OF DIGITAL FILTERS: FIR and IIR filters

Digital filters are broadly divided into
two classes, namely infinite impulse
response (IIR) and finite impulse
response (FIR) filters. Either type of
filter, in its basic form, can be
represented by its impulse response
sequence, h(k) (k = 0, 1, ...). The input
and output signals of the filter are
related by the convolution sum, which
is given in Equation 6.1 for the IIR and
Equation 6.2 for the FIR filter:

y(n) = Σ h(k) x(n-k),  k = 0 to ∞        (6.1)

y(n) = Σ h(k) x(n-k),  k = 0 to N-1      (6.2)

It is evident from these equations that,
for IIR filters, the impulse response is
of infinite duration whereas for FIR it
is of finite duration, since h(k) for the
FIR has only N values. In practice, it
is not feasible to compute the output of
the IIR filter using Equation 6.1, because
the length of its impulse response is
too long (infinite in theory). Instead,
the IIR filtering equation is expressed
in a recursive form:

y(n) = Σ (k=0 to ∞) h(k) x(n-k) = Σ (k=0 to N) b_k x(n-k) - Σ (k=1 to M) a_k y(n-k)

where the a_k and b_k are the
coefficients of the filter. Thus, these
are the difference equations for the
FIR and IIR filters respectively. These
equations, and in particular the values
of h(k) for the FIR filter, or a_k and
b_k for the IIR filter, are often very
important objectives of most filter
design problems. We note that, in the
recursive equation, the current output
sample, y(n), is a function of past
outputs as well as of present and past
input samples; that is, the IIR filter is
a form of feedback system. This
should be compared with the FIR
equation, in which the current output
sample, y(n), is a function only of past
and present values of the input. Note,
however, that when the a_k are all set
to zero, the IIR equation reduces to
the FIR equation.
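The two difference equations above can be written out directly in a few lines of code. The sketch below is illustrative only, in plain Python rather than LabVIEW; the arrays h, b and a simply follow the notation of the equations in the text.

# Illustrative sketch of the FIR and IIR difference equations above.

def fir_filter(h, x):
    """y(n) = sum over k=0..N-1 of h(k) x(n-k)"""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k in range(len(h)):
            if n - k >= 0:
                acc += h[k] * x[n - k]
        y.append(acc)
    return y

def iir_filter(b, a, x):
    """y(n) = sum over k=0..N of b(k) x(n-k) - sum over k=1..M of a(k) y(n-k)"""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k in range(len(b)):              # feedforward (input) terms
            if n - k >= 0:
                acc += b[k] * x[n - k]
        for k in range(1, len(a)):           # feedback (past output) terms
            if n - k >= 0:
                acc -= a[k] * y[n - k]
        y.append(acc)
    return y

Setting all the feedback coefficients a(k), k >= 1, to zero makes iir_filter behave exactly like fir_filter, mirroring the remark above.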
Alternative representations for
the FIR and IIR filters are given in
Equations respectively. These are the
transfer functions for these filters and
are very useful in evaluating their
frequency responses.
As will become clear in the
next few sections, factors that
influence the choice of options open to
the digital filter designer at each stage
of the design process are strongly
linked to whether the filter in question
is IIR or FIR. Thus, it is very
important to appreciate the differences
between IIR and FIR, their peculiar
characteristics, and more importantly,
how to choose between them.
MODELLING THE PROCESS:
To reduce the contamination in the
ECG, it is possible to employ simple
signal processing techniques. We use
the LabVIEW signal processing
techniques. We begin by modelling an
ECG signal, then applying a digital
filter to remove a selected component
of the signal. Therefore, the basic aims
of this activity are:

1. To create a virtual instrument that
will simulate a heartbeat signal that is
contaminated by the mains frequency
(50 Hz).
2. To develop an infinite impulse
response (IIR) filter to reduce the
contamination.
3. To test the virtual instruments
(VIs) using real ECG data obtained
from a data acquisition card with a
medically safe isolation amplifier.
4. To generate a LabVIEW program
to simulate an ECG signal plus 50 Hz
contamination.
5. To use a standard LabVIEW
function to create an IIR filter that will
notch out the 50 Hz contamination, and
to create an appropriate front panel
display to show the raw data and the
filtered output.
6. To devise a means of counting
the beats per minute.
PROCEDURE:
The following activity is designed in
two parts. The first part assumes that
the ECG signal is modelled, and the
second part uses real ECG data
obtained from a patient.
The first requirement is to simulate the
heartbeat. To achieve this, a simple
method is used: the sum of sinusoidal
signals that represent the basic
components of the ECG signal is
formed; these are at 60, 40 and 20 Hz
respectively. Careful selection of the
noise source frequency can then be
made to accurately simulate the
contamination due to a ground loop
(the system is assumed to be designed
in the UK, whose mains frequency is
50 Hz). The contamination can vary
around 50 Hz, so to construct a
simulation of it a white noise source is
chosen. The noise is fed into a 5th-
order Butterworth bandpass filter with
a bandwidth of 49 Hz to 51 Hz.
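A rough software sketch of the simulation just described is given below, using Python with NumPy/SciPy as a stand-in for the graphical LabVIEW VI. The component frequencies, the noise amplitude of 150 and the 600 samples/second rate are the values quoted in this section; the signal length and the use of uniform white noise are assumptions made only for illustration.

import numpy as np
from scipy.signal import butter, lfilter

fs = 600                       # sampling rate quoted in the text (samples/s)
t = np.arange(0, 5, 1 / fs)    # 5 s of simulated data (length is an assumption)

# Simulated heartbeat: sum of unity-gain sine patterns at 20, 40 and 60 Hz.
ecg = sum(np.sin(2 * np.pi * f * t) for f in (20, 40, 60))

# Ground-loop contamination: white noise of amplitude 150, band-limited to
# 49-51 Hz by a 5th-order Butterworth bandpass filter, as described above.
noise = 150 * np.random.uniform(-1, 1, len(t))
b, a = butter(5, [49 / (fs / 2), 51 / (fs / 2)], btype="bandpass")
contamination = lfilter(b, a, noise)

contaminated_ecg = ecg + contamination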
The amplitude of the
white noise source is set to 150 and the
gains of the SINE PATTERN
generators are unity. The sampling rate
was set to 600 samples/second. The
details of the virtual instrument are
illustrated in figure 2 and the output
waveform it generates is shown in
figure 3. To achieve the objective of
removing the contamination from the
simulated ECG signal, an infinite
impulse response (IIR) filter was
chosen. The chosen design procedure is
the pole-zero placement method, with
the following specifications:

Notch frequency: 50 Hz
3 dB width of notch: ±5 Hz (10 Hz)
Sampling frequency: 600 Hz

The 50 Hz component in the signal may
be rejected by placing a pair of
complex-conjugate zeros at the points
on the unit circle of the z-plane
corresponding to 50 Hz. Hence the
zero (and pole) angle is

2π × 50/600 = 0.5236 rad = 30°

The position of the poles in relation to
the zeros determines the sharpness of
the notch and the amplitude response
on either side of it.

Thus the poles are placed at the same
angle as the zeros, at a radius r < 1,
where

r ≈ 1 - (BW/fs)π

and BW is the required notch width
(10 Hz here). Hence

r = 1 - π(10/600) = 0.9476

Thus the poles are positioned at the
same angles as the zeros, at radius
0.9476. The transfer function is given
by:

H(z) = [(z - e^(-j30°))(z - e^(j30°))] / [(z - 0.9476e^(-j30°))(z - 0.9476e^(j30°))]

In terms of z^(-1):

H(z) = (1 - 1.732z^(-1) + z^(-2)) / (1 - 1.6413z^(-1) + 0.898z^(-2))

or, equivalently,

H(z) = (z^2 - 1.732z + 1) / (z^2 - 1.6413z + 0.898)

Therefore the difference equation is:

y(n) = x(n) - 1.732x(n-1) + x(n-2) + 1.6413y(n-1) - 0.898y(n-2)

Hence the coefficients of the notch filter are:

a0 = 1          b0 = 1
a1 = -1.7321    b1 = -1.6413
a2 = 1          b2 = 0.898

The implementation of the filter is
achieved using a standard digital filter
VI and entering the above coefficient
values. This is illustrated.
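As a cross-check of the design above, the same coefficients can be derived and applied in a few lines of Python with SciPy. This is only an illustrative sketch, not the LabVIEW implementation; the 50 Hz test tone is included purely to confirm that the notch rejects it.

import numpy as np
from scipy.signal import lfilter

fs = 600.0       # sampling frequency (Hz)
f0 = 50.0        # notch frequency (Hz)
bw = 10.0        # 3 dB notch width (Hz), i.e. +/- 5 Hz

theta = 2 * np.pi * f0 / fs       # zero/pole angle: 0.5236 rad (30 degrees)
r = 1 - np.pi * bw / fs           # pole radius: 0.9476

# Numerator (zeros on the unit circle) and denominator (poles at radius r),
# matching the coefficients derived in the text.
num = [1.0, -2 * np.cos(theta), 1.0]          # 1, -1.7321, 1
den = [1.0, -2 * r * np.cos(theta), r * r]    # 1, -1.6413, 0.898

# Quick check: a pure 50 Hz tone should be almost completely rejected.
t = np.arange(0, 2, 1 / fs)
tone = np.sin(2 * np.pi * 50 * t)
filtered = lfilter(num, den, tone)
print("residual 50 Hz amplitude:", np.abs(filtered[-600:]).max())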
The next objective of the
design example is to count the number
of beats per minute. To achieve this
there must be no dc offset on the signal
and the effect of artifact owing to
muscle noise must be removed. The
aim is to count the number of peaks in
the signal. Obviously, if the number of
peaks within a given period were to be
counted for the calculation of
beats/minute, then it would be desirable
that the beat signal remain at a
constant level and not drift. To remove
the drift requires removal of the low-
frequency component of the signal.
This can be achieved by placing a
highpass filter in
series with the output of the notch
filter. The beat rate can be as low as
30 beats per minute, which means that
the lowest frequency component of the
ECG signal can be as low as 0.5 Hz.
This can occur when a patient is being
monitored in a coma. As a
compromise, the cut-off frequency for
the highpass filter is set to 0.25 Hz.
A second source of
contamination which can affect the
estimation of the beat rate is muscle
noise; this can occur at approximately
40 Hz or more. It can simply be
eliminated by lowpass filtering of the
ECG signal. These extra filters can be
reduced to a single bandpass filter with
cut-off frequencies at:
Lower cut-off =0.25 Hz
Upper cut-off =40.00 Hz
By placing the bandpass filter
in series with the IIR sub-filter, and
connecting the output of the bandpass
to the waveform display, we eliminate
any d.c. off-set and muscle artifact in
the beat signal. The full
implementation is given in the figure.
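A simple software sketch of this beat-counting scheme is shown below (Python/SciPy, illustrative only). The 0.25 Hz and 40 Hz cut-offs are those chosen above; the peak-detection threshold, the minimum peak spacing and the synthetic test signal are assumptions made for the sketch.

import numpy as np
from scipy.signal import butter, lfilter, find_peaks

def beats_per_minute(ecg, fs=600.0):
    """Rough beat counter: 0.25-40 Hz bandpass, then peak counting."""
    # The bandpass removes d.c. drift (below 0.25 Hz) and muscle noise (above 40 Hz).
    b, a = butter(2, [0.25 / (fs / 2), 40.0 / (fs / 2)], btype="bandpass")
    clean = lfilter(b, a, ecg)
    # Count peaks well above the mean level; a minimum spacing of 0.33 s
    # between peaks caps the detectable rate at roughly 180 beats/min.
    peaks, _ = find_peaks(clean, height=0.5 * clean.max(), distance=int(0.33 * fs))
    return len(peaks) / (len(ecg) / fs / 60.0)

# Quick demo on a synthetic 72 beats/min pulse train (not real ECG data).
fs = 600.0
t = np.arange(0, 10, 1 / fs)
pulses = np.maximum(np.sin(2 * np.pi * 1.2 * t), 0.0) ** 20   # sharp beats at 1.2 Hz
print(beats_per_minute(pulses, fs))                            # approximately 72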





CONCLUSION:


A system was developed which
eliminates the noise contamination
from ECG signals. The contamination
was eliminated using digital filters; the
filter chosen was an infinite impulse
response (IIR) filter, designed using
the pole-zero placement technique.
The system was developed by
modelling an ECG signal.
The system was then tested with
real ECG data. A data acquisition card
is placed in a PC together with an
isolation amplifier; this isolation
amplifier must be medically safe. The
AI Acquire Waveform VI enables the
retrieval of data from the DAQ card.
This VI can be found in the Data
Acquisition menu of the Functions
palette. The sampling rate and the
number of samples should both be set
to 600. The figure illustrates the
connection of the data acquisition
system to the patient.
Two conductive pads are
placed on the subject from whom the
real ECG signal is to be obtained. A
medically safe, optically isolated
differential amplifier is connected to
the two conductive pads. The signal is
then fed into an anti-aliasing filter,
monitored using an oscilloscope, and
fed into a data acquisition card. Care is
to be taken to avoid bypassing the
isolation when using the oscilloscope.



REFERENCES:
1. Digital Signal Processing: A Practical Approach, by Emmanuel C. Ifeachor and Barrie W. Jervis.
2. LabVIEW Signal Processing, by Mahesh L. Chugani, Abhay R. Samant and Michael Cerna.



A TECHNICAL PAPER
ON
EMBEDDED SYSTEMS
(An Implementation Of Elliptic Curve Digital Signature Algorithm In
FPGA-Based Embedded System For Next Generation IT Security)



By
M.Kishore Naik
&
K.Rajasekhar



Department of Electronics and Communication Engineering
SRI VENKATESWARA UNIVERSITY COLLEGE OF ENGINEERING
TIRUPATI


Address for communication
M.Kishore Naik K.Rajasekhar
Room no.1226, Room no.1131,
Visveswara Block, Visveswara block
S.V.U.C.E.H, S.V.U.C.E.H,
TIRUPATI. TIRUPATI.
Email ID: naik_svuce@yahoo.co.in Email ID: raj_316_eee@yahoo.co.in





ABSTRACT

This paper proposes a high-performance FPGA-based embedded cryptosystem
implementing the Elliptic Curve Digital Signature Algorithm (ECDSA). A key
application of the proposed embedded system is in next-generation PKI enabled IT
security hardware platforms, providing security services of authentication, non-
repudiation and data integrity. The hardware architecture consists of a 32-bit Nios
embedded processor integrated with three dedicated hardware assist blocks, which
function as coprocessors for 163-bit Elliptic Curve Cryptography (ECC), hashing
(SHA-1), and large-integer modular arithmetic processing (MAP). These crypto IP
modules, which are designed and developed in-house at UTM, result in fast execution
of the ECC-based digital signature computations. The cryptosystem is designed in
VHDL, and implemented into a single Altera Stratix EP1S40F780C5 FPGA
microchip using SoC technology. The embedded device driver is scripted in C, while
the APIs are developed in Visual Basic for execution on the host PC. To test and
demonstrate the capabilities (robustness, functionality and reusability) of the
proposed digital signature crypto-system, a real-time e-document application
prototype, for secure document transfer via an insecure channel, has then been
developed. Running on a clock of 40 MHz, the system achieves the execution time of
0.59 msec for the signing operation, and 1.07msec for signature verifying,
corresponding to throughputs of 1697 and 937 operations/sec respectively.

INTRODUCTION

Nowadays, it is difficult to open a newspaper, watch a television program, or even have a
conversation without some mention of the Internet, e-commerce, smart cards, etc. The
rapid progress in wireless communication systems, mobile systems, and smart card
technology in our society makes information more vulnerable to abuse. In a
communication system, the content of the communication may be exposed to an
eavesdropper, or system services can be used fraudulently. For these reasons, it is
important to make information systems secure by protecting data and resources from
malicious acts. Crypto (cryptography) algorithms are the core of such security systems,
offering security services of data privacy, data integrity, authenticity and non-repudiation.
The latter three services can be provided by digital signature schemes. Hence, a key
application of the proposed embedded system is in providing a digital signature
subsystem for next-generation PKI-enabled IT security hardware systems, such as smart
cards, trust hardware platforms, and secure mobile communication devices. PKI is Public
Key Infrastructure will be widely applied in secure military communications,
e-commerce, e-health and e-government initiatives (eg.MyKad).
The basis of a digital signature scheme is public key cryptography. The
current de-facto public key crypto algorithm is RSA. Although RSA is highly secure and
widely used, there are some potential problems associated with its use. Processing time
and key storage requirements effectively increases with the increase of its already large
key size. In addition, its key generation process is complex and time consuming. The
problems are not necessarily critical for a network server, but they are potentially major
problems for resource- constrained devices, such as smart cards or mobile phones .
Since its introduction by Koblitz and Miller in 1985, Elliptic Curve
Cryptography (ECC) has been rapidly gaining popularity due to its comparatively high security
level and low bandwidth requirements. The main strength of ECC rests on the
discrete logarithm problem over points on an elliptic curve, which provides higher
strength per bit than any other current public-key scheme, including RSA. Only ECC
enable faster computations, lower power consumption, as well as memory and bandwidth
savings compared to traditional key choices. Using 224-bit ECC for secure web
transactions requires 3.5 times less servers than equivalent 2048-bit RSA key sizes. A
160-bit ECC achieves the similar security strength offered by a 1024-bit modulus RSA-
based digital signature system.
Clearly, there are two ways to implement any algorithm, i.e. either in
hardware or in software. It is fairly easy to implement crypto algorithms in software, but
such approaches are typically too slow for real-time applications, such as mobile
embedded systems, network routers, mobile communications, etc. Therefore, hardware
always appears to be the ultimate choice: dedicated units, when utilized as coprocessors,
can offload time-consuming algorithms and reduce the ensuing bottlenecks. Indeed, these
dedicated hardware assists or accelerators generally provide faster implementations
than software while at the same time offering more intrinsic security.
In hardware implementations, the flexibility and high speed
capability of FPGAs make them a suitable platform for cryptographic applications. Their
structure allows complex arithmetic operations that are not suited to general purpose
CPUs to be implemented more efficiently . They also offer a more cost- effective solution
than traditional ASIC hardware, which has a much longer design cycle. In fact, before
any VLSI or ASIC design, the fast prototyping development time of an FPGA design
allows modifications to be implemented with relative ease. This reconfigurable design
methodology is further enhanced with the advent of sophisticated SoC development
platforms, available commercially, which contain very high density FPGA devices. In
this work, the cryptosystem is designed in VHDL, and implemented into a single Altera
Stratix EP1S40F780C5 FPGA microchip using SoC technology. This paper is organized
as follows. First, the ECDSA algorithm is presented in detail. The digital signature
cryptosystem architecture is then described. This is followed by a discussion of the
FPGA implementation, testing and performance of the proposed digital signature
cryptosystem. Finally, the conclusion is presented.


Elliptic Curve Digital Signature Algorithm(ECDSA)

The ECDSA scheme consists of three main functions, which are Key Pair Generation,
Signature Signing and Signature Verification, as shown in Figure 1.
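For readers who want to exercise the three functions in software before considering a hardware design, the sketch below uses the Python cryptography package. The NIST P-256 curve and SHA-256 are used purely because they are universally supported in software libraries; the hardware described in this paper uses a 163-bit binary curve with SHA-1.

# Illustrative software sketch of ECDSA key generation, signing and verification.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Key pair generation
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

# Signature signing
message = b"document to be protected"
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# Signature verification (raises InvalidSignature if the check fails)
try:
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("signature valid")
except InvalidSignature:
    print("signature rejected")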


The Digital Signature Cryptosystem
The architecture of the proposed ECDSA cryptosystem is provided in Figure 2. The
design of the crypto coprocessors is out of the scope of this paper. Referring to the
diagram, the proposed embedded system consists of three main design components,
briefly described as follows:
Crypto Processor Block: This hardware block consists of three IP cores, functioning as
coprocessors, to perform the crypto algorithms; these include the 163-bit Elliptic Curve
Arithmetic, 163-bit Large Integer Modular Arithmetic and SHA-1 hashing functions.
These crypto cores, designed and developed in-house at UTM, result in fast execution of
the elliptic curve computations.


Device Drivers: These are embedded software routines executed on the Nios embedded
processor. They ensure the correct execution of the IP cores, and act as a bridge between
the cipher computation blocks and the software APIs running on the host PC during data
transmission. They also perform pseudo-random key generation.
APIs: Application Programming Interfaces, executed on the host PC, to aid application
developers. They perform high-level functions such as input file reading and output file
writing, and handle data transfer between the host and the cryptosystem hardware.

Before implementing ECDSA, several basic factors have to be considered
simultaneously, and choices made. These factors include security considerations, the
suitability of the methods available for optimizing finite-field and elliptic-curve
arithmetic, the application platform, and the constraints of the implementation
environment. The current version of the proposed cryptosystem applies the ECC domain
parameters, which are summarized as follows:



The designs of the crypto cores are described in VHDL (Very High Speed Integrated Circuit
Hardware Description Language). Each part of the design was synthesized, functionally
simulated, placed and routed, and timing analysis was carried out with the Quartus II
software from Altera. The combined architecture was simulated, and then implemented into
a single FPGA chip for hardware validation and evaluation.
Figure 3 shows the software architecture of the ECDSA cryptosystem, illustrating the
device driver (embedded software on Nios CPU) and the software APIs (host PC).

The device driver and API routines perform three main functions, which
are ECC key pair generation based on the specified system parameters, ECDSA digital
signature signing, and ECDSA digital signature verification of a file stored in the host PC.





Test & Performance Evaluation
The presented digital signature cryptosystem is prototyped on an Altera Nios Prototyping
Board containing the Stratix EP1S40F780C5 FPGA chip. To our knowledge, this
approach of combining all the crypto cores into a single FPGA SoC microchip has not been
reported previously in the literature. The timing performance of the ECDSA operations
achieved in our implementation is shown in Table 1. For evaluation purposes, the input of
the signature signing/verifying modules is set to be a 512-bit message block from the
hashing operation. Note that the system parameters are fixed to domain parameters over
GF(2^163).
The results in Table 1 show that the speed achieved is extremely promising
for real-time applications. Running on a clock of 40 MHz, the system achieves an
execution time of 0.59 ms for signing and 1.07 ms for signature
verifying, corresponding to throughputs of 1697 and 937 operations/sec respectively.

A Real-Time Data Security Application:

Secure Document Transfer
To evaluate the functionality of the proposed digital signature cryptosystem and the
reusability of the APIs and device drivers, we have developed a real-time e-document
system for the application of secure document transfer via insecure medium (eg.
Internet). This demonstration application prototype combines the ECC-based digital
signature subsystem proposed here, with a hybrid encryption cryptosystem. The latter
cryptosystem, which provides AES symmetric encryption and RSA public-key
encryption of the session-key, is not reported here, since it is out of scope of this paper. In
this application prototype, documents transferred electronically via an FTP mechanism in
a Local Area Network (LAN) environment are made secure by encrypting and signing them
in real time using the proposed cryptosystem. Tests on the system indicate that all
the required security services are achieved at an approximate speed of 2.2 KB/s.
Figure 4 shows the Visual Basic GUI of the control and monitoring e-document security
software, while Figure 5 and Figure 6 show the file uploading and downloading process,
respectively.


In the sending process, a document will go through the processes below:
1. Generate a random session key for AES encryption.
2. Encrypt the examination questions using AES encryption with the randomly generated
session key.
3. Encrypt the randomly generated session key using RSA public-key encryption with the
receiver's RSA public key.
4. Generate the ECDSA digital signature over both the ciphertext and the RSA-encrypted
session key using the sender's ECC private key.

[Figure: block diagram showing the sender's digital signature and ciphertext being SHA-hashed
and ECDSA-verified to give the message digest, with the encrypted session key RSA-decrypted
before AES decryption of the ciphertext.]




In the receiving process, a secured document will go through the processes below to
recover the original data:
1. Verify the ECDSA digital signature to check the integrity of the document using the
sender's ECC public key. If the signature verification process fails, reject the document.
2. Decrypt the encrypted session key using the receiver's RSA private key to recover the
session key.
3. Use the recovered session key to decrypt the ciphertext with AES decryption to recover
the examination questions.
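A compact software sketch of the sending and receiving steps listed above is given below, again with the Python cryptography package. AES-GCM, RSA-OAEP, P-256 and SHA-256 are software stand-ins chosen only for broad library support; they are not the exact primitives of the hardware prototype.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Long-term keys: receiver's RSA pair and sender's ECC pair.
receiver_rsa = rsa.generate_private_key(public_exponent=65537, key_size=2048)
sender_ecc = ec.generate_private_key(ec.SECP256R1())
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)

document = b"examination questions"

# Sending process
session_key = AESGCM.generate_key(bit_length=128)                   # 1. random session key
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, document, None)     # 2. AES encryption
wrapped_key = receiver_rsa.public_key().encrypt(session_key, oaep)  # 3. RSA-encrypt the key
signature = sender_ecc.sign(ciphertext + wrapped_key,               # 4. sign both items
                            ec.ECDSA(hashes.SHA256()))

# Receiving process
sender_ecc.public_key().verify(signature, ciphertext + wrapped_key,
                               ec.ECDSA(hashes.SHA256()))           # 1. verify (raises on failure)
recovered_key = receiver_rsa.decrypt(wrapped_key, oaep)             # 2. recover the session key
recovered = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)  # 3. AES-decrypt the document
assert recovered == document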

CONCLUSION

In this paper, we have presented the hardware/software design and
implementation of a digital signature cryptosystem based on SoC technology. The
embedded system consists of an embedded general-purpose processor, tightly coupled
with a set of crypto coprocessors: ECC, SHA-1 hashing, and a large-integer modular
arithmetic module. The hardware design is described completely in VHDL, and is
designed modularly with parameterization. The cryptosystem is hardware-prototyped into
a single Altera Stratix FPGA microchip. The digital signature cryptosystem was evaluated
on a real-time electronic document application prototype. It performs secure document
transfer using FTP in a LAN. In fact, the digital signature cryptosystem can be extended
to be used in Internet-based applications such as secure transmission of prescription
orders in telemedicine, secure payment transactions in e-commerce or m-
commerce, etc. The high performance achieved, and the flexibility of the proposed ECC-
based digital signature cryptosystem indicate its high potential application in next
generation PKI-enabled IT security hardware platforms. The work has also successfully
explored the application of System-on-Chip (SoC) methodology in advanced embedded
system design. An SoC is designed as a programmable platform that integrates most of
the functions of the end product into a single microchip.
It integrates at least one main processing element
(e.g. a microprocessor) that runs the system's embedded software, with a number of
dedicated coprocessors. As a result, the designs of custom hardware, embedded
processors and software that go into them become very tightly coupled. Any changes in
the implementation of one of the components affect the design of other components and,
in turn, the performance of the system. Hence, the traditional wisdom of designing and
developing each component as a separate entity is no longer efficient. A more integrated
approach is needed, in which the concept of hardware/software co-design and integration
of reusable IP cores is applied. This, in essence, is the SoC method, which should become
the underlying technology of next generation microchip design.
REFERENCE

[1] Paul C. van Oorschot, Alfred J. Menezes, and Scott A. Vanstone, 1996. Handbook of
Applied Cryptography. CRC Press Inc., Florida.
[2] Mohamed Khalil Hani, Hau Yuan Wen, Lim Kie Woon. Public Key Crypto Hardware
for Real-Time Security Application. In Proceedings of the National Real-Time
Technology and Application Symposium (RENTAS 2004), 1-6.
[3] Certicom Corporation. April 1997. The Elliptic Curve Cryptosystem: Current
Public-Key Cryptography Schemes.




A PAPER ON
AN APPLICATION OF NANOTECHNOLOGY


PILL CAMERA










PRESENTATION BY


P.SURESH P.KRISHNA
B.Tech: ( EEE) B.Tech: (EEE)
Email:suriponnam@gmail.com



VALLURUPALLI NAGESWARA RAO
VIGNANA JYOTHI INSTITUTE OF ENGINEERING AND TECHNOLOGY













CONTENTS

1. Introduction
2. How it all started?
3. How to manufacture using
Nanotechnology?
4. Applications of Nanotechnology
5. Conclusion
6. Bibliography


















Abstract



The aim of any new technology is to make products on a large scale at
cheaper prices and with increased quality. Current technologies have attained
a part of this, but the manufacturing is still at the macro level. The future
lies in manufacturing products right from the molecular level. Research in
this direction started way back in the eighties. At that time manufacturing at
the molecular and atomic level was laughed about, but due to the advent of
nanotechnology we have realized it to a certain level.
One such product manufactured is the PILL CAMERA, which is used for the
treatment of cancer, ulcers and anemia. It has made a revolution in the field of
medicine.
This tiny capsule can pass through our body without causing any harm to
it. It takes pictures of our intestine and transmits them to a receiver for
computer analysis of our digestive system. This process can help in
tracking any kind of disease related to the digestive system.


We have also discussed the drawbacks of the PILL CAMERA and how these
drawbacks can be overcome using a grain-sized motor and a bi-directional
wireless telemetry capsule. Besides this, we have reviewed the process of
manufacturing products using nanotechnology. Some other important
applications are also discussed, along with their potential impacts on various
fields.













INTRODUCTION: We have made great progress in manufacturing
products. Looking back from where we stand now, we started from
flint knives and stone tools and reached the stage where we make
such tools with more precision than ever. The leap in technology is
great but it is not going to stop here. With our present technology
we manufacture products by casting, milling, grinding, chipping and
the likes. With these technologies we have made more things at a
lower cost and greater precision than ever before . In the
manufacture of these products we have been arranging atoms in
great thundering statistical herds.

All of us know that manufactured products are
made from atoms. The properties of those products depend on how
those atoms are arranged. If we rearrange the atoms in coal, we get
diamonds. If we rearrange the atoms in sand (and add a pinch of
impurities) we get computer chips. If we rearrange the atoms in dirt,
water and air we get grass. The next step in manufacturing
technology is to manufacture products at the molecular level. The
technology used to achieve manufacturing at the molecular level is
NANOTECHNOLOGY. Nanotechnology is the creation of useful
materials, devices and systems through the manipulation of matter at
this minuscule scale. Nanotechnology deals with objects measured in
nanometres; a nanometre is a billionth of a metre, a millionth of a
millimetre, or about 1/80000 of the width of a human hair.


HISTORICAL OVERVIEW:

Manipulation of atoms was first talked about by Nobel laureate Dr.
Richard Feynman long ago, in 1959, at the annual meeting of the
American Physical Society at the California Institute of Technology
(Caltech), and at that time it was laughed about. Nothing was pursued
until the 80s. The concept of nanotechnology was introduced by Drexler
in 1981 and later popularized through his book The Engines of Creation. In 1990,
IBM researchers showed that it is possible to manipulate single
atoms. They positioned 35 xenon atoms on the surface of a nickel
crystal, using a scanning tunnelling microscope. These
positioned atoms spelled out the letters "IBM".




Image of "IBM" spelled with 35 xenon atoms



MANUFACTURING PRODUCTS USING NANOTECHNOLOGY


There are three steps to achieving nanotechnology-produced goods:

Atoms are the building blocks for all matter in our Universe. All the
products that are manufactured are made from atoms. The properties
of those products depend on how those atoms are arranged. For example, if
we rearrange the atoms in coal we get diamonds, and if we rearrange the
atoms in sand and add a pinch of impurities we get computer chips.

Scientists must be able to manipulate individual atoms. This means
that they will have to develop a technique to grab single atoms and
move them to desired positions. In 1990, IBM researchers showed this
by positioning 35 xenon atoms on the surface of a nickel crystal, using
a scanning tunnelling microscope. These positioned atoms
spelled out the letters "IBM".

The next step will be to develop nanoscopic machines, called
assemblers, that can be programmed to manipulate atoms and
molecules at will. It would take thousands of years for a single
assembler to produce any kind of material one atom at a time. Trillions
of assemblers will be needed to develop products in a viable time frame.

In order to create enough assemblers to build consumer goods, some
nanomachines, called replicators, will be programmed to build more assemblers
through a self-replication process.

Self-replication is a process in which devices whose diameters are of atomic
scale, of the order of nanometres, create copies of themselves. For self-
replication to take place in a constructive manner, three conditions
must be met.

The first requirement is that each unit be a specialised machine, called a
nanorobot, one of whose functions is to construct at least one copy of
itself during its operational life, apart from performing its intended
task. An example of a self-replicating nanorobot is an artificial antibody: in
addition to reproducing itself, it seeks out and destroys disease-causing organisms.

The second requirement is the existence of all the energy and ingredients
necessary to build complete copies of the nanorobot in question. Ideally, the
quantities of each ingredient should be such that they are consumed in
the correct proportion; if the process is intended to be finite, then
when the desired number of nanorobots has been constructed, there should
be no unused quantities of any ingredient remaining.

The third requirement is that the environment be controlled so that the
replication process can proceed efficiently and without
malfunctions. Excessive turbulence, temperature extremes, intense
radiation, or other adverse circumstances might prevent the proper
functioning of the nanorobots and cause the process to fail or falter.

Once nanorobots have been made in sufficient numbers, the task of
most of the nanorobots is changed from self-replication to the mass
manufacturing of products. The nanorobots are connected to and controlled by a
supercomputer which holds the design details of the product to be
manufactured. These nanorobots then work in tandem and start placing each
molecule of the product to be manufactured in the required position.


POTENTIAL EFFECTS OF NANOTECHNOLOGY:
As televisions, airplanes and computers revolutionized the world in the
last century, scientists claim that nanotechnology will have an even more profound
effect on the next century. Nanotechnology is likely to change the way almost
everything, including medicine, computers and cars, are designed and constructed.



One of the fascinating applications of nanotechnology in the field of medicine is
the pill camera. The pill camera has shown the world the wonders that
miniaturization can work.

PILL CAMERA

Introduction:
Imagine a vitamin-pill-sized camera that could travel through your body taking
pictures, helping diagnose a problem which doctors previously would have found
only through surgery. No longer is such technology the stuff of science-fiction
films.


Conventional method:
Currently the standard method of detecting abnormalities in the
intestines is through endoscopic examination in which doctors advance a scope
down into the small intestine via the mouth. However, these scopes are unable to
reach through all of the 20-foot-long small intestine, and thus provide only a
partial view of that part of the bowel. With the help of the pill camera, not only can
diagnoses be made for certain conditions routinely missed by other tests, but
disorders can be detected at an earlier stage, enabling treatment before
complications develop.

DESCRIPTION:
The device, called the Given Diagnostic Imaging System, comes in
capsule form and contains a camera, lights, transmitter and batteries. The capsule
has a clear end that allows the camera to view the lining of the small intestine.
Capsule endoscopy consists of a disposable video camera encapsulated into a pill-
like form that is swallowed with water. The wireless camera takes thousands of
high-quality digital images within the body as it passes through the entire length of
the small intestine. The latest pill camera is sized at 26 x 11 mm and is
capable of transmitting 50,000 colour images during its traversal through the
digestive system of the patient.

The video chip consists of a CMOS image sensor IC which is used to take pictures
of the intestine. The lamp is used for proper illumination in the intestine when taking
photos. A micro-actuator acts as memory to store the software code, that is, the
instructions. The antenna is used to transmit the images to the receiver. For the
acquisition of reliable and correct information, the capsule should be designed
to transmit several biomedical signals, such as pH, temperature and pressure. This is
achieved with the help of SoC technology.





WORKING:
It is slightly larger than normal capsule. The patient swallows the capsule, and
the natural muscular waves of the digestive tract propel it forward through the
stomach, into the small intestine, through the large intestine, and then out in the
stool. It takes snaps twice a second as it glides through the digestive tract. The
capsule transmits the images to a data recorder, which is worn on a belt around the
patient's waist while the patient goes about his or her day as usual. The physician then
transfers the stored data to a computer for processing and analysis. The complete
traversal takes around eight hours, and after it has completed taking pictures the capsule
comes out of the body as excreta.
Study results showed that the camera pill was safe, without any side effects, and
was able to detect abnormalities in the small intestine, including parts that cannot
be reached by the endoscope.

DRAWBACKS:
It is a revolution, no question about it, but the capsule poses medical risks:

1)"Unfortunately, patients with gastrointestinal strictures or narrowings are not
good candidates for this procedure due to the risk of obstruction" . It might also
happen that the pill camera might not be able to traverse freely inside the
digestive system, which may cause the tests to be inconclusive.

2) If there is a partial obstruction in the small intestine, there is a risk that the pill
will get stuck there, and a patient who might have come in for diagnostic reasons
may end up in the emergency room with an intestinal obstruction.

3) The pill camera can only transmit images from inside the body to the outside;
consequently it is impossible to control the camera's behaviour, including
the on/off power functions and effective illumination inside the intestine.

The first drawback is overcome using another product manufactured with the
help of nanotechnology which is the rice- grain sized motor. This miniature
motor, when attached to the pill camera gives it a propelling action inside
the body, which makes it easy for the pill to find its way through the
digestive system. Also the grain-sized motor has an application of its own
too. It can be employed to rupture and break painful kidney stones inside
the body.

The other two drawbacks can be overcome using a bidirectional wireless
telemetry camera.
The current paper presents the design of a bidirectional wireless telemetry
camera, 11 mm in diameter, which can transmit video images from inside the
human body and receive control signals from an external control unit. It
includes a transmitting antenna and a receiving antenna, a demodulator, a decoder,
four LEDs and a CMOS image sensor, along with their driving circuits. The
receiver demodulates the received signal that is radiated from the external control
unit. Next, the decoder receives this serial stream and interprets five of the
binary digits as an address code. The remaining signal is interpreted as binary data. As
a result, the proposed telemetry module can demodulate the external signals to control
the behaviour of the camera and the four LEDs during the transmission of video images.
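The address/data split performed by the decoder can be pictured with a few lines of Python. The frame length, the bit ordering and the capsule address used below are assumptions made purely for illustration; the paper does not specify them.

# Illustrative sketch only: splitting a received serial word into a 5-bit
# address code and 4-bit control data, as described above.
CAPSULE_ADDRESS = 0b10110          # hypothetical 5-bit address of this capsule

def decode_frame(bits):
    """bits: string of '0'/'1' characters, address bits first, then 4 data bits."""
    address = int(bits[:5], 2)
    data = int(bits[5:9], 2)
    if address != CAPSULE_ADDRESS:
        return None                # frame addressed to another device: ignore it
    return data                    # 4-bit control word for this capsule

print(decode_frame("101100110"))   # -> 6, i.e. binary 0110, for the assumed address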

























The CMOS image sensor is a single-chip 1/3-inch-format video camera, the OV7910,
which provides a high level of functionality within a small footprint. The
image sensor supports NTSC-type analog colour video and can directly
interface with a VCR or TV monitor. The image sensor also has very low power
consumption, as it requires only a 5 V DC supply.

Circuit Block diagram of Transmitter and Receiver

















In the first block diagram, one SMD-type transistor amplifies the video signal for
efficient modulation, using three biasing resistors and one inductor. In the bottom block,
a tiny SAW resonator oscillating at 315 MHz modulates the video signal.
This modulated signal is then radiated from inside the body to the outside.

For the receiver block diagram, a commercial ASK/OOK (on/off keyed)
superheterodyne receiver in an 8-pin SMD package was used. This single-chip receiver
for remote wireless communications, which includes an internal local oscillator
fixed at a single frequency, is based on an external reference crystal or clock. In addition,
the decoder IC receives the serial stream and interprets the serial information as 4 bits of
binary data. Each bit is used for channel recognition of the control signal from outside
the body. Since the CMOS image sensor module consumes most of the power
compared to the other components in the telemetry module, controlling the
ON/OFF state of the CMOS image sensor is very important. Moreover, since lighting the
LEDs also uses a significant amount of power, individual
ON/OFF control of each LED is equally necessary. As such, the control system is
divided into 4 channels in the current study. A high-output-current amplifier with a
single supply is utilized to drive the loads in the capsule.

EXTERNAL CONTROL UNIT
A schematic of the external control unit circuit is illustrated below, where the
ON/OFF operation of the switches on the front of the unit is encoded into 4-channel
control signals. These digital signals are then transferred to a synthesizer and
modulated into an RF signal using an OOK transmitter with a carrier frequency of
433 MHz.








To verify the operation of the external control unit and telemetry capsule, CH1
was used to control the ON/OFF state of the CMOS image sensor and CHs 2-4 to control
the LED lighting. The four switches on the front of the control panel were able to make 16
different control signals (4 bits, 2^4 = 16).
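A tiny sketch of the 16 possible control words is given below (Python, illustrative only). The text states that CH1 switches the CMOS image sensor and CHs 2-4 switch the LED lighting; the mapping of bit positions to channels is an assumption made for the sketch.

# Sketch of the 16 possible 4-bit control words (2^4 = 16) described above.
def apply_control_word(word):
    """word: integer 0-15; returns the assumed on/off state of each channel."""
    return {
        "CH1 CMOS sensor": bool(word & 0b0001),
        "CH2 LED group 1": bool(word & 0b0010),
        "CH3 LED group 2": bool(word & 0b0100),
        "CH4 LED group 3": bool(word & 0b1000),
    }

for w in range(16):                             # enumerate all 16 combinations
    print(format(w, "04b"), apply_control_word(w))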

The bi-directional operation of the telemetry module was verified by transmitting a
video signal from the CMOS image sensor. This image data was then displayed on a
computer.

The proposed telemetry capsule can simultaneously transmit a video signal and
receive a control signal determining the behaviour of the capsule. As a result, the total
power consumption of the telemetry capsule can be reduced by turning off the
camera power during dead time and separately controlling the LEDs for proper
illumination in the intestine. Accordingly, the proposed telemetry module for bi-
directional and multi-channel communication has potential applications in
many areas.


APPLICATIONS OF NANOTECHNOLOGY IN OTHER FIELDS
Nanotechnology may have its biggest impact on the medical industry.
Patients will drink fluids containing nanorobots programmed to attack and
reconstruct the molecular structure of cancer cells and viruses to make them
harmless.
Nanorobots could also be programmed to perform delicate surgeries --
such nanosurgeons could work at a level a thousand times more precise
than the sharpest scalpel. By working on such a small scale, a nanorobot
could operate without leaving the scars that conventional surgery does.
Additionally, nanorobots could change your physical appearance. They
could be programmed to perform cosmetic surgery, rearranging your atoms
to change your ears, nose, eye color or any other physical feature you wish
to alter.
There's even speculation that nanorobots could slow or reverse the aging
process, and life expectancy could increase significantly.

In the computer industry, the ability to shrink the size of transistors on
silicon microprocessors will soon reach its limits. Nanotechnology will be
needed to create a new generation of computer components. Molecular
computers could contain storage devices capable of storing trillions of bytes
of information in a structure the size of a sugar cube.

Nanotechnology has the potential to have a positive effect on the
environment. For instance, airborne nanorobots could be programmed to
rebuild the thinning ozone layer. Contaminants could be automatically
removed from water sources, and oil spills could be cleaned up instantly.

The promises of nanotechnology sound great, don't they? Maybe even
unbelievable? But researchers say that we will achieve these capabilities
within the next century. And if nanotechnology is, in fact, realized, it might
be the human race's greatest scientific achievement yet, completely
changing every aspect of the way we live.

Conclusion: Though nanotechnology has not yet evolved to its full
capacity, the first rung of products has already made an impact
on the market. In the near future most of the conventional
manufacturing processes will be replaced with a cheaper and better
manufacturing process: nanotechnology. Scientists predict that this is
not all nanotechnology is capable of. They even foresee that in the
decades to come, with the help of nanotechnology, one will be able to make
hearts, lungs, livers and kidneys just by providing coal, water and
some impurities, and even prevent the effects of aging.
Nanotechnology has the power to revolutionize the world
of production, but it is sure to increase unemployment.
Nanotechnology can be used to make miniature explosives, which
would create havoc in human lives. Every new technology that comes
opens new doors and horizons but closes some. The same is true
with nanotechnology too.












BIBLIOGRAPHY



www.zyvex.com

www.nanozine.org

www.nist.com

www.peddige.org

www.ipt.arc.nasa






















































PLASMONICS
The next chip-scale technology













Authors:

B.SANTOSH R.GUNA SEKHAR
III B.Tech (E.C.E) III B.Tech (E.C.E)
S.K.I.T S.K.I.T
SRI KALAHASTI. SRI KALAHASTI.
Email- Email-



ABSTRACT
The ever-increasing demand for
faster information transport and
processing capabilities is undeniable.
Our data-hungry society has driven
enormous progress in the Si electronics
industry and we have witnessed a
continuous progression towards smaller,
faster, and more efficient electronic
devices over the last five decades. The
scaling of these devices has also brought
about a myriad of challenges. Currently,
two of the most daunting problems
preventing significant increases in
processor speed are thermal and signal
delay issues associated with electronic
interconnection. Optical interconnects,
on the other hand, possess an almost
unimaginably large data carrying
capacity, and may offer interesting new
solutions for circumventing these
problems. Unfortunately, their
implementation is hampered by the large
size mismatch between electronic and
dielectric photonic components which
are at least one or two orders of
magnitude larger than their nanoscale
electronic counterparts. This obvious
size mismatch between electronic and
photonic components presents a major
challenge for interfacing these
technologies. Further progress will
require the development of a radically
new chip-scale device technology that
can facilitate information transport
between nanoscale devices at optical
frequencies and bridge the gap between
the world of nanoscale electronics and
microscale photonics.
We discuss a candidate
technology that has recently emerged
and has been termed Plasmonics. This
device technology exploits the unique
optical properties of nanoscale metallic
structures to route and manipulate light
at the nanoscale. By integrating
plasmonic, electronic, and conventional
photonic devices on the same chip, it
would be possible to take advantage of
the strengths of each technology. We
present some of the recent studies on
plasmonic structures and conclude by
providing an assessment of the potential
opportunities and limitations for Si chip-
scale plasmonics.





Plasmonics as a new device
technology
Metal nanostructures may
possess exactly the right combination of
electronic and optical properties to
tackle the issues outlined above and
realize the dream of significantly faster
processing speeds. The metals
commonly used in electrical
interconnection such as Cu and Al allow
the excitation of surface plasmon-
polaritons (SPPs). SPPs are
electromagnetic waves that propagate
along a metal-dielectric interface and are
coupled to the free electrons in the metal
(Fig.1).

Fig. 1 An SPP propagating along a metal-dielectric
interface. These waves are transverse magnetic in
nature. Their electromagnetic field intensity is highest
at the surface and decays exponentially away from the
interface. From an engineering standpoint, an SPP
can be viewed as a special type of light wave
propagating along the metal surface.

From an engineering standpoint,
an SPP can be viewed as a special type
of light wave. The metallic interconnects
that support such waves thus serve as
tiny optical waveguides termed
plasmonic waveguides. The notion that
the optical mode (light beam) diameter
normal to the metal interface can be
significantly smaller than the wavelength
of light has generated significant
excitement and sparked the dream that
one day we will be able to interface
nanoscale electronics with similarly
sized optical (plasmonic) devices.
Current Si-based integrated
circuit technology already uses
nanoscale metallic structures, such as Cu
and Al interconnects, to route electronic
signals between transistors on a chip.
This mature processing technology can
thus be used to our advantage in
integrating plasmonic devices with their
electronic and dielectric photonic
counterparts. In some cases, plasmonic
waveguides may even perform a dual
function and simultaneously carry both
optical and electrical signals, giving rise
to exciting new capabilities.

Imaging SPPs with a photon
scanning tunneling microscope
In order to study the propagation
of SPPs, we constructed a photon
scanning tunneling microscope (PSTM)
by modifying a commercially available
scanning near-field optical microscope.
PSTMs are the tool of choice for
characterizing SPP propagation along
extended films as well as metal stripe
waveguides. Fig. 2a shows how a
microscope objective at the heart of our
PSTM can be used to focus a laser beam
onto a metal film at a well-defined angle
and thereby launch an SPP along the top
metal surface.
A sharp, metal-coated pyramidal
tip (Figs. 2b and 2c) is used to tap into
the guided SPP wave locally and scatter
light toward a far-field detector. These
particular tips have a nanoscale aperture
at the top of the pyramid through which
light can be collected. The scattered light
is then detected with a photomultiplier
tube. The signal provides a measure of
the local light intensity right underneath
the tip and, by scanning the tip over the
metal surface, the propagation of SPPs
can be imaged.
The operation of the PSTM can
be illustrated by investigating the
propagation of SPPs on a patterned Au
film (Fig. 2d). Here, a focused ion beam
(FIB) was used to define a series of
parallel grooves, which serve as a Bragg
grating to reflect SPP waves. Fig. 2e
shows a PSTM image of an SPP wave
excited with a 780 nm wavelength laser
and directed toward the Bragg grating.
The back reflection of the SPP from the
grating results in the standing wave
interference pattern observed in the
image. From this type of experiment the
wavelength of SPPs can be determined
in a straightforward manner and
compared to theory.
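As an illustration of such a comparison, the short Python sketch below estimates the SPP wavelength and 1/e propagation length at a flat Au/air interface from the standard SPP dispersion relation. The Au permittivity used is an approximate literature value near 780 nm; it is an assumption, not a number taken from this work.

# Sketch only: SPP wavelength and propagation length from the flat-interface
# dispersion relation k_spp = k0 * sqrt(eps_m * eps_d / (eps_m + eps_d)).
import numpy as np

lambda0 = 780e-9            # free-space wavelength used in the experiments (m)
eps_m = -22.5 + 1.4j        # approximate permittivity of Au near 780 nm (assumed)
eps_d = 1.0                 # air

k0 = 2 * np.pi / lambda0
k_spp = k0 * np.sqrt(eps_m * eps_d / (eps_m + eps_d))

lambda_spp = 2 * np.pi / k_spp.real     # SPP wavelength, slightly below 780 nm
prop_length = 1 / (2 * k_spp.imag)      # 1/e intensity propagation length

print(f"SPP wavelength ~ {lambda_spp * 1e9:.0f} nm")
print(f"propagation length ~ {prop_length * 1e6:.0f} micrometres")

The few-tens-of-microns propagation length that this estimate gives is consistent with the stripe measurements discussed below.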
The PSTM can also be used to
image SPP propagation directly in
plasmonic structures and devices of
more complex architecture to determine
their behavior. The PSTM provides a
clear advantage by providing a direct
method to observe the inner workings of
plasmonic devices, offering a peek
inside the box.


Fig. 2 (a) Schematic of the operation of a PSTM that
enables the study of SPP propagation along metal film
surfaces. The red arrow shows how an SPP is
launched from an excitation spot onto a metal film
surface using a high numerical aperture microscope
objective. (b) Scanning electron microscopy (SEM)
image of the near-field optical cantilever probe used in the experiments. The tip consists of a microfabricated, hollow glass pyramid coated with an optically thick
layer of Al. Light can be collected or emitted through
a ~50 nm hole fabricated in the Al film on the top of
the pyramid. (c) A cross-sectional view of the same
hollow pyramidal tip after a large section was cut out
of the sidewall with a focused ion beam (FIB). (d)
SEM image of a Au film into which a Bragg grating
has been fabricated using a FIB. (e) PSTM image of
an SPP wave launched along the metal film toward
the Bragg grating.

Experiments and simulations on
plasmonic waveguides
The valuable information about
plasmonic structures provided by PSTM
measurements allows us to evaluate the
utility of plasmonics for interconnection.
Plasmonic stripe waveguides provide a
natural starting point for this discussion
as such stripes very closely resemble
conventional metal interconnects.
Electron beam lithography has
been used to generate 55 nm thick Au
stripes on a SiO2 glass slide with stripe
widths ranging from 5 µm to 50 nm. Au stripes are ideal for fundamental waveguide transport studies as they are easy to fabricate, do not oxidize, and exhibit a qualitatively similar plasmonic response to Cu and Al. Fig. 3a shows an optical micrograph of a typical device consisting of a large Au area from which SPPs can be launched onto metal stripes of varying width. A scanning electron microscopy (SEM) image of a 250 nm wide stripe is shown as an inset. The red arrow shows how light is launched from a focused laser spot into a 1 µm wide stripe.


Fig. 3 (a) Optical microscopy image of a SiO2
substrate with an array of Au stripes attached to a
large launchpad generated by electron beam
lithography. The red arrow illustrates the launching of
an SPP into a 1 µm wide stripe. (b, c, and d) PSTM images of SPPs excited at λ = 780 nm and propagating along 3.0 µm, 1.5 µm, and 0.5 µm wide Au stripes, respectively.

Figs. 3b, 3c, and 3d show PSTM
images of SPPs excited at λ = 780 nm and propagating along 3.0 µm, 1.5 µm, and 0.5 µm wide Au stripes, respectively. The 3.0 µm wide stripe can be used to
propagate signals over several tens of
microns. Similar to previous far-field
measurements along Ag stripes, it is
clear that the propagation distance of
SPPs decreases with decreasing stripe
width.
Recent numerical work has
demonstrated that the modal solutions of
plasmonic stripe waveguides are hybrid
transverse electric-transverse magnetic
(TE-TM) modes, and therefore their
analysis requires numerical solution of
the full vectorial wave equation.
It is worth noting that SPP
modes are supported on both the top and
bottom Au surfaces shown in Fig. 4.
These waves can simultaneously carry
information without interacting. The
mode propagating along the top
metal/air interface is called a leaky
mode and the mode at the bottom
metal/glass interface is called a bound
mode. The fields associated with these
modes have a large Hx component,
which is reminiscent of the purely TM
nature of SPP modes on infinite metal
films. For this reason they are often
called quasi-TM modes.

Fig. 4 Simulated SPP mode profiles for a 55 nm thick
and 3.5 µm wide Au stripe on a SiO2 glass substrate. It shows the fundamental leaky (left) and bound (right) SPP modes propagating at the top air/metal and bottom glass/metal interfaces, respectively. Both modes can be employed simultaneously for information transport.

In addition to calculating the
field-intensity distributions of SPP
modes, the real and imaginary parts of the propagation constants can also be found. Fig. 5 shows the complex propagation constants (βsp + iαsp) determined for the lowest order leaky, quasi-TM modes supported by 55 nm thick Au stripes of various widths W at an excitation wavelength of 800 nm. The inset in the bottom graph shows the geometry used in the simulations. For this simulation, the dielectric properties of Au (εAu = -26.1437 + 1.8497i) at the excitation wavelength of λ = 800 nm were used.



Fig. 5 Calculated complex propagation constants (βsp + iαsp) for the eight lowest order leaky, quasi-TM SPP modes of Au stripe waveguides of varying width. For these calculations, the Au stripe thickness was t = 55 nm and the free-space excitation wavelength was λ = 800 nm. The magnitudes of βsp and αsp were normalized to the real part of the free-space propagation constant β0. The inset shows the simulation geometry and the coordinate frame.

Several important trends can be
discerned from these plots. Similar to
dielectric waveguides, larger structures
tend to support an increased number of
modes. The decrease of the propagation constant with decreasing stripe width first results in reduced confinement of the modes, and finally cutoff occurs for a width of ~1.3 µm. At this width, the SPP propagation constant has become equal to the propagation constant in air, β0. The diminished confinement for narrow stripes results in a concomitant increase in the radiation losses into the higher-index SiO2 substrate. This explains the larger αsp observed for small stripe widths. The 0.5 µm wide waveguide in Fig. 3 is below cutoff and does not support a quasi-TM mode.
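For reference, the flat-film value underlying the normalization in Fig. 5 can be reproduced in a few lines of Python from the permittivity quoted above (εAu = -26.1437 + 1.8497i at 800 nm). This is only the infinitely wide Au/air film limit, not the full vectorial stripe-mode calculation, which requires a numerical mode solver.

import numpy as np

lam0 = 800e-9                        # free-space excitation wavelength (m)
k0 = 2 * np.pi / lam0                # free-space propagation constant beta_0
eps_au = -26.1437 + 1.8497j          # Au permittivity at 800 nm (quoted above)

# SPP propagation constant for an infinitely wide Au/air interface.
k_sp = k0 * np.sqrt(eps_au / (eps_au + 1.0))

print(f"beta_sp/beta_0  = {k_sp.real / k0:.4f}")    # ~1.02, just above the air line
print(f"alpha_sp/beta_0 = {k_sp.imag / k0:.5f}")
print(f"1/e propagation length ~ {1e6 / (2 * k_sp.imag):.0f} um")

The resulting propagation length of a few tens of microns is consistent with the behavior reported above for the widest stripes.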
It is clear that the short
propagation distances found for
plasmonic waveguides preclude direct
competition with low-loss dielectric
waveguide components. However,
plasmonic structures can add new types
of functionality to chips that cannot be
obtained with dielectric photonics. One
category of structures offering unique
capabilities is active plasmonic
interconnects. Such interconnects may
offer new capabilities by introducing
nonlinear optical or electrical materials
into otherwise passive plasmonic
waveguides.
While we have shown that
weakly guided stripe waveguides cannot
achieve deep sub-wavelength
confinement, there exist alternative
strongly guiding geometries that can
provide markedly better confinement.
This category of structures is of great
interest for novel interconnection
schemes. Waveguides consisting of two
closely spaced metals also combine
propagation distances of a few microns
with deep-sub-wavelength confinement.
Fig. 6a shows a comparison of
propagation lengths (where the exponential decay in |Ez|² falls to the 1/e point) for planar metal/insulator/metal (MIM) waveguides and waveguides consisting of a metal film sandwiched between two insulators (IMI waveguides). These calculations were performed using the well-established reflection pole method at the important telecommunications wavelength of 1.55 µm. The metal used is Au (εAu = -95.92 + 10.97i at λ = 1.55 µm) and the insulator is air.


Fig. 6 Plot of (a) SPP propagation length and (b)
spatial extent of the SPP modes as a function of the
center-layer thickness for MIM and IMI plasmonic
waveguides. The insets illustrate the plotted quantities. The reflection pole method was used for λ = 1.55 µm with Au as the metal and air as the insulator.

For a sufficiently large center-
layer thickness, the propagation lengths
for these two types of waveguides
converge to the propagation length
found for a single interface (dashed line
in Fig. 6a). This is reasonable since the
SPP modes on the two metal surfaces
decouple when the spacing between
them becomes large. As the center layer
thickness decreases and the SPPs at the
two interfaces start to interact, the
propagation length along MIM structures
decreases while it increases along IMI
structures. In fact, IMI waveguides can
reach centimeter propagation distances
for very thin metal films. For obvious
reasons, these are termed long-range
SPP modes. These large propagation
distances can be understood by realizing
that the spatial extent of the modes
becomes as large as 10 µm at these
extremely thin metallic film thicknesses
(Fig. 6b). In this case, the SPP waves are
almost entirely propagating in the very
low loss air region with very little field
intensity inside the lossy (resistive)
metal.
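The single-interface limit to which both curves in Fig. 6a converge can likewise be estimated from the permittivity quoted above (εAu = -95.92 + 10.97i at λ = 1.55 µm). The sketch below treats only a flat Au/air interface and does not reproduce the coupled MIM/IMI mode calculation, which needs the reflection pole method or a similar mode solver.

import numpy as np

lam0 = 1.55e-6                       # telecom wavelength (m)
k0 = 2 * np.pi / lam0
eps_m = -95.92 + 10.97j              # Au permittivity at 1.55 um (quoted above)

# Single Au/air interface SPP: the limit both the MIM and IMI curves approach
# once the center layer is thick enough for the two surfaces to decouple.
k_sp = k0 * np.sqrt(eps_m / (eps_m + 1.0))
L_prop = 1 / (2 * k_sp.imag)         # 1/e intensity propagation length

# Transverse 1/e extent of the field on the air side.
kappa_air = np.sqrt(k_sp**2 - k0**2)
extent = 1 / kappa_air.real

print(f"Propagation length ~ {L_prop * 1e6:.0f} um")
print(f"Field extent into air ~ {extent * 1e6:.1f} um")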
MIM structures allow electromagnetic energy to be routed around sharp corners and signals to be split at T-junctions. These unique features can be
used to realize truly nanoscale photonic
functionality and circuitry, although the
maximum size of such circuits will be
limited by the SPP propagation length. It
is important to realize that for every type
of waveguide, there is a clear, but
different, trade-off between confinement
and propagation distance (loss). The use
of one type of waveguide over another
will thus depend on application-specific
constraints.

Plasmonics can bridge
microscale photonics and
nanoscale electronics
Based on the data presented
above, it seems that the propagation
lengths for plasmonic waveguides are
too short to propagate SPPs with high
confinement over the length of an entire
chip (~1 cm). Although the
manufacturability of long-range SPP
waveguides may be straightforward
within a CMOS foundry, it is unlikely
that such waveguides will be able to
compete with well-established, low-loss,
high-confinement Si, Si3N4, or other
dielectric waveguides. However, it is
possible to create new capabilities by
capitalizing on an additional strongpoint
of metallic nanostructures. Metal
nanostructures have a unique ability to
concentrate light into nanoscale volumes. Despite the numerous studies on antennas in the microwave and optical regimes, their application to solving current issues in chip-scale interconnection has remained largely unexplored. The field-concentrating abilities of optical antennas may serve to bridge the large gap between microscale dielectric photonic devices and nanoscale electronics (Fig. 7).

Fig. 7 Schematic of how a nanoscale antenna structure can serve as a bridge between micro-scale dielectric components and nanoscale electronic devices.

The diagram in Fig. 7 shows a detail of a chip on which optical signals are routed through conventional dielectric optical waveguides, which are typically one order of magnitude larger than the underlying CMOS electronics. An antenna can be used to concentrate the electromagnetic signals from the waveguide mode into a deep sub-wavelength metal/insulator/metal waveguide and inject them into a nanoscale photodetector. The small size of the detector ensures a small capacitance, low noise, and high-speed operation. By using metallic nanostructures as a bridge between photonics and electronics, we play to the strengths of metallic nanostructures (concentrating fields and sub-wavelength guiding), dielectric waveguides (low-loss information transport), and nanoscale electronic components (high-speed information processing).

Conclusions
Plasmonics has the potential to play a
unique and important role in enhancing
the processing speed of future integrated
circuits. The field has witnessed an
explosive growth over the last few years
and our knowledge base in plasmonics is
rapidly expanding. As a result, the role of plasmonic devices on a chip is also becoming better defined, as captured in Fig. 8.

Fig. 8 Operating speeds and critical dimensions of
various chip-scale device technologies, highlighting
the strengths of the different technologies.

This graph shows the operating
speeds and critical dimensions of
different chip-scale device technologies.
In the past, devices were relatively slow
and bulky. Plasmonics offers precisely what electronics and photonics do not have: the size of electronics and the speed of photonics. Plasmonic devices, therefore, might interface naturally with similar-speed photonic devices and similar-size electronic components. For
these reasons, plasmonics may well
serve as the missing link between the
two device technologies that currently
have a difficult time communicating. By
increasing the synergy between these
technologies, plasmonics may be able to
unleash the full potential of nanoscale
functionality and become the next wave
of chip-scale technology.

Keywords:
1. Plasmonics
2. SPPs - Surface plasmon-
polaritons
3. PSTM - Photon Scanning
Tunneling Microscope
4. SEM - Scanning Electron
Microscopy
5. TE - Transverse electric
6. TM - Transverse magnetic

References:
1. R. Zia, J. A. Schuller, and A. Chandran, article at www.stanford.edu
2. R. P. Van Duyne, Molecular Plasmonics
3. Plasmonics: light on a wire, article in E.F.Y. Magazine































































































PROTEIN MEMORY

N.Dinesh
Computer Science And Engineering, Koneru Lakshmaiah College Of Engineering,
Vaddeswaram
Contact No: 9440879430
E-Mail: din_foru@yahoo.com

A.S.M.P.Kumar
Electonics and Electical Engineering, Loyola College Of Engineering,
Sattanepalli
Contact No: 9985176613
E-Mail:asmpkumar@gmail.com


Abstract

While magnetic and semiconductor-based information storage devices have been in use since the mid-1950s, today's computers and volumes of information require increasingly more efficient and faster methods of storing data. Among the most promising of the new alternatives are photopolymer-based devices, holographic optical memory storage devices, and protein-based optical memory storage using rhodopsin or photosynthetic reaction centers. This paper focuses mainly on protein-based optical memory storage using the photosensitive protein bacteriorhodopsin with the two-photon method of exciting the molecules, and briefly explains how this memory is processed.

Protein-based storage is an experimental means of storing data. Using proteins that respond to light, obtained from bacteria found in salt water, a small cube can store large amounts of data. By using lasers of various wavelengths, the protein can be switched between states, allowing data to be stored and recalled. With this newfound technology, scientists are now developing a larger, more efficient storage medium. The current work is to hybridize this biological molecule with the solid-state components of a typical computer. Several biological molecules are being considered for use in computers, but the bacterial protein bacteriorhodopsin (bR) has generated much interest among scientists.



1. History
The fastest and most expensive storage
technology today is based on electronic storage
in a circuit such as a solid state "disk drive" or
flash RAM. This technology is getting faster and
is able to store more information. Plans are underway for putting up to a gigabyte of data onto a single chip.

In the early 1970s Walther Stoeckenius and Dieter Oesterhelt at Rockefeller University in New York discovered that a protein isolated from a salt marsh bacterium exhibited photosensitive properties. They called this protein bacteriorhodopsin because it was very similar to rhodopsin, the protein found in the eyes of humans and animals.
The Soviets saw potential in
bacteriorhodopsin for use in computing. They
created a project entitled Project Rhodopsin.
They used the photosensitive properties to store
and manipulate data, and deserve credit for
demonstrating the significant potential of this
protein for computer memory.
Professor Robert Birge was originally interested purely in understanding how these light-activated changes occurred. In the late 1970s he became interested in bacteriorhodopsin and attempted to apply the photosensitive properties of the protein to the design of computer memories.

2. Introduction
There have been many methods and proteins
researched for use in computer applications in
recent years. However, among the most
promising approaches, and the focus of this paper, is 3-Dimensional Optical RAM storage using the light-sensitive protein bacteriorhodopsin. Bacteriorhodopsin is a purple, light-absorbing protein present in the membrane of a microorganism called Halobacterium halobium. This bacterium lives in salt bogs where the temperature can reach +150 C. When the oxygen level in its environment is too low for it to obtain energy by respiration (oxidation), it uses the protein for photosynthesis.

Bacteriorhodopsin is found in the purple membranes of several species of bacteria, most notably Halobacterium halobium. This particular bacterium lives in salt marshes, where salinity is very high and temperatures can reach 140 degrees Fahrenheit. Unlike most proteins, bacteriorhodopsin does not break down at these high temperatures.

While bacteriorhodopsin can be used in any number of schemes to store memory, we will focus our attention on its use in 3-Dimensional Optical Memories.

3. Advantages over semiconductor
memory
First, it is based on a protein that can be produced in large volumes at low cost.
Second, the system can operate over a wider range of temperatures than semiconductor memory.
Third, data are stored permanently: switching off the power does not cause data loss.
Finally, data "bricks" that are small in size but contain gigabytes of data can be placed into an archive for storing copies (like magnetic tapes). Since the bricks have no moving parts, they are more convenient than portable hard disks or cartridges of magnetic tape.

4.Storage Devices




4.1 Bacteriorhodopsin:
Early research in the field of protein-based
memories yielded some serious problems with
using proteins for practical computer
applications. Among the most serious of the
problems was the instability and unreliable
nature of proteins, which are subject to thermal
and photochemical degradation, making room-
temperature or higher-temperature use
impossible. Largely through trial and error, and
thanks in part to nature's own natural selection
process, scientists stumbled upon
bacteriorhodopsin, a light harvesting protein that
has certain properties which make it a prime
candidate for computer applications.
Bacteriorhodopsin, a light harvesting
bacterial protein, is the basic unit of protein
memory and is the key protein in halobacterial
photosynthesis. It functions as a light-driven proton pump: under exposure to light it transports protons across the cell membrane, allowing the cell to switch its mode of operation from respiration to photosynthesis, and converts light energy into chemical energy. The response of this molecule to light can be utilized to build protein memories.

4.2 Diagram of molecular electronic
device:

5.Methods to prepare protein memory
Methods used to prepare protein memory are
thermal, chemical, chromophore analog
substitution, and genetic engineering. Chemical enhancement of the protein overlaps with modification of the chromophore.

5.1 Process
The process of making the protein cube has many different steps. First, the bacterium's DNA is spliced and mutated to make the protein more efficient for use as a volumetric memory. Then the bacteria must be grown in large batches and the protein extracted. Finally, the purified protein is put into the cube and used as a volumetric storage medium.

The cube is read by two lasers as binary code. One laser is used to activate the protein in a section of the cube. The other laser is used to write or read binary information in the same section. The data are assigned as either a zero or a one, and the binary code is then analyzed by the computer as various pieces of information.

One of the more interesting ideas is optical 3D protein memory. This is a volumetric, three-dimensional type of memory which could give as much as a 1000-fold increase in storage capacity compared to current technology.
The technology uses a photosensitive protein
which has some similarities to proteins in the
human retina. The protein is transformed into a gel block, one side of which is accessed by a laser "blade", effectively creating a slice through the substance. The block's front is lit by a digital laser light pattern composed of dark and light spots. Where these two laser planes intersect, that particular slice of protein can be recorded or read back. The read-back is optically detected by a charge-coupled device, which senses the light and dark areas created through differential light absorption. The first laser can then address a different slice.



5.2 How it works
bR-state (logical value of the bit "0") and Q-
state (logical value of the bit "1") are
intermediate states of the molecule and can
remain stable during many years. This process is
known as sequential one-photon architecture, or
two-photon absorption. This property (which in
particular provides a wonderful stability of
protein) was obtained by an evolutional way in
the struggle for survival under severe conditions
of salt bogs.

5.3 Recording of data
When recording data, a yellow "page" laser is first switched on to drive the molecules of one plane into an excited intermediate state. The SLM, an LCD matrix that creates a mask in the beam path, defines this active (excited) plane in the material inside the cuvette. The active plane is a page of data that can hold a 4096 x 4096 bit array. Before the protein returns to its quiescent state (in which it can retain the information for quite a long time), a red recording laser, positioned at a right angle to the yellow one, is switched on. A second SLM displays the binary data and creates the corresponding mask in the path of this beam, so that only certain spots (pixels) of the page are irradiated. The molecules at these spots convert into the Q state and represent binary ones; the remaining part of the page returns to the initial bR state and represents binary zeros.

(Figure: data recording technique diagram.)

While early efforts to make use of this property were carried out at cryogenic (liquid nitrogen) temperatures, modern research has made use of the different states of bacteriorhodopsin to carry out these operations at room temperature.
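The write sequence described above (paging beam, SLM data mask, red recording beam) can be summarized in a short Python sketch. The class and method names below are entirely hypothetical stand-ins for the optical hardware; only the order of operations follows the text.

import numpy as np

PAGE_SHAPE = (4096, 4096)            # one data page, as described in the text

class MockCuvette:
    """Stand-in for the optical hardware; no real devices are modeled here."""
    def fire_paging_laser(self, page_index):
        print(f"paging laser: excite plane {page_index}")
    def set_data_mask(self, bits):
        print(f"SLM mask set: {int(bits.sum())} pixels will be written as 1")
    def fire_recording_laser(self):
        print("red recording laser: irradiated pixels -> Q state (binary 1)")

def write_page(cuvette, page_index, bits):
    """One page-write cycle, following the order of operations in the text."""
    assert bits.shape == PAGE_SHAPE
    cuvette.fire_paging_laser(page_index)   # 1. select and excite one plane
    cuvette.set_data_mask(bits)             # 2. mask the red beam with the data page
    cuvette.fire_recording_laser()          # 3. write the "1" pixels into the Q state

bits = np.random.rand(*PAGE_SHAPE) > 0.5    # a random 4096 x 4096 page of bits
write_page(MockCuvette(), page_index=0, bits=bits)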

5.4 Data Reading Technique:
To reduce the risk of corruption, a two-photon method is used for reading instead of the method used for reading conventional RAM. The two photons each carry only part of the energy needed to change the state of the bacteriorhodopsin, so they pass through the polymer until they coincide at a point and together switch a molecule of bacteriorhodopsin. The single-photon method would not be a good choice for a three-dimensional memory: a single photon would excite all of the molecules in its path through the polymer block, so if the surface of the block were used to store information, that information would be corrupted as the computer attempted to write to molecules in the interior.
First, the green paging beam is fired at the
square of protein to be read. After two
milliseconds (enough time for the maximum
amount of O intermediates to appear), the entire
red laser array is turned on at a very low
intensity of red light. The molecules that are in
the binary state 1 (P or Q intermediate states) do
not absorb the red light, or change their states, as
they have already been excited by the intense red
light during the data writing stage.
The data page can be read without corruption up to 5000 times. Each page is tracked by a counter, and after 1024 readings the page is refreshed (regenerated) with a new recording operation. Considering that a molecule changes its state within 1 ms, the total time taken for a read or write operation is around 10 ms. But, like a holographic memory system, this device performs parallel access in the read-write cycle, which allows speeds of up to 10 Mbps to be expected. It is assumed that by combining 8 bit cells into a byte with parallel access, 80 Mbps could be reached, but such a method requires a corresponding circuit realization of the memory subsystem. Some versions of the SLM devices implement page addressing; in cheap constructions the beam is directed to the required page with the help of a rotary system of galvanic mirrors. Such an SLM provides 1 ms access but costs four times more.

(Figure: data writing technique diagram.)

If a defect is detected, the beam must be re-sent to access those pages from the other side. Theoretically, the cuvette can accommodate 1 TByte of data. Capacity is limited mainly by the lens system and the quality of the protein.
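A quick back-of-the-envelope check of the figures quoted above (a 4096 x 4096-bit page, roughly 10 ms per page operation, a nominal 1-TByte cuvette, and quoted rates of 10 and 80 Mbps) can be done in a few lines of Python; the numbers come from the text and the calculation is only illustrative.

# Illustrative arithmetic using only the figures quoted in the text.
page_bits = 4096 * 4096              # one data page
page_bytes = page_bits // 8          # = 2 MiB per page
capacity_bytes = 1e12                # "theoretically ~1 TByte" per cuvette
pages_per_cuvette = capacity_bytes * 8 / page_bits

print(f"Page size: {page_bytes / 2**20:.0f} MiB; pages per cuvette: {pages_per_cuvette:.0f}")

for rate_mbps in (10, 80):           # quoted serial and byte-parallel rates
    hours = capacity_bytes * 8 / (rate_mbps * 1e6) / 3600
    print(f"At {rate_mbps} Mbit/s, reading a full 1-TByte cuvette takes ~{hours:.0f} h")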

5.5 Bacteriorhodopsin Optical Memory
To achieve this type of memory, we need:
- Purple membrane from Halobacterium halobium
- A bistable red/green switch
- In the protein coat at 77 K, 10^7 to 10^8 switching cycles
- 10,000 molecules per bit
- A switching time of 500 femtoseconds
- A monolayer fabricated by self-assembly
- Speed currently limited by laser addressing

For a three-dimensional memory to work, all of the target molecules need to be reachable without altering any other molecules. Consider first a single-photon scheme. A chunk of bacteriorhodopsin-laden polymer would be the memory in this example, and the light source would be a laser of the appropriate wavelength to excite the bacteriorhodopsin from the bR state to the M or Q state. As a person used this computer, the RAM would begin to fill up, and the surface of the polymer chunk would slowly be used up. Eventually, the need to use the storage capacity inside the chunk of polymer would arise, and the laser would have to be shone on the molecules inside the chunk.

6. Applications
This protein memory could play a critical role in the development of artificial intelligence.
It also finds application in the military field, as it is more rugged than semiconductor memories.

7. Future:
Molecular optical memory research is underway by Prof. Robert Birge and his group at Syracuse University. Using the purple membrane from the bacterium Halobacterium halobium, they have made a working optical bistable switch, fabricated in a monolayer by self-assembly, that reliably stores data with 10,000 molecules per bit. The molecule switches in 500 femtoseconds, that is, 1/2000 of a nanosecond, and the actual speed of the memory is currently limited by how fast a laser beam can be steered to the correct spot on the memory.

Computer hard drive capacity could be increased a hundredfold by using a common protein to fabricate nanoscale magnetic particles, claims UK company Nanomagnetics. It uses the protein apoferritin, the main molecule in which iron is stored in the body, to create a material consisting of magnetic particles, each just a few nanometres in diameter. Each particle can store a bit of information, and together they can be packed onto a disk drive at much greater density than is possible using existing hard disk manufacturing methods.


8. Disadvantages
The main disadvantage is that this memory may degrade after a certain number of years and would have to be refreshed thoroughly.

9. Conclusion
Molecules have the ability to serve not only as memory but also as computer switches. Molecular switches, if they become a reality, will offer an appreciable reduction in hardware size, since they are themselves very small (about 1/1000th the size of a semiconductor transistor). One can then imagine a biomolecular computer about 1/20th the size of present-day semiconductor-based computers. Small size and fast operation will drive the development of future computers, and we hope that in the coming years we will use computers that store gigabytes of memory in small chips.

10. References
[1] www.knowledgefoundation.com
[2] www.buzzle.com

















SRI VENKATESWARA UNIVERSITY COLLEGE OF ENGINEERING

TIRUPATI
ANDHRA PRADESH




By
CH.SIDDARTHA SAGAR
&
M.PRANEETH




Address:
CH.SIDDARTHA SAGAR (III ECE),
M.PRANEETH (III ECE),
Room No: 1213,
Visweswara Block,
S.V.U.C.E Hostels,
Tirupati,
PIN -517502.

E-mails: siddartha_437@yahoo.co.in
Praneeth_mu@yahoo.co.in



























CONTENTS




1. Abstract
2. Introduction
3. Red Tacton
4. Working principle
5. Human area network
6. Features
7. Applications
8. Comparison with other technologies
9. Conclusion















HUMAN AREA NETWORK


ABSTRACT:

Technology is making many things easier; we can say that our concept is a standing example of that. So far we have seen LAN, MAN, WAN, the Internet and many more, but here is a new concept, RED TACTON, which turns the human body into a communication network by the name HAN (Human Area Network).

NTT labs from Japan is currently testing and developing this revolutionary technology. Red Tacton is a new Human Area Networking technology that uses the surface of the human body as a safe, high-speed network transmission path. Red Tacton uses the minute electric field generated by the human body as the medium for transmitting data. The chips, which will be embedded in various devices, contain a transmitter and receiver built to send and accept data in digital format.

In this paper we discuss Red Tacton, its working, and its applications in various fields. We also compare Red Tacton with other technologies for data transmission, and learn about the Human Area Network.












Introduction :

We may have imagined the future as a place crawling with antennas and emitters, owing to the huge growth of wireless communications. Yet it seems that the current means of transferring data might already have a very serious competitor: none other than the human body.

NTT labs from Japan has announced that it is currently testing a revolutionary technology called Red Tacton, which uses the electric fields generated by the human body as a medium for transmitting data. The chips, which will be embedded in various devices, contain a transmitter and receiver built to send and accept data in digital format. The chips can take any type of file, such as an MP3 music file or an e-mail, and convert it into a format of digital pulses that can be passed through and read from a human body's electric field. The chip in the receiving device reads these tiny changes and converts the file back into its original form.

Red tacton:


Red Tacton is a new Human Area Networking technology that uses the surface of the human body as a safe, high-speed network transmission path. Red Tacton uses the minute electric field emitted on the surface of the human body. Technically, it is completely distinct from wireless and infrared. A transmission path is formed the moment a part of the human body comes in contact with a Red Tacton transceiver; physically separating ends the contact and thus ends communication. Using Red Tacton, communication starts when terminals carried by the user or embedded in devices are linked in various combinations according to the user's natural, physical movements. Communication is possible using any body surface, such as the hands, fingers, arms, feet, face, legs or torso.


Working principle :


Using a new super-sensitive photonic electric field sensor, Red Tacton can achieve duplex communication over the human body at a maximum speed of 10 Mbps.






The Red Tacton transmitter induces a weak electric field on the surface of the body. The Red Tacton receiver senses changes in the weak electric field on the surface of the body caused by the transmitter. Red Tacton relies upon the principle that the optical properties of an electro-optic crystal vary with changes in a weak electric field. Red Tacton detects changes in the optical properties of an electro-optic crystal using a laser and converts the result to an electrical signal in an optical receiver circuit. The transmitter sends data by inducing fluctuations in the minute electric field on the surface of the human body. Data is received using a photonic electric field sensor that combines an electro-optic crystal and laser light to detect fluctuations in the minute electric field.

The naturally occurring electric field induced on the surface
of the human body dissipates into the earth. Therefore, this electric field is
exceptionally faint and unstable. The photonic electric field sensor
developed by NTT enables weak electric fields to be measured by detecting
changes in the optical properties of an electro-optic crystal with a laser
beam.

Human area network :




In addition to the WANs (Internet) and LANs, there are
applications best served by Human Area Networks (HANs) that connect the
last meter.

Human society is entering an era of ubiquitous computing, where everything is networked. By making Human Area Networks feasible, Red Tacton will enable ubiquitous services based on human-centered interactions that are therefore more intimate and easier for people to use.


Features of Red Tacton:

Red Tacton has three main functional features.

1. TOUCH:
Communication with just a touch or step.

Touching, gripping, sitting, walking, stepping and other human movements can be the triggers for unlocking or locking, starting or stopping equipment, or obtaining data. Using Red Tacton, communication starts when terminals carried by the user or embedded in devices are linked in various combinations through physical contact according to the human's natural movements.






2.BROADBAND & INTERACTIVE :
Duplex, interactive communication is possible at a maximum speed
of 10Mbps. Because the transmission path is on the surface of the body, transmission
speed does not deteriorate in congested areas where many people are communicating
at the same time .Taking advantage of this speed, device drivers can be downloaded
instantly and execute programs can be sent.


























(Figure: with a wireless LAN, communication speed can deteriorate in crowded spaces due to a lack of bandwidth; with Red Tacton, device drivers can be downloaded instantly and executable programs can be quickly sent.)
3. ANY MEDIA:

In addition to the human body, various conductors and dielectrics can be used as transmission media. Conductors and dielectrics may also be used in combination: with dielectrics, signals pass through the material; with conductors, signals travel along the surface.

A communication environment can be created easily and at low cost by using items close at hand, such as desks, walls, and metal objects. However, there are limitations on the length of the conductor along which signals can propagate, on installation locations, and on the thickness of the dielectric to be passed through.

Application fields:

Many applications using Red Tacton have been proposed. Some are:

Red Tacton devices embedded in medicine bottles transmit information on the medicines' attributes. If the user touches the wrong medicine, an alarm is triggered on the terminal he or she is carrying. The alarm sounds only if the user actually touches the medicine bottle, reducing the false alarms common with passive wireless ID tags, which can trigger simply by proximity.


When a consumer stands in front of an advertising
panel, advertising and information matching his or her attributes is automatically
displayed. By touching or standing in front of items they are interested in, consumers
can get more in-depth information.



Print out wherever you want just by touching the desired printer with one hand and a PC or digital camera with the other hand to make the link. Complicated configurations are reduced by downloading device drivers "at first touch".


By shaking hands, personal profile data can be exchanged between the mobile terminals carried by the users (an electronic exchange of business cards). Communication can be kept private using authentication and encryption technologies.

Your own phone number is allocated and billing commences; your personal address book and call history are imported automatically.


The seat position and steering wheel height adjust to match the driver just by sitting in the car. The driver's home is set as the destination in the car navigation system, and the stereo plays the driver's favorite song.



An electrically conductive sheet is embedded in the
table. A network connection is initiated simply by placing a lap-top on the table.
Using different sheet patterns enables segmentation of the table into subnets.




Red Tacton can carry music or video between
headsets, mobile devices, mobile phones, etc. Users can listen to music from a Red
Tacton player simply by putting on a headset or holding a viewer.




Carrying a mobile Red Tacton-capable device in one's pocket, the user's ID is verified and the door is unlocked when he or she holds the doorknob normally. Secure lock administration is possible by combining personal verification tools, such as fingerprint ID or other biometrics, in the mobile terminal.

Prototype:
NTT has made three types of prototypes.

Communication speed: 10 Mbps
Protocols: TCP/IP
Communication method: Half-duplex
Interface: PCMCIA

Communication speed: 10 Mbps
Protocols: TCP/IP
Communication method: Half-duplex
Interface: RJ45

(The third prototype is under construction.)
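To give a feel for what the quoted 10 Mbps link implies, the short Python sketch below estimates transfer times for a few example payloads. The file sizes and the assumption of ideal, overhead-free throughput are illustrative; only the 10 Mbps figure comes from the prototype specifications above.

# Illustrative transfer-time estimates over the quoted 10 Mbit/s link.
# Real throughput would be lower due to TCP/IP and half-duplex overhead.
LINK_BPS = 10e6                      # 10 Mbps, from the prototype specification

examples = {
    "electronic business card (~10 kB)": 10e3,
    "device driver (~1 MB)": 1e6,
    "MP3 song (~5 MB)": 5e6,
}

for name, size_bytes in examples.items():
    seconds = size_bytes * 8 / LINK_BPS
    print(f"{name}: ~{seconds:.2f} s at the raw link rate")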




Comparison with other technologies:



Evaluation criteria | Wireless LAN | Close-range wireless | Contactless IC cards | Passive wireless ID tag | Infrared (standard data communication) | Red Tacton
Transfer speed (can DVD-quality images be sent?) | E | P | P | P | P | E
Performance deterioration during congestion (simultaneous use by many people in small spaces) | P | P | E | E | E | E
Duplex data transfer (interactive processing) | E | E | E | P | E | E
Data configuration at initiation of communications (registration of ID profiles, etc.) | E | E | P | P | E | E
Tasks required at the time of each communication (adjustment of contacts and optical axis) | E | E | P | E | P | E
Synchronization with user behavior (specification of user positioning) | P | P | E | E | P | E

E: Excellent   P: Poor
Conclusion:

So we conclude that Red Tacton is the best technology when compared with the other data communication technologies present today. In this technology there is no problem of hackers, as our body itself is the medium. If Red Tacton is introduced into the cyber market, it will bring a marvelous and tremendous change and will be adopted by many more people.













PRESENTED

BY

A.AMBICA,
ECE,
SIR C. R. R COLLEGE OF ENGINEERING,
ELURU - 534007,
W.G DIST, AP


E-mail id:
ambiannavarapu@gmail.com

Contact Number
08812-234614






ABSTRACT

In this world of science and technology, robotics is one of the boons to the
world. Recently, there has been increasing interest in the emerging field of robotics.
Today's robots are mechanical arms controlled by computers that are programmed to perform a range of
handling activities. They are establishing themselves in manufacturing automation systems to produce a
range of goods with great precision. The emerging era of robots calls for different types of skills. Entering
non-industrial areas, the first fledgling robots for domestic use are coming off the production lines. Robots
are being used in hazardous places, such as outer space or under the sea. Technical advances are gradually
endowing robots with properties that actually increase their similarity to humans.

In this paper we present an idea of robotics and its applications
in the various fields that we come across. With their computational and agile skills, robots
perform tasks that are difficult or hazardous for humans. One area where robots are
beginning to excel, and which causes less controversial debate on their usefulness,
is as explorers. The exploration of Mars has long been considered a major goal in
the exploration of the Solar System. This exploration is performed using robots
known as Mars Exploration Rovers. We present a view of the rover and
its operation.

INTRODUCTION:

A robot is a machine constructed as an assemblage of joined links so that
they can be articulated into desired positions by a programmable controller and
precision actuators to perform a variety of tasks. The field of robotics has its origins
in science fiction. The term robot was derived from the English translation of a
fantasy play written in Czechoslovakia around 1921. It took another 40 years before
the modern technology of industrial robotics began. Today, robots are highly
automated mechanical manipulators controlled by computers. Robotics is the science
of studying and creating robots.

Advances in microchips, microprocessors, sensors, control systems,
mechanical engineering, transducers, and telecommunications have resulted in
widespread growth of robotic processes and applications. Robots come in many shapes
and sizes and have many different abilities. Basically, a robot is simply a computer
with some sort of mechanical body designed to do a particular job. Usually, it is able
to move and has one or more electronic senses. These senses are not nearly as
powerful as our own senses of sight and hearing. However, scientists and engineers are
working hard to improve robots. They are constantly coming up with ways to make
them see, hear and respond to the environment around them.

Engineers are attempting to add sensors to the current breed of industrial
robots, so that they can see, touch, and even hear. Machines with this extra power will
obtain information about events in the outside world (what engineers call feedback),
and the hardware will be able to react according to changes in circumstances,
instead of simply repeating a fixed routine of instructions.

Computers that control robots are becoming faster and more sophisticated, imbued with
reasoning powers that may one day match those of humans. This will endow robots with greater versatility. They
will have the capacity, at least to some degree, to work out modes of action entirely for themselves. The
simplest definition of a robot could be "a mechanism which moves and reacts to its environment".




LAWS OF ROBOTICS:

Isaac Asimov established in his stories three fundamental laws of robotics, as
follows:

1. A robot may not injure a human being or, through inaction, allow a human being
to come to harm.
2. A robot must obey the orders given to it by human beings, except where such
orders would conflict with the first law.
3. A robot must protect its own existence as long as such protection does not conflict
with the first and second laws.


CHARACTERISTICS

A robot has these essential characteristics:

Sensing First of all, your robot would have to be able to sense its surroundings. It would do this
in ways not unlike the way that you sense your own surroundings. Giving your robot
sensors: light sensors (eyes), touch and pressure sensors (hands), chemical sensors (nose), hearing
and sonar sensors (ears), and taste sensors (tongue) will give your robot awareness of its
environment.
Movement A robot needs to be able to move around its environment. Whether rolling on
wheels, walking on legs or propelled by thrusters, a robot needs to be able to move. To count as a
robot, either the whole robot moves, like the Sojourner, or just parts of the robot move, like the
Canada Arm.

Energy A robot needs to be able to power itself. A robot might be solar powered, electrically
powered, or battery powered. The way your robot gets its energy will depend on what your robot
needs to do.

Intelligence A robot needs some kind of "smarts." This is where programming enters the
picture. A programmer is the person who gives the robot its "smarts." The robot will have to have
some way to receive the program so that it knows what it is to do.



CLASSIFICATION OF ROBOTS:

Our first level of definition is the mobile robot; humans, animals and insects are
the typical role models for this well-established robot genre. Here are some classifications of robots:

1) Tele-robots - Those guided by remote control by a human operator.

2) Telepresence Robots - Similar to Tele-robots but with feed back of video, sound and other data to
allow the operator to feel more like they are in the Robot.

3) Static Robots - Such as the widely used arms employed in factories and laboratories worldwide.

4) Mobile robots - Those which need to navigate and perform tasks without human intervention.

5) Autonomous Robots - able to carry out their task without intervention and obtain their power
from their environment.

6) Androids - Robots built to mimic humans

Where are robots used?

Robots, originally, were used in hazardous operations, such as handling toxic
and radioactive materials, and loading and unloading hot workpieces from furnaces
and handling them in foundries.
Some rule-of-thumb applications for robots are:
The four Ds:
Dull
Dirty
Dangerous
Difficult
The four Hs:
Hot
Heavy
Hazardous
Humble

Why are robots used?

Improved product quality
Increased productivity
Reduced production costs
Operation in hazardous working conditions
Better management and control
Improved flexibility and decreased debugging time

Sensors

A robot sensor is a device or transducer that detects information about the robot and its
surroundings, and transmits it to the robot controller. Generating artificial equivalents of human senses is
one of the greatest challenges facing the robotics fraternity. The ultimate aim is to produce robots with a
perception of their surroundings comparable to that enjoyed by human beings; robots that would be able to
see, feel, hear, and perhaps even smell things and capable of communicating with their human collaborators
in ordinary natural language. We are, at the moment, a long way from such an ideal situation.

It is imperative to understand difficulties involved in the development of robot senses and find
out where the problem lies. Technology has found ways of matching, and in many cases surpassing, the
abilities of natural sense organs such as eyes and ears; radar, sonar, directional microphones, body
scanners, etc. enable us to see, hear, or detect things far beyond the range of our own sensory equipment. The
real problem is not the gathering of information but understanding what it means.

For example, devising a robot that can go to the surface of the North Sea, or the surface of Mars,
or the inside of a nuclear reactor is quite different from equipping that robot with the intelligence to enable
it to detect a leaking point in the pipeline from an image of the undersea oil route. The image of the
pipeline merely shows a bit of seaweed wrapped around the joint. It requires even greater intelligence if the
robot is to be capable of deciding for itself that, say, it will be able to resolve this ambiguous image if it
moves to the right to get a better view or zooms in to take a closer look.

Robots should provide both the information that we require to understand the real world and the
information needed to deal with or respond to the real world. There is little point in providing robots with
senses unless we also enable them to understand their surrounding events to get a control over them.

Exploring Mars Using Intelligent Robots

Mars, which is the fourth planet from the sun and the third smallest in size, got its name because
of its rusty red color. The exploration of Mars has long been considered as a major goal in the exploration
of the Solar System. At the beginning of the 20th century it was believed that intelligent life existed on
Mars, while until 1960 the possibility of plant life was still considered. However, detailed information from
the Mariner 9 orbiter and the two Viking Landers that landed on Mars in 1976, ruled out the possibility of
any life on Mars, but created the perspective for future human settlement of the Red Planet, as it came to be
known.
Mars is the most Earth-like planet and the best candidate for the first human settlement off Earth.
Information from the Vikings reveals that Mars is a cold, dry planet with extreme temperatures and a thin
atmosphere. The terrain is rough and often untraversable. However, certain features are most encouraging.
From the thin atmosphere nitrogen, argon and water vapor can be extracted, which are enough to prepare
breathable air and water. Water can also be found in the soil and at the polar caps in the form of ice. Wind
and the sun are plausible sources of power. Furthermore, extracts from the atmosphere and the soil can be
used to produce rocket propellants, fertilizer and other useful compounds and feedstocks.

Further exploration needs to be done in order to obtain a better insight to the Martian environment.
Unfortunately, a manned mission is out of the question for the time being, for several reasons. A trip to Mars
would require almost two years away from Earth, which creates a problem of supply of consumables. Then,
the physical factors still need to be thoroughly investigated. The extreme temperatures (reaching -100
degrees centigrade) and the need to produce breathable air still present problems. An unmanned mission is
therefore a necessary precursor to a piloted flight to Mars.

The use of robotic rovers is an attractive and necessary option if exploration of Mars is to go
forward. Having decided on this route, further problems come to the surface. The delay for radio signals
between Mars and Earth varies between 6 and 41 minutes, while the long distance imposes a low
communication bandwidth. This precludes the use of teleoperation for controlling the vehicle. (A
teleoperated vehicle is one in which every individual movement is controlled by a human being.)
Therefore, some autonomy of the vehicle is needed. However, a totally autonomous vehicle that could
travel for extended periods carrying out its assigned tasks is simply beyond the present state of the art of
artificial intelligence. This report considers the technical issues involved in the operation of a Mars
Exploration Rover. In particular, the operation of rovers and related technologies are discussed, while up-to-date
robots and their performance are used as examples. There are three current missions to Mars: the Mars
Exploration Rover, the Mars Global Surveyor and the Mars Odyssey programs.

Figure: about to land on Mars





LANDING OF ROVERS

NASA's Mars Exploration Rover (MER) mission has sent two robot rovers to Mars. The two
robots landed at different places on Mars. The places are called Gusev Crater and Meridiani Planum.
Scientists think there might have been water at each of the places in the past. Places that had water are the
best places to look for signs of life. The rovers have special instruments to look for signs of water.

Two robot rovers landed on Mars in January 2004. They are called the Mars Exploration Rover
mission. One rover is named Spirit. The other is called Opportunity. Opportunity landed at a place called
Meridiani Planum. Meridiani Planum is a very flat plain. "Planum" means "plain". Some minerals on Earth
form in wet places. One of those minerals is called hematite. Scientists think there is hematite at Meridiani
Planum. Opportunity is searching for hematite. If it finds some, that might mean that the plain used to be
wet.
Opportunity has a twin. Its twin is named Spirit. Spirit landed at a different place on Mars. Spirit
landed at a place called Gusev Crater.
Figure: landing sites. Opportunity landed at Meridiani Planum; Spirit landed inside Gusev Crater (each site marked with a yellow oval).




Gusev Crater may have been filled with water long ago. It may have been a big lake. Spirit is a
robot geologist. Some kinds of rocks form in places where there is water. Spirit is trying to find those kinds
of rocks. If it does, that might prove that Gusev Crater really was a lake.

The only water that we know about on Mars right now is frozen - it is ice! Many scientists think Mars
used to be warmer. They think there may have been liquid water on Mars in the past. Liquid water is a good
place to find life, especially microbes. If we find water on Mars, or clues about where there used to be
water, that might help us figure out whether Mars ever had life.


MARS EXPLORATION ROVERS:

The Mars Exploration Rover vehicles are exploring the surface of Mars. The rovers are
"geologists" that are looking at rocks and soil on Mars. They are trying to find rocks and minerals that
might have formed in water. The rovers have six wheels and are powered by solar panels. The rovers are
about the size of a golf cart. Each vehicle has a mass of 170 kilograms and weighs 375 pounds on Earth.
Since the gravity on Mars is weaker, each rover weighs just 140 pounds on Mars.
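As a quick sanity check on those figures, weight is just mass times surface gravity; the short sketch below assumes the commonly quoted values of about 9.81 m/s^2 for Earth and 3.71 m/s^2 for Mars (these constants are not taken from the paper).

```python
# Rough check of the rover weight figures quoted above (gravity values
# assumed, not taken from mission documents): weight = mass * gravity.
G_EARTH = 9.81     # m/s^2
G_MARS = 3.71      # m/s^2, approximate Martian surface gravity
N_PER_LBF = 4.448  # newtons per pound-force

mass_kg = 170.0
weight_earth_lbf = mass_kg * G_EARTH / N_PER_LBF  # ~375 lbf
weight_mars_lbf = mass_kg * G_MARS / N_PER_LBF    # ~142 lbf, i.e. about 140 lb

print(round(weight_earth_lbf), round(weight_mars_lbf))  # 375 142
```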





Figure showing parts of rover

Each rover has nine cameras! Six of the cameras help the robot steer and keep it from running into
rocks or falling into craters. One is a microscope camera that takes close-up views of rocks. Two cameras
are on top of a pole that is about as tall as a person. They are giving us views of Mars that are like what we
would see if we were standing on Mars!

Each rover has a robot arm. The arm has instruments on it that it uses to examine rocks and
soil. Two of those instruments tell us about the kinds of minerals and elements that are in the soil and
rocks. The arm has a scraper, called a RAT (Rock Abrasion Tool), which it uses to scrape off the outer
surface layer of rocks. After the rock's surface has been scraped clean, the other instruments can look at
"fresh" material on the inside of the rock. The microscope camera is also on the arm.
Each rover has:

a body: a structure that protects the rover's "vital organs"
brains: computers to process information
temperature controls: internal heaters, a layer of insulation, and more
a "neck and head": a mast for cameras to give the rovers a human-scale view
eyes and other "senses": cameras and instruments that give the rovers information about their
environment
an arm: a way to extend its reach
wheels and legs: parts for mobility
energy: batteries and solar panels
communications: antennas for speaking and listening

The rover's body:
The rover body is called the warm electronics box, or "WEB" for short. Like a car body, the
rover body is a strong, outer layer that protects the rover's computer, electronics, and batteries (which are
basically the equivalent of the rover's brains and heart). The rover body thus keeps the rover's vital organs
protected.
The warm electronics box is closed on the top by a triangular piece called the Rover Equipment
Deck (RED). The Rover Equipment Deck makes the rover like a convertible car, allowing a place for the
rover mast and cameras to sit out in the Martian air, taking pictures and clearly observing the Martian
terrain as it travels.
The gold-painted, insulated walls of the rover body also keep heat in when the night
temperatures on Mars can drop to -96 degrees Celsius (-140 degrees Fahrenheit).

The rovers "brains"
Unlike people and animals, the rover brains are in its body. The rover computer (its "brains") is
inside a module called "The Rover Electronics Module" (REM) inside the rover body. The communication
interface that enables the main computer to exchange data with the rovers instruments and sensors is called
a "bus" (a VME or Versa Module Europa bus to be exact). This VME bus is an industry standard interface
bus to communicate with and control all of the rover motors, science instruments, and communication
functions.

Better memory than ever
The computer is composed of equipment comparable to a high-end, powerful laptop computer.
It contains special memory to tolerate the extreme radiation environment from space and to safeguard
against power-off cycles so the programs and data will remain and will not accidentally erase when the
rover shuts down at night.
On-board memory includes 128 MB of DRAM with error detection and correction and 3 MB of
EEPROM. That's roughly the equivalent memory of a standard home computer. This onboard memory is
roughly 1,000 times more than the Sojourner rover from the Pathfinder mission had.
Better "nerves" for balance and position
The rover carries an Inertial Measurement Unit (IMU) that provides 3-axis information on its position,
which enables the rover to make precise vertical, horizontal, and side-to-side (yaw) movements. The device
is used in rover navigation to support safe traverses and to estimate the degree of tilt the rover is
experiencing on the surface of Mars.

Monitoring its "health"
Just like the human brain, the rover's computers register signs of health, temperature, and other
features that keep the rovers "alive."
The software in the main computer of the rover changes modes once the cruise portion of the mission is
complete and the spacecraft begins to enter the Martian atmosphere. Upon entry into the Martian
atmosphere, the software executes a control loop that monitors the "health" and status of the vehicle. It
checks for the presence of commands to execute, performs communication functions, and checks the
overall status of the rover. The software does similar health checks in a third mode once the rover emerges
from the lander.
This main control loop essentially keeps the rover "alive" by constantly checking itself to ensure that it
is both able to communicate throughout the surface mission and that it remains thermally stable (not too hot
or too cold) at all times. It does so by periodically checking temperatures, particularly in the rover body,
and responding to potential overheating conditions, recording power generation and power storage data
throughout the Mars sol (a Martian day), and scheduling and preparing for communication sessions.
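The main control loop described above can be pictured as a simple periodic check-and-respond cycle. The sketch below is only an illustration of that idea: the temperature band echoes the -40 to +40 Celsius limit quoted elsewhere in this paper, while the sensor, heater, logging and scheduling functions are hypothetical placeholders, not the actual flight software.

```python
import time

# Illustrative only: a periodic health loop in the spirit of the one
# described above. All callables and limits are placeholders invented
# for this sketch; the real flight software is far more elaborate.
SAFE_BODY_TEMP_C = (-40.0, 40.0)   # allowed band for the "vital organs"

def health_loop(read_body_temp, set_heater, log_power, comm_due, run_comm_session):
    while True:
        temp = read_body_temp()
        # Respond to temperature excursions: heat when near the cold limit,
        # leave the heaters off otherwise so radiators can shed excess heat.
        set_heater(on=(temp < SAFE_BODY_TEMP_C[0] + 5.0))
        log_power()            # record power generation/storage data each cycle
        if comm_due():         # scheduled communication session?
            run_comm_session()
        time.sleep(60)         # check again roughly once a minute
```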

Using its "computer brains" for communications
Activities such as taking pictures, driving, and operating the instruments are performed under
commands transmitted in a command sequence to the rover from the flight team.
The rover generates constant engineering, housekeeping and analysis telemetry and periodic event reports
that are stored for eventual transmission once the flight team requests the information from the rover.

The rover's temperature controls
Like the human body, the Mars Exploration Rover cannot function well under excessively hot or
cold temperatures. In order to survive during all of the various mission phases, the rover's "vital organs"
must not exceed extreme temperatures of -40 Celsius to +40 Celsius (-40 Fahrenheit to 104 Fahrenheit).
The rover's essentials, such as the batteries, electronics, and computer, which are basically the rover's heart
and brains, stay safe inside a Warm Electronics Box (WEB), commonly called the "rover body." Heaters
are packed inside the rover body, and like a warm coat, the WEB walls help keep heat in when the night
temperatures on Mars can drop to -96 Celsius (-140 Fahrenheit). Just as an athlete sweats to release heat
after an intense workout, the rover's body can also release excess heat through its radiators, similar to ones
used in car engines.
There are several methods engineers used to keep the rover at the right temperature:
preventing heat escape through gold paint
preventing heat escape through insulation called aerogel
keeping the rover warm through heaters

Temperature control systems: a necessity for rovers

Many of these methods are very important to making sure the rover doesn't "freeze to death" in the
cold of deep space or on Mars. Many people often assume that Mars is hot, but it is farther away from the
sun and has a much thinner atmosphere than Earth, so any heat it does get during the day dissipates at
night. In fact, the ground temperatures at the rover landing sites will swing up during the day and down
again during the night, varying by up to 113 degrees Celsius (or 235 degrees Fahrenheit) per Mars day.
That's quite a temperature swing, when you consider that Earth temperatures typically vary by tens of
degrees on average between night and day.
At the landing sites, an expected daytime high on the ground might be around 22 Celsius (71 Fahrenheit).
An expected nighttime low might be -99 Celsius (-146 Fahrenheit). Atmospheric temperatures, by
contrast, can vary by up to 83 Celsius (181 Fahrenheit). An atmospheric daytime high might be -3 Celsius
(26 Fahrenheit), while a nighttime low might be -96 Celsius (-140 Fahrenheit).


"Neck and head"
What looks like the rover "neck and head" is called the Pancam Mast Assembly. It will stand from
the base of the rover wheel 1.4 meters tall (about 5 feet). This height will give the cameras a special
"human geologists" perspective and wide field of view.
The pancam mast assembly serves two purposes:
to act as a periscope for the Mini-TES science instrument that is housed inside the rover body for
thermal reasons
to provide height and a better point of view for the Pancams and the Navcams

Essentially, the pancam mast assembly enables the rover to see in the distance. The higher one
stands, the more one can see. You can test this out for yourself by lying on the ground and observing as
much as possible, then standing up and seeing the difference in the amount of greater detail you can
observe about the world with a broader field of view.
One motor for the entire Pancam Mast Assembly head turns the cameras and Mini-TES 360 degrees in the
horizontal plane. A separate elevation motor can point the cameras 90 degrees above the horizon and 90 degrees below the
horizon. A third motor for the Mini-TES elevation enables the Mini-TES to point up to 30 degrees over the horizon
and 50 degrees below the horizon.

During cruise, the Pancam Mast Assembly lies flat against the top of the rover equipment deck in a
stowed configuration. After the lander opens on the surface of Mars, pyros release the bolts holding it
down. Pyros are solid mechanical devices that contain as much power as a bullet release mechanism in a
gun. Pyros were designed in the 1960s for the Apollo mission as a safe way to release bolts and strong
devices on spacecraft while humans were in the vicinity.
The Pancam Mast Assembly rises from the rover equipment deck by driving a motor that moves the
Pancam upward, in the shape of a helix. It sweeps out in a cone-like manner as it deploys. Once the Pancam
Mast Assembly is in its full-upright position, it will not stow again, but will stay upright for the entire
duration of the mission.

The rovers "arm"
The rover arm (also called the instrument deployment device or IDD) holds and maneuvers the
instruments that help scientists get up-close and personal with Martian rocks and soil.
Much like a human arm, the robotic arm has flexibility through three joints: the rover's shoulder, elbow,
and wrist. The arm enables a tool belt of scientists instruments to extend, bend, and angle precisely against
a rock to work as a human geologist would: grinding away layers, taking microscopic images, and
analyzing the elemental composition of the rocks and soil.
At the end of the arm is a turret, shaped like a cross. This turret, a hand-like structure, holds various tools
that can spin through a 350-degree turning range.
The four tools, or science instruments, on the robotic arm are:
MI (Microscopic Imager): provides close-up images of rocks and soil
APXS (Alpha Particle X-ray Spectrometer): analyzes the elemental composition of rocks and soil
RAT (Rock Abrasion Tool): grinds away the outer surface of rock to expose fresh material
Mossbauer spectrometer: analyzes the iron in rocks and soil

The forearm also holds a small brush so that the Rock Abrasion Tool can spin against it to "brush
its teeth" and rid the grinding tool of any leftover pieces of rock before its next bite.
Thirty percent of the mass of the titanium robotic arm comes from the four instruments it holds at the end
of the arm. This weight makes maneuvering the lightweight arm a bit of a challenge -- like controlling a
bowling ball at the end of a fishing rod. The arm must be as lightweight as possible for the overall health of
the mission, and holes are even cut out in places where there is no need for solid titanium.
Protecting the arm
Once the arm and instruments have succeeded in one location but before the rover begins another
traverse, the arm stows itself underneath the "front porch" of the rover body. The elbow hooks itself back
onto a pin, and the turret has a T-bar that slides back into a slotted ramp. The fit is almost as tight as a
necklace clasp, and it can withstand shocks of 6 Gs while roving along the rocky terrain. "Six Gs" is
roughly equivalent to dropping a box onto a hard floor from a height of 20 centimeters (almost 8 inches).
During launch and landing, the arm is restrained by a retractable pin restraint, and can withstand even
higher loads of 42 Gs.

Eyes and Senses
Six engineering cameras aid in rover navigation and three cameras perform science investigations.
Each camera has an application-specific set of optics.
Four engineering Hazcams (Hazard Avoidance cameras)
Mounted on the lower portion of the front and rear of the rover, these black-and-white cameras
use visible light to capture three-dimensional (3-D) imagery. This imagery safeguards against the rover
getting lost or inadvertently crashing into unexpected obstacles, and works in tandem with software
that allows the rover to make its own safety choices and to "think on its own."
The cameras each have a wide field of view of about 120 degrees. The rover uses pairs of
Hazcam images to map out the shape of the terrain as far as 3 meters (10 feet) in front of it, in a
"wedge" shape that is over 4 meters wide at the farthest distance. It needs to see far to either side
because, unlike human eyes, the Hazcam cameras cannot move independently; they're mounted directly
to the rover body.
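The way a stereo camera pair turns two flat images into a terrain map rests on triangulation: a nearby rock shifts more between the left and right images than a distant one. A minimal sketch of that depth-from-disparity relation is given below; the focal length and baseline values are placeholders, not the actual Hazcam parameters.

```python
# Minimal stereo-ranging sketch (placeholder numbers, not Hazcam specs):
# depth = focal_length * baseline / disparity.
def depth_from_disparity(disparity_px, focal_px=600.0, baseline_m=0.10):
    """Distance to a point seen by both cameras, given how many pixels
    it appears to shift between the left and right images."""
    if disparity_px <= 0:
        raise ValueError("point must be visible in both images")
    return focal_px * baseline_m / disparity_px

# With these made-up numbers, a feature shifting 20 px is about 3 m away,
# near the far edge of the "wedge" the rover maps in front of itself.
print(round(depth_from_disparity(20.0), 2))  # 3.0
```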
Two engineering Navcams
Mounted on the mast (the rover's "neck and head"), these black-and-white cameras use visible light to
gather panoramic, three-dimensional (3-D) imagery. The Navcam is a stereo pair of cameras, each with
a 45-degree field of view to support ground navigation planning by scientists and engineers. They
work in cooperation with the Hazcams by providing a complementary view of the terrain.
Two science Pancams
This color, stereo pair of cameras is mounted on the rover mast and delivers three-dimensional
panoramas of the Martian surface. As well as providing science panoramas, the narrow field of view and height
of the cameras basically mimic the resolution of the human eye (0.3 milliradians), giving the world a
view similar to what a human geologist might see if she or he were standing on the surface of Mars.
Also, the Pancam detectors have 8 filters per "eye", and between the two "eyes" there are 11 total
unique color filters plus two-color, solar-imaging filters to take multispectral images. The Pancam is
also part of the rover's navigation system. With the solar filter in place, the Pancam can be pointed at
the Sun and used as an absolute heading sensor. Like a sophisticated compass, the
direction of the Sun combined with the time of day tells the flight team exactly which way the rover is
facing.
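The "sun compass" idea above reduces to one subtraction: if an ephemeris gives the Sun's azimuth for the site and local time, and the solar-filtered image gives the Sun's bearing relative to the rover's forward axis, their difference is the rover's heading. The toy sketch below assumes both inputs are already available as numbers and is not the mission's actual attitude code.

```python
# Hypothetical sketch of using the Sun as an absolute heading reference.
# The solar azimuth is assumed to come from an ephemeris lookup (not shown).
def heading_from_sun(solar_azimuth_deg, sun_bearing_in_rover_frame_deg):
    """Rover heading = where the Sun is in the world frame, minus where
    the rover sees the Sun relative to its own forward axis."""
    return (solar_azimuth_deg - sun_bearing_in_rover_frame_deg) % 360.0

# Example: the ephemeris says the Sun is at azimuth 135 deg (southeast);
# the Pancam finds the Sun 40 deg to the right of the rover's forward
# direction, so the rover is facing roughly east.
print(heading_from_sun(135.0, 40.0))  # 95.0
```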
One science MI (Microscopic Imager)
This monochromatic science camera is mounted on the robotic arm to take extreme close-up pictures of
rocks and soil. Some of its studies of the rocks and soil will help engineers understand the properties of the
smaller rocks and soil particles that can impact rover mobility.

The rover's wheels and "legs"
The Mars Exploration Rover has six wheels, each with its own individual motor.
The two front and two rear wheels also have individual steering motors (1 each). This steering capability
allows the vehicle to turn in place, a full 360 degrees. The 4-wheel steering also allows the rover to swerve
and curve, making arching turns.

Movement of wheels:
The design of the suspension system for the wheels is similar to the Sojourner rover "rocker-bogie"
system on the Pathfinder mission. The suspension system is how the wheels are connected to and interact
with the rover body.
The term "bogie" comes from old railroad systems. A bogie is a train undercarriage with six wheels that
can swivel to curve along a track.
The term "rocker" comes from the design of the differential, which keeps the rover body balanced, enabling
it to "rock" up or down depending on the various positions of the multiple wheels. Of most importance
when creating a suspension system is how to prevent the rover from suddenly and dramatically changing
positions while cruising over rocky terrain. If one side of the rover were to travel over a rock, the rover
body would go out of balance without a "differential" or "rocker", which helps balance the angle the rover
is in at any given time. When one side of the rover goes up, the differential or rocker in the rover
suspension system automatically makes the other side go down to even out the weight load on the six
wheels. This system causes the rover body to go through only half of the range of motion that the "legs"
and wheels could potentially experience without a "rocker-bogie" suspension system.
The rover is designed to withstand a tilt of 45 degrees in any direction without overturning. However, the
rover is programmed through its "fault protection limits" in its hazard avoidance software to avoid
exceeding tilts of 30 degrees during its traverses.
The rover rocker-bogie design allows the rover to go over obstacles (such as rocks) or through holes that
are more than a wheel diameter (25 centimeters or 10 inches) in size. Each wheel also has cleats, providing
grip for climbing in soft sand and scrambling over rocks.
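A toy model helps make the differential's effect concrete: treating the body pitch as roughly the average of the left and right rocker angles reproduces the "half the range of motion" behaviour described above, and the 30-degree software tilt limit from the same passage can be checked against it. Everything below is illustrative only, not the actual suspension or fault-protection code.

```python
# Toy model of the rocker differential and the tilt fault-protection limit
# mentioned above (illustrative only).
TILT_LIMIT_DEG = 30.0   # software limit during traverses (from the text)

def body_pitch(left_rocker_deg, right_rocker_deg):
    """The differential averages the two rocker angles, so the body sees
    roughly half the excursion experienced by a single side."""
    return (left_rocker_deg + right_rocker_deg) / 2.0

def traverse_allowed(left_rocker_deg, right_rocker_deg):
    return abs(body_pitch(left_rocker_deg, right_rocker_deg)) <= TILT_LIMIT_DEG

# One side rides over a rock and rotates up 20 degrees while the other stays
# level: the body tilts only about 10 degrees, well within the limit.
print(body_pitch(20.0, 0.0), traverse_allowed(20.0, 0.0))  # 10.0 True
```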

The rover's energy
The rover requires power to operate. Without power, it cannot move, use its science
instruments, or communicate with Earth. The main source of power for each rover comes from a multi-
panel solar array. The panels look almost like "wings," but their purpose is to provide energy, not flight.
When fully illuminated, the rover solar arrays generate about 140 watts of power for up to four hours per
sol (a Martian day). The rover needs about 100 watts (equivalent to a standard light bulb in a home) to
drive. Comparatively, the Sojourner rover's solar arrays provided the 1997 Pathfinder mission with around
16 watts of power at noon on Mars. That's equivalent to the power used by an oven light. This extra power
will potentially enable the rovers to conduct more science.
The power system for the Mars Exploration Rover includes two rechargeable batteries that provide energy
for the rover when the sun is not shining, especially at night. Over time, the batteries will degrade and will
not be able to recharge to full power capacity. Also, by the end of the 90-sol mission, the capability of the
solar arrays to generate power will likely be reduced to about 50 watts of power due to anticipated dust
coverage on the solar arrays (as seen on Sojourner/Mars Pathfinder), as well as the change in season. Mars
will drift farther from the sun as it continues on its yearly elliptical orbit, and because of the distance, the
sun will not shine as brightly onto the solar arrays. Additionally, Mars is tilted on its axis just like Earth is,
giving Mars seasonal changes. Later in the mission, the seasonal changes at the landing site and the lower
position of the Sun in the sky at noon than in the beginning of the mission will mean less energy on the
solar panels.
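The power figures above can be turned into a rough daily energy budget by simple multiplication. The sketch below uses only the numbers quoted in the text (140 W for up to four hours per sol, about 100 W to drive, roughly 50 W late in the 90-sol mission); the assumption that the late-mission array still gets about four useful hours is mine, purely for illustration.

```python
# Back-of-the-envelope energy budget using the figures quoted above.
array_power_w = 140.0     # fully illuminated, early in the mission
good_sun_hours = 4.0      # hours per sol at that level (from the text)
drive_power_w = 100.0

daily_energy_wh = array_power_w * good_sun_hours   # 560 Wh per sol
max_drive_hours = daily_energy_wh / drive_power_w  # ~5.6 h if every Wh went
                                                   # to driving (it will not)
# Late-mission estimate: ~50 W arrays, assuming (for illustration only)
# the same four good hours of sunlight.
late_mission_energy_wh = 50.0 * good_sun_hours     # ~200 Wh near sol 90

print(daily_energy_wh, round(max_drive_hours, 1), late_mission_energy_wh)
```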
Communications of Rovers:

The rover's antennas
The rover has both a low-gain and a high-gain antenna that serve as both its "voice" and its "ears".
They are located on the rover equipment deck (its "back"). The low-gain antenna sends and receives
information in every direction; that is, it is "omni-directional." The antenna transmits radio waves at a low
rate to the Deep Space Network (DSN) antennas on Earth. The high-gain antenna can send a "beam" of
information in a specific direction and it is steerable, so the antenna can move to point itself directly at any
antenna on Earth. The benefit of having a steerable antenna is that the entire rover doesn't necessarily have
to change position to talk to Earth. Like turning your neck to talk to someone beside you rather than
turning your entire body, the rover can save energy by moving only the antenna.
Not only can the rovers send messages directly to Earth, but they can uplink information to
other spacecraft orbiting Mars, utilizing the 2001 Mars Odyssey and Mars Global Surveyor orbiters as
messengers who can pass along news to Earth for the rovers. The orbiters can also send messages to the
rovers.
The benefits of using the orbiting spacecraft are that the orbiters are closer to the rovers than
the Deep Space Network antennas on Earth and the orbiters have Earth in their field of view for much
longer time periods than the rovers on the ground.
The radio waves to and from the rover are sent through the orbiters using UHF antennas, which
are close-range antennas, which are like walky-talkies compared to the long range of the low-gain and
high-gain antennas. One UHF antenna is on the rover and one is on the petal of the lander to aid in gaining
information during the critical landing event. The Mars Global Surveyor will be in the appropriate location
above Mars to track the landing process
Thus in this way the exploration of Mars by rovers is done and the information is received by the
research stations on the earth.


OTHER APPLICATIONS:

Home and industry: the Labrador

Your life could be improved by owning a robotic dog to help out with mundane daily chores. The RS-01
model of a Labrador-sized Robodog is the world's largest and most sophisticated four-legged robotic
pet. It can understand and respond to verbal commands, see colours, and read out e-mails using a
permanent wireless Internet connection. It can walk, perform tricks, sit up, and beg. It responds when
stroked and is strong enough to carry a five-year-old child on its back.


Surgical robots

Surgical robots enable surgeons to perform surgical movements without using their hands, while seated at a console
far away from the operating table. The robot's hands are placed directly inside the patient's body and replicate
any movement the surgeon's hands make.

They can cut, suture, grasp, cauterise, clamp, and retract. The only thing the robot doesn't do is move by itself.
A high-resolution endoscope attached to one of the robotic arms gives the surgeon a magnified 3-D view of
the inside of the patient. With da Vinci, surgeons are now able to make very small cuts, insert the robot's
hands into the patient's body, and perform operations in very tight spaces. Small incisions result in smaller
scars, reduced blood transfusions, quicker recovery time, and a smaller risk of complications. Surgery is
more precise and less painful.

A major advantage of robotic tools is that these do not tremble like human hands. Doctors can
program precise configurations to use at different points in the surgery. They can conduct operations on
patients in far-flung space stations and battlefields.

Beyond human capabilities

Robots are needed to clean up hazardous waste sites and handle wastes to help in restoration of the
environment. At times, equipment used in nuclear operations becomes so radioactive that humans cannot
perform maintenance or repairs on it.

ORNL has developed methods to perform these tasks remotely using servo manipulators mounted
on mobile platforms. Mimicking the motion of human arms and hands, these servo manipulators can carry
out the necessary repairs and maintenance of failed equipment, in a variety of hazardous environments.
Researchers at ORNL are developing robotics for dangerous situations involving decontamination and
dismantling of radioactive devices, and measuring radioactivity levels at waste sites. They have engineered
an eight-wheeled, remotely operated robotic surveillance vehicle with electronic systems, sensors,
computers, and automated data acquisition system to survey radioactive waste sites for contamination
levels.

The Future Armour Rearm System (FARS), the first of several military robotic projects, eliminates the
need for manual rearming of tanks. This technology improves safety and efficiency by automatically
reloading battlefield vehicles with ammunition.

The future

Human-sized robots that can be programmed for any simple chore will be introduced to the market
by 2010. These intelligent robots will free humans from much of the drudgery-based work we do today.
However, these advanced concepts will rely on human-like artificial intelligence that has not yet been
developed. Though the technology for producing mechanical body parts for sophisticated robots already exists,
the lack of a sufficient brain for robots has prevented science-fiction writers' predictions from
materializing.

Computers are becoming more capable of understanding speech and text, and can be taught to
recognize objects and textures. Mass-produced consumer robots called droids will be capable of vacuuming
rooms by searching out dirty areas. These droids will operate wirelessly from a docking station where they
can recharge and dispose of dirt. The vacuuming robot uses over 50 sensors and a computer to perform
basic vacuuming drudgery.

Third-generation robots would have their central brain working on the principles of AI. They would
play a big role in everyday aspects of life, from helping out in the home to cleaning the streets, or even
assisting a fighter pilot to accomplish increasingly complex missions. By 2040 freely moving robots with
built-in intellectual capacity, similar to humans, will be developed. They will be able to perform medical
diagnoses, plan routes, make financial decisions, configure computer systems, and analyze seismic data to
locate deposits. All these views may not be far-fetched.

In summary, some foreseen future applications are listed here by industry and
special category:

1. Aerospace industry
2. Agricultural industry
3. Transportation industry
4. Textile industry
5. Nuclear industry
6. Firefighting
7. Utility industry
8. Manufacturing industry
9. Lab automation
10. Household robots, and so on.



CONCLUSION


In reality, robot technology has an exciting future. We can expect to see the robot move out of the
factory and foundry and enter the domestic world. The domestic robot will appear as an electronic pet and
will soon develop the ability to perform useful tasks in the home. The sensory abilities of all robots will greatly
improve. In the long run, robots will acquire the capabilities they have been described as having in the movies
and science-fiction books. Self-reproducing factories may be placed on the Moon or on other planets to help
meet our growing need for energy and goods. Finally, the current and future applications of robots are
moving in the direction of providing us with more and more capabilities like those of humans, which will
improve our quality of life and the global economy.
























































PRESENTED BY

A.KARTHIK B.BALAJI
III B.TECH (E.C.E) III B.TECH (E.C.E)
ROLL: 104430 ROLL: 104426
SVUCE SVUCE
TIRUPATI TIRUPATI
Email: karthik_958@yahoo.co.in Email: balaji_426@yahoo.co.in

COLLEGE:

SRI VENKATESWARA UNIVERSITY
COLLEGE OF ENGINEERING,
TIRUPATI - 517502, A.P


I N D E X

1. ABSTRACT

2. INTRODUCTION

3. SMART ANTENNA SYSTEM

4. GOALS FOR A SMART ANTENNA SYSTEM

5. APPLICATIONS OF SMART ANTENNA

6. SMART ANTENNA SYSTEM FOR IS-136

7. SUMMARY

8. BIBLIOGRAPHY
ABSTRACT

Smart antennas and adaptive antenna arrays have recently received
increasing interest to improve the performance of wireless systems. A smart antenna
system combines multiple antenna elements with a signal processing capability to
optimize its radiation and reception pattern automatically in response to the signal
environment. This system includes a large number of techniques that attempt to enhance
a desired signal and suppress interfering signals.
In truth, antennas are not smart; antenna systems are smart. Generally
collocated with a base station, a smart antenna system combines an antenna array with a
digital signal-processing capability to transmit and receive in an adaptive, spatially
sensitive manner. In other words, such a system can automatically change the
directionality of its radiation patterns in response to its signal environment. This can
dramatically increase the performance characteristics (such as capacity) of a wireless
system. The dual purpose of a smart antenna system is to augment the signal quality of
the radio-based system through more focused transmission of radio signals while
enhancing capacity through increased frequency reuse.
A smart antenna for IS-136 systems with significantly improved
uplink and downlink performance over conventional systems may be best attained by a
hybrid approach that combines switched beam antennas with power control for the
downlink, and conventional antennas and adaptive array processing for the uplink.
INTRODUCTION:

Smart antennas and adaptive antenna arrays have recently received
increasing interest to improve the performance of cellular radio systems. Smart antennas
include a large number of techniques that attempt to enhance a desired signal and
suppress interfering signals. While adaptive antenna arrays and steerable beam antennas
have been used for military applications for decades, only in the last few years
have these techniques begun to be applied to commercial systems. A smart antenna
system combines multiple antenna elements with a signal processing capability to
optimize its radiation and reception pattern automatically in response to the signal
environment. This system includes a large number of techniques that attempt to enhance
a desired signal and suppress interfering signals.


These have recently received increasing interest to improve the
performance of cellular radio systems and wireless systems. The dual
purpose of a smart antenna system is to augment the signal quality of the radio-
based system through more focused transmission of radio signals while
enhancing capacity through increased frequency reuse.

SMART ANTENNA SYSTEM

In truth, antennas are not smart; antenna systems are smart. Generally
collocated with a base station, a smart antenna system combines an antenna array with a
digital signal-processing capability to transmit and receive in an adaptive,
spatially sensitive manner. In other words, such a system can automatically change the
directionality of its radiation patterns in response to its signal environment. This can
dramatically increase the performance characteristics (such as capacity) of a
wireless system.

Types of Smart Antenna Systems

Terms commonly heard today that embrace various aspects of a smart
antenna system technology include intelligent antennas, phased array, SDMA, spatial
processing, digital beam forming, adaptive antenna systems, and others. Smart antenna
systems are customarily categorized, however, as either switched beam or
adaptive array systems.
The following are distinctions between the two major categories of
smart antennas regarding the choices in transmit strategy:

Switched beam:
A finite number of fixed, predefined patterns or combining strategies (sectors).
Adaptive array:
An infinite number of patterns (scenario-based) that are adjusted in real
time.

Switched Beam Antennas

Switched beam antenna systems form multiple fixed beams with
heightened sensitivity in particular directions. These antenna systems detect signal
strength, choose from one of several predetermined, fixed beams, and switch from one
beam to another as the mobile moves throughout the sector. Instead of shaping the
directional antenna pattern with the metallic properties and physical design of a single
element (like a sectorized antenna), switched beam systems combine the outputs of
multiple antennas in such a way as to form finely sectorized (directional)
beams with more spatial selectivity than can be achieved with conventional, single-
element approaches.

Figure 1. Switched Beam System
Coverage Patterns (Sectors)

Adaptive Array Antennas
Adaptive antenna technology represents the most advanced smart
antenna approach to date. Using a variety of new signal-processing algorithms, the
adaptive system takes advantage of its ability to effectively locate and track various types
of signals to dynamically minimize interference and maximize intended signal reception.
Both systems attempt to increase gain according to the location of the user; however,
only the adaptive system provides optimal gain while simultaneously identifying,
tracking, and minimizing interfering signals.

Figure 2. Adaptive Array Coverage



Research on adaptive antenna arrays for cellular systems dates from the
early to mid-1980s, but R&D on smart and adaptive antennas for cellular systems has
intensified only in the last few years. In 1995, Nortel introduced smart antenna
technology for PCS-1900 systems. Other companies such as Metawave have introduced
similar technology, and the European Advanced Communications Technologies and
Services (ACTS) TSUNAMI project is considering adaptive antennas for third-
generation wireless systems. A number of R&D efforts have also considered adaptive
antennas for the TDMA standard (IS-54/IS-136).

GOALS FOR A SMART ANTENNA SYSTEM

The dual purpose of a smart antenna system is to augment the signal
quality of the radio-based system through more focused transmission of radio signals
while enhancing capacity through increased frequency reuse. More specifically, the
features of and benefits derived from a smart antenna system include those listed in
the following table:





APPLICATIONS OF SMART ANTENNA

1. 2G System Applications

2. 3G System Applications:
EDGE
MIMO-EDGE
OFDM-MIMO-EDGE

SMART ANTENNA SYSTEM FOR IS-136





A smart antenna for IS-136 systems with significantly improved
uplink and downlink performance over conventional systems may be best
attained by a hybrid approach that combines switched beam antennas
with power control for the downlink, and conventional antennas and
adaptive array processing for the uplink.
IS-136 is based on FDD (Frequency Division Duplex) operation, so the
uplink and downlink signals generally fade independently.

One solution is to place multiple receivers at the handset, but that may not be
economical.

Another proposal is to use angle-of-arrival algorithms on the uplink and form
downlink beams at the estimated angle. These techniques have also been
extended to include null-steering of the BS transmit array based on the received
signals to reduce interference. A disadvantage of this approach is the requirement
for transmit paths that are well phase-calibrated to the different elements of the
adaptive array, which is a concern for conventional systems with antenna elements
on a tower connected to RF circuitry at the base of the tower by individual coaxial
lines. Also, transmit null beam forming for FDD systems is likely to be sensitive
to the angular spread of the signal.

Another proposal for adding smart antenna techniques to downlinks uses switched
beam approaches, where a beam for the downlink is assigned based on the one
selected for the uplink.

DIVERSITY TECHNIQUES

Spatial diversity: uses multiple antennas, each physically separated (arrays).
Too large for compact handsets.

Polarization diversity: uses a dual antenna system in which each antenna pair uses
orthogonal polarizations. Polarization pairs: horizontal/vertical, 45-degree slant.


FOUR BRANCH ADAPTIVE ANTENNA UPLINK

Uplink Processing

A smart antenna is employed only at the base station and not at
the handset or subscriber unit. The received signal from the spatially distributed antenna
elements is multiplied by a weight, a complex adjustment of amplitude and phase.
These signals are combined to yield the array output. An adaptive algorithm controls the
weights according to predefined objectives. These dynamic calculations enable the system
to change its radiation pattern for optimized signal reception. "Listening" to the cell is
uplink processing.
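As a rough illustration of the weight-and-combine step just described, the sketch below forms the array output as a weighted sum of the element signals and adapts the complex weights with a least-mean-squares (LMS) rule driven by a known reference. LMS is only one of the many possible algorithms mentioned a little further on; the element count, step size and signals here are invented for the example.

```python
import numpy as np

# Hedged sketch of uplink combining: y[k] = w^H x[k], with the complex
# weights adapted by LMS toward a known reference d[k]. Illustrative only.
def lms_combine(X, d, mu=0.05):
    """X: (num_elements, num_samples) complex element signals.
    d: (num_samples,) reference (e.g. training symbols).
    Returns the combined array output and the final weight vector."""
    num_elems, num_samples = X.shape
    w = np.zeros(num_elems, dtype=complex)
    y = np.zeros(num_samples, dtype=complex)
    for k in range(num_samples):
        x = X[:, k]
        y[k] = np.vdot(w, x)          # w^H x : weighted sum of the elements
        e = d[k] - y[k]               # error against the reference
        w = w + mu * np.conj(e) * x   # LMS weight update
    return y, w

# Toy usage: 4 elements receiving a BPSK-like signal with arbitrary
# per-element phase offsets plus a little noise (all values invented).
rng = np.random.default_rng(0)
d = np.sign(rng.standard_normal(500)) + 0j
steer = np.exp(1j * np.pi * np.arange(4) * 0.3)
X = np.outer(steer, d) + 0.1 * (rng.standard_normal((4, 500))
                                + 1j * rng.standard_normal((4, 500)))
y, w = lms_combine(X, d)
```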
In the adaptive antenna array case, four conventional 120-degree
sector antennas may be spaced uniformly with a total aperture of 10 to 20 wavelengths (5 to
10 ft at 1.9 GHz).
A goal of this antenna configuration is to achieve minimum correlation
in fading between the four elements to maximize diversity gain. The probability of
simultaneous fades is then low. The signals from the four antenna elements are
individually processed and combined using digital signal processing. The four signals are
combined using weights that maximize the desired signal and suppress any interfering
signals. There are many possible algorithms for generating the combining weights, and
also many different antenna element configurations that could be considered. Other
possible antenna configurations include using two dual-polarized antennas with 120-degree
beamwidths.
A four-branch adaptive array offers the potential to provide up to 6
dB range extension over a conventional two-branch antenna system. If the signal
attenuation is assumed to increase on the average at a rate proportional to range to the
fourth power in a cellular or PCS environment, a 6 dB range extension will increase the
range by the square root of two and decrease the required number of BSs (base stations)
by a factor of two to satisfy coverage constraints. In a realistic environment with
significant correlation in fading between the signals received on a four-element antenna
array, the gain in range extension may fall to about 4 dB, which means that an area can be
covered with about two-thirds the number of BSs using a four-element adaptive antenna
array compared to a conventional two-branch spatial antenna diversity system.
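The range-extension arithmetic in the paragraph above follows directly from the assumed fourth-power path loss; a quick numerical check is sketched below.

```python
# Quick check of the range-extension arithmetic above, assuming signal
# attenuation grows with range to the fourth power.
PATH_LOSS_EXPONENT = 4.0

def range_factor(gain_db):
    # gain_db of extra link margin stretches range by 10**(gain_db / (10*n)).
    return 10 ** (gain_db / (10 * PATH_LOSS_EXPONENT))

r6 = range_factor(6.0)   # ~1.41, i.e. about the square root of two
bs6 = 1 / r6 ** 2        # base stations needed scale as 1/range^2: ~0.5
r4 = range_factor(4.0)   # ~1.26 with realistic fading correlation
bs4 = 1 / r4 ** 2        # ~0.63, i.e. roughly two-thirds the sites

print(round(r6, 2), round(bs6, 2), round(r4, 2), round(bs4, 2))
```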

FIXED SWITCHED BEAM DOWNLINK

Downlink Processing

Switched beam systems communicate with users by changing between
preset directional patterns, largely on the basis of signal strength. It is a base-station-to-user
process: "speaking" to the user is downlink processing.
Switched beam antenna technology appears to be the most feasible for
IS-136 downlinks. For IS-136 TDMA, downlink smart antenna approaches are further
complicated by the requirement that downlink signals be locally continuous in time,
which precludes switching or adapting beams in a step function between time slots.
One approach would be to modify the standard to support a discontinuous downlink so
that beam-switching could occur between time slots. However,many terminals depend on
using both the synchronization word for the desired burst and the synchronization
word from the following burst.
A good approach to this problem is to transmit on all downlink
beams using independent power control for each beam that satisfies the power
requirements for any and all terminals in each beam. Changes to power control levels
would need to be applied with smooth trajectories that do not violate the requirement for
local signal continuity.
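One simple way to picture the "smooth trajectories" requirement is a per-beam power level that ramps linearly over a short window instead of stepping at a slot boundary. The sketch below is purely illustrative; the ramp length and power levels are arbitrary assumptions, not values from the IS-136 standard.

```python
# Illustrative only: ramp a beam's transmit power over a short window
# rather than stepping it, so the downlink signal stays locally continuous.
def power_trajectory(current_w, target_w, ramp_samples=64):
    """Return per-sample power levels moving linearly from the current
    level to the target (the ramp length is an arbitrary choice)."""
    step = (target_w - current_w) / ramp_samples
    return [current_w + step * (i + 1) for i in range(ramp_samples)]

# Raising one beam from 2 W to 5 W over 64 samples instead of instantly.
levels = power_trajectory(2.0, 5.0)
print(levels[0], levels[-1])  # 2.046875 5.0
```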
An antenna configuration that satisfies this requirement for a
conventional 120-degree sector would consist of a single switched beam antenna with
four 30-degree beams for transmission and two dual-polarized 120-degree antennas spatially separated
from the switched beam antenna for reception.

The switched beam antenna approach has the potential advantage
of a small aperture compared to the two-branch spatial diversity arrangement generally
used for cellular or PCS BS receivers, which uses two antennas spaced by 5 to 10 ft.
However, a single switched beam antenna with four beams per
sector will only support about the same link budget as a comparable full-sector
spatial diversity arrangement with two antennas.
The two-branch spatial diversity arrangement typically provides a 4 to 6
dB improvement in signal quality in a fading environment, and the switched beam
antenna provides up to 6 dB peak gain over a conventional antenna within a beam,
plus the possibility of angle diversity by combining signals from different beams.

SUMMARY

In this paper we have attempted to provide motivation for the
use of smart antenna system techniques to support IS-136 second-generation
systems. Smart antenna technology can significantly improve wireless system
performance and economics for a range of potential users. It enables operators of PCS,
cellular, and wireless local loop (WLL) networks to realize significant increases in signal
quality, capacity, and coverage. Smart antenna systems offer the most flexibility in
terms of configuration and upgradeability and are often the most cost-effective long-term
solutions.

BIBLIOGRAPHY

1. www.iec.org
2. www.ictp.trieste.it.com
3. www.webforum.com
4. http://www.TelecomWriting.com/PCS/Multiplexing.html








SMART OPTICAL SENSOR







SUBMITTED BY:

MALLYNATH TURLAPATI. YAGNA NARAYANA P.

mnturlapati@yahoo.com yagnam88@gmail.com


THIRD YEAR
ELECTRONICS & COMMUNICATION ENGINEERING,
SIR C.R.REDDY COLLEGE OF ENGINEERING,
ELURU, ANDHRA PRADESH.


SMART OPTICAL SENSOR


Abstract:
Fiber optic sensors, which can readily be integrated with
smart materials and structures, offer
immense potential in the realization of
intelligent systems. This paper presents
an overview of fiber optic sensors and
their application to smart structures. The
advantages of fiber optic sensors are
freedom from EMI, wide bandwidth,
compactness, geometric versatility and
economy. In general, a FOS is
characterized by high sensitivity when
compared to other types of sensors.
Fiber optic sensor technologies are
finding growing application in the area
of monitoring of civil structures. In large
part due to exceptional advances in fiber
telecommunications technologies, the
costs of fiber sensors have been dropping
steadily, and this trend will continue.
Further, measurement capabilities and
system configurations (such as
wavelength-multiplexed, quasi-distributed
sensor arrays) that are not
feasible with conventional technologies
are now possible with fiber sensors,


enabling previously unobtainable
information on structures to be acquired.

1. INTRODUCTION
Smart sensors:-
A sensor producing an electrical output
when combined with interface electronic
circuits is said to be an intelligent sensor if
the interfacing circuits can perform (a)
ranging, (b) calibration and (c) decision
making for communication and
utilization of data.
Intelligent sensors are also called
SMART sensors, which is the more
accepted term now. The initial
motivation behind the development of
smart sensors includes compensation for
the non-ideal behavior of the sensor.
There has been a growing interest in the
area of smart structures which are
designed to react to their environment by
use of integrated sensors and actuators in
their body. Such structures not only can
monitor the health of their body but also
forewarn about the onset of
abnormalities in their states and hence
the impending failures. There are many
advantages in such a system: less down
time, less frequent maintenance, better
utilization of material usage and
improved safety, reliability and
economy. Several forecasts have
predicted that there will be a tremendous
growth in the sensors market. In contrast
to the classical sensors based largely
upon the measurement of electrical
parameters such as variable resistance or
capacitance, modern sensors make use of
a variety of novel phenomena. More
importantly, these sensors are directly
suitable for digital control and also have
a degree of smartness incorporated in
them to combat problems of nonlinearity
and long term drift. Several key
technologies
are likely to play a major role in the sensors of the future. Micro-electromechanical (MEM) and microelectronic sensors based on piezoelectric materials, organic polymers and silicon have tremendous potential as
smart sensors. Fiber optics based sensors
are also emerging as a viable and
competitive technology. While many
types of stand alone sensors are
available, only some of them can be
considered for integration with smart
structures. Among these, fiber optic
sensors are in the forefront in their
choice for incorporation into materials
and structures made of carbon and glass
fiber reinforced polymer composites.
2. FIBER OPTIC SENSORS
The technology and applications of
optical fibers have progressed very
rapidly in recent years. Optical fiber,
being a physical medium, is subjected to
perturbation of one kind or the other at
all times. It therefore experiences
geometrical (size, shape) and optical
(refractive index, mode conversion)
changes to a larger or lesser extent
depending upon the nature and the
magnitude of the perturbation. In
communication applications one tries to
minimize such effects so that signal
transmission and reception is reliable.
On the other hand in fiber optic sensing,
the response to external influence is
deliberately enhanced so that the
resulting change in optical radiation can
be used as a measure of the external
perturbation. In communication, the
signal passing through a fiber is already
modulated, while in sensing, the fiber
acts as a modulator. It also serves as a
transducer and converts a measurand like temperature, stress, strain, rotation or electric and magnetic fields into a
corresponding change in the optical
radiation. Since light is characterized by
amplitude (intensity), phase, frequency
and polarization, any one or more of
these parameters may undergo a change.
The usefulness of the fiber optic sensor
therefore depends upon the magnitude of
this change and our ability to measure
and quantify the same reliably and
accurately. The advantages of fiber optic
sensors are freedom from EMI, wide
bandwidth, compactness, geometric
versatility and economy. In general, FOS
is characterized by high sensitivity when
compared to other types of sensors. It is
also passive in nature due to the
dielectric construction. Specially
prepared fibers can withstand high
temperature and other harsh
environments. In telemetry and remote
sensing applications it is possible to use
a segment of the fiber as a sensor gauge
while a long length of the same or
another fiber can convey the sensed
information to a remote station.
Deployment of distributed and array
sensors covering extensive structures
and geographical locations is also
feasible. Many signal processing
devices(splitter, combiner, multiplexer,
filter, delay line etc.) can also be made
of fiber elements thus enabling the
realization of an all-fiber measuring
system. There are a variety of fiber optic
sensors. These can be classified as
follows.
A) Based on the modulation and
demodulation process, a sensor can be classified as an intensity (amplitude), phase, frequency, or polarization sensor.
Since detection of phase or frequency in
optics calls for interferometric
techniques, the latter is also termed as an
interferometric sensor. From a detection
point of view, the interferometric
technique implies heterodyne
detection/coherent detection. On the
other hand intensity sensors are basically
incoherent in nature. Intensity or
incoherent sensors are simple in
construction, while coherent detection
(interferometric) sensors are more
complex in design but offer better
sensitivity and resolution.
B) Fiber optic sensors can also be
classified on the basis of their
application: physical sensors (e.g.
measurement of temperature, stress,
etc.); chemical sensors (e.g.
measurement of pH content, gas
analysis, spectroscopic studies, etc.);
bio-medical sensors (inserted via
catheters or endoscopes which measure
blood flow, glucose content
and so on). Both the intensity types and
the interferometric types of sensors can
be considered in any of the above
applications.
C) Extrinsic or intrinsic sensors is
another classification scheme. In the
former, sensing takes place in a region
outside of the fiber and the fiber
essentially serves as a conduit for the to-
and-fro transmission of light to the
sensing region efficiently and in a
desired form. On the other hand, in an
intrinsic sensor one or more of the
physical properties of the fiber undergo a
change as mentioned in A) above.
3. BASIC COMPONENTS
A fiber optic sensor in general will
consist of a source of light, a length of
sensing (and transmission) fiber, a photo
detector, demodulation, processing and
display optics and the required
electronics.
3.1 Optical fibers:
These are thin, long
cylindrical structures which support light
propagation through total internal
reflection.








An optical fiber consists of an inner core
and an outer cladding typically made of
silica glass, although other materials like plastic are sometimes used. Three types of fibers are in common use in FOS. The multimode (MM) fiber consists of a core region whose diameter (~50 μm) is a large multiple of the
optical wavelength. The index profile of
the core is either uniform (step-index) or
graded (e.g., parabolic). Plastic fibers
have a step index profile and a core size
of about 1mm. The micro bend type or
the evanescent type intensity sensors use
MM fibers. MM fiber has the advantage that it can couple a large amount of light and is easy to handle, both advantages arising from its large core size.
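To make the point about the large core concrete, the minimal sketch below computes the numerical aperture, acceptance angle and approximate number of guided modes of a step-index multimode fiber. The core and cladding indices, the 50 μm core and the 0.85 μm wavelength are assumed illustrative values, not figures taken from this paper.

import math

# Illustrative step-index multimode fiber (assumed values, not from a datasheet)
n_core = 1.48            # core refractive index
n_clad = 1.46            # cladding refractive index
core_diameter_um = 50.0  # core diameter, micrometers
wavelength_um = 0.85     # operating wavelength, micrometers

# Numerical aperture and acceptance half-angle set by total internal reflection
na = math.sqrt(n_core**2 - n_clad**2)
acceptance_deg = math.degrees(math.asin(na))

# Normalized frequency (V-number) and approximate number of guided modes
a_um = core_diameter_um / 2.0
v = 2.0 * math.pi * a_um * na / wavelength_um
modes = v**2 / 2.0   # step-index approximation, valid for large V

print(f"NA = {na:.3f}, acceptance half-angle = {acceptance_deg:.1f} degrees")
print(f"V-number = {v:.1f}, approx. guided modes = {modes:.0f}")

The hundreds of supported modes and the wide acceptance cone are what make light coupling and handling easy compared with a single-mode fiber.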



3.2 Sources: In FOS semiconductor
based light sources offer the best
advantages in terms of size, cost, power
consumption and reliability. Light
emitting diodes (LEDs)
and laser diodes (LDs) are the right type
of sources for FOS although in
laboratory experiments the He-Ne laser
is frequently used. Features of LEDs include very low
coherence length, broad spectral width,
low sensitivity to back reflected light
and high reliability. They are useful in
intensity type of sensors only. LDs on
the other hand
exhibit high coherence, narrow line
width and high optical output power, all
of which are essential in interferometric
sensors.
3.3 Detectors:
Semiconductor
photodiodes (PDs) and avalanche
photodiodes (APDs) are the most
suitable detectors in FOS. APD can
sense low light levels due to the inherent
gain because of avalanche
multiplication, but needs a large supply voltage, typically about 100 V. The
various noise mechanisms associated
with the detector and electronic circuitry
limit the ultimate detection capability.
Thermal and shot noise are the two main noise sources and need to be minimized
for good sensor performance. Detector
response varies as a function of
wavelength. Silicon PD is good for
visible and near IR wavelengths.
Generally there is no bandwidth
limitation due to the detector as such,
although the associated electronic
circuits can pose some limitation.
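As a rough, hedged illustration of how these noise sources limit detection, the sketch below estimates the shot-noise and thermal-noise currents of a photodiode front end and the resulting signal-to-noise ratio. The optical power, responsivity, bandwidth and load resistance are assumed round numbers, not parameters of any particular sensor described here.

import math

# Assumed illustrative front-end parameters
P_opt = 1e-6        # received optical power, W
R = 0.5             # photodiode responsivity, A/W (silicon, visible/near-IR)
B = 10e3            # detection bandwidth, Hz
R_load = 10e3       # load resistance, ohm
T = 300.0           # temperature, K

q = 1.602e-19       # electron charge, C
k = 1.381e-23       # Boltzmann constant, J/K

I_sig = R * P_opt                              # signal photocurrent
i_shot = math.sqrt(2 * q * I_sig * B)          # shot-noise current (rms)
i_thermal = math.sqrt(4 * k * T * B / R_load)  # thermal (Johnson) noise current (rms)

snr = I_sig**2 / (i_shot**2 + i_thermal**2)
print(f"signal current       = {I_sig*1e6:.3f} uA")
print(f"shot noise current   = {i_shot*1e12:.1f} pA rms")
print(f"thermal noise current= {i_thermal*1e12:.1f} pA rms")
print(f"SNR = {10*math.log10(snr):.1f} dB")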
4. APPLICATIONS IN VARIOUS FIELDS
4.1 Civil engineering
In general fiber optic sensors for smart
structure applications require the
following capabilities:
1. Monitor the manufacturing process of
composite structures.
2. Check out the performance of any part
or point of the structure during
assembly.
3. Form a sensor network and serve as a
health monitoring and performance
evaluation system during the operational
period of the structure. Using a micro
bend sensor as shown in fig.8, pressure,
load and displacement measurements
can be made on civil structures such as buildings and bridges. Such a sensor is
attractive because it is simple to use, low
cost and very rugged. Initial calibration
could be done with a compression
testing machine.


The basic principle behind the civil engineering application is that strain and stress are directly calibrated in terms of microbend loss. In the initial condition, when no pressure or force is applied, all the light travels from the source to the detector without loss. In the presence of pressure, for example when a heavy vehicle passes over the structure, the plates are compressed or expanded, bending the fiber periodically, so microbend loss occurs and the optical intensity at the output decreases.

4.2 High temperature measurement (Michelson Interferometer)

A Michelson optical fiber interferometer can be used to make high temperature measurements. The basic principle is that whenever the temperature changes, the fiber Bragg grating expands or contracts, so the refractive index period changes and ultimately the wavelength of the reflected wave changes. The device works as follows: the incident light beam is split between a fixed path and a path terminated by a moving reflector. When the end reflectors in the two paths are identical, beams of a single wavelength recombine at the beam splitter and produce a sinusoidal intensity variation at the detector as the moving reflector changes the optical path difference. One complete cycle of intensity maximum and minimum is known as an interference fringe. A reference signal is used to accurately calibrate the interferometer and to determine the absolute wavelength in the spectrum of the light, which is then calibrated in terms of temperature variation.
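A minimal numerical sketch of the wavelength-shift idea, assuming the standard fiber Bragg grating condition (reflected wavelength = 2 x effective index x grating period) and representative silica thermo-optic and expansion coefficients; all values are illustrative assumptions rather than data from this paper.

# Assumed representative values for a silica fiber Bragg grating
n_eff = 1.447          # effective refractive index
period_nm = 535.0      # grating period, nm
alpha = 0.55e-6        # thermal expansion coefficient of silica, 1/K
xi = 8.6e-6            # thermo-optic coefficient of silica, 1/K

def bragg_wavelength_nm(n, period):
    """Bragg condition: reflected wavelength = 2 * n_eff * grating period."""
    return 2.0 * n * period

def wavelength_shift_nm(lambda_b, delta_T):
    """Temperature-induced shift: d(lambda)/lambda = (alpha + xi) * dT."""
    return lambda_b * (alpha + xi) * delta_T

lambda_b = bragg_wavelength_nm(n_eff, period_nm)
print(f"Bragg wavelength = {lambda_b:.1f} nm")
for dT in (10, 100, 500):
    print(f"dT = {dT:4d} K -> wavelength shift = {wavelength_shift_nm(lambda_b, dT):.3f} nm")

With these assumed numbers the sensitivity works out to roughly 14 pm per kelvin near 1550 nm, which is why a wavelength-resolving readout such as the interferometer described above is needed.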


4.3 Recent Trend of Optical
Sensor
NASA Goddard Space Flight Center
(GSFC) has developed a new method for
preparing fiber optic devices combining
chemistry and materials processing. The
resulting devices have a crack-free core
that can be doped to achieve specific
detection properties.
This sol-gel technology can be used in a
wide range of medical applications,
including in vitro diagnostics and public
health detection systems.


How it Works :-
Existing sol-gel fiber optic units coat
optical fibers with sol-gel and
experience the drawback of detection
occurring outside the fiber rather than
inside. Researchers at NASA Goddard
Space Flight Center have incorporated
sol-gel as a core element rather than as a
coating on fibers. This technology is a
method for fabricating a fiber optic
device, ensuring that the sol-gel's emitted luminescence or fluorescence is transmitted directly to the detector. It
involves integrating the sol-gel into a
hollow-core optical fiber
following these basic steps:
#Identify the condition to be monitored.
#Identify the appropriate sol-gel for the
specific condition.
#Chemically fabricate the sol-gel and
introduce the appropriate dopant.
#Inject the sol-gel into the hollow core
fiber.
#Cure the sol-gel inside the fiber to
create the probe.
#Attach the probe to a communications
fiber optic waveguide.
#The probe is ready for use.
Why it is Better
Sensors offer a fast response time and
sensitivity several orders of magnitude
higher than that of existing sol-gel
detector technologies. Many sol-gel-filled
fiber optic units can be bundled for
custom applications where the fiber acts
as sensor and reaction substrate. Sol-gel
fiber optic units can be used single-
ended or in-line, as required by the
application. The resulting fiber optic
sensing units possess all the beneficial
characteristics of typical
fiber optic sensors and sol-gels. Sol-gels
can be tailored to obtain specific
electrical and optical properties. By
carefully selecting dopants, a wide range
of sensing responses can be achieved.
#GSFC researchers have demonstrated that sol-gels can be doped with highly pH-sensitive fluorescent dyes while retaining their sensitivity.

Applications:
# In vitro diagnostics of physiological analytes and other body chemistry
# Monitoring of drug
dosage/concentrations and blood
constituents.
# Rapid detection of bacterial
infections/contamination
#Detection and monitoring of chronic
diseases and critical biomarkers
#Biological sensing or monitoring (i.e.,
using luminescent materials for
chemical, pressure, or temperature
sensing or stress monitoring)
# Detection of phosphatases and
measurement of phosphatase activity
(i.e., in detection of cancer and
biochemical processes in cells)
# Screening combinatorial chemistry
libraries to accelerate discovery of new
pharmaceuticals
#Interventional cardiology to assess risk
of subsequent coronary occlusion or
pathological bleeding periprocedurally
# Improved cancer chemotherapeutic
drug delivery (i.e., providing the optimal
dose of drugs at the tumor site)
References:
1. Electronic Communications by Roddy and Coolen
2. Photonics magazine
3. IEEE journals (Control Systems)
4. www.google.com









Presented by-
D.SREEHARI G.MADAN MOHAN REDDY
III B.Tech (E.C.E) III B.Tech (E.C.E)
S.K.I.T S.K.I.T
SRI KALAHASTI. SRI KALAHASTI.
Email-hari_447@yahoo.co.in Email- writetoreddy455@gmail.com



ABSTRACT:
Spintronics is a rapidly emerging field of science and technology that will most
likely have a significant impact on the future of all aspects of electronics as we continue
to move into the 21st century. Conventional electronics are based on the charge of the
electron. Attempts to use the other fundamental property of an electron, its spin, have
given rise to a new, rapidly evolving field, known as spintronics, an acronym for spin
transport electronics. Initially, the spintronics program involved overseeing the
development of advanced magnetic memory and sensors based on spin transport
electronics. It was then expanded to include Spins IN Semiconductors (SPINS), in the
hope of developing a new paradigm in semiconductor electronics based on the spin
degree of freedom of the electron. Studies of spin-polarized transport in bulk and low-
dimensional semiconductor structures show promise for the creation of a hybrid device
that would combine magnetic storage with gain: in effect, a spin memory transistor. This paper reviews some of the major developments in this field, the basic principles of spin-polarized transport, and provides a perspective on what we think will be the future of this exciting field. It is not meant to be a comprehensive review of the whole field but
reflects a bias on the part of the authors toward areas that they believe will lead to
significant future technologies.














INTRODUCTION:

Nanotechnology is the latest
technology giving hope to man, to
achieve the smallest possible devices. As
rapid progress in the miniaturization of
semiconductor electronic devices leads
toward chip features smaller than 100
nanometers in size, device engineers and
physicists are inevitably faced with the
looming presence of quantum
mechanics--the branch of physics where
wavelike properties of the electron
dominate the particle (charge) behavior.
One such property is the electron's spin, which is closely related to magnetism. Devices that rely on an electron's spin to perform their functions form the basis of
spintronics (spin-based electronics), also
known as magneto electronics.
Magnetism has always been important
for information storage. Spintronics is a
new branch of electronics in which
electron spin, in addition to charge, is
manipulated to yield a desired outcome.


CONCEPT:
All spintronic devices act
according to the simple scheme: (1)
information is stored (written) into spins
as a particular spin orientation (up or
down), (2) the spins, being attached to
mobile electrons, carry the information
along a wire, and (3) the information is
read at a terminal. Spin orientation of
conduction electrons survives for a
relatively long time (nanoseconds,
compared to tens of femtoseconds
during which electron momentum
decays), which makes spintronic devices
particularly attractive for memory
storage and magnetic sensors
applications, and, potentially for
quantum computing where electron spin
would represent a bit of information.
The hard drives of the latest computers
rely on a spintronic effect, giant
magnetoresistance (GMR), to read dense
data.





TYPES OF SPINTRONIC
DEVICES:
Spintronic devices can be divided into
three categories.
1. The first category comprises metal-based devices in which a spin-polarised current flows.
2. In the second category, the spin-
polarized currents flow in
semiconductors instead of metals.
Achieving practical spintronics in
semiconductors would allow a number
of existing microelectronics techniques
to be co-opted and would also enable many more types of devices made
possible by semiconductors' high-quality
optical properties and their ability to
amplify both optical and electrical
signals. This avenue of research may
lead to a new class of multifunctional
electronics that combine logic, storage
and communications on a single chip.

3. The most complex is the third
category of devices: ones that
manipulate the quantum spin states of
individual electrons. This category
includes spintronic quantum logic gates
that would enable construction of large-
scale quantum computers, which would
extravagantly surpass standard
computers for certain tasks. Several technologies are being researched toward this goal: ions in magnetic traps, "frozen" light, ultracold quantum gases called Bose-Einstein condensates, nuclear magnetic resonance of molecules in liquids, and so on.
SPIN POLARISED CURRENT:

In an ordinary electric current,
the spins point at random and play no
role in determining the resistance of a
wire or the amplification of a transistor
circuit. Only the charge and flow of
electrons as particles matters. Spintronic
devices, in contrast, rely on differences
in the transport of "spin up" and "spin
down" electrons. In a ferromagnetic
material, such as iron or cobalt, the spins
of certain electrons on neighboring
atoms tend to line up. In a strongly
magnetized piece of ferromagnetic
material, this alignment extends
throughout much of the metal. When a
current passes through the ferromagnet,
electrons of one spin direction tend to be
obstructed. This results in a current, called a SPIN POLARISED CURRENT, in which the electron spins predominantly point in the other direction.
A ferromagnet can even
affect the flow of a current in a nearby
nonmagnetic metal. This principle is
used in present-day read heads in
computer where hard drives use a device
dubbed a spin valve, in which a layer of
a nonmagnetic metal is sandwiched
between two ferromagnetic metallic
layers. The magnetization of the first
layer is fixed, or pinned, but the second
ferromagnetic layer is not. As the read
head travels along a track of data on a
computer disk, the small magnetic fields
of the recorded 1's and 0's change the
second layer's magnetization back and
forth, parallel or antiparallel

to the magnetization of the pinned layer.

[Figure: GMR read-head sandwich of a magnetic layer, a nonmagnetic spacer and a second magnetic layer on a substrate, with electron currents passing through the layers]
In the parallel case, only electrons that
are oriented in the favored direction flow
through the conductor easily. In the
antiparallel case, all electrons are
impeded. The resulting changes in the
current allow GMR read heads to detect
weaker fields than their predecessors, so
that data can be stored using more
tightly packed magnetized spots on a
disk, increasing storage densities by a
factor of three.
The basic GMR device
consists of a three-layer sandwich of a
magnetic metal such as cobalt with a
nonmagnetic metal filling such as silver
(see diagram, above). A current passes
through the layers consisting of spin-up
and spin-down electrons. Those oriented
in the same direction as the electron
spins in a magnetic layer pass through
quite easily while those oriented in the
opposite direction are scattered. If the
orientation of one of the magnetic layers
can easily be changed by the presence of
a magnetic field then the device will act
as a filter, or 'spin valve', letting through
more electrons when the spin
orientations in the two layers are the
same and fewer when orientations are
oppositely aligned. The electrical
resistance of the device can therefore be
changed dramatically.
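The spin-valve behaviour described above can be captured with the simple two-current (two spin channel) resistor model sketched below. Each spin population travels in its own parallel channel and sees a low resistance in a layer whose magnetization matches its spin and a high resistance otherwise; the two resistance values are arbitrary illustrative numbers, not measured data.

# Two-current model of a GMR spin valve (illustrative, assumed values)
R_match = 1.0      # ohm, resistance when spin matches the layer magnetization
R_mismatch = 4.0   # ohm, resistance when spin opposes the layer magnetization

def series(*rs):
    return sum(rs)

def parallel(r1, r2):
    return r1 * r2 / (r1 + r2)

# Parallel alignment: one spin channel is low-low, the other high-high
R_parallel_config = parallel(series(R_match, R_match),
                             series(R_mismatch, R_mismatch))

# Antiparallel alignment: both spin channels are low-high
R_antiparallel_config = parallel(series(R_match, R_mismatch),
                                 series(R_match, R_mismatch))

gmr_ratio = (R_antiparallel_config - R_parallel_config) / R_parallel_config
print(f"R (parallel)     = {R_parallel_config:.2f} ohm")
print(f"R (antiparallel) = {R_antiparallel_config:.2f} ohm")
print(f"GMR ratio        = {gmr_ratio*100:.0f} %")

Even this toy model reproduces the qualitative result: the antiparallel configuration has a markedly higher resistance, which is the change the read head detects.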





SPINTRONIC QUBITS:

In any electronic orbit or energy level
there can be a maximum of two
electrons with two distinct spins, up or
down. Each electron spin can
represent a bit; for instance, a 1 for spin
up and a 0 for spin down. With
conventional computers, engineers go to
great lengths to ensure that bits remain
in stable, well-defined states. A quantum
computer, in contrast, relies on encoding
information within quantum bits, or
qubits, each of which can exist in a
superposition of 0 and 1. By having a
large number of qubits in superpositions
of alternative states, a quantum computer
intrinsically contains a massive
parallelism so that quantum algorithms
can operate on many different numbers
simultaneously. The problem of spin-polarization decay becomes all the more acute if one is to build a quantum computer based on
electron spins. That application requires
control over a property known as
quantum coherence, in essence the pure
quantum nature of all the computer's
data-carrying components. Quantum
data in semiconductors based on the
charges of electrons tend to lose
coherence, or dissipate, in mere
picoseconds, even at cryogenic
temperatures.
For electrons the horizontal quantum
spin states are actually coherent
superpositions of the spin-up and spin-
down states. In effect, such electrons are
in both the up and the down state at the
same time. This is precisely the kind of
coherent superposition of states
employed by quantum computers.
Electron-spin qubits interact only weakly
with the environment surrounding them,
principally through magnetic fields that
are nonuniform in space or changing in
time. Such fields can be effectively
shielded. Experiments were conducted to
create some of these coherent spin states
in a semiconductor to see how long they
could survive. The results are also useful
for understanding how to design devices
such as spin transistors that do not
depend on maintaining and detecting the
quantum coherence of an individual
electron's spin.
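As a toy illustration of a spin state being "both up and down at the same time", the sketch below represents a single spin qubit as a two-component complex vector, builds the equal superposition discussed above and checks the measurement probabilities; it is plain linear algebra with assumed states, not a model of any specific experiment.

import numpy as np

# Basis states: spin up |0> and spin down |1>
up = np.array([1.0, 0.0], dtype=complex)
down = np.array([0.0, 1.0], dtype=complex)

# A "horizontal" spin state: equal coherent superposition of up and down
psi = (up + down) / np.sqrt(2)

# Measurement probabilities in the up/down basis
p_up = abs(np.vdot(up, psi)) ** 2
p_down = abs(np.vdot(down, psi)) ** 2
print(f"P(up) = {p_up:.2f}, P(down) = {p_down:.2f}")

# Precession about a field along z only rotates the relative phase; this is how
# nonuniform or time-varying magnetic fields eventually scramble an ensemble of qubits.
def precess(state, phase):
    rz = np.array([[np.exp(-1j * phase / 2), 0],
                   [0, np.exp(1j * phase / 2)]])
    return rz @ state

rotated = precess(psi, np.pi / 2)
print("state after precession:", np.round(rotated, 3))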
SPIN DRAGGING IN SEMICONDUCTORS

Figure 1: Pools of electrons in spin-polarized states
are dragged more than 100 microns by
an electric field--a fundamental step
toward technology relying on large-scale
quantum coherence. The peaks diminish
as quantum coherence of the states is
lost.
Apart from GMR, another three-layered
device, the magnetic tunnel junction, has
a thin insulating layer between two
metallic ferromagnets. Current flows
through the device by the process of
quantum tunneling: a small number of
electrons manage to jump through the
barrier even though they are forbidden to
be in the insulator. The tunneling current
is obstructed when the two
ferromagnetic layers have opposite
orientations and is allowed when their
orientations are the same.

Whereas the metallic spintronic devices
just described provide new ways to store
information, semiconductor spintronics
may offer even more interesting
possibilities. Because conventional
semiconductors are not ferromagnetic, it is natural to ask how semiconductor spintronic devices can work at all. One
solution employs a ferromagnetic metal
to inject a spin-polarized current into a
semiconductor.

SPIN FET:

Figure 2: A model of a spin FET.
In 1990 Supriyo Datta and Biswajit A.
Das, then at Purdue University, proposed
a design for a spin-polarized field-effect
transistor, or spin FET. In a conventional
FET, a narrow semiconductor channel
runs between two electrodes named the
source and the drain. When voltage is
applied to the gate electrode, which is
above the channel, the resulting electric
field drives electrons out of the channel
(for instance), turning the channel into
an insulator. The Datta-Das spin FET
has a ferromagnetic source and drain so
that the current flowing into the channel
is spin-polarized. When a voltage is
applied to the gate, the spins rotate as
they pass through the channel and the
drain rejects these antialigned electrons.
A spin FET would have several
advantages over a conventional FET.
Flipping an electron's spin takes much
less energy and can be done much faster
than pushing an electron out of the
channel. There is also a possibility of
changing the orientation of the source or
drain with a magnetic field, introducing
an additional type of control that is not
possible with a conventional FET: logic
gates whose functions can be changed on
the fly.
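The gate action in the Datta-Das proposal is often summarized by the precession angle 2·m*·α·L/ħ², where α is the gate-tunable spin-orbit (Rashba) coefficient and L the channel length. The sketch below evaluates this with assumed InGaAs-like numbers to show how sweeping the gate rotates the spins from accepted to rejected at the drain; the parameter values are illustrative assumptions, not figures from the original proposal.

import math

hbar = 1.0546e-34              # J*s
m_e = 9.109e-31                # kg
eV = 1.602e-19                 # J

# Assumed illustrative channel (InGaAs-like two-dimensional electron gas)
m_eff = 0.05 * m_e             # effective electron mass
L = 0.6e-6                     # channel length, m

for alpha_eVm in (2e-12, 4e-12, 6e-12):        # gate-tuned Rashba coefficient, eV*m
    alpha = alpha_eVm * eV                      # convert to J*m
    dtheta = 2 * m_eff * alpha * L / hbar**2    # spin precession angle across the channel
    # Fraction accepted by a drain magnetized parallel to the source ~ cos^2(theta/2)
    t = math.cos(dtheta / 2) ** 2
    print(f"alpha = {alpha_eVm:.0e} eV*m -> precession = {dtheta:.2f} rad, "
          f"relative transmission = {t:.2f}")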

MAGNETIC
SEMICONDUCTORS:
As yet, however, no one has
succeeded in making a working
prototype of the Datta-Das spin FET
because of difficulties in efficiently
injecting spin currents from a
ferromagnetic metal into a
semiconductor. Although this remains a
controversial subject, recent optical
experiments carried out at various
laboratories around the world indicate
that efficient spin injection into
semiconductors can indeed be achieved
by using unconventional materials,
called MAGNETIC
SEMICONDUCTORS, which
incorporate magnetism by doping the
semiconductor crystals with atoms such
as manganese. Some magnetic
semiconductors have been engineered to
show ferromagnetism, providing a
spintronic component called a gateable
ferromagnet. In this device, a small
voltage would switch the semiconductor
between nonmagnetic and ferromagnetic
states. A gateable ferromagnet could in
turn be used as a spin filter--a device
that, when switched on, passes one spin
state but impedes the other. The filtering
effect could be amplified by placing the
ferromagnet in a resonant tunnel diode.
Conventional resonant tunnel diodes
allow currents to flow at a specific
voltage, one at which the electrons have
an energy that is resonant with the
tunneling barrier. The version
incorporating a ferromagnet would have
a barrier with different resonant voltages
for up and down spins. A key research
question for this second category of
spintronics is how well electrons can
maintain a specific spin state when
traveling through a semiconductor or
crossing from one material to another.
For instance, a spin FET will not work
unless the electrons remain polarized on
entering the channel and after traveling
to its far end. Subsequent studies of electrons in ZnSe and gallium arsenide
(GaAs) have shown that, under optimal
conditions, spin coherence in a
semiconductor could last hundreds of
nanoseconds at low temperatures.



HAZARDS OF HOLES:
These experiments also revealed
characteristics that are crucial for
attaining long spin coherence times. Of
primary importance is the nature of the
carriers of spin and charge. A
semiconductor has two key bands of
states that can be occupied by electrons:
a valence band, which is usually full,
and (at a slightly higher energy) a
conduction band, which is usually
empty. Charge carriers in
semiconductors come in two flavors:
conduction electrons, which are
electrons in the conduction band, and
valence holes, which are electrons
missing from the valence band. The
holes carry a spin because in a filled
valence band all the spins cancel out: the
removal of one electron leaves a net spin
imbalance in the same way that it leaves
behind a net positive charge.
Holes have dramatically shorter spin
coherence times than electrons, and spin
exchange between electrons and holes is
very efficient, accelerating the
decoherence of both. For these reasons,
it is advantageous to have no hole
carriers, a condition that is achieved by
using n-doped semiconductor crystals,
which are doped to have some excess
electrons in the conduction band without
any corresponding valence holes. When
holes have been eliminated, the
dominant remaining source of
decoherence comes from a relativistic
effect: a body moving at high speed
through an electric field sees the field
partially transformed into a magnetic
field. For an electron moving in a
semiconductor, the crystal structure of
the material provides the electric field.
The spin of a fast-moving electron
precesses around the resulting local
magnetic field that it sees. Two electron
spins that start off parallel can soon
evolve to point in opposite directions. As
this misalignment among the electrons
grows, the average spin polarization of
the population diminishes, which the experiment measures as a loss of coherence. This population-based
origin of decoherence holds forth the
hope that the spin coherence times of
individual electrons may turn out to
greatly exceed even the remarkably long
times seen in ensembles.
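A quick numerical illustration of this population-based decoherence, under the assumption that each electron simply precesses at a slightly different frequency in the slightly different field it sees: every individual spin stays coherent, yet the ensemble-averaged polarization decays.

import numpy as np

rng = np.random.default_rng(0)

# Toy ensemble with assumed numbers: 1 GHz mean precession, 5 % frequency spread
n_spins = 5000
mean_omega = 2 * np.pi * 1.0e9
spread = 0.05 * mean_omega
omegas = rng.normal(mean_omega, spread, n_spins)

t = np.linspace(0.0, 20e-9, 201)   # 0 to 20 ns
# Every individual spin stays perfectly coherent; only the ensemble average decays.
ensemble = np.exp(1j * np.outer(t, omegas)).mean(axis=1)
envelope = np.abs(ensemble)

for ns in (0, 5, 10, 20):
    i = int(np.argmin(np.abs(t - ns * 1e-9)))
    print(f"t = {ns:2d} ns: ensemble spin polarization = {envelope[i]:.3f}")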
In conjunction with the carrier's
lifetime, two other properties are crucial
for semiconductor applications: how far
excitations can be transported and how
fast the state of a device can be
manipulated. Macroscopic spin transport
was first demonstrated in n-doped
gallium arsenide. A laser pulse excited a "puddle" of coherently precessing electrons, much as in the lifetime
experiments, but then a lateral electric
field dragged the electrons through the
crystal. Spin packets traveled more than
100 microns with only moderate loss of
spin polarization. Recent experiments
have successfully driven coherent spins
across complex interfaces between
semiconductor crystals of different
composition (for instance, from GaAs
into ZnSe). A number of semiconductor
applications, from lasers to transistors,
are based on heterostructures, which
combine disparate materials. The same
design techniques can be implemented
on spintronics devices.

LATEST DEVELOPMENTS:
Ohio State University
researchers have made nearly all the
moving electrons inside a sample of
plastic spin in the same direction, an effect that could yield plastic memories.
Spintronics with traditional inorganic
semiconductors is difficult because most
such materials are only magnetic at very
low temperatures. Creating the cold
environment needed would be expensive
and space-consuming. The researchers
used a plastic called vanadium
tetracyanoethylene. The material is
magnetic at high temperatures, and
would be suitable for use inside a
computer operating above room
temperature. Plastic Spintronic devices
would weigh less than traditional
electronic ICs and cost less to
manufacture. They suggest that
inexpensive inkjet technology could one
day be used to quickly print entire sheets
of plastic semiconductors for
spintronics.

APPLICATIONS:
1. GMR sensors have a wide range of applications, including:

1. Fast accurate position and
motion sensing of mechanical
components in precision
engineering and in robotics
2. All kinds of automotive sensors
for fuel handling systems,
electronic engine control,
antiskid systems, speed control and
navigation
3. Missile guidance
4. Position and motion sensing in
computer video games
5. Key-hole surgery and post-
operative care

2. MRAMs: Magnetic random access memories, which would
retain their state even when the power
was turned off, but unlike present forms
of nonvolatile memory, they would have
switching rates and rewritability
challenging those of conventional RAM.
In today's read heads and MRAMs, key
features are made of ferromagnetic
metallic alloys. Such devices are actually
under the first category of spintronics
discussed above.

CONCLUSION:
This spintronic device technology can help overcome the problem of conventional charge-based semiconductor electronics approaching its limits. It is very useful for the field of advanced magnetic memory and sensors based on spin transport electronics. Spintronics is thus a rapidly emerging field that will most likely have a significant impact on all aspects of electronics as we continue to move into the 21st century.







REFERENCES:

1. Scientific American 2006
2. Advanced Technology Oct, 2005
3. Semiconductor spintronic devices, by David D. Awschalom, Michael E. Flatté and Nitin Samarth.




Thin Film Transistors & Flexible Displays


PRESENTED BY

V.S.KASHYAP P.SURYA PRAKASH
vsk.kashyap@gmail.com prakash1431@gmail.com
ph no :9247562052 ph no:9885233780
ECE DEPARTMENT ECE DEPARTMENT
GMRIT, RAJAM GMRIT, RAJAM



Introduction:
The reason TFTs are so important to flexible electronics
is that almost every rigid electronic device in use today depends
upon transistors to function. For flexible electronics to accommodate
several markets, especially those beyond large-area displays, TFTs
will be necessary as power sources, logic devices, switches and
more. The Thin Film Transistor (TFT) is a special kind of field-effect transistor that is made by the deposition of thin films of the different materials that make up the transistor. Field effect transistors
(FET) consist of a piece of semiconductor made into four regions that
are referred to as the source, drain, gate, and body. (Figure 1) The
source and drain components are doped with charge carriers that will
provide excess holes (p-doped) or electrons (n-doped) in the
semiconductor. Between the source and drain is the body, which is
semiconductor that is either non-doped or oppositely doped from the
source and drain. The gate is placed near the body region, but
separated by an electrical insulator, (Often oxides) and may be either
a conducting metal or semiconductor oppositely doped from the
source and drain. In an n-channel, or npn, FET, applying a positive voltage (a negative voltage in the case of a hole-doped p-channel, or pnp, transistor) to the gate provides an electric field that causes electrons to become mobile in the body of the transistor, which then becomes conductive and even current amplifying. This electric-field effect is what gives the FET its name.
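To put a number on the field effect, the sketch below uses the textbook square-law expression for the saturation drain current of an ideal FET. The gate capacitance, geometry and voltages are assumed round figures, and the two mobility values are only typical orders of magnitude for the silicon flavours discussed later; the point is how directly carrier mobility sets the available current.

def drain_current_sat(mu_cm2, c_ox_F_per_cm2, w_over_l, v_gs, v_t):
    """Square-law saturation drain current of an ideal FET, in amperes."""
    return 0.5 * mu_cm2 * c_ox_F_per_cm2 * w_over_l * (v_gs - v_t) ** 2

# Assumed illustrative device: W/L = 10, 100 nF/cm^2 gate capacitance,
# 10 V gate bias against a 2 V threshold.
c_ox = 100e-9          # F/cm^2
w_over_l = 10
v_gs, v_t = 10.0, 2.0

for name, mu in (("a-Si:H", 1.0), ("p-Si:H", 100.0)):   # cm^2/V*s, typical orders only
    i_d = drain_current_sat(mu, c_ox, w_over_l, v_gs, v_t)
    print(f"{name:7s} mobility = {mu:6.1f} cm^2/Vs -> I_D(sat) = {i_d*1e6:9.1f} uA")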


Current State of Technology:
The most prevalent current use of TFTs is in active
matrix liquid crystal displays. An active matrix is addressed using a
column/row mechanism, which lowers the number of I/Os needed to
the drivers. The AMLCD uses TFTs to store the state of a cell
temporarily with a capacitor, which maintains the state of the pixel
until the next refresh of the screen changes the pixels state by
adjusting the voltage across the capacitor and thus the pixel. In this
use, the most important characteristics of the TFT are a low leakage
current (which is essentially the reverse bias current of the transistor)
and a fast response time (which depends upon the current carrier
mobility of the transistor). The limitation of TFTs use in flexible
electronics for display applications typically has been due to the need
for flexible materials, typically organics and polymers are referred to
in literature, and high enough current carrier mobility. The main
method of manufacturing TFTs includes sputtering/deposition
processes, photolithography, and chemical and dry etching to create
TFTs upon glass substrates. Depending on the desired layout of
the TFT, the processes involved may come in different orders. TFTs
may be made in staggered or planar structures. (Figure 2) Staggered
structures can be made into both top-gate and bottom-gate staggered
structures (Figure 3), which refer to the location of the gate electrode of the TFT compared to the substrate.

Figure 2: Diagram of planar and staggered TFT structures

The typical materials and typical process in the creation of a TFT are outlined below. The substrate most often used
in the creation of a TFT is glass. This is because glass has several
important properties. Glass has a very low transmission of liquid
water and water vapor while still being transparent, which prevents



damage to the device chemically in addition to physical protection.
Glass also is a good insulator so electrical devices placed upon it will
not leak current by conduction. Glass substrates are typically created
through the formation of a continuous, high purity SiO2 sheet upon a
flat, non-stick surface. For small displays, this may be a coated metal,
but for larger displays, liquid metal or viscous, 1A number of sources
were used in this description .Heavy fluids can be used to get a
uniform flat shape. In the creation of a top-gate staggered n-channel
TFT, the next step will be the deposition, typically by Plasma
Enhanced Chemical Vapor Deposition Process. (PECVD) Plasma of
the desired material, hydrogenated amorphous silicon is the case we
describe here, is contained above the substrate in solution with inert
gases, such as Argon and Helium. There are several reasons for the
use of hydrogenated amorphous silicon. The first is that it is readily
available for electronic companies and is simple in molecular
construction being an elemental molecule. The second is that it has a
good charge carrier mobility once hydrogenated. In addition, this
charge carrier mobility can be enhanced through the excitation of
hydrogenated amorphous silicon (a-Si:H) into hydrogenated
polycrystalline silicon (p-Si:H). Excitation is performed with laser
annealing by an excimer laser [2]. When the plasma contacts the
substrate, it forms a deposition. The material that is often used in this
case is amorphous silicon, meaning that the silicon atoms are not in a
crystal structure with short or long range order. The first deposition
will produce the body of the TFT that is non- or p-doped
hydrogenated amorphous silicon. On top of this layer is placed a
layer of SiO2 (or another non-conducting oxide) which will provide an
electronic barrier from the body to the gate. Photolithography will next
take place. By masking the desired layout of the body with a positive
resist, the exposure of the material to UV light will result in a change of
the chemical structure so that only the desired body of the TFT is left.
Next, PECVD is used again for the deposition of the n-doped source
and drain. This will be amorphous silicon and the dopant materials
will be phosphorous, nitrogen, or other 5-valence electron atoms that
will provide additional negative charge carriers in the semiconductor.
This material is then also subject to a photolithography treatment. At
this point, most or all of the semiconductor is formed for the TFT and
the rest of the process allows for the connections to this block from
the rest of the device. Next, a second isolation/insulation/barrier layer
of SiO2 (or other oxide) is deposited with sputtering or similar
methods. However, this oxide requires careful photolithography since
below or next to the area being cleaned of the oxide is the doped
layer of a-Si:H. (Or p-Si:H if it has been crystallized) The empty spots
left by the mask patterning are then sputtered with aluminum (or other
conducting metal) to make the drain, source, and gate contacts.
This is most of the procedure for the creation of an n-channel top-gate staggered TFT, with the extra steps for the connection of the device, additional barrier-layer and coating deposition, and pixel formation left out.

Just as prevalent, if not more so, is the bottom-gate staggered TFT. In this case, much of the procedure is similar; however, the first deposition upon the glass substrate is a film of indium tin oxide (ITO), which is a transparent conductor that is used
for connection to the gate [3]. Then the deposition processes
proceed to deposit the isolation layer, the body and doped source
and drain, and the contacts as before, removing patterning and
deposition steps to form the gate on top of the device.

The reason why hydrogenated polycrystalline silicon is
and will be used in high performance devices is that it has a greater
than two orders of magnitude increase in electron mobility as
compared to hydrogenated amorphous silicon. (Table 1)
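A rough feel for what this means in a display, assuming round numbers: an AMLCD pixel capacitor must be charged within one row-select time. The sketch below compares the charging times implied by on-currents of the order suggested by the square-law estimate earlier; the capacitance, voltage swing and panel timing are invented illustrative figures.

# Assumed round numbers, not measured panel data
c_pixel = 1e-12                 # F, pixel storage capacitance
delta_v = 5.0                   # V, pixel voltage swing
row_time = 1.0 / (60 * 1080)    # s, 60 Hz refresh and 1080 rows

for name, i_on in (("a-Si:H", 32e-6), ("p-Si:H", 3.2e-3)):   # on-currents, A (assumed)
    t_charge = c_pixel * delta_v / i_on      # crude constant-current estimate
    print(f"{name:7s} pixel charge time = {t_charge*1e9:7.1f} ns "
          f"(row-select time = {row_time*1e6:.1f} us)")

With these numbers even a-Si:H switches a single pixel comfortably inside the row time; the two-orders-of-magnitude headroom of p-Si:H is what makes it attractive for integrating the row and column driver circuits on the panel itself.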


Requirements for Flexible Electronics:
In order to produce electronic devices for flex
applications, several physical constraints must be met. A flexible
device, such as a screen for a display, must be thin to be able to
bend without enough physical stress to break adhesion between
device layers. Less than one-hundred micrometer (100 μm) pitch will
be necessary to prevent cracking in flex applications. In addition,
flexible electronic devices will need to be able to withstand many
temperature cycles. This means the coefficient of thermal expansion
(CTE) of the different materials used in construction must be similar
enough that fracture will not occur from repeated heat changes.
In different environments, including those with a high relative
humidity, semiconductors and circuits, such as OLEDs, must be
isolated from the climate changes. For flexible electronics, this
humidity issue is a large problem. Although many thin, rugged, and
flexible polymer films are available, their water absorption/transmission is many orders of magnitude higher than that of glass. (Figure 4) The current leading method of making polymer
films more water resistant is by depositing a barrier layer upon
the component side of the film, so that the polymer film protects the
barrier layer, but water does not pass the barrier layer into the device.
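Two of these mechanical constraints can be estimated with one-line formulas: the surface strain from bending a film of thickness t to a radius R is roughly t/(2R), and the thermal mismatch strain between two bonded layers is the CTE difference times the temperature swing. The sketch below plugs in assumed round numbers (bend radius, film thicknesses, CTE values) purely for illustration.

def bending_strain(thickness_m, radius_m):
    """Approximate surface strain of a film bent to a given radius."""
    return thickness_m / (2.0 * radius_m)

def cte_mismatch_strain(cte_a_ppm, cte_b_ppm, delta_t):
    """Thermal strain mismatch between two bonded layers (CTEs in ppm/K)."""
    return abs(cte_a_ppm - cte_b_ppm) * 1e-6 * delta_t

radius = 0.01                     # 1 cm bend radius (assumed use case)
for t_um in (25, 100, 500):
    eps = bending_strain(t_um * 1e-6, radius)
    print(f"substrate thickness = {t_um:3d} um -> bending strain = {eps*100:.2f} %")

# Silicon film (~1 ppm/K) on two candidate polymer substrates over a 100 K cycle
for name, cte in (("PEN (~13 ppm/K, assumed)", 13.0), ("PI (~20 ppm/K, assumed)", 20.0)):
    eps = cte_mismatch_strain(1.0, cte, 100.0)
    print(f"{name:25s} mismatch strain over 100 K = {eps*100:.2f} %")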
The production of a-Si:H TFTs for AMLCDs has
become relatively cheap compared to what it cost less than 10 years
ago. In addition, the performance of TFT AMLCDs is increasing. This
current trend has caused more people to be interested in AMLCDs,
lowering the cost-per-unit and thus reducing the price more.


This is good from a TFT research standpoint, since AMLCDs
depend on TFTs for performance and p-Si:H is the next logical step
for faster, clearer screens. The reduction in cost, however, causes a
problem for flexible electronics entrepreneurs. Although there is more
money coming in from the market as a whole, each individual unit is
more expensive. To produce a flexible display, you must compete
with these flat panel displays, and most AMLCD manufacturers will not invest in a product that won't give similar performance to current AMLCDs at similar or reduced cost. Thus, for TFTs and
flexible display applications to become a reality, several
achievements need to be made.
Cheap, rugged, flexible, and transparent polymer substrates must
be developed that are able to withstand many work cycles without
cracking while still maintaining a low water permeability.
The creation of a polymer or organic semiconductor that will be
lower in cost than silicon yet have the necessary carrier mobility for
typical consumer performance in televisions, computer monitors, and
other displays.
The replacement of conventional LCD technology with different
methods of light production and manipulation, while (again) being low
in cost.
The final, and probably the most difficult, achievement will be the
realization of the roll-to-roll manufacturing method with integrated deposition, photolithography, and quality assurance.
These achievements could make flexible displays
cheaper and more widely usable than current LCD technology. This
may seem like a presumptuous statement, but considering the
current proliferation of cheap LCDs it seems completely possible that
if a cheaper, more adaptable alternative is found, displays will find
more uses than we can imagine now.
Current Research & Development in TFTs & Displays:
Organic Thin Film Transistors:
The most widely used organic semiconductors, such as pentacene,
thiophene oligomers, and regioregular polythiophene, seem to have
reached maturity as far as their performance is concerned. This
assumption was based upon a graph (Figure 5) that showed an
apparently asymptotic limit of carrier mobility for organic thin film
transistors. (OTFT) The limit may be seen as an extension of the
charge transport mechanisms between amorphous and
polycrystalline silicon and organics. In silicon semiconductors, the
atoms are connected with strong covalent bonds and charge carriers
move as delocalized waves and thus have a high mobility. However,
in the case of organic semiconductors, carrier transport takes place
by hopping between localized states. However, the IBM researchers
also proposed that by either strengthening the bonds in the organics
or by using singular chains for charge transport, this apparent limit
could be breached. In the past several years, many papers have
been published about pentacene exploring the exact operation
of the material, substrates that seem to maximize the performance of
pentacene crystals, and circuits created by multiple pentacene
OTFTs. However, none of these papers has shown or alludes to
advancement of the electron mobility of organic semiconductors,
specifically of pentacene, beyond that of a-Si:H. In order to make high
performance devices, p-Si:H is still required. Recent advances have
been made in the reduction of cost in the production of p-Si:H from a-
Si:H using excimer lasers at room temperature. This allows polymers
to be used as substrates, moving toward cheap TFT driven flexible
displays. This is possible because the form factor of conventional
TFTs made from p-Si:H is good enough for flexible application.
However, using a-Si:H and excimer crystallization does drive
up the cost, which means it must be made up for with lower polymer
cost and process cost.
Polymers:
In order to reduce the cost of flexible displays, many
polymer materials as a substrate have been investigated. For flexible
applications, polymer substrates will have to overcome challenges in
maintaining low CTEs, low shrinkage from heating, high enough
processing temperature for semiconductor excitation, surface
smoothness, solvent & moisture resistance, and clarity. Some
currently investigated or accessible polymers are polyimide, (PI)
polyethylene terephthalate, (PET) and polyethylene naphthalate
(PEN).

The necessity to have high enough processing
temperatures, low enough CTEs, and low shrinkage from heating in
a polymer substrate is clearly a materials selection issue. Several
available materials overcome these challenges fairly well and some
examples are commercially available through DuPont Teijin films [2].
Silicon's CTE in both amorphous and polycrystalline phases is on the order of 1 ppm/°C, making PEN, PET, and PI usable in this respect. Polyimide has a higher Tg than the others, but its decomposition temperature (< 600 °C) does not allow for excimer laser annealing of p-Si:H without
low-temperature methods. However, low temperature excimer laser
annealing of p-Si:H lends itself better to roll-to-roll processing, which
may help reduce costs after initial capital investment. Surface
smoothness and moisture & solvent permeability are significant
issues in polymer films as substrates for TFTs in flexible electronics.
(See Figure 4) However, the current solution to both of these
difficulties is the introduction of coatings to the interior of the flexible
film to act as barrier and smoothing layers. There are already commercially available PEN, PET, and PI coated polymer films from a company called Vitex Systems(TM), which markets its Barix(TM)-coated polymer films for use with OLEDs in flexible displays. However, according to Mark Poliks of Endicott Interconnect, defects can occur in these coatings unless the polymer films have been properly smoothed beforehand. Currently, DuPont Teijin Films(TM) is developing a family of coatings to solve these issues. Poliks did not present this technology as being ready for market, even though Vitex Systems(TM) appears to believe it should be commercially available. Clarity in polymer films, especially for flexible displays, may
become an issue because of heat dissipation. However, the use of
OLEDs as the luminescent component will help to reduce thermal
issues caused by the lower light transmission.
Roll-to-Roll Processing:
The dream of flexible electronics researchers and developers is the
use of roll-to-roll (R2R) processing. The main advantage of R2R is in
the reduction in cost and increase in time efficiency for the production
of electronic devices. In development for many years, this goal has
never been closer to a reality for commercialization. Several
laboratories across the country are working for the realization of
commercial R2R processing. Photolithography, laser annealing,
PECVD deposition, and other processes already used in the creation
of TFTs are not too difficult to move from plate-to-plate processing to
R2R. The current difficulty with R2R processing techniques is that it
requires a low failure rate of devices and a large capital investment.
Conclusion:
The technology of TFTs, OLEDs, and polymer
substrates is such that for a large cost you could currently get a
specialized flexible display. However, the cost is still too large for
widespread commercialization. However, as OLEDs become more
widely accepted as a replacement for LCDs and conventional LEDs,
flexible research will get more investment and large area, low
performance flexible displays will start to appear. With recent
advances in OLEDs as sources of white light and thus possible
replacements for florescent lights in wide-area lighting, it seems only
a matter of time before these products show up on the market.
However, to make high performance OLED, TFT driven, flexible
displays a reality, the reduction in cost of devices from the discovery
of organic or polymer semiconductors with carrier mobility
comparable to that of p-Si:H is necessary. Until then, conventional high-performance LCDs are inexpensive enough to hold back development, even if OLEDs eventually phase out LCDs.
References:
[1] E.A. Al-Nuaimy and J.M. Marshall. Excimer laser crystallization and doping of source and drain regions in high quality amorphous silicon thin film transistors. Applied Physics Letters, 69(25):3857-3859, December 1996.
[2] C.D. Dimitrakopoulos and D.J. Mascaro. Organic thin film transistors: A review of recent advances. IBM Journal of Research & Development, 45(1):11-27, January 2001.
[3] Emmanuel P. Giannelis. Ubiquitous electronics: Why now?, March 2006. A presentation in the Cornell College of Engineering class, MS&E 542: Flexible Electronics.
[4] D.J. Gundlach, C.-C. Kuo, T.N. Jackson, and S.F. Nelson. Organic thin film transistors with field effect mobility > 2 cm2/Vs. Device Research Conference Digest, 57:164-165, June 1997.


A NEW REVOLUTIONARY SYSTEM

TO DETECT HUMAN BEINGS BURIED UNDER

EARTHQUAKE RUBBLE

USING MICROPROCESSOR OR MICROCONTROLLER.

(An Embedded System)


PRESENTED BY

K.KRISHNA KISHORE B.SIVA PRASAD
III B.TECH (E.C.E) III B.TECH (E.C.E)
ROLL: 104411 ROLL: 104416
SVUCE SVUCE
TIRUPATI TIRUPATI

Email: sivaprasad_104416@yahoo.co.in
COLLEGE:

SRI VENKATESWARA UNIVERSITY
COLLEGE OF ENGINEERING,
TIRUPATI 517502 , A.P






ABSTRACT

"Thousands of persons killed" as a result of an earthquake: these words are not just newspaper headlines but news we come across whenever we go through a newspaper or watch the TV news.

A person's life is precious and meaningful to his loved ones.

We, as responsible engineers and as a part of society, felt the need to bring out a system to avoid these mishaps. With the meteoric rise of embedded systems, along with the microprocessor, our designed system helps in preventing deaths and providing safe, guided rescue measures.

A new revolutionary microwave life detection system, which is used to locate
human beings buried under earthquake rubble, has been designed. This system, operating at a certain frequency, can remotely detect the breathing and heartbeat signals of human beings
buried under earthquake rubble. By proper processing of these signals, the status of the
person trapped under the rubble can be easily judged. The entire process takes place within a few seconds
as the system is controlled by a microprocessor (8085) or microcontroller unit.

With the advent of this system, the worldwide death rate may decrease to a great extent, as a large percentage of deaths occur due to earthquakes.


We welcome you and wish you a pleasant journey through this paper.
INTRODUCTION:
At present as we all know the need of the hour is to find an effective
method for rescuing people buried under earthquake rubble (or) collapsed building. It has
to be done before we experience another quake. Present methods for searching and
rescuing victims buried (or) trapped under earthquake rubble are not effective. Taking all these factors into account, a system which will be really effective in solving the problem has
been designed.
BLOCK DIAGRAM:

PRINCIPLE OF OPERATION:
The basic principle is that when a microwave beam of certain frequency
[L (or) S band (or) UHF band] is aimed at a portion of rubble (or) collapsed building
under which a person has been trapped, the microwave beam can penetrate through the
rubble to reach the person.
When the microwave beam reaches the person, the reflected wave from the person's body will be modulated (or) changed by his/her movements, which include breathing and heartbeat. Simultaneously, reflected waves are also received from the
collapsed structures.
So, if the reflected waves from the immovable debris are cancelled and the
reflected wave from the person's body is properly distinguished, the breathing and
heartbeat signals can be detected.
By proper processing of these signals, the status of the trapped person
can be easily judged. Thus a person under debris can be identified.


MAJOR COMPONENTS OF THE CIRCUIT:
The microwave life detection system has four major components. They are
1. A microwave circuit which generates, amplifies and distributes microwave
signals to different microwave components.
2. A microprocessor-controlled clutter cancellation system, which creates an optimal
signal to cancel the clutter from the rubble.
3. A dual antenna system, which consists of two antennas, energized sequentially.
4. A laptop computer which controls the microprocessor and acts as the monitor

WORKING FREQUENCY:
The frequency of the microwave falls under two categories, depending on the type
and nature of the collapsed building. They are
1. L (or) S band frequency say 1150 MHz
2. UHF band frequency say 450 MHz

Let us see the advantages and disadvantages of both the systems later.






CIRCUIT DESCRIPTION:
The circuit description is as follows:
Phase locked oscillator:
The phase locked oscillator generates a very stable electromagnetic wave say
1150 MHz with output power say 400mW.
Directional coupler 1 (10 dB):
This wave is then fed through a 10 dB directional coupler and a circulator before
reaching a radio frequency switch, which energizes the dual antenna system. Also, the ten dB
directional coupler branches out one-tenth of the wave (40mW) which is then divided equally
by a directional coupler 2 (3 dB).
Directional coupler 2 (3 dB):
One output of the 3 dB directional coupler 2 (20mW) drives the clutter
cancellation unit. Other output (20mW) serves as a local reference signal for the double
balanced mixer.
Antenna system:
The dual antenna system has two antennas, which are energized sequentially by
an electronic switch. Each antenna acts separately.
Clutter cancellation system:
The clutter cancellation unit consists of
1. A digitally controlled phase shifter I
2. A fixed attenuator
3. A RF amplifier
4. A digitally controlled attenuator.
WORKING:
Clutter cancellation of the received signal:
1 ) The wave radiated by the antenna I penetrates the earthquake rubble to reach the buried
person.
2 ) The reflected wave received by the antenna 2 consists of a large reflected wave from the rubble and a small reflected wave from the person's body.
3 ) The large clutter from the rubble can be cancelled by a clutter-cancelling signal.
4 ) The small reflected wave from the person's body cannot be cancelled by a pure sinusoidal cancelling signal because his/her movements modulate it.
5 ) The output of the clutter cancellation circuit is automatically adjusted to be of equal
amplitude and opposite phase as that of the clutter from the rubble.
6 ) Thus, when the output of the clutter cancellation circuit is combined with the directional
coupler 3 (3 dB), the large clutter from the rubble is completely cancelled.
7 ) Now, the output of the directional coupler 3 (3 dB) is passed through a directional coupler
4 (6 dB).
8 ) One-fourth of the output directed is amplified by a RF pre-amplifier and then mixed with
a local reference signal in a double balanced mixer.
9 ) Three-fourths of the output is directed to a microwave detector to provide a dc output,
which serves as the indicator for the degree of the clutter cancellation.
10 ) When the settings of the digitally controlled phase shifter and the attenuator are swept by the microprocessor control system, the output of the microwave detector varies accordingly.


Demodulation of the clutter cancelled signal:
At the double balanced mixer, the amplified signal of the reflected wave from the person's
body is mixed with the local reference signal.
The phase of the local reference signal is controlled by another digitally controlled phase
shifter (phase shifter 2) for an optimal output from the mixer.
The output of the mixer consists of the breathing and heartbeat signals of the human plus some
unavoidable noise.
This output is fed through a low frequency amplifier and a band-pass filter (0.4 Hz) before
being displayed on the monitor.
The function of the digitally controlled phase shifter 2 is thus to control the phase of the local
reference signal so as to increase the system sensitivity.
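The band-pass filtering stage can be illustrated with the hedged sketch below, which keeps only the 0.2-3 Hz band quoted later for human breathing and heartbeat signals; the sampling rate, filter order and the synthetic test signal are assumptions made for the example.

import numpy as np
from scipy.signal import butter, filtfilt

def heartbeat_breathing_filter(mixer_output, fs=50.0, low=0.2, high=3.0, order=4):
    """Band-pass the demodulated mixer output to keep only the 0.2-3 Hz
    band that contains human breathing and heartbeat signals.
    fs (sampling rate) and the filter order are assumptions for illustration."""
    b, a = butter(order, [low / (fs / 2.0), high / (fs / 2.0)], btype="band")
    return filtfilt(b, a, mixer_output)

# Synthetic test: 0.3 Hz breathing + 1.2 Hz heartbeat buried in broadband noise.
fs = 50.0
t = np.arange(0, 60, 1 / fs)
signal = np.sin(2 * np.pi * 0.3 * t) + 0.3 * np.sin(2 * np.pi * 1.2 * t)
noisy = signal + 0.5 * np.random.randn(t.size)
clean = heartbeat_breathing_filter(noisy, fs)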
MICROPROCESSOR CONTROL UNIT:
The algorithm and flowcharts for the antenna system and the clutter cancellation
system are as follows:
Antenna system:
1. Initially the switch is kept in position 1 (the signal is transmitted through antenna 1).
2. Wait for a predetermined sending time, Ts.
3. The switch is then thrown to position 2 (the signal is received through antenna 2).
4. Wait for a predetermined receiving time, Tr.
5. Go to step 1.
6. Repeat the above procedure for a predetermined total time, T.

Clutter cancellation system:
1. Send the signal to the rubble through antenna 1.
2. Receive the signal from the rubble through antenna 2.
3. Check the detector output. If it is within the predetermined limits go to step 5.
4. Otherwise send the correction signal to the digitally controlled phase shifter 1
and attenuator and go to step 1.
5. Check the sensitivity of the mixer. If it is optimal, go to step 7.
6. Otherwise send the correction signal to the digitally controlled phase shifter 2
to change the phase and go to step 1.
7. Process the signal and send it to the laptop.
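The following sketch puts the two algorithms above into code. It is only an illustration of the control flow: all hardware-access functions (set_switch, read_detector, set_phase_shifter, set_attenuator, mixer_sensitivity_ok, process_and_send) and all timing and threshold values are hypothetical placeholders for the real microprocessor firmware.

import time

TS, TR, TOTAL = 0.01, 0.01, 60.0          # assumed send/receive/observation times (s)
DETECTOR_LIMIT = 0.05                     # assumed "clutter cancelled" threshold

def antenna_cycle(set_switch):
    """Antenna algorithm, steps 1-6: alternate transmit/receive antennas."""
    start = time.time()
    while time.time() - start < TOTAL:
        set_switch(1); time.sleep(TS)     # transmit through antenna 1
        set_switch(2); time.sleep(TR)     # receive through antenna 2

def cancel_clutter(read_detector, set_phase_shifter, set_attenuator,
                   mixer_sensitivity_ok, process_and_send):
    """Clutter-cancellation algorithm, steps 1-7: sweep phase/attenuation until
    the detector output (degree of clutter cancellation) is small enough, then
    tune phase shifter 2 for mixer sensitivity and forward the data."""
    for phase in range(0, 360, 2):
        for attenuation in range(0, 32):
            set_phase_shifter(1, phase)
            set_attenuator(attenuation)
            if read_detector() < DETECTOR_LIMIT:
                for phase2 in range(0, 360, 2):
                    set_phase_shifter(2, phase2)
                    if mixer_sensitivity_ok():
                        process_and_send()
                        return True
    return False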












FLOW CHART FOR ANTENNA SYSTEM
(flowchart figure not reproduced)

FLOW CHART FOR CLUTTER CANCELLATION SYSTEM:
(flowchart figure not reproduced)

ADVANTAGES OF THE L (OR S) BAND FREQUENCY SYSTEM:
Microwaves of L (or S) band frequency can penetrate rubble containing metallic
mesh more easily than UHF-band waves.
ADVANTAGES OF THE UHF BAND FREQUENCY SYSTEM:
Microwaves of UHF band frequency can penetrate deeper into rubble (without
metallic mesh) than L- or S-band waves.
FREQUENCY RANGE OF BREATHING AND HEARTBEAT SIGNAL:
The frequency range of heartbeat and breathing signals of human beings lies
between 0.2 and 3 Hz.
HIGHLIGHTS:
1. The location of the person under the rubble can be estimated by calculating the
time lapse between the sending time, Ts, and the receiving time, Tr.
2. Since it will not be possible to watch the system continuously in critical
situations, an alarm system has been added: whenever the laptop computer processes the
received signal and identifies that there is a human being, an alarm sounds.
3. Also, in critical situations where living beings other than humans are not
required to be located, the system can distinguish the signals of other living beings based on the
frequency of their breathing and heartbeat signals.






CONCLUSION:

Thus a new, sensitive life detection system using microwave radiation for locating
human beings buried under earthquake rubble or hidden behind various barriers has been
designed. Operating at either L (or S) band or UHF band, the system can detect the breathing
and heartbeat signals of human beings buried under earthquake rubble.


VIRTUAL REALITY
EXPERIENCE TO THE REAL WORLD



PRESENTED BY:


V.MURALI KRISHNA B.DINAKAR
ECE ECE
murli_vemula@yahoo.com dina_njoyy@yahoo.co.in










BAPATLA ENGINEERING COLLEGE
BAPATLA

1. INTRODUCTION
Virtual Reality uses a large
number of technologies to produce a
virtual world in which a user can interact
with and manipulate virtual objects. With
the aid of specially developed gadgets
such as a Head Mounted Display, an
Electronic Glove and mechanical
armatures that fit the human body, we
can immerse the human in the virtual
world. Simulation techniques are
combined with the motion of the human
to produce the output the human expects.
For example, when a person in a
virtual world looks at a particular object,
he has to get the feeling that he is
actually looking at that object, and he
should also hear the sounds
that come from that object. The term
Virtual Reality is an oxymoron: it
places two words that contradict
each other side by side.
Virtual Reality means a collection of
technologies to many people, and the
term has many contexts. One definition
of Virtual Reality is given in the book
The Silicon Mirage.

The most difficult part of
Virtual Reality is producing the
interaction between the virtual world and
the human, not the production of the
virtual world itself. The type of Virtual
Reality in which the human is actually
immersed in the virtual world is called
immersive Virtual Reality. In such
Virtual Reality the human is
completely isolated from the outside
world and is placed in an entirely
computer-generated world. The
applications being developed for Virtual
Reality cover a wide range of uses, among
which real-time applications occupy
a prominent place.
2 CLASSIFICATION OF
VIRTUAL REALITY
After a deep study of this emerging
technology, we have classified Virtual
Reality into three types.

2.1 VIRTUAL REALITY
USING SOFTWARE
The first category of Virtual
Reality that we consider is Virtual
Reality using software. Here, many tools
are used to develop the virtual worlds.
The most commonly used tools for
developing 3D worlds are VRML v1.0,
VRML97, VRML v2.0, 3D Studio Max,
Rhino3D, Amapi3D, ALICE99,
BLENDER and other such software.
VRML v1.0 was derived from the Open
Inventor file format (its later, XML-based
successor is X3D). There are few
differences between the later versions of
VRML (VRML97 and VRML v2.0), but
the programming paradigm changed
considerably from VRML v1.0 to
VRML v2.0. Many companies are
dedicated to developing tools for
creating virtual worlds, such as Parallel
Graphics and Trapezium. There are also
several approaches to developing the
virtual worlds using software.
They are:
2.1.1 RENDERING
This approach concentrates on
rendering techniques. Here the
technique of wire framing is used: a
wire-frame model of the object is first
constructed. After developing the wire-
frame model, all that remains is to
apply a texture to it. This is
called texturing. The texture applied can
be a photograph or any predefined
texture such as metal, rock, wood or
cement flooring.
2.1.2 PROGRAMMING
The other way of
developing virtual worlds is by
programming. There are many
programming languages in which
virtual worlds can be developed; the one
we prefer is VRML v2.0.
Prior to this language, people used to
develop virtual worlds using the
traditional programming language
Java. VRML stands for Virtual
Reality Modeling Language, and
VRML v2.0 is considerably more
advanced than version 1.0.
A simple illustrative example is
sketched below. For viewing virtual
worlds written in VRML we need a
plug-in for the Internet Explorer or
Netscape Navigator; only with such a
plug-in can we interact with the virtual
world. More information regarding
browser plug-ins, parsers and editors
is available at
www.vrml.org/vrml/.
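Since the simple programs referred to above are not reproduced here, the following Python sketch writes out a minimal VRML 2.0 world of the kind such a program would contain (a single coloured box); the node names follow the published VRML 2.0 syntax, and the file name is arbitrary.

# Writes a minimal VRML 2.0 world (a red box) of the kind referred to above.
minimal_world = """#VRML V2.0 utf8
Shape {
  appearance Appearance {
    material Material { diffuseColor 1 0 0 }   # red surface
  }
  geometry Box { size 2 2 2 }                  # a 2 x 2 x 2 box
}
"""

with open("hello_world.wrl", "w") as wrl:
    wrl.write(minimal_world)

Opening the resulting .wrl file with one of the browser plug-ins mentioned above displays the box and lets the user walk around it.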
2.2 VIRTUAL REALITY
USING HARDWARE
2.2.1 MANIPULATION AND
CONTROL DEVICES
It is well known that there are
three axes: X, Y and Z. In order to track
the motion of an object in the virtual
world, we need to track its motion
along all three axes.
These calculations may introduce some
latency, and latency is a major
problem in any Virtual Reality system.
The simplest hardware devices for
Virtual Reality are the mouse, the track
ball and the joystick. These are the
conventional devices used for Virtual
Reality with 2D systems, and creative
programming can make them useful for
3D and 6D control. There are also a
number of 3D and 6D mice, track balls
and joysticks; these have extra buttons
for controlling not only the X-Y
translations of the pointer but also its Z
translation and its rotation. Another
commonly used motion-tracking device
is the electronic glove. An electronic
glove differs from a normal glove in
that it carries a number of sensors:
optical fiber sensors are used for
tracking the motion of the fingers and
magnetic sensors are used for tracking
the rest of the arm.
2.2.2 POSITION TRACKING
There are many tools
available for tracking the position of an
object in the virtual world; some of
them are described below. Mechanical
armatures are the most commonly used:
they provide very accurate and fast
results. Some of them look like a table
lamp while others look like exoskeletons
fitted to objects in the virtual world.
Their main disadvantage is that they are
cumbersome. Ultrasonic sensors are also
commonly used. Here a set of
emitters and receivers is used, and the
measured time lags, combined with the
known emitter-receiver geometry, give
the position. Though these are quite
effective, their main disadvantages are
the time lags and the echoes that come
from the surroundings.
Magnetic sensors use a coil to sense the
deflections produced in the field due to
the change in the position of the object;
the strengths and angles of these
deflections are used to determine the
position.
2.2.3 STEREO VISION
Stereo vision is a technique in
which two different images are
generated, one for each eye. The two
images may differ slightly or may be
completely identical. A head-mounted
display with one LCD for each eye is
commonly used, and the images have to
be computed for the appropriate distance
and angle at each moment. There are a
number of techniques for presenting the
two images. The two images can be
placed side by side and the viewer asked
to cross his eyes. Using two differently
polarized filters, we can project the two
images. Another technique is to place a
shuttered display before each eye; by
closing each shutter at the appropriate
instants, the user can be made to
perceive depth in the image.
A serious health concern with stereo
vision is that the eyes can be affected
adversely.


2.3 MAPPING AND STITCHING
There is also software for
producing virtual 3D photos, such as
PHOTOMODELER. The technique
used to develop a 3D photo is stitching:
for a given node, several photos are
taken and stitched together. The user
gets the sense that he is standing in the
middle and viewing the 3D photo by
turning around on the spot. QuickTime
is used to view these photos. One such
model can be seen at
http://www.bbc.co.uk/history/multimedi
a_zone/3ds/index.shtml
3 MORE ABOUT IMMERSIVE
VIRTUAL REALITY
The characteristics of immersive Virtual
Reality (IVR) can be summarized as
follows.
1) Immersive Virtual Reality adds
special gadgets like head mounted
displays, Booms and multiple large
projection areas to give a more
immersive feeling.
2) Stereoscopic viewing adds enhanced
features like peering deep into the
virtual world.
3) Eliminating the real world and
placing the human in a computer-
generated world is one of the
enhancements.
4) The convincing factor of IVR is
the auditory, haptic, touch and other
non-visual senses.
5) Interactions with the objects in the
virtual world are controlled by a
data glove, head mounted display
and other gadgets.
3.1 HEAD MOUNTED DISPLAY
The Head mounted display consists of
two miniature display screens that
produce the stereoscopic images, and an
optical position tracking system that
tracks the orientation of the user's head
in the virtual world and feeds it to the
image-generating computer. The image-
generating computer then produces the
view corresponding to the orientation of
the user's head in the virtual world. This
is the basic device used in IVR. As a
result the user can look in whatever
direction he wants and can walk through
the virtual world; a head mounted
display thus provides capabilities like
walk-through, look-around and fly-
through in the 3D virtual world.
3.2 BOOM AND CAVE
To overcome the intrusiveness
of the HMD, the Boom and the Cave are
used; these are also extensively used in
IVR. In the Boom, screens and the
stereoscopic image-generating apparatus
are fixed in a box which is attached to a
multi-link arm. The user peeps into the
virtual world through two holes and
controls his motion with the arm. The
Cave is another interesting device in
IVR. A Cave consists of a cube-shaped
room: the stereoscopic images are
projected onto the walls and the floor of
the room with the help of a number of
projectors. The head tracking system
worn by the leading user controls the
view of the virtual world, and several
users may share the virtual world at a
time.
4. REAL TIME APPLICATIONS
4.1 VIRTUAL REALITY IN
WAR STRATEGIES
SIMNET was the first war-related
Virtual Reality application. This project
is a standardization effort pushed by the
US Defense Department to enable
diverse simulators to be interconnected
into a vast network. Soldiers can be
trained for war by developing a virtual
world that looks exactly like the war
field, which helps them learn how to act
on the battlefield. The Distributed
Interactive Simulation (DIS) protocol
has been developed by the Orlando
Institute of Training and Simulation and
represents the future of Virtual Reality
in war strategies.
4.2 VIRTUAL REALITY IN
COCKPIT SIMULATION
The next interesting step in
Virtual Reality is cockpit simulation.
This is used in training the pilots. With
the help of Virtual Reality the entire
cockpit is simulated. The pilot is placed
in the thus developed Virtual world and
the computer guides him by giving the
maps and the feeling that he is piloting
an actual flight. This helps him in
dealing with the critical conditions that
may arise while he is piloting an actual
flight. The data glove plays a vital role
in this type of training. A variety of
instruments along with the data glove are
used. The maps generated by the
computers and the data available with
the pilot are correlated to know the
paths.
4.3 VIRTUAL REALITY IN
MEDICAL APPLICATIONS
Virtual Reality is now being used
to train physicians to carry out intricate
surgical procedures such as
laparoscopies, arthroscopies,
endoscopies and other minimally
invasive surgeries. Virtual Reality
provides a view of the surgical field
normally blocked during such
procedures and enables trainees to get
much needed practice in the left-right
motion inversion obligatory for the
operation of instruments in minimally
invasive surgeries. In contrast to the use
of cadavers, which can only be dissected
once and which are becoming more
difficult to procure, it allows students to
access libraries of 3-D images of healthy
and pathologic body tissue at their
convenience and to rehearse the same
procedures repeatedly.
Nanosurgery is another medical
application, in which doctors located at
a distant place guide robots. They guide
the robots with the help of the multi-
link arms that we have already seen in
the case of Booms.
4.4 VIRTUAL REALITY IN
DESIGNING ASPECTS
Virtual Reality helps in designing
virtual models of certain objects. By
building the virtual models we can see
how the model works, what the defects
may be and how we can overcome
previous defects. None of this can be
seen by actually building the model, as
that involves a lot of cost and time. This
use of Virtual Reality is most common
in the design of conceptual cars.

Concept cars are being designed to study
new ideas. Most of these designs are
never built. Virtual reality provides a
tool for evaluating such designs in full
scale without building time consuming
and costly physical prototypes.
4.5 VIRTUAL REALITY IN
AMUSEMENT PARKS
Virtual Reality also plays a
vital role in amusement parks. The
conceptual cars discussed above and
racing games are being developed to
attract visitors; with the help of
electronic gloves, head-mounted
displays and stereoscopic vision, these
racing games attract people.
5 VIRTUAL REALITY
TRANSFERRING PROTOCOL
The capabilities of the Virtual
Reality Modeling Language (VRML)
permit building large-scale virtual
environments using the Internet and the
World Wide Web. However the
underlying network support provided by
the hypertext transfer protocol (http) is
insufficient for large-scale virtual
environments. Additional capabilities for
many-to-many peer-to-peer
communications plus network
monitoring need to be combined with the
client-server capabilities of http. To
accomplish this task, we present a
detailed design rationale for the virtual
reality transfer protocol (VRTP). VRTP
is designed to support interlinked VRML
worlds in the same manner as http was
designed to support interlinked HTML
pages. VRTP will be optimized in two
ways: on individual desktops and across
the Internet. VRTP appears to be a
necessary next step in the deployment of
all encompassing interactive inter-
networked 3D worlds.
6 FUTURE OF VIRTUAL
REALITY
Yesterday Virtual Reality was a
science fiction fantasy. Today it is a
research topic in laboratories and an
attraction in amusement parks.
Tomorrow it may well replace our
televisions and computers. Much
research is being done to find more and
more applications of Virtual Reality,
and in the coming years web sites
developed using Virtual Reality may
transform the present web industry.
7 CONCLUSIONS
The ability of Virtual Reality to
produce realistic worlds of data and
objects, with which users can interact
and which they can manipulate in a
realistic and intuitive manner, opens up
a vast wealth of possibilities for work-
related applications. The concept of
Virtual Reality provides an innovative
mix of entertainment, education and
state-of-the-art technology. From
waterbeds to gyroscopes and hydraulic
units, a variety of platforms will provide
a new kind of travel into cyberspace,
into virtual worlds where one can swim
with dolphins and experience intense
sensory stimulation. In fields like
medicine, rocket launching, massive
construction, design and modeling, war
training and cockpit training, it is very
important to be precise and accurate,
and here Virtual Reality provides a
solution by offering a platform that
makes such precision possible.


A HIGH RESOLUTION SYNTHETIC APERTURE RADAR







by,









K. sravan kumar G. BALAJI
Mail ID: Mail ID:
Sravan19_kumar@yahoo.co.in balaji_105@yahoo.co.in
Ph no:9395350372 Ph no: 9985295858







Department of Electronics and Communication
Engineering
G.M.R.I.T,
Rajam,
Srikakulam Dist.




Lynx: A high-resolution synthetic aperture radar



ABSTRACT:
Lynx is a high resolution, synthetic aperture radar (SAR) that has been designed and
built by Sandia National Laboratories in collaboration with General Atomics (GA). Although
Lynx may be operated on a wide variety of manned and unmanned platforms, it is primarily
intended to be fielded on unmanned aerial vehicles. In particular, it may be operated on the
Predator, I-GNAT, or Prowler II platforms manufactured by GA Aeronautical Systems, Inc.
The Lynx production weight is less than 120 lb. and has a slant range of 30 km (in 4 mm/hr
rain). It has operator selectable resolution and is capable of 0.1 m resolution in spotlight mode
and 0.3 m resolution in stripmap mode. In ground moving target indicator mode, the minimum
detectable velocity is 6 knots with a minimum target cross-section of 10 dBsm. In coherent
change detection mode, Lynx makes registered, complex image comparisons either of 0.1 m
resolution (minimum) spotlight images or of 0.3 m resolution (minimum) strip images. The Lynx
user interface features a view manager that allows it to pan and zoom like a video camera. Lynx
was developed under corporate funding from GA and will be manufactured by GA for both
military and commercial applications. The Lynx system architecture will be presented and some
of its unique features will be described. Imagery at the finest resolutions in both spotlight and
strip modes has been obtained and will also be presented.
Keywords: Synthetic Aperture Radar, SAR, Remote Sensing, UAV, MTI, GMTI, CCD
1. INTRODUCTION
Lynx is a state of the art, high resolution synthetic aperture radar (SAR). Lynx was
designed and built by Sandia National Laboratories and incorporates General Atomics design
requirements to address a wide variety of manned and unmanned missions. It may be operated
on the Predator, I-GNAT, or Prowler II platforms which are manufactured by General Atomics
(GA). It may also be operated on manned platforms. Lynx was developed entirely on GA
corporate funds. GA is presently beginning the manufacture of Lynx and intends to sell Lynx
units and Lynx services to military and commercial customers. Lynx is a multimode radar. Its
SAR modes include a spotlight mode and two stripmap or search modes. In addition, Lynx has a
ground moving target indicator (GMTI) mode. Lynx also features a coherent change detection
(CCD) mode which can indicate minute changes between two SAR images taken at different times.
CCD may be performed with either spotlight or stripmap images. Lynx also features a uniquely
flexible user interface. The user interface features a view manager that allows Lynx to pan and
zoom like a video camera. Lynx also features a conventional waterfall display for stripmap
imagery. Lynx operates at Ku band and is capable of 0.1 m resolution in spotlight mode and 0.3
m resolution in stripmap mode. It has a slant range of 30 km in weather and weighs less than
125 lb.


Figure 1. GA Aeronautical Systems, Inc. I-GNAT UAV
2. SYSTEM DESIGN
The Lynx SAR was designed for operation on a wide variety of manned and unmanned
aircraft. In particular, it can be operated from the Predator, I-GNAT, and Prowler II platforms
manufactured by GA. During system integration testing it was operated on board Sandia's
DOE DeHavilland DH-6 Twin Otter aircraft. The Lynx SAR operates in the Ku band anywhere
within the range 15.2 GHz to 18.2 GHz, with 320 W of transmitter power. It is designed to
operate and maintain performance specifications in adverse weather, using a Sandia-derived
weather model that includes 4 mm/hr rainfall. It forms fine-resolution images in real time and
outputs both NTSC video and digital images. The Lynx SAR has four primary operating modes,
described as follows.
SAR Geo-Referenced Stripmap Mode:
In Geo-ref Mode, the operator specifies a precise strip on the ground to be imaged. The SAR
then patches together a continuous and seamless string of images to yield the strip until the
specified end-point is reached or the radar is commanded to do otherwise. The aircraft is not
constrained to fly parallel to the strip, and images can be formed on either side of the aircraft.
Specifications for this mode are given in Table 1.









SAR Transit Stripmap Mode:
In Transit Mode, the operator specifies a range from the aircraft to the target line and
the SAR forms a stripmap parallel to the aircraft's flight path. The SAR then patches
together a continuous and seamless string of images to yield the strip, and will continue to do so
until commanded otherwise, or until the vehicle deviates too far from the original flight path.

In the event of such a deviation, a new Transit Stripmap will begin immediately. In all
other respects, the performance of Transit Mode Stripmap is identical to Geo-ref Stripmap
Mode.


SAR Spotlight Mode:
In Spotlight Mode, the operator specifies the coordinates of a point on the ground and the
SAR dwells on that point until commanded otherwise, or until the imaging geometry is exceeded.
As with Stripmap modes, imaging may be on either side of the aircraft.
This mode allows finer resolutions than the Stripmap modes. Performance is summarized
in Table 2.
In addition, an auto-zooming feature is supported, in which subsequent images are
formed at ever finer resolutions until the SAR's limits are reached or the radar is commanded to
do otherwise.

Ground Moving Target Indication GMTI:
The relatively slow velocities of UAVs allow fairly simple exo-clutter GMTI schemes to
offer reasonably good performance. The Lynx GMTI mode allows scanning over
270 degrees with performance summarized in Table 3.




Coherent Change Detection CCD:

Coherent Change Detection is a technique whereby two SAR images of the same scene
are interfered. Any changes in the complex reflectivity function of the scene are manifested as a
decorrelation in the phase of the appropriate pixels between the two images. In this manner,
even very subtle changes in the scene from one image to the next can be detected. Necessarily,
the images themselves must remain complex for this to work.[3]

In the SAR modes, the radar can output complex (undetected) images that are necessary
for Coherent Change Detection to work. These images can be transmitted to the ground station
where ground-processing of the current image along with a library image allows near-real-time
detection of changes in the scene. This operates with either Stripmap or Spotlight SAR images.
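The kind of computation CCD relies on can be sketched as follows. This is a generic sample-coherence estimate between two co-registered complex images, not Sandia's implementation: coherence values near 1 indicate an unchanged scene, while low values flag change.

import numpy as np
from scipy.signal import fftconvolve

def coherence_map(img1, img2, win=5):
    """Sample coherence between two co-registered complex SAR images,
    estimated over a win x win sliding window. Values near 1 indicate an
    unchanged scene; low values indicate change (decorrelation)."""
    kernel = np.ones((win, win))
    box = lambda x: fftconvolve(x, kernel, mode="same")   # local window sum
    num = box(img1 * np.conj(img2))
    den = np.sqrt(np.abs(box(np.abs(img1) ** 2) * box(np.abs(img2) ** 2))) + 1e-12
    return np.abs(num) / den

# Synthetic example: identical speckle except for one "changed" patch.
rng = np.random.default_rng(0)
scene = rng.standard_normal((128, 128)) + 1j * rng.standard_normal((128, 128))
later = scene.copy()
later[40:60, 40:60] = rng.standard_normal((20, 20)) + 1j * rng.standard_normal((20, 20))
gamma = coherence_map(scene, later)   # low coherence only inside the changed patch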


User Interface:
Consistent with the philosophy for other sensors of the GA UAV family, the user
interface for the SAR was designed to allow easy operation by an operator with minimal
radar-specific knowledge. The operator selects resolution and operating mode, and then basically
points and shoots the radar, much like the optical sensors. Radar images are transmitted to the
radar operator by either of two means. The first is an NTSC video link which allows the SAR to be
treated as just another sensor to a UAV payload operator. The radar actually forms larger
images than can be displayed over the NTSC video link, but novel View Manager software
allows the operator to pan and zoom within its memory. Images may be saved in on-board
buffers for later viewing. The second means of image transmission is a digital data link that can
transmit an entire image at full resolution. This data can then be formatted to comply with the
National Imagery Transmission Format, NITFS 2.0. Target coordinates are easily extracted
from any SAR image to facilitate pointing the SAR for new images. In GMTI mode, locations of
detected movers are transmitted for display on map overlays.

3. HARDWARE
While SARs tend to be fairly complex instruments, a primary goal for Lynx was ease of
manufacture. This drove all aspects of design. The system has been designed as two relatively
generic packages. These are the Radar Electronics Assembly (REA) and the Sensor Front-End
or Gimbal Assembly. The combined weight is currently about 125 lb with some variance due to
different cable assemblies for different platforms.
Radar Electronics Assembly REA:
The REA contains radar control, waveform generation, up-conversion, receiver, video,
ADC, and signal processing functions. These functions exist in a custom VME chassis, with
individual boards/assemblies roughly divided as follows. The RF/Microwave functions are
within a set of five VME boards/assemblies. These include the STALO module, Upconverter
module, Ku-Band module, Receiver module, and the RF interconnect module. The only major
RF/microwave functions not found in these modules are the transmitter TWTA and the receiver
LNA. Digital Waveform Synthesis (DWS) is accomplished by a custom VME board that
generates a chirp with 42-bit parameter precision at 1 GHz. Although the board is custom, all
components are off-the-shelf.
The Analog-to-Digital Conversion (ADC) is also accomplished by a custom VME board
that operates at 125 MHz and provides 8-bit data. This data can be presummed and otherwise
pre-processed before being sent across a RACEway bus to the signal processor. The Signal
Processor consists of 16 Mercury Computer Systems RACEway-connected 200 MHz PowerPC
nodes. These implement a scalable architecture for image formation. Fewer nodes may be
installed for a less capable SAR system. Four additional nodes are used for other radar functions
including motion measurement, radar control, and optional data recording.


Gimbal Assembly:
The Gimbal assembly contains antenna, motion measurement hardware, and front-end
microwave components including the TWTA. The gimbal itself is a 3-axis gimbal custom
designed and built by Sandia specifically for the Lynx radar. All components are mounted on
the inner gimbal. The antenna was custom designed at Sandia specifically for the Lynx radar. It
is a vertically polarized horn-fed dish antenna with a 3.2 degree azimuth beamwidth and a 7
degree elevation beamwidth.

Motion measurement is a Carrier-Phase-GPS-aided Inertial Navigation System centered around
a Litton LN-200 Fiber Optic IMU. This is augmented by an Interstate Electronics Corporation
GPS receiver. The front-end microwave components include a TWTA capable of outputting 320
W at 35% duty factor averaged over the Lynx frequency band, and an LNA that allows an
overall system noise figure of about 4.5 dB.
4. IMAGE FORMATION
Image formation in all SAR modes is accomplished by stretch processing[2], that is, de-
ramping the received chirp prior to digitizing the signal. After the ADCs, presumming is
employed to maximize SNR in the image and minimize the load on the subsequent processors.
The algorithm used thereafter is an expanded version of the Sandia developed Overlapped-
Subaperture (OSA) processing algorithm[1], followed by Sandia developed Phase-Gradient
Autofocus (PGA)[3]. Either complex images or detected images can be exported to View Manager
software.
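The de-ramp idea behind stretch processing can be illustrated with the short sketch below: mixing the received chirp with a reference chirp turns the range delay into a constant beat frequency, which an FFT then resolves. All radar parameters in the sketch are invented for the example and are not Lynx values.

import numpy as np

fs = 50e6                  # assumed ADC sampling rate (Hz)
T  = 100e-6                # assumed pulse length (s)
K  = 5e11                  # assumed chirp rate (Hz/s)
c  = 3e8
t  = np.arange(0, T, 1 / fs)

def chirp(delay=0.0):
    return np.exp(1j * np.pi * K * (t - delay) ** 2)

tau = 2 * 1500.0 / c                         # echo delay for a target at 1500 m
deramped = chirp(tau) * np.conj(chirp(0.0))  # mix echo with reference chirp
spectrum = np.fft.fftshift(np.fft.fft(deramped * np.hanning(t.size)))
freqs = np.fft.fftshift(np.fft.fftfreq(t.size, 1 / fs))

beat = freqs[np.argmax(np.abs(spectrum))]    # beat frequency ~ -K * tau
print("recovered range: %.1f m" % abs(beat * c / (2 * K)))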
5. MOTION MEASUREMENT/COMPENSATION
Motion measurements are received from an Inertial Measurement system mounted on
the back of the antenna itself. These are augmented by carrier-phase GPS measurements and
combined in a Kalman filter to accurately estimate position and velocity information crucial to
proper motion compensation in the SAR. This processing is done on a single Power PC
processing node.
The motion compensation philosophy for this radar is to perform compensation as early
as possible in the signal path. Transmitted waveform parameters are adjusted, as well as pulse
timing, to collect optimal data on the desired space-frequency grid. This is prior to digital
sampling, and minimizes the need for subsequent data interpolation.[4] During image formation,
residual spatially variant phase errors are compensated as spatial coordinates become available
during OSA processing. Finally, any errors due to unsensed motion are mitigated by an
autofocus operation.
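As a toy illustration of blending inertial predictions with GPS fixes, the sketch below runs a one-dimensional constant-velocity Kalman filter; the real navigation filter is multi-axis and uses carrier-phase GPS measurements, so the matrices and noise values here are assumptions chosen only to show the predict/update structure.

import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition (position, velocity)
H = np.array([[1.0, 0.0]])                # only position is measured
Q = np.diag([1e-3, 1e-2])                 # assumed process noise
R = np.array([[4.0]])                     # assumed GPS position variance (m^2)

x = np.array([[0.0], [10.0]])             # initial state: 0 m, 10 m/s
P = np.eye(2)

rng = np.random.default_rng(1)
for k in range(1, 101):
    # predict (inertial propagation)
    x = F @ x
    P = F @ P @ F.T + Q
    # update with a noisy GPS fix of the true position (true velocity 10 m/s)
    z = np.array([[10.0 * k * dt + rng.normal(0, 2.0)]])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print("estimated position %.1f m, velocity %.2f m/s" % (x[0, 0], x[1, 0]))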
6. FLIGHT TESTS
Flight tests began in July 1998 with the radar mounted in Sandia's DOE Twin Otter
manned aircraft, and continued through February 1999. The first flights in a GA Aeronautical
Systems, Inc. I-GNAT UAV occurred in March 1999. To date, two Lynx SARs have been built
by Sandia. GA is currently constructing a third unit, and will build all subsequent units. The
SAR currently meets its image quality goals and routinely makes high-quality fine-resolution
images. The first CCD images have been processed at the time of this writing. For GMTI mode,
data has been collected and is undergoing analysis to adjust the processing for optimal
performance.





7. SUMMARY:
We have described Lynx - a lightweight, high performance SAR. Lynx operates in Ku
band and features spotlight and stripmap SAR modes, a GMTI mode, and CCD. In spotlight
mode it is capable of 0.1 m resolution, while in stripmap mode it is capable of 0.3 m resolution.
At the finest resolution in weather, Lynx has a slant range of 25 km, but at coarser resolution
can be operated up to 45 km. It is designed to be operated on a variety of manned and unmanned
platforms. All the image processing is done in the air and the imagery (real or complex) is
downlinked from an unmanned platform. Phase histories and/or imagery may be recorded in a
manned platform. No post-processing of the imagery is required (except for CCD). The Lynx
production weight is less than 120 lb. Lynx also features a user-friendly mode of operation that
allows the SAR to be used like a video camera. At the time of this writing all SAR image
specifications have been met or exceeded in manned flight tests. Integration into the GA I-GNAT
is proceeding, and the first unmanned flight tests are about to begin.







Submitted by :

E.Sai Sandeep P.S.S.H.Saran

Sandeep_edupuganti@rediffmail.com Career47@gmail.com








THIRD YEAR
ELECTRONICS & COMMUNICATION ENGINEERING
SIR.C.R.REDDY COLLEGE OF ENGINEERING
ELURU 534007







A WIRELESS COMMUNICATION SYSTEM
BETWEEN MOBILE ROBOTS






Abstract:


The paper presents a solution for realizing a communication system between mobile robots. The
state of the art of wireless transmission methods is reviewed. For data transmission, the paper
suggests that so-called Personal Area Network (PAN) devices could be appropriate, because they
allow low-energy, wide-band interconnection within a range of 10-100 meters. The hardware
realization of a microprocessor controller using wireless (Bluetooth) communication is presented.

Two technologies in particular seem to be moving toward an interesting convergence: mobile
robotics and wireless sensor networks. The two main questions here are:

Can a mobile robot act as a gateway into a wireless sensor network?
Can sensor networks take advantage of a robot's mobility and intelligence?














INTRODUCTION:


Mobile robot teams have many useful
applications such as search and rescue,
exploration and hazard detection and
analysis. Communication between the robots
of a team, as well as between the robots and a
human operator or controller, is useful in
many applications. Many applications of
mobile robots involve scenarios in which no
communication infrastructure such as base
stations exists, or the existing infrastructure
is damaged. In such scenarios, it is necessary
for the mobile robots to form an ad hoc network
and enable communication by forwarding
each other's packets. In many applications,
group communication can be used for
flexible control, organization and
management of the mobile robots. Multicast
provides a bandwidth-efficient
communication method between a source
and a group of robots.

In many mobile robot applications,
communication infrastructure may be
damaged or not present requiring the mobile
robots to form an ad hoc network using each
other as forwarding nodes to enable
communication. Group communication
among the mobile robots thus requires
protocols that can operate without central
control and handle dynamic topology
changes due to the mobility of the mobile
robots.

Wireless technologies give new
possibilities for creating communication
networks between groups of mobile robots,
as well as widening the application areas of
mobile robots, for example studying
environmental parameters at a distance,
security, entertainment and so on.

Communication between mobile
robots is thus useful and even critical in
many applications, and multicast is the
most important group communication
primitive, being critical in applications
where close collaboration of teams (e.g.
rescue teams, search teams) is needed.

A sensor network provides distributed
computation, communication and sensing
for mobile robots. Figure 1 shows the
connection between mobile devices (robots)
and a sensor network: the sensor network
contributes distributed computation and
distributed sensing (a device stretched out
over the environment), while the mobile
robots contribute actuation, mobility, and
the deployment and maintenance of the
sensor network.


Fig.1 The connection between a sensor
network and mobile robots


The basic features of wireless sensor
networks (WSNs) are:

Large collection of nodes with
sensor/radio and microprocessor self-
organized into a network;

Nodes have microprocessor, radio,
memory and sensors (temperature,
humidity, pressure, vibration,
magnetic, sound, etc.);

Some even have cameras.

WIRELESS SENSOR NETWORK-
ARCHITECTURE AND BASIC
CHARACTERISTICS:

Some of these wireless technologies can
be grouped together as Personal Area
Network technologies.
A personal area network (PAN) is the
interconnection of information technology
devices within the range of an individual
person, typically 10 meters.
PAN technologies include the
following protocols:
IrDA;
Bluetooth 802.15.1;
ZigBee 802.15.4;
Ultra Wide Band 802.15.3a.
Basic advantages of PAN technologies
are :
Short-range;
Low Power;
Low Cost;
Small networks;
Communication of devices within a
Personal Operating Space.
We made a detailed review and
consider the two newest technologies,
ZigBee and Bluetooth, the most appropriate
for realizing a first stage of the
communication system between mobile
devices. Which one of them is better
depends, in general, on the application.

ZigBee is advantageous for a
communication system due to low energy
consumption and battery life, and wider
range. However, it has a relatively limited
bandwidth (20 / 250kbps). In contrast, the
bandwidth of Bluetooth (720 kbps) is about
three times as much as that of Zigbee. This is
achieved, however, at the cost of increased
energy consumption: a Bluetooth-based
system is expected to have autonomy of up to
several days. The ZigBee protocol is
designed for a greater autonomy (several
months) that, however, could be obtained
only in case of low-rate data transmission.

Bluetooth networks have a more
limited range than ZigBee networks (10m.
vs. 100m), which could be problematic for
Bluetooth-based systems in cases of
experiments in a wide natural environment.
However, laboratory test-scenes or cages are
usually limited to several meters, which is
within the range of the basic low-
consumption Bluetooth devices .

Commercial Bluetooth solutions are
available as fully self-contained transceiver
modules. They are designed to be used as
add-on peripherals.

A wireless sensor network (WSN) is a
recent research topic. Such a network is
composed of autonomous and compact
devices called sensor nodes. The availability
of integrated low-power sensing devices,
embedded processors, communication kits
and power equipment is enabling the design
of sensor nodes.

Wireless sensor networks have the
potential for many applications: for
military purposes they can be used for
monitoring, tracking and surveillance of
borders; in industry for factory
instrumentation; in a large metropolis to
monitor traffic density and road conditions;
in engineering to monitor building
structures; and in the environment to
monitor forests, oceans, precision
agriculture, etc. Other applications include
managing complex physical systems like
airplane wings and complex ecosystems.

A sensor node is composed of a power
unit, a processing unit, a sensing unit and a
communication unit. The processing unit is
responsible for collecting and processing the
signals captured by the sensors and
transmitting them to the network. Sensors
are devices that produce a measurable
response to a change in a physical condition
such as temperature or pressure. The
wireless communication channel provides a
medium to transfer signals from the sensors
to the outside world or a computer network,
and also a mechanism for establishing and
maintaining the WSN, which is usually ad
hoc. Power consumption is, and will remain,
the primary design metric for a sensor node.

The sensor node needs to be wireless.
In many applications, the environment being
monitored does not have an installed
communication infrastructure, so the nodes
must use wireless communication channels.
Wireless nodes also make it possible to
install a network simply by deploying the
nodes, and such networks can be used in
many other studies, for example monitoring
the flow of liquid materials.

Each sensor node should be able to
process local data, using filtering and data
fusion algorithms to collect data from the
environment and aggregate it, transforming
it into information.

Figure 2 presents the system
architecture of a generic sensor node. It is
composed of four major blocks: power
supply, communication, processing unit and
sensors. The power supply block consists of
a battery and a DC-DC converter and its
purpose is to power the node. The
communication block consists of a wireless
communication channel; most platforms use
short-range radio, while other solutions
include laser and infrared.

The processing unit is composed of
memory to store data and application
programs, a microcontroller and an analog-
to-digital converter that receives signals
from the sensing block. This last block links
the sensor node to the physical world and
contains a group of sensors and actuators
that depends on the application of the
wireless sensor network.
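A minimal software mirror of this four-block decomposition is sketched below; the class and field names are our own, introduced only to illustrate how the power, communication, processing and sensing blocks fit together, and do not correspond to any real node's firmware.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class PowerSupply:
    battery_voltage: float          # volts, after the DC-DC converter

@dataclass
class Radio:
    def send(self, payload: Dict[str, float]) -> None:
        print("radio tx:", payload)  # stand-in for the wireless channel

@dataclass
class SensingUnit:
    sensors: Dict[str, Callable[[], float]]  # name -> read function (via ADC)

@dataclass
class SensorNode:
    power: PowerSupply
    radio: Radio
    sensing: SensingUnit
    log: List[Dict[str, float]] = field(default_factory=list)

    def sample_and_report(self) -> None:
        """Processing unit: collect readings, store them, send them."""
        readings = {name: read() for name, read in self.sensing.sensors.items()}
        self.log.append(readings)
        self.radio.send(readings)

node = SensorNode(PowerSupply(3.3), Radio(),
                  SensingUnit({"temperature": lambda: 21.5, "pressure": lambda: 101.3}))
node.sample_and_report()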
Figure 2 also illustrates some
challenges for wireless sensor networks.


Fig. 2 Sensor node system architecture and
challenges

Wireless sensor networks are
networks of compact microsensors with
wireless communication capability. A power
management layer is necessary to control the
main resource of a sensor node, its energy.
The power management layer can use
knowledge of the battery's voltage slope to
adapt the system performance dynamically.
Another advantage is that other energy
sources can be added and the power
management can make the best use of the
available energy resources. New network
protocols are also necessary, covering the
link, network, transport and application
layers, to solve problems such as routing,
addressing, clustering and synchronization,
and they have to be energy-efficient.

The base definition of a Wireless
Sensor Network (WSN), illustrated in
Figure 3, is: a collection of cooperating
algorithms (controllers) designed to achieve
a set of common goals, aided by interactions
with the environment through distributed
measurements (sensors) and actions
(actuators).



Fig.3 Wireless Sensor and Actuator
Networks

This is a new venture that is focused
on intelligent mobile robots -- robots that
are used in flexible environments, not
automated toolsets in fixed locations. One
major issue with a mobile robot acting as a
gateway is the communication between the
robot and the sensor network.

HARDWARE REALIZATION:

As pointed out above, the system consists
of several blocks. The choice of
microcontroller is an important part of the
work, because it limits the capability of the
system.

Some of the key characteristics that were
important for choosing the most appropriate
solution were: (a) physical dimensions; (b)
active power; (c) number of ADC channels;
(d) RAM.

The microprocessor module developed
for the communication system is based on
the ATmega128L, one of the most powerful
microcontrollers produced by Atmel. The
ATmega128L is a low-power CMOS 8-bit
microcontroller based on the enhanced AVR
architecture. By executing powerful
instructions in a single clock cycle, the
ATmega128L achieves 1 MIPS per MHz.
There are two UARTs on board: one for
digital communication with the external
Bluetooth module and another for
debugging.

This section presents the developed
microprocessor controller, which supports
the Bluetooth wireless protocol (for
communication over a distance), a 1-wire
bus (for reading data from sensors that
support that protocol) and RS-232 for
connection to the base station or for
preliminary tests. The work is centered on
three important parts: (i) wireless
technologies for remote information
collection; (ii) software tools using an
integrated platform for information services
and information management; and (iii) a
specially designed microprocessor controller
(OCTOPORT_BT) supporting Bluetooth,
the 1-wire bus and RS422/RS232.

Figure 4 presents a general block-scheme of
the microprocessor controller OCTOPORT-
BT. This module is based on microcontroller
ATmega128L (Atmel) and Promi-ESD
(INITIUM) long range Bluetooth modem.



Fig.4 Basic block scheme of the
microprocessor module OCTOPORT_BT

This module (OCTOPORT_BT) has
several main parts:
Microcontroller (ATmega128L);
Bluetooth modem (Promi-ESD01);
Memory (256 K I2C EEPROM);
A power-control scheme for
longer battery life: this scheme
monitors the supply voltage and, if it
is lower than the low limit,
disconnects some functions from the
processor.

The main aim of the developed
microprocessor module is to support a wide
range of applications by providing standard
interfaces for connection to other devices
and to a personal computer. Wireless
communication (based on Bluetooth
technology) gives much more freedom in
node location. Bluetooth is currently
emerging as one of the most promising
personal wireless network technologies,
and the automation industry is also showing
interest in using Bluetooth for more
industrial applications. It makes it possible
to place actuators and sensors without
worrying about the location of the control
node.

For the wireless communication the
Bluetooth modem Promi-ESD 01 is used.
The main reasons for this choice are its
Transmission Power Class 1, its 3~3.3 V
DC power supply, a bandwidth of up to
230 kbit/s and its 41 AT commands for
Bluetooth communication. An additional
advantage of this choice is its dimensions:
27 x 27 mm (Figure 5).

The number of supported interfaces is
one of the advantages of the microprocessor
module. It allows a wide range of different
applications (the system is appropriate for
special measurements, remote measurement
and control, and working with specially
designed intelligent devices, using one of
the supported interfaces).

Fig. 6 shows the module OCTOPORT-
BT. The figure indicates the places to
connect sensors (using the 1-wire bus) and
the Bluetooth modem PromiESD. The
microprocessor module provides data
collection, primary data processing and
control, and transfer of the information.
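As an illustration of how the base station side might consume this data, the hedged sketch below reads lines arriving over the Bluetooth serial link, assuming the modem is configured for transparent (cable-replacement) serial operation; the port name, baud rate and the simple "sensor_id,value" line format are assumptions made only for the example, and pyserial is used on the host.

import serial  # pyserial

# Port name, baud rate and line format are illustrative assumptions only.
PORT, BAUD = "/dev/rfcomm0", 115200

def read_measurements(max_lines=10):
    """Read up to max_lines "sensor_id,value" lines from the Bluetooth link."""
    with serial.Serial(PORT, BAUD, timeout=2.0) as link:
        for _ in range(max_lines):
            line = link.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue
            sensor_id, value = line.split(",", 1)
            yield sensor_id, float(value)

if __name__ == "__main__":
    for sensor_id, value in read_measurements():
        print(sensor_id, value)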






Fig. 5 Bluetooth modem, model PromiESD, produced by Initium

Fig. 6 Module OCTOPORT-BT


The presented communication system
can be used both in entertainment robots
(football and other games) and for data
collection in different areas (ecology,
environment, etc.). Each of the mobile
participants in the network is provided with
an OCTOPORT-BT module and the
Bluetooth modem PromiESD connected to
it. Each device can transmit data to the
others or send its data to the base station.
The basic idea is for each device to have the
measured data from all of the others. Each
of the Bluetooth modems has its own unique
64-bit address, which is used to identify the
devices and to avoid collisions in
communication. The ad hoc connection in
the system is a piconet.

According to the Bluetooth
specification, up to 8 devices can work
together in one network. The ad hoc
connection can be a piconet (1 master and
up to 7 slaves in the network) or a
scatternet (a set of piconets connected
through shared devices). The second option
is not well developed today, and most
researchers have largely concentrated on
theoretical concepts. On the other hand,
according to the specification of the
Bluetooth modem PromiESD, up to 10
devices can work in the network; we have
not yet verified this capability.

In this scenario one of the devices acts
as a master and the others as slaves. The
master can communicate with a base
station or with another OCTOPORT-BT
controller. The general application idea of
the presented communication system is data
collection (temperature, humidity, oxygen)
in the different zones of a selected area that
are difficult (or impossible) for humans to
reach.
CONCLUSION:

The presented idea for
communication has many possible
applications, and some tests have already
been carried out successfully.

The microprocessor module presented
in the paper is used in experiments aimed
at developing a telemetry system for
single-cell recording in birds. This is a
joint research project under bilateral
cooperation between BASc (Bulgaria) and
CNR (Italy).

Future development aims at the
implementation of other wireless
technologies, such as ZigBee (for short
distances) and GPS/GPRS (for long
distances), and at including a block for
control of the motors and movement of the
robots.































REFERENCES:

Vieira, M. A. M., da Silva Jr., D. C., Coelho Jr., C. J. N., and da Mata, J. M., Survey on
wireless sensor network devices.

http://www.acadjournal.com

www.initium.co.kr

http://www.itpapers.com


www.isi.edu



AUDIOVISUAL PERSON
TRACKING WITH MOBILE ROBOTS

P. Ramya Patro CH. Ramya
2/4 CSE 2/4 CSE
MVGR COLLEGE MVGR COLLEGE
Id:ramya_happy_cool@yahoo.com Id:ramya_cutesmiles@yahoo.co.in


ABSTRACT:
Mobile service robots have recently been
gaining increased attention from industry,
as they are envisaged as a future market.
Such robots need natural interaction
capabilities to allow inexperienced users to
make use of them in home and office
environments. In order to enable interaction
between humans and a robot, the detection
and tracking of persons in the vicinity of
the robot is necessary. In this paper we
present a person tracking system for a
mobile robot which enables the robot to
track several people simultaneously. Our
approach is based on a variety of input cues
that are fused using a multi-modal
anchoring framework. The sensors
providing input data are two microphones
for sound source localization, a pan-tilt
camera for face and torso recognition, and
a laser range finder for leg detection.
By processing camera images to extract the
torso position, our robot can follow a
guiding person even when that person is
not oriented towards the robot and is not
speaking. In this case the torso information
is especially important for robust tracking
during temporary failures of the leg
detection.

1. INTRODUCTION:
Mobile service robots are fundamentally
different from the static setups used for
research on human-machine interfaces. In
typical static setups, the presence and
position of the user are known beforehand,
as the user either wears a close-talking
microphone or stands at a designated
position. On a mobile robot that operates in
an environment where several people are
moving around, it is often difficult for the
robot to determine which of the persons in
its vicinity wants to interact with it. In
order to enable the robot to automatically
recognize its instructor, it is necessary to
develop techniques that allow the persons
in the robot's surroundings to be tracked
robustly.
For person tracking a variety of
cues can be used, some of which are also
applicable for the subsequent task of
detecting whether the human instructor is
currently interacting with the robot or with
other persons nearby. In a scenario where
one or several humans are in the vicinity of
the robot, tracking is often accomplished
using data from a laser range finder, which
contains characteristic patterns resulting
from the legs of the surrounding humans.
While this information is important for
determining the appropriate velocity for
following a specific human, it does not
allow one to infer whether a human is
facing the robot or not. Two other cues that
can be used to track a person are the face
and the voice. Faces can be detected from
the image data provided by a pan-tilt
camera. Similarly, a stereo microphone
allows a sound source, i.e. the voice of a
talking person, to be localized. Both types
of information represent additional position
information, while at the same time they
can be used for attention control.
For our mobile robot BIRON (the
Bielefeld robot companion) we have
realized a system integrating the three
modalities described, performing not only
tracking of persons but also focusing of
attention. BIRON has already performed
tracking and attention control successfully
during several demonstrations, for example
at the International Conference on
Computer Vision Systems (ICVS) 2003 in
Graz.
However, the three different types of
information that are fused for obtaining the
position of a person are not always
available. If the robot is required to follow
a person guiding the robot, the face is
usually not visible and no sound
information is available. Therefore, the
leg detection based on laser range data is
the only available cue for tracking a non-
speaking person that is not facing the
robot. Consequently, a failure of the leg
detection, e.g. due to an obstacle in the
field of view of the laser range finder, will
result in losing the person. Therefore,
additional cues are required to cope with
such situations. In this paper we present
a color-based torso recognition approach
based on camera images that provides
information about the direction of a person
relative to the robot. Through adding a
perceptual subsystem for creating a color
model of the torso and for recognizing the
torso in images, persons can be tracked
more reliably.
The paper is organized as follows: At first
we discuss related work on person tracking
in section 2. Then, in section 3 our robot
hardware is presented. Next, multi-modal
person tracking is outlined in section 4.
Our approach for recognizing the torso of a
person is described in section 5 and section
6 presents a small evaluation of the overall
system performing person tracking. The
paper concludes with a short summary in
section 7.

2. RELATED WORK:
A mobile robot does not act in a closed
or even controlled environment. A
prototypical application is its use as a tour
guide in scientific laboratories or
museums. All humans approaching or
passing the robot have to be tracked in
order to enable the robot to focus its
attention on one person that intends to
interact with the robot. Another application
scenario becoming increasingly interesting
is showing a robot around in a private
home. This home tour scenario is an
interaction situation of fundamental
importance as a human has to teach all the
objects and places relevant for interaction
to the robot. Interaction, however, requires
the robot to be able to track all the humans
in its vicinity.
For person tracking a variety of
approaches have been developed which
fuse different sensing modalities. Darrell et
al. integrate depth information, color
segmentation, and face detection results for
person tracking. The individual tracks are
fused using simple rules. Feyrer and Zell
also track persons based on vision and
laser range data. Here the two types of
sensor data are fused using a potential field
representation for the person positions.
Besides parallel fusion of different types of
sensor data, some approaches perform
sequential processing in a hierarchical
architecture. After associating coarse
position estimates, a smaller search space
is used for processing more precise sensor
data. For example, Schlegel et al. propose
vision-based person tracking that uses
color information to restrict the image area
that is processed in order to find the
contour of a human. A more sophisticated
method to realize a sequential search space
reduction is proposed by Vermaak et al. In
their approach sound and vision data are
sequentially fused using particle filtering
techniques. A related probabilistic fusion
approach that applies graphical models for
combining sound and vision data is
presented by Beal et al. While all these
person tracking approaches either use
simple rules or learned/designed
probabilistic relations to fuse different
types of data, we will present in the
following a person tracking approach that
relies on a structured framework to track
multiple persons simultaneously based on
their individual components. The main
focus of this paper lies on the
incorporation of the position of the human
torso through learning and tracking the
color of the clothing of the upper body part
of a person.
3. ROBOT HARDWARE:
BIRON



The hardware platform for BIRON is
a Pioneer PeopleBot from ActivMedia
with an on-board PC (Pentium III, 850
MHz) for controlling the motors and the
on-board sensors and for sound processing.
An additional PC (Pentium III, 500 MHz)
inside the robot is used for image
processing.
The two PCs running Linux are
linked with a 100 Mbit Ethernet and the
controller PC is equipped with wireless
Ethernet to enable remote control of the
mobile robot. For the interaction with a
user a 12 touch screen display is provided
on the robot.
A pan-tilt color camera (Sony
EVI-D31) is mounted on top of the robot
at a height of 141 cm for acquiring images
of the upper body part of human
interacting with the robot. Two AKG far-
field microphones which are usually used
for hands free telephony are located at the
front of the upper platform at a height of
106 cm, right below the touch screen
display. The distance between the
microphones is 28.1 cm. A SICK laser
range finder is mounted at the front at a
height of approximately 30 cm.
For robot navigation we use the ISR
(Intelligent Service Robot) control
software developed at the Center for
Autonomous Systems, KTH, Stockholm

4. COMBINING MULTIPLE
MODALITIES FOR PERSON
TRACKING:
Person tracking with a mobile robot is a
highly dynamic task. The sensory
perception of persons is constantly
changing as both the persons tracked and
the robot itself might be moving.
Another difficulty arises from the fact that
a complex object like a person usually
cannot be captured completely by a single
sensor system alone. Therefore, we use the
sensors presented in section 3 in order to
obtain different percepts of a person:
(i) The camera is used to recognize faces
and torsos. Our detection of faces (in
frontal view) is based on the framework
proposed by Viola and Jones. This method allows images to be processed very rapidly and with high detection rates. From the face
detection step the distance, direction, and
height of the observed person are
extracted, while an identification step
provides the identity of the person if it is
known to the system beforehand.
Furthermore, the clothing of the upper
body part of a person (the color of its
torso) can be used to track this person,
especially if it is not oriented towards the
robot, i.e., its face is not visible for face
detection. The torso recognition is
described in detail in section 5.
(ii) The stereo microphones are applied to
locate sound sources using a method based
on Cross-Power Spectrum Phase Analysis.
An extensive evaluation of our sound
source localization has shown that the use
of only one pair of microphones is
sufficient for robust speaker localization
within the multi-modal anchoring
framework.
(iii) The laser range finder is used to
detect legs. In range readings pairs of legs
of a human result in a characteristic pattern
that can be easily detected. From detected
legs the distance and direction of the
person relative to the robot can be
extracted.
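To make the leg-detection idea concrete, the following minimal sketch segments a single laser scan into clusters and pairs up leg-sized clusters; the jump threshold, leg width and pair distance used here are illustrative assumptions, not the values used on BIRON.

```python
import numpy as np

def detect_leg_pairs(ranges, angles, jump_thresh=0.12,
                     leg_width=(0.05, 0.25), max_pair_gap=0.5):
    """Very simplified leg-pair detector for a single laser scan.

    ranges/angles: 1-D arrays of range readings [m] and beam angles [rad].
    Returns (distance, direction) tuples of detected leg pairs.
    """
    # Convert the scan to Cartesian points and split it into segments
    # wherever consecutive readings jump by more than jump_thresh.
    xs, ys = ranges * np.cos(angles), ranges * np.sin(angles)
    points = np.stack([xs, ys], axis=1)
    breaks = np.where(np.abs(np.diff(ranges)) > jump_thresh)[0] + 1
    segments = np.split(points, breaks)

    # Keep segments whose spatial extent matches a typical leg width.
    legs = []
    for seg in segments:
        if len(seg) < 3:
            continue
        width = np.linalg.norm(seg[-1] - seg[0])
        if leg_width[0] <= width <= leg_width[1]:
            legs.append(seg.mean(axis=0))            # leg center

    # Pair up neighbouring leg candidates into person hypotheses.
    persons = []
    for i in range(len(legs) - 1):
        if np.linalg.norm(legs[i] - legs[i + 1]) <= max_pair_gap:
            center = (legs[i] + legs[i + 1]) / 2.0
            persons.append((np.linalg.norm(center),
                            np.arctan2(center[1], center[0])))
    return persons
```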
The percepts resulting from processing
the data of these sensors provide
information about the same overall object:
the person. Consequently, the information
about the individual percepts has to be
fused. For combining the percepts from the
different sensors we proposed multi-modal
anchoring. The goal of anchoring is
defined as establishing connections
between processes that work on the level
of abstract representations of objects in the
world (symbolic level) and processes that
are responsible for the physical
observation of these objects (sensory
level). These connections, called anchors,
must be dynamic, since the same symbol
must be connected to new percepts every
time a new observation of the
corresponding object is acquired.
Our multi-modal anchoring
framework makes it possible to link the symbolic
description of a complex object to different
types of percepts, originating from
different perceptual systems. It enables
distributed anchoring of individual
percepts from multiple modalities and
copes with different spatio-temporal
properties of the individual percepts. Every
part of the complex object which is
captured by one sensor is anchored by a
single component anchoring process.
The composition of all component anchors
is realized by a composite anchoring
process which establishes the connection
between the symbolic description of the
complex object and the percepts from the
individual sensors. In the domain of person
tracking the person itself is the composite
object while its components are face, torso,
speech, and legs, respectively.
The framework for anchoring the
composite object person is based on
anchoring the four components face, torso,
speech, and legs. Potentially, more than
one person might be present in the vicinity
of the robot. In order to track multiple
composite objects simultaneously, the
anchoring framework is extended by a so-
called supervising module. This module
manages all composite anchoring
processes, e.g., it coordinates the
assignment of percepts to the individual
anchoring processes in order to cope with
potential ambiguities. Moreover, the
supervising module establishes new
anchoring processes if percepts cannot be
assigned to the existing ones. Conversely,
anchoring processes are removed if no
percepts were assigned for a certain period
of time.
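As a rough illustration of the bookkeeping performed by such a supervising module, the sketch below assigns each incoming percept to the nearest existing anchoring process, creates a new process when no existing one matches, and removes processes that have received no percepts for a while; the gating distance and timeout are invented for the example and are not taken from the actual system.

```python
import time

class SupervisingModule:
    """Toy supervisor for multiple composite (person) anchoring processes."""

    def __init__(self, gate=0.8, timeout=3.0):
        self.gate = gate          # max distance [m] for percept-to-anchor assignment
        self.timeout = timeout    # seconds without percepts before removal
        self.anchors = []         # each anchor: {'position': (x, y), 'last_update': t}

    def process_percept(self, position):
        now = time.time()
        # Assign the percept to the closest anchor within the gating distance.
        best, best_d = None, self.gate
        for anchor in self.anchors:
            d = ((anchor['position'][0] - position[0]) ** 2 +
                 (anchor['position'][1] - position[1]) ** 2) ** 0.5
            if d < best_d:
                best, best_d = anchor, d
        if best is not None:
            best['position'], best['last_update'] = position, now
        else:
            # No anchor explains this percept: start a new anchoring process.
            self.anchors.append({'position': position, 'last_update': now})
        # Remove anchors that received no percepts for too long.
        self.anchors = [a for a in self.anchors
                        if now - a['last_update'] < self.timeout]
```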
If several persons are tracked by the
robot simultaneously, it must be able to
recognize which of the persons is the
robot's current communication partner. For
this purpose, an attention system is
provided, which enables the robot to focus
its attention on the most interesting person.
It is assumed that a communication partner
is speaking and at the same time looking at
the robot. Therefore, the pan-tilt camera is
always turned towards the person who is
currently speaking. Then, from the face
detection process it can be determined
whether the speaker is also looking at the
robot and hence is considered the
communication partner.
We have demonstrated the robot's capabilities in multi-modal human-
robot interaction, i.e., detection and
tracking of multiple persons as well as
detecting and eventually following
communication partners.
5. COLOR-BASED TORSO
RECOGNITION:
In order to supply the anchoring framework presented above with information
about the torso position of an observed
person, camera images of size 256*192 are
analyzed to find an image area that
matches a previously learned model of the
torso color. Now the direction of a person
relative to the robot can be extracted based
on the camera orientation and the position
of the torso within an image. A mixture of
Gaussians is used to represent the torso
color as such a parametric model is well
suited for efficient modeling of previously
unknown color distributions. Previous
applications using a mixture of Gaussians
in order to track a coke can or faces under
varying lighting conditions have
demonstrated the flexibility of applying a
parametric model.
For color representation we use the
LUV color space as it is perceptually
uniform. This allows us to apply a
homogeneous distance criterion for
calculating the fit between an observed
color and the color model. Initialization of
the color model is performed after
successfully detecting for the first time the
face of the person that is being tracked.
Using the anchoring framework, the
distance of the person is known and a
position 35 cm below the face position can
be transformed into image coordinates. At
this position in the image an elliptical
image area is selected for creating the
initial mixture model. The parameters of
the individual components of the Gaussian
mixture are calculated using a k-means
clustering algorithm. For modeling a
typical torso (the color of the clothing), a
mixture with three components has been
shown to provide good results. The
number of mixture components has to be
increased appropriately for colorful
clothing which exhibits a large variety of
different colors.
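A minimal sketch of this initialization step is given below; it assumes an RGB input image and uses scikit-image for the LUV conversion and scikit-learn's GaussianMixture (whose default initialization is itself k-means based) as a stand-in for the estimation procedure described above, with illustrative ellipse radii.

```python
import numpy as np
from skimage.color import rgb2luv
from sklearn.mixture import GaussianMixture

def init_torso_model(image_rgb, center, radii=(20, 30), n_components=3):
    """Fit a Gaussian mixture to the LUV pixels of an elliptical torso patch.

    center: (row, col) of the estimated torso position, e.g. 35 cm below the face.
    radii:  (vertical, horizontal) semi-axes of the ellipse in pixels (assumed).
    """
    luv = rgb2luv(image_rgb)
    rows, cols = np.indices(luv.shape[:2])
    # Boolean mask of the elliptical training region around the torso position.
    mask = (((rows - center[0]) / radii[0]) ** 2 +
            ((cols - center[1]) / radii[1]) ** 2) <= 1.0
    train_pixels = luv[mask]
    gmm = GaussianMixture(n_components=n_components, covariance_type='full')
    gmm.fit(train_pixels)
    return gmm, train_pixels
```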
In order to distinguish the torso
color described by the mixture model from
the background, either a background model
needs to be available or a suitable rejection
criterion has to be defined. In our
application the constantly varying
background is unknown and, therefore,
cannot be described by some color
distribution. Consequently, we use a
rejection criterion based on an
automatically determined threshold Sclass
on the probability density scores p(xi)
obtained from the mixture model. First, we
estimate a discrete distribution of the
expected probability density values by
calculating a histogram of all scores p(xi)
for color vectors xi contained in the current
training set. The bins of the histogram thus
represent a small range of possible scores.
We then adjust the threshold Sclass such
that 98 % of all scores observed on the
training data lie above Sclass, i.e., the
probability of observing a score greater
than Sclass is equal to 0.98:
Pr(Y > Sclass) = 0.98,  where Y = p(xi).   (1)
Choosing a fraction of 98 % of the
training pixels has been determined
empirically. In this way, outliers contained
within the last 2 % of the training set are
ignored.
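The threshold of Eq. (1) can then be derived from the training scores. The sketch below uses a percentile directly instead of an explicit histogram of binned scores, which realizes the same 98 % criterion; note that scikit-learn returns log-densities, which leaves the thresholding unchanged because the logarithm is monotone.

```python
import numpy as np

def classification_threshold(gmm, train_pixels, keep_fraction=0.98):
    """Sclass such that keep_fraction of the training scores lie above it (Eq. 1)."""
    scores = gmm.score_samples(train_pixels)   # log-density per training pixel
    return np.percentile(scores, 100 * (1.0 - keep_fraction))
```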
Using this threshold, an image can be
classified in torso and non-torso pixels. In
order to remove isolated pixels from the
resulting label image and to provide a
more homogeneous result, a median filter of size 5*5 is applied to smooth the label image.
Next, a connected components analysis is
carried out to obtain the region
segmentation result. Subsequently,
polygonal descriptions of the image
regions and region features like
compactness, pixel count, center of mass,
etc. are calculated. Using additional
constraints with respect to the minimal and
maximal size of the torso region, the result
of this step is the segmented torso region
Rtorso. Based on the center of mass of this
region and the current camera position, the
direction of the person relative to the robot
can be calculated.
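A compact sketch of this classification-and-segmentation chain is shown below, with SciPy's image tools standing in for whatever implementation runs on the robot and with assumed minimum and maximum region sizes.

```python
import numpy as np
from scipy import ndimage

def segment_torso(image_luv, gmm, s_class, min_pixels=200, max_pixels=8000):
    """Classify torso pixels, smooth the label image and extract the torso region."""
    h, w, _ = image_luv.shape
    scores = gmm.score_samples(image_luv.reshape(-1, 3)).reshape(h, w)
    mask = (scores > s_class).astype(np.uint8)        # torso / non-torso labels
    mask = ndimage.median_filter(mask, size=5)        # remove isolated pixels
    labels, n_regions = ndimage.label(mask)           # connected components
    best_label, best_count = None, 0
    for lab in range(1, n_regions + 1):
        count = int((labels == lab).sum())
        if min_pixels <= count <= max_pixels and count > best_count:
            best_label, best_count = lab, count
    if best_label is None:
        return None                                   # no plausible torso region
    region = labels == best_label                     # this is R_torso
    return ndimage.center_of_mass(region)             # (row, col) torso position
```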
As the mobile robot encounters varying
lighting conditions while following a
person, the torso color model has to be
adapted to a changing visual appearance of
the torso. For this purpose an image region
has to be determined for updating the color
model appropriately. Note that the shape of
the torso area is not fixed due to, e.g.,
variations in the orientation of the person's
body. Additionally, parts of the torso may
be temporarily occluded by the skin-
colored arms if the person wears clothes
with short sleeves. Therefore, we have
chosen to construct the update region
Rupdate by enlarging the segmented
region Rtorso. The polygon describing
Rtorso is stretched to describe a region
with an area 1.5 times the size of the
segmented region. In this way image parts
next to the currently segmented torso area
are included in the update region Rupdate.
In order to avoid adapting the color
model to background areas that are
contained in the stretched update region,
we use a training threshold Strain to select
only those pixels from Rupdate that exhibit
a color close to the current torso color
model. Similar to the classification
threshold Sclass (see Eq. 1), the training
threshold is calculated from the histogram
of probability density values obtained on
the previous training set. The threshold
value is chosen such that the scores p(xi)
of 99 % of the training pixels from the
previous update step are above the
threshold:
Pr(Y > Strain) = 0.99,  where Y = p(xi).   (2)
Only those pixels in the current update
area that have a probability value above
the training threshold Strain are considered
for updating the torso color model. In this
way, we enforce a smooth adaptation of
the color model. Note that this method to
detect the torso based on its color is only
applicable if no background objects exhibit
a color similar to the color of the torso.
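The adaptation step can be sketched in the same spirit: the stretched update region is passed in as a pixel array, the training threshold of Eq. (2) is again obtained as a percentile, and the mixture is simply refit on the selected pixels, which is a simplification of an incremental model update.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def update_torso_model(gmm, prev_train_pixels, update_pixels, keep_fraction=0.99):
    """Refit the torso color model on pixels of R_update that fit the current model."""
    # Strain: 99 % of the previous training scores lie above this value (Eq. 2).
    s_train = np.percentile(gmm.score_samples(prev_train_pixels),
                            100 * (1.0 - keep_fraction))
    # Keep only those update pixels whose score exceeds the training threshold.
    selected = update_pixels[gmm.score_samples(update_pixels) > s_train]
    if len(selected) < 50:            # too few reliable pixels: keep the old model
        return gmm, prev_train_pixels
    new_gmm = GaussianMixture(n_components=gmm.n_components,
                              covariance_type='full').fit(selected)
    return new_gmm, selected
```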
FIG 2: Input image (left), resulting torso color mixture model (center), and segmentation result (right).
An example of the processing of input images for torso recognition is depicted in Fig. 2 above. Based on the face position,
the anchoring framework provides the
image position of the torso. An elliptical
image area around this position indicated
by a white ellipse is used for training the
color mixture model. In the center image
the resulting mixtures are depicted. The
right image shows the resulting
segmentation result before median filtering
with the center of mass selected later as
torso position indicated by a white cross.
The processing of the input images is
performed on the 500 MHz Pentium III
computer at a frame rate of 5 Hz. In every
step the image is analyzed with the face detection algorithm, which consumes about 20% of the processing time, and the torso color segmentation as well as the updating of the color model are carried out.

6. SYSTEM PERFORMANCE:
With the integration of the perceptual
system for torso recognition, the person
tracking has become much more robust.
Previously, when a person was guiding the
robot to another place without facing the
robot, i.e., not walking backwards, the
person was only followed based on
detected legs. If the legs were not
detectable for several processing cycles,
the robot would have lost the person. After
integrating the torso information, the robot
can now track persons more reliably even
if they are not facing the robot.


FIG 3: Setup for the evaluation (view from behind the robot).
Another, more important, scenario involves situations in which the legs of a person are
partially occluded by, e.g., furniture. In
order to demonstrate the performance of
our system, we have chosen a setup in our
office room as shown in Figure 3. The
robot (R) observes the door of the room
and is allowed to steer the camera and to
rotate its base. A person (P) is supposed
to enter the room and to verbally instruct
the robot to track the person. Then, the
person approaches a desk in order to
interact with an object (O). When passing
the flower (F) which is located on the
floor, the legs of the person are not
detectable for the robot anymore as the
flower and the cupboard next to it occlude
the lower body part of the person. Because
the person is turned away from the robot
when interacting with object O, neither the
legs nor the face of the person are
observable by the robot. Moreover, the
person is assumed not to speak, which
prevents the robot from acquiring any
speech percepts. Therefore, only the torso
percepts are available for tracking. After
interacting with the object, the person
returns to the position next to the door. If
the robot was able to successfully track the
person based on torso percepts then the
robot is still focusing on the person.
We carried out 25 runs with several
subjects wearing different shirts and
sweaters. In 80 % of the test cases the
person was correctly tracked throughout
the interaction with the object. During the
interaction phase lasting five to ten
seconds only the torso percepts were
available as the legs and the face were not
visible and the person was not speaking.
Without incorporation of the torso
information the robot would have lost the
person in all cases.
The processing speed, i.e., the rate at
which percepts are fused to obtain the
overall person position, depends on the
rate at which the individual percepts are
provided. Face recognition and torso
detection are performed on images of a
size of 256*192 at a rate of 5.0 Hz.
Localization of sound sources runs at a rate
of 5.5 Hz. The laser range finder provides
new data at a rate of 4.7 Hz while the
processing time for the detection of legs is
negligible. The anchoring processes of the
persons are updated asynchronously as
percepts become available.
Although the torso information
substantially improves tracking in this
scenario, the tracking algorithm relies on
just a single cue for a considerable period
of time. Therefore, it is desirable to have
more perceptual cues available for robust
tracking in a wider variety of situational
contexts. This can be achieved by
complementing the existing system with
detectors for, e.g., the human head-
shoulder contour or faces in non-frontal
view.

7. SUMMARY:
In this paper we presented a multi-modal
person tracking system for our mobile
robot BIRON. The described approach
applies audiovisual cues in addition to
laser range data. In previous work we
demonstrated that the system is able to
simultaneously track multiple persons in
the vicinity of the robot based on face
recognition, speech localization, and leg
detection. In this paper we introduced an
additional perceptual sub-system for
extracting information about the torso of a
tracked person. Based on the color of the
clothes worn by the human, the torso is
tracked with an adaptive color
segmentation approach. The torso helps to
track a person when other cues are not
available, e.g., due to visual occlusion of
the legs. Results of a small evaluation in an
office environment demonstrate the
increase in robustness resulting from the
incorporation of the torso detection in our
framework for tracking persons.






MOBILE AND CELLULAR
COMMUNICATIONS USING
IPSTAR SATELLITE





J.N.T.U. COLLEGE OF ENGINEERING,
ANANTAPUR.


Paper presented

by



G.UDAY KUMAR REDDY K.VISWESWARA REDDY
III B.TECH, E.C.E. III B.TECH , C.S.E.
uday_shhh@yahoo.com evergreen1680@yahoo.com

(9908656500) (9849098976)


ABSTRACT:

Shin Satellite PLC (SSA) has conceived a new generation of Internet Protocol (IP) satellite that would serve the demand for high-speed broadband Internet access in the future. Broadband via
satellite has always suffered
from high cost compared to
other systems available. SSA
developed IPSTAR technology to
increase system capacity and
efficiency such that the cost of
service would be considerably
lower than that currently
provided by conventional
satellites. IPSTAR-1 will be the
first of a new generation of
broadband satellites that will act
both as an Internet backbone
connection to fiber optic cables
for ISPs and as a last-mile
broadband Internet service to
consumers, competing with cable
modem and ADSL. Once
launched, IPSTAR-1 satellite will
be one of the largest
communications satellites ever
built, with a massive bandwidth
capacity of 45 Gbps, almost
equivalent to all satellites
serving Asia today. In our paper
we are presenting the technology
behind the IPSTAR satellite and
its application in voice
conferencing and intranet and
VPN.



KEYWORDS:
VPN: virtual private network
Gbps: gigabits per second

















IPSTAR TECHNOLOGY:
Shin Satellite's IPSTAR broadband
satellite system utilizes advanced
technological concepts in conjunction
with a new geostationary satellite to
provide bandwidth capacity unmatched
by conventional satellite technology. The following sections describe the satellite, the space technology and the ground technology.
SATELLITE DESCRIPTION:
Manufacturer: Space Systems/Loral (Palo Alto, USA)
Model: FS-1300L
No. of Transponders: 114
Power: 14 kW
Life: 12 years
Launch Date: 11th August 2005 (Thursday)
Service Date: 2005


IPSTAR SPACE TECHNOLOGY:

IPSTAR-1 Satellite is a
Geostationary orbit satellite utilizing Ku-
band spectrum for user applications. Ku-
band spectrum provides the optimal
solution for services in the Asia-Pacific
region with a high Link-Availability
(Margin) for user applications using a
small user antenna size. Proprietary
waveforms are used for air interface
between a user terminal and IPSTAR
gateway. The downlink (Forward Link)
to the Terminal will be TDM overlaying
OFDM with patent pending enhancement
to maximize the efficiency of spectrum
utilization. The uplink channels (Return
Link) from the user terminal will be
based on Multi-mode Multiple Access (MF-TDMA). The access methods can be
selected by gateway Network
Management System (NMS) to match
application bit rate and traffic density
requirement, including TDMA-DAMA
for voice, and Slotted Aloha for web
browsing and other bursty traffic. Every
mode will employ advanced error
correction coding, which will allow the
uplink to use small antennas and power
amplifiers even for high-speed uplink
data rates. Future generation may employ
spread spectrum CDMA technology to
further enhance system capability.
Optionally, there will be a one-way version using a telephone line as the Return Link,
providing a cheaper alternative to a full
two-way version. The Satellite is a bent-
pipe satellite achieving unprecedented
capacity and functionality with no on-
board regenerative payload. This
eliminates the need for a low-reliability, heavy and power-consuming on-board processor. Therefore, the Satellite will be
as reliable as any conventional
communications satellites and will
certainly be more reliable than any
broadband satellite that employs an on-board processor. All intelligence, switching and
routing capability will be put on the
ground at gateway and network control
centers. This will allow future upgrade of
all electronics and software, which have
been evolving with more capability and
cost effectiveness at a very rapid pace.
The Satellite will have the capability to
allocate its precious on-board resources
(Dynamic Power Management and
Dynamic Bandwidth Management)
appropriately according to the actual need
to maintain communication links at the
highest possible level of Quality of Service
(QoS). This allocation is monitored and
controlled on a dynamic basis through
Satellite Payload Operation Center
(SPOC) with the link quality information
processed on-line by Gateway and
Network Management Centers (GNCs).


KEY TECHNOLOGY
FEATURES:
Spot Beam Coverage: Traditional
satellite technology utilizes a broad single
beam to cover entire continents and
regions. With the introduction of multiple
narrowly focused spot beams and
frequency reuse, IPSTAR is now capable
of maximizing the available frequency for
transmissions. Increasing bandwidth to 20 times that of traditional Ku-band satellites translates into a far more efficient satellite. Despite the higher costs
associated with spot beam technology,
the overall cost per circuit will be much
lower than existing shaped beam
satellites.

Dynamic Power Allocation: This
new technology optimizes the use of
power among beams and holds a power reserve of 20% that can be allocated to beams affected by rain fade, thus maintaining the link. Given the vast
geographical coverage of the Satellite, it
is unlikely that rain would fall
simultaneously across the entire region;
therefore this dynamic allocation of
power only to beams in need is a very
effective way to increase IPSTAR
System's overall high link availability and
reliability.
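The description above suggests a simple allocation rule: hold back a 20% power reserve and distribute it among rain-faded beams in proportion to their fade depth. The sketch below only illustrates that idea; it is not SSA's actual algorithm, and the beam names, powers and fade values are assumptions.

```python
def allocate_reserve(nominal_power_kw, fades_db, reserve_fraction=0.20):
    """Distribute a shared power reserve among rain-faded beams.

    nominal_power_kw: dict beam -> nominal power [kW]
    fades_db:         dict beam -> estimated rain fade [dB] (0 if clear sky)
    Returns dict beam -> allocated power [kW].
    """
    total = sum(nominal_power_kw.values())
    reserve = reserve_fraction * total          # assumed 20% reserve pool
    total_fade = sum(f for f in fades_db.values() if f > 0)
    allocation = dict(nominal_power_kw)
    if total_fade > 0:
        for beam, fade in fades_db.items():
            if fade > 0:
                # Share the reserve in proportion to each beam's fade depth.
                allocation[beam] += reserve * fade / total_fade
    return allocation

# Example (hypothetical beams): 'B17' suffers a 6 dB fade, 'B42' a 2 dB fade.
power = allocate_reserve({'B17': 0.1, 'B42': 0.1, 'B63': 0.1},
                         {'B17': 6.0, 'B42': 2.0, 'B63': 0.0})
```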

IPSTAR TECHNICAL
INFORMATION:

Spacecraft Configuration - Description and Approximate Capacity:
- Spacecraft: Geo-stationary satellite with bent-pipe payload configuration
  (no on-board processing, to avoid the obsolescence of processing technology
  which develops at a remarkable speed; no inter-satellite link);
  12 years life; 15 kW power
- Orbital Slot: Located at 120 +/- 0.5 degrees East; additional satellite for
  co-location or a different slot in the future
- Total Digital Bandwidth Capacity: 45 Gbps of spot beam aggregate capacity
  at an 84-120 cm dish; equivalent to 1,000+ transponders of 36 MHz with
  conventional coding and modulation
- Ku Beams: 84 spot beams, 3 shaped beams, 7 regional broadcast beams
- Ka Beams: 18 feeder beams and gateways

Beam Configuration - Description and Approximate Capacity:
- Ku Spot Beams: For highly populated areas; 20+ Gbps uplink (Return Link)
  capacity; 20+ Gbps downlink (Forward Link) capacity (excluding broadcast
  capacity)
- Ku Shaped Beams: For less populated areas; 0.5+ Gbps uplink capacity;
  0.5+ Gbps downlink capacity

Frequency and Access - Description and Approximate Capacity:
- Uplink (Return Link) (a narrow-band data link from a consumer terminal to
  the Gateway): 14.000-14.375 GHz (Spot Beam); 14.375-14.500 GHz (Shaped/Spot
  Beam); 13.775-13.975 GHz (Broadcast Beam)
- Access: Multi-mode Multiple Access - Slotted Aloha, TDMA, Aloha Return Link
  STAR (IPSTAR Proprietary)



BEAM AND COVERAGE:

IPSTAR-1 is a regional
satellite system, whose beams will cover
22 countries throughout the Asia-
Pacific Rim, with Ku-Band (84 Spot
Beams, 3 Shaped Beams, and 7 Regional
Broadcast Beams) and Ka-Band (18
Feeder Beams & Gateways).

IPSTAR GROUND TECHNOLOGY:




TERMINAL, GATEWAY AND NETWORK
CONNECTIONS:

The IPSTAR user terminal is a low-cost, flexible and high-performance two-way satellite terminal that works in conjunction with the Satellite and the Gateway. The air interface, employing advanced waveforms on the forward and return channels, is optimized for overall system efficiency. The waveform for the forward channel is based on TDM-
OFDM technology that utilizes
bandwidth and power more efficiently.
The forward channel
is optimized to accommodate multiple
data rates, a variable number of users of
different modulation formats and forward
error correction coding. The return channel, on the other hand, is based on MF-TDMA technology for bursty traffic, with dedicated allocation for high-data-rate applications.

The waveform is fixed to a more robust
modulation to ensure link-availability at
low transmission power. However, if
higher transmission bit rates are required,
the return channel can be configured for dedicated allocation, behaving like an SCPC-like (pre-assigned TDMA) channel that can support transmit data rates of up to 2.0 Mbps at the expense of additional transmit power from the user terminal.

USER TERMINAL AND
GATEWAY:

The terminal and gateway
incorporate the proprietary modulation
and coding technologies, and
transmission architecture optimized for
the IPSTAR satellite system. The
interface for the terminal and gateway to
any device or network will be based on
the industry standard Internet Protocol, to
ensure a seamless integration to the
existing system, software, hardware,
applications, services and networks. The
ground system is deployed in two phases. In the First Generation (pre-broadband satellite) phase, IPSTAR is emphasizing the
early development of the user terminal to
ensure system integration and
competitive prices by the time the new
IPSTAR satellite becomes operational.
The
first generation user terminal is
available and applicable to the
Thaicom Fleet and other
conventional satellites.

The first-generation (FG) terminal utilizes available Ku-band bandwidth from the existing Thaicom fleet and will be compatible with IPSTAR. The soft launch is designed, planned and implemented to help ramp up the service infrastructure and volume production for the launch of the IPSTAR system, and to achieve the time-to-market requirement and first-mover advantages. It will
serve mainly Internet access,
telephony and broadcast services
with Forward Link bandwidth per
Terminal of up to 4 Mbps.

The prototype of the FG-IDU (In-
door Unit) was completed in the
fourth quarter of 2000 and the
start of the commercial roll out
was in the fourth quarter of 2001.
We expect to reach economies of
scale during the soft launch phase,
which will make the unit cost
even more compelling by the time
the Satellite is launched.
The Second Generation (post-broadband satellite): After the launch of
the IPSTAR-1 Satellite, all connections
and systems using IPSTAR FG soft
launch services will be migrated to the
Satellite. This will mark the official
commencement of the SG IPSTAR
System services. We expect the larger
volume productions and more advanced
chip integration to bring down the cost
due to economies of scale. Furthermore,
different form factors and IDU models
will be introduced to the market. New
models will likely include standalone-
boxed units for professional uses and
home communications centers and one-
way high-speed download units.

IPSTAR APPLICATIONS:

Owing to its low cost and high speed, IPSTAR finds a wide range of applications. Some of them are:
1. Voice conferencing
2. Broadband access.
3. Intranet and VPN.
4. Broadcast and multicast
application
5. Video-on-demand
application.
Two of these applications are discussed in this paper.

VOICE CONFERENCING
APPLICATION:
IPSTAR's video conferencing
solution can be used for corporate
meetings, technical support functions,
distance learning, telemedicine, job
recruiting interviews, direct sales, legal
work, telecommuting, and manufacturing.
IPSTAR's video conferencing solution
can minimize the time and resources
associated with unnecessary corporate
travel. IPSTAR provides the best of video
conferencing by offering the simplest,
richest featured, most intuitive,
interoperable, scalable and affordable
way to meet over IP-based networks. By delivering these attributes, IPSTAR's video conferencing solution is unmatched.



KEY FEATURES:
Supports Client/Server based
video conferencing
Supports standard H323 (VOIP)
and T.120 for interoperability and
document/application sharing
Flexible data speeds ranging from
64, 128, 256, 384 kbps and above
Broadcast and Multicast features -
Supports both Point-to-Point and
Point-to-Multipoint conferencing
CENTRALIZED management for
conference control
- Add and remove video
conference participants
- Multiple conference
rooms
- Shared files
- Shared whiteboard
KEY SERVICES:
Ideal for mass corporate training,
internal corporate TV
Broadcasting and e-learning using
Broadcast and Multicast features
Enables ASP (Application Service
Provider) Model - Video
Conferencing Server at IPSTAR
Gateway to be shared by multiple
users and allows occasional video
conferencing access for
individuals or small groups
KEY BENEFITS:
The IDEAL solution for Video
Conferencing - allows
conferencing ANYWHERE over
the IP network.
Great COST and TIME
SAVINGS
- LESS COMPLEX equipment and link setup - FAST deployment
- IMPROVED Bandwidth
Efficiency
- Manage and allocate
bandwidth only to ACTIVE
terminals-as opposed to
costly traditional applications
- DYNAMIC Bandwidth
Adjustment for OPTIMAL
Bandwidth UTILIZATION -
offers UNIFORM QUALITY
regardless of location

INTRANET AND VPN APPLICATIONS:
IPSTAR can provide a secure and efficient high-speed Virtual Private Network (VPN) to companies with multiple remote branches over a public network. Organizations can easily set up intranet networks for data, voice and video applications.
KEY FEATURES:
- SHARED and
CUSTOMIZED bandwidth for
corporate requirements
- Bandwidth allocation for
each site through a user-friendly web
interface
- Supports standard GRE
Tunneling protocol


KEY SERVICES
- Intranet-based - Private network to
connect company's remote locations
together.
- Extranet-based - Private
connection between the company
and its suppliers or customers
allowing a shared environment.
KEY BENEFITS:
Low cost, tunneled connection
with Quality of Service (QoS) to
ensure reliable and secure
throughput
Extends geographic connectivity
Reduces operational cost
compared to traditional WAN
SHARED infrastructure - Self-
managed throughput for each
location.
No need for expensive dedicated
leased line network
Suitable for corporations with
already installed private IP
network (but do not want to
change their existing IP address
configuration)
State-of-the-art IPSTAR User
Terminal with built-in GRE
tunneling protocol to reduce cost
of router
SINGLE Satellite Hop between
HQ and remote offices















CONCLUSION:
IPSTAR can provide a secure and efficient high-speed Virtual Private Network (VPN) to companies with multiple remote branches over a public network. Organizations can easily set up intranet networks for data, voice and video applications. IPSTAR's video conferencing solution can be used for corporate meetings, technical support functions, distance learning, telemedicine, job recruiting interviews, direct sales, legal work, telecommuting and manufacturing. IPSTAR's video conferencing solution can minimize the time and resources associated with unnecessary corporate travel. In this way, satellites are used for mobile and cellular communications, which have become a part and parcel of our life. As technology grows rapidly, future satellites may offer even more facilities than this one.

























Dense Wavelength Division Multiplexing Optical Communications

Table of Contents



Introduction
The Growing Demand
Bandwidth Demand Driven By
Telecommunications Infrastructure Good But Overwhelmed
Achieving Bandwidth Capacity Goals
Dense Wavelength Division Multiplexing
Network Management
Measurements of Performance
Applications for DWDM
The Future of DWDM: Building Block of the Photonic Network
ANNEX: Practical Considerations of DWDM Deployment
Conclusion
References
Websites














Dense Wavelength Division Multiplexing Optical Communications
DWDM -- Applications of Optical Communications in Broadband


The significant problems we face cannot be solved
by the same level of thinking that created them.
-----Albert Einstein



G.Meghana                          V.Meghana Mrunalini
3rd ECE                            3rd ECE
SITAMS,                            SITAMS,
Murukumbattu                       Murukumbattu
Chittoor - 517001                  Chittoor - 517001
meghana_g453@rediffmail.com        meghana_mrunalini@yahoo.co.in



Abstract
Today, the tendency is towards optical
communication. DWDM offers an attractive
solution to increasing LAN bandwidth without
disturbing the existing embedded fiber, which
populates most buildings and campuses, and
continue to be the cable of choice for the near
future. By multiplexing several relatively
coarsely spaced wavelengths over a single,
installed multimode network, the aggregate
bandwidth can be increased by the multiplexing
factor. The applications can include a
multiplexed high-bandwidth library resource
system, simultaneous information sharing,
supercomputer data and processor interaction, a
myriad of multimedia services, video
applications, and many undreamed-of services.
As demands for more network bandwidth
increase, the need will become apparent for
multiuser optical networks, with issues such as
functionality, compatibility, and cost determining
which systems will eventually be implemented.
Introduction
Need For Speed
Over the last decade, fiber optic cables
have been installed by carriers as the backbone
of their interoffice networks, becoming the
mainstay of the telecommunications
infrastructure. Using time division multiplexing
(TDM) technology, carriers now routinely
transmit information at 2.4 Gb/s on a single
fiber, with some deploying equipment that
quadruples that rate to 10 Gb/s. The revolution in
high bandwidth applications and the explosive
growth of the Internet, however, have created
capacity demands that exceed traditional TDM
limits. As a result, the once seemingly
inexhaustible bandwidth promised by the
deployment of optical fiber in the 1980s is being
exhausted. To meet growing demands for
bandwidth, a technology called Dense
Wavelength Division Multiplexing (DWDM) has
been developed that multiplies the capacity of a
single fiber. DWDM systems being deployed
today can increase a single fiber's capacity sixteenfold, to a throughput of 40 Gb/s! This cutting-edge technology, when combined with network management systems and add-drop multiplexers, enables carriers to adopt
optically-based transmission networks that will
meet the next generation of bandwidth demand at
a significantly lower cost than installing new
fiber.





2. The Growing Demand
It is clear
that as we approach the 21st century the
remarkable revolution in information services
has permeated our society. Communication,
which in the past was confined to narrowband
voice signals, now demands a high quality
visual, audio, and data context. Every aspect of
human interplay, from business to entertainment, government, and academia, increasingly depends on rapid and reliable
communication networks. Indeed, the advent of
the Internet alone is introducing millions of
individuals to a new world of information and
technology. The telecommunications industry,
however, is struggling to keep pace with these
changes.
Early predictions that current fiber capacities
would be adequate for our needs into the next
century have proven wrong.







Bandwidth Demand Driven By

Growing Competition

During the past several years, a trend has
developed throughout the world to encourage
competition in the telecommunication sector
through government deregulation and market-
driven economic stimulation. Since competition
was introduced into the US long-distance market
in 1984, revenues and access lines have grown
40 percent, while investment in outside plant has
increased 60 percent.
The 1996 Telecommunications
Reform Act is giving way to an even broader
array of new operators, both in the long-distance
and local-exchange sectors, which promise to
drive down telecommunications costs and
thereby create new demand for additional
services and capacity. Moreover, while early
competition among long distance carriers was
based mainly on a strategy of price reduction,
today's competitive advantage depends
increasingly on maximizing the available
capacity of network infrastructures and providing
enhanced reliability

Network Survivability

Another significant
cause of bandwidth demand is the carriers' need
to guarantee fail-safe networks.
As telecommunications
has become more critical to businesses and
individuals, service providers have been required
to ensure that their networks are fault tolerant
and impervious to outages. In many cases,
telephone companies must include service level
guarantees in business contracts, with severe
financial penalties should outages occur. To meet
these requirements, carriers have broadened
route diversity, either through ring
configurations or 1:1 point-to-point networks in
which back-up capacity is provided on alternate
fibers. Achieving 100% reliability, however,
requires that spare capacity be set aside and
dedicated only to a backup function. This
potentially doubles the bandwidth need of an
already strained and overloaded system, since the
protective path capacity must equal that of the
revenue-generating working path.

New Applications

At the same time that
carriers are enhancing network survivability,
they must also accommodate growing customer
demand for services such as video, high
resolution graphics, and large volume data
processing that require unprecedented amounts
of bandwidth. Technologies such as Frame Relay
and ATM are also adding to the need for
capacity. Internet usage, which some analysts
predict will grow by 700 percent annually in
coming years, is threatening to overwhelm
telephone access networks and further strain the
nation's fiber backbone. The growth of cellular
and PCS is also placing more demand on fiber
networks, which serve as the backbone even for
wireless communications.

Telecommunications Infrastructure
Good But Overwhelmed

Since the early 1980s, the telecommunications
infrastructure, built on a hierarchy of high-performance central office switches and copper lines, has been migrating to massive
computerization and deployment of fiber optic
cables. The widespread use of fiber has been
made possible, in part, by the industry's acceptance of SONET and SDH as the standard for signal generation. Using SONET/SDH
standards, telecommunication companies have
gradually expanded their capacity by increasing
data transmission rates, to the point that many
carriers now routinely transport 2.4 Gb/s (STM16/OC48). The bad news, however, is that the
once seemingly inexhaustible capacity promised
by ever increasing SONET rates is reaching its
limit. In fact, bandwidth demand is already
approaching the maximum capacity available in
some networks. Primarily because of technical
limitations and the physical properties of
embedded fiber, today there is a practical ceiling
of 2.4 Gb/s on most fiber networks, although
there are instances where STM64/OC192 is
being deployed. Surprisingly, however, the TDM
equipment installed today utilizes less than 1%
of the intrinsic capacity of the fiber!

Achieving Bandwidth
Capacity Goals

Confronted by the need for more capacity,
carriers have three possible solutions:
1. Install new fiber.
2. Invest in new TDM technology to achieve faster bit rates.
3. Deploy Dense Wavelength Division Multiplexing.


Installing New Fiber to Meet Capacity
Needs

For years, carriers have expanded their networks
by deploying new fiber and transmission
equipment. For each new fiber deployed, the
carrier could add capacity up to 2.4 Gb/s.
Unfortunately, such deployment is frequently
difficult and always costly. The average cost to
deploy the additional fiber cable, excluding costs
of associated support systems and electronics,
has been estimated to be about $70,000 per mile,
with costs escalating in densely populated areas.
While this projection varies from place to place,
installing new fiber can be a daunting prospect,
particularly for carriers with tens of
thousands of route miles. In many cases, the
right-of-way of the cable route or the premises
needed to house transmission equipment is
owned by a third party, such as a railroad or even
a competitor. Moreover, single mode fiber is
currently in short supply owing to production
limitations, potentially adding to costs and
delays. For these reasons, the comprehensive
deployment of additional fiber is an impractical,
if not impossible, solution for many carriers.

Higher Speed TDM: Deploying STM-64/OC-192 (10 Gb/s)
As indicated earlier,
STM64/OC192 is becoming an option for
carriers seeking higher capacity, but there are
significant issues surrounding this solution that
may restrict its applicability. The vast majority
of the existing fiber plant is single-mode fiber
(SMF) that has high dispersion in the 1550 nm
window, making STM64/OC192 transmission
difficult. In fact, dispersion has a 16 times
greater effect with STM64/OC192 equipment
than with STM16/OC48. As a result, effective
STM64/OC192 transmission requires either
some form of dispersion compensating fiber or
entire new fiber builds using non-zero dispersion
shifted fiber (NZDSF) which costs some 50
percent more than SMF. The greater carrier
transmission power associated with the higher bit
rates also introduces nonlinear optical effects
that cause degraded wave form quality. The
effects of Polarization Mode Dispersion (PMD), which, like other forms of dispersion, affects the distance a light pulse can travel without signal degradation, are of particular concern for STM-64/OC192. This problem,
barely noticed until recently, has become
significant because as transmission speeds
increase, dispersion problems grow
exponentially thereby dramatically reducing the
distance a signal can travel. PMD appears to
limit the reliable reach of STM64/OC192 to
about 70 kms on most embedded fiber. Although
there is a vigorous and ongoing debate within the
industry over the extent of PMD problems, some
key issues are already known. PMD is
particularly acute in the conventional single
mode fiber that comprises the vast majority of
the existing fiber plant, as well as in aerial fiber.
Unlike other forms of dispersion that are fairly
predictable and easy to measure, PMD varies
significantly from cable to cable. Moreover,
PMD is affected by environmental conditions,
making it difficult to determine ways to offset its
effect on high bit rate systems. As a result,
carriers must test nearly every span of fiber for
its compatibility with STM64/OC192; in
many cases, PMD will rule out its deployment
altogether.

A Third Approach: DWDM

Dense Wavelength Division
Multiplexing (DWDM) is a technology that
allows multiple information streams to be
transmitted simultaneously over a single fiber at
data rates as high as the fiber plant will allow
(e.g. 2.4 Gb/s).The DWDM approach multiplies
the simple 2.4 Gb/s system by up to 16 times,
giving an immense and immediate increase in
capacityusing embedded fiber! A sixteen
channel system (which is available today)
supports 40 Gb/s in each direction over a fiber
pair, while a 40 channel system under
development will support 100 Gb/s, the
equivalent of ten STM64/OC192 transmitters!
The benefits of DWDM over the first two options for increasing capacity, adding fiber plant or deploying STM64/OC192, are clear.
Dense Wavelength Division
Multiplexing (DWDM)
DWDM technology utilizes a
composite optical signal carrying multiple
information streams, each transmitted on a
distinct optical wavelength. Although
wavelength division multiplexing has been a
known technology for several years, its early
application was restricted to providing two
widely separated wideband wavelengths, or to
manufacturing components that separated up to
four channels. Only recently has the technology
evolved to the point that parallel wavelengths
can be densely packed and integrated into a
transmission system, with multiple,
simultaneous, extremely high frequency signals
in the 192 to 200 terahertz (THz) range. By
conforming to the ITU channel plan, such a
system ensures interoperability with other
equipment and allows service providers to be
well positioned to deploy optical solutions
throughout their networks. The 16 channel
system in essence provides a virtual 16-fiber
cable, with each frequency channel serving as a
unique STM16/OC48 carrier.
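As a back-of-the-envelope illustration of such a channel plan, the sketch below places 16 channels on an assumed 100 GHz ITU-style grid starting near 192.1 THz and sums their STM16/OC48 payloads; the exact grid used by any particular system may differ.

```python
C = 299_792_458.0                      # speed of light [m/s]

def channel_plan(n_channels=16, start_thz=192.1, spacing_ghz=100.0):
    """Center frequencies [THz] and wavelengths [nm] of an assumed DWDM grid."""
    plan = []
    for k in range(n_channels):
        f_thz = start_thz + k * spacing_ghz / 1000.0
        wavelength_nm = C / (f_thz * 1e12) * 1e9
        plan.append((f_thz, wavelength_nm))
    return plan

channels = channel_plan()
aggregate_gbps = 16 * 2.488            # 16 x STM16/OC48, roughly 40 Gb/s per direction
# channels[0]  -> (192.1 THz, ~1560.6 nm)
# channels[15] -> (193.6 THz, ~1548.5 nm)
```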



The most common form of DWDM uses a fiber
pair, one for transmission and one for reception.
Systems do exist in which a single fiber is used
for bidirectional traffic, but these configurations
must sacrifice some fiber capacity by setting
aside a guard band to prevent channel mixing;
they also degrade amplifier performance. In
addition, there is a greater risk that reflections
occurring during maintenance or repair could
damage the amplifiers. In any event, the
availability of mature supporting technologies,
like precise demultiplexers and Erbium Doped
Fiber Amplifiers (EDFA), has enabled DWDM
with eight, sixteen, or even higher channel
counts to be commercially delivered.





Demultiplexers

With signals as precise and
as dense as those used in DWDM, there needed
to be a way to provide accurate signal separation,
or filtration, on the optical receiver. Such a
solution also needed to be easy to implement and
essentially maintenance free. Early filtering
technology was either too imprecise for DWDM,
too sensitive to temperature variations and
polarization, too vulnerable to crosstalk from
neighboring channels, or too costly. This
restricted the evolution of DWDM. To meet the
requirements for higher performance, a more
robust filtering technology was developed that
makes DWDM possible on a cost effective basis:
the in-fiber Bragg grating. The new filter
component, called a fiber grating, consists of a
length of optical fiber wherein the refractive
index of the core has been permanently modified
in a periodic fashion, generally by exposure to an
ultraviolet interference pattern. The result is a
component which acts as a wavelength
dependent reflector and is useful for precise
wavelength separation. In other words, the fiber
grating creates a highly selective,
narrow bandwidth filter that functions somewhat
like a mirror and provides significantly greater
wavelength selectivity than any other optical
technology. The filter wavelength can be
controlled during fabrication through simple
geometric considerations which enable
reproducible accuracy. Because this is a passive
device, fabricated into glass fiber, it is robust and
durable.
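The geometric relationship referred to above is the Bragg condition, lambda_B = 2 * n_eff * Lambda, where n_eff is the effective refractive index of the fiber core and Lambda is the grating period; the short sketch below uses illustrative values.

```python
def bragg_wavelength(n_eff, period_nm):
    """Reflected (Bragg) wavelength of a fiber grating: lambda_B = 2 * n_eff * period."""
    return 2.0 * n_eff * period_nm

# Illustrative values: n_eff ~ 1.447 and a grating period of ~535 nm
# give a reflector near 1548 nm, in the 1550 nm window used by DWDM.
lambda_b = bragg_wavelength(1.447, 535.0)
```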


The advent of the Erbium
Doped Fiber Amplifier (EDFA) enabled
commercial development of DWDM systems by
providing a way to amplify all the wavelengths
at the same time. This optical amplification is
done by incorporating Erbium ions into the core
of a special fiber in a process known as doping.
Optical pump lasers are then used to transfer
high levels of energy to the special fiber,
energizing the Erbium ions which then boost the
optical signals that are passing through.

Significantly, the atomic
structure of Erbium provides amplification to the
broad spectral range required for densely packed
wavelengths operating in the 1550nm region,
optically boosting the DWDM signals. Instead of
multiple electronic regenerators, which required
that the optical signals be converted to electrical
signals then back again to optical ones, the
EDFA directly amplifies the optical signals.
Hence the composite optical signals can travel up
to 600 km without regeneration and up to 120 km between amplifiers in a commercially
available, terrestrial, DWDM system.





Parlaying New Technologies Into a DWDM System
The fiber Bragg grating and the EDFA
represented significant technological
breakthroughs in their own right, but the
bandwidth potential associated with these
innovations could only be realized by their
incorporation into integrated DWDM transport
systems for optical networks. Without such a
development the fiber grating would retain
component status similar to other passive WDM
devices, while the power potential of EDFAs
would remain underutilized. The ability to
harness the potential of these technologies,
however, is realizable today through
commercially available, integrated, DWDM
systems. Such a system is attained through the
use of Optical Add-Drop Multiplexers (OADM)
and sophisticated network management tools.






Optical Add-Drop Multiplexers
The OADM, based on DWDM technology, is
moving the telecommunications industry
significantly closer to the development of optical
networks. The OADM can be placed between
two end terminals along any route and be
substituted for an optical amplifier.
Commercially available OADMs allow carriers
to drop and/or add up to four STM16/OC48
channels between DWDM terminals.
The OADM has express channels that allow
certain wavelengths to pass through the node
uninterrupted, as well as broadcast capabilities
that enable information on up to four channels to
be dropped and simultaneously continue as
express channels. By deploying an OADM
instead of an optical amplifier, service providers
can gain flexibility to distribute revenue
generating traffic and reduce costs associated
with deploying end terminals at low traffic areas
along a route. The OADM is especially well-
suited for meshed or branched network
configurations, as well as for ring architectures
used to enhance survivability. Such flexibility is
less achievable with current STM64/OC192
offerings.


Network Management

A critical yet often underappreciated part of any telecommunications network is the management system, whose reliability is especially vital in the complex and high-capacity world of DWDM. Indeed,
dependable and easily accessible network
management services increasingly will become a
distinguishing characteristic of high-performance, high-capacity systems. Today's leading DWDM
systems include integrated, network management
programs that are designed to work in
conjunction with other operations support
systems (OSSs) and are compliant with the
standards the International Telecommunication
Union (ITU) has established for the Telecommunications Management Network (TMN). Current systems utilize an optical service
channel that is independent of the working
channels of the DWDM product to create a
standards-based data communications network
that allows service providers to remotely monitor
and control system performance and use. This
network manager communicates with each node
in the system and also provides dual homing
access and self-healing routing information in
the event of a network disruption. By meeting
ITU standards and utilizing a Q3 interface, the
system ensures that end users retain high
Operations, Administration, Maintenance, and
Provisioning (OAM&P) service.




Measurements of Performance

There are several aspects that make the design of
DWDM systems unique. A spectrum of DWDM
channels may begin to accumulate tilt and ripple
effects as the signals propagate along a chain of
amplifiers. Furthermore, each amplifier
introduces amplified spontaneous emissions
(ASE) into the system, which cause a decrease in
the signal to noise ratio, leading to signal
degradation. Upon photo detection, some other
features of optically amplified systems come into
play. The Bit Error Rate (BER) is determined
differently in an optically amplified system than
in a conventional regenerated one. The
probability of error in the latter is dominated by
the amount of receiver noise. In a properly
designed optically amplified system, the
probability of error in the reception of a binary
value of one is determined by the signal mixing
with the ASE, while the probability of error in
the reception of a binary value of zero is
determined by the ASE noise value alone.

Optical SNR and Transmitted Power
Requirements of DWDM Systems

Ultimately, the BER
performance of a DWDM channel is determined
by the optical SNR that is delivered to the photo
detector. In a typical commercial system, an
optical SNR of approximately 20 dB, measured
in a 0.1 nm bandwidth, is required for an
acceptably low BER of 10^-15. This acceptable
SNR is delivered through a relatively
sophisticated analysis of signal strength per
channel, amplifier distances, and the frequency
spacing between channels. For a specific SNR at
the receiver, the amount of transmit power
required in each channel is linearly proportional
to the number of amplifiers as well as the noise
and SNR of each amplifier, and is exponentially
proportional to the loss between amplifiers.
Because total transmit power is constrained by
present laser technology and fiber nonlinearities,
the workable key factor is amplifier spacing.
This is illustrated in the accompanying graph by
showing the relationship for a fiber plant with a
loss of 0.3 dB/km, a receiver with a 0.1 nm optical
bandwidth, and optical amplifiers with a 5 dB
noise figure. The system illustrated is expected
to cover 600 kms and the optical SNR required
at the receiver is 20 dB measured in the 0.1 nm
bandwidth.
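A commonly quoted rule of thumb ties these quantities together: for N identical spans, OSNR(0.1 nm) is approximately 58 dB + P_ch - L_span - NF - 10*log10(N), where the 58 dB term comes from the 0.1 nm reference noise bandwidth at 1550 nm. The sketch below rearranges this to estimate the per-channel launch power for the example system described above (600 km covered with 120 km spans, 0.3 dB/km loss, 5 dB noise figure, 20 dB target SNR); it is an approximation for illustration, not the analysis used in any particular product.

```python
import math

def required_channel_power_dbm(target_osnr_db, span_km, n_spans,
                               loss_db_per_km=0.3, nf_db=5.0):
    """Per-channel launch power [dBm] from the 0.1 nm OSNR rule of thumb:
    OSNR = 58 + P_ch - L_span - NF - 10*log10(N_spans)."""
    span_loss_db = span_km * loss_db_per_km
    return (target_osnr_db - 58.0 + span_loss_db + nf_db
            + 10.0 * math.log10(n_spans))

# 600 km covered with 120 km amplifier spacing -> 5 spans of 36 dB loss each.
p_ch = required_channel_power_dbm(20.0, 120.0, 5)   # roughly 10 dBm per channel
```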



Applications for DWDM

As occurs with many new technologies, the
potential ways in which DWDM can be used are
only beginning to be explored. Already,
however, the technology has proven to be
particularly well suited for several vital
applications.

DWDM is ready made for long-distance
telecommunications operators that use either
point-to-point or ring topologies. The sudden
availability of 16 new transmission channels
where there used to be one dramatically
improves an operator's ability to expand capacity
and simultaneously set aside backup bandwidth
without installing new fiber.

This large amount of capacity is critical to the
development of self-healing rings, which
characterize today's most sophisticated telecom
networks. By deploying DWDM terminals, an
operator can construct a 100% protected, 40 Gb/s
ring, with 16 separate communication signals
using only two fibers.
Operators that are building or expanding their
networks will also find DWDM to be an
economical
way to incrementally increase capacity, rapidly
provision new equipment for needed expansion,
and future-proof their infrastructure against
unforeseen bandwidth demands.
Network wholesalers can take advantage of
DWDM to lease capacity, rather than entire
fibers, either to existing operators or to new
market entrants. DWDM will be especially
attractive to companies that have low fiber count
cables that were installed primarily for internal
operations but that could now be used to
generate telecommunications revenue.





The transparency of DWDM systems to various
bit rates and protocols will also allow carriers to
tailor and segregate services to various
customers along the same transmission routes.
DWDM
allows a carrier to provide STM4/OC12
service to one customer and STM16/OC48
service to another, all on a shared ring!
In regions with a fast-growing industrial base, DWDM is also one way to utilize the existing
thin fiber plant to quickly meet burgeoning
demand.


The Future of DWDM
Building Block of the
Photonic Network
DWDM is already established as the preferred
architecture for relieving the bandwidth crunch
many
carriers face. Several US carriers have settled
on DWDM at STM16/OC48 rates as their
technology
of choice for gaining more capacity. With 16
channel DWDM now being deployed
throughout the carrier infrastructure, and with
a 40 channel system coming, DWDM will
continue to be an essential element of future
interoffice fiber systems. Indeed, deployment
of DWDM is a critical first step toward the
establishment of photonic networks in the
access, interoffice, and interexchange
segments of today's telecommunication
infrastructure.



Given the rapidly changing and unpredictable
nature of the telecommunications industry, it is
imperative that today's DWDM systems have the
ability to adapt to future technological
deployments and network configurations.
DWDM systems with an open architecture
provide such adaptability and prepare service
providers to take full advantage of the emerging
photonic network.
For example, DWDM systems with open
interfaces give operators the flexibility to
provide SONET/SDH, asynchronous/PDH,
ATM, Frame Relay, and other protocols over the
same fiber. Open systems also eliminate the need
for additional high-performance optical
transmitters to be added to a network when the
need arises to interface with specific protocols.
Rather, open systems allow service providers to
quickly adapt
new technologies to the optical network through
the use of off-the-shelf, relatively inexpensive,
and readily available transmitters.
In contrast to DWDM equipment based on
proprietary specifications, systems with open
interfaces provide operators greater freedom to
provision services and reduce long-term costs.
Proprietary based systems, in which
SONET/SDH equipment is integrated into the
optical multiplexer/demultiplexer unit, are
adequate for straight point-to-point
configurations. Nevertheless, they require
additional and costly transmission equipment
when deployed in meshed networks.


ANNEX: Practical Considerations of
DWDM Deployment
Based on bit rate alone,
DWDM has a fourfold advantage even over the
latest, albeit nascent, TDM option, STM-64/OC-192. To fairly compare the two
technologies, however, we need to review and
outline what would be an ideal technological
solution for expanding network capacity. This
has to be done in a broad sense, recognizing that
there are instances in which TDM may offer a
better solution than DWDM. Analyzing the
alternative attributes and benefits of each
approach would require a comparison of several
key issues:
1. Compatibility with Fiber Plant.
The majority
of the legacy fiber plant cannot support high bit
rate TDM. Earlier vintage fiber has some
attributes that lead to significant dispersion and
would, therefore, be incompatible with high bit
rate TDM. Recently produced fiber (NZDSF, for example) is flexible enough for the latest
TDM equipment, but it is expensive and may
limit the ability of carriers to migrate to the
greater bandwidth available through DWDM at
STM-16/OC-48 rates.

2. Transparency and Interoperability.
The chosen
solution must provide interoperability
between all vendors' transmission equipment,
both existing and new. It must be vendor
independent and conform to international
standards such as the proposed ITU channel
spacing and be based on the Open Systems
Interconnection (OSI) model. Furthermore, it
must be capable of supporting mixed protocols
and signal formats. Some commercially available
DWDM systems provide such transparency and
can be used with any SONET/SDH bit rates, as
well as with asynchronous/PDH protocols.

3. Migration and Provisioning Strategy.
The best
solution must also offer the ability to expand. It
must be capable of supporting differing bit rates
and have channel upgrade
capability. It has to be a long-term solution and
not just a short-term fix. TDM systems already
are reaching their technological barriers and
STM-64/OC-192, although rich in capacity,
may represent a practical limit that could only be
superseded by DWDM.

4. Network Management.
A properly engineered
solution should also support a comprehensive
network element management system. The solu-
tion must meet international standards, interface
with the carrier's existing operating system, and
provide direct connection for all of the network
elements for performance monitoring, fault
identification and isolation, and remedial action.
Sophisticated and reliable network management
programs will become increasingly important to
deal with the increased complexity and expanded
capacity that will be unleashed through
migration
to optical networks.
5. Technical Constraints.

The systems
deployed must be able to resolve some of the
outstanding technical issues present in current
lightwave transmission systems. For example,
signal dispersion compensation, filtering and
channel cross talk, nonlinear four-wave mixing,
and physical equipment density are some of the
more common problems. Ideally, an optimized
system level architecture that provides a coherent
and unified approach should be chosen over one
that involves the acquisition and deployment of
components on a piecemeal and uncoordinated
basis.

Conclusion
Twenty years ago, who could
have predicted the success of personal
computers, much less the growth of the Internet
and the Web? The next 20 years are likely to
bring surprises, too. If we recognize this and
install robust communications plant, we will be
prepared for them.
Optical networking provides
the backbone to support existing and emerging
technologies with almost limitless amounts of
bandwidth capacity. All-optical networking (not
just point-to-point transport) enabled by optical
cross-connects, optical programmable add/drop
multiplexers, and optical switches provides a
unified infrastructure capable of meeting the
telecommunications demands of today and
tomorrow. Transparently moving trillions of bits
of information efficiently and cost-effectively
will enable service providers to maximize their
embedded infrastructure and position themselves
for the capacity demand of the next millennium.
References
1."Fiber-Optic Communication Systems -
Second Edition"
--Written by Govind P. Agrawal


2."Photonic Networks - Advances in Optical
Communications"
-- Written by Giancarlo Prati (Ed.)
3."Optical Fiber Communication Systems"
-- Written by Leonid Kazovsky,
Sergio Benedetto & Alan Wilner
Websites
http://www.iec.org
www.searchnetworking.techtarget.com
http://www.techguide.com






EDGE TECHNOLOGY




Presented By

Y.PHANI GOPAL A.V.RAJA SEKHAR
ID No: 05775A0405 ID NO: 04771A04B8
phanis.yalamanchili@gmail.com raja_allamneni@yahoo.co.in
Ph: 9848595849 Ph: 9912117219


B-tech III year, E.C.E,
Rao & Naidu Engineering College,
Ongole.









ABSTRACT

This paper discusses one of the most promising upcoming technologies in mobile
communication: EDGE technology. EDGE is the next step in the
evolution of GSM. The objective of the new technology is to increase data
transmission rates and spectrum efficiency and to facilitate new applications and
increased capacity for mobile use. EDGE can be introduced in two ways: (1) as a
packet-switched enhancement for general packet radio service (GPRS), known as
enhanced GPRS or EGPRS, and (2) as a circuit-switched data enhancement called
Enhanced Circuit-Switched Data (ECSD).
The purpose of this paper is to describe EDGE technology and how it
leverages existing GSM systems and complements WCDMA for further growth by
comparing each aspect of the technology with GPRS. This paper focuses on the packet-
switched enhancement for GPRS, called EGPRS. GPRS allows data rates of 115 kbps
and, theoretically, of up to 160 kbps on the physical layer. EGPRS is capable of
offering data rates of 384 kbps and, theoretically, of up to 473.6 kbps. A new
modulation technique and error-tolerant transmission methods, combined with
improved link adaptation mechanisms, make these EGPRS rates possible. This is the
key to increased spectrum efficiency and enhanced applications, such as wireless
Internet access, e-mail and file transfers. EGPRS will be one of the pacesetters in the
overall Wireless technology evolution in conjunction with WCDMA.










1.Introduction:
GPRS and EGPRS have different protocols and different behavior on the base
station system side. However, on the core network side, GPRS and EGPRS share the
same packet-handling protocols and, therefore, behave in the same way. Reuse of the
existing GPRS core infrastructure (serving GPRS support node/gateway GPRS support
node) emphasizes the fact that EGPRS is only an add-on to the base station system and
is therefore much easier to introduce than GPRS (Figure 1).


In addition to enhancing the throughput for each data user, EDGE also
increases capacity. With EDGE, the same time slot can support more users. This
decreases the number of radio resources required to support the same traffic, thus
freeing up capacity for more data or voice services. EDGE makes it easier for circuit-
switched and packet-switched traffic to coexist while making more efficient use of the
same radio resources. Thus in tightly planned networks with limited spectrum, EDGE
may also be seen as a capacity booster for the data traffic.


2.EDGE technology:
2.1 EDGE Modulation technique:
The modulation type that is used in GSM is the Gaussian minimum shift
keying (GMSK), which is a kind of phase modulation. This can be visualized in an I/Q
diagram that shows the real (I) and imaginary (Q) components of the transmitted
signal (Figure 3). Transmitting a zero bit or a one bit is then represented by changing the
phase by increments of π. Every symbol that is transmitted represents one bit; that is,
each shift in the phase represents one bit. To achieve higher bit rates per time slot than
those available in GSM/GPRS, the modulation method requires change. EDGE is
specified to reuse the channel structure, channel width, channel coding and the
existing mechanisms and functionality of GPRS and HSCSD. The modulation
standard selected for EDGE, 8-phase shift keying (8PSK), fulfills all of those
requirements. 8PSK modulation has the same qualities in terms of generating
interference on adjacent channels as GMSK. This makes it possible to integrate EDGE
channels into an existing frequency plan and to assign new EDGE channels in the
same way as standard GSM channels. The 8PSK modulation method is a linear
method in which three consecutive bits are mapped onto one symbol in the I/Q plane.










The symbol rate, or the number of symbols sent within a certain period of time,
remains the same as for GMSK, but each symbol now represents three bits instead of
one. The total data rate is therefore increased by a factor of three. The distance
between the different symbols is shorter using 8PSK modulation than when using
GMSK. Shorter distances increase the risk for misinterpretation of the symbols
because it is more difficult for the radio receiver to detect which symbol it has
received. Under good radio conditions, this does not matter. Under poor radio
conditions, however, it does. The extra bits will be used to add more error correcting
coding, and the correct information can be recovered. Only under very poor radio
environments is GMSK more efficient. Therefore the EDGE coding schemes are a
mixture of both GMSK and 8PSK.
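As a simple illustration of the three-bits-per-symbol property described above, the sketch below maps a bit stream onto a Gray-coded 8PSK constellation. The mapping table is illustrative only; the EDGE specification defines its own exact bit-to-symbol mapping and additionally applies a continuous 3π/8 rotation between symbols, which is omitted here.

import numpy as np

# Gray-coded mapping of 3-bit groups onto the 8 phase positions of 8PSK.
GRAY_ORDER = [0b000, 0b001, 0b011, 0b010, 0b110, 0b111, 0b101, 0b100]
BITS_TO_INDEX = {bits: k for k, bits in enumerate(GRAY_ORDER)}

def map_8psk(bits):
    """Map a bit sequence (length divisible by 3) to complex 8PSK symbols."""
    assert len(bits) % 3 == 0
    symbols = []
    for i in range(0, len(bits), 3):
        b = (bits[i] << 2) | (bits[i + 1] << 1) | bits[i + 2]
        k = BITS_TO_INDEX[b]
        symbols.append(np.exp(1j * 2 * np.pi * k / 8))
    return np.array(symbols)

bits = [1, 0, 1, 0, 0, 1, 1, 1, 0]   # 9 bits -> 3 symbols
print(map_8psk(bits))
# GMSK carries one bit per symbol, so at the same symbol rate 8PSK
# transports three times as many bits per time slot.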
2.2 Coding schemes:
For GPRS four coding schemes designated CS1 through CS4, are defined.
Each has different amounts of error-correcting coding that is optimized for different
radio environments. For EGPRS, nine modulation coding schemes, designated MCS1
through MCS9, are introduced. These fulfill the same task as the GPRS coding
schemes. The lower four EGPRS coding schemes (MCS1 to MCS4) use GMSK,
whereas the upper five (MCS5 to MCS9) use 8PSK modulation. Figure 4 shows both
GPRS and EGPRS coding schemes, along with their maximum throughputs. GPRS
user throughput reaches saturation at a maximum of 20 kbps with CS4, whereas the
EGPRS bit rate continues to increase as the radio quality increases, until throughput
reaches saturation at 59.2 kbps. Both GPRS CS1 to CS4 and EGPRS MCS1 to MCS4
use GMSK modulation with slightly different throughput performances. This is due to
differences in the header size (and payload size) of the EGPRS packets.

This makes it possible to resegment EGPRS packets. A packet sent with a
higher coding scheme (less error correction) that is not properly received, can be
retransmitted with a lower coding scheme (more error correction) if the new radio
environment requires it. This resegmenting (retransmitting with another coding
scheme) requires changes in the payload sizes of the radio blocks, which is why
EGPRS and GPRS do not have the same performance for the GMSK-modulated coding
schemes. Resegmentation is not possible with GPRS.



3.Packet handling:
Another improvement that has been made to the EGPRS standard is the ability
to retransmit a packet that has not been decoded properly with a more robust coding
scheme. For GPRS, resegmentation is not possible.



Once packets have been sent, they must be retransmitted using the original coding
scheme even if the radio environment has changed. This has a significant impact on
the throughput, as the algorithm decides the level of confidence with which the link
adaptation (LA) must work. Below is an example of packet transfer and retransmission
for GPRS (Figure 5).

The GPRS terminal receives data from the network on the downlink. Due to a
GPRS measurement report that was previously received, the link adaptation
algorithm in the base station controller decides to send the next radio blocks
(e.g., numbers 1 to 4) with CS3. During the transmission of these packages, the
carrier-to-interference ratio (C/I) decreases dramatically, changing the radio
environment. After the packets have been transmitted, the network polls for a
new measurement report, including the acknowledged /unacknowledged
bitmap that tells the network which radio blocks were received correctly.
The GPRS handset replies with a packet downlink
acknowledged/unacknowledged message containing the information about the
link quality and the bitmap. In this scenario, it is assumed that packets 2 and 3
were sent erroneously.
Based on the new link quality information, the GPRS link adaptation
algorithm will adapt the coding scheme to the new radio environment using
CS1 for the new packets 5 and 6. However, because GPRS cannot resegment
the old packets, packets 2 and 3 must be retransmitted using CS3, although
there is a significant risk that these packets still may not be decoded correctly.
As a result, the link adaptation for GPRS requires careful selection of the
coding scheme in order to avoid retransmissions as much as possible. With
EGPRS, resegmentation is possible. Packets sent with little error protection
can be retransmitted with more error protection, if required by the new radio
environment. The rapidly changing radio environment has a much smaller
effect on the problem of choosing the wrong coding scheme for the next
sequence of radio blocks because resegmentation is possible. Therefore, the
EGPRS link-controlling algorithm can be very aggressive when selecting the
modulation coding schemes.
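A hedged sketch of the retransmission behaviour described above: GPRS must resend a block with its original coding scheme, while EGPRS may resegment it to a more robust scheme, but only within the same payload family. The family names come from the text; the exact membership used below is a simplifying assumption for illustration.

# Illustrative retransmission logic; not the standardized algorithm.
FAMILY = {
    "MCS9": "A", "MCS8": "A", "MCS6": "A", "MCS3": "A",
    "MCS7": "B", "MCS5": "B", "MCS2": "B",
    "MCS4": "C", "MCS1": "C",
}
# Ordered from least to most error protection.
ROBUSTNESS = ["MCS9", "MCS8", "MCS7", "MCS6", "MCS5", "MCS4", "MCS3", "MCS2", "MCS1"]

def retransmission_scheme(original, link_is_poor, egprs=True):
    """Pick the coding scheme for retransmitting a block that was not decoded."""
    if not egprs or not link_is_poor:
        return original                      # no resegmentation: reuse the old scheme
    # EGPRS: step down to the most robust scheme of the same payload family.
    candidates = [s for s in ROBUSTNESS if FAMILY[s] == FAMILY[original]]
    return candidates[-1]

print(retransmission_scheme("MCS9", link_is_poor=True))                # -> MCS3
print(retransmission_scheme("MCS9", link_is_poor=True, egprs=False))   # -> MCS9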


3.1 Interleaving:
To increase the performance of the higher coding schemes in EGPRS (MCS7
to MCS9) even at low C/I, the interleaving procedure has been changed within the
EGPRS standard. When frequency hopping is used, the radio environment is changing
on a per-burst level. Because a radio block is interleaved and transmitted over four
bursts for GPRS, each burst may experience a completely different interference
environment. If just one of the four bursts is not properly received, the entire radio
block will not be properly decoded and will have to be retransmitted. In the case of
CS4 for GPRS, hardly any error protection is used at all. With EGPRS, the standard
handles the higher coding scheme differently than GPRS to combat this problem.
MCS7, MCS8 and MCS9 actually transmit two radio blocks over the four bursts, and
the interleaving occurs over two bursts instead of four. This reduces the number of
bursts that

must be retransmitted should errors occur. The likelihood of receiving two consecutive
error free bursts is higher than receiving four consecutive error free bursts. This means
that the higher coding schemes for EDGE have a better robustness with regard to
frequency hopping.
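The benefit of interleaving over two bursts instead of four can be quantified with a quick calculation, assuming, purely for illustration, that bursts fail independently with the same probability.

# Probability that a lightly protected radio block is decodable, assuming each
# burst is received correctly with probability p and bursts fail independently
# (an idealisation; real interference is correlated across bursts).
def block_ok(p_burst_ok, bursts_per_block):
    return p_burst_ok ** bursts_per_block

for p in (0.99, 0.95, 0.90):
    four = block_ok(p, 4)   # GPRS-style: one block interleaved over 4 bursts
    two = block_ok(p, 2)    # EGPRS MCS7-9: two blocks, each over 2 bursts
    print(f"p = {p:0.2f}:  4-burst block {four:0.3f},  2-burst block {two:0.3f}")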


4.EGPRS link controlling function:
To achieve the highest possible throughput over the radio link, EGPRS uses a
combination of two functionalities: link adaptation and incremental redundancy.
Compared to a pure link adaptation solution, this combination of mechanisms
significantly improves performance.

4.1 Link adaptation:
Link adaptation uses the radio link quality, measured either by the mobile
station in a downlink transfer or by the base station in an uplink transfer, to select the
most appropriate modulation coding scheme for transmission of the next sequence of
packets. For an uplink packet transfer, the network informs the mobile station which
coding scheme to use for transmission of the next sequence of packets. The
modulation coding scheme can be changed for each radio block (four bursts), but a
change is usually initiated by new quality estimates. The practical adaptation rate is
therefore decided by the measurement interval. There are three families: A, B and C.
Within each family, there is a relationship between the payload sizes, which makes
resegmentation for retransmissions possible.
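A minimal sketch of the link adaptation step described above: choose the modulation and coding scheme for the next sequence of radio blocks from a measured carrier-to-interference estimate. The C/I thresholds below are invented for the sketch; real networks derive them from measurement reports and operator tuning, not from the EGPRS standard itself.

# Illustrative link adaptation based on a measured C/I value (thresholds assumed).
MCS_THRESHOLDS_DB = [
    (24, "MCS9"), (21, "MCS8"), (18, "MCS7"), (15, "MCS6"),
    (12, "MCS5"), (9, "MCS4"), (6, "MCS3"), (3, "MCS2"),
]

def select_mcs(cir_db):
    for threshold, mcs in MCS_THRESHOLDS_DB:
        if cir_db >= threshold:
            return mcs
    return "MCS1"   # most robust scheme for very poor radio conditions

print(select_mcs(22.5))   # -> MCS8
print(select_mcs(4.0))    # -> MCS2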

4.2 Incremental redundancy:
Incremental redundancy initially uses a coding scheme, such as MCS9, with
very little error protection and without consideration for the actual radio link quality.
When information is received incorrectly, additional coding is transmitted and then
soft combined in the receiver with the previously received information. Soft
combining increases the probability of decoding the information. This procedure will
be repeated until the information is successfully decoded. This means that information
about the radio link is not necessary to support incremental redundancy. For the
mobile stations, incremental redundancy support is mandatory in the standard.
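The soft-combining idea can be sketched numerically: soft values (for example log-likelihood-ratio-like estimates) from each transmission of the same bits are added, so every retransmission raises the probability that the block finally decodes. This is a toy model with an assumed decode rule, not the actual EGPRS receiver algorithm.

import numpy as np

rng = np.random.default_rng(0)

def soft_values(true_bits, snr):
    """Noisy soft estimates of +/-1 bits; higher snr -> more reliable values."""
    signal = 2 * np.asarray(true_bits) - 1
    return snr * signal + rng.normal(size=len(true_bits))

true_bits = rng.integers(0, 2, size=1000)

combined = np.zeros(len(true_bits))
for attempt in range(1, 4):                      # first transmission + retransmissions
    combined += soft_values(true_bits, snr=0.8)  # soft combining of all attempts
    decoded = (combined > 0).astype(int)
    bit_errors = int(np.sum(decoded != true_bits))
    print(f"after {attempt} transmission(s): {bit_errors} bit errors")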



5.CONCLUSION:
EGPRS introduces a new modulation technique, along with improvements to
the radio protocol, that allows operators to use the existing frequency spectrum (800, 900,
1800 and 1900 MHz) more effectively. The simple improvements of the existing
GSM/GPRS protocols make EDGE a cost-effective, easy-to-implement add-on.
Software upgrades in the base station system enable use of the new protocol; new
transceiver units in the base station enable use of the new modulation technique.
EDGE triples the capacity of GPRS. This capacity boost improves the performance of
existing applications and enables new services such as multimedia services. It also
enables each transceiver to carry more voice and/or data traffic. EDGE enables new
applications at higher data rates. This will attract new subscribers and increase an
operator's customer base. Providing the best and most attractive services will also
increase customer loyalty. EDGE and WCDMA are complementary technologies that
together will sustain an operator's need for third-generation network coverage and
capacity nationwide. Enhancing a GPRS network is accomplished through evolution
with EDGE within the existing spectrum and by deploying WCDMA in the new
frequency band. Rolling out the two technologies in parallel enables faster time to
market for new high-speed data services as well as lower capital expenditures. EDGE
is designed to integrate into the existing network. The installed base evolves; it is not
replaced or built from scratch, making implementation seamless. Fast, easy rollout
means shorter time to market, which in turn can lead to increased market share. With
EDGE, operators can offer more wireless data applications, including wireless
multimedia, e-mail, web infotainment and positioning services, for both consumer and
business users. Subscribers will be able to browse the Internet on their mobile phones,
personal digital assistants or laptops at the same speed as on stationary personal
computers.





REFERENCES

G. Stuber, Principles of Mobile Communication.

Ericsson, "Downlink GMSK Interference Suppression"

H. Trigui and D. Slock, "Cochannel Interference Cancellation Within the Current GSM Standard."
WWW.ERICSSON.COM













SRI VENKATESWARA UNIVERSITY COLLEGE OF
ENGINEERING
TIRUPATI.



DEPARTMENT OF ELECTRICAL ENGINEERING

This paper is presented by


1. S SRINIVASULU

SVU college of engineering
Room No:1305
Visweswara block
SVUCE hostels
Tirupati
ph:0877-2248466
email:mheart_sreenu@yahoo.com

2. C MADHAVA KRISHNA

SVU college of engineering
Room no:1112
Visweswara block
SVUCE hostels
Tirupati
Ph:0877-2248466
email:eee.cmk@gmail.com












ENERGY HUBS FOR THE FUTURE




ABSTRACT

Most of today's energy infrastructures evolved during the second half of
the twentieth century, and it is questionable if they meet the requirements of
tomorrow. Besides congested transmission systems, many facilities are approaching
the end of their prospected lifetime. In addition, other issues such as the continuously
growing demand for energy, the dependency on limited fossil energy resources, the
restructuring of power industries, and the general aim of utilizing more sustainable
and environmentally friendly energy sources raise the question of whether piecewise
changes of the existing systems are sufficient to cope with all these challenges.
Various scientific studies have investigated future scenarios based on
boundary conditions given by today's structures, such as standardized electric voltage
and gas pressure levels. Restrictions given by the existing systems are basically
neglected in order to determine real optima. The consideration of multiple energy
carriers, not only electricity, represents one of the key characteristics of this project.
There is a belief that synergies among various forms of energy represent a great
opportunity for system improvements. Besides the possibilities of modern information
technology, state of the art as well as emerging and looming energy technologies, e.g.,
fuel cells, are taken into account. The time horizon for implementation is set to 30-50
years from now. Thus, the basic question to be answered is: How should energy
systems look in 30-50 years, and what can be expected from them? Under these
conditions, two key approaches are reasonable: transformation, conversion, and
storage of various forms of energy in centralized units called energy hubs and
combined transportation of different energy carriers over longer distances in single
transmission devices called energy interconnectors. The project team soon realized
that only a few established tools were available for the integrated analysis of multiple
energy carrier systems, thus they focused in a first phase on developing a modeling
and analysis framework. In the second phase, which recently started, optimal system
structures and operation strategies are determined and compared with conventional
infrastructures using the developed tools. The result of this phase is the greenfield
approach. The final phase of the project is dedicated to identifying transition paths
and bridging systems leading from today's systems to the identified optimal
structures. Figure 1 outlines this process.


Industrial, commercial, and
residential consumers require various
forms of energy services provided by
different infrastructures. In the
industrialized part of the world, coal,
petroleum products, biomass, and grid-
bound energy carriers such as
electricity, natural gas, and district
heating/cooling are typically used. So
far, the different infrastructures are
independently. Combining the systems
can result in a number of benefits.
Synergy effects among various energy
carriers can be achieved by taking
advantage of their specific virtues.
Electricity, for example, can be
transmitted over long distances with
comparably low losses; chemical
energy carriers such as natural gas can
be stored employing relatively simple
and cheap technologies. With so-called
line packing techniques, compressible
fluids can be stored in pipeline
networks, even if there are no
dedicated storage devices installed.
Combining the infrastructures means to couple them, thereby enabling exchange of power among them.
Couplings are established
by converter devices that
transform power into other
forms. The question to be
answered is, of course,
where to put which devices
and how to operate them.
Answering this question is
essential for the system
layout and therefore one of
the central issues in the
project. Therefore, models
and methods have been
developed to find the
optimal coupling and power
exchange among multiple
energy carriers based on various
criteria such as cost, emissions, energy
efficiency, availability,
security ,and other
parameters.
The Energy Hub Concept
The key approach in the Vision of Future Energy Networks project is the so-called energy hub. An energy
hub is considered a unit
where multiple energy
carriers can be
converted, conditioned,
and stored. It represents
an interface between
different energy infrastructures and/or
loads. Energy hubs consume power at
their input ports connected to, e.g.,
electricity and natural gas
infrastructures, and provide certain
required energy services such as
electricity, heating, cooling, and
compressed air at the output ports.
Within the hub, energy is converted
and conditioned using, e.g., combined
heat and power technology,
transformers, power-electronic
devices, compressors, heat exchangers,
and other equipment. Real facilities
that can be considered as energy hubs
are, for example, industrial plants
(steel works, paper mills), big building
complexes (airports, hospitals, and
shopping malls), rural and urban
districts, and small isolated systems
(trains, ships, aircraft). Figure 2
shows an example of an energy hub.
The components within the hub may
establish redundant connections between inputs and outputs. For
example, the electricity load connected
to the hub in Figure 2 can be met by
consuming all power directly from the
electricity grid or generating part or all
of the required electricity from natural
gas. This redundancy in supply results
in two important benefits, which can
be achieved using energy hubs. First,
reliability of supply can be increased
from the load's perspective because it
is no longer fully dependent on a single
network. Alternatively, reliability of
the individual infrastructures could be
reduced (e.g., by reducing
maintenance) while availability for the
load remains high. Second, the
additional degree of freedom enables
optimization of the supply of the hub.
Energy carriers offered at the hub's
input can be characterized based on
their cost, related emissions,
availability, and other criteria; the
inputs can then be optimally
dispatched based on these quantities.
In addition, utilizing energy storage
represents an opportunity for
increasing the overall system
performance; therefore, storage is
already taken into account in the
planning phase. Especially when
energy sources with intermittent
primary energy (e.g., wind, solar) are
considered, storage becomes important
since it enables affecting the
corresponding power flows.
Compensation of fluctuating power
flows is possibly the most evident
application of energy storage
technology. However, investigations
have shown that storage can be utilized
in such a way that it positively affects
all of the aforementioned criteria,
especially when considering a
liberalized market environment.
The Interconnector Concept
Integrating different energy carriers is also possible in terms of transmission. In the Vision of Future Energy Networks project, a device named energy interconnector is proposed that enables integrated transportation of electrical, chemical, and thermal energy in one underground device. So far, the most promising layout seems to be a hollow electrical conductor carrying a gaseous medium inside (see Figure 3).
The basic motivation for combined transmission is the possibility of efficiency improvement
due to waste heat recovery. The heat
losses generated in the electrical
conductor are partially stored in the
gas (whose temperature increases
consequently) and could be recovered
at the end of the link. Alternatively,
losses could be used for increasing the
gas temperature before expanding it to
keep the temperature within required
limits. Comparing such dual concepts
with conventional, decoupled
transmission lines shows advantages
and disadvantages.
From an energetic point of view, combined transmission is more efficient if the heat losses can be used
at the end of the link. From a legal
point of view, the device could be
interesting since rights of way and
other issues could be managed for
electrical and chemical transmission
simultaneously. Like normal pipelines,
the energy interconnector can also be
used for gas storage (line pack). An
issue to consider is the dependability
of the interacting power flows
(electricity and gas), which could
reduce supply redundancy.
Considering contingencies on the one
hand, common mode failures could be
a serious issue. On the other hand,
investigations have shown that
operational boundaries arise from the
coupling of the flows. Simply
speaking, a certain gas flow is
necessary to provide sufficient cooling
for the electrical conductor. Studies
have shown that these operational
restrictions can be relieved when
combining energy interconnectors with
energy hubs. However, under certain
circumstances, the energy
interconnector promises better
performance than traditional, separated
transmission technologies. The
integration of gaseous and electrical
energy transmission is only one of
several possible approaches. Concepts
involving liquid chemical carriers or
further forms of energy may be
advantageous as well.


New Models and Analysis Tools
Economic and physical performances of different energy carriers are well understood, but global
features of integrated systems have not
yet been investigated extensively.
Since there are only a few tools
available for the analysis of such
systems, the development of a
modeling and analysis framework for
multicarrier energy systems has
been identified as an essential
need. The aim was to develop the
same tools as are available for
electricity systems e.g., power
flow, economic dispatch,
reliability, and stability. In
addition, models for storage and
interconnector technology were
developed that are capable of
integrating into the system
analysis framework.
Power Flow
For general investigations on the system level, steady-state flow models are appropriate and commonly used.
The flows through power
converter devices are simply analyzed,
defining their energy efficiency as the
ratio of steady-state output and input.
With multiple inputs and outputs, a
conversion matrix can be defined that
links the vectors of the corresponding
power flows. Figure 4 outlines this
modeling concept. The coupling matrix
describes the transformation of power
from the input to the output of the hub;
it can be derived from the hub's converter structure and the converters'
efficiency characteristics. Describing
the behavior of storage devices
requires considering time and energy
as additional variables. Various flow
models are available for hydraulic and
electric networks, from general
network flow to more detailed steady-
state power flow models. The
appropriate degree of approximation
depends on the kind of investigation.
Combined transmission links
(interconnectors) can be modeled
similar to energy hubs via coupling
matrices.
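The coupling-matrix idea can be written down directly: the output loads L equal the coupling matrix C times the input powers P, where the entries of C collect converter efficiencies and dispatch factors. The converters and numbers below (a transformer, a furnace, and a CHP unit) are invented purely to illustrate the structure; they are not taken from the article.

import numpy as np

# Hub inputs:  [electricity, natural gas]      (power at the input ports)
# Hub outputs: [electricity, heat]             (power at the output ports)
# Assumed converters: transformer (eff. 0.98), CHP unit (0.35 electric / 0.45
# thermal) fed with half of the gas, furnace (eff. 0.90) fed with the other half.
nu_gas_to_chp = 0.5                      # dispatch factor for the gas input

C = np.array([
    [0.98, nu_gas_to_chp * 0.35],                                   # electricity out
    [0.00, nu_gas_to_chp * 0.45 + (1 - nu_gas_to_chp) * 0.90],      # heat out
])

P_in = np.array([100.0, 200.0])          # kW drawn from the two infrastructures
L_out = C @ P_in                         # power delivered at the hub outputs

print("electricity out: %.1f kW, heat out: %.1f kW" % (L_out[0], L_out[1]))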
Reliability
Reliability and availability of energy supply is an important design criterion; therefore, models have also been developed for this kind of investigation. Failure and repair rates
can be defined for all components in
the system. Considering an energy hub,
failure and repair rates of the coupling
elements can be stated in matrices
similar to the conversion matrix. The
influence of the energy hub, i.e., an
increase or decrease of availability
between input and output of the hub,
can be analyzed with this approach.
Furthermore, the model can be used in
the optimization process.
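As a hedged illustration of how failure and repair rates translate into supply availability, the sketch below uses the standard steady-state relation A = mu/(lam + mu) for each element and combines the two supply paths of the earlier example (direct grid supply, or gas network plus a CHP converter) in parallel. All rates are assumptions for the sketch.

# Steady-state availability from failure rate lam (1/h) and repair rate mu (1/h).
def availability(lam, mu):
    return mu / (lam + mu)

a_grid = availability(lam=1e-4, mu=0.5)      # direct electricity supply (assumed rates)
a_chp = availability(lam=5e-4, mu=0.2)       # gas-fed converter (assumed rates)
a_gas = availability(lam=2e-4, mu=0.4)       # gas infrastructure itself (assumed rates)

# The electricity load can be served by either path, so the paths act in parallel;
# the gas path itself is a series combination of network and converter.
a_path2 = a_gas * a_chp
a_load = 1 - (1 - a_grid) * (1 - a_path2)

print(f"grid only: {a_grid:.6f},  hub with redundancy: {a_load:.8f}")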
System Optimization
Various optimization problems can be identified when considering integrated multicarrier systems. The basic question of combined optimal
power flow is how much of which
energy carriers the hubs should
consume and how they should be
converted in order to meet the loads at
their outputs. This is an operational
problem. In the planning phase, the
optimal structure of the hub may be of
interest, which can be found by
determining the optimal coupling
matrix that describes the conversions
within the hub. Converters can then be
selected to establish this optimal
coupling, and missing technologies can
be identified. These and other
optimization problems have been
formulated and analyzed using various
criteria such as energy costs, system
emissions, and transmission security
measures. Multi-objective optimization
can be performed by combining
different criteria in composite
objective functions.
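The operational question ("how much of which energy carriers should the hub consume") can be phrased as a small optimization problem. The sketch below is a minimal linear-program formulation, reusing the illustrative coupling matrix and invented prices from the earlier example; it is not the project's optimization framework.

import numpy as np
from scipy.optimize import linprog

# Choose the input powers that meet given output loads at minimum energy cost.
C = np.array([[0.98, 0.175],
              [0.00, 0.675]])             # outputs = C @ inputs (illustrative hub)
load = np.array([120.0, 90.0])            # required electricity and heat, kW
price = np.array([0.20, 0.08])            # assumed cost per kWh of each input carrier

res = linprog(c=price, A_eq=C, b_eq=load, bounds=[(0, None), (0, None)])
if res.success:
    print("optimal inputs [electricity, gas] in kW:", np.round(res.x, 1))
    print("cost per hour: %.2f" % (price @ res.x))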

Evaluation of Investment
When talking about completely new systems on the green field, the question of cost plays one of the most important roles. Energy prices and savings in energy cost can be
estimated, although assumptions are
often critical. The evaluation of
investment costs is more difficult. How
much will new technologies such as
fuel cells cost in 30-50 years? To
avoid speculations based on doubtful
assumptions, the question is put
differently. The justifiable investment
costs are determined by comparing the
performances of the conventional and
the proposed/assumed system. For
example, energy cost and CO2 taxes
6
can be compared for a conventional
system and an optimized greenfield
structure. From the annual savings due
to higher energy efficiency and less
emissions of new technologies, a
present value can be determined that
represents the break-even investment
cost of the new technology. With this
method, results still depend on critical
assumptions such as inflation, compounding, and risk. However, using this tool
for sensitivity analysis yields deeper
insight into economics; it enables
identification of the significant
parameters.
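The break-even argument can be made concrete with a present-value calculation: the annual savings (energy cost plus avoided CO2 taxes) are discounted over the planning horizon, and the resulting sum is the investment cost at which the new technology just pays off. All figures below are placeholder assumptions.

# Break-even investment cost as the present value of annual savings.
def break_even_investment(annual_savings, years, discount_rate):
    return sum(annual_savings / (1 + discount_rate) ** t for t in range(1, years + 1))

savings = 12000.0        # assumed yearly energy-cost and CO2-tax savings (monetary units)
for rate in (0.03, 0.05, 0.08):
    pv = break_even_investment(savings, years=30, discount_rate=rate)
    print(f"discount rate {rate:.0%}: justifiable investment ~ {pv:,.0f}")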
Figure 5 shows an example
where the sensitivity between total
energy efficiency of a cogeneration-equipped energy hub and its justifiable
investment cost was determined. In
this particular case, results show that
even state-of-the-art technology could
keep up with the requirements, i.e.,
installing such cogeneration devices
would be reasonable from an economic
point of view (under certain
assumptions).
A First Application
The energy hub idea was picked up by a municipal utility in Switzerland, the Regionalwerke AG Baden, which plans to build an energy hub
containing wood chip gasification and
methanation and a cogeneration plant.
The idea is to generate synthetic
natural gas (SNG) and heat from wood
chips, a resource which is available in
the company's supply region. The
produced SNG can then either be
directly injected into the utility's natural
gas system, or converted into
electricity via a cogeneration unit and
fed into the electric distribution
network. Waste heat, which accrues in
both cases, can be absorbed by the
local district heating network. The
whole system can be seen as an energy
hub processing different energy
carriers: wood chips, electricity, heat,
and SNG. In addition to these energy
carriers, the gasification process
requires nitrogen and steam, which
have to be provided at the hub input.
Figure 6 gives an overview of the hub
layout. The new thing here is not the
technology used (converters), but the
integrated planning and operation,
which is believed to enable better
overall system performance. The
developed multicarrier analysis tools
can be applied to this energy hub to
answer some fundamental questions.
. Design/Dimensioning: How should the converters be rated, i.e., how much electricity, SNG, and heat should the hub be able to produce?
. Operation: How should the energy hub be operated, i.e., how much electricity/SNG/heat should be generated depending on the actual load situation?
. Storage: Which and how much of which energy carrier should the energy hub be able to store: wood chips, SNG, heat, electricity?
. System Impact: How does the energy
hub influence the overall system
performance in terms of reliability/
availability, energy efficiency, and power quality?
The project is still in
the planning phase. A first version of
the hub, which only contains wood
gasification and cogeneration units,
should be realized by 2009. The full
version, which then includes the
methanation part (thus enabling infeed
of SNG into the natural gas network)
should start running in 2011.


Conclusion
The research project Vision of Future Energy Networks distinguishes itself from others by aiming at a greenfield approach, integrating multiple energy carriers, and considering a timeframe of 30-50 years from now. The definition of energy hubs and the conception of combined interconnector devices represent key approaches towards a multicarrier greenfield layout. Models and tools for technical (e.g., power flow, reliability), economical (e.g., energy and investment cost), and environmental (e.g., CO2 emissions) investigations in multicarrier energy systems have been developed and used in various case studies. The main conclusions that can be drawn so far are as follows.
. The energy hub concept enables new
design approaches for multiple energy
carrier systems.
. The flexible combination of different
energy carriers using conversion and
storage technology offers a powerful
approach for various system
improvements. Energy cost and system
emissions can be reduced, security and
availability of supply can be increased,
congestion can be released, and overall
energy efficiency can be improved.
. The developed modeling and analysis
framework provides suitable tools for
the planning and operation of multiple energy carrier systems.
Future work includes the development
of dynamic modeling and analysis
tools (e.g., for evaluating stability), and the control of a system of
interconnected energy hubs
(centralized versus decentralized,
agent-based). The concepts will be
further refined and elaborated in more
detail using realistic examples and case
studies.

For Further Reading
1. Vision of future energy networks, project home page, 2006. [Online]. Available: http://www.eeh.ee.ethz.ch/psl/research/vofen.html
2. P. Favre-Perrod, M. Geidl, G. Koeppel, and B. Klockl, "A vision of future energy networks," in Proc. IEEE Inaugural Conf. Exposition Africa, Durban, South Africa, 2005.
3. M. Geidl, P. Favre-Perrod, B. Klockl, and G. Koeppel, "A greenfield approach for future power systems."

TECHNICAL PAPER REGISTRATION FORM


Title of the paper : ESD PROTECTION FOR HIGH-SPEED I/O SIGNALS

College : SISTAM COLLEGE OF ENGINEERING

State : ANDHRA PRADESH

Address of college : SISTAM COLLEGE OF ENGINEERING,
AMPOLU ROAD,
SRIKAKULAM,
ANDHRA PRADESH.

Branch : ECE

Participant 1 :
Name : K.UTTAM
Email : konathalauttam@yahoo.co.in

Participant 2 :
Name : R.SRINIVASA RAO
Email : cnu_reesu@yahoo.co.in






ESD PROTECTION FOR HIGH-
SPEED I/O SIGNALS

ABSTRACT:
Electrostatic discharges constitute a
danger for integrated circuits that should
never be underestimated. This paper
focuses on what ESD is, where it occurs,
what damage it causes, and the cost of
that damage at each level.
Electrostatic charging can
occur as a result of friction, as well as for
other reasons. When two non-conducting
materials rub together and are then separated,
opposite electrostatic charges remain on
both. These charges attempt to equalize
each other. A common example of the
generation of such charges is when one
walks with well-insulated shoes on a
carpet that is electrically non-conducting,
causing the body to become charged. If a
conducting object is touched, for example,
a water pipe or a piece of equipment
connected to a ground line, the body is
discharged. The energy stored in the
human body is injected into the object that
is touched, and is converted primarily into
heat. The various levels of human
perception and their ranges are also
discussed. We present some elementary
circuit models in which ESD occurs and
some graphical comparisons of these
models. These test circuits have been
developed to test sensitivity to electrostatic
discharges by simulating various
scenarios. These test circuits are analyzed
in more detail in the following paragraphs.
These should provide the design engineer
with insight into the reliability of these
tests and the effectiveness of the individual
protection circuits, providing the criteria to
decide, in individual cases, whether
additional precautions are necessary. We
also present the classification of failures
caused by ESD, their respective contribution
percentages, and the basic circuits for the
prevention of ESD.
It is critical to identify the I/O interface to
be protected before selecting the ESD
protection device.


Human-Body Model:
The human-body model is
described in MIL-STD-883B. The test is a
simulation, in which the energy stored in a
human body is discharged into an
integrated circuit. The body is charged as a
result of friction, for example. Figure 7
shows the test circuit. In this circuit, a
capacitor (C = 100 pF) is charged through
a high-value resistor to 2000 V, then
discharged through a 1.5-kΩ
resistor into the device under test.
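Taking the component values of this test circuit at face value, a few figures of merit follow directly. The sketch below treats the discharge as a simple RC decay and ignores the parasitic inductance discussed further on.

# Human-body model values quoted in the text (MIL-STD-883).
C = 100e-12        # body capacitance, F
V = 2000.0         # charge voltage, V
R = 1500.0         # discharge resistor, ohms

peak_current = V / R                    # current at the first instant
time_constant = R * C                   # RC decay constant
stored_energy = 0.5 * C * V ** 2        # energy initially held in the capacitor

print(f"peak current  : {peak_current:.2f} A")
print(f"time constant : {time_constant * 1e9:.0f} ns")
print(f"stored energy : {stored_energy * 1e3:.2f} mJ")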


The 100-pF capacitor simulates the
capacitance of the human body. However,
the actual capacitance of the human body
is between 150 pF and 500 pF, depending
on size and contact area (shoe size). Also,
the 1.5-kΩ value of the discharge resistor
must be considered. The internal resistance
of the human body ranges from a few
kilohms to a few hundred kilohms,
depending on various factors, which
include the humidity of the skin. However,
if the discharge takes place through a
metallic object, such as a screwdriver, the
discharge resistor can be assumed to be a
few tens of ohms. For these reasons, the
corresponding Standard IEC 802-2
prescribes a test circuit with a capacitance
of 150 pF (which, in practice, is more
realistic) and a lower value of discharge
resistor (R = 330 Ω). This standard is,
however, concerned with a test
specification for equipment that is not
directly applicable to integrated circuits.
Using a value of 2000 V also is
questionable because, when a discharge
causes a tingling in the tips of the fingers,
the body has been charged to at least 4000 V.
The energy of about 0.4 mWs that must be
dissipated in the actual protection circuit is
comparatively small. The major part of the
energy stored in the capacitor is converted
into heat in the discharge resistor. A
considerably more-important parameter in
the test, according to this method is the
rise time of the current during the
discharge. Standard IEC 802-2 prescribes
a rise time of about 0.7 ns at the actual
location of the discharge. This value is of
interest because, with a fast discharge, at
the first instant only a small part of the
protection circuit conducts. Only during
the subsequent phase (a matter of
nanoseconds) does the current spread over
the complete conducting region of the
protection circuit. Therefore, during the
first moments of the discharge, the danger
of a partial overload of the protection
circuit exists. A similar effect can be
observed with thyristors and triacs. With
such components, the rate of current rise
after triggering must be limited because, at
first, only a small area of the
semiconductor near the trigger electrode is
conducting. A high current density can
result in the destruction of the component.
This effect is, however, in many cases
responsible for the fact that, even with
discharges from considerably higher
voltages, the destruction of the circuit does
not necessarily occur. The point at which
the discharge occurs usually is not at the
connections to the integrated circuit, but,
instead, to the cabinet of the equipment or
to the contact of a plug. Between this point
and the endangered integrated circuit there
is a length of conductor that has significant
inductance. This inductance slows the rate
of rise of the current, and helps ensure that
the discharge current is spread evenly over
the complete protection circuit.

Machine Model:
The test using the machine model
simulates the situation in machinery or
other equipment that contains electronic
components or modules. The casing of
such equipment is constructed largely of
metal, but often contains plastic bearings
or other parts having a wide variety of
shapes and sizes. When individual parts of
the machine are in motion, these plastic
bearings can generate electrostatic charges.
Figure 8 shows the test circuit. In this test,
a C = 200-pF capacitor is charged to 500
V, then discharged, without a series
resistor, into the device under test.

Because the charged metal parts
have a very low electrical resistance in this
test circuit, no series resistor is used to
limit the current. Therefore, the peak
current in the device under test is
significantly higher than in the previously
described human-body test circuit.
Whereas, in the human-body test circuit,
extremely short rise times are required, as
a result of the extremely low inductance of
the construction used, considerably higher
inductances of 500 nH are specified in the
discharge circuit of the machine model. As
a result, the rise time of the current and
consequently its amplitude, are limited.
Therefore, the problem of the partial
overload of the protection circuit of the
device under test is reduced significantly.
The energy of 4 mWs to be dissipated is
considerably higher than in the human-body-
model test. Because of the high energy
used in this test, integrated circuits usually
cannot be tested with voltages of 500 V
without damaging the device under test.
As a guideline, assume that components
that survive, without damage, the human-
body model test with a voltage of up to
2000 V, also are not damaged by a
machine-model test using voltages of up to
200 V.
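With no series resistor, the 200 pF capacitor and the specified 500 nH discharge-loop inductance essentially form an LC circuit, so a rough upper bound for the peak current is V·sqrt(C/L). The sketch below evaluates this estimate, ignoring all resistive damping.

import math

# Machine-model values quoted in the text.
C = 200e-12        # F
V = 500.0          # V
L = 500e-9         # H, specified loop inductance

i_peak = V * math.sqrt(C / L)                 # undamped LC estimate of the peak current
t_peak = (math.pi / 2) * math.sqrt(L * C)     # quarter period to reach the current peak

print(f"peak current estimate: {i_peak:.1f} A")
print(f"time to peak         : {t_peak * 1e9:.1f} ns")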

Charged-Device Model:
Despite the informative tests
conducted according to the methods
described in the previous sections, in
practice, damage due to electrostatic
discharges also can occur during the
processing of integrated circuits. It has not
been possible to reproduce the profile of
failures observed during processing by
using normal test equipment. Intensive
investigations show that electrostatic
charging, and consequent discharging, of
the device are responsible for the damage.
Charging occurs when the integrated
circuit slides along plastic transport rails
before being inserted into circuit boards,
and the discharge occurs when the
component lands on the circuit board.
Similarly, damage to the component can
occur after it has been tested, when it
slides from the test station onto the
transport rail, and is damaged by the
electrostatic charging that occurs. During
testing, the integrated circuit was without
fault, but it was damaged immediately
afterward. Because the device package is
small, the capacitances are only a few
picofarads, but the inductances also are
extremely low (see Figure 9).



Therefore, in this case, still shorter
rise times (<200 ps) of the current can be
expected. Because the protection circuit is
only partially conducting, damage to the
circuit can result. The simplified test setup
is shown in Figure 10.

The device under test is placed on
its back on a metal plate. In this way, the
largest possible capacitance of the circuit
to the environment is attained. The circuit
is charged with a moveable charging test
probe and discharged with a second test
probe.
Investigations have shown that
integrated circuits in this test that survive
charging up to 1000 V and subsequent
discharging without damage, can be
processed without problems in assembly
machinery if the usual precautions to
prevent electrostatic charging are taken.
There is no correlation between the results
of the human-body and charged-device
model tests. Components that survive the
human-body model test without damage
do not necessarily behave in the same way
in a charged-device test. Conversely, a
successful charged-device model test gives
no indication of results when a component
is tested according to the human-body
model.

An integrated circuit (IC)
connected to external ports is susceptible
to damaging electrostatic discharge (ESD)
pulses from the operating environment and
peripherals. The same ever-shrinking IC
process technology that enables such high-
port interconnect data rates can also suffer
from higher ESD susceptibility because of
its smaller fabrication geometry.
Additional external protection devices can
violate stringent signaling requirements,
leaving design engineers with the need to
balance performance and reliability.
Traditional methods of shunting ESD energy to protect ICs
involves devices such as zener diodes, metal oxide varistors
(MOVs), transient voltage suppression (TVS) diodes, and
regular complementary metal oxide semiconductor (CMOS)
or bipolar clamp diodes. However, at the much higher data
rates of USB 2.0, IEEE 1394, and digital visual interface
(DVI), the parasitic impedance of traditional protection
devices can distort and deteriorate signal integrity. This
article examines the general parameters designers should
look for in ESD protection devices and how these specifications affect and protect their
application.
ESD Basics:
An ESD event is the transfer of
energy between two bodies at different
electrostatic potentials, either through
contact or via an ionized ambient
discharge (a spark). The models typically
use a capacitor charged to a given voltage,
and then some form of current-limiting
resistor (or ambient air condition) to
transfer the energy pulse to the target. ESD
protection devices attempt to divert this

potentially damaging charge away from
sensitive circuitry and protect the system
from permanent damage, as shown in
Figure 1.
Integration and Matching:
High-speed differential signals,
such as in IEEE 1394, benefit from
matched loading on the positive and
negative lines of each pair. ESD protection
products with multiple devices per
package (such as thin-film silicon) can
have intrachip device-to-device parasitic
impedance matching of less than 0.1%.
Unitary packages, however, may vary as
much as 30% interchip matching. Printed-
circuit-board (PCB) signal routing
restrictions may also indicate a need for
tight multidevice integration.
Other environmental electronic
hazards can also wreak temporary or
permanent havoc on a system. Electrical
fast transients (EFTs) and induced
electromagnetic interference (EMI), or
even lightning strikes, can produce similar
damage or system failures. Each hazard
requires a different approach for
protection.
High-Speed ESD Protection:
As IC manufacturers have achieved
higher frequencies of input/output (I/O)
interconnects, such as with USB 2.0, they
have continued to decrease the minimum
dimensions of the transistors,
interconnections, and the silicon dioxide
(SiO2) insulation layers in their devices.
This decrease results in smaller structures
for higher-speed devices that are more
susceptible to breakdown damage at lower
energy levels. SiO2 layers are more likely
to rupture, and metal traces are more likely
to open or bridge during an ESD event.

The changing application
environment is also contributing to
increased ESD vulnerability. A
proliferation of laptop computers and
handheld devices such as cell phones,
personal digital assistants (PDAs), and
other mobile devices are being used in
uncontrolled environments (i.e., no wrist-
grounding straps or conductive and
grounded table surfaces). In these
environments, people are likely to touch
I/O connector pins during the connecting
and disconnecting of cables.
The traditional methods for
shunting ESD energy away from the ICs
involved devices such as zener diodes and
MOVs that have moderate capacitances of
10 to 100 pF. Now with higher signal
frequencies, these devices cannot be used
without distorting the signal beyond
recognition or detection.
Many ICs are designed with
limited internal ESD protection, allowing
them to tolerate from 1- to 2-kV pulses
(per the human body model [HBM]), but
some ICs are not capable of tolerating
even 100 V without suffering damage.
Many IC data sheets do not even specify
an ESD tolerance voltage, so users must
do their own testing to determine the
tolerance of the IC. The creation of ESD
charges also varies widely with ambient
relative humidity (RH). Walking across a
vinyl tile floor with more than 65% RH
generates only 250 V of ESD; however, if
the RH is less than 25%, normal in dry
environments, electrostatic potentials of
more than 12,000 V can be generated.
ESD Standards:
Two popular standards are used to
ensure uniform testing of devices for ESD
tolerance: the HBM, which came from the
U.S. MIL-STD-883 standard, and the
more-stringent IEC 61000-4-2, which
originated in Europe but is now used
worldwide. The IEC standard requires the
tester to store a charge 50% larger than the
HBM and to discharge it through a resistor
that is about one-fifth the size of the HBM
resistor.
This results in a subnanosecond pulse rise
time and a peak current many times larger.
The highest direct-contact-to-the-pins ESD
voltage of the IEC standard is 8 kV, which
has become something of a de facto
industry standard. Both standards use a
prescribed pulse waveform that testers must duplicate.
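The "50% larger charge" and "about one-fifth the resistor" statements translate directly into component values and first-order peak currents. The sketch below compares the two models at the 8-kV contact level, using the 100 pF / 1.5 kΩ and 150 pF / 330 Ω networks quoted earlier in the text; it ignores the additional fast initial spike of a real IEC discharge.

# First-order comparison of the two ESD test models at the same test voltage.
def peak_current(voltage, resistance):
    return voltage / resistance

V_TEST = 8000.0                        # highest direct-contact level of IEC 61000-4-2

hbm_c, hbm_r = 100e-12, 1500.0         # human-body model network
iec_c, iec_r = 150e-12, 330.0          # IEC-style discharge network

print(f"stored charge  HBM: {hbm_c * V_TEST * 1e6:.2f} uC   IEC: {iec_c * V_TEST * 1e6:.2f} uC")
print(f"peak current   HBM: {peak_current(V_TEST, hbm_r):.1f} A    IEC: {peak_current(V_TEST, iec_r):.1f} A")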
ESD Protection Devices:
A variety of technologies are used in devices for ESD protection.

Zener Diodes:
One traditional device, the zener diode, is generally poorly suited for very high-speed I/O interfaces because the lowest capacitance of existing devices is about 30 pF (shown as a parasitic capacitor in Figure 2). This capacitance is too high to pass high-frequency signals without significant distortion. This distortion results in unreliable detection of the signals and increased high-frequency roll-off. Zener diodes could be made with lower capacitances, but this would result in ESD voltages insufficient to meet the 6- to 8-kV protection levels necessary.

TVS Diodes:
There are some TVS devices on the market that add a regular diode in series with the zener diode to effectively lower the net capacitance. To handle positive- and negative-polarity ESD pulses, a second zener and series diode pair (in the opposite polarity) must be placed in parallel with the first pair of diodes. Unfortunately, the resulting capacitance of 5-6 pF is still not low enough to avoid distortion of high-speed I/O signals.

MOVs:
MOVs can achieve slightly lower capacitances than TVS devices, but currently the lowest-capacitance MOV device available has a capacitance of 3 pF, which can still exceed the allowable load on high-speed interconnects.


Dual-Rail Clamp Diodes:
Zener diode capacitances are high because their structures must be sufficiently robust to tolerate reverse breakdown phenomena. To eliminate the need for the zener's breakdown, regular diodes can be used to clamp the ESD pulses to the power or ground rail. Using this solution, the current flow is always in the diode's forward direction, as shown in Figure 3. This setup
allows the use of smaller, and therefore
lower, capacitance diodes. Positive ESD
pulses are clamped to the positive supply
rail, and negative ESD pulses are clamped
to ground. (The system-bypass capacitors
and power supply are responsible for
shunting this extra energy on the positive
rail back to ground. This can sometimes be
aided by also adding a local zener diode,
which does not affect the signal load.)
At first glance, this scheme can be implemented with standard low-capacitance diodes. These diodes are cheap, readily available, and have a capacitance of 1.5 pF per diode (so the capacitance on the signal is 3 pF for the two diodes). However, this capacitance is relatively high, and an examination of the data sheets reveals that they were not designed for high-current ESD pulses.
These diodes have no specifications that
guarantee their use with the high current
and voltages of ESD pulses or with
repetitive high-current ESD pulses. Some
will degrade and eventually fail at high
ESD voltage and currents.
Polymer Devices:
The polymer devices symbolized in Figure 3 have resistances that are voltage dependent. With a low voltage (e.g., 5 V), the impedance is in the gigohm realm. When a high voltage is applied across the polymer device, the resistance drops to a very low value, so that tens of amperes can be shunted to ground. What makes these polymers attractive for high-frequency applications is their sub-picofarad capacitance (0.05-1.0 pF). This low capacitance, however, comes with some not-so-attractive side effects.
Unlike zener diodes that break down at the same voltage that they clamp to, a polymer device does not break down until it reaches a voltage that is much higher than the final clamping voltage. A typical polymeric ESD device does not break down until as much as 1000 V is reached. Then it snaps back to a clamping voltage of up to 150 V. After the charge is dissipated, the polymer returns to its high-impedance state.
Consequently, polymer devices can be used only in applications in which the ICs that are supposed to be protected must have their own built-in ESD protection that can tolerate the breakdown or trigger voltage of the polymer device (trigger voltages vary from 300 to 1000 V; clamping voltages vary from 60 to 150 V).
These devices can be difficult to
accurately characterize in manufacturing,
so their data sheets often contain only
typical specifications without guaranteed
minimums and maximums. So, there is a
design caveat here. Additionally, because
these are physically elastic devices, their
performance degrades based on the
number of ESD pulses they receive. Their
specifications can only guarantee certain
limits over a lifetime of a given number of
ESD pulses. These lifetimes vary from
1000 pulses to as low as 20 pulses.
Metal Oxide Silicon (MOS) Devices:
A new technology uses a dual-rail clamp configuration as shown in Figure 3. The process technology used to make the diodes, however, is fundamentally different. PicoGuard technology is derived from a MOS process that is optimized for minimum capacitance. Traditional diode structures are derived from simple bipolar technologies and tend to have higher capacitance levels. The new technology is the first to combine low capacitance with low-voltage clamping levels and high ESD tolerance.
These diodes provide ESD protection beyond IEC 61000-4-2 (8-kV-and-above contact) with a capacitance of <1.3 pF maximum (~1.0 pF typical). They have a low insertion loss (virtually zero up to 3 GHz) and a clamping voltage below ±15 V (VCC +10 V, ground −10 V) with no higher trigger voltages. Other specifications include a subnanosecond response time, durability of more than 1000 ESD pulses, and a leakage current of 1.0 µA.
Where to Start:
Before beginning a search for the ideal ESD protection components, it is important to identify the boundary conditions of the I/O interface to be protected. Consider these variables of the unprotected circuit:
Is the node external? If the I/O node is internal to the Faraday cage of a system's metal case, then it may not need any extra protection onboard, because it can only be damaged during assembly and repair, when preventive measures (such as ESD stations) can be guaranteed.
What clamp voltage can the interface chip tolerate? Many application-specific ICs have some level of ESD protection (1 kV). This level is sufficient for manufacturing and placement handling, but not sufficient to tolerate the allowed clamp voltage that some ESD protection devices may let through.
What additional impedance can the signal line tolerate (or need)? Some signals are sensitive enough that merely adding a printed circuit pad for a protection device will deteriorate the signal quality. However, some systems may benefit from the additional signal conditioning of the parasitic impedance.
What type of clamp VDD/ground rails are available? If the local positive supply rail is not available for clamping, zener or other ac clamps may be available.
By starting out with these requirements well known, the myriad parameters in dozens of competing ESD protection device technologies can be quickly distilled into the correct choice for a specific application.
CONCLUSION:
Therefore, everybody who deals with the manufacturing of ICs and other ESD-sensitive electronic components should be aware of ESD, its consequences, and its prevention techniques.




FOG SIGNALLING SYSTEM



















M.VIKRAM B.S.SAI PRASANTH
II Yr EEE II Yr EEE
m_vikram_1987@yahoo.com bssaiprasanth@gmail.com




RAJALAKSHMI ENGG COLLEGE



ABSTRACT

Railway signaling is a system used on railways to control traffic safely, for
example, to prevent trains from colliding. Trains are uniquely susceptible
to collision because, running on fixed rails, they are not capable of
avoiding a collision by steering away, as can a road vehicle; furthermore,
trains cannot decelerate rapidly, and are frequently operating at speeds
where by the time the driver/engineer can see an obstacle, the train cannot
stop in time to avoid colliding with it.

In our Indian railway signaling system, for knowing the position of trains
on the track, tracking circuits, axle counters, etc. are used, and thus the
required signal is passed on to the train crew using the fixed signals. Thus
the signals are not directly passed on to the train engine. Fixed signals
may be semaphore signals or color light signals. Even though these signal
systems are performing very well, it depends upon the train crew who take
notice of these signals which are placed next to the track during the course
of operation of the train and interpret them properly. And moreover the
areas which are covered by large amount of fog hinder the visibility of
signals thus making it difficult for the trains to run on time.

Our project aims at providing a proper signal system which can be
effectively used in the areas covered by fog. We would suggest a method
to pass the signal directly to the train engine with the use of simple
electronic devices such as laser diodes and photodiodes. In our proposed
system laser diode (light emitter) will be placed within a distance of 1m
from the track and will be inclined in such a way that the light emitted from
the laser diode would fall on the photodiode that would be placed on the
surface of the train engine. The signal will be transmitted to the laser diode
which would operate on a low DC supply which will in turn be converted to
a corresponding light signal and transmitted. When a train passes by then
the light emitted from the laser diode would be detected by the photodiode
which will convert back the light signal into the appropriate signal.
This system of signaling does not require any tracking equipment for
sensing the presence of train over a particular area. And if these electronic
devices are produced on a large scale then cost of setting them up reduces
appreciably. This system would reduce the power consumption as
compared to the fixed signals. This signal system can be used anywhere in
the railway network as it does not depend upon the climatic conditions of
an area and moreover signal is directly passed on to the train crew without
involvement of fixed signals.





Importance of railway signals

Railway signaling is a system used on railways to control traffic
safely, for example, to prevent trains from colliding. Trains are
uniquely susceptible to collision because, running on fixed rails,
they are not capable of avoiding a collision by steering away; furthermore,
trains cannot decelerate rapidly.

functions:-
They govern the movement of trains
They permit/prevent a train from entering an area.
They prevent the collision between two trains.


They are essential for the faithful operation of the trains without
causing any loss of life and property.

Railway Signal System

Railway signals are a means of communication beyond the range of
the voice.

The most fundamental signals are the hand, or mobile, signals. They
may be given and received by men on the ground, or on a train. By
day, they are emphasized by a flag, and by night, with lights. A red
flag or light is used for a stop signal, while a white flag or a clear light is used
for general signals.

The signals so far mentioned are well called " mobile" since they can
be displayed at any point. Some signals are, however, displayed
from fixed locations and are called fixed signals.

The advantage of a fixed signal is that its location is known and
permanent, so it can be regularly looked out for.


First of all, we have stations. There may be a number of routes
available for trains entering or leaving a station. Arrival and departure signals
are frequent uses of fixed signals, and are basic in Indian signaling.

At junctions where several routes are available, it was often the
custom for drivers to request the route with the whistle. Either
special whistle signals were established for the junction, or some general rule
was followed, such as sounding as many whistle blasts as the number of the
desired route, counting from the left or right.



Indian Railway Signal System

Semaphore Signals

If two trains cannot be running on the same section of track at the
same time, then they cannot collide. So railway lines are divided into
sections known as blocks, and only one train is normally allowed in
each block at one time. This principle forms the basis of most
railway safety systems.

Types:-
Fixed blocks
Moving blocks
Automatic blocks

Methods of determining if a block is occupied or clear:-
Tracking circuits
Axle counters


Color Light Signals

On most modern railways, detection of the train's position on the line
and signal changing are done automatically by color-light signals.

typical signal system:-

Green: Proceed at line speed. Expect to find next signal green or
yellow.
Yellow: Prepare for next signal to be at red.
Red: Stop.

Double signaling is sometimes used. It derives from semaphore
signaling. Two sets of lights are displayed, one above the other.

The double signals indicate:

green over green: continue
green over yellow: attention, next signal at green over red
green over red: caution, next signal at stop
red over red: stop and stay stopped
red over red with small lamp lit: low speed, 25 km/h.


When a track in a signal block is occupied, the signal which the train
has just passed automatically turns from green (or yellow) to red, the
signal behind that one automatically turns yellow to warn following
trains of the red signal, and the signals behind that one can show
green.

If any train is following behind, the yellow signal will warn it to slow
down in order to stop at the next signal. If, however, the train in front
has passed into the next block, the following train will come across
another yellow signal.

If the train in front is traveling faster than the following train and
clears two blocks, the following train will come across a green
signal.
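As a concrete illustration of the three-aspect rule just described, the following short Python sketch (not part of the original proposal; the function name and the list-based representation are assumptions) derives the colour shown by the signal protecting each block from the block-occupancy information.

def block_aspects(occupied):
    # occupied[i] is True when a train is inside block i.
    n = len(occupied)
    aspects = ["GREEN"] * n
    for i in range(n):
        if occupied[i]:
            aspects[i] = "RED"                 # signal protecting an occupied block
    for i in range(n - 1):
        if aspects[i] != "RED" and aspects[i + 1] == "RED":
            aspects[i] = "YELLOW"              # warn the following train
    return aspects

# A train occupies block 3 of a six-block section:
print(block_aspects([False, False, False, True, False, False]))
# -> ['GREEN', 'GREEN', 'YELLOW', 'RED', 'GREEN', 'GREEN']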


Need for a Proper Signal System in Fog Areas

Whatever may be the device used for tracking the presence of trains
in a particular block or the signals used, the ultimate goal is to pass
the signal to the driver of the train (either by a semaphore signal or
color light signal).

During the time of fog it becomes difficult for the driver to see the
signal, which causes the delay in running of trains.

Also it may lead to collision between trains.

Proposed Signal System:-

This signal system is proposed mainly for regions having a lot of
fog, which causes delays in the running of trains.

In our proposed signal system we would like to transmit the signal
directly to the engine of the train, instead of transmitting it to the
fixed signals.

In this system the signal to be transmitted is sent to laser diodes
that are placed a little away from the track.

Photo detectors are placed on the engine such that they receive the
signal in the form of light and convert it into the appropriate
electrical signal.

Main Components

The main components are:-
Photo emitters
ex: three identical laser diodes corresponding to red, green and
yellow.

Photo detector
ex: photo transistor, photodiode



Photo emitter-Laser Diode

A laser diode is a semiconductor-based laser used to generate analog signals or
digital pulses; it stimulates a chain reaction of photon emission inside a tiny
chamber.

The most common semiconductors used in laser diodes are
compounds based on gallium arsenide (750 to 900 nm in the
infrared), indium gallium arsenide phosphide (1200 to 1700 nm in the
infrared) and gallium nitride (near 400 nm in the blue).

Photo Detectors- Photo Diode

The photodiode is a p-n junction. When light of sufficient photon
energy strikes the diode, it excites an electron thereby creating a
mobile electron and a positively charged electron hole.

If the absorption occurs in the junction's depletion region, these
carriers are swept from the junction by the built-in field of the
depletion region, producing a photocurrent which can be used to
turn an LED on.

Working of the Signal system

The three laser diode assembly is coupled with the already existing
tracking circuit of the Indian railways.

Instead of activating the fixed post signals, the laser diode with the
corresponding signal is first activated and it starts emitting light like
a fixed signal.

It is ensured that the diode assembly is inclined at an angle.

The three beams point at three different positions on the two rails
irrespective of curvature of tracks.

The detector arrangement is made on the engine in such a way that
as the train approaches, the beam traverses from top to bottom of
the engine and falls on that particular detector only.

The corresponding detector for the signal is activated, and the
illumination is converted back into a photocurrent that triggers a switch
into the ON position. This in turn makes a bulb glow until the driver
interprets it as per the known code, after which the switch is brought
back to the OFF position.
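A minimal sketch of this detection-and-latching step is given below. It is only an assumed illustration of the logic: the channel names, the read_adc_counts placeholder and the threshold value are hypothetical, not specifications from this paper.

THRESHOLD = 512                               # assumed ADC threshold for "beam present"
CHANNELS = ("RED", "YELLOW", "GREEN")         # one photodetector per signal colour

def read_adc_counts():
    # Placeholder for sampling the three photodetector channels (0-1023 each).
    return (900, 12, 8)                       # example: the RED-channel beam is detected

def decode_indication(counts):
    # Return the colour whose detector is illuminated, or None if none/ambiguous.
    hits = [c for c, v in zip(CHANNELS, counts) if v > THRESHOLD]
    return hits[0] if len(hits) == 1 else None

latched = decode_indication(read_adc_counts())
if latched is not None:
    print("Cab lamp ON:", latched, "(driver acknowledges to switch it OFF)")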


Design

Because the train movement is not completely smooth, in spite of
alignment of the beam, there is a possibility that due to lateral (left to
right) movement of the train along the track the detector may not
come in contact with the beam.

To solve this problem we use an array of detectors which covers for
all lateral movement.

Beam is sent in such a way that it comes into contact with one set of
the detectors.
This covers for all lateral movement.
The detectors are placed, as shown in the previous figure, at the same
height from the bottom of the engine, and it is ensured that the
spacing between them is also constant.

This same arrangement is made in each and every engine.

Since three distinct focal points are used, only one detector
responds to the light from a particular laser diode, because each
signal is focused on its own photodetector only.

The process

Activation of respective signal by tracking circuit
Light beam from respective laser diode
Incidence on corresponding detector in the engine
Response of the detector facilitating interpretation
Required change in signal after train has passed.




Economization and Advantages

Since we are coupling this diode system to the existing tracking
circuit, there is no need to lay extra circuits. So this can serve
as an effective parameter in cost management.

There are no complex back up devices. Since they are laid very
frequently, even in the most unlikely and adverse event of missing a
signal, the next signal can act as a prompt preventing any damage.

Since the same set of detectors are used in every train, and because
the diode system is the same throughout, large scale demand and
hence production would limit the cost greatly.

Ultimately our system would thus be able to provide unhampered rail
transportation during fog, when visibility is low or nil, as no visual
interpretation by looking outside is required.


Conclusion

This signal system is very much useful in areas affected by fog.

Even though this signal system has been designed specially for fog
affected areas, it can also be used in all parts of the railway network
wherever necessary.

This signal is mobile as well as fixed because the laser diode that is
used can be easily moved from one place to the other.

The laser diode consumes low power.

This thus constitutes an effective fog signal system.





SAFETY OF INDIAN RAILWAYS
THROUGH
GLOBALPOSITIONSYSTEM

GPS Moves Through Nation And Creates Sensation
Introduction:
Indian Railways operates one of the world's
largest railway networks. Indian Railways
contributes largely to public transport, as it
is a primary mode of transport. The leading
nations have separate rail lines for goods
trains and for passenger trains; this is
something not yet practiced in India.
A large number of accidents happen in which
many lives are lost, and in these accidents
there is also a huge loss of railway and
national property. Most of the accidents occur
due to negligence, technical faults, non-reliable
data, late understanding, and slow or no
response towards danger signs considered
small. Thus there is a need to make the Guard
Rooms, Cabins, Stations, Control Rooms and
many other units more responsive. There is also
a need to deploy auto-alert and auto-control systems.
Principle and Working:
As we know, most of the accidents occur due
to collisions, negligence of the signals (i.e.,
human error), track-related problems, and some
technical errors, and yet few measures have been
taken to address these causes. The main principle
used for the safety of the Indian Railways is that
whenever a train moves over the track, vibrations
are created in the track. The vibrations produced
on the track depend on the speed of the train and
on the load or weight of the train. These vibrations
travel long distances, firstly because of the metal
tracks and secondly because of the connection of
these tracks by means of fish plates.


Fig: Communication link architecture (train to cabin link)

There will be microphones on the tracks that
will constantly note the vibrations coming from
the train. The vibrations coming from the train
will depend on the factors mentioned above as
well as on the distance of the sensor from the
moving train. A database of vibrations will be
prepared on the basis of ideal conditions noted
through case studies of different trains and of
the distance of the train from the sensors. There
will be software that converts these vibrations
into a form in which they can be compared with
the data already existing in the database.
Computing systems many times faster than
today's most powerful computers will be used
here to get details of the train such as its speed
and exact position. Thus the GPS will work as an
anti-collision device as well. The GPS will
provide the means of communication and of
input of data from the train to all the centers
and the other trains. If there is any problem on
the track, a bridge, etc., such as the removal of
fish plates, the vibrations produced by the train
will change (they will either get damped or get
excited, which is a noticeable change from the
ideal case); the same can be concluded or
developed for bridges also. This change can be
noted by comparing the data obtained from the
train with the data in the database.
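The comparison step can be pictured with the small Python sketch below. It is only an assumed illustration: the relative RMS deviation measure, the 0.3 alert threshold and the synthetic spectra are choices made for this example, not the actual software or database of the proposal.

import numpy as np

def relative_deviation(measured, reference):
    # RMS deviation of the measured vibration spectrum from the stored reference.
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.linalg.norm(measured - reference) / (np.linalg.norm(reference) + 1e-12))

def check_track(measured, reference, threshold=0.3):
    dev = relative_deviation(measured, reference)
    status = "OK" if dev <= threshold else "ALERT: vibration signature deviates"
    return status, dev

# Synthetic example: a damped signature (e.g. a loosened fish plate) raises an alert.
freqs = np.linspace(0.0, 200.0, 256)
reference = np.exp(-((freqs - 60.0) ** 2) / 200.0)   # healthy-track template from the DBMS
damped = 0.3 * reference
print(check_track(damped, reference))                # -> ('ALERT: ...', 0.7)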

A fiber optic network will be used from the
sensors so that the data can be transferred
immediately and a preventive measure can be
taken well ahead of time. All these things will
work in tandem, and if there is any error the
vibrations will differ; thus, through satellite
communication, we can alert the Engine Driver
in advance. The whole system depends on the
proper functioning of its components, and the
proposed method will work even if one or two
components fail. Thus we can say that the
proposed method can contribute largely towards
railway safety by means of the network of the
following components.

G.P.S (Global Positioning System):
This is the most important component of the
safety system mentioned here.

Principle of Working:
Definition: GPS is a worldwide radio-navigation
system formed from satellites and their ground
stations. GPS uses artificial satellites to compute
the position of the receiver in terms of four
coordinates, that is, X, Y, Z and time. Nowadays
the accuracy of GPS systems has increased to
nearly 1 cm, or even 1 mm, for high-cost
differential carrier-phase survey receivers. The
GPS technology makes use of the triangulation
method for the determination of the four
coordinates. The GPS constellation contains at
least 24 satellites moving in six orbital planes
that are equally spaced 60 degrees apart, with 4
satellites in each plane. These planes are inclined
at 55 degrees to the equatorial plane. This
arrangement makes it possible for the receiver to
always be in range of at least 5 satellites. There
are many ground stations keeping a close watch
on these satellites. These ground stations also
maintain the satellite clocks and the clocks on
the earth, and if necessary they can make
suitable adjustments while the satellite is in
orbit. Nowadays GPS is also used to assist
sailors on the open sea.





Fig: The Global Positioning System (the receiver solves for the coordinates X, Y, Z and T)

Functions:
Navigation in three dimensions is the primary
objective of GPS, and GPS will have a very
important role in the mission of railway safety.
Its main function here is to interconnect the
various points mentioned in the later sections,
to note the position and speed of the train, to
respond to the signals picked up by the sensors,
and to send signals to the train drivers, the
guards of the train, and the ground stations
containing the network of cabins, station
masters, control rooms and all related points on
the ground, thus giving all the details of the
train in an easily understandable, visual form.
The signals picked up via the satellites will then
be processed and checked for any kind of faulty
situation by simply comparing them using the
software and the DBMS mentioned later in the
paper. This will help the drivers, station masters,
cabin men and many others to operate the train
traffic with ease, and it will also help in the
communication between all the units (stationary
and moving).
Microphone Sensors:
This will be the most critical unit selection in
the network. When a train moves, a huge amount
of vibration or noise is created that can be
recognized from large distances (everyone knows
the childhood game of placing an ear on the
tracks to find out whether a train is coming).
Thus, with an advanced form of microphone, we
can note down these vibrations on the track.
Functions:
The purpose of the microphones is to note down
the vibrations produced in the track by the
motion of the train. These details will then be
compared with the vibrations from the database,
in accordance with the distance of the train from
the sensors. The proposed sensor is the carbon
microphone that is generally used in telephones.
Fiber Optic Wires:
These are the latest advances made for the
transfer of data and signals at a speed nearly
equal to the speed of light. They are required to
transfer the data collected from the sensors to
their respective places in the network that needs
to be developed by the railways.
Functions:
The main function of the fiber optic network
is to transfer the data or the signals that are
collected from the sensors quickly to the
software analysis at the respective places in
the network.
Software And DBMS:
Software consists of the programs that perform
the tasks assigned to them. The DBMS is the
Database Management System, which can store
a huge amount of data. The software and the
DBMS will together work for the safety of the
railways.
Functions: The main purpose of the software is
to take the vibrations noted by the sensors and
convert them into a graphical or other form that
can be used to store the details of the train in
the DBMS. The DBMS will contain details of
the vibrations or signals observed under good
conditions from various analyses done on trains
according to their weight and distance from the
sensors. The main function of the software is
then to compare the vibrations produced by the
train with the data stored in the DBMS.
On-board Unit with GPS and
Wireless Communications:

The on-board unit consists of communication
satellite antennae, satellite positioning antennae,
the on-board receiver system and communication
interface devices, etc. (shown in Figure 3). The
on-board unit can continuously measure different
parameters, tag the data with time and position
information, and report irregular conditions.

Fig: On-Board unit design (GPS satellite and communication satellite links, GPS receiver, communication interface, data collector equipment, man-machine interface, database, GSM-R wireless cab signal between trains and cabin)


For Cabins, Station, Guard Rooms,
Control Rooms & Other Important
Establishments: These mentioned
establishments need a tremendous
development in their current status for the
safety of the Railways. These requirements
are as follows:
1. Need of the towers for the transfer of the
details and the communication between other
establishments, trains and the satellites.
2. A visual display system to show the actual
position of the trains along with other details
like visual and audio warning systems.
3. Receivers and the transmitters tuned to
proper frequencies and in accordance to
the network.
The train driver's cabin must be such that the
driver comes to know about the track, weather,
bridges and constructional status in advance.
Thus there is a need for a high-tech visual
display screen to present the details required by
the driver for the smooth operation of the train,
and for a good communication system. This can
be achieved by the use of the GPS satellites for
communication.
Transmitter & Receiver:
From the standard data available the
frequency should be as follows:
Main allocations 5.925 to 6.425 GHz (for
uplink) & 3.700 to 4.200 GHz (for downlink)
which may change as per the advancements
that are made and the system requirements.
The range of the train's device will be 25-75
km (as per the requirements and the system
integration).


System Integration:
After the components are fabricated, they will
be integrated to perform the required and
proposed functions. This will help the Indian
Railways for a very long time. The integration
of the proposed project will be such that the
train, guard room, control room, stations and
the satellites will all simultaneously work for
the safety of the railways. The figure shows the
integration of the system in a diagram.


Expected Benefits To The Indian
Railways:
1. Railway Safety
2. Automation in the Work.
3. Reduction in stress levels because of the good
equipment and machinery.

4. Reduced Time Delays: as the position of the
train will be known at all times, there will not
be any time delays.
5. Train Traffic Control: due to continuous
monitoring of the trains there will be a good
train traffic control.

6. Futuristic method: with this method it is
possible to build a fully automatic train network
which may not even need a driver in it.
Implementation Problems:
There are very few problems associated with the
project: the implementation time is large because
of the training and system integration required,
and the cost may seem high to some
non-technocrats. Large-scale deployment can,
however, reduce the principal costs to an extent
and allow quicker implementation of the
technology.















Conclusion:
The benefits and advantages of the proposed
method outnumber the problems and
disadvantages, and the method has good
expected results. Thus the proposal should be
considered for the most important transport
system in India, and for the safety of priceless
human life and national property. The project
can also work over a very long time period.
Thus the integration of GPS and sensors along
with software systems can be effectively used
to provide safety as well as a large income to
the Indian Railways.

Bibliography
1. Satellite Communications by Charles W.
Bostian & Timothy Pratt
2. Electronic Communication Systems by
George Kennedy & Bernard Davis
3. www.trimble.com
4. www.discoveryindia.com
5. www.irsuggestions.org





HINDUSTHAN COLLEGE OF ENGINEERING AND
TECHNOLOGY
OTHAKKALMANDAPAM
COIMBATORE - 32.





GSM BASED VEHICLE SECURITY
SYSTEM


BALASUBRAMANIAN.A III YEAR EEE
MUTHIAH.M III YEAR EEE
SATHISH.R III YEAR EEE




CONTACT:

MOBILE: 9994859239, 9994158510, 9994582214

E-MAIL:sathisraju@gmail.com




GSM BASED VEHICLE SECURITY SYSTEM

ABSTRACT:
The Objective of this project is to
design a system to keep the vehicle
secure and to control the engine ignition
through IR remote. The IR sensor fixed
in the lock will sense tampering and induce the
microcontroller to send an SMS to the owner's
mobile, thus intimating the attempted theft to
the owner. Thus security is provided to the
vehicle. A GSM modem can replace the mobile
phone, which is more secure.

DESCRIPTION:
The module is designed with the
following blocks,
IR Sensor
Signal conditioning unit (SCU)
Microcontroller
IR Remote
Driver circuit
Mobile (GSM Modem)

Consider a scenario of the vehicle owner
parking a four-wheeler in the parking area.
Whenever he parks
the vehicle, he will switch ON the
Security System by using IR remote.
The IR sensor is fixed on the lock of the
vehicle; it consists of a transmitter and a
receiver. Once the vehicle is armed with
the security system the IR Rays passes
from the transmitter to the receiver
located at the lock. When an intruder
tries to tamper the lock then the IR rays
are cut so the output of the IR sensor is
passed to the signal conditioning unit
(SCU). The Signal Conditioning Unit
Converts the input to square wave pulse.
The square wave pulse is given as the
input of the microcontroller.
The engine ignition is controlled
by IR remote, which consists of an IR
transmitter and receiver. The IR output is given
to the microcontroller. When you switch on the
ignition through the remote, the microcontroller
disables the security circuit and activates the
relay driver circuit for ignition. When you switch
off the ignition, the relay driver circuit is
deactivated and the security circuit is enabled.
Here the microcontroller is a flash-type
reprogrammable microcontroller which we have
already programmed. The mobile (GSM modem)
is interfaced with the microcontroller with the
help of a data cable and an RS232 level
converter.
If any of the events mentioned above occurs,
the microcontroller activates the mobile phone
(GSM modem) used in the system, and it sends
an SMS to the owner irrespective of distance.
Once the security system is switched ON, the
supply to the ignition of the vehicle is
disconnected electrically, which ensures more
safety for the vehicle.
If the owner wants to open the door, he has to
deactivate the circuit in order to avoid the SMS
being sent to himself and to restore supply to
the ignition system of the vehicle.
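The "send an SMS on tamper" path could look roughly like the Python sketch below if the GSM modem were driven from a host over its serial interface. The port name, baud rate, owner number and helper names are assumptions made for illustration; only the standard text-mode AT commands (AT, AT+CMGF, AT+CMGS) are used, and the real system would run equivalent logic on the AT89C51 rather than in Python.

import time
import serial                     # pyserial

OWNER_NUMBER = "+910000000000"    # placeholder owner mobile number

def send_sms(port, number, text):
    # Drive the GSM modem with text-mode AT commands.
    with serial.Serial(port, 9600, timeout=2) as modem:
        modem.write(b"AT\r")                       # check that the modem responds
        time.sleep(0.5)
        modem.write(b"AT+CMGF=1\r")                # select SMS text mode
        time.sleep(0.5)
        modem.write(('AT+CMGS="%s"\r' % number).encode())
        time.sleep(0.5)
        modem.write(text.encode() + b"\x1a")       # Ctrl-Z terminates the message
        time.sleep(3)                              # allow the network to accept it

def on_ir_beam_broken(armed):
    # Called when the lock's IR beam is interrupted while the system is armed.
    if armed:
        send_sms("/dev/ttyUSB0", OWNER_NUMBER, "ALERT: vehicle lock tampering detected")

on_ir_beam_broken(armed=True)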

GSM Technology:
General Information
GSM is also known as Global System
for Mobile Communications, or simply
Global System for Mobile. A technology
started development in 1985 by a French
company formerly known as Group
Special Mobile. Its main competitor Is
CDMA, currently in use by Bell
Mobility,Telus Mobility and Mobility
Canada carriers.
Currently, only two main carriers
in Canada are operating GSM networks.
Microcell (Fido, Cityfone) and Rogers
Wireless. Fido was the first carrier to
start utilizing this technology, followed
by Rogers Wireless mainstream around
2001. Several companies in the United
States have adopted GSM, and it is
spreading fast among carriers such as
AT&T Wireless and T-Mobile.
GSM operates on 4 different frequency
bands worldwide. However, only two are
used in Canada: GSM-850 and GSM-1900,
which operate at 850 MHz and 1.9 GHz
respectively.
GSM calls are either based on data
or voice. Voice calls use audio codecs
called half-rate, full-rate and enhanced
full-rate. Data calls can turn the cell
phone into a modem operating at 9600
bps. An extended GSM feature is high
speed circuit switched data, allowing the
phone to transmit up to around 40 kbps.




GSM MODEM
The GSM/GPRS Modem comes with a
serial interface through which the
modem can be controlled using AT
command interface. An antenna and a
power adapter are provided.
The basic segregation of working of the
modem is as under:
Voice calls
SMS
GSM Data calls
GPRS

Voice calls: Voice calls are not an
application area to be targeted. In future
if interfaces like a microphone and
speaker are provided for some
applications then this can be considered.
SMS: SMS is an area where the modem
can be used to provide features like:
Pre-stored SMS transmission
These SMS can be transmitted on
certain trigger events in an automation
system
SMS can also be used in areas where
small text information has to be sent.
The transmitter can be an automation
system or machines like vending
machines, collection machines or
applications like positioning systems
where the navigator keeps on sending
SMS at particular time intervals
SMS can be a solution where GSM
data call or GPRS services are not
available
GSM Data Calls: Data calls can be
made using this modem. Data calls can
be made to a normal PSTN
modem/phone line also (even received).
Data calls are basically made to
send/receive data streams between two
units either PCs or embedded devices.
The advantage of Data calls over SMS is
that both parties are capable of
sending/receiving data through their
terminals.
GPRS: This modem can be used to
make a GPRS connection. Upon
connection the modem can be used for
internet connectivity of devices.

IR TRANSMITTER AND
RECEIVER
An infrared transmitter is a type of LED
which emits infrared rays, generally called
an IR transmitter. Similarly, an IR receiver
is used to receive the IR rays transmitted by
the IR transmitter. One important point is
that both the IR transmitter and the receiver
should be placed in a straight line with each
other.
The transmitted signal is given to the IR
transmitter; whenever the signal is high, the
IR transmitter LED conducts and passes IR
rays to the receiver. The IR receiver is
connected to a comparator constructed with
an LM741 operational amplifier. In the
comparator circuit, a reference voltage is
given to the inverting input terminal, and the
non-inverting input terminal is connected to
the IR receiver. Initially, when there is no
transmitting signal, the IR transmitter does
not pass rays to the receiver, so the
comparator's non-inverting input voltage is
higher than the inverting input. The
comparator output is then in the range of
+12 V. This voltage is given to the base of
the transistor Q1, due to which the transistor
conducts. Here the transistor acts as a
switch, so the collector and emitter are
effectively shorted. The output is taken from
the collector terminal, and the output is now
zero.
When the transmitting signal is given to the
IR transmitter, it passes rays to the receiver
according to the transmitting signal. When
IR rays fall on the receiver, the IR receiver
conducts; due to that, the non-inverting
input voltage becomes lower than the
inverting input. Now the comparator output
is -12 V, so the transistor is in the cutoff
region, and the output from the collector is
+5 V. The output signals are given to a
40106 IC, which is an inverting buffer (hex
Schmitt-trigger inverter). It is used to invert
the output either from high to low or from
low to high, depending on the interfacing
circuit or device.

MICROCONTROLLER
The microcontroller used in this system is a
flash-type reprogrammable microcontroller,
i.e. the AT89C51.

DESCRIPTION:
The AT89C51 is a low-power,
high-performance CMOS 8-bit
microcomputer with 4K bytes of Flash
Programmable and Erasable Read Only
Memory (PEROM). The device is
manufactured using Atmel's high-density
nonvolatile memory technology and is
compatible with the industry-standard
MCS-51 instruction set and pin-out. The
on-chip Flash allows the program memory
to be reprogrammed in-system or by a
conventional nonvolatile memory
programmer. By combining a versatile 8-bit
CPU with Flash on a monolithic chip, Atmel's
AT89C51 is a powerful microcomputer
which provides a highly flexible and cost
effective solution to many embedded
control applications.

PIN DIAGRAM OF AT89C51:



DRIVER CIRCUIT (MAX232)
The MAX232 is a dual Driver/receiver
that includes a capacitive voltage
generator to supply EIA-232 voltage
levels from a single 5-V supply. Each
receiver converts EIA-232 inputs to 5-V
TTL/CMOS levels. These receivers have
a typical threshold of 1.3 V and a typical
hysteresis of 0.5 V, and can accept ±30-V
inputs. Each driver converts TTL or
CMOS input levels into EIA-232 levels.


ADVANTAGES:
Security of the vehicle is ensured
Easy to use and implement
Economical
Reliable
Adaptability

APPLICATIONS:
The vehicle security system can
be implemented in any four wheelers
regardless of the type of the vehicle. It
ensures the safety of the vehicle. The
main advantage of the project is that it
has a very wide adaptability. This makes
its application also wider; it can be used
as a security system at home by fixing it
in the door lock, and also in other domestic
applications like lockers, almirahs, etc. It
can also be applied in industrial areas
thus our security system is adaptable to
different places, which is more important
for any security system.


PROJECT MODULE BLOCK DIAGRAM









































Identifying Frequency of Signals using Autocorrelation and
FFT
Lakshmi Chaitanya Chennu
Department of Electronics and Communication Engineering
SriKalahastheeswara Institute of Technology, Srikalahasthi
Email: chennuchaitanya@gmail.com

Abstract

In this paper we identify the frequency of signals using
autocorrelation and the FFT. The power spectral density (PSD) from
the FFT is calculated. The moments of both the FFT and the
autocorrelated signals are estimated, which involves the calculation
of the total energy, spectral width and mean. Finally, these moments
of both the FFT and the autocorrelated signals are compared, and the
frequency is identified based on the comparison.


I. Introduction

Radar (RAdio Detection And Ranging) works on the principle
that when a pulse of electro magnetic wave is transmitted
towards a remotely located object, a fraction of the pulse
energy is returned through either reflection or scattering,
providing information about the object. The time delay with
reference to the transmitted pulse and the received signal power
provide respectively the range and the radar scattering cross-
section of the target detected. This class of radar is known as
pulsed radar. In case the target is in motion when detected, the
returned signal is Doppler shifted from the transmitted
frequency and the measurement of the Doppler shift provides
the line-of-sight velocity of the target. The radars having this
capability are referred to as pulse Doppler radars. The Indian
Mesosphere-Stratosphere-Troposphere (MST) Radar is a highly
sensitive, pulse-coded VHF phased array operating at 53 MHz with an
average power-aperture product of 7 × 10^10 W-m^2. This radar
facility has been established in the southeastern part of India at
Gadanki (13.5° N, 79.2° E), near Tirupati, mainly to support
atmospheric research in the MST region.

The FFT is a fast algorithm for efficient implementation of the
DFT, where the number of time samples of the input signal N is
transformed into N frequency points. The FFT performs the same
arithmetic as the DFT, but it organizes the operations in a more
efficient manner. The FFT makes heavy use of intermediate results
by dividing the transform into a series of smaller transforms.






In the following sections we see how the frequency
of a signal is identified using the FFT and autocorrelation.


II. Implementation of code

The entire code for the identification of frequency and
the comparison is written in C; the platform used is
VC++. First, some sine or cosine waves are generated,
which are used as simulated signals for the
implementation of the N-point FFT. Next, some
atmospheric data is taken, which is in the form of real
and imaginary values convenient for FFT implementation.
The atmospheric data consists of range gates, which
represent the information collected by the radar station
up to some fixed distance (in km) in the atmosphere. In
the FFT code, the power spectrum is calculated, as shown
in the figure below.
Fig: N-point FFT vs Magnitude curve

From the power spectrum drawn, we calculate
the maximum and minimum values (indicated by p and q) and also
the peak position (indicated by max).
After this, the frequencies of p and q are calculated using the header
obtained from the atmospheric data.

With these frequencies, the moments, namely the zeroth moment
(total power), first moment, second moment, Doppler shift and
signal-to-noise ratio (SNR), are calculated. This is carried out for
several range gates and the graphs are obtained.
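As a rough NumPy illustration of these moment calculations (this is not the authors' C code; the simulated 50 Hz tone and the sampling rate are assumptions), the zeroth moment is the summed power, the first moment is the power-weighted mean frequency, and the second moment gives the spectral width:

import numpy as np

def spectral_moments(signal, fs):
    # Zeroth, first and second moments of the FFT power spectrum of `signal`.
    n = len(signal)
    psd = np.abs(np.fft.fft(signal)) ** 2 / n           # power spectrum
    freqs = np.fft.fftfreq(n, d=1.0 / fs)               # frequency axis in Hz
    total_power = psd.sum()                             # zeroth moment
    mean_freq = (freqs * psd).sum() / total_power       # first moment (Doppler shift)
    width = np.sqrt(((freqs - mean_freq) ** 2 * psd).sum() / total_power)  # second moment
    return total_power, mean_freq, width

# Simulated complex signal: a 50 Hz tone in noise, sampled at 1 kHz.
fs, n = 1000.0, 512
t = np.arange(n) / fs
x = np.exp(2j * np.pi * 50.0 * t) + 0.1 * (np.random.randn(n) + 1j * np.random.randn(n))
print(spectral_moments(x, fs))    # the mean frequency comes out near 50 Hz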








Fig: N-point FFT vs Magnitude for several range gates






Now, after the FFT processing is done, the autocorrelation is
taken. A time lag is observed between the original signal and the
time-shifted signal. The Doppler shift is obtained from the
autocorrelated values; the shift with respect to zero lag is calculated
and compared with that obtained from the FFT.
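One common way to turn the autocorrelation into a frequency estimate is the pulse-pair approach: the phase of the complex ACF at the first lag, divided by 2*pi times the sampling interval, gives the mean Doppler frequency. The sketch below is an assumed illustration of that idea (not necessarily the exact method used in the paper) and can be compared directly with the FFT first moment above:

import numpy as np

def acf_frequency(x, fs):
    # Mean frequency from the phase of the complex autocorrelation at lag 1.
    r1 = np.vdot(x[:-1], x[1:]) / (len(x) - 1)    # R(1) = <conj(x[k]) * x[k+1]>
    return float(np.angle(r1) * fs / (2.0 * np.pi))

fs, n = 1000.0, 512
t = np.arange(n) / fs
x = np.exp(2j * np.pi * 50.0 * t) + 0.1 * (np.random.randn(n) + 1j * np.random.randn(n))
print(acf_frequency(x, fs))       # close to 50 Hz, agreeing with the FFT estimate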


III. Results


The moments are calculated for the autocorrelated
values, and these should almost coincide with the FFT moments.
When a comparison (scatter plot) is drawn between the
FFT and autocorrelation-function estimates, we get a linear curve.
The Doppler curve is shown below:










Fig: Doppler curve (scatter of ACF-derived versus FFT-derived frequency estimates)




The above Doppler curve shows that the frequency
obtained through both the FFT and the autocorrelation function is
almost the same.



IV. Acknowledgements
This work was completed with the support of Dr. V.K. Anandan, NARL, Gadanki.

V. References
1. Introduction to Radar Systems by Skolnik
2. Digital Signal Processing by Sanjay K. Mitra
3. Communication Systems by Simon Haykin













IMPLEMENTING SMART ANTENNAS
TECHNOLOGY IN CELLULAR BASE
STATIONS


PRESENTED BY:


B.VENKATAREDDY
(3/4 ECE)
Venkata_reddyb@yahoo.co.in
Phone No:-9885513579







K.PERAIAH
(3/4 ECE)
kpc_ece@yahoo.co.in
Phone No:-9885513579





LAKIREDDY BALIREDDY COLLEGE
OF ENGINEERING

MYLAVARAM,
KRISHNA(DIST)


ABSTRACT :
There is an ever-increasing demand on
mobile wireless operators to provide
voice and high-speed data services. At
the same time, operators want to support
more users per base station in order to
reduce overall network cost and make
the services affordable to subscribers. As
a result, wireless systems that enable
higher data rates and higher capacities
have become the need of the hour. Smart
Antennas technology attempts to address
this problem via advanced signal
processing techniques called
Beamforming. This promising new
technology has already found its way
into all the major wireless standards.
INTRODUCTION :
A smart antenna, also known as an adaptive
antenna, refers to a system of antenna arrays
with smart signal-processing algorithms that
identify the direction of arrival (DOA) of the
signal and use it to calculate beamforming
vectors, in order to track and point the antenna
beam at the mobile/target. The antenna
elements could optionally be any other type of
sensor.
Contents :
DOA estimation
Beamforming
References

DOA ESTIMATION :
The smart antenna system estimates the
direction of arrival of the signal, using
any of the techniques like MUSIC
(Multiple Signal Classification) or
ESPRIT (Estimation of Signal
Parameters via Rotational Invariant
Techniques) algorithms,Matrix Pencil
method or their derivatives. They
involve finding a spatial spectrum of the
antenna/sensor array, and calculating the
DOA from the peaks of this spectrum.
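The sketch below gives a rough, self-contained illustration of subspace DOA estimation in the MUSIC style for an assumed uniform linear array with half-wavelength spacing (illustrative only, not code from any product): the spatial covariance is eigendecomposed, the noise subspace is taken from the smallest eigenvalues, and the DOAs are read off the peaks of the pseudospectrum.

import numpy as np

def music_spectrum(snapshots, n_sources, n_angles=361):
    # MUSIC pseudospectrum for a uniform linear array with lambda/2 spacing.
    # snapshots: (n_antennas, n_snapshots) complex received samples.
    m = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # spatial covariance
    eigvals, eigvecs = np.linalg.eigh(R)                      # ascending eigenvalues
    En = eigvecs[:, : m - n_sources]                          # noise subspace
    angles = np.linspace(-90.0, 90.0, n_angles)
    spectrum = np.empty(n_angles)
    for i, th in enumerate(np.deg2rad(angles)):
        a = np.exp(1j * np.pi * np.arange(m) * np.sin(th))    # steering vector
        spectrum[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return angles, spectrum

# Example: 8 antennas, two sources at -20 and +35 degrees, 200 snapshots.
m, n_snap = 8, 200
thetas = np.deg2rad([-20.0, 35.0])
A = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(thetas)))
S = (np.random.randn(2, n_snap) + 1j * np.random.randn(2, n_snap)) / np.sqrt(2)
X = A @ S + 0.1 * (np.random.randn(m, n_snap) + 1j * np.random.randn(m, n_snap))
angles, P = music_spectrum(X, n_sources=2)
peaks = [i for i in range(1, len(P) - 1) if P[i] >= P[i - 1] and P[i] > P[i + 1]]
top2 = sorted(peaks, key=lambda i: P[i], reverse=True)[:2]
print(sorted(angles[i] for i in top2))     # approximately [-20.0, 35.0]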
BEAMFORMING :
Beamforming is the method used to
create the radiation pattern of the
antenna array by adding constructively
the phases of the signals in the direction
of the targets/mobiles desired, and
nulling the pattern of the targets/mobiles
that are undesired/interfering targets.
This can be done with a simple FIR
tapped delay line filter. The weights of
the FIR filter may also be changed
adaptively, and used to provide optimal
beamforming, in the sense that it reduces
the MMSE between the desired and
actual beam pattern formed. Typical
algorithms are the steepest-descent and
LMS algorithms. Beamforming is one of
the latest technologies being used for
various purposes.
AREAS OF ITS
APPLICATION :
Listening to the Cell (Uplink
Processing)
Speaking to the Users (Downlink
Processing)
Switched Beam Systems.

Implementing Smart
Antennas Technology in
Cellular Base Stations :
As summarized in the abstract, the ever-increasing
demand for voice and high-speed data services,
together with operators' need to support more
users per basestation and reduce overall network
cost, means that wireless systems enabling higher
data rates and higher capacities have become the
need of the hour. Smart antennas technology
attempts to address this problem via advanced
signal-processing techniques called beamforming.
This promising new technology has already found
its way into all the major wireless standards,
including 3GPP, 3GPP2, IEEE 802.16 and IEEE
802.11 systems. Altera FPGAs help address the
major implementation challenges, including the
need for flexibility and processing speed.
Interference Limited
Systems :
Since the available broadcast spectrum is
limited, attempts to increase traffic
within a fixed bandwidth create more
interference in the system and degrade
the signal quality.
In particular, when omni-directional
antennas [see part (a) of Figure 1] are
used at the basestation, the
transmission/reception of each user's
signal becomes a source of interference
to other users located in the same cell,
making the overall system interference
limited. An effective way to reduce this
type of interference is to split up the cell
into multiple sectors and use sectorized
antennas, as shown in part (b) of Figure
1.
Figure 1. Non-Smart Antennas System



Smart Antennas Technology
-Beamforming :
Smart antennas technology offers a
significantly improved solution to reduce
interference levels and improve the
system capacity. With this technology,
each user's signal is transmitted and
received by the basestation only in the
direction of that particular user. This
drastically reduces the overall
interference in the system. A smart
antennas system, as shown in Figure 2,
consists of an array of antennas that
together direct different
transmission/reception beams toward
each user in the system. This method of
transmission and reception is called
beamforming and is made possible
through smart (advanced) signal
processing at the baseband.
Figure 2. Smart Antennas System
Beamforming

In beamforming, each user's signal is
multiplied by complex weights that
adjust the magnitude and phase of the
signal to and from each antenna. This
causes the output from the array of
antennas to form a transmit/receive
beam in the desired direction and
minimizes the output in other directions.
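To make the complex-weighting step concrete, the short sketch below forms simple conjugate (matched) beamforming weights toward a desired user for an assumed 8-element uniform linear array with half-wavelength spacing, and compares the array gain toward the desired direction and toward an interferer. This is a generic narrowband illustration, not Altera's implementation.

import numpy as np

def steering_vector(n_antennas, theta_deg):
    # Narrowband steering vector of a uniform linear array with lambda/2 spacing.
    k = np.arange(n_antennas)
    return np.exp(1j * np.pi * k * np.sin(np.deg2rad(theta_deg)))

def array_gain_db(weights, theta_deg):
    # Power response of the weighted array toward a given direction, in dB.
    resp = np.vdot(weights, steering_vector(len(weights), theta_deg))  # w^H a(theta)
    return float(20.0 * np.log10(np.abs(resp) + 1e-12))

n = 8
desired, interferer = 10.0, -40.0                 # directions in degrees
w = steering_vector(n, desired) / n               # conjugate beamforming weights
print("gain toward desired user: %.1f dB" % array_gain_db(w, desired))      # ~0 dB
print("gain toward interferer:   %.1f dB" % array_gain_db(w, interferer))   # well below 0 dB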
Switched & Adaptive
Beamforming :
If the complex weights are selected from
a library of weights that form beams in
specific, predetermined directions, the
process is called switched beamforming.
Here, the basestation basically switches
between the different beams based on
the received signal strength
measurements. On the other hand, if the
weights are computed and adaptively
updated in real time, the process is called
adaptive beamforming. Through
adaptive beamforming, the basestation
can form narrower beams towards the
desired user and nulls towards
interfering users, considerably
improving the signal-to-interference-
plus-noise ratio.
FPGA based Adaptive
Beamforming :
The high-performance digital signal
processing (DSP) blocks, embedded Nios II
processors, and logic elements (LEs) of
Altera's Stratix II FPGAs make them ideal
for adaptive beamforming applications.
This section describes the Altera
implementation of a Rake-Beamformer
(also known as a two-dimensional Rake)
structure that performs joint space-time
processing. As
illustrated in Figure 3, the signal from
each receive antenna is first down-
converted to baseband, processed by the
matched filter-multipath estimator, and
accordingly assigned to different Rake
fingers.

Figure 3. Adaptive Beamforming with
Altera's Stratix II FPGA




Notes:
DDC: digital down converter
MRC: maximal ratio combining
CORDIC: coordinate rotation digital
computer
QRD: QR decomposition

The beamforming unit on each Rake
finger then calculates the corresponding
beamformer weights and channel
estimate using the pilot symbols that
have been transmitted through the
dedicated physical control channel
(DPCCH). The QRD-based recursive
least squares (RLS) algorithm is selected
as the weight update algorithm for its
fast convergence and good numerical
properties. The updated beamformer
weights are then used for multiplication
with the data that has been transmitted
through the dedicated physical data
channel (DPDCH). Maximal ratio
combining (MRC) of the signals from all
fingers is then performed to yield the
final soft estimate of the DPDCH data.
The beamforming unit implementation
on each Rake finger is further elaborated
below.
Complex Weight
Multiplications using DSP
Blocks :
The application of complex weights to
the signals from different antennas
involves complex multiplications that
map well onto the embedded DSP blocks
available in Stratix II devices. Each DSP
block can be operated at more than 370
MHz and has a number of multipliers,
followed by
adder/subtractor/accumulators, in
addition to registers for pipelining. With
these features, Stratix II devices can
efficiently implement complex
multiplications and reduce the amount of
overall logic and routing required in
beamforming designs.
CORDIC-Based QR
Decomposition :
The QRD-RLS weights update algorithm
involves decomposing the input signal
matrix Y into QR, where Q is a
unitary matrix and R is an upper
triangular matrix. This is achieved using
a triangular systolic array of CORDIC
blocks, as shown in Figure 4. Each
CORDIC block operates in either
vectoring or rotating modes and
performs a series of micro rotations
through simple shift and add/subtract
operations and can run at speeds of 300
MHz.

Figure 4. Triangular Systolic
Array Example for CORDIC-
Based QRD-RLS



The R matrix and u vector (the transformed
reference signal vector d) are recursively
updated for every new row of inputs entering
the triangular array. The triangular systolic
array can be further mapped into a linear
array with a reduced number of time-shared
CORDIC blocks, as illustrated in Figure 4,
providing a trade-off between resource
consumption and throughput.
Back Substitution for Weights
Using Nios :
The final beamformer weights vector w
is related to the R and u outputs of the
triangular array as Rw=u. Because R
is an upper triangular matrix, w can be
solved using a procedure called back
substitution that can be implemented in
software on the flexible embedded Nios
processor. The Nios soft processor is
capable of operating at over 150 MHz on
Stratix II FPGAs and can also utilize
custom instructions for hardware
acceleration of program code. For an
example eight-antenna system, the
beamformer weights for a Rake finger
can be solved via back substitution in
approximately 0.2 ms using the Nios
processor operating at 100 MHz. The
computation time can be lowered to 3 µs
by implementing the back substitution on
a hardware peripheral controlled by the
Nios processor. Moreover, the Nios
processor offers a flexible platform to
implement other adaptive weight update
algorithms such as least mean squares
(LMS) and normalized LMS.
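Back substitution itself is simple enough to show in a few lines. The sketch below is a generic NumPy illustration of solving R w = u for an upper-triangular R (assumed here for clarity; the real implementation runs in software on the Nios processor or in a hardware peripheral, not in Python):

import numpy as np

def back_substitution(R, u):
    # Solve R w = u for w, where R is upper triangular (as produced by the QRD).
    n = len(u)
    w = np.zeros(n, dtype=complex)
    for i in range(n - 1, -1, -1):                       # start from the last row
        w[i] = (u[i] - np.dot(R[i, i + 1:], w[i + 1:])) / R[i, i]
    return w

# Small self-check with a random 4x4 upper-triangular system.
n = 4
R = np.triu(np.random.randn(n, n) + 1j * np.random.randn(n, n))
np.fill_diagonal(R, np.abs(R.diagonal()) + 1.0)          # keep the diagonal away from zero
w_true = np.random.randn(n) + 1j * np.random.randn(n)
print(np.allclose(back_substitution(R, R @ w_true), w_true))   # True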
CONCLUSION:
Altera FPGAs help address the major
implementation challenges,
including the need for flexibility and
processing speed.
Smart antennas technology offers a
significantly improved solution to
reduce interference levels and
improve the system capacity. With
this technology, each user's signal is
transmitted and received by the
basestation only in the direction of
that particular user. This drastically
reduces the overall interference in
the system
Through adaptive beamforming, the
basestation can form narrower beams
towards the desired user and nulls
towards interfering users,
considerably improving the signal-
to-interference-plus-noise ratio










INTELLIGENT SKYSCRAPER MONITORING SYSTEM
BASED ON GPS AND OPTICAL FIBRE SENSORS
Presented by

POTHURU KARTHIK S RANGANATH
RegNo: Y4EC451 RegNo: Y4EC450
3/4 ECE 3/4 ECE
BAPATLA ENGINEERING COLLEGE BAPATLA ENGINEERING COLLEGE
BAPATLA-522101 BAPATLA-522101
E-mail: karthik_ece53@yahoo.co.in E-mail: ranganath.singanamalla@gmail.com
Phone: 9989666368 Phone: 9985497151

MONITOR THE GLOBE WITH THE EYES OF ELECTRONIC TECHNOLOGY

1. ABSTRACT
The collapse of the World Trade Center (WTC) has
reminded us of the importance of structural integrity
monitoring for improving disaster preparedness and
response. A real-time system based on GPS and optical
fiber sensors for monitoring structural integrity of the
skyscraper has been proposed. In the designed system,
there are 4 RTK GPS receivers atop the skyscraper on the
vertices and 4 groups of several strings of optical fiber
Bragg grating (FBG) sensors deployed along the 4 edges
of the perimeters. The GPS sensors are used to measure
the 3D positions of the vertices so that strain, tilt and
rotation of the skyscraper can be determined. The
vibrations of the skyscraper can be measured directly if an
external reference GPS receiver is used together with the
rover receivers atop the skyscraper at high sampling rate.
The optical fiber Bragg grating (FBG) sensors are used to
measure both strain and temperature. The conceptual
design of the combined system of GPS and optical fiber
sensors was based on the dimensions of the WTC. The
system proposed here may also prove to be useful for the
monitoring of other structures.

2. INTRODUCTION
The collapse of the World Trade Center (WTC) twin
towers caught many people by surprise and has
reminded us of the importance of structural integrity
monitoring for improving disaster preparedness and
response. "Nobody thought it (WTC) would collapse!"
This sentence has been repeated many, many times in the
media on the first anniversary of 11 September 2001. The
events following the 11 September 2001 attacks in New
York City were among the worst building disasters in
history and resulted in the largest loss of life from any
single building collapse in the United States. Of the
58,000 people estimated to be at the WTC Complex, over
3,000 lost their lives that day, including 343 emergency
responders. Two commercial airliners were hijacked, and
each was flown into one of the two 110-story towers. The
Structural damage sustained by each tower from the
impact, combined with the Ensuing fires, resulted in the
total collapse of each building. In total, 10 major
buildings experienced partial or total collapse and
approximately 30 million square feet of commercial
office space was removed from service, of which 12
million belonged to the WTC Complex.
The collapse of the twin towers astonished
most observers, including knowledgeable structural
engineers, and, in the immediate aftermath, a wide range
of explanations were offered in an attempt to help the
public understand these tragic events. However, the
collapse of these symbolic buildings entailed a complex
series of events that were not identical for each tower.
To determine the sequence of events, likely
root causes, and methods or technologies that may
improve the building performance observed, the Federal
Emergency Management Agency (FEMA) and the
Structural Engineering Institute of the American Society
of Civil Engineers (SEI/ASCE), in association with New
York City and several other Federal agencies and
professional organizations, deployed a team of civil,
structural, and fire protection engineers to study the
performance of buildings at the WTC site.
Although the team conducted field observations at
the WTC site and steel salvage yards, removed and tested
samples of the collapsed structures, viewed hundreds of
hours of video and thousands of still photographs,
conducted interviews with witnesses and persons involved
in the design, construction, and maintenance of each of
the affected buildings, reviewed construction documents,
and conducted preliminary analyses of the damage to the
WTC towers, with the information and time available, the
sequence of events leading to the collapse of each
tower could not be definitively determined. Because there
was no built-in monitoring system and no time to deploy
an external one, it is unfortunate that the studies had to
rely on some subjective information in addition to field
observations. Therefore, in our opinion, objective, uniform,
scientific data is missing about what actually happened
during the collapse of the WTC.
Let us remember that after September 11, similar attacks on
high-rise buildings happened again both inside and outside
the US.
Unfortunately, terrorist attack is not the only threat to
such structures. For example, if a high-rise building or a
bridge is partly damaged in a major earthquake, the very
first decision we need to make is whether the structure is
still safe for rescue personnel to enter (in the case of a
building) or for emergency vehicles to pass (in the case of
a bridge) in order to carry out rescue work. A real-time
monitoring system can function as an early warning
system, will provide crucial data for making these
decisions, and will help to establish the collapse
mechanism if the structure does collapse.
A real-time system based on optical fiber and GPS
sensors for monitoring structural integrity has been
proposed in this paper.

3. CONCEPTUAL DESIGN OF THE SYSTEM
In the designed system (Figure 1), there are 4 RTK GPS
receivers atop the skyscraper on the vertices and 4 groups
of several strings of optical fiber Bragg grating (FBG)
sensors deployed along the 4 edges of the perimeters. The
GPS sensors are used to measure the 3D positions of the
vertices so that strain, tilt and rotation of the skyscraper
can be determined. The vibrations of the skyscraper can
be measured directly if an external reference GPS receiver
is used together with the rover receivers atop the
skyscraper at high sampling rate. The optical fiber Bragg
grating (FBG) sensors are used to measure both strain and
temperature. In the system 20 FBG sensors connected by
optical fiber form one linear array (string) in which they
are arranged in 10 pairs of 2 closely located FBG sensors.
In each pair, one FBG sensor is in thermal contact with
the structure but does not respond to local strain changes
while the other FBG sensor responds to both temperature
and strain changes so that temperature and strain changes
can be discriminated. The length of optical fiber between
two adjacent FBG sensor pairs is the same as the height of
one storey so that there is one FBG sensor pair at each of
the 4 edges on every floor. Hence, one string with 20
FBG sensors will cover 10 storeys of the skyscraper.
Therefore, depending on the total number of storeys in the
skyscraper, several strings of optical fiber Bragg grating
(FBG) sensors will have to be used on each edge. Taking
the WTC as an example, 11 strings have to be used at
each edge in order to cover all the storeys.
Experiments have been carried out to study the feasibility
of using the GPS and FBG sensors in such an integrated
system.
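As a rough illustration of the sensor-count arithmetic described above, the short sketch below works out how many strings and FBG sensors such a layout needs for a building of a given height. The 20-sensor string length and one pair per storey come from the text, while the function name and packaging are our own.

```python
import math

def fbg_layout(storeys, sensors_per_string=20, edges=4):
    """Sensor-count arithmetic for the conceptual design described above.

    Each string carries `sensors_per_string` FBG sensors arranged in pairs
    (one strain-isolated reference plus one strain+temperature sensor), i.e.
    one pair per storey, so one string covers sensors_per_string // 2 storeys.
    """
    storeys_per_string = sensors_per_string // 2              # 10 storeys per string
    strings_per_edge = math.ceil(storeys / storeys_per_string)
    total_strings = strings_per_edge * edges
    total_sensors = total_strings * sensors_per_string
    return strings_per_edge, total_strings, total_sensors

# Example: a WTC-sized tower with 110 storeys
print(fbg_layout(110))   # -> (11, 44, 880): 11 strings per edge, as in the text
```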



FBG: optical fiber grating sensor
S/D: optical source and detector
GPS: Global Positioning System
Figure 1. Conceptual design of the combined optical fiber
and GPS monitoring system.

4. THE GPS COMPONENT
In a joint experiment in Tokyo on 10 August 1999
between the University of New South Wales (UNSW),
Australia and the Meteorological Research Institute
(MRI), Japan, two Trimble MS750 GPS receivers were
used in the RTK mode with a fast sampling rate of up to
20Hz (Ge 2000). As can be seen from Figure 2, the GPS
antenna, an accelerometer, and a velocimeter were
installed on a metal plate, which was mounted with bolts
and adhesive tape on the roof of an earthquake shake
simulator truck, shown in Figure 3.



Figure 2. The setup of GPS antenna, accelerometer, and
velocimeter.




Figure 3. Earthquake shake-simulator truck.
A total of 48 experiment sessions in which vibrations
corresponding to earthquakes of different intensities,
including past quakes such as the 1923 Kanto Quake and
the 1995 Kobe Quake, were simulated. GPS sampling
rates used were 20Hz, 10Hz and 5Hz, while the sampling
rates for the accelerometers were 100Hz.
The GPS receiver on the truck was used as the 'rover
receiver' while a reference station was set up 10 m away
from the truck. The GPS-RTK results for the experiments
were recorded on files, in the GGK message format,
which includes information on time, position, position
type and DOP (Dilution of Precision) values. Acceleration
and velocity data from the seismometers were recorded
concurrently. According to the MS750 Manual, the
accuracy of the MS750 in low latency mode, in which it
was configured for the UNSW-MRI experiment, is 2 cm +
2 ppm for horizontal and 3 cm + 2 ppm for vertical.
In the following Figure 4, the GPS-RTK time series of
selected segments of the 20Hz session are compared with
acceleration integrated twice and velocity integrated once.
The latter two are band-pass filtered (pass band: 0.1 to
8 Hz). The three results are in very good agreement in all
the experimental sessions (where there were vibrations).
But the GPS results indicate that the shaft of the shake
simulator truck did not return to its original position after
the sessions, and indeed no effort was made to do so in
the experiment.
In Figure 5 below, the three results were band-pass
filtered (pass band: 0.1 to 8 Hz). The acceleration and
velocity results were offset by 2 cm each in the vertical
axis direction for better viewing. (As a matter of fact,
when the results are superimposed in a colour plot
they agree with each other very well.)


Figure 4. Comparison of GPS results with
accelerometer and velocimeter.

The GPS result is much better than expected (although the
origin of the trace offsets in the GPS result that occur
every 5 seconds is not clear at present). As can be seen
from the figure, sine waves of 10 cm peak-to-peak
amplitude and 1-5 s periods were generated by the truck in
this session.

Figure 5. Another comparison of GPS results with
accelerometer and velocimeter.
In conclusion, in the joint UNSW-MRI experiment using
two Trimble MS750 GPS receivers, operating in the Real-
Time Kinematics mode, to detect seismic signals
generated by an earthquake simulating truck, the GPS-
RTK result is in very good agreement with the results of
the accelerometer and velocimeter, indicating that a fast
sampling rate (up to 20Hz) GPS system can be used for
measuring displacements directly. The vertical
displacement divided by the height of the structure will
give the strain; the horizontal displacement can be
converted to tilt and rotation of the structure. The
advantage of using GPS receivers instead of traditional
accelerometers is that GPS results can be directly used to
derive strain, tilt and rotation of the skyscraper while the
accelerometer results have to be integrated twice in order
to be converted to displacement, which cannot reflect the
true movement of the structure as revealed in Figure 4.
Another advantage is that the spectrum response of GPS
is flat over the frequency band (0 to 20 Hz) most
important to structural integrity monitoring. Since the
conceptual design of the combined system was
based on the dimensions of the WTC (417 m in height and
65 m in width), the GPS resolutions for strain, tilt and
rotation of the skyscraper are approximately 50 micro-strain,
5 arc seconds and 45 arc seconds respectively, assuming a
GPS-RTK accuracy of 1 cm + 2 ppm for horizontal and
2 cm + 2 ppm for vertical.
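The resolution figures quoted above can be reproduced with a few lines of arithmetic. The sketch below is our reading of how they follow from the RTK accuracy (strain from the vertical accuracy over the building height, tilt from the horizontal accuracy over the height, and rotation from the differential horizontal accuracy across the building width); the exact relations are assumptions for illustration, not a derivation from the paper.

```python
import math

ARCSEC_PER_RAD = 180 * 3600 / math.pi   # radians -> arc seconds

def gps_resolutions(height_m, width_m, horiz_acc_m, vert_acc_m):
    # Assumed relations (our reading of the text, not a quoted derivation):
    strain = vert_acc_m / height_m                              # dimensionless
    tilt = (horiz_acc_m / height_m) * ARCSEC_PER_RAD            # arc seconds
    rotation = (math.sqrt(2) * horiz_acc_m / width_m) * ARCSEC_PER_RAD
    return strain * 1e6, tilt, rotation                         # micro-strain, ", "

# WTC dimensions and the 1 cm / 2 cm RTK accuracy quoted in the text
print(gps_resolutions(417.0, 65.0, 0.01, 0.02))
# -> roughly (48 micro-strain, 5 arc seconds, 45 arc seconds)
```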


5. THE OPTICAL FIBRE SENSOR COMPONENT
A Fiber Bragg Grating (FBG) sensor is a periodic or
aperiodic perturbation of the refractive index along the
fiber length which is formed through photosensitivity by
exposure of the core to an intense optical interference
pattern. The index perturbation in the core is similar to a
volume hologram or a crystal lattice that acts as a stop
band filter. A narrow band of the incident optical field
within the fiber is reflected by successive, coherent
scattering from the index variations. The following Figure
6 shows that Bragg resonance, i.e. reflection of the incident
mode, occurs at the wavelength for which the grating pitch
along the fiber axis is equal to one-half of the modal
wavelength within the fiber core. The back scattering
from each crest in the periodic index perturbation will be
in phase and the scattering intensity will accumulate as
the incident wave is coupled to a backward propagating
wave.



Figure 6. A FBG sensor.

For a short-period fiber grating, or FBG, the strongest
interaction or mode coupling occurs at the Bragg
wavelength, given by

λ_B = 2 n_eff Λ

where n_eff is the modal index and Λ is the grating period
(usually < 1 μm).
Figure 7 shows that with an FBG it is possible to detect both
the reflected and transmitted signals. The basic principle
is that changes in the measurand (e.g., strain, temperature)
change the grating pitch or refractive index, which induces
a shift in the Bragg wavelength; the measurand can therefore
be obtained by monitoring the shift in the Bragg wavelength
of the FBG sensor. If a spectrally broadband source of light
is injected into the fiber, a narrowband spectral component
at the Bragg wavelength is reflected by the grating. In the
transmitted light, this spectral component will be removed.
Both the reflected and transmitted light can be used for
sensing.


Figure 7. FBG sensing principle.

Any change in fiber properties, such as strain,
temperature, or polarization, which varies the modal index
or grating pitch will change the reflected (Bragg) or
transmission wavelength. The grating is an intrinsic
sensor which changes the spectrum of an incident signal
by coupling energy to other fiber modes. In the simplest
case of the FBG, the incident wave is coupled to the same
counter-propagating mode and thus reflected.
The sensitivity is governed by the fiber elasto-optic
and thermo-optic properties and the nature of the load or
strain which is applied to the structure that the fiber is
attached to or embedded within. Strain shifts the Bragg
wavelength through dilating or compressing the grating
and changing the effective index. The resulting wavelength
shift is given by

Δλ_B / λ_B = ε_z - (n²/2)[p12 ε_z + (p11 + p12) ε_t]

where ε_z is the principal strain along the fiber axis, ε_t is
the strain transverse to the fiber axis, and p11 and p12 are
the fiber Pockels coefficients (for simplicity, n_eff is written
as n). A more complicated loading might be triaxial, which
would introduce a third strain component normal to both
the direction of fiber polarization and the direction of wave
propagation and would therefore change the resonance
condition of the grating. If the strain is homogeneous and
isotropic, the expression simplifies to its more common
form

Δλ_B / λ_B = (1 - p_e) ε

where the photo-elastic contributions have been subsumed
into p_e, which is defined by

p_e = (n²/2)[p12 - ν(p11 + p12)]

in terms of the fiber Pockels coefficients p11 and p12
and the Poisson ratio ν. Typical values for the sensitivity
to an applied axial strain are 1 nm/millistrain at 1300 nm
and 0.64 nm/millistrain at 820 nm. The strain response
is linear with no evidence of hysteresis at temperatures as
high as 370 °C. Therefore, the FBG strain resolution can
be as high as 1 micro-strain considering the wavelength
resolution is better than 0.001 nm. The temperature
sensitivity of a bare fiber is primarily due to the thermo-
optic effect. It is given by

Δλ_B / λ_B = (α + ξ) ΔT

valid up to about 85 °C, where α is the coefficient of
thermal expansion (CTE) of the fiber material (e.g., silica),
ξ is the thermo-optic coefficient, and ΔT is the temperature
change. A typical value for the thermal response at
1550 nm is 0.01 nm/°C. At higher
temperatures, the sensitivity increases and the response
become slightly nonlinear. If the fiber is jacketed or
embedded in another substance then the sensitivity can be
enhanced by the proper choice of material. This is clearly
desirable if the grating is to be used as a sensor.
Therefore, the FBG temperature resolution can be as high
as 0.1 °C considering the wavelength resolution is better
than 0.001 nm. A very important advantage of an FBG
sensor is that it is wavelength-encoded. Shifts in the
spectrum, seen as a narrow-band reflection or dip in
transmission, are independent of the optical intensity and
uniquely associated with each grating, provided no
overlap occurs in each sensor stop-band. With care in
selection of the Bragg wavelengths, each tandem array of
FBG sensors only registers a measurand change along its
length and not from adjacent or distant transducers.
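The following minimal sketch shows how the paired arrangement described earlier (one strain-isolated reference grating and one strain-plus-temperature grating per storey) separates the two effects from the measured wavelength shifts. The sensitivity constants are the typical figures quoted in the text and are illustrative rather than calibration data; the function name is ours.

```python
# Typical sensitivities quoted in the text (illustrative, not calibration data)
K_EPS = 1.0e-3   # nm of Bragg shift per micro-strain (1 nm per millistrain)
K_T = 0.01       # nm of Bragg shift per degree C (thermal response near 1550 nm)

def demodulate_pair(d_lambda_sensing_nm, d_lambda_reference_nm):
    """Return (strain in micro-strain, temperature change in deg C).

    The reference grating is isolated from strain, so its shift gives the
    temperature change alone; subtracting it from the sensing grating's
    shift leaves the strain contribution.
    """
    d_temperature = d_lambda_reference_nm / K_T
    strain = (d_lambda_sensing_nm - d_lambda_reference_nm) / K_EPS
    return strain, d_temperature

# Example: sensing FBG shifts by 0.25 nm, reference FBG by 0.05 nm
print(demodulate_pair(0.25, 0.05))   # -> (200.0 micro-strain, 5.0 deg C)
```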
Several individual FBG sensors can be connected to form
an array for quasi-distributed monitoring as shown in
Figure 8. Moreover, the use of an optical switch shown in
Figure 9 allows such an instrumentation system to address
several strings or arrays of FBG sensors as described
in Figure 8. In the system, single-mode optical fiber
switches driven under PC control are utilized to allow the
measurement of strain along independent strings of FBG
sensors. This approach can bring significant cost-
reduction to the monitoring system because several
FBG arrays can share the same optoelectronic system.

Figure 8. A single string FBG array.


Figure 9. A multi-string FBG array.

First, the two FBG sensors (FBG1 and FBG2) were tested
using the optical spectrum analyzer. In Figure 10 the first
plot is the reflective spectrum of the two FBGs. The
second plot shows the reflective spectrum of FBG1 and the
transmissive spectrum of FBG2. It can be seen that the
Bragg wavelengths of the two are very close.

Figure 10. Reflective and transmissive spectra of the
FBGs.
Using the optical spectrum analyzer, FBG1 was further
tested in various load conditions. In Figure 11 below the
wavelength shift is plotted against strain assuming a
sensitivity of one micro-strain per pm wavelength shift.


Figure 11. FBG wavelength shift against strain.

Then the two FBG sensors were used together in an
experimental system as depicted in the Figure 12.
Broadband light was launched into the sensing FBG1
through an adapter, an isolator and a 3-dB fiber coupler.
Light reflected by FBG1 was split into two arms. One arm
was directly detected by photo detector PD1 (V1) while
on another arm light passed through FBG2 and was
detected by photo detector PD2 (V2).
At different ambient temperatures (23.2 °C and 25.2 °C),
FBG1 was tested at various strains. The
results are given in Figure 13. It can be seen that the
linearity in the 0 to 250 micro-strain range is very good.


Figure 12. Dual FBG experiment system








Figure 13. FBG strain sensitivity test under different
temperature conditions.
In conclusion, the temperature and strain resolutions of the
FBG can be as high as 0.1 °C and 1 micro-strain
respectively, and the advantages of FBG sensing are:
Electrically passive operation
EMI immunity
High sensitivity
Multiplexing capabilities
Distributed sensing
Inherent self-referencing
The last two are unique to FBG sensing while the rest are
the advantages normally attributed to fiber sensors in
general.
6. THE INTEGRATION OF GPS AND OPTICAL
FIBRE SENSOR COMPONENTS
On 11 September 2001, some communications
technology was put to extreme tests and failed. Hence, we
have to focus on redundancy and security in designing the
integrity monitoring system. In the combined GPS and
FBG system, both local strain at the individual FBG
sensors and integrated strain between FBG sensors can be
monitored. The strain measured by GPS is essentially an
integration of all these FBG measured strains over the
whole height of the skyscraper. As a redundancy within
the FBG sensing component optical sources and detectors
(S/D) have been placed both on top and at the bottom of
the skyscraper as shown in Figure 1. With such a design,
even if the building suffers an attack mid-way, the
FBG arrays will still be functional using the reflective
sensing scheme until total collapse. In order to send the
monitoring data to a
safe location, various data communications schemes such
as internet, radio and satellite have to be used. The FBG
arrays themselves can also be used as data links, because
a FBG sensor is merely a perturbation of the refractive
index along the fiber length which is formed through
photosensitivity by exposure of the core to an intense
optical interference pattern.
For example, Figure 14 shows the FBG reflection growth
in a polymer optical fiber manufactured in UNSW under
the exposure of 4.6mJ ultraviolet laser. Figures 15 and 16
show the FBG reflection and transmission growth
respectively after laser writing. Therefore, in principle
data communications in the FBG arrays can be carried out
on any wavelength outside 1572-1574 nm. However,
considering the variations in the central FBG wavelength,
as illustrated in Figures 17, 18 and 19 for three FBGs all
produced at UNSW, especially between FBG1
(1572.25 nm) and FBG2 (1573.25 nm), the wavelength
used for data communications should be selected further
away from the Bragg wavelength. Unlike Figure 14, the
reflection spectra in Figures 17-19 have been vertically
shifted by 6 dBm consecutively for clarity.
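The wavelength-selection rule described above can be expressed as a simple check: a data channel sharing the FBG arrays should sit outside every grating stop band plus a guard band. In the sketch below the two Bragg wavelengths are the UNSW values quoted in the text, while the 2 nm guard band is an illustrative assumption of ours.

```python
BRAGG_WAVELENGTHS_NM = [1572.25, 1573.25]    # FBG1 and FBG2 as quoted above
GUARD_BAND_NM = 2.0                          # assumed margin, not a paper value

def usable_for_data(candidate_nm):
    """True if the candidate wavelength is far enough from every Bragg wavelength."""
    return all(abs(candidate_nm - wb) > GUARD_BAND_NM for wb in BRAGG_WAVELENGTHS_NM)

print(usable_for_data(1573.0))   # False: inside the 1572-1574 nm sensing band
print(usable_for_data(1550.0))   # True: well clear of the gratings
```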

Figure 14. FBG reflection growth during laser writing.

Figure 15. FBG reflection growth after laser writing.


Figure 16. FBG transmission growth after laser writing.


Figure 17. FBG1: reflection growth during laser writing.

Figure 18. FBG2: reflection growth during laser writing.

Figure 19. FBG3: reflection growth during laser writing.

7.CONCLUDING REMARKS
In the wake of the September 11 attack, a real-time
system based on GPS and optical fiber sensors for
monitoring structural integrity of the skyscraper has been
proposed. The conceptual design of the combined system
was based on the dimensions of the WTC. Then the first
experiment involving Trimble MS750 GPS receivers,
velocimeters and accelerometers indicates that the GPS
RTK results are in good agreement with the results of
velocimeters and accelerometers integrated once and
twice respectively. The GPS resolutions for strain, tilt and
rotation of the skyscraper are approximately 50 micro-strain,
5 arc seconds and 45 arc seconds respectively. In the second
experiment involving the
optical fiber Bragg grating (FBG) sensors, resolutions of
one micro-strain change and 0.1 °C temperature change
have been demonstrated. The system can function as an
early warning system, will provide crucial data for making
rescue decisions, and will help to establish the collapse
mechanism should the structure collapse. The system
proposed here may also prove to be useful for the
monitoring of other structures.

REFERENCES

1. Alan D. Kersey, Michael A. Davis, Heather J. Patrick, et al., 1997. Fiber Grating Sensors. Journal of Lightwave Technology, Vol. 15, No. 8, pp. 1442-1463.
2. Kenneth O. Hill and Gerald Meltz, 1997. Fiber Bragg Grating Technology: Fundamentals and Overview.
3. Nellen, Ph.M. and Sennhauser, U., 2000. Characterization and aging of optical fiber Bragg gratings. In: 3rd Int. Conference and Poster Exhibition on Micro Materials (MicroMat 2000), Berlin, 17-19 April, pp. 740-743.
4. IEEE magazine based on satellite & mobile
communications







JNTU COLLEGE OF ENGINEERING (Autonomous), ANANTAPUR.
DEPARTMENT OF ELECTRICAL & ELECTRONICS ENGINEERING


A Paper on
NUMBER PORTABILITY AND INDIAN SCENARIO
By
K.Praveen kumar (II B.TECH EEE)
T.Siva kesava babu(II B.TECH EEE)

EMAIL & PHONE NO-
thisispraveenkumar_jntu@yahoo.co.in(9989452492)
siva_1436@yahoo.com(9985228337)









ABSTRACT-
How would you feel if there were a technology that
enables a customer to keep a single mobile number even
when he or she switches over from one service provider to
another? Do you think that this would be a great
achievement? A customer can retain the same mobile
number even after jumping from one service provider to
another. The magic word that can make this possible is
NUMBER PORTABILITY.

Number portability allows customers not only to
move from one mobile service provider to
another within the global system for mobile
communications (GSM) but also from GSM to
code-division multiple access(CDMA) services
and also from landline to wireless phones.

This paper introduces the basic types of number
portability, namely operator, location and service
portability, and clearly explains their possibilities of
implementation. Our paper explains two methods of
providing number portability:

Off-switch solutions
On-switch solutions

Besides this, a short comparison between these two
methods is also given. Some important issues that greatly
affect the implementation of number portability are also
mentioned. These include tariff transparency and the
difficulty of routing short messages.

Besides explaining number portability, the paper also
describes the situation regarding the implementation of
number portability in our country and the opinion of some
leading service providers like Reliance Infocomm and the
Tatas. In conclusion, points in favour of mobile number
portability in India are mentioned. Finally, the
recommendations of the Telecom Regulatory Authority of
India (TRAI) are provided in the paper.

INTRODUCTION-
Currently in India, subscribers are required to
change their telephone numbers when
changing operators. Changing a telephone
number can be a major inconvenience and a
barrier preventing them from exercising the
choice of changing operators. As a result, the
customer may be unable to take full advantage of
the growing competition among operators or the
introduction of new services and technologies.
At present, India's tele-density is approximately 12% and
approximately 60 million cellular mobile subscribers are
connected through four to seven cellular mobile operators
within each service area. Subscribers have a significant
choice of services to choose from and move between, if
they so desire. Therefore, Indian service providers should
seriously consider number portability in order to provide
better services for Indian customers.
DEFINITION :
Number portability allows customers not only to
move from one mobile service provider to
another with in the Global System for Mobile
communications (G S M) but also from GSM to
Code-Division Multiple Access (CDMA)
services, and also from land line to wireless
phones.
TYPES OF NUMBER
PORTABILITY:
Number portability is not only limited to
operator switch over but also enables a
subscriber to switch between services or location
while retaining the original telephone number,
without compromising on quality, reliability and
operational convenience. So there are three basic
types of number portability: operator, location
and service portability.
Operator portability:
This is the ability of a subscriber to retain, within the same
service area, an existing telephone number even if he
changes from one service provider/operator to another.
This type of portability is for the same service, i.e.,
fixed-to-fixed or mobile-to-mobile. The different categories
of operator portability are as follows:
Fixed-number portability:
FNP is operator portability applied to a fixed-to-fixed
porting process. The main hurdle in the implementation of
fixed-number portability is that it again requires a change
in the national numbering plan, which may not be a small
issue for a country like India. So at this stage it is easier to
go for mobile number portability first and then look at
fixed-number portability.
Mobile number portability:
MNP is operator portability applied to a mobile-to-mobile
porting process. There is a latent demand for MNP in
India. In a survey, International Data Corporation (IDC)
found that 30% of mobile subscribers are likely to shift to
another operator for better tariffs or services if given the
option. Competition between mobile service providers in
India is already intense. The beneficiary of this
competition would be the Indian customer, and MNP may
increase the level of competition further. The Telecom
Regulatory Authority of India (TRAI) considered that this
level of portability is easy to achieve and that it will be the
first step in the implementation of number portability in
India.
Location portability:
Location portability is the ability of a subscriber to retain
an existing telephone number when changing from one
physical location to another. It becomes complex in the
Indian situation if the subscriber moves to a region where
his original network operator has no footprint. There might
be different impacts on routing and billing depending on
the new location of the number. Location portability is not
required in the existing mobile services as long as the
subscriber moves within the service area, i.e., circle or
metro.
Service portability:
Service portability is the ability of a subscriber to retain
the existing telephone number when changing from one
service to another, say, from fixed to mobile service. In the
Indian context, service portability will encourage the
introduction and adoption of new telecom services and
technologies. Additionally, it is a source of competition
between all telecom operators, whether fixed or mobile.
However, there might be concerns about possible
confusion for callers about the charges for different phone
calls, i.e., tariff transparency is affected. In the current
context this is especially true when wireline numbers
become wireless numbers. A caller would no longer be
able to estimate call charges based on the format of the
phone number.
IMPLEMENTING MOBILE
NUMBER PORTABILITY:
The technical solution adopted for the
implementation of number portability is
important as it will have cost implications on
service providers/ network operators, and will
affect the services offered and the performance of the
services made available to the subscribers.
Number portability can be provided by two broad
categories of method: off-switch solutions or on-switch
solutions.
Off-switch solutions:
Off-switch solutions require the use of a database holding
the information on ported numbers, and the call is routed
to the concerned switch depending on the result of the
query. This type of solution allows for efficient routing of
the call towards the recipient switch.
The originating switch can intercept a call to a ported
number by querying the database, which contains a list of
all ported numbers plus routing information associated
with each ported number. There are two ways to access the
database: the all-call-query and the query-on-release
methods.

All-call-query method:
The originating network first checks the location of the
dialed number in the central database and then routes the
call directly to the recipient network (the network where a
number is located after being ported).
Query-on-release:
The originating network first checks the status
of the dialed number with the donor network (the
initial network where the number was located
before being ported). The donor network returns a
message to the originating network identifying
whether the number has been ported or not. The
originating network then queries the central
database to obtain the information regarding the
recipient network and routes the call directly to
the recipient network.
On-switch solutions:
In the case of on-switch solutions the donor
network manages the routing information for a
ported number. Thus, the donor switch performs
the interception, either routing the call itself, or
providing routing information to the originating
network that then routes the call to the recipient
network. Consequently, this involves the use of
internal databases.
The two ways to implement on-switch
solutions are: onward routing (call forwarding)
and call drop back.
Onward routing (call forwarding):
Here the originating network connects to the
donor network. If the dialed number has been
ported, the donor network itself routes the call to
the recipient network.
Call-drop back:
Here the donor network checks whether the
number is ported and if it is, it releases the call
back to the originating network together with
information identifying the correct recipient
network. The originating network then routes the
call to the recipient network.
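As a toy illustration of the all-call-query method described above, the sketch below routes a dialed number by first consulting a central ported-number database and falling back to ordinary number-range routing when the number has not been ported. The numbers, operator names and the prefix rule are invented for illustration only.

```python
# Central database of ported numbers (illustrative data only)
PORTED_NUMBER_DB = {
    "9876543210": "OperatorB",   # ported away from its original number range
}

def default_operator(number):
    # Ordinary (non-ported) routing by number range, simplified to a prefix rule.
    return "OperatorA" if number.startswith("98") else "OperatorC"

def route_all_call_query(dialed_number):
    """All-call-query: every call triggers a lookup in the central database."""
    recipient = PORTED_NUMBER_DB.get(dialed_number)
    return recipient if recipient is not None else default_operator(dialed_number)

print(route_all_call_query("9876543210"))   # OperatorB (ported number)
print(route_all_call_query("9876500000"))   # OperatorA (not ported)
```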
Comparison of different
technical options:
Onward routing is often regarded as the simplest
routing method to implement and the
all-call-query method as the most complex, with
the other methods lying between these two
extremes. On-switch solutions are usually seen
as a short-term interim solution for number
portability. These are relatively easy and quick to
implement compared to off-switch solutions.
Some countries initially chose a transient, short-
term solution. This was not necessarily the most
technically efficient solution, but allowed
implementation in a timely way and with
minimal investment. Simultaneously, a long-
term solution was also studied and deployed
progressively.
OTHER IMPORTANT ISSUES-
There are other important issues that will greatly affect
the implementation of number portability.
TARIFF TRANSPARENCY:
Users find it desirable to be able to predict the
price of calls, and porting numbers should not
undermine this capability. For example, some
cellular service providers charge less for calls
within their network, and more for calls to
phones on other networks. If portability is
implemented, it may not be possible for a caller
to determine what the tariff for a call might be.
This could lead to confusion for the calling
subscriber. Implementing tariff transparency will
help to avoid this situation.
Tariff transparency can be achieved through the use of a
recorded announcement at the start of a call or, when the
caller has a terminal with a screen, by displaying the tariff
or service information.

National numbering plan:
Since a database query returns the routing information in
the form of a re-routing number (which may be a prefixed
original called-party number), it is important that this
re-routing number is recognizable and routable by the
transit switches and fits into the national numbering plan.
Additionally, the national numbering plan needs
modification because, with the introduction of number
portability, the recipient network should be allowed to use
numbers originally assigned to the donor network.
Short message services:
SMS messages are routed between mobile networks via
signalling paths rather than over voice circuits, so the
methods used for routing calls to ported numbers are not
applicable to the handling of SMS messages forwarded to
ported numbers. Service providers may have to use a
separate solution for handling SMS traffic for ported
numbers.
INDIAN SCENARIO:
Considering the growth of the telecom services
in India, it is appropriate at this stage to discuss
number portability in order to ensure full
competition in the telecom industry. It appears
from the feedback of service providers to the
consultation document of TRAI that the Indian
mobile industry is still not ready for MNP. Most of the
service providers are of the view that number portability
is introduced in other countries as an instrument to
increase competition, but that India already has plenty of
competition and the lowest tariffs in the world, and that
introducing portability will only increase tariffs, so it
makes no sense now.
In the CDMA space, the main players, Reliance Infocomm
and the Tatas, are divided on the issue. While Reliance
Infocomm has been pushing for number portability only in
fixed lines, the Tatas have demanded that portability
should be extended to both fixed as well as wireless
services.
Tata telecom services said that, considering cheap labour
and lower software costs in India, the cost per subscriber
would be around Rs 300.

IS MNP FEASIBLE IN INDIA:
The points in favour of MNP in India are:
The cost per subscriber to introduce MNP is Rs 675 in Australia and Rs 900 in Europe, but around Rs 300 in India.
Pakistan and the Netherlands are introducing MNP despite only 6.9% tele-density. In India it is 12%.
Tariffs have not gone up in countries that have adopted MNP.
The points against MNP are:
In the US, only 5% of phone subscribers have adopted MNP, so it has received a lukewarm response.
Churn rates will increase by 15 to 30%.
Additional expenditure will be in the range of Rs 30 to 40 billion, hence tariffs will go up.
Indian mobile tariffs are the lowest in the world, hence there is no justification for using portability as a weapon to ensure competition.
Lessons from International
Experiences

Economy          FNP    MNP
Hong Kong, SAR   1996   1999
United Kingdom   1996   1999
Australia        1997   2001
United States    1997   2003
Germany          1998   2002
France           1998   2003
Netherlands      1999   1999
Singapore        2001   1997

Table 4.1: FNP and MNP implementations internationally



Overview of Technical Choices in Other Countries
The call routing techniques for ported mobile numbers planned or adopted in
European countries vary considerably. The list below illustrates this variation
across the countries for which information is available, giving the routing
method from a fixed network to a mobile network, followed by the routing
method from a mobile network to another mobile network:

Belgium: all call query / all call query & query on release
Denmark: all call query & query on release / all call query & query on release
Finland: onward routing / onward routing
France: phase 1: onward routing, phase 2: all call query / phase 1: onward routing, phase 2: all call query
Germany: onward routing & all call query / all call query
Hungary: all call query & query on release / phase 1: onward routing & all call query, phase 2: all call query
Ireland: onward routing / all call query

Table 4.2: Technical solutions for MNP used in various countries
RECOMMENDATIONS OF TRAI:
TRAI has recommended that subscribers who wish to
avail themselves of the facility will have to pay a one-time
fee of Rs 200 to the operator the subscriber wants to
switch to. The fee will enable the service provider to
recover his investment cost in 3 to 5 years. A time frame
of 12 months between the acceptance of the
recommendation by the government and the launch of this
facility is recommended. The facility, implemented from
April 1st, 2007, would first be available to mobile
subscribers in the metros.

Conclusion
The paper introduced the concept of number portability,
the related regulation, and alternative technical solutions.
The situation in our country regarding the feasibility of
number portability was clearly explained. MOBILE
NUMBER PORTABILITY will raise the bar of
competition in the cut-throat telecom market, thereby
increasing the quality of service and reducing tariffs in
India.









REFERENCES
Consultation Paper on Mobile Number Portability, Consultation Paper No. 7/2005, Telecom Regulatory Authority of India, New Delhi: July 22, 2005.
T. Smura, Mobile Number Portability: Case Finland, http://keskus.hut.fi/opetus/s38042/s04/Presentations/06102004_Smura/Smura_paper.pdf.
MNP from www.efy.com
Information paper: Mobile number portability in Singapore, Infocomm Development Authority of Singapore, August 2003.
EU, 2005. Regulatory framework for electronic communications in the European Union, Situation in September 2005. http://europa.eu.int/comm/competition/liberalization/legislation/regulatory_framework.pdf
FICORA 46 B/2004 M. Regulation on telephone number portability. Available at: http://www.ficora.fi/englanti/document/FICORA46B2004M.pdf
IT- og Telestyrelsen (ITST), Denmark.



PAPER PRESENTATION FOR
ELECTROCOM07
AT
KAKATIYA INSTITUTE OF TECHNOLOGY AND SCIENCE

OPTICAL ROUTER WITH TIME/WAVELENGTH CODING FOR OPTICAL-CDMA
NETWORKS
(optical communications)
By

SIVARAMA GUPTA.I SANTOSH KUMAR.S
(05341A0448) (05341A0443)


II/IV B-Tech (E.C.E),
G.M.R.Institute of Technology,
Rajam.
E-Mail: sivaramaguptaimmidi@yahoo.com
: sakinala_santosh87@yahoo.com

Mobile No: 09346308714











Optical Router with Time/Wavelength coding for Optical-CDMA
Networks
ABSTRACT
Optical CDMA presents several properties
that are very useful for access networks. This
technique allows multiple user access to the
same medium sharing the same time and
wavelength range avoiding the use of
synchronization clocks and is compatible with
bursty asynchronous traffic. The generation
and recovery of codes has already been
widely studied and several solutions have
been presented already, however the
conversion of these codes for band reuse or
interconnectivity of OCDMA-networks has
not been sufficiently studied. In this paper we
suggest and compare two all-optical router
architectures that can perform this function,
converting the signals in code and in
wavelength range. One of the configurations
is based in the Cross Gain Modulation of
Semi-Conductor Optical Amplifiers, and the
other is based in the cross phase modulation
of the same devices.
Index Terms: Asynchronous traffic,
OCDMA-networks, Router, Cross Gain
Modulation, Semi-Conductor Optical
Amplifiers
INTRODUCTION
Optical code division multiple access
(OCDMA) has been recognized as one of the
most important technologies for supporting
many simultaneous users in shared media, and
in some cases can increase the transmission
capacity of an optical fiber. OCDMA has
distinctive features that include possibility of
full asynchronous communication, enhanced
security and a soft variation of the system
properties to the number of users. However,
for improperly designed codes, the maximum
number of simultaneous users and the
performance of the system can be seriously
limited by the crosstalk from other users. For
this reason, various code families have been
suggested, from the one-dimensional (1D)
optical orthogonal code (OOC) to the recent
two-dimensional (2D) codes . In general, 2D
optical codes outperform 1D OOCs in the
number of supportable users at a given bit
error rate (BER), and also relax the
requirements on code length and pulse width.
However, working at higher dimensions
makes it difficult to design a code family,
requiring codewords to be treated in matrix
form. The introduction of a large number of
required wavelengths makes it difficult to
construct the hardware sets, and to
compensate for the transmission group delay
caused by the wide wavelength separations.

OCDMA systems are normally classified as
incoherent and coherent. The former approach
utilizes unipolar codes, matched filtering and
direct detection; the latter makes use of the
wavelike nature of light and by impressing
phase information, produces bipolar coding
and hence, an improvement in the processing
gain. The restrictions imposed by operating in
the incoherent domain can be traded off
against the complexity of the implementation.
Matched filtering followed by direct detection
is more straightforward than coherent
correlation with phase-coded chips since in
the latter, the phase of the carrier needs to be
tracked to successfully recover the data.
Given the phase noise problems associated
with optical coherent techniques, this
represents a challenging implementation issue
requiring additional complexity at the
receiver. In incoherent systems the
restrictions imposed by the unipolar regime
translates into the use of very long and sparse
sequences. Given this limitation, a new family
of unipolar hybrid codes has been developed
recently for use in OCDMA applications. In
parallel complimentary activities have been
pursued using coherent optical codes
comprising phase-coded pulses.
There are several ways of generating 2D
coherent and incoherent codes. One of the
common ways to generate OCDMA
time/wavelength codes is by resorting to
Fiber Bragg Gratings (FBGs). These devices
can be designed as narrow-band reflecting
filters, as reflecting-band phase coders, or
with other properties that are interesting for
this kind of broadband transmission. By
resorting to a broadband source or to a
spectral comb of wavelengths, time/wavelength
codes can be easily and dynamically
generated.

Fig. 1 shows an example of a sub-system that encodes a sequence of pulses coming from the
source, either generated by a multiple-laser comb (such as a laser array) or by a supercontinuum
source. The pulses are coded by a modulating device that encodes the sequence and are then
coded in time and wavelength by a set of optical delay lines and FBGs. The example is only for
a three-wavelength code.
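As a toy numerical picture of such a time/wavelength code, the sketch below places one chip per wavelength in a chosen time slot and shows that a decoder applying the complementary delays produces a large autocorrelation peak for the matched code and a small one otherwise. The codewords, the cyclic-shift simplification and the chip count are illustrative choices of ours, not codes or parameters from the paper.

```python
import numpy as np

N_CHIPS = 4   # chip slots per bit (illustrative)

def encode(chip_slots):
    """chip_slots[w] = time slot of the single chip carried on wavelength w."""
    grid = np.zeros((len(chip_slots), N_CHIPS))
    for w, t in enumerate(chip_slots):
        grid[w, t] = 1.0
    return grid

def correlation_peak(received, chip_slots):
    """Apply the matched delay for each wavelength (cyclic shift, for simplicity)
    and sum over wavelengths; the peak of the summed trace is the correlation."""
    total = np.zeros(N_CHIPS)
    for w, t in enumerate(chip_slots):
        total += np.roll(received[w], -t)
    return total.max()

code_a = [0, 2, 3]   # three-wavelength code: chip slots for wavelengths 1..3
code_b = [1, 0, 2]

print(correlation_peak(encode(code_a), code_a))   # 3.0 -> autocorrelation peak
print(correlation_peak(encode(code_a), code_b))   # 1.0 -> low cross-correlation
```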


If we manage to have several bands in the
same fiber, or several interconnected
OCDMA networks, then in order to allow
connectivity we need code conversion
capabilities that also allow wavelength
conversion. Fig. 2 shows a simple block
diagram of a code converter that meets this
specification. To allow complete code
conversion (time and wavelength), a
multi-wavelength converter and a time-code
converter are needed. The code conversion is
a simple process, since the same device that
is used to generate the code (delays and
FBGs) can generally be used to detect it, and
then to generate it again. After detection,
some kind of band conversion is needed. In
order to obtain all-optical recoding we will
need either wavelength conversion or some
kind of optical-sequence-driven modulator in
order not to lose the data.
In this paper we will concentrate on the
multiple-wavelength conversion task. We will
address two types of wavelength conversion
techniques in a semiconductor optical
amplifier (SOA): the Cross Gain Modulation
(XGM) effect and Cross Phase Modulation
(XPM). Some other conversion techniques
can be identified, such as those based on
Four Wave Mixing (FWM); however, they
fall outside the scope of this study.
A.XGM Technique
In this technique, the SOA is used as
an optically controlled gate, in which the
incoming, intensity modulated signal gates
one or more Continuous Wave (CW) source
at another wavelength. The mechanism
involves a reduction of the carrier population,
due to stimulated recombination induced by
incoming signal. Since the incoming signal or
signals are modulated, this mechanism will
induce gain modulation, which modulates the
output wavelength. In this case the gain is
reduced when the signal is intense, resulting
in a negative slope converter. The maximum
bit rate is related to the carrier dynamics,
which under relatively high optical/electrical
injection current can be very fast and thus
compatible with relatively high bit rates.
Digital transparency is inherent to this
technique since the gain modulation follows
directly the input signal. In this scheme the
polarization dependence is related only to the
SOA polarization dependence degree. This
technique has two particular drawbacks, the
extinction ratio degradation and the chirp
induced by the phase modulation resultant
from the gain modulation in the SOA. The
inverted signal can also represent a difficulty;
however with the scheme that is presented in
fig. 2 this restriction can be overcome. This
scheme uses an Intermediate Frequency (IF)
laser that is responsible for a first conversion
of the signal to its inverted form, irrespective
of the number of incoming frequencies that
are arriving to the converter, as long as they
are synchronized, the conversion will take
place in good condition. After this first
conversion, a second one takes place and this
inverted sequence is then converted again to
another comb of wavelengths.
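A very rough numerical sketch of this negative-slope behaviour is given below: the intensity-modulated input saturates a simple homogeneous gain model, so the CW probe at the new wavelength comes out as an inverted copy of the data. The saturation model and all parameter values are illustrative simplifications of ours, not a device model from the paper.

```python
import numpy as np

G0 = 1000.0     # unsaturated (small-signal) gain, linear (assumed)
P_SAT = 1.0     # saturation power in mW (assumed)
P_CW = 0.1      # CW probe power at the target wavelength in mW (assumed)

def soa_gain(p_in_mw):
    # Simple homogeneous gain saturation: gain drops as the input power rises.
    return G0 / (1.0 + p_in_mw / P_SAT)

signal_mw = np.array([0.0, 1.0, 0.0, 1.0, 1.0, 0.0])   # input bit pattern
converted = soa_gain(signal_mw) * P_CW                  # probe power at the output

print(converted)   # high where the input is '0', low where it is '1': inverted copy
```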



This technique is quite attractive due to its
simplicity; it requires two SOAs and one IF
laser, which only needs good control of its
power and linewidth. This relaxes all the
other parameter ranges, decreasing its cost.
B.XPM Technique
Converters based on optical gates that
rely on changes in the refractive index appear
at the moment to be the most promising all-
optical converters. These devices can be
obtained in several configurations like the
Michelson interferometers (MI), Mach
Zehnder Interferometers (MZI) and Non-
linear Optical Loop Mirror interferometers
(NOLM). One of the big problems of this
configuration is the temperature and current
control, and for that reason the later devices
are being realized in a monolithically
integrated device form. They are being
realized in three usual configurations, the all-
active, the passive-active and the hybrid
active.
The principle of operation is more or
less the same for all the configurations. The
modulated signal induces stimulated
recombination in the SOA, thereby depleting
the carriers in the active medium. This results
in a variation in the local refractive index of
the active medium, inducing a change in the
phase of the CW light that is traveling through
one of the arms. This creates a phase-mismatch
condition that leads to intensity modulation
at the output, encoding in this way the data
coming in the input wavelength or
synchronized wavelength comb. These
devices offer the possibility of controlling the
chirp of the converted output signal, and a
large signal-to-ASE-noise ratio can generally
be obtained, allowing 2R regeneration. It is
also observed that a few dB are enough to
generate a phase shift; for that reason the
input power range is generally limited
(3-5 dB), which can lead to several
impairments in real operation. However,
there are several techniques that can help to
overcome this problem, normally requiring
extra hardware, which can make this option a
little more complex and expensive.
The solution presented converts several
wavelengths at a time and is based on an MZI.
It is to be noticed that the arms should be very
well controlled in order to have the correct
phase difference at the output coupler;
generally this is not so well achieved with
discrete components assembled together,
which can result in limited-band operation
and arbitrary changes in temporal performance.

Figure 4: Wavelength conversion based on the XPM in a Mach-Zehnder configuration.
DISCUSSION AND COMPARISON OF THE TWO PRESENTED TECHNIQUES
These two techniques both require two optical
amplifiers, and the XGM technique requires
one additional laser for the IF conversion.
Disregarding the fact that XPM needs better
control in its realization and of the input
power, it presents better chirp and
extinction-ratio conditions than XGM. The
latter (XGM) presents higher robustness and
simplicity, being easily realized with discrete
components in the lab. The signal is received
with an optical power offset due to the gating
process involved in the XGM conversion,
which in transmission can saturate the
amplifiers and bring some problems.
The code conversion was simulated for both
configurations, and the results were obtained
with the system presented in fig. 5.

Figure 5: Block diagram of the system used to test the efficiency of the two OCDMA routing
techniques.

The signal was generated with a raised-cosine
shape in order to have a better spectral shape
and therefore allow better packing of the
signals. Then, using a three-wavelength part
of the 6-wavelength comb generated at the
CW source, the data was modulated onto the
three CW wavelengths. After that, the signal
was sent to a grating coder with a specific
code in time and wavelength. The signal
propagated over a 5 km fiber span and
arrived at the decoder in the OCDMA
time/wavelength code router. A PIN
photodiode detected this sequence, and the Q
factor of the received sequence was observed
for a matching grating and a non-matching
grating. The Q factor was 11.5 for the XGM
and 13.8 for the XPM configuration, showing
the effect of the extinction-ratio decrease
caused by the XGM configuration properties.
CONCLUSIONS
In this paper we have reviewed wavelength
converter techniques for use in
multi-wavelength conversion. Two of these
techniques were characterized in detail and
their qualities and problems evidenced. In an
OCDMA routing function the XPM
configuration has shown that it can perform
somewhat better than the chosen XGM
configuration, due mainly to the
extinction-ratio control that is possible with
XPM and not easily achieved with XGM.
The routing was performed successfully,
changing bands and codes in an OCDMA
network, and the potential of this technique
for interconnecting OCDMA networks was
demonstrated.
References
[1] CHUNG, F.R.K., SALEHI, J.A., and WEI, V.K.: Optical orthogonal codes: Design, analysis, and applications, IEEE Trans. Inf. Theory, 1989, IT-35, pp. 595-604.
[2] YANG, G.C., and KWONG, W.C.: Performance comparison of multi-wavelength CDMA and WDMA+CDMA for fiber-optic networks, IEEE Trans.
[3] FAHALLAH, H., RUSCH, L.A., and LAROCHELLE, S.: Optical frequency hop multiple access communication system, Proc. IEEE ICC '98, 1998, pp. 1269-1273.
[4] I. Andonovic and L. Tancevski, Incoherent optical code division multiple access systems, in Proc. IEEE Conf. ISSSTA, 1996, pp. 424-430.
[5] M. E. Marhic, Coherent optical CDMA networks, J. Lightwave Technol., vol. 11, pp. 854-863, May/June 1993.
[6] L. Tancevski and I. Andonovic, Hybrid wavelength hopping/time spreading schemes for use in massive optical networks with increased security, J. Lightwave Technol., vol. 14, pp. 2636-2647, Dec. 1996.
[7] W. Huang and K. Kitayama, Optical pulse code division multiple access utilizing coherent correlation detection, in Proc. Int. Conf. Telecommun. (ICT), 1997, pp. 1221-1226.


ABSTRACT

INTRODUCTION

Interest in the use of light as a carrier for information grew
in the 1960s with the advent of the laser as a source of coherent light.
At the same time, developments in semiconductor light sources and detectors
meant that by 1980 worldwide installation of fibre-optic communication
systems had been achieved.


Optical communication is a form of telecommunication that uses light
as the transmission medium. Fibre optics is a branch of physics based upon
the transmission of light through transparent fibres of glass or plastic.
These optical fibres can carry light over distances ranging from a few
centimetres to more than 160 km. Such fibres work individually or in
bundles. Some individual fibres measure less than 0.004 mm in diameter.


The principle on which this transmission of light depends is
that of total internal reflection: light travelling inside the fibre centre,
or core, strikes the outside surface at an angle of incidence greater than
the critical angle, so that all the light is reflected toward the inside of the
fibre without loss. Thus light can be transmitted over long distances by being
reflected inwards thousands of times.
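A small worked example of this condition is given below: light stays inside the core when its angle of incidence on the core/cladding boundary exceeds the critical angle arcsin(n2/n1). The index values are typical of silica fibre and are used here only for illustration.

```python
import math

n_core, n_cladding = 1.48, 1.46   # typical silica values (illustrative)

critical_angle_deg = math.degrees(math.asin(n_cladding / n_core))
print(round(critical_angle_deg, 1))              # about 80.6 degrees

def totally_internally_reflected(angle_of_incidence_deg):
    return angle_of_incidence_deg > critical_angle_deg

print(totally_internally_reflected(85))          # True: the ray stays guided
print(totally_internally_reflected(70))          # False: the ray refracts out of the core
```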


THERE ARE TWO BASIC KINDS OF OPTICAL FIBRES


SINGLE MODE FIBRE:


Single mode fibres are used for long-distance transmissions.
They have extremely small cores (about 3.5*10^-4 inches or 9 microns in
diameter), and they accept light only along the axis of the fibre.


MULTI MODE FIBRE:


Multi mode fibres have cores (about 2.5*10^-3 inches or
62.5 microns in diameter) larger than those of single mode fibres, and they
accept light (wavelength = 850 to 1,300 nm) from a variety of angles.










ADVANTAGES OF OPTICAL FIBRE :

Why are fibre-optic systems revolutionizing telecommunications?
Compared to conventional metal wire (copper wire), optical fibres are:

Thinner, with higher carrying capacity
Lower power, lighter weight, and flexible

DISADVANTAGES OF OPTICAL FIBRE:

Higher cost.
Need for more expensive optical transmitters and receivers.
More difficult and expensive to splice than wires.
Cannot carry electrical power to operate terminal devices.


CONCLUSION:

So here we are going to present our views on how OPTICAL FIBRE

plays a major role in MODERN DIGITAL OPTICAL COMMUNICATION.



BY

CH.SIVA KUMAR K.SOWMYA TEJA
03G61A0443 03G61A0420
FINAL YEAR,E.C.E. FINAL YEAR,E.C.E.


THANDRA PAPARAYA INSTITUTE OF SCIENCE AND TECHNOLOGY,

BOBBILI.

OPTIMIZATION OF REAL -TIME VIDEO TRANSPORT OVER
3G WIRELESS NETWORK

E. Chandra Prakash
M. Sriteja
III / ECE
Chadalawada Ramanamma Engg College
Tirupati
Email: teja_storm@yahoo.co.in


ABSTRACT

Feedback adaptation has been the
basis for many media streaming schemes, whereby
the media being sent is adapted in real time
according to feedback information about the
observed network state and application state. Central
to the success of such adaptive schemes, the feedback
must:

1) Arrive in a timely manner and
2) Carry enough information to effect useful
adaptation.
In this paper, we examine the use of
feedback adaptation for media streaming in 3G
wireless networks, where the media servers are
located in wired networks while the clients are
wireless. We argue that end-to-end feedback
adaptation using only the information provided by 3G
standards is neither timely enough nor sufficiently
informative for media adaptation at the server.

We first show how the introduction of a
streaming agent (SA) at the junction of the wired and
wireless network can be used to provide useful
information in a timely manner for media adaptation.
We then show how optimization algorithms can be
designed to take advantage of SA feedbacks to
improve performance. The improvement in peak
signal-to-noise ratio provided by SA feedback over
non-agent-based systems is significant.






INTRODUCTION:

THE GOAL of this paper is to
improve real-time video transport over 3G wireless
networks. By real-time video transport, we mean a
piece of video content being delivered from a server
in a wired network to a mobile client via a last-hop
wireless link, to be decoded and viewed by the client
before the entire content has been downloaded. This
video streaming service must be compliant with the
3GPP packet streaming service (3GPP-PSS), where
the server uses IETF RFC-compliant RTP for media
transport and each client sends only RTCP reports as
feedback to its server.

One common objective of media
adaptation is congestion control whereby video
sources reduce their transmission rates in reaction to
deduced network congestion. For paths involving
wired and wireless links, it has been shown that end-
to-end feedback information alone is ineffective for
congestion control purposes since it is not possible to
identify where losses occur. Specifically, if losses
occur in the wireless link due to poor wireless
condition, it is not helpful for the sources to reduce
their transmission rate.

On the other hand, if losses occur
in the wired network due to congestion, the sources
should reduce their transmission rate. One effective
mechanism to provide additional information that
allows sources to take appropriate actions is the RTP
monitoring agent.






STREAMING AGENT:

Design of streaming Agent:

SA, an enhanced version of
the RTP monitoring agent, is a network agent
installed by the wireless network provider to provide
network services to the server/client pair that are not
possible with the endpoints alone. It is located at the
intersection of the wired core network and the
transmitting wireless link. During a server-client RTP
streaming session, SA identifies the stream by packet
classification: look for matches at selected fields of
RTP headers of incoming IP packets such as source
and destination addresses and source and destination
port numbers. SA periodically sends timely
feedbacks (SA-FB) to the sending server in sub-
second intervals, reporting the arrival status of the
last RTP packets of a stream.
Using SA in 3GPP-WCDMA:

Two possible locations for SA
within a WCDMA system are the radio network
controller (RNC), which is responsible for the control
of the radio resources of the radio access network,
and the transmitting base station (node B). NodeB
handles layer 1 processing such as channel coding
and interleaving, rate adaptation, spreading, etc.
RNC, on the other hand, performs layer-3 packet
processing such as header compression. Hence, it is
logical to place the functionalities of SA at RNC.

PLACEMENT OF SA IN 3GPP
WCDMA SYSTEM:


Optimization window





At any given optimization instance, an
optimization window equal to M frame times is
selected. The window is defined to be the set of
frames whose delivery deadlines fall between the window
start time and end time. Frames are brought into the
optimization window over time and expire when they can
no longer reasonably be expected to be delivered to the
client on time. The slope of both functions, the rate at
which they advance in time, is the playback speed at the
client.


PROBLEM FORMULATION:

Dynamic Programming (DP) Solution

To make the problem mathematically
tractable, we first introduce simple source and
network models as follows.
We employ a DP technique; to simplify the discussion,
we assume for now that the size of the optimization
window M is N, and that we are optimizing a sequence of
one I-frame plus L-1 dependent P-frames. We denote the
additional distortion reduction provided by a policy vector
for a given frame, given that the preceding frames are
correctly decoded.
Source model:
In the source model, a predictively coded
sequence is sent from the server to the client; after the
packets have been delivered and decoded correctly, the
PSNR of the decoded signal relative to the source signal is
calculated.
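For reference, the PSNR figure of merit used by this source model is sketched below with the usual 8-bit peak value of 255; the frame data are placeholders and the function name is ours.

```python
import numpy as np

def psnr(original, decoded, peak=255.0):
    """Peak signal-to-noise ratio in dB between the source and decoded frames."""
    mse = np.mean((original.astype(float) - decoded.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

original = np.random.randint(0, 256, (64, 64))
decoded = np.clip(original + np.random.randint(-3, 4, (64, 64)), 0, 255)
print(round(psnr(original, decoded), 1))   # roughly 40 dB for this small distortion
```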





SIMULATION SETUP:

We performed simulations using the Network Simulator. Three nodes were constructed, n0, n1 and n2, representing the three locations of the server, SA and the wireless client, respectively. To connect these nodes, two links were constructed. Link n0-n1, simulating the wired network between the server and SA, had a constant propagation delay and a uniform loss rate. The transport layer had a duplex connection (p0-p2) from the server n0 to the client n2 and a simplex connection (p1-p0b) from SA n1 to the server. P0-p2 was used for endpoint data transmission from the server to the client; p1-p0b was used for feedback from SA.
Network model:
In the network model, packets are sent from the server to the client with a constant delay. The optimization takes into account the history of packets, the total number of transmissions and the number of acknowledgements.
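A rough sketch of such a constant-delay, uniform-loss channel (the parameter values are placeholders, not the ones used in the simulations):

```python
# Each packet experiences a constant propagation delay and is dropped
# independently with a uniform loss probability.

import random

def deliver(packets, send_times, delay=0.05, loss_rate=0.02, seed=1):
    """Return a list of (packet, arrival_time) for packets that survive."""
    rng = random.Random(seed)
    arrivals = []
    for pkt, t in zip(packets, send_times):
        if rng.random() >= loss_rate:          # packet not lost
            arrivals.append((pkt, t + delay))  # constant delay
    return arrivals

arrived = deliver(packets=list(range(10)),
                  send_times=[i * 0.01 for i in range(10)])
```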






















RESULTS:
























CONCLUSION AND FUTURE
WORK:
In this paper, we proposed the use of a Streaming Agent at the junction of the wired and wireless networks to provide additional timely feedback to the streaming servers. We discussed one specific streaming optimization, complexity-scalable application-level retransmission, that exploits such feedback to demonstrate the potential benefits. Through simulation, it is shown that a significant PSNR improvement can be maintained over non-agent-based systems.

For future work, extensions of the streaming agent that provide more services are possible. Given that the network agent already monitors media flows, it is perhaps sensible for it to also perform network policing, for example, to make sure a stream does not operate at a higher sending rate than it deserves.




REFERENCES:

[1] S. Rye et al., The Fourth Generation Mobile Communication Services.
[2] A. Paulraj, Space-Time Processing for Wireless Communications.



A Solution to Remote Detection of Illegal Electricity
Usage via Power Line Communications


K.Chiranjeevi
Dept of EEE,
S.R.K.R Engineering College;
Bhimavaram 534204;
AP INDIA
City2chiru@yahoo.com
Contact no:9948831560

M. Venkateswara Reddy
Dept of EEE,
S.R.K.R Engineering College;
Bhimavaram 534204;
AP INDIA
Contact no:9948831560
Abstract:
Power line communication (PLC) presents an interesting and economical solution for Automatic Meter Reading (AMR). If an AMR system via PLC is set up in a power delivery system, a detection system for illegal electricity usage can easily be added to the existing PLC network. In the detection system, a second digital energy-meter chip is used and its energy value is stored. The recorded energy is compared with the value at the main kilowatt-hour meter; if the two recorded energy values differ, an error signal is generated and transmitted over the PLC network.

A detector and control system is proposed. The architecture of the system and its critical components are given, and measurement results are presented.

Index Terms: Automatic meter reading (AMR),
detector, illegal electricity usage, power line communication,
power line communications (PLC) modem.

1. Introduction
India, the largest democracy with an estimated population of about 1.04 billion, is on a road to rapid economic growth. Energy, particularly electricity, is a key input for accelerating economic growth.
The theft of electricity is a criminal offence, and power utilities are losing billions of rupees on this account. If an Automatic Meter Reading system via power line communication is set in a power delivery system, a detection system for illegal electricity usage becomes possible.
Power line communications (PLC) opens up many new service possibilities for transferring data over power lines without the use of extra cables. Automatic Meter Reading (AMR) is a very important application among these possibilities, because every user is connected to the others via modems over the power lines. AMR is a technique to facilitate remote readings of energy consumption.
The following sections will describe the
proposed detection and control system for illegal
electricity usage using the power lines.


2. Detection of illegal electricity usage

In this section the discussion is on how a
subscriber can illegally use the electricity and the basic
building blocks for the detection using power line
communication.

2.1 Methods of illegal electricity usage
A subscriber can illegally use electricity in the following ways.
Figure 1: Electromechanical movement to digital signal conversion.



1) Using mechanical objects:
A subscriber can use mechanical objects to prevent the revolution of the meter disk, so that the disk speed is reduced and the recorded energy is also reduced.
2) Using a fixed magnet:
A subscriber can use a fixed magnet to change the electromagnetic field of the current coils. As is well known, the recorded energy is proportional to this electromagnetic field.
Figure 2: AMR communication setup [5].
3) Using an external phase before the meter terminals:
This method gives subscribers free energy without any record.
4) Switching the energy cables at the meter connector box:
In this way, the current does not pass through the current coil of the meter, so the meter does not record the energy consumption.
Although all of the methods explained above may be valid for electromechanical meters, only the last two are valid for digital meters. Therefore, this problem should be solved by electronics and control techniques [1].

2.2Building blocks for detection

2.2.1. Automatic Meter Reading (AMR):

The AMR system starts at the meter. Some means of translating readings from rotating meter dials, or cyclometer-style meter dials, into digital form is necessary in order to send digital metering data from the customer site to a central point. In most cases, the meter used in an AMR system is the same ordinary meter used for manual reading; the difference from a conventional energy meter is the addition of a device that either generates pulses relating to the amount of consumption monitored, or generates an electronic, digital code that translates to the actual reading on the meter dials. One such technique, using an optical sensor, is shown in Figure 1.
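As a hedged illustration of the pulse-to-energy translation (the meter constant below is only an example; a real meter states its own revolutions-per-kWh figure):

```python
# Illustrative conversion of optical-sensor pulse counts to energy.
# One pulse per disk revolution is assumed.

def pulses_to_kwh(pulse_count, revs_per_kwh=600):
    """Convert a pulse count into kilowatt-hours for a given meter constant."""
    return pulse_count / revs_per_kwh

print(pulses_to_kwh(1500))  # 2.5 kWh for a 600 rev/kWh meter
```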

The three main components of an AMR system are:

1. Meter interface module: with power supply, meter sensors, controlling electronics and a communication interface that allows data to be transmitted from this remote device to a central location.

2. Communications system: used for the transmission, or telemetry, of data and control signals between the meter interface units and the central office.

3. Central office systems equipment: including modems, receivers, data concentrators, controllers, host upload links, and the host computer [4].

2.2.2 Power Line Communication (PLC):
Power line carrier communications take place over the same lines that deliver electricity. The technique involves injecting a high-frequency AC carrier onto the power line and modulating this carrier with data originating from the remote meter or the central station. In this power network, every user is connected to every other user via modems over the power lines.

Electrical power systems vary in configuration from country to country depending on the nature of the respective power sources and loads. The practice of using medium-voltage (11 to 33 kV) and low-voltage (100 to 400 V) power distribution lines as high-speed PLC communication media, with optical networks as backbone networks, is commonplace.

Under normal service conditions, distribution systems can be broadly divided into open-loop systems, each with a single opening, and tree systems with radially arranged lines. In the case of tree systems, connection points to adjacent systems are provided so that paths and loads may be switched when necessary for operation. Additionally, in terms of distribution line types, there are underground cables and overhead power distribution lines. Where transformers are concerned, they can be divided into pole-mounted transformers, pad-mounted transformers and indoor transformers.

High-speed PLC applications of the future include Automatic Meter Reading (AMR), power system fault detection, power theft detection, leakage current detection, and the measurement/control/energy-management of electrical power equipment for electrical power companies, as well as home security, the remote monitoring/control of electrical household appliances, online games, home networks, and billing [3].



3. Detection and Control System

The proposed control system [1] for the detection of illegal electricity usage is shown in Fig. 3. PLC signaling is only valid over the low-voltage AC power lines, so the system must be applied to every low-voltage distribution network. The system given in Fig. 3 belongs to only one distribution transformer network and should be repeated for every distribution network. Although the proposed system can be used on its own, it is better to use it together with an automatic meter reading system. If the AMR system is to be used in a network, the host PLC unit and a PLC modem for every subscriber should be contained in this system. In Fig. 3, the host PLC unit and the other PLC modems, named PLC1A through PLCNA, are used for AMR. These units communicate with each other and send the data recorded by the kilowatt-hour meters to the host PLC unit. In order to detect illegal usage of electrical energy, a PLC modem and an energy meter chip for every subscriber are added to the existing AMR system. As given in Fig. 3, PLC1B through PLCNB and the energy meter chips belong to the detector.
The detector PLCs and energy meters must be placed at the connection point between the distribution main lines and the subscriber's line. Since this connection point is usually overhead or underground, it is not easily accessible, which makes it easy to keep under control. The main procedure of the proposed system can be summarized as follows.
Figure 3: Schematic illustration of detection system of illegal electricity usage. [1]
PLC signaling must comply with the CENELEC standards. In Europe, CENELEC has issued the standard EN 50065-1, in which the frequency bands, signaling levels, and procedures are specified. The band from 3 to 95 kHz is reserved for use by electricity suppliers, and 95 to 148.5 kHz is available for consumer use.
The data recorded in the kilowatt-hour meter of every subscriber are sent to the host PLC modem via the PLC modems placed at the subscribers' locations. On the other hand, the energy meter chips located at the connection points read the energy in kilowatt-hours and also send their data to the host PLC unit. The proposed detector system therefore has two recorded energy values in the host PLC unit: one that comes from the AMR PLC and one that comes from the PLC modem at the connection point. These two recorded energy values are compared in the host PLC; if there is any difference between the two readings, an error signal is generated. This means that there is illegal usage in the network. After that, the subscriber address and the error signal are combined and sent to the central control unit. If requested, a contactor may be added to the system at the subscriber location to turn off the energy automatically in the case of illegal usage.
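The comparison step in the host PLC unit can be sketched as follows; the tolerance value and the message format are assumptions for illustration, not part of the proposed system's specification:

```python
# Hedged sketch of the host PLC comparison: the AMR reading and the
# connection-point reading for the same subscriber are compared, and an
# error record (subscriber address + error flag) is produced when they
# differ by more than a small tolerance.

def check_subscriber(address, amr_kwh, detector_kwh, tolerance_kwh=0.5):
    """Return an error message dict if illegal usage is suspected,
    otherwise None. The tolerance absorbs normal metering error."""
    if abs(detector_kwh - amr_kwh) > tolerance_kwh:
        return {"subscriber": address,
                "error": "ILLEGAL_USAGE_SUSPECTED",
                "difference_kwh": detector_kwh - amr_kwh}
    return None

print(check_subscriber("SUB-0042", amr_kwh=120.0, detector_kwh=127.8))
```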

3.1 Simulation

The system model and simulation of the detection system for illegal electricity usage is shown in Fig. 4. It contains a host PLC modem, an energy meter chip and its PLC modem, an electromechanical kilowatt-hour meter and its PLC modem, and an optical reflector sensor; the system is loaded on the same phase of the power grid. The energy value at the electromechanical kilowatt-hour meter is converted to digital data using the optical reflector sensor: the disk revolutions of the kilowatt-hour meter are counted and the resulting data are sent to the PLC modem as the energy value of the kilowatt-hour meter. In the system model, an illegal load may be connected to the power line before the kilowatt-hour meter via switch S. While only the legal load is in the system, the two meters are calibrated against each other to compensate for any reading errors. The host PLC unit reads the two recorded values coming from the metering PLC units.
If switch S is closed, the illegal load is connected to the system, and therefore the two recorded energy values differ from each other. An error signal is generated in the host PLC unit when it receives two different records from the same subscriber; this constitutes the detection of illegal usage for the users of interest. In these tests, the carrier frequency is selected as 132 kHz, which is permitted in the CENELEC frequency band. In real applications, the AMR system may be designed in any of the CENELEC bands. The data rate between the host and the other PLC modems is 2400 b/s.
Data signaling between the PLC modems follows a protocol that includes a header, an address, the energy value data, error-correction bits, and other serial communication bits such as parity and stop bits. The protocol may be changed according to the properties of the required system and the national power grid architecture.
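Since the exact frame layout is left open, the following is only an assumed example of such a protocol frame (header, address, energy value, and a simple XOR checksum standing in for the error-correction bits; parity and stop bits are handled by the underlying serial link):

```python
# Assumed frame layout, for illustration only: start header, one-byte
# subscriber address, four-byte energy value (in 0.01-kWh units) and a
# one-byte XOR checksum.

import struct

HEADER = b"\x7E"

def build_frame(address, energy_kwh):
    payload = struct.pack(">BI", address, int(round(energy_kwh * 100)))
    checksum = 0
    for byte in payload:
        checksum ^= byte
    return HEADER + payload + bytes([checksum])

def parse_frame(frame):
    if frame[:1] != HEADER:
        raise ValueError("bad header")
    payload, checksum = frame[1:-1], frame[-1]
    calc = 0
    for byte in payload:
        calc ^= byte
    if calc != checksum:
        raise ValueError("checksum mismatch")
    address, centi_kwh = struct.unpack(">BI", payload)
    return address, centi_kwh / 100.0

frame = build_frame(address=42, energy_kwh=127.85)
print(parse_frame(frame))  # (42, 127.85)
```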
Fig. 5 shows the detection system for an electromechanical kilowatt-hour meter. In a digital energy meter system, the recorded energy may be received in digital form directly through the meter's port; therefore, there is no need for an optical reflector system with digital meters. The results of the tests show that this system may solve the problem economically, because the budget of the proposed system is approximately U.S. $20 to $25 per subscriber. It is a very economical and reliable solution when compared with the economic loss caused by illegal usage [1].

4. Overview of the proposed Detector System

The proposed detector system comprises the equipment and procedures for controlling remote stations from a master control station. It includes PLC modems, energy meters, control logic, and the system software. The PLC modems are the host and target modems for two-way communication between the host station and the remotely controlled targets. The energy meters include metering chips and some circuit elements; the control and logic units compare the readings and generate the error signal in the case of illegal usage. The system software has two parts: the assembler program for the microcontroller and the operating software for the management of the overall system. The operating software may be downloaded from a PC and should be installed at the main center of the system.
Figure 4: Illegal detector system for one subscriber. [1]
Figure 5: System simulation and modeling of the detection
system of illegal electricity usage for electromechanical
kilowatt-hour meters. [1]


An AMR system including the illegal-usage detector performs the following functions.

1) Every user has two PLC modems: one is for AMR and the other is used to send the data from the second energy meter chip to the host PLC modem.
2) An energy meter must be installed in the connection box between the home line and the main power lines.
3) The host PLC unit must be placed at the distribution transformer, and the addressing format of the PLC signaling must be designed carefully.
4) The host PLC modem and its controller must include two addresses per user: one for AMR and the other for the energy meter. These two addresses must be selected sequentially.
5) The operating software must hold the information of every subscriber in every sub power network: subscriber identification number, billing address, etc.
6) The system has two values of the energy consumption for every user, so if there is a difference between them, an error signal is generated for the illegal user.
7) The proposed equipment covers only one distribution power network, so the system should be repeated for all distribution power networks. All host units at each distribution transformer may be connected to a single main center station via phone lines, fiber-optic cable, or RF links.

The results and the variations of the measurements are shown in Figs. 6 and 7 [2]. The relations between frequency, length, and bit-error probability are given in these figures [1].
Research work has been taking place at CPRI, Bangalore, on remote metering and the detection of power theft, and will soon be helpful to the electricity boards in India.


Figure 6: Effects of distance of the source-receiver on the loss for various [2].
Figure 7: Bit-error probability with frequency and load impedance for 1000 m [2].

5. Conclusion

The proposed detector system for determining illegal electricity usage via power line communications was examined under laboratory conditions. The results proved that if the AMR and detector systems are used together, illegal usage of electricity can be detected. Once the proposed detection system is deployed on real power lines, the distribution losses in India can be reduced effectively.

6. Acknowledgements

The authors wish to express their thanks to Mr. P. Kantha Rao, Head of the Department, and Mr. Bh. R. K. Varma, Associate Professor of EEE, S.R.K.R.E.C., for their suggestions and encouragement in this endeavor.


7. References:
[1] I. H. Cavdar, A solution to remote detection of illegal electricity usage via power line communications, IEEE Transactions on Power Delivery, Vol. 19, No. 4, October 2004.
[2] I. H. Cavdar, Performance analysis of FSK power line communications systems over the time-varying channels: measurements and modeling, IEEE Trans. Power Delivery, Vol. 19, pp. 111-117, Jan. 2004.
[3] Yoshinori Mizugai and Masahiro Oya, World Trends in Power Line Communications, Mitsubishi Electric ADVANCE, March 2005.
[4] Tom D. Tamarkin, Automatic Meter Reading, Public Power magazine, Volume 50, Number 5, September-October 1992.
[5] Online: www.wikipedia.org/powerlinecommunication
[6] Proceedings, ETP06, Dept of EEE, S.R.K.R.E.C.





RECENT DEVELOPMENTS IN
AIRCRAFT WIRELESS NETWORKS




GODAVARI INSTITUTE OF ENGINEERING
AND TECHNOLOGY
N.H-5, CHAITANYA NAGAR,
RAJAHMUNDRY.


Department of ELECTRONICS AND COMMUNICATION ENGINEERING


SAI PRADEEP.R RAMANJANEYULU.M
E-mail: sai_pradeep888@yahoo.com E-mail: ramu2anji@yahoo.co.in























Recent Developments in Aircraft Wireless Networks




Abstract



This report discusses some key recent developments in the area of wireless
networking on aircraft. We discuss the products and services in commercial aviation that have
been driven by the demand for in-flight entertainment and connectivity. We also touch on the
research that has enabled this technology. Lastly, we mention the developments in military aircraft
wireless networks, and the standards behind them.

















1. Introduction
In recent years, wireless networking has become more commonplace than wired networking. One
application domain in which wireless networks are of far greater practical use is aviation, since
planes are scattered all over the world. In this paper we discuss recent and future developments in
aircraft wireless networks (AWNs). In commercial aviation, the major goal is to provide in-flight Internet connectivity to passengers. We discuss some research which has enabled this
technology as well as some current and future services, such as Connexion by Boeing and OnAir,
which satisfy this demand. The military uses wireless networks to improve tactical situational
awareness for war-fighting aircraft. It is important to note that the AWN paradigm is different in
military applications, where the intent is not to connect to the public Internet. The military domain
is also where the development of AWNs is the most mature in terms of standardization and
ubiquity. We discuss two of the most relevant military AWN standards, Link-16 and JTRS.
2. Research Developments
In this section, we discuss research work dealing with AWNs. Since the motivation for the work in
this area is support of in-flight passenger communication, our focus is on research concerning
network connectivity from within an aircraft to the outside world. A common architecture for these
services found in research and industry is shown in Figure 1. It consists of three basic segments: an
aircraft, a satellite link, and a ground station. On the aircraft, a wireless access point can be used to
provide connectivity to passengers and crew members. The satellite link provides a connection to
the ground station, which is connected to the Internet.


Figure 1 - Example commercial airline network topology



A common thread among the research projects examined here is that each aims to find a way to
enable AWNs to be more useful and efficient. In fact, some of the research and development work
eventually had useful application in industry. The major issues in AWN research that we examine
are interoperability, interference, mobility, and quality of service (QoS). Interoperability is
important because airlines are looking to provide different services to their passengers, which
requires the use of multiple different technologies. Two of the key services considered are Internet
and cellular connectivity. To provide these services, an aircraft must have access points for
receiving both kinds of wireless signals and be able to transmit traffic from both systems to the
ground via the satellite link. Interference comes into play because it is undesirable for these
transmissions to interfere with the navigational and communications systems needed for operation
of the aircraft. Case studies have shown that there have been times where passenger personal
electronic devices (PEDs) have caused aircraft systems to malfunction. However, some research
studies indicate that this should not be the case. The physical movement of the aircraft makes
mobility and QoS issues of concern. An aircraft and the ground stations with which it
communicates must be mobility-aware, as the aircraft is essentially a moving network. As it
moves, an aircraft must register with each new ground station it encounters in order to establish a
path for traffic to and from the aircraft. The ground station must also handle routing to the multiple
nodes which reside on each aircraft connected to it. The process of switching between satellites
and/or ground stations can cause a loss or degradation of service for passengers.


The architecture of the communication system should be such that the impact of handovers on
QoS is minimized. We examine research dealing with these issues in the following subsections.

2.1 Integration and Interoperability


One research project developed to demonstrate interoperability in AWNs was the Terrestrial
Hybrid Environment for Verification of Aeronautical Networks (THEVAN). As the name
suggests, the platform was not actually an aircraft, but a modified ambulance chassis loaded with
racks of wireless networking equipment. The intent was to demonstrate the integration of different
technologies which could be used to provide network services in an aircraft. These technologies
included a Ku-band satellite system, a medium data-rate satellite system (MDSS), a commercial
Very High Frequency (VHF) radio, and IEEE 802.11b. The Ku satcom phased array antenna was
used to provide a full-duplex 2 Mbps/256 kpbs (downlink/uplink) connection to a fixed ground
station via satellite. The MDSS was composed of 16 L-band Globalstar-compatible satellite phones
providing an aggregate data rate of 112 kbps. The VHF data radios were commercial modems
which provided a 19.2 kbps full-duplex link. Cisco 802.11b (11 Mbps) access points were used with external bi-directional amplifiers and omnidirectional antennas. The testbed was used to
evaluate IP-based (Mobile IPv4 and IPv6) connections on a mobile platform. They demonstrated
that TCP and UDP connections could be maintained as the platform moved through a wide area.
They also showed that the different networking technologies could be integrated to provide
Internet services, such as Hyper Text Transfer Protocol (HTTP) and File Transfer Protocol (FTP),
to users. THEVAN was also able to switch between the different forms of RF connectivity as they
became available. Unlike THEVAN, the WirelessCabin project was tested on airplanes. The goal of the project was to develop an architecture for in-flight wireless access for passengers. The technologies considered for integration in this project were GSM, UMTS, WLAN (IEEE 802.11), and Bluetooth. In their system, a 3G piconet and an 802.11 WLAN are set up for passengers to use
to connect to the outside world. In addition, passengers can set up Personal Area Networks
(PANs) for their personal devices using Bluetooth. This would all operate in one environment
without the wireless technologies interfering with each other. Their architecture consisted of three
segments: cabin, transport, and ground. Network traffic was partitioned into four QoS classes, first
(highest priority) through fourth, depending on real-time requirements. However, support of this
classification was an issue when integrating the different technologies. For example, UMTS
provides QoS support while 802.11 does not. The culmination of this work was a demonstration
flight aboard an Airbus A340 aircraft in September 2004. During the flight, GSM and VoIP voice
services and IP e-mail and web services were successfully demonstrated.





2.2 Interference

Another area of concern is the potential interference of wireless communication equipment with
aircraft navigation and communication systems. PEDs emit two kinds of radiation: intentional and
spurious. Intentional emissions are those with the purpose of transmitting data in the allocated
frequency bands of the PED. Only devices which communicate via wireless links have these.
Spurious emissions are those which are unintentional, and contribute to the RF noise level.
Although all PEDs have these, they are more significant in wireless PEDs. Intentional
transmissions are normally not a concern because their frequency bands are limited and do not
overlap the frequencies of airline systems, as shown in Table 1. However, if the power level of a
spurious emission is high enough at a receiving frequency of an aircraft nav/comm system, it could
interfere with aircraft operations.

Table 1 - Frequency ranges of wireless communication equipment and aircraft nav/comm systems



Omega navigation            10 - 14 kHz
ADF                         190 - 1750 kHz
HF                          2 - 30 MHz
Marker beacon               74.85, 75, 75.15 MHz
VOR/LOC                     108 - 118 MHz
VHF COM                     118 - 136 MHz
Glide slope                 328 - 335 MHz
GSM 400                     450 - 496 MHz
GSM 850                     824 - 894 MHz
GSM 900                     876 - 950 MHz
DME                         960 - 1220 MHz
TCAS/ATC                    1030, 1090 MHz
SATCOM                      1529, 1661 MHz
GPS                         1575 MHz
GSM 1800                    1710 - 1880 MHz
GSM 1900                    1850 - 1990 MHz
European UMTS               1880 - 2025, 2110 - 2200 MHz
ISM band (802.11b, Bluetooth)   2446.5 - 2483.5 MHz
Low-range altimeter         4.3 GHz
Microwave landing system    5.03, 5.09 GHz
802.11a                     5.15 - 5.35 GHz, 5.725 - 5.825 GHz
Weather radar               5.4 GHz, 9.3 GHz
Sky radio                   11.7 GHz
DBS TV                      12.2 - 12.7 GHz


Studies on the interference of PEDs with aircraft systems have produced conflicting
results. Some studies based on incident reporting seem to indicate that PEDs interfere
with avionics systems. One study indicated that of the 40 PED related reports
collected by the International Transport Association, laptop computers (40%) were
the most frequent cause of the trouble and that navigation systems (68%) were the
most affected. The study also pointed out that in three of those cases it was verified
that the problem ceased when the PED was turned off and reappeared when the PED
was turned back on. Another study which examined the Aviation Safety Reporting
System database reaffirmed that navigation systems (112 out of 130 system
anomalies) were most affected by PEDs. Cell phones and laptop computers (25 each
out of 104 incidents where the PED was identified) were the most common culprits.
However, a few recent studies have shown that PEDs should not generate powerful
enough spurious emissions to interfere with aircraft systems. One such study tested
the spurious emissions of various cellular technologies against the operating
frequencies of some common nav/comm systems. The author concluded that the
technologies tested (CDMA-Cellular, TDMA-11 Hz, TDMA-50 Hz Cellular and
PCS, GSM, and DCS 1800) in the lab environment should not interfere with VHF
omni-directional range (VOR), localizer (LOC), VHF communication, glide slope
(GS), and global positioning system (GPS) avionics systems.
2.3 Mobility
The Aeronautical Telecommunications Network (ATN), developed by the International
Civil Aviation Organization, was envisioned as a way to provide ground/ground and
air/ground data communications services in the aviation industry. ATN is based on the
seven-layer Open Systems Interconnection (OSI) model. It is a private network with its
own addresses, and a scheme for providing network mobility for aircraft. Their mobility
solution was based on a large address space and on backbone routers updating path information
in the network. When an aircraft attached to a new access point, the associated backbone
router would pass the routing information for that aircraft's network back through the
ATN backbone. The ATN also has specifications for managing different QoS levels.
Although the standard has been in development for over 15 years, ATN is used only in a
limited range of applications in the industry, such as Air Traffic Services (ATS) and
Airline Operation Communications(AOC).As the passenger communications business
began to accelerate, it was necessary to analyze which network model would be the most
useful in the future. The rest of the world was using IP and its services while the airline
industry was implementing ATN. In order to provide passengers with Internet
connectivity, IP has to be supported. The decision is then how to provide ATN services
over IP. There were four proposals for a new architecture:
Transitional scenario using an airborne gateway
o Mobility support using ATN
Transitional scenario using a ground-based gateway
o Mobility support using Mobile IPv4
Transitional scenario using an IP Subnetwork Dependent Convergence Function (SNDCF)
o Mobility support using ATN
Interoperability scenario using TCP/IPv6
o Mobility support using IPv6
The interoperability scenario using IPv6 is the preferred solution, as this would provide many of the mobility and QoS services native to ATN. One disadvantage of Mobile IPv4 compared to ATN is the increased delay due to the extensive use of tunneling, especially with the multiple route switches expected during a long flight. The route optimization schemes in IPv6 will help mitigate this issue. The current problem is that IPv6 is yet to be widely deployed. Despite this, ATN has failed to catch on, and Mobile IPv4 has become the popular solution for aircraft network mobility; for example, the research projects described above use schemes based on Mobile IPv4 as their mobility solutions.

2.4 Quality of Service
QoS is an important issue for satellite links which are used to support passenger
communication because efficient handover algorithms are needed as planes move from
one satellite's coverage area to another's. This is compounded by the fact that these links
have long propagation delays, are prone to errors, and are limited in bandwidth. These
challenges were the subject of a few research projects. The Network Architecture and
Technologies for Airborne Communications of Internet High Bandwidth Applications
(NATACHA) project was commissioned by the European Union to develop an architecture
for aircraft passenger communication and to evaluate the performance of IP-based
multimedia services in the system. Although the project officially lasted from May 2002
to July 2004, related work was still ongoing in 2005. The architecture they developed,
dubbed the Aeronautical Broadband Communication System (AirCom), consisted of the
same three segments that are present in other research work and industry: airborne,
satellite, and ground. They concluded that satellite links with Code Division Multiple
Access (CDMA) bandwidth sharing would be best suited in this application domain.
They also proposed a bandwidth on demand (BoD) end-to-end QoS scheme based on
IP's IntServ. For local link management, they divided the network traffic into three
classes:
Constant Bandwidth Allocation
o Strict real-time requirements
Dynamic Bandwidth Allocation
o Based on network load: stringent delay and packet loss requirements, relaxed jitter requirements
Best Effort Allocation
o No QoS parameters
Their simulation results led to the conclusion that a Multi Code CDMA (MC-CDMA)
resource allocation technique using a Packet-by-packet General Processor Sharing (P-
GPS) algorithm for QoS buffer prioritization worked best in managing the different QoS
levels. Other research has also used IP-based services for managing QoS. One such
study developed an inter-satellite handover framework which integrates Mobile IPv4 and
RSVP signaling while taking into account the added constraints of satellite links.
Handovers are divided into three phases: information collection, handover decision, and
handover execution. Upon handover execution, resource re-reservation need only take
place on the new satellite link between the airborne mobility router and the ground
mobility server using the previous RSVP reservations. Simulation results showed
significant performance improvement when RSVP was used during handover. So far we
have explored research issues dealing with AWNs. This research has been done in
support of the commercial airlines' desire to provide network connectivity to passengers
while in flight. In the next section, we examine some of the services which are available
in industry.
3. Commercial Developments
Until recently, passenger aircraft were one of the last places where the public could not
receive Internet connectivity and services that are offered on the ground. Recent
developments have started to change this. The initial service offerings were limited in
data rates and constrained to seat-back interfaces. This proved to be unprofitable as the
real passenger demand was for bandwidth-demanding multimedia services on their own
PEDs. Architecture to support this is being put in place, as the next generation of aircraft
communication satellites is being deployed. In this section we discuss this satellite
system owned by Inmarsat and the services that are, and are soon to be, offered using
them. In addition, we discuss the current market leader in passenger communication
services, Connexion by Boeing, whose services are not provided using Inmarsat satellites.
3.1 Satellite Services
Although there are other satellite service providers, including Iridium and Globalstar, the
biggest player in providing satellite communication for commercial aircraft is Inmarsat,
an international company which operates a constellation of 11 geostationary satellites.
Of these satellites, four are I-2 (second generation), five are I-3(Third generation), and
two are I-4 (fourth generation). Inmarsat offers services in multiple sectors, including
telephony, maritime, and aeronautic. Their satellites provide three kinds of coverage:
global beam, wide spot beam, and narrow spot beam. The global beam is capable of
covering about one-third of the earth's surface, but provides low data rate
communication. Of Inmarsat's aero services which are currently widely available, the
most recent is Swift64, which offers up to 64 kbps per channel, 6.67 times faster than any of their other services. This is done using the wide spot beam on the I-3 and I-4 satellites.
Swift64 terminals can offer up to 4 channels, increasing the data rate to 256 kbps per
terminal. Channel bonding and data acceleration techniques can boost the effective data
rate close to 0.5 Mbps. The narrow spot beam capability of I-4 satellites will be the
backbone of Inmarsat's planned Broadband Global Area Network (BGAN) services.
BGAN, also known as Swift Broadband, will offer up to 492 kbps, which can be used to
support Internet services and GSM. When the system is complete there will be three I-4
satellites, one positioned over each major ocean (Pacific, Atlantic, and Indian). Swift64 is
currently being used to provide Internet connectivity to mostly business and corporate
travelers. One example of a company which does this is EMS Satcom, which equips
corporate jets with antennas and terminals which are compatible with Swift64. Their
eNfusion product line offers travelers up to 256 kbps using 4 Swift64 channels. Other
companies like Satcom Direct offer similar services. ARINC's SKYlink is another
business jet solution which offers data rates of up to 3.5 Mbps upstream and 128 kbps
downstream, though it does not use Inmarsat technology.
3.2 Airline Services
On-Air, a joint venture between Airbus and SITA INC, aims to provide Internet
connectivity to all airline passengers, not just VIPs. This service is based on Inmarsat's
Swift Broadband, and is advertising shared data rates of up to 864 kbps. The service will
allow Internet and GSM/GPRS (2.5G), and will be offered on both Boeing and Airbus
aircraft. OnAir currently offers in-seat telephony and text messaging, but looks to roll
out their full service in early 2007. Their coverage area will be complete when Inmarsat
launches the third and final I-4 satellite, to be positioned over the Pacific Ocean, later in 2007. The first company to market with an in-flight passenger connectivity service does
not use the Inmarsat satellite constellation. Boeing rolled out its Connexion by Boeing
service in 2004, using Ku-band satellite equipment leased from its satellite businesses
and other business partners. The service offers a high-speed two-way Internet connection
and global TV. This is provided on multiple partner carriers on both Boeing and Airbus
aircraft. Each transponder can provide data rates of 5 Mbps downlink and up to 1 Mbps
uplink. An aircraft may carry multiple transponders, and so may receive more data, up to
20 Mbps, downlink. However, the uplink traffic per aircraft is capped at 1 Mbps. In
addition to in-flight passenger entertainment, the service (like other connectivity
services) can be used for aircrew and airline operations. Tests are ongoing to offer in-
flight telephony, over CDMA-2000 and GSM, using the service. The service is also
offered to executive jets. Connexion by Boeing is also noteworthy for their network
mobility approach. In order to avoid the increased delay due to tunneling associated with
Mobile IP, they use a scheme based on Border Gateway Protocol (BGP) routing updates.
This works because BGP is universally supported and is the backbone routing protocol
of the Internet. In their system, ground stations only advertise network IP addresses for
the aircraft that they are currently serving. They start with a /24 address block
advertisement, which is normally sufficient for propagation. In the case that it is not, the
size of the advertised network is increased. Testing has shown that the time to set up a two-way connection with a satellite is comparable to the time it takes BGP updates to converge globally. This means that the new route is ready for use at about the same time the satellite link is.
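The text gives only a high-level description of this scheme; purely as a toy illustration (not Boeing's actual implementation), the following Python fragment sketches the idea of starting from a /24 advertisement and widening it if it fails to propagate. The propagates callback is a hypothetical stand-in for whatever reachability check an operator might use.

```python
# Toy illustration of /24-first prefix advertisement with widening fallback.

import ipaddress

def choose_advertisement(aircraft_net, propagates, min_prefixlen=20):
    """aircraft_net: an IPv4Network assigned to the aircraft.
    propagates: hypothetical callback reporting whether a given prefix is
    reachable globally (e.g., probed via route servers)."""
    prefix = (aircraft_net.supernet(new_prefix=24)
              if aircraft_net.prefixlen > 24 else aircraft_net)
    while not propagates(prefix) and prefix.prefixlen > min_prefixlen:
        prefix = prefix.supernet(prefixlen_diff=1)   # widen the advertisement
    return prefix

net = ipaddress.ip_network("203.0.113.128/26")
print(choose_advertisement(net, propagates=lambda p: p.prefixlen <= 22))
```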
4. Military Developments
As mentioned earlier, the structure of military AWNs is very different from that
found in commercial aviation. The purpose of these networks is not to provide
Internet services, but to increase tactical situational awareness for pilots and weapon
systems operators. A generalized network topology is shown in Figure 2. An
airborne command and control (C2) center, such as the Airborne Warning and
Control System (AWACS), is usually the main hub of activity. In addition to the
battle management tasks, AWACSs typically report status back to central command
on the ground. The network is used to exchange communication, navigation, and
identification (CNI) as well as tactical information over voice and data channels.
Interoperability between the different aircraft platforms is enabled through rigorous
standardization. In this section we discuss the latest standards in military AWNs.
One should note that although the information presented here was gathered from
publicly available sources, the standards themselves are not publicly available due
to export restrictions.

Figure 2 - Example tactical military network topology
4.1 Link-16
The latest in tactical data links is Tactical Digital Information Link J (TADIL-J), which was developed for the U.S. armed forces. TADIL-J is defined by U.S. MIL-STD-6016, and has been adopted by NATO and other nations around the world as Link-16. Link-16 improves upon the features of its predecessors, Link-11 (TADIL-A) and Link-4A (TADIL-C). There are two classes of terminals which implement Link-16: Joint Tactical Information Distribution System (JTIDS) and Multifunctional Information Distribution System (MIDS) terminals. Because MIDS terminals are required to be smaller and lighter due to platform necessities, they implement a smaller subset of the standard than JTIDS terminals do.
Link-16 provides for a high-speed, jam-resistant, secure network. It improves on its
predecessors in these three areas as well as information granularity and reduced terminal
size. The link is shared using a TDMA protocol which calls for a 12.8 minute epoch with
64 frames of 1536 slots of length 7.8125 ms. This is illustrated in Figure 3. The standard
provides for the allocation of slots in a frame as needed for throughput requirements.
The network is partitioned into nets, of which there can be 132 in an area. Each net can
have up to 32 participants. The possible message formats include fixed, free text, and
variable. The link operates in the L band between 969 MHz and 1.206 GHz.
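As a quick consistency check of these timing figures (an illustration, not text from the standard), 1536 slots of 7.8125 ms give a 12-second frame, and 64 such frames give the 12.8-minute epoch:

```python
# Consistency check of the Link-16 timing figures quoted above.

slot_len_s = 7.8125e-3        # 7.8125 ms per slot
slots_per_frame = 1536
frames_per_epoch = 64

frame_len_s = slots_per_frame * slot_len_s        # 12.0 s per frame
epoch_len_min = frames_per_epoch * frame_len_s / 60.0

print(frame_len_s)     # 12.0
print(epoch_len_min)   # 12.8 minutes, matching the epoch length above
```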


Network segregation is done in two ways. First, a JTIDS Unit (JU) must have the proper configuration information to enter a net. Within a net, the time slots can be allocated to Network Participation Groups (NPGs) based on function. The groups include: Surveillance, Electronic Warfare, Mission Management, Weapons Coordination, Air Control, Fighter-to-Fighter, Secure Voice, and Precise Participant Location and Identification (PPLI). JUs need only participate in the NPGs which support their mission. Data transmission is performed using a frequency-hopping scheme which is highly resistant to jamming. Frequency hopping is performed every 13 microseconds to one of 51 channels. In addition to frequency hopping, security is provided using crypto variables. One crypto variable is used to encrypt the data to be transmitted, and another is used to control the transmitted waveform. In addition, pseudo-random noise and jitter can be added to the signal to increase the difficulty of detection and jamming. Adding a jitter portion to the beginning of every slot halves the data rate. In addition, every other message pulse can be repeated so that Reed-Solomon forward error correction can be used to salvage the message if it is being jammed. This also halves the data rate of the link. If no dynamically configurable security options are chosen, data can be transmitted at up to 107.56 kbps.
4.2 JTRS
Another military wireless networking standard worth mentioning is the Joint Tactical Radio System (JTRS). JTRS is the next-generation radio for US military field operations. It has a large operating bandwidth, from 2 MHz to 2+ GHz. Another unique aspect of JTRS is that it is a software-defined radio which provides both voice and data transfer. These two characteristics enable it to be backwards compatible with many military and civilian radio systems, including Link-4A, Link-11, and MIL-STD-188-181 UHF SATCOM. It includes encryption capabilities for security and Wideband Networking Software for mobile ad hoc networks. Depending on the configuration of the radio, burst rates of up to 1.2 Mbps are achievable. The program is still in development, and will be rolled out in clusters. Two of the five clusters (1 and 4) include requirements for aircraft. Like JTIDS/Link-16, JTRS will provide a means for secure voice and data transmission, which will be used to increase situational awareness while in combat. This model is in contrast to that of commercial aviation, where the goal is to provide in-flight connectivity to the public Internet.
5. Summary
AWNs continue to be an area of great interest and development, particularly in
commercial aviation. This is driven by passenger demand for in-flight entertainment
and communication services. Connexion by Boeing was the first in the airline market
to offer services with which passengers could use their own PEDs. However, they will
have stiff competition from companies such as OnAir, which will offer services based
on Inmarsat's Swift Broadband. As these products and services continue to roll out, the
impact of research in the field, which played a big part in making this all possible, is
likely to be diminished. In the military realm, implementation is driven by
standardization. Link-16 has been deployed for many years and is common on most
war-fighting aircraft platforms. The highly configurable JTRS is being developed to
replace many legacy military radio platforms, providing both voice and data services.
6. References
[Jahn03a] Jahn, A., Holzbock, M., Muller, J., Kebel, R., de Sanctis, M., Rogoyski, A., Trachtman, E., Franzrahe, O., Werner, M., Hu, F., "Evolution of aeronautical communications for personal and multimedia services," IEEE Communications Magazine, July 2003.
http://www.coe.montana.edu/ee/rwolff/Games%20Project/literature%20search/Aeornautical%20communications.p
Article on the evolution of aeronautical communications technologies used for passenger multimedia services.

[Jahn03b] Jahn, A., Nicbla, C.P., "System architecture for 3G wireless networks in aircraft," IEEE 58th Vehicular Technology Conference (VTC 2003).
Describes an architecture for the convergence of technologies to support 3G communication in aircraft.

[Jahn04] "Results from the WirelessCabin Demonstration Flight,"
http://www.eurasip.org/content/Eusipco/IST05/papers/295.pdf
Technical report which discusses the results from the WirelessCabin demonstration
flight.

[DataComm] "Fire Controlman Volume 06 - Digital Communications, Chapter 5 -
New Technology in Data
Communications," http://www.tpub.com/content/fc/14103/css/14103_73.htm
Handbook on data communication with information on military aircraft networks
such as Link-4A, Link-11, and
Link-16.

[Connexion1] "Connexion by Boeing," http://www.connexionbyboeing.com/
Connexion by Boeing home page.

[Connexion2] "Connexion by Boeing - Wikipedia,"
http://en.wikipedia.org/wiki/Connexion_by_Boeing
Wikipedia article on Connexion by Boeing.

[Connexion3] "Broadband Connectivity to Aircraft,"
http://www.itu.int/ITU-R/study-groups/seminars/rsg8-tech-innov/docs/3.4-
broadband-connectivity-to-aircraft.ppt
Presentation with details on the broadband connectivity scheme used in Connexion
by Boeing.

[TADIL] "Tactical Digital Information Links (TADIL),"
http://www.fas.org/irp/program/disseminate/tadil.htm
Webpage with information on TADILs, including TADIL-A/B, TADIL-C, and
TADIL-J.

[JTIDSLink16] "JTIDS Link 16,"
https://wrc.navair-rdte.navy.mil/warfighter_enc/weapons/SensElec/Sensors/link16.htm
Webpage with information on JTIDS/Link-16.



7. List of Acronyms



3G - Third generation (in mobile communication)
ADF - Automatic Direction Finder
AirCom - Aeronautical Broadband Communication System
ATC - Air Traffic Control
ATN - Aeronautical Telecommunications Network
ATS - Air Traffic Services
AWN - Aircraft Wireless Network
BGAN - Broadband Global Area Network
BGP - Border Gateway Protocol
BoD - Bandwidth on Demand
C2 - Command and Control
DBS - Direct Broadcast Satellite
DCS - Digital Cellular System
DME - Distance Measure Equipment
FTP - File Transfer Protocol
GPS - Global Positioning System
GS - Glide slope
GSM - Global System for Mobile Communication
HF - High Frequency
HTTP - Hyper Text Transfer Protocol
ISM - Industrial, Scientific, and Medical
IntServ - Integrated Services
IP - Internet Protocol
JTIDS - Joint Tactical Information Distribution System
JTRS - Joint Tactical Radio System
JU - JTIDS Unit
LOC - Localizer
MC-CDMA - Multi Code CDMA
MDSS - Medium Data-rate Satellite System
MIDS - Multifunctional Information Distribution System
NATO - North Atlantic Treaty Organization
NPG - Network Participation Group
OSI - Open Systems Interconnection
P-GPS - Packet-by-packet General Processor Sharing
PAN - Personal Area Network
PCS - Personal Communication Service
PED - Personal electronic device
PPLI - Precise Participant Location and Identification
QoS - Quality of Service
RF - Radio frequency
RSVP - Resource ReSerVation Protocol
SNDCF - Subnetwork Dependent Convergence Function
TADIL - Tactical Digital Information Link
TCAS - Traffic alert and Collision Avoidance System
TCP - Transmission Control Protocol
TDMA - Time Division Multiple Access
THEVAN - Terrestrial Hybrid Environment for Verification of Aeronautical Networks
UDP - User Datagram Protocol
UHF - Ultra High Frequency
UMTS - Universal Mobile Telecommunications System
VHF - Very High Frequency
VoIP - Voice over IP
VOR - VHF omni-directional range
WLAN - Wireless Local Area Network
RADIUS
( REMOTE AUTHENTICATION DIAL-IN USER SERVICE)


1. B. Anil Kumar, 2. B. Dileep Kumar Reddy
III CSE

SIDDHARTH INSTITUTE OF ENGINEERING AND TECHNOLOGY
PUTTUR
anil.bheema@gmail.com    Cell: 08772252751




ABSTRACT

RADIUS i s bui l t on the
archi tecture of AAA frame work. The
AAA model i s desi gned to work-i n
heterogeneous envi ronments. RADI US
devel opment i s mostl y done on UDP
rather than on TCP.The RADI US
protocol uses UDP packets to pass
transmi ssi ons between the cl i ent and
server. To strengthen securi ty and
i ncrease transacti onal i ntegri ty, the
RADI US protocol uses the concept of
shared secrets. RADI US supports a
vari ety of di fferent protocol
mechani sms to transmi t sensi ti ve user-
speci fi c data to and from the
authenti cati on server. The two most
common are the Password
Authenti cati on Protocol (PAP) and the
Chal l enge/Handshake Authenti cati on
Protocol (CHAP).RADI US i s bei ng
wi del y used i n many appl i cati ons
where use of protocol s has been an
easy task and al so i n securi ty aspects
l i ke web appl i cati ons and Di rectory
Servi ce













KEYWORDS

RADIUS, AAA, client, server, authentication, authorization, accounting, hop-to-hop, end-to-end, UHO, ISP, push sequence, pull sequence, agent sequence, UDP, TCP, shared secrets, PAP, CHAP.





AAA

The framework around which RADIUS is built is known as the AAA process, consisting of authentication, authorization, and accounting. AAA is the foundation of the next-generation remote access protocol. RADIUS was created before the AAA model was developed, but it was the first real AAA-based protocol, exhibiting the AAA process. Before AAA was introduced, individual equipment had to be used to authenticate users. The AAA Working Group was formed by the IETF to create a functional architecture.

Clients and servers play a vital role in AAA and RADIUS. A client is a machine that makes requests of, and uses resources on, another machine; the client can be the end user. An AAA client can be the machine that sends AAA-style packets to and from an AAA server. A server is commonly known as the machine from which clients request resources. In AAA, this can be the network server, a NAS (network access server) machine or some other concentrator, or an AAA server that authenticates, authorizes, and performs accounting functions.

Authentication
Is the process of verifying a person's (or machine's) declared identity. The key aspect of authentication is that it allows two unique objects to form a trust relationship; both are assumed to be valid users.

Authorization
Involves using a set of rules or other templates to decide what an authenticated user can do on a system. The system administrator defines the rules. AAA servers have logic that will analyze a request and grant whatever access it can, whether or not the entire request is valid.

Accounting
Measures and documents the resources a user takes advantage of during access. Accounting is carried out by logging session statistics and usage information, and it is used for authorization control. Accounting data has several uses; an administrator can analyze successful requests to determine capacity and predict future system load.





ARCHITECTURE

The AAA model is designed to work in environments with varied user requirements and equally varied network designs. Client/server environments allow for a good load-balancing design, in which high availability and response time are critical. Servers can be distributed and decentralized across the network, unlike in end-to-end designs, where high time delays are introduced because the systems act as both client and server. An AAA server can be configured to authorize a request or pass it along to another AAA server, which will then make the appropriate provisions or pass it along again. When a server proxies another server, the originator displays the characteristics of a client. A trust relationship has to be created for each client/server hop until the request reaches the equipment that provisions the needed resources. Clients requesting services and resources from an AAA server can communicate with each other using either a hop-to-hop or an end-to-end transaction.






The distinction is where the trust relationship lies in the transaction chain.
Hop-to-hop transaction: In a hop-to-hop transaction, a client makes an initial request to an AAA device. At this point, there is a trust relationship between the client and the frontline AAA server. That machine determines that the request needs to be forwarded to another server in a different location, so it acts as a proxy and contacts another AAA server. Now the trust relationship is between the two AAA servers, with the frontline machine acting as the client and the second AAA machine acting as the server. It is important to note that the trust relationship is not inherently transitive, meaning that the initial client and the second AAA machine do not have a trust relationship. (A small sketch of such a proxy chain follows the end-to-end description below.)






End-to-end transaction: In this model, the trust relationship is between the initial, requesting client and the AAA server that finally authorizes the request. In an end-to-end model, the proxy chain is still very much functional, as the model does not necessarily mean the transaction itself is end-to-end: it is the trust relationship that is. Because it is poor design to pass sensitive information in proxy requests, some other means of authenticating a request and validating data integrity is needed when the initial request jumps through the hops in the proxy chain.
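A minimal sketch, with assumed class and method names rather than any real RADIUS library, of how such a hop-by-hop proxy chain can be modeled: each server trusts only its immediate neighbour and either authorizes a request for its own realm or forwards it one hop further.

```python
# Minimal sketch (assumed names) of a hop-by-hop AAA proxy chain.

class AAAServer:
    def __init__(self, name, local_realm, next_hop=None):
        self.name = name
        self.local_realm = local_realm   # realm this server can authorize
        self.next_hop = next_hop         # the only peer it trusts directly

    def handle(self, request):
        realm = request["user"].split("@")[-1]
        if realm == self.local_realm:
            return {"server": self.name, "result": "ACCESS-ACCEPT"}
        if self.next_hop is not None:
            # acting as a client toward the next AAA server in the chain
            return self.next_hop.handle(request)
        return {"server": self.name, "result": "ACCESS-REJECT"}

home = AAAServer("home-aaa", "example.org")
frontline = AAAServer("frontline-aaa", "visited.net", next_hop=home)
print(frontline.handle({"user": "alice@example.org"}))
```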



FRAMEWORK

Frameworks designate how systems interact with one another. The authorization framework introduces the concept of a User Home Organization (UHO), which is an entity that has a direct contractual relationship with an end user. Also involved is the Service Provider (SP), which maintains and provisions the tangible network resources. The UHO and the SP need not be the same organization.


AUTHORIZATION SEQUENCES

There are different types of authorization sequences.

The agent sequence: In this sequence, the AAA server acts as a middleman of sorts between the service equipment and the end user. The end user initially contacts the AAA server, which authorizes the user's request and sends a message to the service equipment notifying it to set that service up. The service equipment does so and notifies the AAA machine, and the notification is passed on to the end user, who then begins using the network. This sequence is typically used in broadband applications.





The pull sequence: Dial-in users frequently encounter this sequence. The end user in this situation connects directly to the service equipment, which then checks with an AAA server to determine whether to grant the request. The AAA server notifies the service equipment of its decision, and the service equipment then either connects the user to the network or disconnects the user.









The push sequence: The push sequence alters the trust relationship between all of the machines in a transaction. The user connects to the AAA server first, and when the request to the server is authorized, the AAA server distributes some sort of authentication "receipt" (a digital certificate or signed token, perhaps) back to the end user. The end user then pushes this token along with his request to the service equipment, and the equipment treats the ticket from the AAA server as a green light to provision the service. The main distinction is that the user acts as the agent between the AAA server and the service equipment.


Distributed Services: Now consider a situation in which a service provider contracts with numerous wholesalers to provide services to its user base. For example, a provider could guarantee a certain amount of bandwidth across the country for a particular company. The frontline ISP with which the company, as a client, contracts needs to set a QoS policy on equipment across the country to maintain its contractual duty to the customer. The customer, in this situation, is using a distributed service.

Specifications for Resource and Session Management

The components of the authorization framework are resource management and session management.

Resource management is basically the ability to monitor resources that have been previously allocated. A program or utility called the "resource manager" would be able to receive and display information on a resource in real time. Such a program could, for example, monitor a pool of dial-up ports on a terminal server and report information to the monitor program. With fewer AAA servers, there isn't much traffic involved in real-time monitoring, and the equipment is more likely to be confined to one entity's realm. Once the AAA server group expands and, particularly, begins to span multiple domains, it becomes increasingly problematic to maintain the identity of specific servers. Uniqueness of sessions is critical, and in addition, some method of combining session and resource information with a unique identifier is needed.

Session management is the capability of a protocol or piece of equipment to notify an AAA server of a change in conditions and, more ideally, to modify an existing session. That session could be changed, put on hold, or terminated based on changing conditions recorded by the resource manager. A session manager would use the information from the resource manager. The combination of resource and session management allows complicated policies to be implemented and provisioned with ease, even across a distributed policy platform. It has been difficult up to now to synchronize a session database with the real state of a session.


HISTORY OF RADIUS

RADIUS, like most innovative products, was built from a need. In this case, the need was for a method of authenticating, authorizing, and accounting for users needing access to heterogeneous computing resources. Early versions of RADIUS were written by Merit Networks and Livingston. Further software was constructed to operate between the service equipment Livingston manufactured and the RADIUS server at Merit, which ran on UNIX. The original developer of RADIUS is Steve Willens. Both companies offer a RADIUS server to the public at no charge.

Properties

The RFC specifications for the RADIUS protocol dictate that RADIUS:
Is a UDP-based connectionless protocol that doesn't use direct connections
Uses a hop-by-hop security model
Is stateless (more to come on that later)
Supports PAP and CHAP authentication via PPP
Uses MD5 for its password-hiding algorithm
Provides over 50 attribute/value pairs, with the ability to create vendor-specific pairs
Supports the authentication-authorization-accounting model

UDP versus TCP

UDP was selected largely because RADIUS has a few inherent properties that are characteristic of UDP: RADIUS requires that failed queries to a primary authentication server be redirected to a secondary server, and to do this, a copy of the original request must exist above the transport layer of the OSI model. This, in effect, mandates the use of retransmission timers.


Packet Formats

The RADIUS protocol uses UDP packets to pass transmissions between the client and server.

Code: The code region is one octet long and serves to distinguish the type of RADIUS message being sent in that packet. Packets with invalid code fields are thrown away without notification.

Identifier: The identifier region is one octet long and is used to perform threading, or the automated linking of initial requests and subsequent replies. RADIUS servers can generally intercept duplicate messages by examining such factors as the source IP address, the source UDP port, the time span between the suspect messages, and the identifier field.

Length: The length region is 2 octets long and is used to specify how long a RADIUS message is. The value in this field is calculated by summing the lengths of the code, identifier, length, authenticator, and attribute fields. The length field is checked when a RADIUS server receives a packet to ensure data integrity. Valid length values range between 20 and 4096. If the RADIUS server receives a transmission with a message longer than the length field, it ignores all data past the end point designated in the length field. Conversely, if the server receives a shorter message than the length field reports, it will discard the message.

Authenticator: The authenticator region, 16 octets long, is the field in which the integrity of the message's payload is inspected and verified. In this field, the most significant octet is transmitted before any other: it is the value used to authenticate replies from the RADIUS server. This value is also used in the mechanism to conceal passwords. There are two specific types of authenticator values: request and response. Request authenticators are used with Access-Request and Accounting-Request packets. The response authenticator is used in Access-Accept, Access-Reject, and Access-Challenge packets.
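
To make the header layout concrete, the following minimal sketch (written for this paper, not taken from any RADIUS implementation; the class name is illustrative) parses the four fixed header fields from a received UDP datagram using standard Java:

    import java.nio.ByteBuffer;

    // Minimal sketch: read the fixed RADIUS header - 1-octet code,
    // 1-octet identifier, 2-octet length, 16-octet authenticator.
    public class RadiusHeader {
        public final int code;          // e.g. 1 = Access-Request, 2 = Access-Accept
        public final int identifier;    // used to thread replies to requests
        public final int length;        // declared packet length, must be 20..4096
        public final byte[] authenticator = new byte[16];

        public RadiusHeader(byte[] datagram) {
            ByteBuffer buf = ByteBuffer.wrap(datagram);
            code = buf.get() & 0xFF;
            identifier = buf.get() & 0xFF;
            length = buf.getShort() & 0xFFFF;
            buf.get(authenticator);
            // Sanity checks described in the Length field discussion above.
            if (length < 20 || length > 4096) {
                throw new IllegalArgumentException("invalid RADIUS length: " + length);
            }
            if (datagram.length < length) {
                throw new IllegalArgumentException("datagram shorter than declared length");
            }
            // Octets beyond 'length' are simply ignored, as the specification requires.
        }
    }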

Shared Secrets

To strengthen security and increase transactional integrity, the RADIUS protocol uses the concept of shared secrets. Shared secrets are values generated at random that are known to both the client and server. The shared secret is used within all operations that require hiding data and concealing values. The only technical limitation is that shared secrets must be greater than 0 in length, but it is recommended that the secret be at least 16 octets; a secret of that length is virtually impossible to crack with brute force. Shared secrets (commonly called just "secrets") are unique to a particular RADIUS client and server pair. For example, if an end user subscribes to multiple Internet service providers for his dial-up access, he indirectly makes requests to multiple RADIUS servers. The shared secrets between the client NAS equipment in ISPs A, B and C and their respective RADIUS servers should not match.

Protecting transactional security by using an automated shared-secret changer has a disadvantage: there is no guarantee that the clients and servers can synchronize to the new shared secret at the most appropriate time. And even if it were certain that simultaneous synchronization could occur, if there are outstanding requests to the RADIUS server and the client is busy processing (and, therefore, misses the cue to synchronize the new secret), then those outstanding requests will be rejected by the server.

Authentication Methods

RADIUS supports a variety of different protocol mechanisms to transmit sensitive user-specific data to and from the authentication server. The two most common are the Password Authentication Protocol (PAP) and the Challenge Handshake Authentication Protocol (CHAP). RADIUS also allows for more attributes and methods developed by vendors. (A sketch of the PAP password-hiding step and the CHAP response computation follows the two items below.)

PAP: The User-Password attribute in a requesting packet signals to the RADIUS server that the PAP protocol will be used for that transaction. It's important to note that the only mandatory field in this case is the User-Password field. The User-Name field does not have to be included in the requesting packet, and it's entirely possible that a RADIUS server along a proxy chain will change the value in the User-Name field. To hide the password, the client first takes the request authenticator and the shared secret for the original request and submits them to an MD5 hashing sequence. The client's original password is then put through the XOR process with the result of that hash, and the outcome of these two steps is placed in the User-Password field. The receiving RADIUS server then reverses these procedures to determine whether to authorize the connection.

CHAP: CHAP dynamically encrypts the requesting user's ID and password. The user's machine obtains a challenge of 16 octets in length from the RADIUS client equipment. The user's machine then hashes that challenge together with the password and sends back a CHAP ID, a CHAP response, and the username to the RADIUS client. The RADIUS client, having received all of the above, places the CHAP ID and the CHAP response into the appropriate places in the CHAP-Password attribute and then sends the request. The challenge value originally obtained is placed either in the CHAP-Challenge attribute or in the authenticator field in the header; this is so the server can easily access the value in order to authenticate the user. The password in a CHAP transaction is never passed across the network. CHAP IDs are also non-persistent.
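
The following minimal sketch illustrates both computations using standard Java MD5 (java.security.MessageDigest). It is a sketch of the hiding and response steps as summarized above, not code from any particular RADIUS implementation, and for brevity it handles only passwords of 16 octets or less:

    import java.security.MessageDigest;
    import java.util.Arrays;

    public class RadiusAuthExamples {

        // PAP hiding: User-Password = password XOR MD5(sharedSecret + requestAuthenticator).
        // Only the single-block case (password of 16 octets or less) is shown.
        static byte[] hidePapPassword(byte[] password, byte[] sharedSecret,
                                      byte[] requestAuthenticator) throws Exception {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            md5.update(sharedSecret);
            md5.update(requestAuthenticator);
            byte[] mask = md5.digest();                  // 16-octet MD5 mask
            byte[] padded = Arrays.copyOf(password, 16); // pad password to 16 octets with zeros
            byte[] hidden = new byte[16];
            for (int i = 0; i < 16; i++) {
                hidden[i] = (byte) (padded[i] ^ mask[i]); // the XOR step described above
            }
            return hidden;                               // value placed in the User-Password field
        }

        // CHAP response: MD5(CHAP identifier + user's password + challenge).
        static byte[] chapResponse(byte chapId, byte[] password, byte[] challenge) throws Exception {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            md5.update(chapId);
            md5.update(password);
            md5.update(challenge);
            return md5.digest();                         // 16 octets carried in CHAP-Password
        }
    }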

RADIUS Accounting

RADIUS supports a full-featured accounting protocol subset, which allows it to satisfy all requirements of the AAA model. The design of accounting in RADIUS is based upon three major characteristics:
Accounting will be based on a client/server model.
Communications between devices will be secure.
RADIUS accounting will be extensible.

Basic Operation

All communications regarding RADIUS accounting are done with an Accounting-Request packet. A client that is participating in the RADIUS accounting process will generate an Accounting Start packet, which is a specific kind of Accounting-Request packet. This packet includes information on which service has been provisioned and on the user for whom these services are provided. This packet is sent to the RADIUS accounting server, which will then acknowledge receipt of the data. When the client is finished with the network services, it will send to the accounting server an Accounting Stop packet (again, a specialized Accounting-Request packet), which will include the service delivered; usage statistics such as time elapsed, amount of data transferred, and average speed; and other details. The accounting server acknowledges receipt of the stop packet, and all is well. If the server does not or cannot handle the contents of the Accounting-Request packet, it is not allowed to send a receipt acknowledgment to the client.
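
As an illustration only, the attributes typically carried in a Start/Stop pair might be collected as below. The attribute numbers are the standard RADIUS accounting ones (Acct-Status-Type = 40 with 1 = Start and 2 = Stop, Acct-Session-Id = 44, Acct-Session-Time = 46, Acct-Input-Octets = 42, Acct-Output-Octets = 43, User-Name = 1); the helper class itself is hypothetical:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Hypothetical helper that collects the attributes of Accounting-Request packets.
    public class AccountingExample {
        static Map<Integer, Object> startPacket(String sessionId, String userName) {
            Map<Integer, Object> attrs = new LinkedHashMap<>();
            attrs.put(40, 1);            // Acct-Status-Type = Start
            attrs.put(44, sessionId);    // Acct-Session-Id ties Start and Stop together
            attrs.put(1, userName);      // User-Name of the subscriber being accounted for
            return attrs;
        }

        static Map<Integer, Object> stopPacket(String sessionId, long seconds,
                                               long octetsIn, long octetsOut) {
            Map<Integer, Object> attrs = new LinkedHashMap<>();
            attrs.put(40, 2);            // Acct-Status-Type = Stop
            attrs.put(44, sessionId);
            attrs.put(46, seconds);      // Acct-Session-Time: elapsed time of the session
            attrs.put(42, octetsIn);     // Acct-Input-Octets transferred
            attrs.put(43, octetsOut);    // Acct-Output-Octets transferred
            return attrs;
        }
    }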

Reliability of Accounting

While the specification for RADIUS accounting is promising, experience shows that the delivery of accounting packets is not a 100% certainty. For example, if a client sends accounting packets to a server but receives no acknowledgment or response, it will continue to resend the same packet for only a limited time. This results in some sessions with inconsistent records, which presents problems for operations that require great consistency and accuracy.








APPLICATIONS

RADIUS for Web Authentication: Chances are good that you have an area of your web site that needs to be protected from general public access. RADIUS accounting can be used to track usage statistics for this protected site. Examples include a corporation that wants to create a special intranet site specifically for its remote, mobile, and home users, or an Internet service provider that wishes to create a private site for subscribers only, perhaps a billing or support site that contains technical information suitable only for paying customers.

LDAP Directory Service: The ever-present complaint of systems administrators who deal with multiple user databases across multiple platforms is that of efficiency. With a centralized directory, the various application servers can all tie into that one database and use its list. Without a centralized repository for user information, the effort of simply changing a password is multiplied by the number of systems on which a unique copy of the password is stored.

Parsing Accounting Files: One of the most useful aspects of RADIUS is the utility of its accounting portion. Logs from the RADIUS accounting server can be used for a multitude of purposes. RadiusReport, for example, allows you to import log files and create different reports based on their contents.

CONCLUSION

RADIUS provides all three AAA features, i.e. authentication, authorization, and accounting, in a single protocol, which is not the case for every other protocol. It is a lightweight, connectionless protocol that runs over UDP. It is highly secured using the method of shared secrets between the client and server.



BOOKS

1. Security+ In Depth by Paul Campbell, Ben Calvert & Steven Boswell
2. RADIUS by Jonathan Hassell

SITES

1. http://www.gnu.org/software/radius/radius.html
2. http://www.xs4all.nl/~evbergen/openradius/











Sri Venkateswara University

A Paper on
Universal Interaction of Smart Phones
using
Embedded
Systems


by

A.N. Sireesha O.S.Swetha
Sireesha_489@yahoo.co.in swetha_zen@yahoo.co.in




III Year
Department of Electronics & Communications
Sri Venkateswara College of Eng. & Technology
RVS Nagar
Chittoor-517 127

Contents

Abstract
1. Introduction
2. Smart Phones Technology
3. Smart Phone Interaction Models
3.1. Universal Remote Control Model
3.2. Dual Connectivity Model
3.3. Gateway Connectivity Model
3.4. Peer-to-Peer Model
4. System Architecture
5. Conclusions


Abstract

In this paper, we present a system architecture that allows users to interact with
embedded systems located in their proximity using Smart Phones. We have identified four
models of interaction between a Smart Phone and the surrounding environment:
universal remote control, dual connectivity, gateway connectivity, and peer-to-peer.
Although each of these models has different characteristics, our architecture
provides a unique framework for all of the models.

Central to our architecture are the hybrid communication capabilities
incorporated in the Smart Phones. These phones have the unique feature of incorporating
short-range wireless connectivity (e.g., Bluetooth) and Internet connectivity (e.g., GPRS)
in the same personal mobile device. This feature together with significant processing
power and memory can turn a Smart Phone into the only mobile device that people will
carry wherever they go.

1. Introduction
Recent advances in technology make it feasible to incorporate significant
processing power in almost every device that we encounter in our daily life. These
embedded systems are heterogeneous, distributed everywhere in the surrounding
environment, and capable of communicating through wired or wireless interfaces. For a
number of years, visionary papers have presented a picturesque computerized physical
world with which we can potentially interact faster and in a simpler fashion. People,
however, are not yet taking advantage of this ubiquitous computing world. Despite all the
computing power laying around, most of our daily interactions with the surrounding
environment are still primitive and far from the ubiquitous computing vision. Our pockets
and bags are still jammed with a bunch of keys for the doors we have to open/close daily
(they did not change much since the Middle Ages), the car key or remote, access cards,
credit cards, and money to pay for goods. Any of these forgotten at home can turn the day
into a nightmare. If we travel, we also need maps and travel guides, coins to pay the
parking in the city, and tickets to take the train or subway.
In addition, we are always carrying our mobile phone, which for some mysterious
reason is the least likely to be left at home. When we finally arrive home or at the hotel,
we are greeted by several remote controls eager to test our intelligence. All these items
are absolutely necessary for us to properly interact with our environment. The problem is
that there are too many of them, they are sometimes heavy, and we will likely accumulate
more and more of them as our life goes on, requiring much larger pockets. For this
problem, the community does not lack innovative solutions that address some of its
aspects (e.g., wireless micro servers, electronic payment methods, digital door keys).
What is missing is a simple, universal solution, which end-users are likely to accept
easily. Ideally, we would like to have a single device that acts as both personal server and
personal assistant for remote interaction with embedded systems located in proximity of
the user. This device should be programmable and support dynamic software extensions
for interaction with newly encountered embedded systems (i.e., dynamically loading new
interfaces). To simplify its acceptance by society, it should be a device that is already
carried by people wherever they go.
We believe that Smart Phones are the devices that have the greatest chance of
successfully becoming universal remote controls for people to interact with various
devices from their surrounding environment; they will also replace all the different items
we currently carry in our pockets. Smart Phone is an emerging mobile phone technology
that supports Java program execution and provides both short-range wireless connectivity
(Bluetooth) and cellular network connectivity through which the Internet can be accessed.
In this paper, we present a system architecture that allows users to interact with
embedded systems located in their proximity using a Smart Phone. We have identified
four models of interaction between a Smart Phone and the surrounding environment:
universal remote control, dual connectivity, gateway connectivity, and peer-to-peer.
Although each of these models has different characteristics, our architecture provides a
unique framework for all the models.
Central to our architecture are the hybrid communication capabilities incorporated
in the Smart Phones which allow them to interact with the close-by environment through
short-range wireless networking and with the rest of the world through the Internet over
cellular links. This feature together with significant processing power and memory can
turn a Smart Phone into the long awaited universal personal assistant that can make our
daily life much simpler.

2. Smart Phones Technology
With more than a billion mobile phones being carried around by consumers of all
ages, the mobile phone has become the most pervasive pocket-carried device. We are
beginning to see the introduction of Smart Phones, such as Sony Ericsson P800/P900 and
Motorola A760, as a result of the convergence of mobile phones and PDA devices.
Unlike traditional mobile phones, which have limited processing power and act merely as
dumb conduits for passing voice or data between the cellular network and end users,
Smart Phones combine significant computing power with memory, short-range wireless
interfaces (e.g., Bluetooth), Internet connectivity (over GPRS), and various input-output
components (e.g., high-resolution color touch screens, digital cameras, and MP3 players).
Sony Ericsson P800/P900 runs Symbian OS, an operating system specifically designed
for resource constrained devices such as mobile phones. It also comes equipped with two
versions of Java technology: PersonalJava and J2ME CLDC/MIDP. Additionally, it supports C++, which provides low-level access to the operating system and the Bluetooth driver. The phone has 16MB of internal memory and up to 128MB of external flash memory. The Motorola A760 has a Motorola i250 chip for communication, Intel's 200 MHz PXA262 chip for computation, and 256MB of RAM. It runs a version of MontaVista Linux and comes with Java J2ME support.
Bluetooth is a low-cost, low-power standard for wireless connectivity. Today, we
can find Bluetooth chips embedded in PCs, laptops, digital cameras, GPS devices, Smart
Phones, and a whole range of other electronic devices. Bluetooth supports point-to-point
and point-to-multipoint connections. We can actively connect a Bluetooth device to up to
seven devices simultaneously.
Together, they form an ad hoc network called a piconet. Several piconets can be linked to form a scatternet. Another important development for the mobile phone
technology is the introduction of General Packet Radio Service (GPRS), a packet
switching technology over the current GSM cellular networks. GPRS is offered as a
nonvoice value-added service that allows data to be sent and received across GSM
cellular networks at a rate of up to 171.2kbps, and its goal is to supplement today's
Circuit Switched Data and Short Message Service. GPRS offers an always-on service and
supports Internet protocols.


Figure 1. Example of Smart Phones: Sony Ericsson P800 (Left) and Motorola A760

3. Smart Phone Interaction Models
A Smart Phone can be used to interact with the surrounding environment in
different ways. We have identified four interaction models: universal remote control, dual
connectivity, gateway connectivity, and peer-to-peer. With these models, a Smart Phone
can be used to execute applications from as simple as remotely adjusting various controls
of home appliances or opening smart locks to complex applications such as automatically
booking a cab or ordering and paying in a restaurant using an ad hoc network of mobile phones to connect to the cashier's computer.
3.1. Universal Remote Control Model
The Smart Phone can act as a universal remote control for interaction with
embedded systems located in its proximity. To support proximity-aware interactions,
both the Smart Phone and the embedded systems with which the user interacts must have
short-range wireless communication capabilities. Figure 2 illustrates such interactions
using Bluetooth.
Due to its low-power, low-cost features, Bluetooth is the primary candidate for the short-
range wireless technology that will enable proximity-aware communication.

Figure 2. The Universal Remote Control Interaction Model

Figure 3. The Dual Connectivity Interaction Model

Since embedded systems with different functionalities can be scattered
everywhere, a discovery protocol will allow Smart Phones to learn the identity and the
description of the embedded systems located in their proximity. This protocol can work
either automatically or on-demand, but the information about the devices currently
located in the user's proximity is displayed only upon the user's request. Each embedded system
should be able to provide its identity information (unique to a device or to a class of
devices) and a description of its basic functionality in a human-understandable format.
This model works well as long as the user has the interfaces for interacting with the
embedded systems preinstalled on the phone.
3.2. Dual Connectivity Model
Central to our universal interaction architecture is the dual connectivity model which is
based on the hybrid communication capabilities incorporated in the Smart Phones. They
have the unique feature of incorporating both short range wireless connectivity (e.g.,
Bluetooth) and Internet connectivity (e.g., GPRS) in the same personal mobile device.
With this model, the users can interact with the close by environment using the short-
range wireless connectivity and with the rest of the world using the Internet connectivity.
As a typical application, let us assume that a person has just bought an intelligent
microwave oven equipped with a Bluetooth interface. This embedded system is very
simple and is not capable of storing or transferring its interface to a Smart Phone.
However, it is able to identify itself to Smart Phones. Using this information, the phones
can connect to a server across the Internet (i.e., over GPRS) to download the code of the
interface that will allow it to become a remote control for the microwave oven. The
phone can also perform authentication over the Internet to ensure that the code is trusted.
All further communication between this embedded system and the Smart Phone happens
by executing the downloaded code. This code will display a panel that emulates the panel
of the microwave on the phone's screen (i.e., it effectively transforms the phone into an
intuitive microwave remote control).
Another typical application is opening/closing Smart Locks. We envision that the entry in
certain buildings will soon be protected by Smart Locks (e.g., locks that are Bluetooth-
enabled and can be opened using digital door keys). The dual connectivity model enables
users carrying Smart Phones to open these locks in a secure manner
The dual connectivity model can also be used to implement electronic payment
applications. A client does not need to know about a vendor's embedded system in
advance. The Smart Phone can authenticate the vendor using its Internet connection. The
same connection can be used by the client to withdraw electronic currency from her bank
and store it on the phone. Another option provided by the Smart Phone is to send some of
the unused money back into the bank account (i.e., make a deposit each time the amount
on the phone exceeds a certain limit). Potentially, the vendor's embedded system can also
be connected to the Internet. For instance, this ability can be used to authenticate the
client.

Figure 4. The Gateway Connectivity Interaction Model


Figure 5. The Peer-to-Peer Interaction Model

3.3. Gateway Connectivity Model
Many pervasive applications assume wireless communication through the IEEE 802.11
family of protocols. These protocols allow for a significant increase in the
communication distance and bandwidth compared to Bluetooth. Using these protocols,
the communication range is 250m or more, while Bluetooth reaches only 10m. The
bandwidth is also larger, 11-54Mbps compared to less than 1Mbps for Bluetooth.
Additionally, many routing protocols for mobile ad hoc networks based on 802.11 already
exist. The disadvantage of 802.11 is that it consumes too much energy, and consequently,
it drains the mobile device's batteries in a very short period of time. With the current
state of the art, we do not expect to have 802.11 network interfaces embedded in Smart
Phones or other resource constrained embedded systems that need to run on batteries for
a significant period of time (e.g., several hours or even days). More powerful systems,
however, can take advantage of the 802.11 benefits and create mobile ad hoc networks.
In such a situation, a user would like to access data and services provided by these
networks from its Smart Phone. To succeed, a gateway device has to perform a change of
protocol from Bluetooth to 802.11 and vice-versa. Many places in a city (e.g., stores,
theaters, restaurants) can provide such gateway stations together with 802.11 hotspots.
Figure 4 illustrates this communication model and also presents an application that can be
built on top of it
3.4. Peer-to-Peer Model
The Smart Phones can also communicate among themselves (or with other Bluetooth-
enabled devices) in a multihop, peer-to-peer fashion, similar to mobile ad hoc networks.
For instance, this model allows people to share music and pictures with others even if
they are not in the proximity of each other. Figure 5 depicts yet another example of this
model. A group of friends having dinner in a restaurant can use their Smart Phones to
execute a program that shares the check. One phone initiates this process, an ad hoc
network of Smart Phones is created, and finally the payment message arrives at the
cashier.

4. System Architecture
Our system architecture for universal interaction consists of a common Smart
Phone software architecture and an interaction protocol. This protocol allows Smart
Phones to interact with the surrounding environment and the Internet. Figure 6 shows the
Smart Phone software architecture. In the following, we briefly describe the components
of the software architecture.
Bluetooth Engine is responsible for communicating with the Bluetooth-enabled
embedded systems. It is composed of sub-components for device discovery and
sending/receiving data. The Bluetooth Engine is a layer above the Bluetooth stack and
provides a convenient Java API for accessing the Bluetooth stack.
Internet Access Module carries out the communication between the Smart Phone and
various Internet servers. It provides a well-defined API that supports operations specific
to our architecture (e.g., downloading an interface). The protocol of communication is
HTTP on top of GPRS.


Figure 6. Smart Phone Software Architecture

Proximity Engine is responsible for discovering the embedded systems located within
the Bluetooth communication range. Each time the user wants to interact with one of
these systems, and an interface for this system is not available locally (i.e., a miss in the
Interface Cache), the Proximity Engine is responsible for downloading such an
interface. If the embedded system has enough computing power and memory, the
interface can be downloaded directly from it. Otherwise, the Proximity Engine invokes
the Internet Access Module to connect to a web server and download the interface. The
downloaded interface is stored in the Interface Cache for later reuse. Once this is done,
the Proximity Engine informs the Execution Engine to dispatch the downloaded interface
for execution. All further communication between the Smart Phone and the embedded
system happens as a result of executing this interface.
Execution Engine is invoked by the Proximity Engine and is responsible for dispatching
interface programs for execution over the Java virtual machine. These programs interact
with the Bluetooth Engine to communicate with the embedded systems or with other
Smart Phones. They may also interact with the Internet Access Module to communicate
with Internet servers. For instance, the interface programs may need to contact a server
for security related actions or to download necessary data in case of a miss in the
Personal Data Storage.
Interface Cache stores the code of the downloaded interfaces. This cache avoids
downloading an interface every time it is needed. An interface can be shared by an entire
class of embedded systems (e.g., Smart Locks, or Microwaves). Every interface has an ID
(which can be the ID of the embedded system or the class of embedded systems it is
associated with). This ID helps in recognizing the cached interface each time it needs to
be looked up in the cache. Additionally, each interface has an associated access handler
that is executed before any subsequent execution of the interface. This handler may
define the time period for which the interface should be cached, how and when the
interface can be reused, or the permissions to access local resources. The user can set the
access handler's parameters before the first execution of the interface.
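
As a minimal sketch of how such a cache could be organized (the class names and the handler interface below are illustrative assumptions, not the paper's actual implementation), consider:

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative Interface Cache keyed by the interface ID, where each entry
    // carries an access handler that runs before any reuse of the cached code.
    public class InterfaceCache {

        public interface AccessHandler {
            boolean allowAccess(String interfaceId); // may the cached interface be reused now?
        }

        public static class Entry {
            final byte[] interfaceCode;      // downloaded interface program
            final AccessHandler accessHandler;
            final long cachedUntilMillis;    // caching period set via the handler's parameters

            Entry(byte[] code, AccessHandler handler, long cachedUntilMillis) {
                this.interfaceCode = code;
                this.accessHandler = handler;
                this.cachedUntilMillis = cachedUntilMillis;
            }
        }

        private final Map<String, Entry> entries = new HashMap<>();

        public void put(String interfaceId, Entry entry) {
            entries.put(interfaceId, entry);
        }

        // Returns the cached code, or null on a miss (absent, expired, or access denied),
        // in which case the Proximity Engine would download the interface again.
        public byte[] lookup(String interfaceId) {
            Entry e = entries.get(interfaceId);
            if (e == null) return null;
            if (System.currentTimeMillis() > e.cachedUntilMillis) {
                entries.remove(interfaceId);
                return null;
            }
            return e.accessHandler.allowAccess(interfaceId) ? e.interfaceCode : null;
        }
    }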
Personal Data Storage acts as a cache for active data, similar to Active Cache. It
stores data that needs to be used during the interactions with various embedded systems.
Examples of such data include digital door keys and electronic cash. Each data item
stored in this cache has three associated handlers: access handler, miss handler, and
eviction handler. Each time an interface needs some data, it checks the Personal Data
Storage. If the data is available locally (i.e., hit), the access handler is executed, and the
program goes ahead. For instance, the access handler may check if this data can be shared
among different interfaces. If the data is not available locally (i.e., miss), the miss handler
instructs the Internet Access Module to download the data from the corresponding
Internet server. The eviction handler defines the actions to be taken when data is evicted
from the cache. For instance, electronic cash can be sent back to the bank at eviction
time. Figure 7 shows the interaction protocol that takes place when a Smart Phone needs
to interact with an embedded system. We consider that any embedded system is
registered with a trusted web server (this web server can be physically distributed on
multiple computers). At registration, the web server assigns a unique ID and a URL to the
device.
All the information necessary to interact with the device along with a user
interface is stored at that URL. This URL may be common for an entire class of
embedded systems. The user invokes the Proximity Engine each time she needs to
interact with a device located in the proximity. Once the embedded systems in the
proximity have been identified, the user can choose the one she wants to interact with.
Consequently, a request is sent to the embedded system to provide its ID and URL. Upon
receiving the ID and URL of the embedded system, the Smart Phone executes the access
control handler, and then, loads and executes the interface.
In case of a miss in the Interface Cache, the interface needs to be downloaded on the
phone either from the web server or from the embedded system itself. An interface
downloaded from an embedded system is untrusted and is not allowed to access local
resources (i.e., this is a sandbox model of execution, where the interface can only execute
safe instructions on the phone). The interfaces downloaded from the web server are
trusted; they are assumed to be verified before being distributed by the server. Each time
a Smart Phone requests an interface from the web server, it has to send the interface ID
and the URL provided by the embedded system.

Figure 7. Smart Phone Interaction Protocol

It also sends its ID (stored in the Personal Data Storage). The permission to download an
interface is subject to access control enforced based on the Smart Phone ID and,
potentially, other credentials presented by the user. Once the access is granted, the web
server responds with the interface code.

5. Conclusion

In this paper, we have argued for turning the Smart Phone into the only device
that people carry in their pockets wherever they go. The Smart Phone can be used as both
personal server that stores or downloads data that its user needs and personal assistant for
remote interaction with embedded systems located in the users proximity. To achieve
this vision, we have presented a unified system architecture for different models of
interaction between a Smart Phone and the surrounding environment. Central to this
universal interaction architecture is the dual connectivity feature of Smart Phones, which
allows them to interact with the close-by environment through short-range wireless
networking and with the rest of the world through the Internet over cellular links.

6.References
www.google.com
www.embeddedsystems.com
www.electronicsforyou.com








SOS TRANSMISSION
Through Cellular phones to save Accident Victims
-Boon for the cellular phone users





PRESENTED BY


SHAIK ABUBAKAR N.SITA MAHALAKSHMI
III E.I.C.E III E.C.E
N B K R I ST VIDYANAGAR NARAYANA ENGINEERING
NELLORE (AP) COLLEGE, GUDUR
e-mail:abubakarsk2000@yahoo.co.in NELLORE, AP
Ph:9885577550 e-mail:nallapati_ece@yahoo.co.in


1. Introduction
Cellular phones are turning out to be a menace on the road. This is a major problem for the cellular phone manufacturers. This paper provides a solution which transmits an SOS signal to save accident victims. It describes in detail a cost-effective, foolproof solution.
There are many factors to be considered when designing such a system. In most accidents the victim becomes unconscious. How is an SOS transmitted then? Here, many ideas can be implemented, and one such solution is described here. The cell phone is fitted with a transducer, which detects shocks. The cell phone automatically transmits the SOS if the shock level goes beyond a certain percentage.
The cell phone must not trigger an accidental SOS. To ensure this, the shock level that triggers the SOS must be high enough. On the other hand, if the shock level is made very high, then an accident might not be identified at all.



Fig: Antenna used in RDF to identify the position of the victim

Having thus identified the situations in the accident, one needs to understand the actual
requirements in each case. They are given below.

The solution requires a software robot resident in the cellular phone provider's server, which can transmit the SOS signal in an intelligent manner and monitor responses for the victim.
i) Similarly, the solution needs a positioning system to transmit the victim's whereabouts to others. This has to be a cheap system and should not greatly increase the cost of the cell phone receiver.
ii) The solution requires a high fidelity shock transducer and decoding circuit to identify
the shock magnitude.
iii) The SOS has to be transmitted as soon as possible. So all systems must have a very
small time delay.
iv) Above all, the new system must fit in with the present system (i.e.,) there must be no
difference in the information received between a user who requests this option and one
who does not.
The detailed description of the solution will be presented now.

2. The Toy Car Experiment

In case the victim becomes unconscious, the system must be able to automatically detect
an accident and transmit the SOS automatically. In order to achieve this, a shock
transducer is used to measure the jerk experienced through the accident and trigger the
SOS circuit if the force level is very high. This system needs statistical data acquisition to
find out the exact threshold level of the force in an accident.

It is highly expensive to simulate the accident in real time. So, a scaled down experiment
is used. Here, a pair of toy cars of mass 200g is made to collide with each other. The
force caused by them is measured by simple piezoelectric transducers. The results of this
experiment are tabulated below






Fig: Toy car experiment to verify the working of the system

Table 2.1

Sample no.    Measured Voltage (mV)    Actual Force (N)
1             113.2                    0.977
2             112.7                    0.972
3             114.3                    0.985
4             114.5                    0.987
5             113.3                    0.978
Mean          113.6                    0.980
As seen from the experiment, the average force acting on a toy car in case of an accident is approximately 1 N. For a car of 960 kg moving at 70 kmph, the force will be scaled up about 18000 times, to roughly 18 kN. These practical results can be verified by a simple theoretical calculation. A car weighing 960 kg decelerates from approximately 70 kmph to 0 kmph within about 2 seconds in case of an accident.

Hence, the force is given by F = ma, which is of the order of 960 x 70 x 1000/3600, or about 18.67 kN. This agrees with the scaled-down experimental results. However, in a four-wheeler, not all of the total force acts inside the vehicle. As per information obtained from Mercedes Benz, only 10% of the total force acts inside the car (see Acknowledgement). Thus, the threshold can be set at approximately 1 kN. The scaled-down experiment used a cheaper transducer that does not measure high forces.

The transducer required for the actual system costs Rs.1000 a pair. Based on the
statistical data collected above, the approximate threshold level is determined. More
accurate results can be determined if the experiments are carried in real time to the exact
detail.
In order to ensure that the force calculated above acts on the cell phone, it is essential to
place the phone in the stand that normally comes as a standard part of cars. This stand
requires a slight modification to provide the cell phone a small moving space so that it is
jerked when an accident occurs.

The alternate and better solution would be to attach the transducer to some part of the vehicle itself and connect the cell phone to it whenever the user is driving his/her car. This solution would require that the transducer be properly protected. The problem of finding the position of the victim is dealt with next.


3. Identifying the Position of the Victim:

The problem of knowing where we are has been an interesting and difficult problem
through the ages. Years of research have resulted in the Global Positioning System
(GPS). This technique uses three satellites and pinpoints the location by the triangulation process, wherein the user's position is located as the point of intersection of the three circles corresponding to the satellites. Installing such a system is quite simple. But the major constraint here is the cost. A normal hand-held GPS costs around $100 and is quite heavy. Miniaturizing the above apparatus will increase the cost further.

This would mean an extra cost of Rs.10000 to Rs.15000 for the Indian user. The better option would be to wait for an SOS signal and then identify the victim's position. This, being a faster technique, also makes the design process easy and cheap.




Fig: Identifying the position of the victim through satellite

This being the case, one could make use of certain obvious facts to identify the victim.
They are,
i) The cell within which the victim is present can be identified easily by the base station.
However, this resolution is not enough because the cell can be of a huge size.
ii) Accidents are exceptional cases. They occur rarely. Further, the probability of two users in the same cell getting into an accident at the same time is very low.
The system suggested by this paper makes use of a beacon or search signal transmitted by
the base station. This is a constant amplitude ac. signal that fits in the guard band of the
respective cell. The signal has the same frequency for all users and so is unsuitable for
simultaneous multi-user handling.
However, that will be a highly improbable case as reasoned above.
This search signal is sent only if an SOS is identified. So, when a victim sends out his
SOS, the base station immediately sends the search signal.

The cellular phone is fitted with a small reflector which reflects this signal as such. This
is easily achieved by constructing a mismatched termination in the cellular phone for that
frequency. Now, the to-and-fro travel of the signal introduces a time delay. So, from the reflected signal, the user's distance can be identified.
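
For illustration, the range estimate from the reflected beacon reduces to a simple round-trip-time calculation. The sketch below assumes the delay is measured at the base station and that the signal travels at the speed of light; it is not part of the actual system design:

    // Sketch: distance from the round-trip delay of the reflected beacon signal.
    public class BeaconRanging {
        static final double SPEED_OF_LIGHT_M_PER_S = 3.0e8;

        // roundTripSeconds: time between sending the beacon and receiving its reflection.
        static double distanceMetres(double roundTripSeconds) {
            return SPEED_OF_LIGHT_M_PER_S * roundTripSeconds / 2.0; // halve for the one-way path
        }

        public static void main(String[] args) {
            // e.g. a 20-microsecond round trip corresponds to a user about 3 km away
            System.out.println(distanceMetres(20e-6) + " m");
        }
    }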

Fig: Cellular phone used in SOS transmission

The information obtained now gives only the radius of the circle on which the user might be present. This might be too large an area to identify the user even within the cell limits, as there is no maximum limit on the cell area. Since we have the radius, all that is required is to find the angle or direction in which the user is present. To do this, we use the Radio Direction Finder (RDF) antenna system. This makes use of a highly directional loop antenna to identify the signal source, which in this case is the cellular phone. In order to do this, the cellular phone needs to transmit a microwave signal to the base station.

This can be of any frequency that has not been allocated to the existing control frequencies. The base station is then fitted with the CROSSED LOOP or BELLINI-TOSI or GONIOMETER type of direction finder. It has been proved mathematically that the meter points to the direction of the signal source.

The user in distress sends out a microwave signal to the base station just as the base station sends its beacon signal. From the reflected beacon signal, the radius of the victim's position is found. From the goniometer, the direction is found as well. This system, as assumed above, presents a design for only one user. To handle this, a small electronic system, preferably a microcontroller-based system, may be used. Such systems are available widely in the market, so there is no point in trying to design one here.

Thus, the problem of identifying the victim is overcome. Once the victim's location is identified, the base station transmits the SOS sent by the cell phone, along with his coordinates, to the main server. The cell phone thus initiates the process and the base station propagates it.

4. Complete Block diagram of the System:

The diagram below depicts the working of the complete system. As seen, the jerk caused by the accident is detected by the shock transducer and the SMS subroutine is triggered. Along with the message, control signals that inform the base station that an accident has occurred are transmitted.

The triggering is achieved by using a high-pass filter that detects abrupt changes in the transducer output. Simultaneously, the microwave signal for the goniometer is also transmitted. The position is identified as described in the previous section. The user's ID and his position in polar coordinates are given to the software robot. This robot then relays the user's position to other subscribers based on a priority list.
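
The triggering step can be pictured with a small sketch. The threshold value and the class name below are assumptions for illustration only; the real threshold would come from the calibration described in the toy car experiment:

    // Sketch of the shock-trigger logic: compare successive transducer samples and
    // fire the SOS routine when the jump exceeds the calibrated threshold (~1 kN here).
    public class ShockTrigger {
        private static final double THRESHOLD_NEWTONS = 1000.0; // assumed calibration value
        private double previousForce = 0.0;

        // Called for every new force sample read from the transducer.
        public boolean onSample(double forceNewtons) {
            double jump = forceNewtons - previousForce;   // crude high-pass: abrupt change only
            previousForce = forceNewtons;
            if (jump > THRESHOLD_NEWTONS) {
                sendSos();
                return true;
            }
            return false;
        }

        private void sendSos() {
            // In the real system this would start the SMS subroutine and the
            // microwave signal for the goniometer; here it is just a placeholder.
            System.out.println("SOS triggered");
        }
    }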




Figure 4.1: Block Diagram

So far, the hardware design of the system has been dealt with in detail. As mentioned at
the start of the paper, a software robot that manages the whole show will have to be
designed. This robot is made resident in the main server in the control tower of the
cellular service provider. The functions that this robot will have to perform are complex.

The algorithm it follows and its code at the highest level of abstraction are explained in
the next section.

5. Designing the Software Robot:

A software robot is a program that resides in a network (or an environment) and executes
a specific task assigned to it. For this purpose, it may move around the environment or
contact other software robots in the same or other environments. A software robot is to be
designed for this system so as to monitor and transmit the SOS signals in an intelligent
manner.
The tasks that are to be performed by the software robot are listed below.
i) It has to transmit the SOS to the appropriate persons, as will be described.
ii) It has to act in the victim's place and monitor responses.
iii) It has to check for a confirmation from the victim to avoid false alarms. This is accomplished by interrogating the victim and waiting for the confirmation. If, in a very short time, there is no response, the transmitted SOS must be followed through with a False Alarm message.
Before designing the algorithm, the hierarchy in which the SOS is to be transmitted has to be decided. This takes into account the following factors:
i) The proximity of the help source.
ii) The certainty with which help might be obtained. For example, a relative whose cell number is present in the victim's address book would be more likely to help than a third person.
Based on the above constraints, a suggested hierarchy for transmitting the SOS is given below. This is maintained as sets of indices in the agent's look-up table, with each index representing one group.
i) Emergency desk of all hospitals in the victim's cell.
ii) All doctors presently in the victim's cell.
iii) All subscribers in the victim's address book that are presently in the cell.
iv) All subscribers in the victim's cell.
v) Emergency desk of all hospitals in the next nearest cell, and so on.
This set of indices is changed in a dynamic fashion by the MAILER DAEMON in the server. This MAILER DAEMON is normally present in all servers. It is this program that initiates the actual Agent whenever an SOS occurs. The code of the Agent is given at an abstracted level below.
public class Agent
{
    public void run( )
    {
        /* Get the victim's number and position from the MAILER DAEMON.
           The Subscriber class identifies the victim. */
        Subscriber sosTransmitter = new Subscriber( MailerDaemon.getVictim( ) );
        Position victimPosn = new Position( MailerDaemon.getPosition( ) );
        boolean processingFlag = true;
        Subscriber helpSub;
        Response resp = null;

        // Walk the look-up table of indices (hospitals, doctors, address book, ...)
        while ( ( helpSub = readList( lookUp ) ) != null )
        {
            send( "HELP " + victimPosn, helpSub );   // transmit the SOS to this group
            delay( 30 );                             // wait for 30 seconds
            resp = scanResponse( );
            if ( resp != null )
                break;                               // somebody has responded
        }

        if ( resp == null )
            // scan responses continuously for 120 secs after transmitting to all subscribers
            resp = scanResponse( 120 );

        if ( resp != null )
        {
            send( "HELP ON THE WAY", sosTransmitter ); // inform the victim that help is coming
            processingFlag = false;
        }
    }
}

The agent given here, once started, gets the victim's ID and position into the respective objects. It then takes each of the indices from the look-up table, puts it into its corresponding object, and sends the SOS to that group. It then monitors the responses and informs the victim when somebody responds.

6. Simulation:
The simulation of this system has been carried out on a small scale. As seen from the block diagram, a microphone substitutes for the shock transducer of the original system. Its output is then transmitted through a medium-wave transmitter to a personal computer. The signal received is passed through an ADC and read by a C program. This program checks the signal value and sets a flag variable when it goes beyond a certain level.

This flag is continually checked by a thread of the Java front-end. If the flag is set, the program connects to the back-end database and displays a list of users to whom the mock message is sent, based on the hierarchy explained above. The simulation does not cover the positioning part of the system, as that is too expensive to be done on a small scale. The screen shot of the Java front-end is shown in the next section.
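
A minimal sketch of that front-end thread is given below. The names are illustrative; in the actual simulation the flag is set by the C program and the notification list comes from the back-end database:

    // Sketch of the Java front-end thread used in the simulation: poll the flag set by
    // the C program and, when it is raised, fetch and display the notification list.
    public class SosFrontEnd implements Runnable {
        private volatile boolean accidentFlag = false;   // set when the ADC value crosses the level

        public void raiseFlag() { accidentFlag = true; } // stands in for the C program's flag write

        @Override
        public void run() {
            while (true) {
                if (accidentFlag) {
                    accidentFlag = false;
                    notifySubscribers();
                }
                try { Thread.sleep(100); } catch (InterruptedException e) { return; }
            }
        }

        private void notifySubscribers() {
            // In the simulation this connects to the back-end database and displays the
            // list of users (hospitals, doctors, address-book entries) in priority order.
            System.out.println("Mock SOS dispatched to priority list");
        }
    }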




8. Approximate Cost Analysis of the System:

1. Cost of installing a goniometer for one base station: Rs. 20000
2. Number of base stations in Chennai city: 50
3. Cost of installing goniometers for the city: Rs. 1000000
4. Cost of other hardware and software (for the whole city): Rs. 100000
5. Number of subscribers in the city: 10000 (assumed value)
6. Cost per subscriber: Rs. 110
7. Cost of transducer fitted in the cell phone: Rs. 1000
8. Cost of other hardware in the cell phone: Rs. 1000
Total cost per subscriber: Rs. 2110 (approximate)

It is thus seen that, allowing for the widest possible cost, the total increase is only Rs. 2110 per subscriber, which is a good price considering that the system can save the life of the subscriber.

9. CONCLUSION:

The system, though complete, presents a few limitations. It requires the user to place the cellular phone in a stand or to connect the transducer to the vehicle in the case of four-wheelers. Though this might seem to take choice away from the user, the fact that the system deals with a question of life or death is more important.

The system needs detailed surveying to map the position of the user in polar coordinates to actual localities. This, however, is a one-time job. The system does not handle multiple victims simultaneously; however, priority can be allocated to users based on the force measured.

False alarms are bound to occur in such a system. This can be reduced by ringing the cellular phone every time an SOS is sent, thereby warning the user. The data collected are approximate; however, accurate data can be collected if the system is tested in real time as a commercial venture.

Thus, if implemented, this system would prove to be a boon to all the people out there driving with hands-free earphones in their ears.

10. References:

[1] Helfrick and Cooper, Electronics Measurements and Instrumentation
[2] Raj Pandya, Personal Mobile Communication Systems and Services
[3] Thiagarajan Viswanathan, Telecommunication and Switching Systems
[4] K.D.Prasad, Antennas and Wave Propagation
[5] George Kennedy, Electronic Communication Systems

TACTILE SENSING SYSTEM USING ARTIFICIAL NEURAL NETWORKS






AUTHORS
D.RAJESH
E-mail: rajeshmsec@yahoo.co.in
DEPARTMENT OF ELECTRONICS AND COMMUNICATIONS ENGINEERING
MEPCO SCHLENK ENGINEERING COLLEGE, SIVAKASI
C.VENKATRAMAN
E-mail: chandru.venkat@gmail.com
DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING
MEPCO SCHLENK ENGINEERING COLLEGE, SIVAKASI

CONTENTS

ABSTRACT
INTRODUCTION
CONFIGURATION OF TACTILE SENSING SYSTEM
TACTILE DATA PROCESSING
SYSTEM MODULES
ARTIFICIAL NEURAL NETWORKS
EXPERIMENTATION
EXPERIMENTS ON OBJECT IDENTIFICATION USING ANN
CHANGE IN POSITION
CHANGE IN SIZE
CHANGE IN ORIENTATION
CONCLUSION
REFERENCES









ABSTRACT
Object Identification plays an
important role in various applications
ranging from robotics to computer
vision. Artificial neural networks (ANNs) are being used in various pattern recognition applications due to their ability and adaptability to learn. This paper presents the identification of objects independent of size, position and orientation using the concepts of ANNs and moments. The image of the object
is taken with the help of the tactile
sensing system. This paper describes
the complete hardware and software
concepts of the system for the object
identification with the help of ANN.
Keywords: Tactile sensing, Object
orientation, force sensing sites.

INTRODUCTION:
Of the many sensing operations
performed by the human beings, the
one that is probably the most likely to
be taken for granted is that of touch.
Touch is not only complementary to
vision, but it offers many powerful
sensing capabilities. Tactile sensing is another name for touch sensing, which deals with the acquisition of information about an object simply by touching that object. Touch sensing gives information about the object such as its shape, hardness, surface details and height. Tactile sensing is required when an intelligent robot has to perform delicate assembly operations. During such an assembly operation an industrial robot must be capable of recognizing parts, determining their position and orientation, and sensing any problem encountered during the assembly from the interface of the parts.
Keeping all these things in mind, a tactile sensing system is developed which uses image processing techniques for preprocessing and analysis along with ANNs for object identification. Classification of objects independent of translation, scale and rotation is a difficult task. The concept of the artificial neural network is used in this system for the proper identification of objects irrespective of their size, position and orientation.

CONFIGURATION OF TACTILE
SENSING SYSTEM:
This system uses a conductive elastomer as a sensor, which has the property that its conductivity changes as a function of the applied pressure. Fig 1 shows the configuration of the system. The conductive elastomer is mounted on an 8x8 array of force sensing sites for the measurement of the pressure distribution over the object. These force sensing sites are connected through a PC add-on data acquisition card. A stepper motor is used to apply a specific amount of pressure for proper identification of the objects. Hardware for scanning the matrix and the related signal conditioning is designed along with the stepper motor interface circuitry. Once the image of the object is acquired through the tactile sensing system, it is further processed using image processing concepts for the proper identification and for inferring other properties of the object. Tactile data for object identification is acquired using a row-scanning technique. After the removal of noise from this tactile data, it passes to the different modules of the system for further processing.
Fig1. Configuration of the system



TACTILE DATA PROCESSING:
The main modules of this system are preprocessing, data acquisition, matrix representation, graphical representation, edge detection and moment calculation for generating a feature vector. Tactile image acquisition involves conversion of the pressure image into an array of numbers that can be manipulated by the computer. In this system a tactile sensor is used to obtain the pressure data of the object, and this data is acquired with the help of the data acquisition and data input/output cards, which are interfaced with the computer. The preprocessing module removes the noise, which is essential for acquiring a usable image of the object under consideration. The image is then analyzed by a set of numerical features to remove redundancy from the data and reduce its dimensions. Invariant moments are calculated in this module; they are required by the artificial neural network module for identification of the object independent of scale, rotation and position.
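To make the moment step concrete, the following is a minimal sketch (not the authors' code) of how a seven-element invariant-moment feature vector could be computed from an 8 x 8 tactile pressure image; it assumes NumPy and OpenCV (cv2) are available, and the log scaling is an illustrative choice.

```python
import numpy as np
import cv2  # OpenCV, assumed available for moment computation

def tactile_feature_vector(pressure_image: np.ndarray) -> np.ndarray:
    """Compute the seven Hu invariant moments of an 8x8 tactile pressure image.

    The Hu moments are invariant to translation, scale and rotation, which is
    the property relied on here for object identification.
    """
    img = pressure_image.astype(np.float32)
    m = cv2.moments(img)              # raw, central and normalised moments
    hu = cv2.HuMoments(m).flatten()   # seven invariant moments
    # Log-scale the moments so they fall in a range suitable for an ANN input.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

# Example: a synthetic 8x8 "square" pressure pattern
frame = np.zeros((8, 8))
frame[2:6, 2:6] = 1.0
print(tactile_feature_vector(frame))
```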

SYSTEM MODULES:
The main module of this system includes the description of the system and gives the user several options for processing the tactile data in different forms, such as automatic or manual processing of the data.
[Fig. 1 block labels: tactile imager mounted on stepper motor, data acquisition card, data input/output card, computer for tactile data processing; processing stages: tactile image acquisition, preprocessing, object identification by ANN]
The feature extraction module is used for the calculation of the moments from the acquired tactile data. The application of moments provides a method of describing the object in terms of its area, position and orientation. These invariant moments are fed to the ANN as input neurodes and are the key data for the classification and identification of the object. The flow chart for image identification is shown in Fig. 2.




ARTIFICIAL NEURAL NETWORK:
The artificial neural network, which is inspired by studies of the biological nervous system, has been used in various applications such as supervised classification. Object identification is a decision-making process that requires the neural network to identify the class and category which best represent the input pattern.


Figure 3. Tactile Data processing

To overcome the difficulty of identifying the object using the moments alone, the use of artificial neural networks for the same purpose was investigated. The back-propagation algorithm was implemented in software to generate a three-layered network. The feature vector was given to the input layer of the network. The number of input neurodes was equal to the number of features (invariant moments). The number of output neurodes was kept equal to the number of objects to be identified.
[Fig. 2 flowchart: START → is the system ready? (if no card is detected, STOP) → acquire the image → pre-processing of the image → user option for image identification → STOP]
Fig 2: Flow chart for image
identification
Here only five objects were considered: square, triangle, rectangle, bar and circle. Several experiments were carried out, and it was observed that for seven input neurodes, three hidden neurodes were required. Increasing the number of hidden neurodes increases the complexity of the network and also increases the computational time of the system, while too few hidden neurodes make the network take longer to train. Hence an optimum number of hidden neurodes has to be decided when using the artificial neural network.
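As an illustration of the network sizing discussed above (seven input neurodes for the invariant moments, three hidden neurodes, five outputs for the object classes), the following sketch uses scikit-learn's MLPClassifier as a stand-in for the paper's own back-propagation software; the training data, solver and learning rate are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier  # stand-in for the back-propagation software

# 7 invariant moments per sample -> 5 object classes
# (square, triangle, rectangle, bar, circle), 3 hidden neurodes as in the text.
net = MLPClassifier(hidden_layer_sizes=(3,),
                    activation="logistic",   # sigmoid units, typical of classic backprop
                    solver="sgd",
                    learning_rate_init=0.1,
                    max_iter=5000,
                    random_state=0)

# Hypothetical training data: rows are moment feature vectors, labels are class ids.
X_train = np.random.rand(50, 7)          # placeholder for the measured moment data file
y_train = np.random.randint(0, 5, 50)    # placeholder object labels

net.fit(X_train, y_train)
print(net.predict(X_train[:3]))
```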

EXPERIMENTATION:
A stepper motor is used for exerting a specific amount of pressure on the object, which is required for handling delicate objects, obtaining a proper image of the object and calculating the height of the object. First, the stepper motor is arranged so that its angular motion is converted into linear motion; it was designed to travel a linear distance of 0.03 mm per step. This part of the calibration is also used for calculating the height of the object. To begin with, the motion of the stepper motor was calibrated in terms of pressure applied and linear displacement. Pressure calibration of the stepper motor was done with the help of a capacitive pressure sensor in which foam is used as the dielectric between the two parallel plates of the capacitor. First, a graph of pressure versus the change in capacitance was plotted; the response is shown in Graph 1. The same capacitive tactel was then used to find the number of steps versus the change in capacitance, as shown in Graph 2. With the help of these two graphs a third graph of pressure versus number of stepper-motor steps was plotted, given in Graph 3. This process was repeated for many capacitive tactels, and the pressure calibration was completed in this way. Finally it was found that the stepper motor could exert a pressure of about 19 N/m² per step.

After the calibration of the stepper motor, the sensor (elastomer) was calibrated in terms of its pressure and time response. Thus the system, with the stepper motor used for applying a specific amount of pressure along with the data acquisition and data input/output cards, could be used for acquiring tactile data. Programs were developed to scan and digitize individual tactels and store the tactile information in the form of a two-dimensional array. The response of the change in the ADC output voltage with the number of steps of the stepper motor is shown in Graph 4.
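A small sketch of the calibration arithmetic described above, assuming the stated figures of 0.03 mm of linear travel per step and roughly 19 N/m² of pressure per step; the helper functions and the example numbers are illustrative, not part of the original system software.

```python
MM_PER_STEP = 0.03        # linear travel per stepper-motor step (from the text)
PRESSURE_PER_STEP = 19.0  # N/m^2 exerted per step (from the calibration result)

def steps_to_height_mm(steps_until_contact: int, probe_start_mm: float) -> float:
    """Estimate object height from the number of steps taken before contact."""
    return probe_start_mm - steps_until_contact * MM_PER_STEP

def steps_to_pressure(steps_after_contact: int) -> float:
    """Estimate the pressure applied after contact, using the calibration figure."""
    return steps_after_contact * PRESSURE_PER_STEP

# Example: probe starts 15 mm above the base plate, touches after 200 steps,
# then advances 10 more steps to press on the object.
print(steps_to_height_mm(200, 15.0))   # -> 9.0 mm object height
print(steps_to_pressure(10))           # -> 190.0 N/m^2
```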

















EXPERIMENTS ON OBJECT IDENTIFICATION WITH THE HELP OF ANN

After the calibration of the sensor and the system, experiments were performed on objects such as squares, triangles, circles and rectangles. The network was trained by generating a learning data file containing the moments for changes in the position, size and orientation of the objects. The trained network was linked with the moments module, and the derived moments were input to the neural network. The target pattern was set at less than 0.5 for outputs not belonging to the class and greater than 0.5 for the correct class.

(A) Change in position
For this experiment the position of the object was varied over the array of the sensor, and the moments were
calculated for the same object. The network was then trained with the help of three sets of data obtained by varying the position. It was observed that the network successfully identified the object at positions not included in its training set.

(B) Change in size
The size of the square was varied as 2x2, 4x4 and 6x6. Invariant moments were calculated from these objects and the network was trained. The network was then given a set of data that was not included in the training, and it was observed that the network successfully identified the object.

(C) Change in orientation
The network was trained with a set of data obtained by changing the orientation, and the network was tested on data that was not included in the training set. Finally it was observed that the network can identify objects independent of their size, orientation and position. These experiments were carried out for the identification of square, triangular, circular, rectangular and bar-type objects once the training of the artificial neural network had been completed with the help of the moment data file.

In this way different types of objects were tested, and it was concluded that the system is intelligent enough to give good results in identifying different types of objects and in describing the object under consideration in many ways. For further analysis of the object, the tactile data is automatically passed to the different modules of the system.

CONCLUSION:
This system is well suited for the classification of objects with the help of an ANN and moments. Classification and identification of the object is independent of the size, position and orientation of the object under consideration. The system is capable of describing the object under test in different forms, such as identification, height, edges, contact area, pressure distribution on the object, 3-D representation, orientation and position. The system is well supported by software for processing the tactile data in different forms and for taking decisions after the identification of the object. Finally, it is observed that this system is intelligent enough to identify different types of objects such as squares, rectangles, circles, bars and triangles independent of their position, orientation and scale, and to give many of their physical properties. This system has various applications in robotics, medicine, tele-operation and computer vision.

BIBLIOGRAPHY:
[1] Prasant Kumar Patra, "Neural network for invariant image classification", Journal of the IETE, vol. 42, nos. 4-5, pp. 282-290, July-October 1994.
[2] R.P.L. Rectier, "Tactile imaging", Sensors and Actuators A (1992), pp. 83-89.
[3] G.J. Awcock & R. Thomas, Applied Image Processing, McGraw-Hill, pp. 162-166.
[4] R. Eberhart & R. Dobbins (Eds.), Neural Network PC Tools: A Practical Guide.
[5] Philip D. Wasserman, ANZA Research Inc., Neural Computing.

SRI KALAHASTHEESWARA INSTITUTE OF
TECHNOLOGY


TITLE: TESTING OF THE BEAM POINTING ACCURACY
OF THE MST RADAR PHASED ARRAY ANTENNA USING
PHASE SWITCH INTERFEROMETER TECHNIQUE

PRESENTED BY: G.C. MADHU & P. JAFRULLA KHAN

DEPARTMENT: ELECTRONICS AND COMMUNICATIONS.

EMAIL: p.jafrullakhan@gmail.com
msnaidu417@gmail.com























ANTENNA POINTING ACCURACY TESTING BY
PHASE SWITCHED INTERFEROMETER TECHNIQUE

ABSTRACT


NARL at Gadanki operates a 53 MHz VHF radar for atmospheric studies. The radar is equipped with a 17,000-square-meter phased array antenna, a sensitive receiver and a high-speed data acquisition system.

In our project, an automated method is developed for measuring the pattern of this antenna. Signals from the radio source M87 (in the Virgo constellation) are received to obtain the antenna pattern by the phase-switched interferometer technique. A hardware module, a synchronous bi-phase switch, has been developed; a radar data acquisition experiment has been designed; and software has been written to process the data and produce the antenna pattern.

The measured antenna pattern is compared with the theoretical antenna pattern. The antenna pointing accuracy is verified to be within 0.5 degree using the transit time of the radio source through the antenna's zenith beam.






INTRODUCTION:




Antenna radiation pattern measurement: the signal received by the receiver over a continuous period of time represents the characteristics of the antenna and receiver system. The Virgo radio source, which passes over the location of the MST radar system, emits strong radio-frequency radiation; these RF signals are received by the phased antenna array. The received signal is processed in order to determine the characteristics of the antenna array.


DESCRIPTION OF THE MST RADAR





The Indian MST radar has been established in the south-eastern part of India at Gadanki (13.5°N, 79.2°E). The MST radar is capable of probing different regions of the atmosphere, namely the mesosphere, stratosphere and troposphere (MST). It provides estimates of atmospheric wind velocity with very high temporal and spatial resolution on a continuous basis and is used in the study of various dynamic processes in the atmosphere. The Indian MST radar is a high-power, highly sensitive, coherent, monostatic VHF pulse Doppler phased array radar operating at 53 MHz.
The MST radar comprises a two-dimensional filled array used for both transmission and reception; the phased array consists of two orthogonal sets, one for each polarization. It consists of 1024 high-gain, three-element Yagi-Uda antennas arranged as a 32 x 32 array. The inter-element spacing is 0.7λ, and the total area occupied by the antenna array is 17,000 sq. m. The array is illuminated in either linear (EW, NS) polarization using 32 transmitters, each feeding a linear sub-array of 32 antennas. The beam can be tilted up to a maximum of 24° from zenith.



To form the zenith beam, there should be no phase difference between the antennas. In order to tilt the beam in a given direction, the phase difference between consecutive sub-arrays needs to be kept equal. Any disturbance in the differential phase between the arrays will lead to an error in the beam pointing direction. By knowing the coordinates of a point source, the array pointing accuracy can be characterized. If any errors are observed, they can be corrected by phase corrections.

The front-end unit of the receiver consists of an LNA and a mixer-preamplifier for each of the 32 channels. The output of each LNA is mixed with a phase-shifted 48 MHz LO signal and amplified in the mixer-preamplifier. The IF outputs from the 32 channels are combined and brought to the control room by coaxial cables.

PHASE SWITCH INTERFEROMETER GENERAL
DESCRIPTION

The system makes use of two spaced antennas to produce an interference pattern, with a switch arranged so that it is possible to displace the interference pattern such that the new maxima correspond to the minima of the original pattern. The envelope of the interference pattern is determined by the reception pattern A(θ) of the individual antennas.
The power reception when the antennas are in phase is given by

A(θ) [1 + cos{(2πd/λ) sinθ}]

When the antennas are connected out of phase, the receptivity is given by

A(θ) [1 - cos{(2πd/λ) sinθ}]

where θ is the angle between the source and the axial plane, d is the spacing between the antennas and λ is the wavelength.
If the analysis is restricted to angles near the axial plane of the interferometer, these equations may be simplified to

A(θ) [1 + cos(2πdθ/λ)].

Suppose now that a point source of radiation, which produces a power flux P at the antenna, is situated in a direction θ; the powers intercepted by the antenna system in the two conditions are given by

P·A(θ) [1 + cos(2πdθ/λ)]  and  P·A(θ) [1 - cos(2πdθ/λ)].

If the system is switched rapidly between the two conditions, the antenna power will contain an alternating component whose magnitude is the difference between the powers intercepted in the two switch conditions:

2P·A(θ) cos(2πdθ/λ).

If the antenna is connected to a receiver having a square-law detector, the output voltage will contain a steady component, which includes the noise power generated by the receiver itself, together with a square-wave alternating component whose amplitude is

2αGP·A(θ) cos(2πdθ/λ),

where G is the amplification factor of the receiver and α is the constant of the detector. By the addition of an amplifier which responds to the alternating component from the detector, but not to the steady component, it is possible to obtain an output voltage which, with a reference signal derived from the antenna phase switch, can be used to produce a direct current whose magnitude and sign depend on the intensity and direction of the point source.
Now consider the effect of a slow variation of θ, which might be caused by the rotation of the Earth. The relative powers from the antenna system in the two positions of the phase-changing switch will alter, and the amplitude of the square-wave component, and hence of the direct-current output from the phase-sensitive rectifier, will undergo a periodic variation in time proportional to

2αGP·A(θ) cos(2πdθ/λ).
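A short numerical sketch of the record produced by such a phase-switched system: the output 2αGPA(θ)cos(2πdθ/λ) traced as the source drifts in θ. The antenna envelope A(θ), the spacing and the amplitude constant are illustrative assumptions, not measured MST radar values.

```python
import numpy as np

wavelength = 3e8 / 53e6          # lambda for a 53 MHz radar (~5.66 m)
d = 16 * 0.7 * wavelength        # assumed spacing between the two half-array centres
alpha_G_P = 1.0                  # detector constant x gain x source power, arbitrary units

def envelope(theta):
    """Illustrative individual-antenna reception pattern A(theta)."""
    return np.cos(theta) ** 2

theta = np.linspace(-0.05, 0.05, 1001)   # radians, near the axial plane
fringe = 2 * alpha_G_P * envelope(theta) * np.cos(2 * np.pi * d * theta / wavelength)

# The central maximum of the fringe record marks the axial plane of the interferometer.
print(theta[np.argmax(fringe)])
```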

[Block diagram of the phase-switched receiver: phase-reversing switch, amplifier, detector, switch-frequency amplifier, phase-sensitive rectifier, recorder]



MST RADAR PHASED ARRAYS FOR RADIOMETER



The 32 x 32 antenna array is split into two antenna arrays of size 32 x 16. The combined IF signals from the 16 east half antenna lines and the 16 west half antenna lines are brought separately to the control room by coaxial cables.

A balanced, transformer-coupled phase switch is incorporated between the two antenna arrays. When the phase difference between the two channels is 0°, the combined output gives the SUM pattern of the array.

For MST radar radiometer measurements, the radiation pattern of each array of 32 x 16 elements is given by

where d is the inter-element spacing = 0.7λ, θ stands for the zenith angle, and a(m,n) = 1 for all m and n, which is a close approximation for the MST radar.

When the two arrays are added in phase, the radiation pattern is given by

where ψ = 1.4π sinθ and d1 = 16d = 11.2λ.
When the phases differ by 180°, it gives the DIFFERENCE pattern, given by

For the measurement of Virgo, the transmitters are switched off and the radar is in passive mode to receive the signal from the radio source.
The detected power pattern is taken as the difference of the SUM and DIFFERENCE patterns.

A plot of the sum-difference pattern as a function of θ is shown below.

From the plot, the amplitude of the power spectrum can be deduced for different values of θ.
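A rough sketch of how the SUM minus DIFFERENCE record could be computed for two 32 x 16 half arrays under the uniform-excitation approximation a(m,n) = 1, with d = 0.7λ and a half-array separation of 16d = 11.2λ; the array-factor expressions are standard textbook forms, not copied from the paper.

```python
import numpy as np

d = 0.7          # inter-element spacing in wavelengths
N = 16           # elements across the split dimension of each half array
d1 = N * d       # separation of the two half-array phase centres = 11.2 wavelengths

def half_array_factor(theta):
    """Uniform linear array factor of one half array along the split dimension."""
    s = np.sin(theta)
    # sin(N*pi*d*s) / (N*sin(pi*d*s)), written with np.sinc to stay finite at s = 0
    return np.abs(np.sinc(N * d * s) / np.sinc(d * s))

theta = np.radians(np.linspace(-5, 5, 2001))
E = half_array_factor(theta)

sum_pat  = (E * 2 * np.abs(np.cos(np.pi * d1 * np.sin(theta)))) ** 2   # arrays in phase
diff_pat = (E * 2 * np.abs(np.sin(np.pi * d1 * np.sin(theta)))) ** 2   # arrays 180 deg out
switched = sum_pat - diff_pat    # quantity recorded by the phase-switched radiometer
print(np.degrees(theta[np.argmax(switched)]))   # peak at the zenith (0 degrees)
```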
DESIGN OF THE PHASE SWITCH:










































The phase switch is incorporated between the east half-antenna array and the west half-antenna array. The signal received by the east half-antenna array is not modified. The signal received by the west half-antenna array undergoes a phase change in a synchronous manner controlled by the clock: for 1 ms the phase switch introduces a 0° phase shift, and for the next 1 ms it shifts the signal by 180°. When the phase difference between the two channels is 0° the combined output gives the SUM pattern of the array, and when the phases differ by 180° it gives the DIFFERENCE pattern of the array.
The phase switch is a balanced transformer-type switch. The transformers are made using inductors wound over two-hole ferrite beads, and PIN diodes are used as the switching elements. The PIN diodes are arranged in a bridge configuration to control the flow of RF signals in the two transformer windings. The output transformer's input-side windings are wound to introduce an in-phase signal in the first winding and an out-of-phase signal in the second winding with respect to the input. The control clock is fed to bias the PIN diodes, which switch the signal flow out from the first winding when the pulse is high and from the second winding when the pulse is low. Thus a phase shift of 0° is achieved in the first case and 180° in the second case.
CONCLUSION:












By observing the received signal plot, the RF signal received by the antenna array must be maximum when the Virgo source passes through the zenith beam position of the antenna array. If it is not, we can conclude that corrections are needed in the differential phases given to the linear sub-array elements.

REFERENCES:


1. Indian MST radar system description, by P.B. Rao, A.R. Jain, S.H. Damle.
2. A new radio interferometer and its application to the observation of weak radio stars, by M. Ryle.
3. Calibration of ST radar using radio source Virgo, by T. Chakravarthy, S.H. Damle, J.V. Chande, K.P. Ray, Anil Kulakarni, SAMEER, Bombay.
4. Measurements in the near field of the antenna array of the Indian MST radar, by P. Srinivasulu, A.R. Jain.
5. Communication Systems, by Simon Haykin.
6. Mesosphere-Stratosphere-Troposphere (MST) Radar: A powerful tool for atmospheric research, scattering mechanisms and MST radar technique, by P.B. Rao.










TRACKING AND POSITIONING OF MOBILE PHONES
IN TELECOMMUNICATION NETWORK USING THE
CONCEPT
TIME OF ARRIVAL




SUBMITTED BY


P.NEETH PRAPUL S.GOUTHAM KARTHIK
III/IV B.TECH/E.C.E III/IVB.TECH/E.C.E

NARAYANA ENGINEERING COLLEGE

MUTHUKUR ROAD,
NELLORE
Mail ids:
Prapul2u@yahoo.co.in
Goutham_sulaga@yahoo.co.in










1. ABSTRACT

Mobile positioning technology has become an important area of research, for emergency as well as commercial services. Mobile positioning in cellular networks will provide several services such as locating stolen mobiles, emergency calls, different billing tariffs depending on where the call is originated, and methods to predict the user's movement inside a region. The evolution to location-dependent services and applications in wireless systems continues to require the development of more accurate and reliable mobile positioning technologies. The major challenge to accurate location estimation is creating techniques that yield acceptable performance when the direct path from the transmitter to the receiver is intermittently blocked. This is the Non-Line-Of-Sight (NLOS) problem, and it is known to be a major source of error, since it systematically causes the mobile to appear farther away from the base station (BS) than it actually is, thereby increasing the positioning error.
In this paper, we present a simple method for mobile telephone tracking and positioning with high accuracy. Our paper locates a mobile telephone by drawing a plurality of circles, with radii equal to the distances between the mobile telephone and several base stations (found using the Time Of Arrival (TOA)) and with the base stations at their centers, and by using location tracking curves connecting the intersection points between each circle pair instead of the common chords defined by the circles. We use location tracking curves connecting the intersection points of two circles drawn by the ordinary TOA method, instead of the common chord as in TDOA.


NEED FOR MOBILE TRACKING
Recent demands from new applications require positioning capabilities in mobile telephones and other devices. The ability to obtain the geo-location of the Mobile Telephone (MT) in the cellular system allows the network operators to offer new services to mobile users. The most immediate motivation for the cellular system to provide the MT position is enhanced accident and emergency services. The positioning of the mobile user could provide services such as:
- Emergency service for subscriber safety.
- Location sensitive billing.
- Cellular fraud detection.
- Intelligent transport system services.
- Efficient and effective network performance and management.




EXISTING TECHNOLOGIES & CONSTRAINTS
NETWORK ASSISTED GLOBAL POSITIONING SYSTEM (GPS)
A mobile telephone can be located either by the mobile telephone itself or through the mobile telecommunication network. To locate itself, the mobile telephone is provided with a GPS receiver that calculates its location in latitude and longitude coordinates based on the location information received from satellites through the GPS receiver. The drawbacks are:
- It increases the price and the size of the mobile telephone.
- The load on the mobile telephone is increased.
- Power consumption is high.

NETWORK BASED MOBILE
POSITIONING
In the case that the mobile
telephone network locates the mobile
telephone, at least three base stations
(BSs) receive a signal from the mobile
telephone; calculate the distances
between the BSs and the mobile
telephone using the arrival time of the
signals at the BSs, then determine the
location of the mobile telephone using
trigonometry. This location service is
provided generally by a location data
processor included in a base station
controller (BSC). Upon a request for
service about the location of a specific
mobile subscriber, the BSC selects the
three adjacent BSs surrounding the
mobile telephone for use in the location
service, and these selected BSs are ready
for communication with the mobile
telephone.
TIME OF ARRIVAL (TOA)
The TOA method calculates the distance between a mobile telephone and a BS based on the TOA at the BS of a signal transmitted from the mobile telephone. It is assumed that the mobile telephone is located at the intersection point of three circles whose radii are the distances between the BSs and the mobile telephone. The distance is calculated by the following equation:

R_i = C·τ_i = sqrt((x_i - X)² + (y_i - Y)²)

where
C — propagation speed of the electromagnetic wave,
τ_i — propagation time of the signal from the mobile telephone to the i-th base station,
(x_i, y_i) — location of the i-th base station,
(X, Y) — mobile position.

Figure 1 illustrates a typical TOA method for locating a mobile telephone. In this method, the three circles (or hyperbolas) do not meet at one point but overlap each other over an area.
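A minimal sketch of the basic TOA computation: arrival times are converted into range circles R_i = C·τ_i and the position is found by a linearised least-squares fit, since, as noted above, the three circles generally do not meet at a single point. The base-station coordinates and arrival times are made-up illustration values.

```python
import numpy as np

C = 3e8  # propagation speed of the electromagnetic wave (m/s)

def toa_position(bs_xy, toas):
    """Least-squares TOA fix from R_i = C*tau_i = sqrt((x_i-X)^2 + (y_i-Y)^2)."""
    bs_xy = np.asarray(bs_xy, dtype=float)
    r = C * np.asarray(toas, dtype=float)          # range circle radii
    x0, y0 = bs_xy[0]
    # Subtracting the first circle equation from the others linearises the system.
    A = 2 * (bs_xy[1:] - bs_xy[0])
    b = (r[0] ** 2 - r[1:] ** 2
         + np.sum(bs_xy[1:] ** 2, axis=1) - x0 ** 2 - y0 ** 2)
    xy, *_ = np.linalg.lstsq(A, b, rcond=None)
    return xy

# Illustrative base stations (metres) and arrival times for a mobile at ~(400, 300).
bs = [(0, 0), (1000, 0), (0, 1000)]
true = np.array([400.0, 300.0])
taus = [np.linalg.norm(true - p) / C for p in bs]
print(toa_position(bs, taus))    # ~[400. 300.]
```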
TIME DIFFERENCE OF
ARRIVAL (TDOA)
The TDOA method assumes that
the TDOAs of a signal transmitted from
the mobile telephone at the three BSs
define a set of points on a hyperbola, and
the mobile telephone is located at the
intersection point of at least three
hyperbolas. The implementation requires
accurate synchronization of each BS.
The signal of the mobile telephone often
travels a longer path to a BS due to the
multi-path fading characteristic and the
Non- Line Of Sight (NLOS) effects.
When at least three circles C1,
C2, and C3 are overlapped over an area
without meeting at one point, the mobile
telephone M1 is considered to exist at
the intersection point of three common
chords L1, L2, and L3. The above
method using the common chord is not
very accurate in locating the mobile
telephone except when the mobile telephone is at approximately equal distances from the selected BSs and in a similar propagation environment with respect to each BS.


Figure 2, illustrates the TDOA method of
locating a mobile telephone.
LOCATION TRACKING
CURVE METHOD







Figure 3 illustrates the configuration of a
typical mobile telecommunication network.
In a cellular mobile telecommunication network, the whole service area is divided into several coverage areas, each served by its own base station (BS). Each BS coverage area is called a "cell." An MSC controls these BSs so that a subscriber can continue his call without interruption while moving between different cells. The MSC can reduce the time required for calling a subscriber by locating the cell of the subscriber. In case of an emergency, such as a fire or a patient needing first aid, the mobile subscriber should be accurately located. Tracking the location of a mobile subscriber within the boundary of a cell in a mobile telecommunication network is known as the "location service."
The method we propose tracks the location of a mobile telephone using curves connecting the points where circles intersect one another, the circles' radii being the distances between the BSs and the mobile telephone. The steps involved are shown in the flowchart below.











Figure 4 depicts a flowchart showing the steps involved in locating a mobile telephone.
The several location tracking curves are arcs of circles whose centers lie near the one of the first and second base stations that has the smaller variances. The circles formed by the location tracking curves have their centers on the line connecting the coordinates of the first and the second base stations. The larger of the variances of the first and the second base stations is compared with the variances associated with the several location tracking curves, and one of the location tracking curves is selected according to the comparison result. The location coordinates of the mobile telephone are determined by averaging the coordinates of the intersection points obtained in the final step.
DETERMINATION OF THE LOCATION TRACKING CURVE
The NLOS environment has been compared with the LOS environment, and it is seen that the variances of the TOAs of a signal transmitted from a mobile telephone are higher in the NLOS environment. Knowing this, appropriate curves can be selected by comparing the variances of the TOAs of an input signal. That is, with respect to the common chord L1, the mobile telephone is nearer to the BS with the larger variances of the two BSs in Figure 5. Therefore, the BS with the smaller variances should be selected for drawing the reference circles based on the variances.
For example, since the first
mobile telephone M1 is near the first BS
T1, the variances of the TOAs of a
signal transmitted from the mobile
telephone M1 at the first BS T1 will be
higher than those of the signal at the
second BS T2. Hence, the reference
circle C1 is obtained around the second
BS T2 with smaller variances.


Figure 6, illustrates the determination
of location tracking curve.
From Figure 6, assuming that the
first and the second BSs T1 and T2
selected for use in the location tracking
are present at positions (x1, y1) and (x2,
y2), respectively, in two-dimensional coordinates, the location
data processor draws the two circles C1
and C2 with the coordinates (x1, y1) and
(x2, y2) of the two BSs T1 and T2 at
their centers. The curve connects the two
points P1 and P2 at which the two circles
C1 and C2 intersect each other. The
coordinates of the intersection points P1
and P2 are (xA, yA) and (xB, yB),
respectively.
Since the mobile telephone is
near the first BS T1 with respect to the
common chord L1, the variances of the
TOAs of a signal transmitted from the
mobile telephone at the first BS T1 will
be larger than those of the signal at the
second BS. Therefore, reference circles
TR1 to TR4 are drawn with respect to
the second BS T2 with smaller
variances, as shown in Figure 6.
The coordinates of the reference
circle can be obtained (using minimum
variance) which has its center on the line
ST passing through (x1, y1) and (x2, y2)
and passes through (xA, yA) and (xB,
yB). Selecting the center of the reference
circle is significant as the mobile
telephone is located on the reference
circle. The location data processor
selects the desired curves (reference
circles) with respect to the several BSs
selected for location tracking. In Figure
6, as the real location of the mobile
telephone deviates farther from the circle
C2 with the second BS T2 at its center,
the center of a reference circle is farther
from the location of the second BS T2.
That is, the center of a desired reference
circle is farther from the second BS T2
in the case of a third mobile telephone
M3 (curve C3) than in the case of a
fourth mobile telephone M4.
REFERENCE CIRCLE
SELECTION
The variances of the TOAs of a
signal which arrives at the two BSs T1
and T2 from different paths are used to
find the curve on which the actual
location of the mobile telephone is
determined.
If the TOAs of the signal at the
first BS T1 from N propagation paths are
t1, t2, . . . , tN, the first BS T1 calculates
the variances of t1, t2, . . . , tN. The
location data processor compares the
variances calculated by the first BS T1
with the variances calculated by the
second BS T2 and considers that the
mobile telephone is near to that BS with
the larger variances (the first BS T1 in Figure 6).
Hence, the reference circle has its
center near to the BS with the smaller
variances (the second BS T2 in Figure 6)
on the line ST. With the larger variances,
the center of a reference circle gets
farther to the right from the center of the
second BS T2. In order to select the
desired curve, the location data
processor initializes the reference circles
with predetermined radii and the
variances of TOAs of a signal
transmitted from the mobile telephone
located on the reference circles, and
compare the preset variances with real
variance measurements.
The location data processor sets several reference circles based on the distances between the mobile telephone and the BS with the smaller variances (the second BS T2). In Figure 6, as an example, the first to the fourth reference circles TR1 to TR4 have radii two, three, four and five times, respectively, that of the circle around BS T2, and the centers of the reference circles TR1 to TR4 are located along the line ST. The variances of the second BS T2, being smaller than those of the first BS T1, are used as the criterion for selecting an optimal reference circle. Therefore, the location data processor predetermines the reference variances for the first to the fourth reference circles TR1 to TR4 to be compared with respect to the second BS T2. It is assumed in the following description that σ1, σ2 and σ3 are reference variances and σ1 < σ2 < σ3.
The location data processor compares the variances calculated by the two BSs T1 and T2 and selects the base station with the smaller variances as the reference point for drawing the reference circle. If the selected variances (those of the second BS T2) are σ, the location data processor compares σ with the preset reference variances σ1, σ2 and σ3:

(1) If σ <= σ1, the curve of the first reference circle TR1 is selected.
(2) If σ1 < σ <= σ2, the curve of the second reference circle TR2 is selected.
(3) If σ2 < σ <= σ3, the curve of the third reference circle TR3 is selected.
(4) If σ3 < σ, the curve of the fourth reference circle TR4 is selected.
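A small sketch of the selection rule just listed, with σ1 < σ2 < σ3 as the preset reference variances; the function name and the numeric thresholds are illustrative.

```python
def select_reference_circle(sigma, sigma1, sigma2, sigma3):
    """Map the measured TOA variance of the lower-variance BS to a reference circle.

    Implements the four-way rule from the text: TR1..TR4 as sigma grows.
    """
    assert sigma1 < sigma2 < sigma3, "reference variances must be increasing"
    if sigma <= sigma1:
        return "TR1"
    if sigma <= sigma2:
        return "TR2"
    if sigma <= sigma3:
        return "TR3"
    return "TR4"

# Illustrative thresholds (arbitrary units) and a measured variance.
print(select_reference_circle(0.8, 0.5, 1.0, 2.0))   # -> "TR2"
```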

As we have seen, the location data processor selects the optimal curve (reference circle) for one pair of BSs among the several BSs, selects another optimal circle for another BS pair, and so on. When curves have been selected for all selected BS pairs, the location data processor obtains the intersection points among the selected curves, as shown in Figure 7. However, as the selected curves do not intersect at one point due to multi-path fading or NLOS effects, the midpoint of these intersection points is determined as the location of the mobile telephone.









Figure 7 illustrates the positioning of a mobile telephone by the proposed method.
After the location of the mobile telephone, that is, the intersection points among the curves, is obtained, the location data processor represents the result in latitude and longitude coordinates and transmits the position coordinates to the network (BS/BSC/MSC) and to the mobile telephone.
5. CONCLUSION
Our proposal is advantageous in
that the location of a mobile telephone
can be accurately tracked even in the
multi-path fading and the NLOS
environment, by using more accurate
tracking curves connecting the
intersection points among circles with
the radii being the distances between
corresponding BSs and the mobile
telephone in a cellular mobile
communication system. We have described the accurate positioning of mobile telephones, which can be used for several applications. The important considerations while selecting a location-based technology are location accuracy, implementation cost, reliability and functionality.
6. REFERENCES
J. Caffery and G. Stuber Jr., "Vehicle location and tracking for IVHS in CDMA micro-cells", Proc. IEEE PIMRC, 1994.
G. Morley and W. Grover, "Improved location estimation with pulse-ranging in presence of shadowing and multi-path excess-delay effects", Electronics Letters, vol. 31, no. 18, 1995.






Ultra Wide Band (UWB) Technology and
Applications
BY:

M.V.NARASIMHA
( camnarasimha@gmail.com )
&
D.DINAKAR
( dina_yaa@yahoo.co.in )




SREE VIDYANIKETHAN ENGINEERING COLLEGE
A.RANGAMPET





Abstract

The paper covers Ultra Wide-Band technology. A general description, implementation issues, methods to design and use this technology, and its applications are covered. Special emphasis is placed on applications in communications technologies, with just a brief description of the other possible applications (such as penetrating radars). Modulation and demodulation techniques are described along with the mathematical background.


1 Introduction

This technology has many synonyms in the technical literature, such as baseband, carrier-free or impulse radio; the term "ultra wideband" was not applied until relatively recently. The approach emerged from the need to fully describe the transient behavior of a certain class of microwave networks through their characteristic impulse response. Briefly, instead of characterizing a linear, time-invariant (LTI) system with the conventional means of frequency analysis (i.e., amplitude and phase measurements versus frequency, Fourier transforms, etc.), an LTI system can also be investigated by its response (impulse response h(t)) to an impulsive excitation (ideally the Dirac pulse). In particular, the output y(t) of such a system to any arbitrary input x(t) can be uniquely determined using the basic convolution formula of signal processing:

y(t) = x(t) * h(t) = ∫ x(τ) h(t - τ) dτ

Note that the characterization is now done in the TIME DOMAIN, and no longer in the frequency domain. That is why the technology is also known as the impulse radio technique. The description of the technology will refer to the spectrum just for evaluation purposes, but the significant variables in the analysis are time-domain dependent functions.
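A tiny numerical illustration of this time-domain characterization: once the (sampled) impulse response h is known, the response to any input follows by discrete convolution. The sample sequences are arbitrary.

```python
import numpy as np

h = np.array([1.0, 0.5, 0.25])       # illustrative sampled impulse response h(t)
x = np.array([1.0, 0.0, -1.0, 2.0])  # arbitrary input x(t)

# y(t) = x(t) * h(t): the output is fully determined by h, no frequency sweep needed.
y = np.convolve(x, h)
print(y)   # [ 1.    0.5  -0.75  1.5   0.75  0.5 ]
```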

2 Ultra Wide Band Technology

2.1 Ultra Wide Band Technology Description

Ultra wide band radio communicates with baseband signal pulses of very short duration, usually a few nanoseconds. The "shape of the signal" has a frequency characteristic extending from very low frequencies (a few Hz) to the gigahertz range.

The energy "spread" of the radio signal (dependent on the frequency spectrum distribution and amplitudes) has very low power spectral density values, typically a few microwatts per MHz. Briefly, the result is a signal with a very broad, rather uniform spectrum and low power. The center frequency is typically between 650 MHz and 5 GHz. Normally a range of a few kilometers is obtained with milliwatt (or sub-milliwatt) power levels, even using low-gain antennas.
So, with this technology the signal has a high bandwidth even if no modulation is performed. As we shall see later, time hopping is used to allow the multiple access needed for mobile communication: the users share the same wide band, and coding is used to separate the individual users' information. The general modulation technique is "time modulation". Equipment providers may use specific variations in their particular implementations, but the general technique is the same.

2.2 Time Modulation Technique

The modulation consists of emitting ultra-short monocycles. The systems use pulse position modulation: the interval between two pulses is controlled according to an input information signal (user data) and a user-assigned sequence called the channel code. The sequence is needed to separate the users in the multi-access environment; it is the "code" of the user. The width of the monocycle pulse ranges between 0.20 and 1.50 nanoseconds, and the interval between two pulses is between 25 and 1000 nanoseconds. These short monocycles are obviously ultra-wideband. The demodulator directly converts the received wideband signal into the needed output signal: a front-end cross-correlator coherently converts the electromagnetic pulse train to a baseband signal in one stage. There is no intermediate frequency stage, which greatly reduces complexity.
A single bit of information is generally spread over multiple monocycles, under the assumption that we deal with a wide-sense stationary random process. This is important, since the correlator in the demodulator relies on this assumption to function properly. The receiver coherently sums the proper number of pulses to recover the transmitted information. The monocycle shape is not restricted, only its characteristics are; any signal obeying the rules is a suitable candidate. This allows designers to choose the shape of the monocycle according to their particular preference, at least in theory. In practice, standardization will be needed in the future, since multivendor markets must ensure interoperability.
For example, a "Gaussian" monocycle can be used. The mathematical description is listed
below :

It has the shape similar to the first derivative of the Gaussian function.
The spectrum for this type of monocycle is:



The graphical plot is presented below. Note the center frequency is and in this case is
about 2 GHz.
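The sketch below generates one commonly used Gaussian monocycle (the first derivative of a Gaussian) and estimates the centre frequency of its spectrum; the pulse parameter is chosen so that the centre lands near the 2 GHz quoted above, and the exact analytic form is an assumption rather than the paper's own equation.

```python
import numpy as np

tau = 8e-11                            # pulse shape parameter (assumed), sub-nanosecond monocycle
t = np.linspace(-2e-9, 2e-9, 8001)
dt = t[1] - t[0]

# First derivative of a Gaussian: a widely used monocycle shape.
monocycle = -(t / tau) * np.exp(-(t / tau) ** 2 / 2)

spectrum = np.abs(np.fft.rfft(monocycle)) * dt
freqs = np.fft.rfftfreq(len(monocycle), dt)
print(f"centre frequency ~ {freqs[np.argmax(spectrum)] / 1e9:.2f} GHz")   # roughly 2 GHz
```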



As stated before, the modulation is performed by varying the timing of the transmission of the next monocycle with respect to its nominal unmodulated position. Note that the unmodulated signal is in fact a sequence of pulses equally spaced at the predefined time interval; the modulation reduces or increases the time interval between two successive pulses in the train according to its modulation rules.
The time-modulated signal shape can be easily understood using the picture below.

As noticed in the picture, a logical "0" causes the next pulse to be advanced in time (sent earlier) by a predefined amount (say 10 percent of the time interval between the pulses), while a logical "1" causes the pulse to be delayed with respect to the neutral position by the same amount of time. Note that the neutral (reference) pulse position does not correspond to either of the two levels, since both logical 0 and 1 are displaced with respect to the reference position. This description is taken from one particular implementation.
For reasons of clarity and simplicity, the above description does not include the effect of introducing the additional channel code (the next part takes it into consideration as well).
The rigorous mathematical description, which assumes a multi-access environment, is presented next:

In the expression, the transmitted monocycle waveform appears for each pulse, and the superscript k denotes user k in the multi-user environment. The signal emitted by user k thus consists of monocycles shifted to different times, the j-th monocycle nominally beginning at time j·Tf.
The component structure and meaning is:

*** Uniform pulse train: the train consists of monocycle pulses spaced Tf seconds apart in time. The pulse repetition period must be at least a hundred times the monocycle width. If the multiple-access signals were composed only of uniformly spaced pulses, then collisions with the pulse trains coming from the other users simultaneously using the system could irreversibly corrupt the message. That is also why the channel code is needed (the channel code is also used to provide the privacy of the transmission).
*** Pseudorandom noise code: this code is needed to protect against collisions in multiple access. Each user k is assigned a distinct channel code, in fact a pattern that provides its own contribution to shifting the monocycles. These hopping codes are periodic: ideally a randomly chosen pattern would be used, but the requirement for periodicity makes the codes pseudorandom with period Np. Only a receiver operating with the same code sequence can decode the transmission.
Thus, this code provides an additional time shift to each pulse in the pulse train, with the j-th monocycle additionally shifted by a code-determined multiple of Tc seconds. Hence the added time shifts caused by the code are discrete times between 0 and Nh·Tc seconds; normally the code value is an integer in the range 0 to a predefined value Nh. Also, the greatest shift generated by the code (Nh·Tc seconds) is required to be less than the basic pulse train period Tf. One effect of the hopping code is to reduce the power spectral density from the line spectral density (lines 1/Tf apart) of the uniformly spaced pulse train down to a spectral density with finer line spacing, 1/Tp apart (Tp = Np·Tf, where Np is the period of the time-hopping sequence). That is because the spectrum is not continuous (even if it might look continuous): practically the signal is periodic, the period is large, and hence the frequency spacing is very small. We assume the data to be quasi-stationary (its variation rate very small compared with the other rates).
A slightly different way of viewing the same concepts is to consider a frame to be the interval between two basic pulses (Tf). In this frame (time period) multiple users share the medium. The frame is also logically split into compartments; the number of compartments is equal to the maximum possible shift due to the time-hopping code, so there are Nh compartments in a frame. A pulse for a particular transmitter, at a particular time, must lie in a particular compartment within the frame, and the compartment position within the frames varies in time according to the time-hopping sequence of values.
*** Information data to be modulated: the data sequence of transmitter k is a binary (logical 0 or 1) symbol stream. The modulation system uses Ns monocycles transmitted for each symbol (oversampling), considered as a wide-sense stationary random process from the statistical point of view. In this modulation method, when the data symbol is 1 a small time shift is added to the monocycle; when the data is 0, some implementations apply a backwards shift of the same size while others leave the monocycle unshifted. This is an implementation decision, and no restriction is imposed. Other forms of data modulation can be employed to benefit the performance of the synchronization loops, interference rejection, implementation complexity, etc. The data modulation further smoothens the power spectral density of the pseudorandom time-hopping modulation.
The picture below describes the transmitter. Note that the transmitter does not contain a power amplifier; as will be described later, the required power is easily obtained without power amplifiers. A very important issue remains the output filters: some implementations use the antenna itself as a filter, others use standalone filters, and the antenna design requirements are not very strict. Note that the absence of carrier modulation greatly simplifies the scheme.

Figure: The Block Diagram of the Transmitter


The transmitter is simply a circuit that controls the timing of monocycle transmission, based on the above rules. The resulting signal is fed directly into the antenna.
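A compact sketch of the time-hopping pulse-position timing described above: pulses nominally every Tf, an additional pseudorandom hop of up to Nh·Tc per frame, and a data-dependent shift δ applied to the Ns monocycles of each bit. All numeric values, and the convention that a 0 bit is left unshifted, are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

Tf = 100e-9        # frame (pulse repetition) period
Tc = 1e-9          # hop quantum
Nh = 50            # hop code values range 0..Nh  (Nh*Tc < Tf)
Ns = 4             # monocycles transmitted per data bit
delta = 0.5e-9     # data-dependent pulse-position shift

def pulse_times(bits, hop_code):
    """Return the transmission instant of every monocycle for a bit sequence."""
    times = []
    for j in range(len(bits) * Ns):                # j-th monocycle
        bit = bits[j // Ns]                        # Ns monocycles carry one bit
        # nominal time j*Tf, plus the periodic hop shift, plus delta when the bit is 1
        t = j * Tf + hop_code[j % len(hop_code)] * Tc + bit * delta
        times.append(t)
    return np.array(times)

bits = [1, 0, 1]
hop_code = rng.integers(0, Nh + 1, size=16)        # periodic pseudorandom hop sequence
print(pulse_times(bits, hop_code)[:6])
```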

2.3 Demodulation Techniques

So, having explained all the above, we can proceed to describe the signal from which the receiver must recover the correct information. Considering Nu users in the multiple-access channel, the composite signal received by the antenna of each receiver is the superposition of the transmitted signals from each user plus a noise term n(t), which introduces the effect of additive white Gaussian noise (AWGN) from the shared communication channel. The receiver demodulation is based on the correlation receiver, which is normally used for coherent detection of the signal. This means that a local signal must be generated and correlated with the received signal. A general correlator-based decoder for a particular user (see also the picture) is simply a multiplier circuit (mixer) followed by an integrator circuit. The output of the integrator is fed to a decision device (which may contain another integrator) that monitors the output of the integrator and decides on the received bit.
Note: the PN code is assumed to be already known at the receiving entity. A procedure for prior PN code distribution must exist before the communication can begin; normally, transceivers already have the codes distributed by various means (SIM cards, etc).
In the following we use a separate notation for the received time-dependent waveform. Why is there a difference between the received waveform and the transmitted monocycles? Ideally there should be no difference in a perfect transmission medium, but that is not the practical case. So, in order to model the modifications of the transmitted monocycle, we introduce the received waveform, although its basic shape and ratios must be preserved. In practical implementations it is possible to estimate this waveform if training sequences are sent initially by the transmitters.
Logically, the receiver must decide whether the transmitted bit is 0 or 1. Using mathematical notation, this can be reformulated as having two hypotheses, H0 and H1, and we must decide which hypothesis is valid. The available information is the received signal r(t), the PN code and the estimated received waveform. Synchronization is assumed to have already been performed. Practically, the received signal is investigated over one bit-duration interval; note that the bit duration is Ts = Ns·Tf.
The pulse correlator correlates (multiplies and integrates) the received signal r(t) with time-shifted versions of the template signal v(t) over a period Tf. v(t) is a locally generated signal (the difference between a received monocycle and a time-shifted version of the same monocycle).

Inside the correlator module, the multiplied signal is applied to an integrator sub-module that performs the integration over a duration Tf of the signal obtained from the multiplier sub-unit.
The output of the correlator (integrator) is then supplied to the test module (also known as the pulse-train integrator in some schemes), which sums the correlator outputs over the duration of a bit (Ts) and compares the final result with 0.

If the resulting value is greater than 0, we can assume the considered hypothesis is true. Briefly, the bit is 0 if and only if the expression evaluated above is greater than 0, which corresponds to validating hypothesis H0.


Figure: The DSP logical diagram of the receiver.
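A schematic sketch of the correlation-receiver logic described above: each frame of the received signal is correlated with a locally generated template (a monocycle minus its δ-shifted copy), the correlations are summed over the Ns frames of one bit, and the sum is compared with zero. The waveform, sample rate and noise level are assumptions, and synchronization is taken as perfect.

```python
import numpy as np

fs = 20e9                            # sample rate (assumed)
Tf, Ns, delta = 100e-9, 4, 0.5e-9    # frame time, pulses per bit, data shift
frame_len = int(Tf * fs)             # samples per frame

def monocycle(t, tau=8e-11):
    """First-derivative-of-Gaussian monocycle (assumed shape)."""
    return -(t / tau) * np.exp(-(t / tau) ** 2 / 2)

t_frame = np.arange(frame_len) / fs
w = monocycle(t_frame - 5e-9)                       # monocycle placed 5 ns into the frame
template = w - monocycle(t_frame - 5e-9 - delta)    # v(t): monocycle minus its delta-shifted copy

def detect_bit(r_frames):
    """Sum the per-frame correlations over one bit and threshold against zero."""
    stat = sum(np.dot(frame, template) for frame in r_frames)
    return 0 if stat > 0 else 1       # bit is 0 iff the statistic is positive (hypothesis H0)

# Quick self-test: Ns noisy frames carrying bit 0 (unshifted pulses).
rng = np.random.default_rng(1)
frames_bit0 = [monocycle(t_frame - 5e-9) + 0.1 * rng.standard_normal(frame_len)
               for _ in range(Ns)]
print(detect_bit(frames_bit0))        # expected output: 0
```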

2.4 Performance in Multi-Access Environments

First, the UWB channel capacity relation is presented. The maximum channel capacity C (bits/sec) is related to the channel bandwidth B (hertz), the signal power S (watts) and the noise power N (watts) by the Shannon theorem:

C = B · log2(1 + S/N)

As this relation shows, the capacity increases linearly with the bandwidth B (2 GHz for current implementations), but only logarithmically with the signal-to-noise ratio.
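As a quick numeric illustration of this relation for an assumed 2 GHz UWB bandwidth (the SNR values are arbitrary):

```python
import numpy as np

B = 2e9                                   # bandwidth in Hz (current UWB implementations)
for snr_db in (-10, 0, 7):
    snr = 10 ** (snr_db / 10)
    C = B * np.log2(1 + snr)              # Shannon capacity in bits/s
    print(f"SNR {snr_db:>3} dB -> {C / 1e9:.2f} Gbit/s")
```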
The requirement for the system is to keep the bit error rate (BER) controlled; one factor affecting the BER is the signal-to-noise ratio. It has been shown that, in a multiple-access environment with an aggregate additive-white-noise channel, the number of users Nu that can simultaneously use the system is given by the following relation (the result is truncated to an integer):

where M is the modulation coefficient and P is the fractional increase in required power (in dB) needed to maintain the signal-to-noise ratio at a given level when there are Nu users. Considering the maximum number of users, the limit can be evaluated using a further relation obtained under the same assumptions.


4 Applications of Ultra Wide Band Technologies

4.1 Communication Systems

Currently, the ideal requirements for a high-performance system call for high data traffic at high speeds for as many users as possible, located as far apart as possible. Of course all of this together is hard to obtain, so trade-offs need to be identified. Today's trend is to get as much data as possible, very fast, without having distance as a main requirement. In fact, shorter distances mean spectrum reuse, with benefits for the number of users being served simultaneously. Also, cellular radio techniques (like GSM and W-CDMA) make the short distance over which a mobile actually operates hardly visible to the end user, by providing very dense coverage. So it seems that high capacity over short ranges is going to be the chosen solution, at least for highly populated urban areas. Sometimes the capacity is measured in bits/sec per square meter. The power required for transmission is also becoming smaller and smaller, allowing small low-power circuits to be used, and the price and size of such devices will shrink as well.
Before continuing the discussion, for the sake of completeness, the block schematic of a practical UWB receiver is presented in the next picture.


Figure: The block diagram of the receiver
Currently, there are several competing technologies in the high-rate, short-range sector: Bluetooth, wireless LANs (802.11a and 802.11b), W-CDMA, GSM, etc. Closest to our interest are Bluetooth and WLANs. IEEE 802.11b has a typical operating range of 100 meters. In the 2.4 GHz band there is about 80 MHz of usable spectrum, so three 22 MHz 802.11b systems, each offering 11 Mbps, can operate in the coverage area (radius 100 meters). The total speed is 33 Mbps per cell (hence a spatial capacity of approximately 1 kbps per square meter), and this rate is shared by the total number of users in the coverage area. The same assumption about the number of users applies to the following descriptions of Bluetooth and UWB. Note that when computing the area we assume a circle (area = πr²), and π is roughly truncated to 3 to give easier-to-read values.
IEEE 802.11a is a derived system meant for higher speeds and shorter ranges: the range is 50 meters with speeds up to 54 Mbps. The available spectrum is 200 MHz in the 5 GHz band. Twelve simultaneous systems can run within the 50-meter range area, giving a total speed of about 650 Mbps (roughly 83 kbps per square meter). Considering technical implementation issues, we can also summarize:

> 8 frequencies, 4-cell frequency re-use pattern;
> the transmitter should have 16 dBm;
> MAC efficiency is 0.6;
> receiver (10 percent PER for t_rms < 200 ns): 0 dBi antenna, 10 dB noise figure, 5 dB multipath.

Bluetooth, in its low-power mode, has a rated 10-meter range and a rate of 1 Mbps. It has been shown that approximately 10 Bluetooth "piconets" can operate simultaneously in the same 10-meter circle, yielding a total speed of 10 Mbps; hence for Bluetooth we get about 30 kbps per square meter. A typical Bluetooth transmitter needs about 1 mW of power in the antenna.

One UWB technology developer has measured peak speeds of over 50 Mbps at a range of 10 meters and projects that six such systems could operate within the same 10-meter radius circle with only minimal degradation. Following the same procedure, the projected spatial capacity for this system would be over 1 Mbps per square meter.
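The spatial-capacity arithmetic used in this comparison can be reproduced directly (total cell throughput divided by the coverage-circle area, with π truncated to 3 as above); small differences from the quoted figures come only from rounding.

```python
PI = 3  # the comparison rounds pi to 3 for easier-to-read values

def spatial_capacity(total_bps, radius_m):
    """Aggregate throughput divided by the coverage-circle area (bit/s per m^2)."""
    return total_bps / (PI * radius_m ** 2)

systems = {
    "802.11b (3 x 11 Mbps, r=100 m)": (3 * 11e6, 100),
    "802.11a (12 x 54 Mbps, r=50 m)": (12 * 54e6, 50),
    "Bluetooth (10 x 1 Mbps, r=10 m)": (10 * 1e6, 10),
    "UWB (6 x 50 Mbps, r=10 m)":       (6 * 50e6, 10),
}
for name, (bps, r) in systems.items():
    print(f"{name}: {spatial_capacity(bps, r) / 1e3:.0f} kbit/s per m^2")
```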

Capacity model for UWB:
> Transmitter: <10 µW average power in the antenna (-16 dBm) (compare to the 1 mW needed by Bluetooth);
> MAC efficiency: 0.9 (compare to 0.6 MAC efficiency for Bluetooth);
> Receiver: 7 dB SNR, 0 dBi antenna, 10 dB noise figure plus implementation loss.

Still, depending on the technology used, UWB can provide rates from 50 Mbps per cell up to 300 Mbps per cell.
An important issue to mention is the capability of UWB technology to coexist with other technologies (e.g., Bluetooth and wireless LAN devices) without disrupting them. The transmission power levels are very low (considered below the noise level for most narrowband applications), and the broad spectrum gives high robustness to interference and jamming, even where narrowband signals are locally powerful (radio stations, etc). As stated, the UWB signal produces almost undetectable interference with other signals and, moreover, by filtering around the GPS band, it proves to be a reliable technology.

4.2 Location and Radar Systems with High Resolution

Radar and motion detection are natural applications for this technology. The operation is similar to other radar systems: a train of pulses is directed at a target and the reflections are measured, from which the position, distance and speed can be determined. Since the usable band reaches gigahertz ranges, the resolution can be as fine as a few centimeters, which is a particularly important advantage. Of course, specially designed antennas are needed to provide directionality of the transmitted signal; highly directive antennas are required for proper operation.

One important drawback comes from designing directive antennas that cover such a broad spectrum; compromise designs are needed, but they are still feasible. The range of these radars also cannot be large: since the powers are restricted, only short-range radars can be developed, and if the power were increased, it would interfere with the other useful signals in the band. On the other hand, this technology provides efficient protection against jamming, and it is very difficult to intercept.

4.3 Through-Wall Motion Sensing

These applications require the spectrum to have a center frequency that is as low as possible. The wall (material) penetrating properties of electromagnetic waves exhibit an attenuation that increases with frequency: microwave and higher RF signals have lower penetration ability than lower frequency bands and propagate more like light waves. So, low frequencies are better suited for penetration, and UWB fits very well because it naturally covers the low-frequency spectrum. On the other hand, too low a frequency gives very poor resolution, so for through-wall sensing devices there is a requirement trade-off. UWB, with its very wide spectrum from near DC to a few gigahertz swept over the full range, is a flexible method that adapts to a multitude of wall materials and does not need any special tuning to match the wall properties. Briefly, through-wall motion sensing is basically a radar system, with the characteristic that the lower frequency of the waves allows propagation through materials that usual RF and microwave waves cannot penetrate.

5 Conclusion

Considering indoor usage, it was shown that UWB propagation has a much weaker dependence on the distance r between the transmitter and the test location than sine-wave carrier systems exhibit. So, UWB is a very efficient solution for indoor usage.
The simple transmitter and receiver structures that are possible make it a potentially powerful technology for low-complexity, low-cost communications. The physical characteristics of the signal also support location and tracking capabilities of UWB much more readily than existing narrower-band technologies do.
The severe restrictions on transmit power have substantially limited UWB to either short-distance, high-data-rate applications or longer-distance, low-data-rate applications. The great potential of UWB is to allow a flexible transition between these two extremes without the need for substantial modifications to the transceiver.









By
P.Sravani
04381A0444
III B.Tech, ECE,



E mail: p_sravani_ece@yahoo.com
Postal Address: P.Sravani,
D\o P.Radha Krishna,
D.No. 14\117,
Gowri nagar,
Renigunta - 517520


INDEX:


ABSTRACT

INTRODUCTION

TECHNICAL DETAILS

WIMAX WORKING

COMPARISON BETWEEN WI-FI & WIMAX

APPLICATIONS

ENSURING BRIGHT FUTURE

CONCLUSIONS

REFERENCES






ABSTRACT:
A wireless revolution is seeping into our daily lives like never before; sooner or later we are all going to go wireless. Broadband Wireless Access has occupied a niche in the market for about a decade. The recently developed Bluetooth wireless technology is a low-power, short-range technology for ad hoc cable replacement, enabling people to wirelessly connect devices wherever they carry them. Due to the short-range limitations of Bluetooth, the more recently emerged WiFi has largely displaced it. WiFi, popularly known as 802.11, is a moderate-range, moderate-speed technology based on Ethernet that allows people to access the network wirelessly throughout a location. The two technologies share the 2.4 GHz band and have potentially overlapping applications. As more and more people use WiFi, more and more people are getting frustrated with its coverage limitations. The demand for more coverage has opened a door for WiMax.

WiMax, built on the IEEE 802.16 standards, is a wireless technology that provides high-throughput broadband connections over long distances. Thanks to its high security, robustness and, above all, huge data rates, it is expected to complement or replace existing wireless access technologies such as WiFi and Bluetooth, and it is thus believed to be the next generation of wireless access technology. When commercially available, WiMax will offer fixed, nomadic and mobile wireless broadband connectivity without needing direct line-of-sight access to a base station.

1. INTRODUCTION
WiMAX is a coined term, or acronym, meaning Worldwide Interoperability for Microwave Access. Its purpose is to ensure that the broadband wireless radios manufactured for customer use interoperate from vendor to vendor. WiMax is a new standard being developed by the IEEE that focuses on solving the problems of point-to-multipoint broadband outdoor wireless networks. It has several possible applications, including last-mile connectivity for homes and businesses and backhaul for wireless hot spots.
While WiMax has historically lacked the grassroots popularity of its cousin WiFi, it is the standard for wireless metropolitan area networks (WMANs) and has gained significant traction from the high-profile support it has received from the likes of Intel, Dell, Motorola, Fujitsu and other big-name corporations. It represents the next generation of wireless networking; Intel has called it a technology that will enable up to 5 billion people to be connected over time.

The first WiMax Chip

WiMAX is both faster than Wi-Fi and has a longer range. However, WiMAX does not necessarily conflict with Wi-Fi; it is designed to co-exist with it and may indeed complement it. This complementarity also extends to all flavors of wired Ethernet (IEEE 802.3), Token Ring (IEEE 802.5) and non-IEEE standards.


2. Technical Details:
Range - 30-mile (50 km) radius from base station
Speed - 70 megabits per second
Line-of-sight - not needed between user and base station
Frequency bands - 2 to 11 GHz and 10 to 66 GHz (licensed and unlicensed bands)
Defines both the MAC and PHY layers and allows multiple PHY-layer specifications
WiMAX covers a couple of different frequency ranges. Basically, the IEEE 802.16 standard addresses frequencies from 10 GHz to 66 GHz. The 802.16a specification, which is an extension of IEEE 802.16, covers bands in the 2 GHz to 11 GHz range. WiMAX has a range of up to 30 miles, with a typical cell radius of 4 to 6 miles.
Behind WiMAX is an acceleration of radio technology aimed at bridging greater distances. WiMAX channel sizes range from 1.5 to 20 MHz, giving a WiMAX-based network the flexibility to support a variety of data rates, from T1 (1.5 Mbps) up to 70 Mbps on a single channel that can serve thousands of users (a quick spectral-efficiency check on these figures appears below). This flexibility allows WiMAX to adapt to the spectrum and channel widths available in different countries or licensed to different service providers.
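As a sanity check (not from the source), dividing the quoted peak rate by the widest quoted channel size gives the implied peak spectral efficiency, which is in the range that higher-order modulation can plausibly deliver.

def spectral_efficiency(rate_bps: float, channel_width_hz: float) -> float:
    """Peak spectral efficiency in bit/s/Hz for a given rate and channel width."""
    return rate_bps / channel_width_hz

# 70 Mbps claimed on a 20 MHz channel (the widest channel size quoted above).
print(spectral_efficiency(70e6, 20e6))   # 3.5 bit/s/Hz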
Wireless Into The Network: WiMAX


3.WiMAX Working:
In practical terms, WiMAX would operate similarly to WiFi but at higher speeds, over greater distances and for a
greater number of users. WiMAX could
potentially erase the suburban and rural
blackout areas that currently have no
broadband Internet access because phone
and cable companies have not yet run the
necessary wires to those remote locations.
WiMAX transmitting tower
A WiMAX system consists of two parts:
A WiMAX tower - a single WiMAX tower can provide coverage to a very large area, as big as 3,000 square miles (~8,000 square km).
A WiMAX receiver - the receiver and antenna could be a small box or a PCMCIA card, or they could be built into a laptop the way WiFi access is today.
A WiMAX tower station can connect
directly to the Internet using a high-
bandwidth wired connection. It can also
connect to another WiMAX tower using a
line-of-sight, microwave link. This
connection to a backhaul, along with the
ability of a single tower to cover up to
3,000 square miles, is what allows
WiMAX to provide coverage to remote
rural areas.
WiMAX actually can provide two forms of wireless service:
There is the non-line-of-sight service, where a small antenna on your computer connects to the tower. In this mode, WiMAX uses a lower frequency range, 2 GHz to 11 GHz. Lower-frequency transmissions are not as easily disrupted by physical obstructions; they are better able to diffract, or bend, around obstacles.
There is line-of-sight service,
where a fixed dish antenna points
straight at the WiMAX tower from
a rooftop or pole. The line-of-sight
connection is stronger and more
stable, so it's able to send a lot of
data with fewer errors. Line-of-
sight transmissions use higher
frequencies, with ranges reaching
a possible 66 GHz. At higher
frequencies, there is less
interference and lots more
bandwidth.
WiFi-style access will be
limited to a 4-to-6 mile radius. Through the
stronger line-of-sight antennas, the
WiMAX transmitting station would send
data to WiMAX-enabled computers or
routers set up within the transmitter's 30-
mile radius.
The fastest WiMAX handles up to 70 megabits per second, which, according to WiMAX proponents, is enough bandwidth to simultaneously support more than 60 businesses with T1-type connectivity and well over a thousand homes at 1 Mbit/s DSL-level connectivity (a rough capacity budget behind this claim is sketched below).
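That claim implicitly assumes statistical multiplexing: the subscribed rates add up to far more than 70 Mbps, and the base station relies on subscribers not all transmitting at once. The sketch below is not from the source; it simply works out the implied oversubscription ratio from the numbers quoted above.

# Capacity budget behind the "60 T1 businesses + 1000 DSL homes" claim.
CELL_CAPACITY_BPS = 70e6          # quoted peak cell rate

businesses = 60 * 1.5e6           # 60 subscribers at T1 (1.5 Mbps) each
homes = 1000 * 1e6                # 1000 subscribers at 1 Mbps each
subscribed_bps = businesses + homes

oversubscription = subscribed_bps / CELL_CAPACITY_BPS
print(f"subscribed: {subscribed_bps/1e6:.0f} Mbps, "
      f"oversubscription ratio: {oversubscription:.1f}:1")
# ~1090 Mbps subscribed -> roughly a 16:1 contention ratio, which is in the
# range commonly used for consumer broadband (an assumption, not a source figure).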
4.Comparison between WiFi &
WiMax:

              WiFi (802.11)    WiMax (802.16a)
Speed         6 - 54 Mbps      70 Mbps
Band          Unlicensed       Both
Coverage      50 - 1500 ft     2 - 30 miles
The biggest difference isn't speed; it's distance. WiMAX outdistances WiFi by miles. WiFi's range is about 100 feet (30 m), while WiMAX will blanket a radius of 30 miles (50 km) with wireless access.
The increased range is due to the
frequencies used and the power of the
transmitter. Of course, at that distance,
terrain, weather and large buildings will
act to reduce the maximum range in some
circumstances, but the potential is there to
cover huge tracts of land.
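To put the range difference into area terms (an illustration, not from the source), coverage scales with the square of the radius, so the 50 km WiMAX cell covers millions of times the area of a 30 m WiFi cell.

import math

def coverage_km2(radius_km: float) -> float:
    """Idealised circular coverage area for a given radius."""
    return math.pi * radius_km ** 2

wifi_km2 = coverage_km2(0.03)    # WiFi: ~100 ft / 30 m radius
wimax_km2 = coverage_km2(50.0)   # WiMAX: ~30 mile / 50 km radius
print(f"WiFi  ~{wifi_km2:.4f} km^2")
print(f"WiMAX ~{wimax_km2:.0f} km^2  (~{wimax_km2 / wifi_km2:,.0f}x larger)")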
Similar Technologies:
Unlike earlier broadband wireless access (BWA) iterations, WiMAX is highly standardized, which should reduce costs. However, since chipsets are custom-built for each BWA manufacturer, this adds time and cost to the process of bringing a product to market, and that won't be changed by WiMAX.
WiMAX's equivalent, or competitor, in Europe is HIPERMAN. The WiMAX Forum, the consortium behind the standardization, is working on methods to make 802.16 and HIPERMAN interoperate seamlessly. Products developed by WiMAX Forum members must comply with the specification to pass the certification process. Korea's telecoms industry has developed its own standard, WiBro; in late 2004, Intel and LG Electronics agreed on interoperability between WiBro and WiMAX.
5. APPLICATIONS:
1. Intel announced that it has begun
sending WiMax chipsets to
equipment manufacturers, which
are planning to ship products to
customers.
2. WiMax is the long-awaited industry standard. If WiMax lives up to its promise, it could solve the dilemma of delivering zippy Internet connections in areas where the cost of running cables to homes and offices is prohibitively expensive.
WiMax access points are expected
to start between $250 and $550 and fall
gradually over time, with Intel estimating
the cost approaching $50 by 2008. That
would be cheap enough to include it in
laptops, cell phones and other consumer
gadgetry, which could support streaming
video and voice over Internet Protocol, or
VoIP.
3. The big-name corporations have already come up with a WiMax base station and a WiMax access point for exterior use.

WiMax base station
Redline Communications WiMax
access point
4. WiMax is being developed alongside WiFi applications such as virtual guides, GPS PDAs and audio beam-forming.
Audio Beam forming device
5. WiMax also extends to embedded MP3 virtual tours, solar-powered WiFi, mobile hotspots, WiFi pedicabs and the Internet Rickshaw.
6. Ensuring Bright Future:
Early products are likely to be aimed at network service providers and businesses, not consumers. WiMAX has the potential to enable millions more people to access the Internet wirelessly, cheaply and easily. Proponents say that WiMAX wireless coverage will be measured in square kilometers while that of Wi-Fi is measured in square meters. According to WiMAX promoters, a WiMAX base station would beam high-speed Internet connections to homes and businesses within a radius of up to 50 km; these base stations will eventually cover an entire metropolitan area, turning it into a WMAN and allowing true wireless mobility within it, as opposed to the hot-spot hopping required by Wi-Fi. Its proponents hope that the technology will eventually be used in notebook computers and PDAs. According to industry estimates, the technology would initially be used by broadband cable/DSL providers; with further improvement, it would allow users to access the Internet in a truly roaming environment.
WiMax will be driven by demand from the network providers themselves, and from telecom providers as well. A grid of a relatively small number of WiMax base stations can connect an entire city. Since WiMax supports several communication protocols, this network can serve as the backbone for both an ISP and a telecom provider, without the need for digging up roads. The lines between telecom and Internet services are already blurred, with companies offering both. Once WiMax equipment becomes more readily available and affordable, ISPs will be able to offer WiMax services directly to consumers. If WiMax chipsets can be embedded in mobile devices (cell phones and PDAs) soon, WiMax might end up replacing every other wireless technology.
7.Conclusion:
WiMax, with all its challenges and opportunities, is an unavoidable part of our future. WiMax has the potential to be a true business enabler; the possibilities with this technology are immense and numerous, and it will lead to great advances in the commercial field. Researchers are filled with optimism and, building on this technology, are beginning to make their mark. The extent to which WiMax will impact our lives depends only on the limits of human ingenuity. It can rightly be said that WiMax is slowly but steadily ushering in the next revolution. Though the technology is still going through the standardisation process for use in chipsets, antennas and other devices, the WiMax Forum expects it to be a killer technology.
8.REFERENCES:
1) Information Technology
magazine(October 2004)
2) www.google.com
3) www.howstuffworks.com
4) www.intelsemiworks.com
5) IEEE magazines
6) www.Homepna.org

Wi-MAX

(Worldwide Interoperability for Microwave Access)


V.Abhishek, G.K.Bhagyaraj
III CSIT

SIDDHARTH INSTITUTE OF ENGG & TECHNOLOGY(SIETK)
PUTTUR.

1.vemaniabhishek@yahoo.com,2.bhagyaraj3@yahoo.co.in


ABSTRACT:


The two driving forces of the modern Internet are broadband and wireless. The WiMax standard combines the two, delivering high-speed broadband Internet access over a wireless connection. The main problems with broadband access are that it is fairly expensive and it doesn't reach all areas. The main problem with WiFi access is that hot spots are very small, so coverage is sparse.

Here comes the technology of Wi-MAX, an acronym for Worldwide Interoperability for Microwave Access, which also goes by the IEEE name 802.16. This technology provides the high speed of broadband service, wireless access and, most importantly, a wide coverage area, unlike Wi-Fi. Because it can be used over relatively long distances, it is an effective "last mile" solution for delivering broadband to the home, and for creating wireless "hot spots" in places like airports, college campuses and small communities.

The so-called "last mile" of broadband is the most expensive and most difficult part for broadband providers, and Wi-MAX provides an easy solution. Although it is a wireless technology, unlike some other wireless technologies it doesn't require a direct line of sight between the source and endpoint, and it has a service range of 50 kilometers. It provides a shared data rate of up to 70 Mbps, which is enough to service up to a thousand homes with high-speed access. Ultimately, Wi-MAX may be used to provide connectivity to entire cities, and may be incorporated into laptops to give users an added measure of mobility.

This paper discusses this revolutionary wireless technology that is challenging the present broadband and wireless technologies. It deals with the working of the technology, the different standards, and a comparison with other technologies like Wi-Fi.













1. INTRODUCTION:

Wi-MAX is short for Worldwide
Interoperability for Microwave Access,
and it also goes by the IEEE name
802.16. WiMAX has the potential to do
to broadband Internet access what cell
phones have done to phone access. In the
same way that many people have given
up their "land lines" in favor of cell
phones, WiMAX could replace cable
and DSL services, providing universal
Internet access just about anywhere you
go. Wi-MAX delivers a point-to-
multipoint architecture, making it an
ideal method for carriers to deliver
broadband to locations where wired
connections would be difficult or costly.
It may also provide a useful solution for
delivering broadband to rural areas
where high-speed lines have not yet
become available. A WiMax connection
can also be bridged or routed to a
standard wired or wireless Local Area
Network (LAN).


2. PARTS OF A WiMAX SYSTEM:

A WiMAX system has mainly two parts:
WiMAX Tower and WiMAX Receiver.

2.1 WiMAX Tower:

It is similar in concept to a cell-phone tower. It can provide coverage to a very large area -- as big as 3,000 square miles (~8,000 square km).

2.2 WiMAX Receiver:

The receiver and antenna could
be a small box or they could be built into
a laptop the way WiFi access is today.






3. WORKING:

In practical terms, WiMAX would operate similarly to WiFi. A WiMAX
tower station can connect directly to the
Internet using a high-bandwidth, wired
connection (for example, a T3 line). It
can also connect to another WiMAX
tower using a line-of-sight, microwave
link. This connection to a second tower
(often referred to as a backhaul), along
with the ability of a single tower to cover
up to 3,000 square miles, is what allows
WiMAX to provide coverage to remote
rural areas. As opposed to a traditional Internet Service Provider (ISP), which divides its bandwidth among customers via wires, WiMAX uses a microwave link to establish the connection. This means that WiMAX can actually provide two forms of wireless service:

1. There is the non-line-of-sight, WiFi
sort of service, where a small antenna on
your computer connects to the tower. In
this mode, WiMAX uses a lower
frequency range -- 2 GHz to 11 GHz
(similar to WiFi). Lower-frequency transmissions are not as easily disrupted by physical obstructions -- they are better able to diffract, or bend, around obstacles.
2. There is line-of-sight service, where a
fixed dish antenna points straight at the
WiMAX tower from a rooftop or pole.
The line-of-sight connection is stronger
and more stable, so it's able to send a lot
of data with fewer errors. Line-of-sight
transmissions use higher frequencies,
with ranges reaching a possible 66 GHz.
At higher frequencies, there is less
interference and lots more bandwidth.
WiFi-style access will be limited
to a 4-to-6 mile radius (perhaps 25
square miles or 65 square km of
coverage, which is similar in range to a
cell-phone zone). Through the stronger
line-of-sight antennas, the WiMAX
transmitting station would send data to
WiMAX-enabled computers or routers
set up within the transmitter's 30-mile
radius (2,800 square miles or 9,300
square km of coverage). This is what
allows WiMAX to achieve its maximum
range.





4. STANDARDS OF WiMAX:

The different standards of WiMAX as
given by the IEEE are:

4.1. 802.16-2001:
This is first version of this technology
approved in 2001.

4.2. 802.16a, 802.16c:
These are the later versions of the
802.16-2001 technology. These are just
the amendments of the above.

4.3. 802.16-2004:
The current 802.16 standard is IEEE Std 802.16-2004, approved in June 2004. It renders the previous version 802.16-2001, along with its amendments 802.16a and 802.16c, obsolete.

4.4. 802.16-2005 (802.16e):
IEEE Std 802.16-2004 addresses only fixed systems; an amendment adding mobility components to the standard became IEEE 802.16-2005. Approved in December 2005 (and formerly named 802.16e), this WiMAX mobility standard is an improvement on the modulation schemes stipulated in the original WiMAX standard. It allows for fixed wireless and mobile Non-Line-of-Sight (NLOS) applications, primarily by enhancing OFDMA (Orthogonal Frequency Division Multiple Access).


5. TECHNICAL ADVANTAGES
OVER WiFi:

Because IEEE 802.16 networks use the same LLC layer (standardized by IEEE 802.2) as other LANs and WANs, they can be both bridged and routed to them.
An important aspect of the IEEE
802.16 is that it defines a MAC layer
that supports multiple physical layer
specifications. This is crucial to allow
equipment makers to differentiate their
offerings. This is also an important
aspect of why WiMAX can be described
as a "framework for the evolution of
wireless broadband" rather than a static
implementation of wireless technologies.
Enhancements to current and new
technologies and potentially new basic
technologies incorporated into the PHY
(physical layer) can be used. A
converging trend is the use of multi-
mode and multi-radio SoCs and system
designs that are harmonized through the
use of common MAC, system
management, roaming, IMS and other
levels of the system. WiMAX may be
described as a bold attempt at forging
many technologies to serve many needs
across many spectrums.
The MAC is significantly
different from that of Wi-Fi (and
ethernet from which Wi-Fi is derived).
In Wi-Fi, the MAC uses contention
access-all subscriber stations wishing to
pass data through an access point are
competing for the AP's attention on a
random basis. This can cause distant
nodes from the AP to be repeatedly
interrupted by less sensitive, closer
nodes, greatly reducing their throughput.
By contrast, the 802.16 MAC is a
scheduling MAC where the subscriber
station only has to compete once (for
initial entry into the network). After that
it is allocated a time slot by the base
station. The time slot can enlarge and contract, but it remains assigned to the subscriber station, meaning that other subscribers cannot use it and must wait their turn. This scheduling
algorithm is stable under overload and
oversubscription (unlike 802.11). It is
also much more bandwidth efficient. The
scheduling algorithm also allows the
base station to control Quality of Service
by balancing the assignments among the
needs of the subscriber stations.
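To make the contrast with contention access concrete, here is a minimal sketch of the scheduling idea (an illustration only, not the algorithm defined in the standard): the base station hands out the uplink slots of each frame in proportion to the bandwidth each subscriber station has requested.

from typing import Dict, List

def schedule_frame(requests_bps: Dict[str, float], slots_per_frame: int) -> List[str]:
    """Toy grant/request scheduler: split a frame's uplink slots among
    subscriber stations in proportion to the bandwidth each requested.

    requests_bps    : station id -> requested uplink rate (bit/s)
    slots_per_frame : number of uplink slots available in one frame
    """
    total = sum(requests_bps.values())
    grants: List[str] = []
    for station, request in requests_bps.items():
        n_slots = round(slots_per_frame * request / total)
        grants.extend([station] * n_slots)
    return grants[:slots_per_frame]

# Assumed example: three subscriber stations with different demands.
frame = schedule_frame({"SS1": 2e6, "SS2": 1e6, "SS3": 1e6}, slots_per_frame=8)
print(frame)   # ['SS1', 'SS1', 'SS1', 'SS1', 'SS2', 'SS2', 'SS3', 'SS3']

Because assignments are explicit grants rather than random contention, each station's share is predictable, which is the basis of the QoS control described above.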
The original WiMAX standard,
IEEE 802.16, specifies WiMAX in the
10 to 66 GHz range. 802.16a added
support for the 2 to 11 GHz range, of
which most parts are already unlicensed
internationally and only very few still
require domestic licenses. Most business
interest will probably be in the 802.16a
standard, as opposed to licensed
frequencies. The WiMAX specification
improves upon many of the limitations
of the Wi-Fi standard by providing
increased bandwidth and stronger
encryption. It also aims to provide
connectivity between network endpoints
without direct line of sight in some
circumstances. The details of
performance under non-line of sight
(NLOS) circumstances are unclear as
they have yet to be demonstrated. It is
commonly considered that spectrum
under 5-6 GHz is needed to provide
reasonable NLOS performance and cost
effectiveness for PtM (point to multi-
point) deployments. WiMAX makes
clever use of multi-path signals but does
not defy the laws of physics.
WiMAX operates on the same
general principles as WiFi -- it sends
data from one computer to another via
radio signals. A computer (either a
desktop or a laptop) equipped with
WiMAX would receive data from the
WiMAX transmitting station, probably
using encrypted data keys to prevent
unauthorized users from stealing access.
The fastest WiFi connection can
transmit up to 54 megabits per second
under optimal conditions. WiMAX
should be able to handle up to 70
megabits per second. Even once that 70
megabits is split up between several
dozen businesses or a few hundred home
users, it will provide at least the
equivalent of cable-modem transfer rates
to each user.
The biggest difference isn't
speed; it's distance. WiMAX
outdistances WiFi by miles. WiFi's
range is about 100 feet (30 m). WiMAX
will blanket a radius of 30 miles (50 km)
with wireless access. The increased
range is due to the frequencies used and
the power of the transmitter. Of course,
at that distance, terrain, weather and
large buildings will act to reduce the
maximum range in some circumstances,
but the potential is there to cover huge
tracts of land.


6. IEEE 802.16 SPECIFICATIONS:

The specifications for 802.16, i.e.,
WiMAX as given by the IEEE are:

6.1. Range: 30-mile (50km) radius from
base station

6.2. Speed: 70 megabits per second

6.3. Line-of-sight: not needed between
user and base station

6.4. Frequency bands: 2 to 11 GHz and
10 to 66 GHz

6.5. Layers: Defines both the MAC and
PHY layers and allows multiple PHY-
layer specifications.


7. POTENTIAL APPLICATIONS:

WiMAX is a wireless metropolitan area
network (MAN) technology that can
connect IEEE 802.11 (Wi-Fi) hotspots
with each other and to other parts of the
Internet and provide a wireless
alternative to cable and DSL for last
mile (last km) broadband access. IEEE
802.16 provides up to 50 km (31 miles)
of linear service area range and allows
connectivity between users without a
direct line of sight. Note that this should
not be taken to mean that users 50 km
(31 miles) away without line of sight
will have connectivity. Practical limits
from real world tests seem to be around
"3 to 5 miles" (5 to 8 kilometers). The
technology has been claimed to provide
shared data rates up to 70 Mbit/s, which,
according to WiMAX proponents, is
enough bandwidth to simultaneously
support more than 60 businesses with
T1-type connectivity and well over a
thousand homes at 1Mbit/s DSL-level
connectivity. Real world tests, however,
show practical maximum data rates
between 500kbit/s and 2 Mbit/s,
depending on conditions at a given site.
It is also anticipated that
WiMAX will allow interpenetration for
broadband service provision of VoIP,
video, and Internet access-
simultaneously. Most cable and
traditional telephone companies are
closely examining or actively trial-
testing the potential of WiMAX for "last
mile" connectivity. This should result in
better pricepoints for both home and
business customers as competition
results from the elimination of the
"captive" customer bases both telephone
and cable networks traditionally
enjoyed. Even in areas without
preexisting physical cable or telephone
networks, WiMAX could allow access
between anyone within range of each
other. Home units the size of a
paperback book that provide both phone
and network connection points are
already available and easy to install.
There is also interesting potential
for interoperability of WiMAX with
legacy cellular networks. WiMAX
antennas can "share" a cell tower
without compromising the function of
cellular arrays already in place.
Companies that already lease cell sites in
widespread service areas have a unique
opportunity to diversify, and often
already have the necessary spectrum
available to them (i.e. they own the
licenses for radio frequencies important
to increased speed and/or range of a
WiMAX connection). WiMAX antennas may even be connected to an Internet backbone via either an optical fiber cable or a directional microwave link.
Some cellular companies are evaluating
WiMAX as a means of increasing
bandwidth for a variety of data-intensive
applications. In line with these possible
applications is the technology's ability to
serve as a very high bandwidth
"backhaul" for Internet or cellular phone
traffic from remote areas back to a
backbone. Although the cost-
effectiveness of WiMAX in a remote
application will be higher, it is definitely
not limited to such applications, and may
in fact be an answer to expensive urban
deployments of T1 backhauls as well.
Given developing countries' (such as in
Africa) limited wired infrastructure, the
costs to install a WiMAX station in
conjunction with an existing cellular
tower or even as a solitary hub will be
diminutive in comparison to developing
a wired solution. The wide, flat expanses
and low population density of such an
area lends itself well to WiMAX and its
current diametrical range of 30 miles.
For countries that have skipped wired
infrastructure as a result of inhibitive
costs and unsympathetic geography,
WiMAX can enhance wireless
infrastructure in an inexpensive,
decentralized, deployment-friendly and
effective manner.
Another application under
consideration is gaming. Sony and
Microsoft are closely considering the
addition of WiMAX as a feature in their
next generation game console. This will
allow gamers to create ad hoc networks
with other players. This may prove to be
one of the "killer apps" driving WiMAX
adoption: WiFi-like functionality with
vastly improved range and greatly
reduced network latency and the
capability to create ad hoc mesh
networks.
Another important application of WiMAX technology is government security. Communication is crucial for
government officials as they try to
determine the cause of the problem, find
out who may be injured and coordinate
rescue efforts or cleanup operations. A
gas-line explosion or terrorist attack
could sever the cables that connect
leaders and officials with their vital
information networks. WiMAX could be
used to set up a back-up (or even
primary) communications system that
would be difficult to destroy with a
single, pinpoint attack. A cluster of
WiMAX transmitters would be set up in
range of a key command center but as
far from each other as possible. Each
transmitter would be in a bunker
hardened against bombs and other
attacks. No single attack could destroy
all of the transmitters, so the officials in
the command center would remain in
communication at all times.







8. SCENARIO OF WiMAX:

An Internet service provider sets up a
WiMAX base station 10 miles from your
home. You would buy a WiMAX-
enabled computer (some of them should
be on store shelves in 2005) or upgrade
your old computer to add WiMAX
capability. You would receive a special
encryption code that would give you
access to the base station. The base
station would beam data from the
Internet to your computer (at speeds
potentially higher than today's cable
modems), for which you would pay the
provider a monthly fee. The cost for this
service could be much lower than
current high-speed Internet-subscription
fees because the provider never had to
run cables.
If you have a home network,
things wouldn't change much. The
WiMAX base station would send data to
a WiMAX-enabled router, which would
then send the data to the different
computers on your network. You could
even combine WiFi with WiMAX by
having the router send the data to the
computers via WiFi.
The smallest-scale network is a
personal area network (PAN). A PAN
allows devices to communicate with
each other over short distances.
Bluetooth is the best example of a PAN.
The next step up is a local area
network (LAN). A LAN allows devices
to share information, but is limited to a
fairly small central area, such as a
company's headquarters, a coffee shop
or your house. Many LANs use WiFi to
connect the network wirelessly.
WiMAX is the wireless solution
for the next step up in scale, the
metropolitan area network (MAN). A
MAN allows areas the size of cities to be
connected.
9. NETWORK SCALE:

The wireless networking, in the broad
sense may be classified into three major
types. They, in the order of the area they
span over, are given below:

9.1. PAN (Personal Area Network):
The standard used here is the
IEEE 802.15 standard. The best example
for this is Bluetooth.

9.2. LAN (Local Area Network):
The standard used here is the
IEEE 802.11 standard. The best example
for this is WiFi.

9.3 MAN (Metropolitan Area Network):
The standard used here is the
IEEE 802.16 standard. The best solution
for this is WiMAX.


10. CONCLUSION:

The IEEE 802.16 family of
standards and its associated industry
consortium, WiMax, promise to deliver
high data rates over large areas to a large
number of users in the near future. This
exciting addition to current broadband
options such as DSL, cable, and WiFi
promises to rapidly provide broadband
access to locations in the world's rural
and developing areas where broadband
is currently unavailable, as well as
competing for urban market share.
The WiMAX protocol is
designed to accommodate several
different methods of data transmission,
one of which is Voice Over Internet
Protocol (VoIP). VoIP allows people to
make local, long-distance and even
international calls through a broadband
Internet connection, bypassing phone
companies entirely. If WiMAX-
compatible computers become very
common, the use of VoIP could increase
dramatically. Almost anyone with a
laptop could make VoIP calls.












Bibliography:


IEEE 802.16 Backgrounder
http://ieee802.org/16/pub/backgrounder.html

NetworkDictionary.com: IEEE 802.16: Broadband Wireless MAN Standard
(WiMAX)
http://www.networkdictionary.com/protocols/80216.php

www.howstuffworks.com

www.wimaxforum.org

www.wikipedia.org






Wimax

-EMERGING WIRELESS TECHNOLOGY



SUBMITTED BY


M.HEMACHAND V.J ASWANTH KUMAR
ROLL NO: 04711A0416 ROLL NO: 04711A0417
III/IV B-TECH III/IV B-TECH
E.C.E E.C.E
hemachand_416@yahoo.co.in jashureddy@yahoo.com


NARAYANA ENGINEERING COLLEGE

NELLORE






WiMAX
- Emerging wireless technology




ABSTRACT
New and increasingly advanced data
services are driving up wireless traffic, which is
being further boosted by growth in voice
applications in advanced market segments as
the migration from fixed to mobile voice
continues. This is already putting pressure on
some networks and may be leading to
difficulties in maintaining acceptable levels of
service to subscribers.
For the past few decades, lower-bandwidth applications have been growing, but the growth of broadband data applications has been slow. Hence we require a technology that fosters the growth of broadband data applications. WiMAX is such a technology, providing point-to-multipoint broadband wireless access without the need for direct line-of-sight connectivity with the base station.
This paper explains WiMAX technology, its additional features in the physical and MAC layers, and the benefits of each feature. It focuses on the major technical comparisons (such as QoS and coverage) between WiMAX and other technologies, and also explains the ability of WiMAX to provide efficient service in a multipath environment.
II. Introduction:
For the past couple of decades, low-bandwidth applications such as downloading ring tones and SMS have experienced sharp growth, but the growth of broadband data applications such as email and downloading/uploading files with a laptop computer or PDA has been slow. The demand
for broadband access continues to escalate
worldwide and lower-bandwidth wire line
methods have failed to satisfy the need for
higher bandwidth integrated data and voice
services. WiMAX is radio technology that
promises two-way Internet access at several
megabits per second with ranges of several
miles. It is believed that the technology can
challenge DSL (Digital Subscriber Line) and
cable broadband services because it offers
similar speeds but is less expensive to set up.
The intention for WiMAX is to provide fixed,
nomadic, portable and, eventually, Mobile
wireless broadband connectivity without the
need for Direct line-of-sight with a base
station.


III. What is WiMAX?
WiMAX is an acronym that
stands for Worldwide Interoperability for
Microwave Access. IEEE 802.16 is
working group number 16 of IEEE 802,
specializing in point-to-multipoint broadband
wireless access. It also is known as WiMAX.
There are at least four 802.16 standards:
802.16, 802.16a, 802.16-2004 (802.16),
and 802.16e.
WiMAX does not conflict
with WiFi but actually complements it.
WiMAX is a wireless metropolitan area
network (MAN) technology that will connect
IEEE 802.11 (WiFi) hotspots to the Internet
and provide a wireless extension to cable and
DSL for last km broadband access. IEEE
802.16 provides up to 50 km of linear
service area range and allows users
connectivity without a direct line of sight to a
base station. The technology also provides
shared data rates up to 70 Mbit/s.
The portable version of WiMAX (IEEE 802.16e) utilizes Orthogonal Frequency Division Multiplexing / Multiple Access (OFDM/OFDMA), where the spectrum is divided into many sub-carriers. Each sub-carrier then uses QPSK or QAM for modulation. The WiMAX standard relies mainly on spectrum in the 2 to 11 GHz range. The WiMAX specification improves upon many of the limitations of the WiFi standard by providing increased bandwidth and stronger encryption. For years, the wildly successful 802.11x or WiFi wireless LAN technology has been
used in BWA applications. When the
WLAN technology was examined closely,
it was evident that the overall design and
feature set available was not well suited
for outdoor Broadband wireless access
(BWA) applications. WiMAX is suited for
both indoor and outdoor BWA; hence it
solves the major problem.
In reviewing the standard, the technical
details and features that differentiate WiMAX
certified equipment from WiFi or other
technologies can best be illustrated by
focusing on the two layers addressed in the
standard, the physical (PHY) and the media
access control (MAC) layer design.
III. a) WIMAX PHY Layer:
The first version of the
802.16 standard released addressed Line-of-
Sight (LOS) environments at high frequency
bands operating in the 10-66 GHz range,
whereas the recently adopted amendment,
the 802.16a standard, is designed for systems
operating in bands between 2 GHz and 11
GHz. The significant difference between
these two frequency bands lies in the ability
to support Non-Line -of-Sight (NLOS)
operation in the lower frequencies,
something that is not possible in higher
bands. Consequently, the 802.16a
amendment to the standard opened up the
opportunity for major changes to the PHY
layer specifications specifically to address the
needs of the 2-11 GHz bands. This is
achieved through the introduction of three
new PHY-layer specifications (a new Single
Carrier PHY, a 256 point FFT OFDM PHY,
and a 2048 point FFT OFDMA PHY);
Some of the other PHY layer features of 802.16a that are instrumental in giving this technology the power to deliver robust performance in a broad range of channel environments are: flexible channel widths, adaptive burst profiles, forward error correction with concatenated Reed-Solomon and convolutional encoding, optional AAS (advanced antenna systems) to improve range and capacity, DFS (dynamic frequency selection), which helps minimize interference, and STC (space-time coding) to enhance performance in fading environments through spatial diversity. Table 1 gives a high-level overview of some of the PHY layer features of the IEEE 802.16a standard; a minimal sketch of the OFDM transmit step is given below.
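The 256-point FFT OFDM PHY mentioned above maps data symbols onto sub-carriers with an inverse FFT and prepends a cyclic prefix to absorb multipath echoes. The snippet below is a bare-bones illustration of that transmit step using numpy; the sub-carrier count matches the 256-point mode, but the cyclic-prefix length and the simple QPSK mapping are assumed example choices, not values taken from the standard.

import numpy as np

N_FFT = 256            # 256-point FFT OFDM mode described above
CP_LEN = 32            # assumed cyclic-prefix length (example value)

def ofdm_symbol(bits: np.ndarray) -> np.ndarray:
    """Build one OFDM symbol: QPSK-map the bits onto sub-carriers,
    take the inverse FFT, and prepend a cyclic prefix."""
    assert bits.size == 2 * N_FFT, "QPSK carries 2 bits per sub-carrier"
    # Simple QPSK mapping: (b0, b1) -> (+/-1 +/- 1j) / sqrt(2)
    symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)
    time_domain = np.fft.ifft(symbols)          # sub-carriers -> time samples
    return np.concatenate([time_domain[-CP_LEN:], time_domain])

rng = np.random.default_rng(0)
tx = ofdm_symbol(rng.integers(0, 2, size=2 * N_FFT))
print(tx.shape)        # (288,) = 256 samples plus a 32-sample cyclic prefix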


B) IEEE 802.16a MAC Layer
The 802.16a standard uses a slotted TDMA
protocol scheduled by the base station to
allocate capacity to subscribers in a point-to-
multipoint network topology. By starting with a TDMA approach with intelligent scheduling, WiMAX systems will be able to deliver not only high-speed data with SLAs but also latency-sensitive services such as voice, video and database access. The
standard delivers QoS beyond mere
prioritization, a technique that is very limited in
effectiveness as traffic load and the number of
subscribers increases. The MAC layer in
WiMAX certified systems has also been
designed to address the harsh physical layer
environment where interference, fast fading and
other phenomena are prevalent in outdoor
operation.
IV.WiMAX Scalability:
At the PHY layer the
standard supports flexible RF channel
bandwidths and reuse of these channels
(frequency reuse) as a way to increase cell
capacity as the network grows. The standard
also specifies support for automatic transmit
power control and channel quality
measurements as additional PHY layer tools
to support cell planning/deployment and
efficient spectrum use. Operators can re-
allocate spectrum through sectorization and
cell splitting as the number of subscribers
grows.
In the MAC layer, the CSMA/CA foundation of 802.11, basically a wireless Ethernet protocol, scales about as well as Ethernet does. That is to say, poorly. Just as more users in an Ethernet LAN result in a geometric reduction of throughput, so it is with the CSMA/CA MAC for WLANs. In contrast, the MAC layer in the 802.16 standard has been designed to scale from one up to hundreds of users within one RF channel, a feat the 802.11 MAC was never designed for and is incapable of supporting.
a) Coverage:
The BWA standard is
designed for optimal performance in all types
of propagation environments, including LOS,
near LOS and NLOS environments, and
delivers reliable robust performance even in
cases where extreme link pathologies have
been introduced. The robust OFDM
waveform supports high spectral efficiency
over ranges from 2 to 40 kilometers with up
to 70 Mbps in a single RF channel. Advanced
topologies (mesh networks) and antenna
techniques (beam-forming, STC, antenna
diversity) can be employed to improve
coverage even further. These advanced
techniques can also be used to increase
spectral efficiency, capacity, reuse, and
average and peak throughput per RF channel.
In addition, not all OFDM is the same. The OFDM designed for BWA has the ability to support longer-range transmissions and to handle the multipath reflections encountered. In contrast, WLANs and 802.11 systems have at
their core either a basic CDMA approach or
use OFDM with a much different design, and
have as a requirement low power
consumption limiting the range. OFDM in the
WLAN was created with the vision of the
systems covering tens and maybe a few
hundreds of meters versus 802.16 which is
designed for higher power and an OFDM
approach that supports deployments in the
tens of kilometers.
b) Quality of service:
The 802.16a MAC relies on
a Grant/Request protocol for access to the
medium, and it supports differentiated service. The protocol employs TDM data streams on
the DL (downlink) and TDMA on the UL
(uplink), with the hooks for a centralized
scheduler to support delay-sensitive services
like voice and video. By assuring collision-
free data access to the channel, the 16a MAC
improves total system throughput and
bandwidth efficiency, in comparison with
contention-based access techniques like the
CSMA-CA protocol used in WLANs. The
16a MAC also assures bounded delay on the
data. The TDM/TDMA access technique also
ensures easier support for multicast and
broadcast services. With a CSMA/CA
approach at its core, WLANs in their current
implementation will never be able to deliver
the QoS of aBWA, 802.16 systems.
V. ROLE OF OFDMA IN MULTIPATH ENVIRONMENT:
Technologies using DSSS (802.11b, CDMA) and other wideband techniques are very susceptible to multipath fading, since the delay spread can easily exceed the symbol duration, causing the symbols to overlap completely (inter-symbol interference, ISI). The use of several parallel sub-carriers in OFDMA enables a much longer symbol duration, which makes the signal more robust to multipath time dispersion; the sketch below puts rough numbers on this.
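As a numeric illustration (the values are assumptions, not figures from the paper), splitting a fixed bandwidth over many sub-carriers stretches the symbol duration by the number of sub-carriers, so a delay spread that dwarfs a single-carrier symbol becomes only a small fraction of an OFDMA symbol.

BANDWIDTH_HZ = 10e6        # assumed channel bandwidth
N_SUBCARRIERS = 256        # assumed number of OFDMA sub-carriers
DELAY_SPREAD_S = 2e-6      # assumed multipath delay spread (2 microseconds)

single_carrier_symbol = 1 / BANDWIDTH_HZ                  # 0.1 us
ofdma_symbol = N_SUBCARRIERS / BANDWIDTH_HZ               # 25.6 us

print(f"single-carrier symbol: {single_carrier_symbol*1e6:.2f} us "
      f"(delay spread is {DELAY_SPREAD_S/single_carrier_symbol:.0f}x longer)")
print(f"OFDMA symbol:          {ofdma_symbol*1e6:.2f} us "
      f"(delay spread is only {DELAY_SPREAD_S/ofdma_symbol:.0%} of it)")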
a). Multipath: Frequency Selective Fading
This type of fading affects
certain frequencies of a transmission and can
result in deep fading at certain frequencies.
One reason this occurs is because of the wide
band nature of the signals. When a signal is
reflected off a surface, different frequencies
will reflect in different ways. In the figure below,
both CDMA (left) and OFDMA (right)
experience selective fading near the center of
the band. With optimal channel coding and
interleaving, these errors can be corrected.
CDMA tries to overcome this by spreading
the signal out and then equalizing the whole
signal. OFDMA is therefore much more
resilient to frequency selective fading when
compared to CDMA.

VI. OFDMA with Adaptive Modulation
and Coding (AMC):
Both W-CDMA (HSDPA)
and OFDM utilize Quadrature Phase Shift
Keying (QPSK) and Quadrature Amplitude
Modulation (QAM). It should be noted here
that for WCDMA, AMC is only used on the
downlink, since the uplink still relies on
WCDMA which uses QPSK but not QAM.
Modulation and coding rates can be changed
to achieve higher throughput, but higher order
modulation will require better Signal to Noise
Ratio. The figure illustrates how higher-order modulations like 64-QAM are used closer to the base station, while lower-order modulations like QPSK are used to extend the range of the base station. Performance results produced for one of the 3GPP Working Groups [2] show that while OFDM is able to achieve a maximum throughput of 9.6 Mbps (16-QAM), WCDMA does not exceed 3 Mbps. From these results, it appears that an even higher discrepancy may be found when utilizing higher modulation and code rates to yield even higher throughput for OFDM; the kind of SNR-driven selection involved is sketched below.
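The essence of adaptive modulation and coding is a lookup from measured SNR to the densest constellation the link can sustain. The thresholds below are illustrative assumptions (real systems derive them from target error rates), but the selection logic is representative of what the paragraph above describes.

# Illustrative AMC table: minimum SNR (dB) -> (modulation, bits per symbol).
# Threshold values are assumptions for this sketch, not standardized figures.
AMC_TABLE = [
    (22.0, "64-QAM", 6),
    (16.0, "16-QAM", 4),
    (9.0,  "QPSK",   2),
]

def select_modulation(snr_db: float):
    """Pick the highest-order modulation whose SNR requirement is met."""
    for min_snr, name, bits in AMC_TABLE:
        if snr_db >= min_snr:
            return name, bits
    return "out of range", 0

for snr in (25.0, 18.0, 10.0, 5.0):
    print(snr, "dB ->", select_modulation(snr))
# High SNR near the base station selects 64-QAM; low SNR at the cell edge
# falls back to QPSK, extending range at the cost of throughput.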
Adaptive Modulation and Coding
(AMC) in a multipath environment may give
OFDMA further advantages since the
flexibility to change the modulation for
specific sub-channels allows you to optimize
at the frequency level. Another alternative
would be to assign those sub channels to a
different user who may have better channel
conditions for that particular sub-channel.
This could allow users to concentrate transmit
power on specific sub-channels, resulting in
improvements to the uplink budget and
providing greater range. This technique is
known as Space Division Multiple Access
(SDMA).
In Figure below, you can
see how sub-channels could be chosen
depending on the received signal strength.
The sub-channels on which the user is
experiencing significant fading are avoided
and power is concentrated on channels with
better channel conditions. The signals on the
top indicate the received signal strength,
while the bottom part of the figure indicates
which sub-carriers are then chosen for each
signal. With OFDMA, the client device could choose sub-channels based on geographical
locations with the potential of eliminating the
impact of deep fades. CDMA-based
technologies utilize the same frequency band
regardless of where the user is.

VII.ADVANCED RADIO TECHNIQUES:




Smart Antenna Technology:
a) Adaptive antenna systems (AAS) are an
optional part of the 802.16 standard. AAS
equipped base stations can create beams
that can be steered, focusing the transmit
energy to achieve greater range as shown
in the figure. When receiving, they can
focus in the particular direction of the
receiver. This helps eliminate unwanted
interference from other locations.
b) Transmit and receive diversity schemes: Transmit and Receive Diversity schemes are used to take advantage of
multipath and reflected signals that occur in
NLOS environments. By utilizing multiple
antennas (transmit and/or receive), fading,
interference and path loss can be reduced. The
OFDMA transmit diversity option uses space
time coding. For receive diversity, techniques
such as maximum ratio combining (MRC) take
advantage of two separate receive paths.
VIII. Conclusion:
Thus WiMAX systems for portable/nomadic use will have better performance, interference rejection, multipath tolerance and high data quality-of-service support (data-oriented MAC, symmetric link), together with lower future equipment costs, i.e., low chipset complexity and high spectral efficiency. Hence WiMAX can complement existing and emerging 3G mobile and wireline networks, and play a significant role in helping service providers deliver converged service offerings.

IX. BIBLIOGRAPHY:
Understanding WiMAX - Joe Laslo & Michael Gartenberg
www.intel.com/ebusiness/pdf/wireless/intel
www.intel.com/netcomms/technologies/wimax

















WAP
WIRELESS APPLICATION PROTOCOL












Authors:
swetha.k shilpa.J
IV year,ECE II year,ECE
MITS,Madanapalle MITS,Madanapalle
swetha.kalapati@gmail.com jimmy_shilpa@yahoo.com


















ABSTRACT:

Mobile phones have become very popular. They offer ultimate convenience and are very useful devices for people to stay in touch. The main drawback of web-based applications is that the user has to stay close to a PC to use them. WAP has given a new dimension to web applications by doing away with the restrictions posed by the PC. The growing field of wireless web applications aims to deliver information that can be accessed anywhere, anytime, using a mobile phone.

What is WAP all about :-

The Wireless Application Protocol is the de facto world standard for the presentation and delivery of wireless information and telephony services on mobile phones and other wireless terminals.

When we use a browser to access the Internet, we type in the address of a web page and well laid-out text appears on the screen. Similarly, on a WAP-enabled mobile phone, we connect to the Internet by keying in an address, and with the help of WAP we can access the web page we want.

Whereas connecting to the Internet usually requires dialling up to an Internet Service Provider, WAP phones communicate with the telephone companies that provide the cellular connection.

In order to guide the development of these exciting new applications, the leaders of the wireless telecommunication industry formed the WAP Forum. The WAP Forum, which has over 200 member companies, has published a global wireless protocol specification for all wireless networks across the globe.


Challenges faced by WAP:-

These challenges provide insight into how the WAP architecture was developed and help in making sense of upcoming developments in this field. Here are some challenges faced by WAP.

1. A different market:-

Bringing computing power to a wireless handset opens an extensive new market
for information access.


Ease of use:-

Mobile devices using WAP will be used by people who potentially have no desktop computing experience. Subscribers won't be focused on their handset the way they are when sitting in front of a desktop computer.

Expected service level:-

Subscribers expect wireless data access to perform like the rest of the handset. The service should be instantly available and easy to use.

Short, essential tasks:-

Subscribers will have small, specific tasks that need to be accomplished quickly. Receiving timely traffic alerts on the handset, for example, will be essential.




2. The device is different:-

Less powerful CPUs
Less memory
Small displays
Different input devices
Restricted power consumption

Because of these limitations, the user interface of a wireless handset is fundamentally different from that of a desktop computer.

For example, the screen size of a desktop monitor is usually 14 to 17 inches, whereas the screen of a wireless device is 5 to 6 inches.

PCs run on a mains electricity supply, whereas handheld devices run on batteries, which have limited capacity.

These challenges should be kept in mind when developing applications using WAP.

WAP features:

Languages:

WML has full support for user input to enable interactivity within the WAP application. WML is based on XML and is user-interface independent. The features of WML are very similar to those of HTML; however, bear in mind that WML is primarily for mobile devices. Its companion scripting language (WMLScript) has been optimized for use within handheld wireless devices: it makes minimal demands on memory and gives developers the flexibility to carry out client-side validation.

Operating system:

WAP can be built on any operating system, including PalmOS, FLEXOS, OS/9, JavaOS etc. It provides service interoperability even between different device families.

Networks that support WAP:

WAP can be used very easily on almost any wireless format. Currently WAP is used by GSM, CDMA and TDMA cellular phone networks. Most people talk about WAP as a GSM standard, due to the high usage of WAP on GSM networks.

We can also use pictures:

We can use pictures, but in black and white only; most mobile phones have a green background. One restriction is that very few mobile phones have color displays, but the main reason is speed: the data speed of a mobile phone is quite slow compared to a domestic modem, so pictures take a while to download. Images of maps can also be displayed on WAP-enabled phones; the maps come from WWW.webraska.com.


WAP Models:
The main objective here is to understand how WAP works.

WAP programming model:

The WAP programming model is similar to the World Wide Web programming model and is shown below.



The WAP model consists of three entities:

1. Client
2. Gateway
3. Server.
1.Client:-

This is the microbrowser that resides on the mobile device.

2.Gateway:-

This entity takes care of translation between WAP & WWW.

3.Server:-

This entity is the standard WWW server that contains static and dynamic content.

WAP defines a set of standard components that enable communication between mobile terminals and network servers. They are:

Standard naming model:

Standard WWW URLs are used to load WAP pages in the microbrowser. To access a WAP page, the naming convention used is identical to that of an existing web page.

For example, a typical web address is http://www.Yahoo.com/index.html. To access the WAP page from the web server we have to type http://www.Yahoo.com/index.wml (an illustrative example of fetching such a page is sketched below).
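As an illustration of the naming model (not from the source), a gateway or test client could request the WML variant of a page over ordinary HTTP, advertising the registered WML MIME type "text/vnd.wap.wml" in its Accept header. The URL is just the paper's example address and may not actually serve WML.

import urllib.request

# Example URL taken from the naming convention described above.
url = "http://www.Yahoo.com/index.wml"

# A WAP gateway fetching content on behalf of a phone would advertise the
# WML media type; "text/vnd.wap.wml" is the registered MIME type for WML.
request = urllib.request.Request(url, headers={"Accept": "text/vnd.wap.wml"})

try:
    with urllib.request.urlopen(request, timeout=10) as response:
        print(response.headers.get("Content-Type"))
        print(response.read(200))          # first bytes of the WML deck
except OSError as exc:                     # the example host may not serve WML
    print("request failed:", exc)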

Standard content formats:

WAP content formats are based on WWW technology .

Standard communication protocol:

The WAP communication protocol enables the communication of browser requests from the mobile terminal to the network web server. The WAP proxy provides functionality called the Protocol Gateway, which acts mainly as a translation service that bridges the gap between WAP and WWW.

Content encoders & Decoders:

The content encoders translate WAP content into compact encoded formats to reduce the size of data sent over the network.

WAP Network:

The user initiates a connection to a WAP page using a wireless device.

The request goes to a WAP proxy that translates the WAP request into a WWW request, allowing the WAP client to submit requests to a web server. The proxy also encodes the responses from the web server into a compact binary format understood by the client.

If the web server provides WAP content, the WAP proxy retrieves it directly from the web server. If it is WWW content, a filter is used to translate the WWW content to WAP content, i.e. HTML to WML.

The WTA server is an example of an origin or gateway server that responds to requests from the WAP client directly. It provides features like monitoring billing information, value-added service registration etc. that are offered to the user based on his or her account with the network provider.

WAP Architecture:

Wireless Application Environment:

WAE is a general-purpose application environment based on a combination of World Wide Web and mobile telephony technology. It provides an open environment that allows the development of applications over the mobile communication infrastructure supporting WAP.



Wireless Session Protocol:
The WSP provides the application layer of WAP with a connection-oriented or connectionless session service. The connection-oriented service operates above the WTP layer, while the connectionless service operates above a secure or non-secure datagram service.

Wireless Transaction Protocol:

The WTP runs on top of a datagram service and provides a lightweight transaction-oriented protocol that is suitable for implementation in thin clients such as cellular handsets. WTP operates efficiently over a secure or non-secure wireless datagram transport service.






Wireless Transport Layer Security:

WTLS is a security protocol based upon the industry-standard Transport Layer Security (TLS), which is widely used for secure transactions over the Internet.

Wireless Datagram Protocol:

WDP is the transport layer of the WAP protocol stack. The WDP layer operates above the bearer services provided by the underlying cellular communication network. It provides consistent secure and non-secure services to clients, irrespective of the type of bearer used to send data units.


















WAP Applications:

Some of the key WAP application categories include:

Wireless access to Internet content:

Consumers benefit from immediate, interactive access to the information they need at that moment: the location of specific shops, maps, taxis and GPS-driven locators.

Wireless access to corporate IT-systems and Extranets:

Corporations can offer new channels for their services and also create totally new services for their mobile customers, in addition to directory services and intranets.

Wireless access to personal information:

Wireless device users can access their e-mail, calendars and even screen text headers for their voicemail messages, as well as immediate information on interest areas like sports, news and horoscopes.

Intelligent telephony services:

Carriers can offer their customers secure access to their personal and other customer-related information, billing and other databases in the intelligent network.


Connecting WAP device to Internet:

To connect a WAP device to the Internet, the WAP user dials up to a service provider over a PPP connection. This is very similar to how most dial-up users connect to the Internet. Once the connection is made through PPP, the user ID and password are verified and the WAP device is ready to access the Internet via the specified WAP gateway.




Extension to WAP technology-Bluetooth technology:

With Bluetooth technology, the wireless world will be even more wireless. We will be able to print from our phone or laptop without connecting any cables. Your computers will be able to communicate with each other without your intervention. The Bluetooth chip will also tell you if a friend of yours is nearby or if the stranger passing you on the street is single and wants a date with you. With this technology you will be able to read books online while seated in your room. You will have Internet or WAP access 24 hours a day. In your car you will have a city map for navigation, and if you are in an emergency, GPS will tell the rescue team where they can find you.

Conclusion:

WAP is an initiative by the WAP Forum that allows users of mobile devices to access content on their devices over wireless networks. It provides a universal standard that enables users to easily access web-based interactive information services and applications from the screens of their mobile phones. This includes both consumer and corporate solutions like e-mail, corporate data, sports, news, entertainment, transactions and banking services.












S.V.U.COLLEGE OF ENGINEERING
TIRUPATI

A PAPER PRESENTATION ON WIRELESS COMMUNICATIONS (FIDELITY)

BY
P.NARENDRA (II ECE)
S.V.U.COLLEGE OF ENGINEERING
narendra_chinni@yahoo.co.in
K.VASISTA JAYANTH (II CSE)
S.V.U.COLLEGE OF ENGINEERING
vasista.cse@gmail.com

ABSTRACT


Technology is no longer judged by its technical brilliance, but by the return
on investment (both tangible and intangible). This in turn, is dictated by the killer
application for that technology. Wireless Networks fit into this because the
technology has been around long enough and can provide enough benefits to be
seriously considered for deployment.
At the enterprise, it provides communication support for mobile computing.
It overcomes and, in fact, annihilates the physical limitation of wired networks in
terms of adaptability to a variation in demand. Network connectivity in a
company's meeting room is a classic example. The number of users using that
room would vary for different meetings. So, it would be difficult to decide how
many wired network ports to put there. With wireless access, the number of users
is mostly constrained by the bandwidth available on the wireless network.
Mobility is another feature offered by wireless. Mobile users can be truly mobile, in
that they don't need to be bound to their seats when connecting to the network.
Mobility, however, is not only associated with users; it's also associated with the
infrastructure itself. You can have a wireless network up and running in no time, a
boon for people who need to do it for exhibitions, events, etc.



This leads to another provision of wireless, that of scalability. It really helps
in extending your network. It also becomes important if an enterprise has a rented
office and needs to shift to a new place. At home, the need for wireless is more to
do with ubiquitous computing.
Wi-fi, or wireless fidelity, is freedom: it allows you to connect to the
internet from your couch at home, a bed in a hotel room, or a conference room at
work without wires. It is a wireless technology: like cell phones, Wi-Fi enabled
computers send and receive data indoors and outdoors, anywhere within the range
of a base station. And the best thing of all, Wi-Fi is fast. In fact, it's several
times faster than the fastest cable modem connection.
Wireless technology, therefore, is really happening, and should be seriously
considered. The following presentation explains wireless LANs, their basic
operations, topologies and security features, and answers some of the questions
that arise when evaluating WLAN technology.








1. IEEE 802.11b Wireless Networking Overview
Approval of the IEEE 802.11 standard for wireless local area networking
(WLAN) and rapid progress made toward higher data rates have put the promise
of truly mobile computing within reach. While wired LANs have been a
mainstream technology for at least fifteen years, WLANs are uncharted territory
for most networking professionals.
In September of 1999, the Institute of Electrical and Electronic Engineers
(IEEE) ratified the specification for IEEE 802.11b, also known as Wi-Fi. IEEE
802.11b defines the physical layer and media access control (MAC) sublayer for
communications across a shared, wireless local area network (WLAN).
At the physical layer, IEEE 802.11b operates at the radio frequency of 2.45
gigahertz (GHz) with a maximum bit rate of 11 Mbps. It uses the direct sequence
spread spectrum (DSSS) transmission technique. At the MAC sublayer of the Data
Link layer, 802.11b uses the carrier sense multiple access with collision avoidance
(CSMA/CA) media access control (MAC) protocol.
A wireless station with a frame to transmit first listens on the wireless
medium to determine if another station is currently transmitting (this is the carrier
sense portion of CSMA/CA). If the medium is being used, the wireless station
calculates a random back off delay. Only after the random backoff delay elapses
can the wireless station again listen for a transmitting station. By instituting a
random backoff delay, multiple stations that are waiting to transmit do not end up
trying to transmit at the same time (this is the collision avoidance portion of
CSMA/CA). Collisions can occur and, unlike with Ethernet, they might not be
detected by the transmitting nodes. Therefore, 802.11b uses a Request to Send
(RTS)/Clear to Send (CTS) protocol with an Acknowledgment (ACK) signal to
ensure that a frame is successfully transmitted and received.
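
As an illustration of this contention behaviour, the short Python sketch below models several stations drawing random backoff counts and doubling their contention window after a collision. It is a toy model only, assuming idealized slot timing and using the 802.11b contention window limits of 31 and 1023 slots; it is not an implementation of the full standard.

import random

CW_MIN, CW_MAX = 31, 1023  # 802.11b contention window limits (in slots)

def contend(num_stations, frames_per_station=100):
    """Toy CSMA/CA contention model: every waiting station picks a random
    backoff count; the smallest count transmits first, equal counts collide
    and the colliding stations double their contention window (binary
    exponential backoff)."""
    cw = [CW_MIN] * num_stations            # current contention window per station
    pending = [frames_per_station] * num_stations
    sent, collisions = 0, 0
    while any(pending):
        # each station with a frame queued draws a backoff from [0, CW]
        backoff = {i: random.randint(0, cw[i])
                   for i in range(num_stations) if pending[i]}
        winners = [i for i, b in backoff.items() if b == min(backoff.values())]
        if len(winners) == 1:               # medium won cleanly -> frame delivered
            i = winners[0]
            pending[i] -= 1
            cw[i] = CW_MIN                  # reset window after success
            sent += 1
        else:                               # simultaneous expiry -> collision
            collisions += 1
            for i in winners:
                cw[i] = min(2 * cw[i] + 1, CW_MAX)
    return sent, collisions

if __name__ == "__main__":
    for n in (2, 5, 10):
        delivered, coll = contend(n)
        print(f"{n} stations: {delivered} frames delivered, {coll} collisions")

Running the sketch shows the expected trend: as the number of contending stations grows, the number of collisions (and hence the average backoff) grows with it.
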

2. Wireless Networking Components
IEEE 802.11b wireless networking consists of the following components:
Stations: A station (STA) is a network node that is equipped with a wireless
network device. A personal computer with a wireless network adapter is known as
a wireless client. Wireless clients can communicate directly with each other or
through a wireless access point (AP). Wireless clients are mobile.
Wireless AP: A wireless AP is a wireless network node that acts as a bridge between
STAs and a wired network. A wireless AP contains:
1. At least one interface that connects the wireless AP to an existing wired
network (such as an Ethernet backbone).
2. A wireless network device with which it creates wireless connections
with STAs.
3. IEEE 802.1D bridging software, so that it can act as a transparent
bridge between the wireless and wired networks.
The wireless AP is similar to a cellular phone network's base station. Wireless
clients communicate with both the wired network and other wireless clients
through the wireless AP. Wireless APs are not mobile and act as peripheral bridge
devices that extend a wired network.
Ports: A port is a channel of a device that can support a single point-to-point
connection. For IEEE 802.11b, a port is an association, a logical entity over which
a single wireless connection is made. A typical wireless client with a single
wireless network adapter has one port and can support only one wireless
connection. A typical wireless AP has multiple ports and can simultaneously
support multiple wireless connections. The logical connection between a port on
the wireless client and the port on a wireless AP is a point-to-point bridged LAN
segment, similar to an Ethernet-based network client that is connected to an
Ethernet switch.
3. IEEE 802.11b Operating Modes (network topology)
IEEE 802.11 defines two operating modes: Ad hoc mode and Infrastructure
mode. The basic topology of an 802.11 network is shown in Figure 1. A Basic
Service Set (BSS) consists of two or more wireless nodes, or stations (STAs),
which have recognized each other and have established communications. In the
most basic form, stations communicate directly with each other on a peer-to-peer
level sharing a given cell coverage area. This type of network is often formed on a
temporary basis, and is commonly referred to as an ad hoc network, or
Independent Basic Service Set (IBSS).

In most instances, the BSS contains an Access Point (AP). When an AP is
present, stations do not communicate on a peer-to-peer basis. All communications
between stations or between a station and a wired network client go through the
AP. APs are not mobile, and form part of the wired network infrastructure. A
BSS in this configuration is said to be operating in infrastructure mode.

The Extended Service Set (ESS) shown in Figure 2 consists of a series of
overlapping BSSs (each containing an AP) connected together by means of a
Distribution System (DS). Although the DS could be any type of network, it is
almost invariably an Ethernet LAN. Mobile nodes can roam between APs and
seamless campus-wide coverage is possible.
4. IEEE 802.11b Operation Basics
When a wireless adapter is turned on, it begins to scan across the
wireless frequencies for wireless APs and other wireless clients in ad hoc mode.
Assuming that the wireless client is configured to operate in infrastructure mode,
the wireless adapter chooses a wireless AP with which to connect. This selection is
made automatically using the SSID, signal strength and frame error rate
information. Next, the wireless adapter switches to the assigned channel of the
selected wireless AP and negotiates the use of a port. This is known as
establishing an association.
If the signal strength of the wireless AP is too low, the error rate too high,
or if instructed by the operating system (in the case of Windows XP), the wireless
adapter scans for other wireless APs to determine whether a different wireless AP
can provide a stronger signal or lower error rate. If such a wireless AP is located,
the wireless adapter switches to the channel of that wireless AP and negotiates the
use of a port. This is known as reassociation.
Reassociation with a different wireless AP can occur for several reasons.
The signal can weaken as either the wireless adapter moves away from the
wireless AP or the wireless AP becomes congested with too much traffic or
interference. By switching to another wireless AP, the wireless adapter can
distribute the load to other wireless APs, increasing the performance for other
wireless clients.
5. Radio Technology in 802.11
IEEE 802.11 provides for two variations of the PHY, namely the two
RF technologies Direct Sequence Spread Spectrum (DSSS) and
Frequency Hopped Spread Spectrum (FHSS). The DSSS and FHSS PHY options
were designed specifically to conform to FCC regulations (FCC 15.247) for
operation in the 2.4 GHz ISM band, which has worldwide allocation for
unlicensed operation.

DSSS systems use technology similar to GPS satellites and some types of
cell phones. Each information bit is combined via an XOR function with a longer
Pseudo-random Noise (PN) sequence as shown in Figure 3. The result is a
high speed digital stream which is then modulated onto a carrier frequency using
Differential Phase Shift Keying (DPSK).



When receiving the DSSS signal, a matched filter correlator is used as
shown in Figure 4. The correlator removes the PN sequence and recovers the
original data stream. At the higher data rates of 5.5 and 11 Mbps, DSSS receivers
employ different PN codes and a bank of correlators to recover the transmitted
data stream. The high-rate modulation method is called Complementary Code
Keying (CCK). The effects of using PN codes to generate the spread spectrum
signal are shown in Figure 5.
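
The spreading and correlation steps can be sketched in a few lines of Python. The sketch below assumes one common form of the 11-chip Barker sequence and ideal chip timing; with the data mapped to plus or minus one, multiplying by the chip sequence is the bipolar equivalent of the XOR described above. Modulation, filtering and CCK are omitted.

import random

# One common form of the 11-chip Barker sequence used for 1 and 2 Mbps DSSS
# (chip values shown here for illustration).
BARKER = [+1, -1, +1, +1, -1, +1, +1, +1, -1, -1, -1]

def spread(bits):
    """Spreading: each data bit (0/1 mapped to -1/+1) multiplies the whole
    PN chip sequence, producing 11 chips per bit."""
    chips = []
    for b in bits:
        symbol = 1 if b else -1
        chips.extend(symbol * c for c in BARKER)
    return chips

def despread(chips):
    """Matched-filter correlator: correlate each 11-chip block with the same
    PN sequence and take the sign of the result."""
    bits = []
    for i in range(0, len(chips), len(BARKER)):
        block = chips[i:i + len(BARKER)]
        corr = sum(r * c for r, c in zip(block, BARKER))
        bits.append(1 if corr > 0 else 0)
    return bits

if __name__ == "__main__":
    data = [random.randint(0, 1) for _ in range(16)]
    rx = [c + random.uniform(-1.5, 1.5) for c in spread(data)]  # add wideband noise
    print("recovered correctly:", despread(rx) == data)

Even with noise added to every chip, the correlation over 11 chips recovers the original bits, which is the processing-gain effect the figures describe.
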

As shown in Figure 5a, the PN sequence spreads the transmitted bandwidth
of the resulting signal (thus the term, spread spectrum) and reduces peak power.
Note however, that total power is unchanged. Upon reception, the signal is
correlated with the same PN sequence to reject narrow band interference and
recover the original binary data (Fig. 5b). Regardless of whether the data rate is 1,
2, 5.5, or 11 Mbps, the channel bandwidth is about 20 MHz for DSSS systems.
Therefore, the ISM band will accommodate up to three non-overlapping channels.



6. Multiple Access
The basic access method for 802.11 is the Distributed Coordination
Function (DCF) which uses Carrier Sense Multiple Access / Collision Avoidance
(CSMA / CA). This requires each station to listen for other users. If the channel is
idle, the station may transmit. However, if it is busy, each station waits until
transmission stops, and then enters into a random backoff procedure. This
prevents multiple stations from seizing the medium immediately after completion
of the preceding transmission.

Packet reception in DCF requires acknowledgement as shown in Figure 7.
The period between completion of packet transmission and start of the ACK frame
is one Short Inter Frame Space (SIFS). ACK frames have a higher priority than
other traffic. Fast acknowledgement is one of the salient features of the 802.11
standard, because it requires ACKs to be handled at the MAC sublayer.

The underlying assumption is that every station can hear all other stations. This
is not always the case. Referring to Figure 8, the AP is within range of STA-A,
but STA-B is out of range. STA-B would not be able to detect transmissions from
STA-A, and the probability of collision is greatly increased. This is known as the
Hidden Node.
To combat this problem, a second carrier sense mechanism is available. Virtual
Carrier Sense enables a station to reserve the medium for a specified period of
time through the use of RTS/CTS frames.
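
The virtual carrier sense idea can be illustrated with a minimal Python sketch: a hidden station that never hears the sender still defers, because the CTS it overhears from the AP carries a duration value that loads its Network Allocation Vector (NAV). The class, field names and the 1200 microsecond duration below are illustrative assumptions, not the 802.11 frame format.

# Minimal sketch of virtual carrier sense (assumed topology: STA-A and the
# hidden node STA-B both hear the AP, but cannot hear each other).

class Station:
    def __init__(self, name):
        self.name = name
        self.nav = 0          # Network Allocation Vector, in microseconds

    def hear(self, frame):
        """Any overheard RTS/CTS updates the NAV with its duration field."""
        if frame["type"] in ("RTS", "CTS"):
            self.nav = max(self.nav, frame["duration_us"])

    def can_transmit(self):
        # physical carrier sense omitted; defer whenever the NAV is non-zero
        return self.nav == 0

sta_b = Station("STA-B")                      # hidden from STA-A
cts_from_ap = {"type": "CTS", "duration_us": 1200}
sta_b.hear(cts_from_ap)                       # STA-B overhears the AP's CTS
print(sta_b.name, "may transmit:", sta_b.can_transmit())   # -> False: medium reserved
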
7. IEEE 802.11 Security
The IEEE 802.11 standard defines the following mechanisms for wireless
security:
Authentication through the open system and shared key authentication
types
Data confidentiality through Wired Equivalent Privacy (WEP)
Open system authentication is used when no authentication is required. Some
wireless APs allow the configuration of the MAC addresses of allowed wireless
clients. However, this is not secure because the MAC address of a wireless client
can be spoofed.
Shared key authentication verifies that an authenticating wireless client has
knowledge of a shared secret. This is similar to preshared key authentication in
Internet Protocol security (IPsec). The 802.11 standard currently assumes that the
shared key is delivered to participating STAs through a secure channel that is
independent of IEEE 802.11. In practice, this secret is manually configured for
both the wireless AP and client. Because the shared key authentication secret must
be distributed manually, this method of authentication does not scale to a large
infrastructure mode network (for example, corporate campuses and public places,
such as malls and airports).
Because of the inherent nature of wireless networks, securing physical access to the
network is difficult. Because a physical port is not required, anyone within range
of a wireless AP can send and receive frames, as well as listen for other frames
being sent. Without WEP, eavesdropping and remote packet sniffing would be
very easy. WEP is defined by the IEEE 802.11 standard and is intended to provide
the level of data confidentiality that is equivalent to a wired network.
WEP provides data confidentiality services by encrypting the data sent
between wireless nodes. WEP encryption uses the RC4 symmetrical stream cipher
with either a 40-bit or 104-bit encryption key. WEP provides data integrity from
random errors by including an integrity check value (ICV) in the encrypted portion
of the wireless frame.
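
The following Python sketch shows the mechanism conceptually: RC4 key scheduling and keystream generation, a 24-bit IV prepended to the shared key, and a CRC-32 ICV appended to the payload before encryption. It is a minimal sketch of the idea, not a faithful reproduction of the 802.11 frame format, and it inherits all of the WEP weaknesses discussed in this section.

import os, struct, zlib

def rc4(key, data):
    """RC4 stream cipher: key scheduling (KSA) followed by keystream
    generation (PRGA); the keystream is XORed with the data."""
    S = list(range(256))
    j = 0
    for i in range(256):                       # KSA
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = bytearray()
    for byte in data:                          # PRGA + XOR
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def wep_encrypt(shared_key, plaintext):
    """Conceptual WEP framing: a 24-bit IV is prepended to the 40- or 104-bit
    shared key, a CRC-32 ICV is appended to the payload, and the result is
    RC4-encrypted. The IV travels in the clear alongside the ciphertext."""
    iv = os.urandom(3)
    icv = struct.pack("<I", zlib.crc32(plaintext) & 0xFFFFFFFF)
    return iv, rc4(iv + shared_key, plaintext + icv)

def wep_decrypt(shared_key, iv, ciphertext):
    data = rc4(iv + shared_key, ciphertext)    # RC4 is its own inverse
    payload, icv = data[:-4], data[-4:]
    ok = struct.unpack("<I", icv)[0] == (zlib.crc32(payload) & 0xFFFFFFFF)
    return payload, ok

if __name__ == "__main__":
    key = bytes.fromhex("0123456789")          # example 40-bit shared key
    iv, ct = wep_encrypt(key, b"hello wireless world")
    print(wep_decrypt(key, iv, ct))            # (b'hello wireless world', True)
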
However, one significant problem remains with WEP. The determination and
distribution of WEP keys are not defined and must be distributed through a secure
channel that is independent of 802.11. Obviously, this key distribution system
does not scale well to an enterprise organization.
Additionally, there is no defined mechanism to change the WEP key, either
per authentication or at periodic intervals over the duration of an authenticated
connection. All wireless APs and clients use the same manually configured WEP
key for multiple connections and authentications. With multiple wireless clients
sending large amounts of data, it is possible for a malicious user to remotely
capture large amounts of WEP cipher text and use cryptanalysis methods to
determine the WEP key.
The lack of WEP key management, to both automatically determine a WEP key
and change it frequently, is a principal limitation of 802.11 security, especially
with a large number of wireless clients in infrastructure mode. The lack of
automated authentication and key determination services also affects operation in
ad hoc mode.
The combination of a lack of both adequate authentication methods and key
management for encryption of wireless data has led the IEEE to adopt the IEEE
802.1X Port-Based Network Access Control standard for wireless connections.

8. The Wireless Ethernet Compatibility Alliance
The recently adopted Complementary Code Keying (CCK) waveform
delivers speeds of 5.5 and 11 Mbps in the same occupied bandwidth as current
generation 1 and 2 Mbps DSSS radios and will be fully backward compatible.
Now that a standard is firmly in place, WLANs will become a part of the
enterprise networking landscape within the next twelve months.
The mission of the Wireless Ethernet Compatibility Alliance is to provide
certification of compliance with the IEEE 802.11 Standard and to ensure that
products from multiple vendors meet strict requirements for interoperability. With
cross vendor interoperability assured, WLANs are now able to fulfill the promise
of high speed mobile computing.


Conclusion:
The use of wireless LANs is expected to increase dramatically in the future
as businesses discover the enhanced productivity and the increased mobility that
wireless communications can provide in a society that is moving towards more
connectionless connections.
In conclusion, the panelists felt that hurdles in deploying WLANs can be
overcome. The cost of wireless services is already falling. The issue now is to lower
the cost of the devices needed to access the WLAN. Large chip design
companies can make use of this opportunity to get into the marketplace. And Wi-
Fi cannot move ahead quickly without support from the private and government
sectors.

Bibliography:
1. Data over wireless networks - Gilbert Held
2. Electronics for you (magazine) - June 2003 & February 2003
3. Electronics today (magazine) - March 2003
4. A Technical tutorial on the IEEE 802.11 protocol - Pablo Brenner

WIRELESS COMMUNICATIONS IN VIEW OF HIGH-ALTITUDE
PLATFORMS
- AN APPLICATION







V. KOTESWAR REDDY C. SUBRAMANYAM REDDY
C.R. ENGINEERING COLLEGE, TIRUPATI.
BTECH III YEAR BRANCH: E.C.E




Email id: y4k_nani_shankar@yahoo.com
Veluru_k2006@yahoo.com

















ABSTRACT

The demand for high-capacity wireless services is bringing increasing challenges, especially
for delivery of the last mile. Terrestrially, the need for line-of-sight propagation paths
represents a constraint unless very large numbers of base-station masts are deployed, while
satellite systems have capacity limitations. An emerging solution is offered by high-altitude
platforms (HAPs) operating in the stratosphere at altitudes of up to 22 km to provide
communication facilities that can exploit the best features of both terrestrial and satellite
schemes. This paper outlines the application and features of HAPs, and some specific
development programmes. Particular consideration is given to the use of HAPs for delivery of
future broadband wireless communications.

The challenge for wireless communications
As the demand grows for communication services, wireless solutions are becoming increasingly
important. Wireless can offer high-bandwidth service provision without reliance on fixed infrastructure
and represents a solution to the last mile problem, i.e. delivery directly to a customer's premises,
while in many scenarios wireless may represent the only viable delivery mechanism.

Wireless is also essential for mobile services, and cellular networks (e.g. 2nd generation mobile) are
now operational worldwide. Fixed wireless access (FWA) schemes are also becoming established to
provide telephony and data services to both business and home users.

The emerging market is for broadband data provision for multimedia, which represents a convergence
of high speed Internet (and e-mail), telephony, TV, video-on demand, sound broadcasting etc.
Broadband fixed wireless access (B-FWA) schemes aim to deliver a range of multimedia services to the
customer at data rates of typically at least 2 Mbit/s. B-FWA should offer greater capacity to the user
than services based on existing wire lines, such as ISDN or xDSL, which are in any event unlikely to be
available to all customers. The alternative would be cable or fibre delivery, but such installation may be
prohibitively expensive in many scenarios, and this may represent a barrier to new service providers. B-
FWA is likely to be targeted initially at business, including SME (small-to-medium enterprise) and SOHO
(small office/ home office) users, although the market is anticipated to extend rapidly to domestic
customers.

However delivering high-capacity services by wireless also presents a challenge, especially as the radio
spectrum is a limited resource subject to increasing pressure as demand grows. To provide bandwidth to
a large number of users, some form of frequency reuse strategy must be adopted, usually based around
a fixed cellular structure. Fig. 1a illustrates the cellular concept, where each hexagon represents a cell
having a base station near its centre and employing a different frequency or group of frequencies
represented by the colour. These frequencies are reused only at a distance, the reuse distance being a
function of many factors, including the local propagation environment and the acceptable signal-to-
interference-plus-noise ratio. To provide increased capacity, the cell sizes may be reduced, thus
allowing the spectrum to be reused more often within a given geographical area, as illustrated in Fig.
1b. This philosophy leads to the concept of micro cells for areas of high user density, with a base-
station on perhaps every street corner. Indeed, taking the concept to its extreme limit, one might
envisage one cell per user, the evident price in either case being the cost and environmental impact of a
plethora of base-station antennas, together with the task of providing the backhaul links to serve them,
by fiber or other wireless means, and the cost of installation.

Pressure on the radio spectrum also leads to a move towards higher frequency bands, which are less
heavily congested and can provide significant bandwidth. The main allocations for broadband are in the
28GHz band (26GHz in some regions), as well as at 38GHz1. Existing broadband schemes may be
variously described as LMDS (Local Multipoint Distribution Services) or MVDS (Multipoint Video
Distribution Services)2,3; these are



flexible concepts for delivery of broadband services, although they may be encompassed in the generic
term B-FWA, or simply BWA. The use of these millimeter wavelengths implies line-of-sight
propagation, which represents a challenge compared with lower frequencies. Thus local obstructions
will cause problems and each customer terminal needs to see a base-station; i.e. a microcellular
structure is essential for propagation reasons. This again implies a need for a very large number of base-
stations. A solution to these problems might be very tall base station masts with line-of-sight to users.
However, these would be not only costly but also environmentally unacceptable. An alternative
delivery mechanism is via satellite, which can provide line-of-sight communication to many users.
Indeed, broadband services from geostationary (GEO) satellites are projected to represent a significant
market over the next few years4. However, there are limitations on performance due partly to the range
of c. 40000 km, which yields a free-space path loss (FSPL) of the order of 200dB, as well as to physical
constraints of on-board antenna dimensions. The latter leads to a lower limit for the spot-beam (i.e. cell)
diameter on the ground, and these minimum dimensions constrain the frequency reuse density and
hence the overall capacity. Additionally, the high FSPL requires sizeable antennas at ground terminals
to achieve broadband data rates. A further downside is the lengthy propagation delay over a
geostationary satellite link of 0.25 s, which not only
is troublesome for speech but also may cause difficulties with some data protocols. Low earth orbit
(LEO) satellites may circumvent some of these limitations in principle, but suffer from complexities of
rapid handover, not only between cells but also between platforms. The need for large numbers of LEO
satellites to provide continuous coverage is also a significant economic burden, and such schemes have
yet to prove commercially successful.
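
The contrast between the two geometries can be made concrete with the standard free-space path loss formula, FSPL(dB) = 20 log10(4 pi d f / c). The short Python sketch below uses the c. 40000 km GEO range quoted above and a 21 km HAP altitude representative of the stratospheric heights discussed later in this paper, evaluated at 28 GHz (an assumed example frequency drawn from the broadband allocations mentioned earlier); it also prints the corresponding one-way propagation delays.

import math

C = 3.0e8  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    """Free-space path loss: FSPL(dB) = 20*log10(4*pi*d*f / c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

def one_way_delay_ms(distance_m):
    return distance_m / C * 1e3

if __name__ == "__main__":
    freq = 28e9                                    # 28 GHz broadband band (assumed)
    for name, d in (("GEO satellite", 40_000e3), ("HAP", 21e3)):
        print(f"{name:14s}  FSPL = {fspl_db(d, freq):6.1f} dB,"
              f"  one-way delay = {one_way_delay_ms(d):7.3f} ms")

At these assumed values the GEO path loss comes out around 213 dB against roughly 148 dB for the HAP, with a one-way delay of about 133 ms against a small fraction of a millisecond, which is the essence of the argument developed in the next section.
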

AERIAL PLATFORMS: A SOLUTION?

A potential solution to the wireless delivery problem lies in aerial platforms, carrying communications
relay payloads and operating in a quasi-stationary position at altitudes up to some 22km. A payload can
be a complete base-station, or simply a transparent transponder, akin to the majority of satellites. Line-
of-sight propagation paths can be provided to most users, with a modest FSPL, thus enabling services
that take advantage of the best features of both terrestrial and satellite communications.

A single aerial platform can replace a large number of terrestrial masts, along with their
associated costs, environmental impact and backhaul constraints. Site acquisition problems are also
eliminated, together with installation maintenance costs, which can represent a major overhead in many
regions of the world.

The platforms may be aeroplanes or airships (essentially balloons, termed aerostats) and may be
manned or unmanned, with autonomous operation coupled with remote control from the ground. Of
most interest are craft designed to operate in the stratosphere at an altitude of typically between 17 and
22 km, which are referred to as high-altitude platforms (HAPs).

While the term HAP may not have a rigid definition, we take it to mean a solar-powered and unmanned
aeroplane or airship, capable of long endurance on-station, e.g. several months or more. Another term
in use is the HALE (High Altitude Long Endurance) platform, which implies craft capable of lengthy
on-station deployment of perhaps up to a few years.


HAPs are now being actively developed in a number of programmes world-wide, and the surge of
recent activity reflects both the lucrative demand for wireless services and advances in platform
technology, such as in materials, solar cells and energy storage.















Communication applications

Fig. depicts a general HAP communications scenario. Services can be provided from a single HAP with
up- and down-links to the user terminals, together with backhaul links as required into the fibre
backbone. Inter-HAP links may serve to connect a network of HAPs29, while links may also be
established if required via satellite directly from the HAP. The coverage region served by a HAP is
essentially determined by line-of-sight propagation (at least at the higher frequency bands) and the
minimum elevation angle.

BWA applications
3G/2G applications
HAP networks


3G/2G applications
HAPs may offer the opportunity to deploy next-generation (3G) mobile cellular services, or indeed current
(2G) services30, and use of the IMT-2000 (3G) bands from HAPs has been specifically authorized by
the ITU. A single base-station on the HAP with a wide-beamwidth antenna could serve a wide area,
which may prove advantageous over sparsely populated regions. Alternatively, a number of smaller
cells could be deployed with appropriate directional antennas. The benefits would include rapid roll-out
covering a large region, relatively uncluttered propagation paths, and elimination of much ground-
station installation.

HAP networks

A number of HAPs may be deployed in a network to cover an entire region. For example, Fig. shows
several HAPs serving the UK. Inter-HAP links may be accomplished at high EHF frequencies or using
optical links; such technology is well established for satellites and should not present major problems.

Developing world applications

HAPs offer a range of opportunities for services in the developing world. These include rural telephony,
broadcasting and data services. Such services may be particularly valuable where existing ground
infrastructure is lacking or difficult.

Emergency or disaster applications

HAPs can be rapidly deployed to supplement existing services in the event of a disaster (e.g.
earthquake, flood), or as restoration following failure in a core network.

Military communications

The attractions of HAPs for military communications are self-evident, with their ability for rapid
deployment. They can act as nodes within existing military wireless networks, or as surrogate
satellites, in this case carrying a satellite payload and operating with conventional satcom terminals.
Besides the ability to provide communications where none might exist, there is a benefit in that their
relatively close range demands only limited transmit power from the ground terminals, and this
provides enhanced LPI (Low Probability of Interception) advantages. Within military scenarios,
airships proper are currently confined largely to use at very low heights for mine clearance operations,
and their application for communications remains to be fully exploited. Although it might be thought
that airship HAPs are vulnerable to enemy attack, they do possess an advantage in that despite their
large size their envelope is largely transparent to microwaves and they present an extremely low radar
cross-section.

Advantages of HAP communications
HAP communications have a number of potential benefits, as summarized below.

Large-area coverage (compared with terrestrial systems). The geometry of HAP deployment means
that long-range links experience relatively little rain attenuation compared to terrestrial links over the
same distance, due to a shorter slant path through the atmosphere. At the shorter millimeter-wave bands
this can yield significant link budget advantages within large cells (see Fig. 11).

Flexibility to respond to traffic demands. HAPs are ideally suited to the provision of centralized
adaptable resource allocation, i.e. flexible and responsive frequency reuse patterns and cell sizes,
unconstrained by the physical location of base-stations.


Low cost
Incremental Deployment

Rapid Deployment

Platform and payload upgrading
Environment friendly






REFERENCES :
1. www.isro.com
2. Helinet project



ACKNOWLEDGEMENT :

Advanced Centre for Atmospheric Sciences (ACAS), S.V.University
Co-ordinator, Dr. S. Vijaya Bhaskar Rao
CREC Management

Thanking you






Submitted by:


T. C. PHALGUNA KUMAR G. RAMESWARA RAO

phalgunengg_ece@yahoo.com ramesh650555052@gmail.com








THIRD YEAR
ELECTRONICS & COMMUNICATION ENGINEERING
SIR.C.R.REDDY COLLEGE OF ENGINEERING
ELURU 534007
WIRELESS M2M SYSTEM
ARCHITECTURE
FOR DATA ACQUISITION AND
CONTROL


Abstract:

The rapid technological advance in the
fields of communication and computation allows
a new form of machine interconnectivity:
wireless M2M technology. The article presents a
solution for a distributed wireless M2M system
architecture for data acquisition and control.
Some of the system's applications are reviewed.
The system has a hierarchical structure including
three types of modules, specialized in data
collecting, analysis and control, and
communication management. The data collecting
modules provide measurement of different physical
parameters and support wireless and wired
communication interfaces.
The control module changes actuator
parameters based on the data measurements it
receives from the data collecting modules and the
selected control strategy. Communication modules
provide possibilities for monitoring,
analysis and control of the system via the Internet
and for cooperation with other M2M stations. The
system supports Bluetooth and ZigBee to GPRS
gateway capabilities.
This paper presents a proposal for the
design and implementation of a wireless M2M
system for data acquisition and control (Data
Collection and Control Virtual Network,
DCCVN). The design integrates a data collection
module, a control module, and a communication
(gateway) module. A detailed description of the
functionality and structure of each system design
component is presented.

1. Introduction:

The recent convergence of the Internet and
wireless communications has become a premise
for the advent of wireless M2M (mobile to
machine, machine to mobile) technology. It is a
generic name for the emerging third generation of
computing, the post-PC generation. This third
generation involves low-cost, scalable and
reliable inter-machine (systems with mechanical,
electrical or electronic characteristics)
interaction via wireless communication standards
like GSM/GPRS, IEEE 802.11, Bluetooth
(supporting communication links between devices
over short distances) and IEEE 802.15.4 (used for
low-speed data transfer between low-power-
consumption devices).

M2M allows a wide variety of
machines to become nodes of personal wireless
networks and the global Internet, which makes it
possible to develop monitoring and remote control
applications. This will decrease the costs of the
human resources involved and will make
machines more intelligent and autonomous.
Wireless M2M technology gives a new
direction to the development of systems
for data acquisition and control. The systems are
no longer only passive data collecting modules
that deliver sensed data to some central machine
for analysis and data processing over a
proprietary network; they are becoming more
and more autonomous in decision making for
control and in inter-machine coordination.
Although the technology is still in its infancy,
market analysts predict its success, taking into
consideration the fact that by the year 2010 there
will be over 1 billion mobile devices supporting
M2M connectivity. M2M systems can
interface with virtually any type of mechanical,
electrical or electronic system for an unlimited
number of specific applications, including:
Access control and security
Vehicle tracking systems
Home automation systems
Automotive systems
Robotics
Medical systems

2. SYSTEM CHARACTERISTICS
AND ARCHITECTURE:

2.1 System Characteristics

The wireless M2M system for data acquisition and
control belongs to the class of distributed,
heterogeneous, networked systems for data
collecting, data processing and process control. It
provides the following features:
Remote data monitoring and control of
subsystems via the global Internet
Dynamic configuration and software updates of
system modules via the Internet
Communication and control interfaces to
industrial microcontrollers integrated into the
machine network system
Possibilities for building data acquisition and
tracking systems
Cooperative task processing and evaluation
between subsystems
Possibility for generation and processing of
parametric queries to distributed databases
Support for a component-based software
framework
Building of heterogeneous networks based on
wireless and wired communication technologies
Service of different types of embedded devices
using wireless and wired interfaces

2.2. System Architecture

The system is designed as a hierarchical
structure of components, which service control
and data collecting tasks via wired and wireless
interfaces. It includes a set of subsystems
described in the following points.

2.2.1 Operation Station (OS)

The operation station is realized as a standard
computer configuration connected to the Internet.
The system software includes custom tools for
end-user access to the DCCVN facilities. The
operation station will be realized on
platforms such as Windows XP or Linux
using the .NET framework.

2.2.2 Data Collecting Module (DCM)

Data collecting modules include a number of
smart nodes (SN) forming Bluetooth or ZigBee
wireless networks. The modules realize functions
of the interaction of the DCCVN with the
external environment. Each of them provides
possibilities for data collection of the measured
physical parameters based on a dynamically
adaptive or fixed algorithmic scheme.

The adaptive data collecting schemes take into
account the current power consumption status of
the nodes and the network bandwidth load, and
apply different algorithms for their reduction. The
addressing scheme can be context-aware or
address-based, depending on whether the module
is running in admin mode or in data acquisition mode.
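
As a purely hypothetical illustration of such an adaptive scheme, the sketch below stretches a node's reporting interval as its battery drains or as the shared link becomes loaded; the threshold values and scaling factors are invented for the example and are not part of the described system.

# Hypothetical adaptive-sampling rule: stretch the reporting interval as the
# node battery drains or as the shared radio link becomes loaded.

BASE_INTERVAL_S = 10            # nominal reporting period (assumed)

def reporting_interval(battery_pct, link_load_pct):
    """Return the next reporting interval in seconds.

    battery_pct   -- remaining battery charge, 0..100
    link_load_pct -- estimated occupancy of the wireless link, 0..100
    """
    battery_factor = 1.0 if battery_pct > 50 else 2.0 if battery_pct > 20 else 4.0
    load_factor = 1.0 + link_load_pct / 100.0   # back off as the link fills up
    return BASE_INTERVAL_S * battery_factor * load_factor

if __name__ == "__main__":
    for batt, load in ((90, 10), (40, 50), (15, 80)):
        print(f"battery {batt:3d}%, load {load:3d}% -> "
              f"report every {reporting_interval(batt, load):5.1f} s")
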

2.2.2.1 Smart Node (SN)

Smart nodes form Bluetooth piconets or ZigBee
networks used for coordination and transmission
of the collected data to the Gateway. Each of the
smart nodes consists of sensor, communication
and processing units.

Bluetooth smart nodes offer the following
features:

Communication subsystem - provides
communication via the Bluetooth RFCOMM
protocol with other devices, forming Bluetooth
piconets. The Bluetooth protocol stack is
implemented in the KC-11 Bluetooth micro module.
It integrates an ARM7 microprocessor and an
antenna, and provides the Bluetooth serial cable
replacement profile. The micro module provides
up to 723 kb/s data rate and up to 300 m
communication range. Due to the possibility of
forming more complex networks (scatternets), this
communication range can extend to kilometres.
The Bluetooth micro module circuits are optimized
to work in low power consumption mode.
Each Bluetooth micro module can be controlled
by the processing subsystem using a custom AT
command set. The hardware interface between the
micro module and the processing subsystem is
realized with a UART port of the processor unit.
Radio data transmission is based on external or
chip antennas.
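
The UART-based control pattern described above might look like the following sketch using the pyserial package. The serial device name, baud rate and the bare "AT" probe are placeholders; the actual KC-11 command set is not documented here, so this only illustrates the general mechanism.

import serial  # pyserial: pip install pyserial

# Placeholder values: the serial device node, baud rate and the AT command
# below are illustrative, not the documented KC-11 command set.
PORT, BAUD = "/dev/ttyUSB0", 115200

def send_at(command, timeout_s=1.0):
    """Write one AT-style command over the UART and return the module's reply."""
    with serial.Serial(PORT, BAUD, timeout=timeout_s) as uart:
        uart.write((command + "\r\n").encode("ascii"))
        return uart.read_until(b"\r\n").decode("ascii", errors="replace").strip()

if __name__ == "__main__":
    print(send_at("AT"))            # most AT-style modules answer "OK"
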

Processing subsystem - It is based on a low power
consumption RISC microcontroller. The
subsystem realizes coordination tasks for the
service of the other subsystems under the control
of DiOS. Specialized software responsible for
optimal power-consumption control plays an
important role in prolonging node battery life and
supporting all node activities.

Sensor subsystem - includes passive sensors
for the measurement of different physical values:
temperature, humidity, light. They vary in
functionality depending on the use-case scenario.
The processing subsystem interfaces with the
sensor subsystem via the ADC or digital
input/output channels realized in the microcontroller.

Battery power provides for all subsystem
functions. Usually it is accomplished as an
accumulator block and is controlled by the
processing subsystem.
The ZigBee smart node provides the same system
features as the Bluetooth smart node.

It is specified with the following differences:
Its communication and processing subsystems
are integrated in a single-chip solution, the Jennic
wireless microcontroller. It integrates a 16 MHz
32-bit RISC processor optimized for low power
(3 MIPS/mA), a 2.4 GHz IEEE 802.15.4
transceiver, and incorporates a wide range of
digital and analog peripherals to interface with the
sensor subsystem. It realizes a ROM-resident
IEEE 802.15.4 communication stack and
provides ZigBee star, tree and mesh networks of
devices located at a maximum distance of 1 km,
at up to 250 kb/s data rates.

Low-level software libraries are provided for
control of node peripherals and communication
flow. There are capabilities for software design
of upper level user applications.

2.2.3 Gateway

The gateway module provides control and
localization services for the data collecting and
industrial controllers included in the machine
network. It supports Bluetooth, ZigBee, GPRS
and GPS gateway capabilities.


The gateway module's main design components
are as follows:
Communication subsystem - provides the
Bluetooth, ZigBee and GPRS interfaces to service
the data collecting module and transmit the
collected data to the operation station using the
Internet infrastructure. The subsystem has
full-duplex communication with the remote
station for control and parameter configuration.
The module's main components are the KC-11
Bluetooth micro module with external antenna,
the Jennic wireless module for ZigBee
communication and a Telit GPRS/GPS embedded
modem. All communication modules are
controlled by a RISC microcontroller.

Processing subsystem - It is based on the ATmega
2560 microcontroller. It integrates an 8-bit RISC
kernel, 256K of Flash and a number of peripheral
units. It controls the communication subsystem
units via four UART ports in interrupt-driven
mode. One of the ports is designed for the
interface with the Octoport industrial
microcontroller using RS232 or USB interfaces.
The subsystem includes 512K of external
memory used for local data storage.
A real-time OS, resident in the processor flash, is
responsible for servicing communication
protocols, peripheral protocols, low-power
task scheduling algorithms, OS updates, and
records to the system store.

Localization subsystem - provides
communication with the GPS system in order to
obtain precise location and timing information.
It is based on the Telit GPRS/GPS micro module,
which supports up to 20 GPS channels and the
NMEA data format. The processing subsystem
controls the GPS micro module via a UART interface.
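
A flavour of how the NMEA data delivered over that UART might be consumed is given by the small Python sketch below: it verifies the NMEA checksum and extracts time, latitude, longitude and satellite count from a $GPGGA sentence. The sample sentence is a commonly used textbook example, not output captured from the Telit module.

def nmea_checksum_ok(sentence):
    """Verify the XOR checksum between '$' and '*' of an NMEA sentence."""
    body, _, given = sentence.strip().lstrip("$").partition("*")
    calc = 0
    for ch in body:
        calc ^= ord(ch)
    return given != "" and calc == int(given, 16)

def parse_gga(sentence):
    """Extract UTC time, latitude and longitude from a $GPGGA sentence
    (ddmm.mmmm / dddmm.mmmm fields converted to decimal degrees)."""
    if not nmea_checksum_ok(sentence):
        raise ValueError("bad NMEA checksum")
    fields = sentence.split(",")
    lat = float(fields[2][:2]) + float(fields[2][2:]) / 60.0
    lon = float(fields[4][:3]) + float(fields[4][3:]) / 60.0
    if fields[3] == "S":
        lat = -lat
    if fields[5] == "W":
        lon = -lon
    return {"utc": fields[1], "lat": lat, "lon": lon, "satellites": int(fields[7])}

if __name__ == "__main__":
    sample = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
    print(parse_gga(sample))
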

2.2.4 Octoport Industrial Controller

The Octoport industrial controller is designed for
the realization of a gateway between different
industrial wired networks. An additional feature
of its implementation is the possibility of
communication with wireless devices via
Bluetooth or ZigBee interfaces.
Octoport supports 8 independent 1-Wire buses,
an RS485/422 industrial bus, RS232/USB,
Bluetooth and ZigBee.

The module integrates an ATmega128 as host
processor, 256K of external non-volatile
memory, a real-time clock/calendar, an LCD
indicator for operation status and four opto-isolated
inputs/outputs for direct control. The module
includes a block for control and recharge of the
accumulator battery. The system software
includes a number of drivers for different
communication protocols and peripheral device
drivers.



3. CONCLUSION:

The suggested system architecture gives
possibilities for building more intelligent and
autonomous wireless M2M systems. It resolves
communication and control problems between
machines with different technical characteristics
and makes them part of the global Internet. The
provided generic software framework allows the
system to function in different application
domains.
REFERENCE:


ZigBee Alliance, "ZigBee Overview",
http://www.zigbee.org

Milanov S., "Bluetooth Interface",
www.comexgroup.com/communications/bluetooth.htm

Atmel Corporation, "Bluetooth General Information",
White paper, http://www.atmel.com

Telit Communications, "M2M modules and GPS to
cellular functions",
http://www.electronicstalk.com/news/twt/twt101.html





A new way to network
WIRELESS MESH NETWORKING
K.Divya Lakshmi: divya_121229@yahoo.co.in
K.V.Lavanya: lavanyakv_1224@yahoo.co.in
Phone Number:08922-221588,227210
Information Technology
MVGR College of Engineering

ABSTRACT

A wireless mesh network is a mesh network created through the connection of wireless
access points installed at each network user's locale. Each user is also a provider,
forwarding data to the next node. The networking infrastructure is decentralized and
simplified because each node need only transmit as far as the next node. Wireless mesh
networking could allow people living in remote areas and small businesses operating
in rural neighborhoods to connect their networks together for affordable Internet
connections.
The last few years have seen wireless emerge as the preferred technology for
communications the world over. By using wireless solutions as a major component of
their networking strategy, organisations can revolutionise the way they communicate,
helping them cut costs, boost employee productivity, improve community service and
increase public safety. While the demand for outdoor wireless access is on the rise,
organisations are also faced with tight budgets and reduced resources in a globally
competitive environment. Wireless mesh networking, a new, potentially disruptive
technology, aims to address these challenges and extend the range of Wi-Fi technology,
as well as other wireless technologies from WiMAX to Ultra-Wideband.
This paper deals with wireless mesh networks, with the main emphasis on their network
architecture, characteristics, layers, how they work and their applications. The paper
also explains their beginnings, advantages, the critical factors influencing network
performance, and their future.


INTRODUCTION:
The Beginning:
Wireless Mesh Networking (WMN)
was developed as a quick way to set up
wireless networks during military
operations.
Since then it has grown considerably
in popularity based on its advantages in
both metropolitan and rural applications.
WMNs are being applied as Hot Zones,
which cover a broad area, such as a
downtown city district. By 2010,
municipal Wi-Fi networks will cover over
325,000 sq.km worldwide, an increase
from about 3,885 sq.km, says a new report
from ABI Research. More than one
million wireless mesh routers, generating
revenues of over $1.2 billion, will be
shipped in 2010 to service those networks,
says ABI.
Wireless mesh networking provides
opportunities to service providers as well.
Internet service providers (ISPs) view it as
an inexpensive way to compete with
incumbent service providers, and to
provide broadband access to underserved
areas.
Definition:
A Wireless mesh network is a mesh
network created through the connection of
wireless access points installed at each
network user's locale.

Wireless mesh networks (WMNs)
are undergoing rapid progress and
inspiring numerous deployments. They are
intended to deliver wireless services for a
large variety of applications in personal,
local, campus, and metropolitan areas.
WMNs are anticipated to fundamentally
resolve the limitations and to significantly
improve the performance of wireless
LANs, PANs, and MANs. They will
greatly impact the development of
wireless-fidelity (Wi-Fi), worldwide inter-
operability for microwave access
(WiMAX), Ultra Wide Band (UWB), and
wireless sensor networks.
Mesh connectivity significantly
enhances network performance, such as
fault tolerance, load balancing,
throughput, protocol efficiency; and
dramatically reduces cost. Most of
existing standard wireless networks do not
have these capabilities. For example,
standard Wi-Fi and WiMAX present
typical pointto- multipoint communication
architectures. Networking protocols for
mobile ad hoc networks (MANETs) and
wireless sensor networks have considered
multipoint-to-multipoint connectivity;
however, their design have been focused
on either issues of frequent topology
changes due to high mobility, or a power
efficiency mechanism. Protocols
considering these constraints may be
insufficient to reduce cost, enhance
functionality, and improve performance
for WMNs. Moreover, an ad hoc or
wireless sensor network protocol may not
be compatible with an existing standard
wireless network. As a consequence,
whether or not existing wireless
networking technologies are mature
enough for WMNs, is still an open
question.
NETWORK ARCHITECTURE:
WMNs consist of mesh routers and mesh
client nodes. Other than the routing
capability for gateway/repeater functions
as in a conventional router, a mesh router
contains additional routing functions to
support mesh networking. To further
improve the flexibility of mesh
networking, a mesh router is usually
equipped with multiple wireless interfaces
built on either the same or different
wireless access technologies. Compared
with a conventional router, a mesh router
can achieve the same coverage with much
lower transmission power through
multihop communications. Optionally, the
medium access control protocol in a mesh
router is enhanced with better scalability
in a multi-hop mesh environment. Two
types of mesh client nodes exist in
WMNs, namely, Type I nodes with
equivalent functions of a mesh router and
Type II nodes with simpler functions than
a mesh router. Depending on different
functionality between mesh routers and
clients, WMNs have three different
configurations:
Flat Architecture: Here only
Type I mesh client nodes are
supported, so the key functions of
mesh networking such as MAC,
routing, management, and security
protocols are the same for both
mesh routers and clients.

Hierarchical Architecture: In
this case only Type II mesh client
nodes are supported. Thus, the
functionality between mesh
routers and mesh clients is
much different. Since Type II
mesh client nodes lack necessary
functions required for mesh
networking, certain functions
must be added into mesh routers
to support these nodes.

Hybrid Architecture: This is the
general case for WMNs, because
both Type I and II mesh client
nodes are supported. Again,
certain functions must be added
into mesh routers to provide access
for Type II mesh clients, and thus
the hierarchical architecture is still
needed. In addition, a Type I client
node can establish direct mesh
communications with mesh routers
and other Type I client nodes.

CHARACTERISTICS:
The characteristics of WMNs are as
follows:
Multi-hop wireless network: An
objective in developing WMNs is to
extend the coverage range of
current wireless networks
without sacrificing the channel
capacity. Another objective is to
provide non-line-of-sight (NLOS)
connectivity among the users
without direct line-of-sight
(LOS) links. To meet these
requirements, mesh-style
multi-hopping is indispensable,
which achieves higher
throughput without sacrificing
the effective radio range via shorter
link distances, less interference
between the nodes, and more
efficient frequency re-use.
Support for ad hoc networking,
and capability for self-forming,
self-healing, and self-
organization: WMNs enhance
network performance because of
their flexible network architecture,
easy deployment and configuration,
fault tolerance, and mesh
connectivity, i.e., multipoint-to-
multipoint communications [128].
Due to these features, WMNs have
a low upfront investment
requirement, and the network can
grow gradually as needed.
Mobility dependence on the type
of mesh nodes.
Providing both backhaul access
to external networks and peer-to-
peer (P2P) communication
within the internal network
Dependence of power-
consumption constraints on the
type of mesh nodes. Mesh routers
usually do not have strict
constraints on power consumption.
However, mesh clients may
require power efficient protocols.
As an example, a mesh-capable
sensor [113,114] requires its
communication protocols to be
power efficient. Thus, the MAC or
routing protocols optimized for
mesh routers may not be
appropriate for mesh clients, such
as sensors, because power
efficiency is the primary concern
for wireless sensor networks.
Compatible and interoperable
with existing wireless networks.
For example, WMNs built based
on IEEE 802.11 technologies
[133,69] must be compatible with
IEEE 802.11 standards in the
sense of supporting both mesh-
capable and conventional Wi-Fi
clients. Such WMNs also need to
be interoperable with other
wireless networks such as
WiMAX and cellular networks.
LAYERS:
Physical Layer : The key functions of
physical layer techniques involve two
aspects: efficient spectrum utilization and
robustness to interference, fading, and
shadowing. In order to increase capacity
and mitigate the impairment by fading,
delay-spread, and co-channel interference,
antenna diversity and smart antenna
techniques can be used in WMNs.

Medium Access Control: MAC
protocols for WMNs have the following
differences compared to classical
counterparts for wireless networks:
MAC for WMNs is concerned
with more than one hop
communication.
MAC is distributed and
cooperative and works for
multipoint-to-multipoint
communication.
Network self-organization is
needed for the MAC.
Mobility affects the performance
of MAC.
For single-Channel MAC, there are three
approaches: (i) improving existing MAC
protocols, (ii) Cross-layer design with
advanced physical layer techniques, and
(iii) proposing innovative MAC protocols.
A multi-channel MAC can be
implemented on several different hardware
platforms, which also impacts the design
of the MAC. A multi-channel MAC may
belong to one of the following categories:
(i) multi-Channel single-transceiver MAC,
(ii) multi-channel multi-transceiver MAC,
and (iii) multi-radio MAC.
Network Layer :IP has been accepted as
a network layer protocol for many wireless
networks including WMNs. However,
routing protocols for WMNs are different
from those in wired networks and cellular
networks. Since WMNs share common
features with mobile ad hoc networks, the
routing protocols developed for them can
be applied to WMNs. Despite the
availability of several routing protocols for
MANETs, the design of routing protocols
for WMNs is still an active research area
because the existing routing protocols treat
the underlying MAC protocol as a
transparent layer. However, the cross-layer
interaction must be considered to improve
the performance of the routing protocols in
WMNs. The routing protocol for WMNs
must capture the following features:
Performance Metrics.
Fault tolerance with link failures.
Load Balancing.
Scalability.
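
One way to make the performance-metrics point above concrete is the expected transmission count (ETX) metric often used in mesh routing research: each link is weighted by the reciprocal of the product of its forward and reverse delivery ratios, and the route with the lowest total weight is chosen. The Python sketch below is an illustration only, not part of any protocol cited in this paper, and the three-node topology is invented; it shows how such a metric can prefer a clean two-hop path over a lossy direct link.

import heapq

def etx(delivery_fwd, delivery_rev):
    """Expected Transmission Count for one link: 1 / (df * dr), where df and
    dr are the measured forward and reverse delivery ratios."""
    return 1.0 / (delivery_fwd * delivery_rev)

def best_route(links, src, dst):
    """Dijkstra over per-link ETX values: the chosen path minimises the total
    expected number of transmissions, not simply the hop count."""
    graph = {}
    for (a, b), cost in links.items():
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))   # links assumed bidirectional
    queue, seen = [(0.0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

if __name__ == "__main__":
    # hypothetical mesh: a lossy one-hop link versus a clean two-hop path
    links = {
        ("A", "C"): etx(0.55, 0.60),    # direct but lossy
        ("A", "B"): etx(0.95, 0.95),
        ("B", "C"): etx(0.95, 0.95),
    }
    print(best_route(links, "A", "C"))  # prefers the two-hop route via B
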
Transport Layer :In WMNs, due to
multihop and ad hoc features, one of the
critical problems causing TCP
performance degradation is the network
asymmetry which is defined as the
situation in which the forward direction of
a network is significantly different from
the reverse direction in terms of
bandwidth, loss rate, and latency. The
packet loss rates and latencies may be the
sources for asymmetry in WMNs because
the TCP data and ACK packets may take
two different paths with different packet
loss rates and latencies. Even if TCP data
and ACK packets may take the same path,
their actual packet loss rates and latencies
may be significantly different due to the
unfairness in the link layer protocol and in
the large variation of measured RTTs
(Round Trip Times).

Application Layer: Applications
determine the necessity to deploy
WMNs. Thus, to discover and develop
innovative applications is the key to
success of WMNs. Application Layer
Protocols consist of management
protocols that maintain operation and
monitor performance of WMNs.
Network Management: Many
functions are performed in a
network management protocol.
The statistics in the MIB
(management information base) of
mesh nodes, especially mesh
routers, need to be reported to one
or several servers in order to
continuously monitor the network
performance. Data processing
algorithms in the performance
monitoring software on the server
analyze these statistical data and
determine potential abnormality.
Authentication, Authorization, and
Accounting: Authentication,
authorization, and accounting
(AAA) is usually performed
through a centralized server.
However, a centralized scheme
like RADIUS is not scalable in
WMNs; the dynamic and multi-
hop ad hoc network topology
significantly limits the efficiency
of any centralized schemes.
HOW IT WORKS:
Wireless mesh networking is mesh
networking implemented over a Wireless
LAN. Mesh networking is a way to route
data, voice and instructions between
nodes. A mesh network is a networking
technique that allows inexpensive peer
network nodes to supply backhaul
services to other nodes in the same
network.





A mesh network effectively extends a
network by sharing access to higher cost
network infrastructure. It differs from
other networks in that the component parts
can all connect to each other. This type of
Internet infrastructure is decentralised,
relatively inexpensive, and very reliable
and resilient, as each node need only
transmit as far as the next node. Nodes act
as repeaters to transmit data from nearby
nodes to peers that are too far away to
reach, resulting in a network that can span
large distances, especially over rough or
difficult terrain. If one node drops out of
the network, due to hardware failure or
any other reason, its neighbors simply find
another route.
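
The self-healing behaviour described above can be illustrated with a toy Python example: a breadth-first search over a small, hypothetical five-node mesh finds a route to the gateway, and after one relay node fails the same search simply returns a route through the remaining neighbours.

from collections import deque

def find_route(adjacency, src, dst):
    """Breadth-first search: returns any shortest hop-count path, or None."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in adjacency.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

if __name__ == "__main__":
    # hypothetical five-node mesh
    mesh = {
        "gateway": ["n1", "n2"],
        "n1": ["gateway", "n2", "n3"],
        "n2": ["gateway", "n1", "n4"],
        "n3": ["n1", "n4"],
        "n4": ["n2", "n3"],
    }
    print("normal route:", find_route(mesh, "n3", "gateway"))
    mesh.pop("n1")                                  # node n1 fails...
    for nbrs in mesh.values():
        if "n1" in nbrs:
            nbrs.remove("n1")
    print("after failure:", find_route(mesh, "n3", "gateway"))  # ...re-routed via n4 and n2
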
Mesh networks are also extremely reliable,
as each node is connected to several other
nodes. The technology also allows
multiple buildings or remote locations to
share a single high-speed connection to the
Internet without cabling or dedicated lines.
Extra capacity can be installed by simply
adding more nodes.
Mesh networks may involve either fixed
or mobile devices. The solutions are as
diverse as communications in difficult
environments such as emergency
situations, tunnels and oil rigs, through
battlefield surveillance, to high-speed
mobile video applications on board public
transport or real-time racing-car telemetry.
The principle is similar to the way packets
travel around the wired Internet: data
will hop from one device to another until
it reaches a given destination. Dynamic
routing capabilities included in each
device allow this to happen. To implement
such dynamic routing capabilities, each
device needs to communicate its routing
information to every device it connects
with, 'almost in real time'. Each device
then determines what to do with the data it
receives: either pass it on to the next
device or keep it. The routing algorithm
used should attempt to always ensure that
the data takes the most appropriate
(fastest) route to its destination.
The choice of radio technology for
wireless mesh networks is crucial. In a
traditional wireless network where laptops
connect to a single access point, each
laptop has to share a fixed pool of
bandwidth. With mesh technology and
adaptive radio, devices in a mesh network
will only connect with other devices that
are in a set range. The advantage is that,
like a natural load balancing system, the
more devices there are, the more bandwidth
becomes available, provided that the
number of hops in the average
communications path is kept low.
Since this wireless Internet infrastructure
has the potential to be much cheaper than
the traditional type, many wireless
community network groups are already
creating wireless mesh networks. In fact,
an MIT project developing $100 laptops
for under-privileged schools in developing
nations plans to use mesh networking to
create a robust and inexpensive
infrastructure.
ADVANTAGES:
A good mesh networking solution can
yield the following benefits to service
providers.
Fast deployment
Flexible architecture
Low ownership cost
Easy management
High scalability
Peak reliability
Interoperability
CRITICAL FACTORS INFLUENCING NETWORK PERFORMANCE:
Before a network is designed, deployed,
and operated, factors that critically
influence its performance need to be
considered. For WMNs, the critical
factors are summarized as follows:
Radio techniques. Driven by the
rapid progress of semiconductor,
RF technologies, and
communication theory, wireless
radios have undergone a
significant revolution.
Scalability. Multi-hop
communication is common in
WMNs. For multi-hop networking,
it is well known that
communication protocols suffer
from scalability issues [62,72], i.e., when the size of the network increases, the network performance degrades significantly.
Mesh connectivity. Many
advantages of WMNs originate
from mesh connectivity which is a
critical requirement on protocol
design, especially for MAC and
routing protocols.
Security. Without a convincing
security solution, WMNs will not
be able to succeed due to lack of
incentives by customers to
subscribe to reliable services.
APPLICATION:
Some of the applications of WMNs are as
follows:
Broadband Home Networking:
Mesh networking is needed to resolve the access-point placement problem in home networking. The access points must be replaced by wireless mesh routers with mesh connectivity established among them. The communication between these nodes then becomes much more flexible and more robust to network faults and link failures.
Building Automation: Access
points are replaced by WMNs and
thus the deployment cost will be
significantly reduced. The
deployment process is also much
simpler due to the mesh
connectivity among wireless
routers.
Enterprise Networking: When WMNs are used, multiple backhaul access modems can be shared by all nodes in the entire network, thus improving the robustness and resource utilization of enterprise networks. WMNs can grow easily as the size of the enterprise expands.
Metropolitan Area Networks:
Compared to cellular networks,
wireless mesh MANs support
much higher data rates, and the
communication between nodes
does not rely on a wired backbone.
Compared to wired networks, e.g.,
cable or optical networks, wireless
mesh MAN is an economic
alternative to broadband
networking, especially in
underdeveloped regions. A wireless mesh MAN covers a potentially much larger area than home, enterprise, building, or community networks.

Security Surveillance Systems: As security is becoming a major concern, security surveillance systems are becoming a necessity for enterprise buildings, shopping malls, grocery stores, etc. In order
to deploy such systems at locations
as needed, WMNs are a much
more viable solution than wired
networks to connect all devices.
FUTURE OF WMNs :
One of the most promising aspects of
mesh networks is their ability to
reassemble themselves to fit changing
environments. A company called Mesh
Networks is now working on such mobile,
flexible networks for public safety.
"Envision a fire crew at a disaster scene,"
says Peter Stanforth, CTO of Mesh
Networks. "They form an ad hoc wireless
network where all devices see the others
via a radio. A helicopter hovering over the
scene can instantly become part of the
mobile mesh on the ground."
Within five years, the truly mobile mesh
network may take to the highways, as
swarms of cars equipped with the
technology can serve as nodes. Moteran, a
partnership between Mitsubishi and
Deutsche Telekom, is outfitting cars in
some German cities with high-bandwidth
mesh networking equipment for
entertainment and communications. And
Mesh Networks' Stanforth is working with
U.S. car makers on an application to alert
a driver when a car in front deploys its
airbags, winning precious seconds to avert
a crash.
The technology doesn't stop there. Intel is
planning to mesh home PCs, TV sets, and
stereos via low-cost hardware add-ons you
can buy at Radio Shack. And Radiant
Networks has already installed roof-
mounted, high-bandwidth wireless nodes
with steerable antennae throughout the
city of Salem, Virginia, so customers
connect for online access via other
customers rather than using wireless base
stations.
CONCLUSION:
A properly designed and configured
wireless mesh network provides the
necessary safeguards for data security and
in-band radio interference. This is
important for mobile users that require
secure remote access over wireless LANs
to connect back to their private data
networks.
Wireless mesh networks can be used as
the last mile access technology for
delivering broadband applications such as
education, tele-medicine and even e-
governance. Even in the metros and major
cities, wireless mesh clusters can be
created in business districts where there is
high data consumption to deliver high-
speed data access to enterprises and
consumers.
Wireless Mesh Networking can also
enable local governments and
transportation agencies to increase
operational efficiency and service
delivery.
REFERENCES:
[1] A.A. Abouzeid, S. Roy, Stochastic modeling of TCP in networks with abrupt delay variations, ACM/Kluwer Wireless Networks 9 (2003) 509-524.
[2] A. Acharya, A. Misra, S. Bansal, High-performance architectures for IP-based multihop 802.11 networks, IEEE Wireless Communications 10 (5) (2003) 22-28.
[3] A. Adya, P. Bahl, J. Padhye, A. Wolman, L. Zhou, A multi-radio unification protocol for IEEE 802.11 wireless networks, in: International Conference on Broadband Networks (BroadNets), 2004.
[4] D. Aguayo, J. Bicket, S. Biswas, G. Judd, R. Morris, Link-level measurements from an 802.11b mesh network, in: ACM Annual Conference of the Special Interest Group on Data Communication (SIGCOMM), August 2004, pp. 121-131.
[5] D. Aguayo, J. Bicket, S. Biswas, D.S.J. De Couto, R. Morris, MIT Roofnet Implementation. Available from: <http://pdos.lcs.mit.edu/roofnet/design/>.
[6] O.B. Akan, I.F. Akyildiz, ARC: the analytical rate control scheme for real-time traffic in wireless networks, IEEE/ACM Transactions on Networking 12 (4) (2004) 634-644.
[7] O.B. Akan, I.F. Akyildiz, ATL: an adaptive transport layer suite for next-generation wireless internet, IEEE Journal on Selected Areas in Communications 22 (5) (2004) 802-817.
G.M.R.Institute of Technology
Rajam

Wireless LAN Security


Presented by

K. Manikanta Reddy, II ECE & A. Satyanarayana, II ECE
manikanta2006@gmail.com satya_021088@yahoo.com

Abstract
Although a variety of wireless
network technologies have or will
soon reach the general business
market, wireless LANs based on the
802.11 standard are the most likely
candidate to become
widely prevalent in corporate
environments. Current 802.11b
products operate at 2.4GHz, and
deliver up to 11Mbps of bandwidth
comparable to a standard Ethernet
wired LAN in
performance. An upcoming version
called 802.11a moves to a higher
frequency range, and
promises significantly faster speeds.
It is expected to have security
concerns similar to 802.11b.

This low cost, combined with strong performance and ease of deployment, means that many departments and individuals already use 802.11b at home or at work, even if IT staff and security administrators do not yet recognize wireless LANs as an approved technology. This paper addresses the security concerns raised by both current and upcoming 802.11 network technologies.

Without doubt, wireless LANs have a high gee-whiz factor. They provide always-on network connectivity but don't require a network cable. Office workers can roam from meeting to meeting throughout a building, constantly connected to the same network resources enjoyed by wired, desk-bound coworkers. Home or remote workers can set up networks without worrying about how to run wires through houses that were never designed to support network infrastructure.
Wireless LANs may actually prove
less expensive to support than
traditional networks for employees
that need to connect to corporate
resources in multiple office locations.
Large hotel chains, airlines,
convention centers, Internet cafes,
etc., see wireless LANs as an
additional revenue opportunity for
providing Internet connectivity to
their customers. Wireless is a more
affordable and logistically acceptable
alternative to wired LANs for these
organizations. For example, an
airline can provide for-fee wireless
network access for travelers in
frequent flyer lounges or anywhere
else in the airport.

Wireless LAN Business Drivers
Market maturity and technology
advances will lower the cost and
accelerate widespread adoption
of wireless LANs. End-user
spending, the primary cost metric,
will drop from about $250 in 2001
to around $180 in 2004 (Gartner
Group). By 2005, 50 percent of
Fortune 1000 companies will
have extensively deployed wireless
LAN technology based on evolved
802.11 standards (0.7
probability). By 2010, the majority of Fortune 2000 companies will have deployed wireless LANs to supplement standard wired LAN technology (0.6 probability).

Reality Check
For the foreseeable future, wireless technology will complement wired connectivity in enterprise environments. Even new buildings will continue to incorporate wired LANs. The primary reason is that wired networking remains less expensive than wireless. In addition, wired networks offer greater bandwidth, allowing for future applications beyond the capabilities of today's wireless systems.
Although it may cost 10 times more
to retrofit a building for wired
networking (initial construction
being by far the preferred time to set
up network infrastructure), wiring is
only a very small fraction
of the cost of the overall capital
outlay for an enterprise network. For
that reason, many corporations are
only just testing wireless technology.
This limited acceptance at the
corporate
level means few access points with a
limited number of users in real world
production
environments, or evaluation test
beds sequestered in a lab. In
response, business units and
individuals will deploy wireless
access points on their own. These
unauthorized networks almost
certainly lack adequate attention to
information security, and present a
serious concern for
protecting online business assets.

Finally, the 802.11b standard shares
unlicensed frequencies with other
devices, including
Bluetooth wireless personal area
networks (PANs), cordless phones,
and baby monitors. These
technologies can, and do, interfere
with each other. 802.11b also fails to
delineate roaming
(moving from one cell to another),
leaving each vendor to implement a
different solution. Future
proposals in 802.11 promise to
address these shortcomings, but no
shipping products are on the
immediate horizon.

Wireless Security In The
Enterprise
802.11b's low cost of entry is what makes it so attractive. However,
inexpensive equipment also
makes it easier for attackers to
mount an attack. Rogue access
points and unauthorized, poorly
secured networks compound the
odds of a security breach.

The following diagram depicts an
intranet or internal network that is
properly configured to handle
wireless traffic, with two firewalls in
place, plus intrusion detection and
response sensors to
monitor traffic on the wireless
segment. One firewall controls
access to and from the Internet. The
other controls access to and from the
wireless access point. The access
point itself is the bridge
that connects mobile clients to the
internal network.
The access point has a dedicated IP address for remote management via SNMP (Simple Network Management Protocol). The wireless clients themselves, usually laptops, desktops, or handhelds, may also use SNMP agents to allow remote management. As a result, each of
these devices contains a sensor to
ensure that each unit is properly
configured, and that these
configurations have not been
improperly altered. The network itself
is regularly monitored to
identify access points in operation,
and verify that they are authorized
and properly configured.

While this paper focuses on the risk
issues from a corporate network
perspective, these same
issues apply to home networks,
telecommuters using wireless, and
public use networks such as
those being set up by Microsoft to
allow wireless Internet access at
select Starbucks locations.
Remote users are now able to
access internal corporate resources
from multiple types of foreign
networks. Even organizations
without internal wireless networks
must take wireless into account
as part of their overall security
practices.

Known Risks
Although attacks against 802.11b
and other wireless technologies will
undoubtedly increase in
number and sophistication over time,
most current 802.11b risks fall into
seven basic categories:
Insertion attacks
Interception and unauthorized
monitoring of wireless traffic
Jamming
Client-to-Client attacks
Brute force attacks against
access point passwords
Encryption attacks
Misconfigurations
Note that these classifications can
apply to any wireless technology, not
just 802.11b.
Understanding how they work and
using this information to prevent their
success is a good
stepping stone for any wireless
solution.

Insertion Attacks
Insertion attacks are based on deploying unauthorized devices or creating new wireless networks without going through the security process and review.

Unauthorized Clients: An attacker tries to connect a wireless client, typically a laptop or PDA, to an access point without authorization.
Access points can be configured to
require a
password for client access. If there is
no password, an intruder can
connect to the internal
network simply by enabling a
wireless client to communicate with
the access point. Note,
however, that some access points
use the same password for all client
access, requiring all
users to adopt a new password
every time the password needs to be
changed.

Unauthorized or Renegade Access Points: An organization
may not be aware that internal
employees have deployed wireless
capabilities on their network. This
lack of awareness could
lead to the previously described
attack, with unauthorized clients
gaining access to corporate
resources through a rogue access
point. Organizations need to
implement policy to ensure
secure configuration of access
points, plus an ongoing process in
which the network is scanned
for the presence of unauthorized
devices.



Interception and Monitoring of
Wireless Traffic
As in wired networks, it is possible to
intercept and monitor network traffic
across a wireless LAN.
The attacker needs to be within
range of an access point
(approximately 300 feet for 802.11b)
for
this attack to work, whereas a wired
attacker can be anywhere where
there is a functioning
network connection. The advantage for the wireless attacker is that a wired attack requires the placement of a monitoring agent on a compromised system, while all a wireless intruder needs is access to the network data stream.

There are two important
considerations to keep in mind with
the range of 802.11b access points.
First, directional antennae can
dramatically extend either the
transmission or reception ranges of
802.11b devices. Therefore, the 300
foot maximum range attributed to
802.11b only applies to
normal, as-designed installations.
Enhanced equipment also enhances
the risk. Second, access
points transmit their signals in a
circular pattern, which means that
the 802.11b signal almost
always extends beyond the physical
boundaries of the work area it is
intended to cover. This
signal can be intercepted outside
buildings, or even through floors in
multistory buildings. Careful
antenna placement can significantly
affect the ability of the 802.11b
signal to reach beyond
physical corporate boundaries.

Wireless Packet Analysis: A skilled attacker captures wireless traffic using techniques
similar to those employed on wired
networks. Many of these tools
capture the first part of the
connection session, where the data
would typically include the username
and password. An
intruder can then masquerade as a
legitimate user by using this
captured information to hijack
the user session and issue
unauthorized commands.
Broadcast Monitoring: If an
access point is connected to a hub
rather than a switch, any
network traffic across that hub can
potentially be broadcast out over
the wireless network.
Because the Ethernet hub
broadcasts all data packets to all
connected devices including the
wireless access point, an attacker
can monitor sensitive data going
over the wireless network that is not even intended for any wireless clients.
Access Point Clone (Evil Twin)
Traffic Interception: An attacker
fools legitimate wireless
clients into connecting to the
attacker's own network by placing an
unauthorized access point
with a stronger signal in close
proximity to wireless clients. Users
attempt to log into the
substitute servers and unknowingly
give away passwords and similar
sensitive data.

Jamming
Denial of service attacks are also
easily applied to wireless networks,
where legitimate traffic can
not reach clients or the access point
because illegitimate traffic
overwhelms the frequencies. An
attacker with the proper equipment
and tools can easily flood the 2.4
GHz frequency, corrupting
the signal until the wireless network
ceases to function. In addition,
cordless phones, baby
monitors and other devices that
operate on the 2.4 GHz band can
disrupt a wireless network
using this frequency. These denials
of service can originate from outside
the work area serviced
by the access point, or can
inadvertently arrive from other
802.11b devices installed in other
work
areas that degrade the overall signal.

Client-to-Client Attacks
Two wireless clients can talk directly
to each other, bypassing the access
point. Users therefore
need to defend clients not just
against an external threat but also
against each other.

File Sharing and Other TCP/IP
Service Attacks: Wireless clients
running TCP/IP services
such as a Web server or file sharing
are open to the same exploits and
misconfigurations as
any user on a wired network.
DoS (Denial of Service): A wireless device floods other wireless clients with bogus packets, creating a denial-of-service attack. In
addition, duplicate IP or MAC
addresses, both intentional
and accidental, can cause disruption
on the network.
Brute Force Attacks Against
Access Point Passwords
Most access points use a single key
or password that is shared with all
connecting wireless
clients. Brute force dictionary attacks
attempt to compromise this key by
methodically testing
every possible password. The
intruder gains access to the access
point once the password is
guessed.
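To give a sense of scale for such exhaustive guessing, the short calculation below estimates how many days an attacker would need to try every candidate password for a few character sets and lengths. The guess rate is an arbitrary, illustrative figure and not a measurement of any particular access point.

```python
# Rough keyspace sizes for exhaustive password guessing (illustrative only).
GUESSES_PER_SECOND = 10_000          # assumed attacker throughput, purely illustrative

def worst_case_days(alphabet_size: int, length: int) -> float:
    keyspace = alphabet_size ** length              # total candidate passwords
    return keyspace / GUESSES_PER_SECOND / 86_400   # seconds -> days

for alphabet, name in ((26, "lowercase only"), (62, "alphanumeric"), (94, "printable ASCII")):
    for length in (6, 8):
        print(f"{name:16s} length {length}: up to {worst_case_days(alphabet, length):,.1f} days")
```

Short, single-case keys fall in well under a day at this rate, which is why a single shared key that is rarely changed remains an attractive target for dictionary attacks.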
In addition, passwords can be
compromised through less
aggressive means. A compromised
client can expose the access point.
Not changing the keys on a frequent
basis or when
employees leave the organization
also opens the access point to
attack. Managing a large
number of access points and clients
only complicates this issue,
encouraging lax security
practices.

Attacks against Encryption
The 802.11b standard uses an encryption
system called WEP (Wired
Equivalent Privacy). WEP has
known weaknesses (see
http://www.isaac.cs.berkeley.edu/isa
ac/wep-faq.html for more
information), and these issues are
not slated to be addressed before
2002. Not many tools are
readily available for exploiting this
issue, but sophisticated attackers
can certainly build their own.

Many access points ship in an
unsecured configuration in order to
emphasize ease of use and
rapid deployment. Unless
administrators understand wireless
security risks and properly
configure each unit prior to
deployment, these access points will
remain at a high risk for attack or
misuse. The following section
examines three leading access
points, one each from Cisco,
Lucent and 3Com. Although each
vendor has its own implementation
of 802.11b, the underlying
issues should be broadly applicable
to products from other vendors.

Service Set ID (SSID): The SSID is a configurable identification that allows clients to communicate with an appropriate access point. With proper configuration, only clients with the correct SSID can communicate with access points. In effect, the SSID acts as a single shared password between access points and clients. Access points come with default SSIDs; if these are not changed, the units are easily compromised. Here are common default SSIDs:

tsunami - Cisco
101 - 3Com
RoamAbout Default Network Name - Lucent/Cabletron
Compaq - Compaq
WLAN - Addtron
intel - Intel
linksys - Linksys
Default SSID, Wireless - other manufacturers
SSIDs go over the air as clear text if WEP is disabled, allowing the SSID to be captured by monitoring the network's traffic. In addition, the Lucent access points can operate in Secure Access mode. This option requires the SSID of both client and access point to match. By default, this security option is turned off. In non-secure access mode, clients can connect to the access point using the configured SSID, a blank SSID, or an SSID configured as "any".
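To make the point concrete, the sketch below checks an observed SSID against a small list of the well-known factory defaults from the table above; flagging such networks is one cheap heuristic for spotting unconfigured or rogue access points. The helper function and sample inputs are purely illustrative.

```python
# Factory-default SSIDs from the vendor table above (non-exhaustive).
DEFAULT_SSIDS = {
    "tsunami": "Cisco",
    "101": "3Com",
    "RoamAbout Default Network Name": "Lucent/Cabletron",
    "Compaq": "Compaq",
    "WLAN": "Addtron",
    "intel": "Intel",
    "linksys": "Linksys",
}

def audit_ssid(ssid: str) -> str:
    """Return a human-readable verdict for a single observed SSID."""
    vendor = DEFAULT_SSIDS.get(ssid)
    if vendor:
        return f"'{ssid}' is the factory default for {vendor} - likely unconfigured AP"
    if not ssid:
        return "broadcast with empty SSID - check access point configuration"
    return f"'{ssid}' is not a known factory default"

for observed in ("tsunami", "corp-net-5", ""):
    print(audit_ssid(observed))
```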

Wired Equivalent Privacy (WEP): WEP can typically be configured as follows:
No encryption
40-bit encryption
128-bit encryption
Most access points ship with WEP turned off. Although 128-bit encryption is more effective than 40-bit encryption, both key strengths are subject to WEP's known flaws.

SNMP Community Passwords: Many wireless access points run SNMP agents. If the community word is not properly configured, an intruder can read and potentially write sensitive data on the access point. If SNMP agents are enabled on the wireless clients, the same risk applies to them as well. By default, many access points are read-accessible using the community word "public". 3Com access points allow write access using the community word "comcomcom". Cisco and Lucent/Cabletron require the write community word to be configured by the user or administrator before the agent is enabled.
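A defensive audit can simply try the default community words mentioned above against each managed access point and report which ones answer. The sketch below shows the idea with a hypothetical snmp_get() helper rather than a specific SNMP library, so the probing mechanics are deliberately left abstract.

```python
# Hypothetical audit loop: snmp_get() is a placeholder for whatever SNMP
# client is actually available (it should return a value on success and
# raise, or return None, on timeout/authentication failure).
DEFAULT_COMMUNITIES = ["public", "private", "comcomcom"]
SYS_DESCR_OID = "1.3.6.1.2.1.1.1.0"   # sysDescr, a harmless read-only object

def snmp_get(host: str, community: str, oid: str):
    """Placeholder; wire this up to a real SNMP client in practice."""
    raise NotImplementedError

def audit_access_point(host: str):
    findings = []
    for community in DEFAULT_COMMUNITIES:
        try:
            value = snmp_get(host, community, SYS_DESCR_OID)
        except Exception:
            continue                   # no answer for this community word
        if value is not None:
            findings.append(f"{host} answers SNMP reads with default community '{community}'")
    return findings or [f"{host}: no default community words accepted"]

print(audit_access_point("192.0.2.1"))   # example address from the TEST-NET range
```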

Configuration Interfaces: Each access point model has its own interface for viewing and modifying its configuration. Here are the current interface options for these three access points:
Cisco - SNMP, serial, Web, telnet
3Com - SNMP, serial, Web, telnet
Lucent/Cabletron - SNMP, serial (no Web/telnet)
3Com access points lack access control on the Web interface for controlling configuration. An attacker who locates a 3Com access point Web interface can easily get the SSID from the system properties menu display. 3Com access points do require a password on the Web interface for write privileges. This password is the same as the community word for write privileges; therefore, 3Com access points are at risk if deployed using the default "comcomcom" as the password.

Client-Side Security Risk: Clients connected to an access point store sensitive information for authenticating and communicating with the access point.
This information can be
compromised if the client is not
properly configured. Cisco client
software stores the SSID in the
Windows registry, and the WEP key
in the firmware, where it is more
difficult to access.
Lucent/Cabletron client software
stores the SSID in the Windows
registry. The WEP key is
stored in the Windows registry, but it
is encrypted using an undocumented
algorithm. 3Com
client software stores the SSID in the
Windows registry. The WEP key is
stored in the Windows
registry with no encryption.

Installation: By default, all three
access points are optimized to help
build a useful network as
quickly and as easily as possible. As
a result, the default configurations
minimize security.

Wireless Information Security
Management
Process and technology are always
easily confused, and never more so
than with wireless
information security management. In
fact, the same business processes
that establish strong risk
management practices for physical
assets and wired networks also work
to protect wireless
resources. The following cost-effective guidelines help organizations establish proper security protections as part of an overall wireless strategy, and will continue to work in spite of wireless networking's rapid evolution. The items below are an introduction to this approach.
Wireless Security Policy and
Architecture Design: Security
policy, procedures and best
practices should include wireless
networking as part of an overall
security management
architecture to determine what is and
is not allowed with wireless
technology.

Treat Access Points As Untrusted: Access points need to be identified
and evaluated on a
regular basis to determine if they
need to be quarantined as untrusted
devices before wireless
clients can gain access to internal
networks. This determination means
appropriate placement of
firewalls, virtual private networks
(VPN), intrusion detection systems
(IDS), and authentication
between access point and intranets
or the Internet.

Access Point Configuration Policy: Administrators need to define
standard security settings
for any 802.11b access point before
it can be deployed. These guidelines
should cover SSID,
WEP keys and encryption, and
SNMP community words.

Access Point Discovery: Administrators should regularly
search outwards from a wired
network to identify unknown access
points. Several methods of
identifying 802.11b devices exist,
including detection via banner strings
on access points with either Web or
telnet interfaces.
Wireless network searches can
identify unauthorized access points
by setting up a 2.4 GHz
monitoring agent that searches for
802.11b packets in the air. These
packets may contain IP
addresses that identify which
network they are on, indicating that
rogue access points are
operating in the area. One important
note: this process may pick up
access points from other
organizations in densely populated
areas.
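The over-the-air search described above can be prototyped with a packet-capture library. The sketch below uses Scapy to listen for 802.11 beacon frames on a wireless interface that has already been placed in monitor mode, and reports any BSSID/SSID pair not on an approved list. The interface name and the approved-list contents are assumptions for the example, not part of any standard tool.

```python
from scapy.all import sniff, Dot11, Dot11Beacon, Dot11Elt  # requires scapy and root privileges

APPROVED_BSSIDS = {"00:11:22:33:44:55"}   # hypothetical list of sanctioned access points
seen = {}

def handle(pkt):
    # Beacon frames advertise an access point's SSID and BSSID.
    if pkt.haslayer(Dot11Beacon):
        bssid = pkt[Dot11].addr2
        ssid = pkt[Dot11Elt].info.decode(errors="replace") or "<hidden>"
        if bssid not in seen:
            seen[bssid] = ssid
            status = "approved" if bssid in APPROVED_BSSIDS else "UNKNOWN - investigate"
            print(f"{bssid}  SSID={ssid!r}  [{status}]")

# 'wlan0mon' is a placeholder for a monitor-mode interface on the audit machine.
sniff(iface="wlan0mon", prn=handle, store=False, timeout=60)
```

As noted above, such a scan may also pick up access points belonging to neighbouring organizations, so findings still need manual review.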

Access Point Security
Assessments: Regular security
audits and penetration assessments
quickly identify poorly configured
access points, default or easily
guessed passwords and
community words, and the presence
or absence of encryption. Router
ACLs and firewall rules
also help minimize access to the
SNMP agents and other interfaces
on the access point.

Wireless Client Protection: Wireless clients need to be regularly examined for good security practices. These procedures should include the presence of some or all of the following:
Distributed personal firewalls to lock down access to the client
VPNs to supplement encryption and authentication beyond what 802.11b can provide
Intrusion detection and response to identify and minimize attacks from intruders, viruses, Trojans and backdoors
Desktop assessments to identify and repair security issues on the client device

Managed Security Services for
Wireless: Managed Security
Services (MSS) helps
organizations establish effective
security practices without the
overhead of an extensive, in-house
solution. MSS providers handle
assessment, design, deployment,
management and support
across a broad range of information
security disciplines. This 24/7/365
solution works with the
customer to set policy and
architecture, plus provides
emergency response, if needed.
These
services help an organization
operating wireless networks to:
Deploy firewalls that separate
wireless networks from
internal networks or the
Internet
Establish and monitor VPN
gateways and VPN wireless
clients
Maintain an intrusion
detection system on the
wireless network to identify
and respond to
attacks and misuse before critical digital resources are placed at risk.

Internet Security Systems
Wireless LAN Solutions
Internet Security Systems products
and services provide a robust
security management solution
for wireless LANs. These rapidly
expanding offerings encompass:

Security Software Products: Internet Security Systems' security products already protect wireless LAN environments against known security risks. The ISS Internet Scanner network vulnerability assessment product probes networks to detect unauthorized or poorly configured wireless access points, as represented in the diagram below.













The RealSecure Protection
System, deployed between a
wireless access point and the
corporate network, recognizes and
reacts to attacks and misuse
directed over the wireless LAN
(below). In addition, ISS' renowned X-Force research and development team continually updates these products.
















Managed Security Services: Internet Security Systems' Managed Security Services protect wireless LANs on a 24x7 basis
through remote network
assessments and tactical
deployment of
remotely managed intrusion
protection services. As new wireless
protections are added to ISS
security products, Managed Security
Services will deliver these additional
capabilities to our
customers.

Security Architecture Consulting: Internet Security Systems' Consulting Solutions Group has the in-depth security knowledge, expertise, and proven methodology required to help organizations assess, integrate, design, and configure their wireless LANs and the surrounding security infrastructure.

Wireless LAN Security Education: Internet Security Systems' SecureU education services
organization has developed wireless
LAN security content to help
customers understand the
nuances of wireless LAN security
and establish valid defensive
techniques to minimize security
risks.

Product Updates: Internet Security Systems' X-Force research and
development team
continually adds product
enhancements that deliver new
protections against wireless LAN
risks.
These X-Press Update
enhancements quickly and easily
integrate into existing product
installations.

About Internet Security Systems
(ISS)
Founded in 1994, Internet Security
Systems (ISS) (Nasdaq: ISSX) is a
world leader in software
and services that protect critical
online resources from attack and
misuse. ISS is headquartered
in Atlanta, GA, with additional
operations throughout the United
States and in Asia, Australia,
Europe, Latin America and the
Middle East.








PAPER ON WIRELESS ATM


PRESENTED BY


V.S.KASHYAP P.SURYA PRAKASH
vsk.kashyap@gmail.com prakash1431@gmail.com
ph no :9247562052 ph no:9885233780


Abstract:

This paper sketches the requirements and possibilities of
wireless ATM [Asynchronous Transfer Mode] in local area networks.
Because of the wide range of services supported by ATM networks, ATM
technology is expected to become the dominant networking technology in
the medium term for both public infrastructure networks and for local area
networks. ATM infrastructure can support all types of services, from time-
sensitive voice communications and desk-top multi-media conferencing, to
bursty transaction processing and LAN traffic. Extending the ATM
infrastructure with a wireless access mechanism meets the needs of those
users and customers that want a unified, end-to-end networking
infrastructure with high-performance, consistent service characteristics. The
paper introduces ATM concepts, discusses the requirements for wireless
ATM, in particular for data link control and radio functions. It closes with
some notes on the development of wireless ATM research systems, standardization and spectrum allocations.







Introduction

Asynchronous Transfer Mode (ATM) combines the performance of
high speed switching systems developed by the telecommunications industry
and the flexibility of LANs developed by the computer industry. ATM
networks consist of fast packet switching systems linked by means of point-
to-point links to each other and to their terminal nodes. All data are packaged in cells of 53 bytes (a historical compromise), with each cell carrying 5 bytes of control information that allows the switch to process cells according to the service requirements of the originator and to route cells to the applicable destinations. Terminals set up a connection, i.e. a route through
the network, to the destination terminal before they start transferring cells
over it. Besides advertising their destination, they also negotiate a quality-of-
service, specifying, for instance, the tolerable cell delay and cell loss
probability. ATM's high data rates, its negotiated connection set-up as well
as its intrinsic simplicity allow it to carry any type of traffic between any
number of network nodes.
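The fixed 53-byte cell with its 5-byte header implies a constant framing overhead, which is easy to quantify; the short calculation below is illustrative only.

```python
CELL_BYTES = 53          # total ATM cell size
HEADER_BYTES = 5         # control information per cell
PAYLOAD_BYTES = CELL_BYTES - HEADER_BYTES           # 48 bytes of user data

efficiency = PAYLOAD_BYTES / CELL_BYTES              # fraction of the link carrying payload
print(f"payload efficiency: {efficiency:.1%}")       # about 90.6%

# Time to transmit one cell at a given line rate (relevant again for radio links later):
def cell_time_us(rate_mbit_per_s: float) -> float:
    return CELL_BYTES * 8 / rate_mbit_per_s          # bits / (Mbit/s) = microseconds

print(f"cell time at 25 Mbit/s: {cell_time_us(25):.1f} us")   # about 17 us
print(f"cell time at 10 Mbit/s: {cell_time_us(10):.1f} us")   # about 42 us
```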

ATM applications:
ATM technology is not geared to specific applications. Instead, it aims to be future-proof, i.e. capable of carrying any kind of traffic, ranging from circuit-switched voice to bursty asynchronous computer data traffic, at any speed. With multi-media capabilities becoming available on computers, ATM becomes the logical choice for networking multi-media desktops. The popularity of audio and video conferencing is rising, and the ability to tele-share documents and to process them in real time with multiple remote parties is being developed. These applications demand both large, scalable bandwidths and quality-of-service guarantees that legacy networks such as Ethernet cannot provide. Such applications are typically used to communicate with people at remote locations. The further away, the more likely one is to desk-top conference with a person instead of paying a visit. As a result, main bandwidth consumption will be attributable to wide-area access communications, which is an important consideration in evaluating wireless systems architectures and the efficiency with which they handle traffic flows.











ATM services and adaptation layers:

Even in high-speed systems like ATM switches, capacity is,
ultimately, a scarce resource that must be shared between its users. In ATM
systems, the sharing method is arbitration by the switch (known to some as "capitalist switching") as opposed to the random sharing method exemplified by Ethernet (known to some as "democratic switching"). In order for an ATM
switch to be able to play its role as service arbitrator, it must know the needs
of its terminal nodes. ATM terminal nodes request services in terms of
destination(s), traffic type(s), bit rate(s) and quality-of-service (QoS). If the
request can be granted without impacting the services already committed to,
the switch will grant the request. In that case, the node can expect to obtain
the requested services provided it does not exceed its requests until it
changes its requests (e.g. to "drop all services"). The switch would deny
new users access to the medium if that access would cause degradation of
the services already agreed with existing users. Because cells pass through
finite buffers in the switches, ATM networks will occasionally drop cells,
and will also introduce delay variation. The ATM Adaptation Layer (AAL) can
conceal these effects by retransmission or reconstruction of dropped cells
and by filtering out jitter. Other functions of the AAL include segmentation of
user frames into cells, and re-assembly of those frames at the receiving side.

AAL0 is virtually empty, and provides direct access to the cell relay
service.
AAL1 emulates a synchronous, constant bit rate connection; it is intended to support Constant Bit Rate (CBR) services: the bit rate is constant during the connection, and the switch services the node at a rate that is consistent with the bit rate agreed during connection set-up. This service is well suited to conventional digital voice and video traffic.
AAL2 is geared to traffic that has a bit rate that varies in time and that also requires delay bounds. Compressed video is a good example: its rate varies with the compressibility of the successive images, and delay guarantees are needed to avoid jerky motion. AAL2 is intended for Variable Bit Rate - Real Time (VBR-RT) services: the bit rate varies between zero and a peak bit rate as agreed at connection set-up. In addition, the terminal advertises a sustained (average) cell rate and a maximum burst size, i.e. the maximum time during which the source generates cells at the peak rate. This service makes more effective use of the switch capacity as it allows some statistical multiplexing to occur between variable bit rate sources. The delay is bounded, which makes this type of service especially suitable for compressed voice or video transmission.
AAL3/4 as well as AAL5 provide frame segmentation and re-assembly functions only, and are therefore suited for variable-rate traffic with no delay requirements.

These AALs support three types of service:
Variable Bit Rate - Non-Real Time is essentially VBR, except that there is no guaranteed delay bound, just a guaranteed average. This type of service is appropriate for data traffic, which has less stringent delay requirements but still needs throughput guarantees. An example is LAN interconnection.
Available Bit Rate (ABR) is a best-effort service: neither a fixed rate nor a delay is guaranteed; however, a minimum and a maximum rate are guaranteed, as is a bound on the cell loss rate. This service allows an ATM system to fill its channels to maximum capacity when CBR or VBR traffic is low.
Unspecified Bit Rate (UBR) is equivalent to ABR, but without a guaranteed minimum rate or a bound on the cell loss rate. Typical use is for non-real-time applications such as file transfer, backup traffic and e-mail.
The standard protocol for broadband signaling, known as Q.2931 in ITU-T, has its own adaptation layer, the Signaling AAL (SAAL for short). It is terminated in terminals, like all other AALs, but also in switches to allow the terminals to communicate with switch-based connection control functions.
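The relationship between the adaptation layers, the service categories and the guarantees negotiated at connection set-up can be summarised in a small data structure. The mapping below is an illustrative simplification of the text above, not a copy of any ATM Forum specification.

```python
# Illustrative mapping of ATM service categories to the guarantees discussed above.
# "peak"/"sustained" refer to cell rates negotiated at connection set-up.
SERVICE_CATEGORIES = {
    "CBR":     {"aal": "AAL1",         "rate": "constant peak rate", "delay_bound": True},
    "VBR-RT":  {"aal": "AAL2",         "rate": "peak + sustained",   "delay_bound": True},
    "VBR-NRT": {"aal": "AAL3/4, AAL5", "rate": "peak + sustained",   "delay_bound": False},
    "ABR":     {"aal": "AAL5",         "rate": "min/max negotiated", "delay_bound": False},
    "UBR":     {"aal": "AAL5",         "rate": "best effort",        "delay_bound": False},
}

def describe(category: str) -> str:
    entry = SERVICE_CATEGORIES[category]
    bound = "bounded delay" if entry["delay_bound"] else "no delay bound"
    return f"{category}: {entry['rate']}, {bound} (typically carried over {entry['aal']})"

for cat in SERVICE_CATEGORIES:
    print(describe(cat))
```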



Fig. 1 shows an example of a video conferencing application running on a terminal that has a connection to the ATM network through a high-speed optical fiber. The application makes use of various AALs and of communications management to set up connections. Communications management, in turn, relies on the services of the Q.2931 protocol and the underlying SAAL.








Wireless ATM:
Wireless ATM adds the advantages of mobility (i.e., cordless operation) to the service advantages of ATM networks. The mobility aspect forces decoupling of the normal mapping of node and switch port. Instead, a wireless access point "connects" the set of wireless nodes it services on a single port of the ATM switch, as shown in Fig. 2. There is a potential conflict between the notion of reliable service as used in the ATM world and the use of radio waves as a transport medium for information. Although unsolvable in principle, radio link reliability is an engineering problem that can be solved in practical terms through the design of adequate radio systems and wireless link protocols. There are two fundamentally different architectural options for the design of the wireless ATM link protocol: encapsulation and native mode operation.


Encapsulated ATM cell transport:

This type of link protocol carries ATM cells as the payload of another link protocol. A separate interworking unit that emulates an ATM end-node provides the packing and unpacking of the ATM cells (see Fig. 3).

This approach allows existing wireless protocols to be used in implementing a wireless "ATM" subsystem. However, few, if any, of these protocols were designed to support traffic with small data units such as ATM cells. For example, the wireless LAN standard IEEE 802.11 specifies link turnarounds in the order of hundreds of microseconds, far more than the duration of an ATM cell, even at the modest bit rate of 25 Mbit/s. The efficiency of encapsulated ATM is therefore low, too low to be practically acceptable. Another problem with this approach is that the existing protocols have no means of communicating load and rate information between nodes. In order to service its nodes in the most efficient manner, the wireless access point would need to know the instantaneous load conditions of its nodes. Wireless protocols like IEEE 802.11 and ETSI HIPERLAN do not support the transfer of such knowledge. Even if a suitable radio protocol were available, the conversion taking place in the interworking unit is inefficient at best; as the conversion would take place at every transition from wired to wireless network segments, the overall impact would be considerable. Further, any change in ATM protocols would impact the interworking function; upgrading the installed base could well prove impossible, if only because interworking functions, having to operate at high speed, could be implemented in firmware or even hardware rather than more easily upgraded software.

Native mode ATM cell transport:

The native mode ATM approach avoids the inefficiency inherent
in ``misusing'' an existing protocol; instead it uses a protocol that is designed
with the basic needs of ATM operations in mind. The unit of information
transfer is the ATM cell and the wireless access point functions as an
extension of the switch that services the nodes in a sequence best fitted to
their advertised traffic needs. The wireless subsystem operates on ATM
cells and thus it allows existing as well as new ATM- based communications
software to be used. This integrated approach also simplifies resource and
capacity management.







The block diagram in Fig. 4 translates into the following more detailed
protocol architecture for an ATM network with a wireless segment.
The application data path is shaded, the bold text indicates areas of new
work and/or significant change to existing functions and standards.


In the arrangement of Fig. 5, the access point fulfills the role of an ATM switch, with mobility management added to it. This solution allows transparent interworking. By transparency we mean that the ATM switch does not need to be different in this environment compared to the pure wireline situation. However, in this approach the access point would have to provide all the additional functionality required to support moving users, such as mobile call set-up and call re-routing (handovers). These additional functions could also be added to the Signaling and Control functions of an ATM switch. This would require extra communication between access point and switch to support resource management and mobility support. Fig. 6 shows a simple solution. Instead of the full suite of ATM stacks, this diagram shows a configuration where resource management and mobility control are performed by means of a "backdoor" connection between the ATM switch and the access point. Signaling traffic is transparently forwarded to the switch, where most of the resource management and
mobility functions reside. The switch relegates access point related tasks to
a subsidiary resource and mobility manager in the access point. The back
door is typically a semi-permanent ATM connection. The advantage of this
approach is that a powerful switch controller relieves the base station, which can be more compact. The flip side is that the switch's functionality needs to be extended, notably in call control signaling and call handling; this has implications for standardization. In summary, native mode wireless ATM, because of its seamless integration with the ATM switching environment, promises to deliver optimum performance.

Security:

Like any wireless system, wireless ATM networks will be subject to potential listening in by anyone with the same equipment. Therefore, some form of protection against eavesdropping must be provided. Note that the security needs of premises networks are limited to what is called "wire equivalent protection". Sophisticated authentication and access controls as implemented in public mobile services like GSM are not needed. A simple form of encryption on the links between nodes and access points will suffice for premises networks.


Data link control requirements:
The functional requirements of Data Link Control (DLC, which includes LLC and MAC) in a wireless ATM system differ from those of other LAN-type wireless MAC designs. Major differences are the need to reduce the high physical overhead inherent in high-speed radios using a small packet size, the need to conceal the lossy medium from ATM users, and ATM's requirement to provide a range of different bearer services and quality-of-service guarantees. Other important issues are co-operation with the ATM switch and its resource management, and reuse management, i.e. dealing with the presence of interfering access points.
Connection set-up and clearing:
In ATM systems, connection set-up and clearing precede and terminate the use of transfer services by a node. During connection set-up and clearing, user and switch negotiate the service to be provided (connection set-up) or agree to terminate a service. In a wireless ATM system this is no different: here too the switch is involved, but so is its logical extension, the access point. The latter becomes the third party in the resource management process. The simplest way to model this is probably such that the node-switch interaction remains unchanged but the switch's resource management verifies changes in capacity allocation with the appropriate access point through a backdoor (see also Section 2.3). In this way, consistency between the capacity decisions of the switch and the actual use of capacity by the access point is assured. Tight coupling between the management and control functions of switch and access points would be advantageous in terms of performance, but the proprietary nature of such functions may well prove to stand in the way of standardization of this backdoor. Connection set-up and clearing in ATM systems is handled by Control and Signaling functions that have their own communications protocol (referred to as Q.2931 in the ITU-T) above the ATM layer. The resource management model presented here allows the Q.2931 protocol to be re-used in a wireless system with no or only a few extensions.
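The capacity check that the switch's resource management performs, in consultation with the access point over the backdoor, can be sketched as a simple admission-control test: a new connection is accepted only if its requested rate fits within the access point's remaining capacity. The single-parameter "rate" model and the numbers below are a deliberate simplification for illustration.

```python
class AccessPoint:
    """Toy model of an access point's share in connection admission control."""
    def __init__(self, capacity_mbps: float):
        self.capacity = capacity_mbps
        self.allocated = 0.0            # sum of rates of admitted connections

    def can_admit(self, requested_mbps: float) -> bool:
        return self.allocated + requested_mbps <= self.capacity

    def admit(self, requested_mbps: float) -> bool:
        # Switch resource management would call this over the "backdoor".
        if self.can_admit(requested_mbps):
            self.allocated += requested_mbps
            return True
        return False                    # request denied; existing services protected

ap = AccessPoint(capacity_mbps=20.0)
print(ap.admit(8.0))    # True  - fits
print(ap.admit(10.0))   # True  - fits (18 of 20 Mbit/s now allocated)
print(ap.admit(5.0))    # False - would degrade services already agreed
```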

Transfer services:
The wireless ATM DLC has to support a set of transfer services that
maps to the set of services defined by UNI 4.0. The service parameters are
negotiated at connection set-up time and, if successful, result in the creation
of a bearer with desired characteristics. The user of the wireless ATM DLC
sees only the bearer or bearers. User data transfers on a given bearer result
in the execution of the appropriate DLC procedures to deliver that data to its
destination. Note that this is different from the typical LAN-based DLC such
as Ethernet or its wireless derivatives IEEE 802.11 and ETSI HIPERLAN. These have no concept of a bearer and hence no memory of service parameters over time; here, each packet transmission request carries its own service parameters. (Fig. 6. Detailed protocol architecture of native mode ATM, with a signaling-transparent access point.)

Mobility support:
Common to most wireless systems is the support for the mobility of
users, either while communicating or between communications periods:
mobility with continuous service or mobility with service interruption .In the
former, service is maintained as a moving node leaves the area supported
by one access point and moves into the area supported by another access
point. This process is called handover. It will be obvious that maintaining full communications service, certainly at the full user bit rate, during a handover
is a tall order: traffic flows must be switched between access points with
microsecond precision. Such a handover must be supported by link level
protocol exchanges that tell the access points about the movement of the
mobile node.




It could be argued that service degradation during a handover is
acceptable: few if any users can be expected to interact with their node
devices while moving down a hallway or going up a staircase in such a way
as to require a significant fraction of the link capacity.

Whereas a voice conversation or a file transfer seem reasonable
operations to perform while walking down the hall (and so soliciting a
handover), a video conference seems less easily justified in those
circumstances. This consideration leads to a relaxation of the handover
performance demands: low rate connections tolerate delays of many
milliseconds, ample time to switch access point if one assumes that access
point acquisition by the moving nodes has occurred before the switching
occurs. Successful handovers require that the access point being moved to,
has spare capacity to handle the additional traffic of the bearer(s) being
transferred. This is a common problem in all mobile and cellular systems.
Being essentially a reservation-based system, a wireless ATM system would
require holding capacity in reserve at each access point for supporting
handovers. The amount of spare capacity depends very much on the
applications for which the wireless system is used. Given that the types of
bearer that can be handed over typically are low-rate bearers, the reserve
capacity could be as low as 10 to 20%of the capacity of each access point.

ATM interworking:
Mobility is not a concept that is known in the ATM world. Here, users "hang" off a port, and their operating parameters and service profiles are associated with that port. The provision of mobility will require changes to the way ATM systems handle their users. Mobility management could well be implemented in much the same way as resource management, by means of a "backdoor" between the switch and the wireless access point. As with resource management, the question arises whether this backdoor should be standardized. The answer could well be the same, irrespective of the type of mobility to be supported.
Alternative models of link control
The preceding discussion has skirted the issue of the link control
model for wireless ATM systems. The use of access points as interchange
points between the wired and the wireless segments of the ATM network is
taken for granted, but their role in managing the sharing of the radio link (use of the medium) between the nodes that make up the local network was not described.





Distributed control:
Current link control models for wireless systems include
centralized control (TDMA or CDMA) as used in AMPS and GSM and Packet
Reservation Multiple Access (PRMA), distributed control as in HIPERLAN, or
a combination thereof as in IEEE 802.11 (DCF/ PCF). In theory, centralized
control at the link level could be combined with distributed control at the
medium access level. In this case, the link level control would have to
assume that the medium access mechanism can support its service
requirements provided these fall within the range of the capacity of the
medium. In practice, the link level control would not be able to see changes
of the instantaneous medium activity levels and hence it would not be able to
adapt to changing conditions in any other way than by managing the bearer
flows on the basis of feedback from the medium access mechanism. This
"reactive" control can only operate by virtue of a certain level of data loss. In a homogeneous environment (i.e. one in which all link controls operating within radio range perform the same algorithms) this approach could work if one assumes that the above data loss is acceptable. However,
considering the huge installed base of LAN-based software that does not
implement such controls, homogeneous systems are a theoretical possibility
only. That leaves the question of performance. Distributed medium access
control carries the penalty of having to resolve medium access contention at
each medium access attempt. As this process depends on factors that are
either time independent (such as the CSMA of IEEE 802.11 or the EY-NPMA
of HIPERLAN) or use some form of slotting in combination with predictive
contention resolution (the URN scheme), the overall performance is less
than what could be achieved by combining link and medium access control
as in a centralised medium access scheme. For example, EY-NPMA attains
an efficiency on the order of 30% under typical conditions.
Physical layer requirements:

A bit rate between 10 and 25 Mbit/s will be required for wireless ATM systems (see Air interface performance, below). In the typical indoor applications of wireless ATM networks, multi-path distortion of the radio
signal is a serious problem that does not admit of cheap and cheerful
solutions when the bit rate is in the tens of Mbit/s. The only system today
that has attacked this problem is ETSI's HIPERLAN. It specifies GMSK
modulation in combination with an equalizer to combat the multi-path effects.
The required computational power has been estimated in the 4 Gflops range.
Coded OFDM would reduce that somewhat and lead to better overall system
parameters. The improvement relative to GMSK with equalizer, remains a
subject of research at this time. Another possible way to reduce multi-path




effects is to use adaptive directional antennas, but their use for portable and
thus mobile use has yet to be demonstrated. As antenna size goes down
with increasing frequency, this solution is unlikely to prove practical at
frequencies below 10 GHz. In a wireless ATM system, switching the link
between access point and a node to another node will happen frequently,
with the maximum rate corresponding to the size of an ATM cell . Even at
``only'' 10Mbit/s, the transmission of an ATM cell takes no more than 40
microseconds. Radio link turnaround is dominated by physical effects: the
time it takes for the transmitter signal to stabilize, the time it takes, after
transmission, for a receiver to become sensitive enough to detect a weak
signal, the time it takes for a receiver to synchronize to a signal and compute
the channel compensation parameters (e.g. equalizer settings, antenna
selections) etc., easily add up to tens of microseconds, much more than the
time it takes for a radio signal to reach the most distant node and to travel
back. Although these problems present significant technical challenges,
there is no reason to assume they cannot be solved. As indicated above, link
reliability is not to be assumed in radio-based systems where both bursty
interference and background noise affect the radio channel. Both types of
noise can be treated with the aid of a signal coding that spreads the effect of
interference bursts and that enhances the short range redundancy of the
data being transmitted. These codings can be applied by the radio itself or
they could be applied by the applications that use the radio link. The latter
has the advantage that more efficient coding is possible as the applications
know the structure and value of the data they process. Wireless ATM
systems may be expected to incorporate this adaptive coding approach.
Forward error correcting codes, possibly incorporating some of the coding
mentioned above, in combination with Automatic Retransmission Request
(ARQ) error control are well known techniques that can be applied to
wireless ATM as well as to any other means of wireless data transmission.
Air interface performance:

Scarce spectrum favors the use of multiple access in the time domain. This requires half-duplex operation with frequent link turnarounds, which puts limits on wireless link utilization and on the increase of performance with increases in bit rate. Link turnaround is subject to some physical constants such as transmitter turn-on and turn-off times and receiver turn-on and synchronization times. Moreover, omni-directional radio transmission is affected by multi-path fading. Methods to combat this phenomenon (e.g. equalization, multi-tone transmission) require the receiver to train on a transmitted preamble before data reception is possible. The duration of the preamble is by and large independent of the bit rate, which implies that the overhead per fixed-length packet increases with increasing bit rate. Also, signal processing requirements grow more than linearly with increasing rate. With current technology, this factor argues against radio bit rates that exceed, say, 25 Mbit/s. The jury is still out on what rate is best in terms of system response time, medium utilization efficiency and channelisation needs, but it is expected that the actual rate will lie between 10 and 25 Mbit/s. Full-duplex operation with separate, possibly directional, radio channels would be possible for continuously used links such as wireless backbones.
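The argument that per-packet overhead grows with bit rate can be quantified: the cell airtime shrinks as the bit rate increases while the turnaround time stays roughly constant, so the fraction of channel time lost per cell grows. The 20-microsecond turnaround figure below is simply an assumed value in the "tens of microseconds" range discussed above, used for illustration.

```python
CELL_BITS = 53 * 8                     # one ATM cell, header included

def overhead_fraction(rate_mbit_per_s: float, turnaround_us: float = 20.0) -> float:
    """Fraction of channel time lost to link turnaround per single-cell exchange."""
    cell_time_us = CELL_BITS / rate_mbit_per_s      # airtime of one cell in microseconds
    return turnaround_us / (turnaround_us + cell_time_us)

for rate in (10, 25, 50):
    print(f"{rate:>2} Mbit/s: cell time {CELL_BITS / rate:5.1f} us, "
          f"turnaround overhead {overhead_fraction(rate):.0%}")
# 10 Mbit/s -> ~32% overhead, 25 Mbit/s -> ~54%, 50 Mbit/s -> ~70%
```

Under these assumptions the extra capacity of a faster radio is increasingly eaten by turnaround, which is consistent with the expectation that practical rates will stay between 10 and 25 Mbit/s.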
Radio spectrum:
All wireless ATM systems developed or planned use radio as the wireless medium, and all except the Japanese AWA project are intended for private use. With bandwidths of 10 to 20 Mbit/s being required for ATM, the only RF spectrum with sufficient capacity available for unlicensed applications are the 2.4 GHz and 5.8 GHz ISM bands in the US and most other countries, and the 5.150 to 5.300 GHz band in Europe for "HIPERLANs". In the US, the FCC has issued a Notice of Proposed Rulemaking (NPRM 96-193) that proposes 5.150 to 5.350 GHz, paralleling the European 5.2 GHz allocation. The result would be that 5 GHz equipment would be assigned to high-speed digital networks in most countries of the world, with Japan as a notable exception. Given the presence of interference from industrial, scientific and medical equipment and other unlicensed users in the ISM bands, the 5.2 GHz band is an ideal band for wireless ATM applications. The 200 MHz (150 MHz in Europe) should be considered sufficient for initial deployment of user systems; as market penetration increases, correspondingly more spectrum will be needed to avoid local overloading of the radio spectrum.








Such overloading would prevent wireless ATM systems from delivering on their most important promise, a reliable quality of service. Fig. 8 shows the various users of the 5 GHz band and the range that could be used for expansion.


Conclusion
Wireless ATM has been firmly established as a promising area of
research, product development and standardization. The momentum to
make progress in each of these three fields is there, thanks not only to the
stabilization of the ATM service specifications, but also thanks to the efforts
of business and regulators to find suitable RF spectrum for local area
networks based on the concept of wireless ATM.


References
[1] J.T. Johnson, ATM networking gear, Data Commun. (October 1993).
[2] L. Ceuppens and T. Bosser, ATM, setting the standards, Telecommunications, Int'l ed. (April 1994).
[3] J. Gould, ATM's long, strange trip to the mainstream, Data Commun. Int'l (June 1994).
[4] P. Newman, Capitalists, socialists and ATM switching, Data Commun. Int'l (December 1994).
[5] L. Goldberg, ATM's growing pains bring maturity as UNI 4.0 evolves, Electronic Design (May 1995).
[6] D. Hughes and K. Hooshmand, ABR stretches ATM network resources, Data Commun. Int'l (April 1995).
[7] N. Rickard, ABR, realizing the promise of ATM, Telecommunications, Int'l ed. (April 1995).
[8] M. Schwartz, Network management and control issues in multimedia wireless networks, IEEE Personal Commun. (June 1995).












A DOCKING SYSTEM FOR MICROSATELLITES
BASED ON
MEMS ACTUATOR ARRAYS


N.Sandeep Kumar A.V.Rohit
III Year EEE III Year ECE
VNR VJIET CVRCE
sandy_nagapuri@yahoo.com rohit_tx@yahoo.co.in




Abstract:

Microelectromechanical system
(MEMS) technology promises to
improve performance of future
spacecraft components while reducing
mass, cost, and manufacture time.
Arrays of microcilia actuators offer a
lightweight alternative to conventional
docking systems for miniature satellites.
Instead of mechanical guiding structures,
such a system uses a surface tiled with
MEMS actuators to guide the satellite to
its docking site.
This report summarizes work on an experimental system for precision docking of a picosatellite using MEMS cilia arrays. Microgravity is simulated with an aluminum puck on an airtable. A series of experiments is performed to characterize the cilia, with the goal of understanding the influence of normal force, picosat mass, docking velocity, cilia frequency, interface material, and actuation strategy (gait) on the performance of the MEMS docking system.





We demonstrate a 4 cm2 cilia array capable of docking a 45 g picosat with a 2 cm2 contact area with micrometer precision. It is concluded that current MEMS cilia arrays are useful to position and align miniature satellites for docking to a support satellite.

Introduction:

Micro-Electro-Mechanical systems
(MEMS) is the integration of mechanical
elements, sensors, actuators, and
electronics on a common silicon
substrate through microfabrication
technology. While the electronics are
fabricated using integrated circuit (IC)
process sequences (e.g., CMOS, Bipolar,
or BICMOS processes), the
micromechanical components are
fabricated using compatible
"micromachining" processes that
selectively etch away parts of the silicon
wafer or add new structural layers to
form the mechanical and
electromechanical devices.
A number of MEMS cilia systems
have been developed with the common
goal of moving and positioning small
objects, so far always under the force of
gravity. Similar to biological cilia, all of
these systems rely on many actuators
working in concert to accomplish the
common goal of transporting an object
of much larger size than each individual
cilium. Recent techniques range from air jets, electromagnetic actuators, piezoelectric actuators, single-crystal silicon electrostatic actuators, and thermal-bimorph (bimaterial) actuators to electrothermal (single-material) actuators. In parallel, researchers have
studied the control of distributed
microactuation systems and cilia arrays.



The goal of this project was to
investigate the feasibility of a MEMS-
based space docking system. For such a
system, the docking approach is divided
into two phases: (1) free flight and
rendezvous, with the goal to achieve
physical contact between the two
satellites, and (2) precision docking with
the goal to reach accurate alignment
between the satellites (e.g., to align
electrical or optical interconnects). Phase
1 constitutes unconstrained motion with
6 degrees of freedom and lower accuracy
for the rendezvous. Phase 2 constitutes
planar motion with 3 degrees of freedom
and high accuracy. The MEMS-based
approach for phase 2 represents the key
innovation in this project. Therefore, the
investigation of MEMS cilia as a means
to achieve precise alignment between
two satellites represents the primary
focus of this report. To demonstrate a
complete system-level solution, phase 1
was investigated as well.


During this project, thermally actuated polyimide-based microcilia, as seen in Figure 1, are extensively characterized to ascertain their practicality for docking miniature spacecraft. To this end, experiments were performed using an airtable, seen in Figure 2, which was designed to support the microcilia in a vertical configuration.
A rectangular aluminum block, referred
to as a puck, which has a mass between
40g and 45g is used to simulate a
picosatellite. The airtable can be tilted
towards the microcilia producing a
known normal force against the faces of
the chips. This force can then be
adjusted independently from the mass
of the simulated picosatellite. To
increase the realism of the experiment
and to facilitate data collection, position
sensing and position feedback are
incorporated and computer controlled.
Two position-sensing systems are used:
an array of Hall effect sensors and a
video capture based system. These are
strictly non-contact techniques
compatible with a space environment.
Next we describe the experiments
that were performed with the microcilia
with the goal to evaluate the
appropriateness of microcilia to
spacecraft docking applications. The
microcilia successfully moved blocks
of aluminum in excess of 40g of mass
and calculations indicate that a patch of
cilia 25cm in radius would be sufficient
to position a 40kg satellite. Four
different materials, polished ceramic,
polystyrene, smooth aluminum and
silicon, both rough and polished, are
used for the cilia to puck interface
surface. Polished ceramic achieved the
highest puck velocities of all interfaces
and polished silicon attained higher
velocities than rough silicon.
Throughout the course of this study
microcilia were able to provide the
speed, robustness, reliability and
strength for use in miniature spacecraft
applications.



Figure 3 describes a large, broad purpose
satellite surrounded by a constellation of
smaller, mission specific satellites. The
miniature satellites provide inspection,
maintenance, assembly and
communication services for their larger
brethren. One important future task for
the microsatellites is inspecting the
larger satellite for damage. Cameras
mounted on the microsatellite provide
imagery of the primary platform that is
otherwise unobtainable. From these
pictures, damage could be assessed and
the mission of the main satellite adapted.
Due to their simplicity, small size,
weight and limited interaction with
ground controllers, these specialized
satellites are expected to be
indispensable during future missions.

Figure 4. Conceptual cilia picosatellite docking
application

As the size of satellites shrinks, their
ability to carry fuel and power is
reduced. It is expected that this will
force microsatellites to dock frequently
to replenish their resources. Since the
time spent docking subtracts from the
microsatellites mission time, this
procedure should be as simple and quick
as possible. When docking micro spacecraft there are two primary tasks: attaching the microsatellite to the larger craft, and orienting the satellite to connect fuel, data and electrical services. The first
docking task is largely the domain of the
microsatellite and is dependent on how
quickly velocity adjustments can be
made, and on the specific attachment
mechanism. The second phase of
docking is dependent on the speed at
which the satellite can be positioned to
connect electrical and other services.
Figure 4 shows the conceptual
cilia application for space docking of
miniature satellites. A surface on the
mother satellite is covered with cilia
actuators that will position the
picosatellite for refueling and data
transfer.

Figure 5. Cross-sectional view of the cilia with two layers of polyimide, titanium-tungsten heater loop, silicon-nitride stiffening layer, and aluminum electrostatic plate.

Experimental Setup

The measurements in this paper are
performed using the thermal actuator
based microcilia. A cross-section of
microcilia arms is shown in Figure 5.
The arrayed actuators are deformable
microstructures that curl out of the
substrate plane. The curling of the actuators is due to the different coefficients of thermal expansion (CTE) of the polyimide layers that make up the bimorph structures. For these devices the top layer CTE is greater than the bottom layer CTE. The thermal stress from this interface causes the actuator to curl away from the substrate at low temperatures and towards it when heated. This stress also aids in releasing the microcilia arms, because they automatically rise out of the plane when the sacrificial layer is etched. The displacement of the microcilia arm relative to its location before release, both vertically, V, and horizontally, H, is given by the arc geometry of the curled beam:

V = R (1 - cos(L/R)),    H = L - R sin(L/R),

where R (approximately 800 µm) is the radius of curvature and L (430 µm) is the length of the motion actuator, which results in a horizontal displacement, H, of 20 µm and a vertical displacement, V, of 114 µm.
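As a quick numerical check of these relations (an illustrative script, not part of the original work; it simply evaluates the arc-geometry expressions with the values quoted above):

    import math

    # Tip displacement of an actuator of length L curled to radius of curvature R,
    # evaluated with the values quoted above (units: micrometers).
    R = 800.0   # approximate radius of curvature
    L = 430.0   # actuator length

    V = R * (1.0 - math.cos(L / R))   # vertical (out-of-plane) displacement
    H = L - R * math.sin(L / R)       # horizontal (in-plane) displacement

    # About 113 um and 20 um, consistent with the values quoted above.
    print(f"V = {V:.0f} um, H = {H:.0f} um")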
An array of cilia is configured in an 8 × 8 motion cell layout within a 1 cm2 die. Each motion cell contains four orthogonally-oriented actuators within an area of 1.1 mm × 1.1 mm. A control
line independently actuates each actuator
of the motion cell. All actuators oriented
in the same direction in each motion cell
are controlled in parallel.
The microcilia arm is placed into
motion using a titanium-tungsten heating
resistor that is sandwiched between the
two silicon nitride and two polyimide
layers. When a current is passed through
this loop, the temperature of the actuator
increases, and the structure deflects
downward. This produces both horizontal
and vertical displacements at the tip of
the microcilia.
Objects in contact with the surface
of the array are made to move by
coordinating the deflections of many
actuators. For this study both three and
four-phase gaits are used during the
interface experiments but only the four-
phase gait is used for the normal force
experiments. The motions of the
microcilia arms for the four-phase gait
are shown in Figure 6. This motion has
two transitions that produce forward
motion.
To assess the applicability of
microcilia to spacecraft docking this
study investigates the effects of:
operating frequency, normal force,
interface surfaces, microcilia
temperature, and, indirectly, microcilia
life span. Of these variables, frequency
and life span depend directly on the
thermal actuation nature of the cilia
while the remaining parameters should
be applicable to other types of MEMS microactuator arrays.


Figure 6. The four-phase cilia motion gait showing all phases of motion.

To perform measurements the
microcilia are placed vertically, at the
end of a tilted airtable as shown in Figure
7. The table is first leveled and then the
angle adjustment is manipulated to
specify a slope running towards the
microcilia. By adjusting the slope of the
table, the mass of the aluminum airtable
puck can vary while the normal force
against the microcilia remains constant.
Conversely, the mass can remain fixed
while a variable normal force is applied
against the microcilia. Using this
parameter independence, the airtable
allows for an accurate simulation of
microsatellite docking in microgravity.
With this setup, the microcilia
can manipulate objects that would
otherwise flatten the actuators if all the
gravitational force were applied as the
normal force. This experiment uses four
microcilia chips attached to a copper
block that both actively cools the
microcilia using a Peltier junction and
holds them vertical at the end of the
airtable. The microcilia chips were glued into a groove machined in the copper block, forcing all four chips to lie in the same plane in a horizontal line.


Figure 7. LabView interface for the microcilia




Figure 8. Side view of the airtable

During all of the experiments the microcilia were controlled with the LabView interface seen in Figure 7 and custom circuitry. Each of the microcilia gaits is broken down into a state machine describing the sequence of movements for each of the microcilia arms. This state machine is then loaded into an LSI programmable gate array, one per microcilia chip. The LabView interface instructs the LSI chips which gait to use, the direction to travel and the frequency through which to cycle the cilia gait. LabView also reads the Hall effect sensor array and from that data controls the starting and ending points of the puck.
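The structure of such a gait state machine can be sketched as follows (illustrative only: the real controller is the LabView/LSI hardware described above, and the placeholder phase table below does not reproduce the actual sequence of Figure 6).

    import time
    from itertools import cycle

    # Structural sketch of a gait controller.  Each phase maps a cilia group to
    # a heater state (1 = heated, 0 = unheated); the concrete phase table would
    # come from the gait figure, so the entries below are placeholders.
    def run_gait(phases, frequency_hz, n_cycles, drive):
        dwell = 1.0 / (frequency_hz * len(phases))   # time spent in each phase
        stepper = cycle(phases)
        for _ in range(n_cycles * len(phases)):
            drive(next(stepper))                     # write heater states
            time.sleep(dwell)

    if __name__ == "__main__":
        placeholder = [{"N": 1, "S": 0}, {"N": 1, "S": 1},
                       {"N": 0, "S": 1}, {"N": 0, "S": 0}]
        run_gait(placeholder, frequency_hz=25, n_cycles=2, drive=print)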
To make displacement
measurements two separate systems are
employed. The first is a high-resolution
video capture system. This system,
equipped with a zoom lens, allows for
relative measurements on the order of
5m and for capturing expanded views
of the system. A video capture and
distance measurement system has been
developed. A flow diagram, shown in
Figure 9, details the specific tasks that
are performed upon a video image. The
image of the puck is captured using a
black and white CCD camera mounted
on a variable zoom, 1-10, microscope.
This image is then converted to a black
and white image to allow processing
using Matlab.



Figure 9. Block flow diagramof image capture
system
Once converted, the center points
of both the horizontal and vertical white
regions are calculated. After the center
point calculations are complete, Matlab
performs an AND function to determine
the center pixel of the captured image.
All of the center points can be combined
and averaged relative to an internal clock
to reduce the error of the capture system.
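A minimal sketch of the centre-point step is given below, written with numpy for illustration; the actual processing was done in Matlab, and the threshold and array shapes here are assumptions.

    import numpy as np

    # Given a thresholded (black/white) frame, estimate the puck marker centre by
    # averaging the coordinates of the white pixels; this mirrors the horizontal/
    # vertical centre-point step described above, implemented with numpy rather
    # than the Matlab code used in the experiments.
    def marker_center(binary_frame):
        rows, cols = np.nonzero(binary_frame)        # white pixel coordinates
        if rows.size == 0:
            return None                              # no marker visible
        return rows.mean(), cols.mean()              # (row, column) centroid

    if __name__ == "__main__":
        frame = np.zeros((8, 8), dtype=np.uint8)
        frame[2:5, 3:6] = 1                          # synthetic white blob
        print(marker_center(frame))                  # -> (3.0, 4.0)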
The other measurement system is
an array of Hall effect sensors. These
sensors interact with a magnet mounted
atop the puck to provide micrometer
resolution. The Hall effect sensor array
is integrated into the LabView
controlling software allowing fully
automated experiments to be performed.
Using either of these systems it is possible to collect relative puck positions and from these to compute the velocity and acceleration of the puck.
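For example, a finite-difference estimate of velocity and acceleration from the sampled positions could look like the following (sample data and sampling rate are assumed for illustration):

    import numpy as np

    # Finite-difference estimate of puck velocity and acceleration from sampled
    # relative positions (illustrative; data and rate are assumptions).
    def kinematics(positions_um, sample_rate_hz):
        dt = 1.0 / sample_rate_hz
        velocity = np.gradient(positions_um, dt)       # um/s
        acceleration = np.gradient(velocity, dt)       # um/s^2
        return velocity, acceleration

    if __name__ == "__main__":
        x = np.array([0.0, 2.0, 6.0, 12.0, 20.0])      # fake position samples
        v, a = kinematics(x, sample_rate_hz=10)
        print(v, a)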

Experimental Results
The goal of this research is to
evaluate the applicability of microcilia
arrays to microsatellite docking.
Thermal microcilia arrays are
parameterized for operating frequency,
normal force, puck mass, interface
surfaces, cilia temperature and cilia
lifespan. The results for these
experiments are presented here.
.

Figure 10. The velocity of the airtable puck (using the
beveled plastic interface) versus different drive frequencies
as a function of normal forces


Effects of Normal Force

Figure 10 shows the velocity of the puck at different frequencies for four different normal forces. For all of these data points the mass of the puck is 41.20 g and the interface surface is a 0.9 cm × 2.5 cm piece of polystyrene, beveled on the edges. Each data point is an average of four runs over a distance of 0.8 mm.
This setup contains two strong
resonant frequencies between 13 and
16Hz and between 30 and 33Hz as
illustrated by the graph flattening at
these points. For these measurements the
video system is used to record the puck
velocity.
Outside of these regions the puck velocity increases linearly, which indicates that the puck moves in accordance with the driving period. This characteristic indicates that the interface between the puck and the microcilia arms experiences a fixed slip component. At these frequencies the puck motion appears to be largely the result of a step-and-carry transport. One conclusion from this
graph is that the overall velocity of the
puck increases as the normal force
against the cilia surface decreases to a
minimum normal force value. This is an
expected result because as the normal
force increases so does the
precompression of the cilia, reducing
their total vertical and horizontal motion.
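A hedged sketch of this picture: if each gait cycle advances the puck by a roughly fixed effective step (gross step minus a fixed slip), the velocity grows linearly with drive frequency. The step and slip values below are illustrative assumptions, not measured quantities.

    # Illustrative step-and-carry model: velocity = (step - slip) * frequency.
    def puck_velocity_um_per_s(step_um, slip_um, frequency_hz):
        return max(step_um - slip_um, 0.0) * frequency_hz

    for f_hz in (5, 10, 20, 40):
        v = puck_velocity_um_per_s(step_um=20.0, slip_um=5.0, frequency_hz=f_hz)
        print(f"{f_hz:2d} Hz -> {v:.0f} um/s")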





Figure 11. Influence of interface material on puck velocity
using a three-phase gait


Figure 12. Influence of interface material on puck velocity
using a four-phase gait

Differences between thermal
conduction and surface roughness of the
puck to microcilia interface affect step
size and puck velocity. The five puck-to-microcilia interface materials examined are: polished ceramic, hard polystyrene plastic, aluminum, and polished and unpolished silicon. Puck velocity
versus frequency for the differing
interface materials is shown in Figure 11
and Figure 12 for three and four-phase
gaits, respectively. Puck velocity is
obtained by averaging a minimum of
five trials per frequency with a normal
force of 63N/mm2. The distance the
puck traveled for these measurements
varies between 0.1mm to 0.6mm with
increasing actuation frequency.



Figure 13. Comparison of both rough and polished silicon
interface for both three and four-phase gaits

As summarized in Figure 13, the
velocity of the puck is dependent on
the material interfaced with the
microcilia. The thermal conduction of
the interface material is thought to be
the major cause for the variation in
velocity magnitude per material.
Surface roughness is also observed to
have some influence, but to a much
lesser extent. Aluminum and silicon
have the highest thermal conduction
and this results in the lowest velocities.
Ceramic, an excellent thermal and
electrical insulator, delivers some of
the highest velocities. Low thermal
conduction of the ceramic interface
allows the cilia to heat and cool in an
optimal fashion resulting in high
actuation amplitudes and high
velocities.
Missing data points in the
three-phase graph and the flatter areas
of the other graphs are due to the puck
oscillating with zero or reduced
velocity for multiple trials at that
frequency. This effect is distributed
over the entire cilia experimental
surface. The regions of 17.5Hz and
33Hz show the most pronounced
reduction in puck velocity for both
gaits and all interface materials. The
variation of this effect for different
surface material and puck mass
indicates that it is strongly dependent
upon the specific geometry of the
experiment. Regardless of this minor
variation, it is thought that this
phenomenon can be traced partly to
the puck breaking contact with the
microcilia surface during part of the
motion cycle. As the normal force is
increased, this effect becomes less
pronounced, however, it is still
consistently observed. Within these frequency bands the puck was observed to move away from the cilia surface on the order of 100 µm, lending support to this theory.
Thermal Effects
As the background temperature
of the microcilia is allowed to increase
the actuators become less effective.
With rising background temperature it
takes longer for the cilia to gain heat
during the actuated portion of its
motion cycle. This results in lowering
of the maximum available driving
frequency.The maximum background
temperature becomes large enough that
the heater loop cannot raise the
temperature of the cilia higher than the
background. At this background
temperature, no heating period would
be sufficient to allow the cilia to have
a net displacement. At this point,
objects in contact with the cilia would
no longer be transported.
This scenario was
experimentally verified. If the polarity
of the Peltier that normally cools the
microcilia is reversed, it provides
active heating, rather than active
cooling. As the background
temperature of the cilia increases,
actuation displacement decreases.
Eventually all visible movement halts.
Once this point is reached the heater is
turned off and the microcilia are
allowed to cool. Subsequent checks of
the microcilia, under standard
operating conditions, could determine
no mechanical or electrical faults.
However, prolonged operation at
elevated temperatures will eventually
damage the actuators. Possible failures
include charring of the polyimide and
fusing of the heater wire.


Life Span
Over the course of these
experiments the microcilia were shown
to be robust and the results
reproducible. Four chips, corresponding to 4 × 256 = 1024 actuators, were run for approximately 180 h at an average of 25 Hz. This corresponds to 16.2 million actuations per cilia actuator, or a total of 16.6 billion actuations. The cilia were designed for a nominal resistance of 1.5 kΩ [13]; actual resistance varied between 1.2 and 3.0 kΩ. Voltages up to 60 V were applied to the network of 8 parallel × 8 serial cilia for each motion direction on a chip, resulting in a peak power dissipation of approximately 20 mW per cilium (3 kΩ at 60 V, or 1.2 kΩ at 40 V). In a motion gait, the duty cycle of a cilium is 1/2 or 1/3 of an entire period (for four-phase or three-phase gaits, respectively; see again Figure 6). Therefore, the average power dissipation is 6.7 to 10 mW for an individual cilium, or 1.7 to 2.6 W for the entire chip. We estimate the peak temperature to be approximately 300 °C, which was sufficiently high to cause localized melting of contacting polystyrene surfaces and charring of paper.
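As a cross-check of this power bookkeeping, the short script below reproduces the figures quoted above; the 8-in-series layout, resistances and voltages are taken from the text, and the script itself is only an illustration.

    # Per-cilium and per-chip power, following the description above.
    def per_cilium_power_mw(total_voltage, cilium_resistance_ohm, series_count=8):
        # Each string has 8 cilia in series, so each cilium sees 1/8 of the voltage.
        v_cilium = total_voltage / series_count
        return 1000.0 * v_cilium ** 2 / cilium_resistance_ohm

    peak_60v = per_cilium_power_mw(60.0, 3000.0)   # ~18.8 mW, close to the ~20 mW quoted
    peak_40v = per_cilium_power_mw(40.0, 1200.0)   # ~20.8 mW
    # Duty cycle of 1/3 (three-phase gait) or 1/2 (four-phase gait), using the
    # 20 mW round figure from the text, then 256 cilia per chip.
    avg_low, avg_high = 20.0 / 3, 20.0 / 2                       # ~6.7 to 10 mW
    chip_low, chip_high = 256 * avg_low / 1000, 256 * avg_high / 1000  # ~1.7 to 2.6 W
    print(peak_60v, peak_40v, avg_low, avg_high, chip_low, chip_high)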
During the entire test time only one
microcilium actuator leaf was lost.
This failure was in an individual heater
loop and probably corresponded to a
local thickening of the material or
contaminants in that area during
manufacture. Subsequent checks of
other microcilia, under standard
operating conditions, could determine
no mechanical or electrical faults.
However, prolonged operation at
elevated temperatures can eventually
damage the actuators. Possible failures
include fusing of the heater wire and
charring of the polyimide. Localized
charring over some heating loops was
observed and is shown in figure 14.
This charring was confined to 70% of
actuators on one chip and resulted
from driving at extremely high
temperatures. No significant charring
was seen on the other seven tested
chips.

Conclusions
The results from these
experiments indicate that a microcilia
surface can be useful for docking small
spacecraft. These spacecraft, used for
inspection, maintenance, assembly and
communication services, will see
increased use as space missions
become more autonomous and far
reaching. During this scenario,
microcilia provide a good match,
allowing for simple docking
procedures to be used with these
simple satellites.


Figure 14. Localized charring of polyimide surface above
heater loop.

Results from the interface
experiments indicate that a variety of
materials common to spacecraft can be
used as docking surfaces, including
aluminum and silicon, thus avoiding
the need for special materials on the
mating surfaces. When studying the
performance of different interface
materials, thermal conduction
dominates surface roughness to
achieve optimal object velocity.
Surface roughness does affect object
velocity as seen in the polished and
unpolished silicon. An interface
material, such as ceramic, with low
thermal conduction and little surface
roughness should be selected for an
optimal docking surface.
Using microcilia to perform the
delicate final orientation and
positioning of the satellite will greatly
speed up the docking operation
because the entire satellite, with its
fixed connections, could be mated to
fixed connections on the main satellite.
This alleviates the use of flexible and
cumbersome umbilical cords and
attendant positioning systems.
A further benefit of using
microcilia as a docking surface is a
reduction in mass compared to other
docking and alignment techniques. On
the host satellite only a surface of
microcilia is required along with
minimal control electronics and
sensors. The microcilia docking
system could simply replace one of the
satellite's body panels for maximum
weight savings. On the microsatellite
side, the additional mass to incorporate
docking functionality could be as low
as zero. The optimal microcilia
interface is a flat plane, which may
already be part of the microsatellite
chassis, thus requiring minimal
integration.
The microcilia themselves
have inherent advantages for this
application. Foremost among these
advantages is their ability to arbitrarily
position the satellite anywhere on the
surface and in any orientation. The
microcilia can also act as sensors,
however, it has already been
demonstrated that they can position
objects open loop with little loss of
accuracy. By using thousands of
microcilia on a single docking patch, it
is possible to build systems that
incorporate massive redundancy. Thus,
if there is some kind of docking
mishap the entire mission need not be
affected.Finally, thermal microcilia
have been shown to perform better in
vacuum then air. This is largely due to
a lack of convective cooling which
slows the heating cycle.
The scalability of microcilia also enables the construction of widely varied systems. While the primary task envisioned for microcilia is manipulating picosatellites (mass < 1 kg), much greater masses are feasible. By using additional cilia and a greater contact area, larger microsatellites can be handled. The current generation of microcilia is capable of moving a 41.2 g puck with an interface area of 2 cm2. This indicates that a patch only 25 cm in radius (roughly 1000 times the contact area used in the experiment) would be sufficient to position satellites of more than 40 kg mass under microgravity conditions.
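A quick check of this scaling argument (illustrative arithmetic only):

    import math

    # Keep the mass-to-contact-area ratio of the experiment and ask what cilia
    # area a 40 kg satellite would need under microgravity.
    puck_mass_g, puck_area_cm2 = 41.2, 2.0
    target_mass_g = 40.0e3

    needed_area_cm2 = puck_area_cm2 * target_mass_g / puck_mass_g
    needed_radius_cm = math.sqrt(needed_area_cm2 / math.pi)
    print(f"area ~ {needed_area_cm2:.0f} cm^2 (about 1000x the 2 cm^2 interface), "
          f"radius ~ {needed_radius_cm:.0f} cm")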
Throughout the course of this
study the microcilia exhibited the
speed, robustness, reliability and
strength needed for this application.
These results show that microcilia can
be an attractive alternative to
conventional docking systems for
microsatellite
applications.


Copyright © 2001 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.


References

1. K. S. J. Pister, R. Fearing, and R. Howe, A planar air levitated electrostatic actuator system, in Proc. IEEE 5th Workshop on Micro Electro Mechanical Systems, Napa Valley, CA, Feb. 1990, pp. 67-71.

2. S. Konishi and H. Fujita, A conveyance system using air flow based on the concept of distributed micro motion systems, J. Microelectromech. Syst., vol. 3, no. 2, 1994, pp. 54-58.

3. Y. Mita, S. Konishi, and H. Fujita, Two dimensional micro conveyance system with through holes for electrical and fluidic interconnection, in Transducers 97, Dig. 9th Int. Conf. on Solid-State Sensors and Actuators, vol. 1, Chicago, IL, June 16-19, 1997, pp. 37-40.

4. W. Liu and P. Will, Parts manipulation on an intelligent motion surface, in Proc. 1995 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Pittsburgh, PA, vol. 3, Aug. 5-9, 1995.

5. http://www.memsnet.org/mems/what-is.html
A.C. Avalanche Drive


V. R. SIDDHARTHA ENGINEERING COLLEGE


Vijayawada


DEPARTMENT OF EEE






Author(s):
R.V.DEEPTI
deepthi_ee05@yahoo.co.in

V.BHARGAVI
reachto_bhargavi@yahoo.co.in



Abstract:

Electric motor is the most important
component in many industrial
applications. An electric motor
together with its control equipment
and energy-transmitting device forms
an electric drive. An electric drive
together with its working machine
constitutes an electric drive system.
This paper deals with electric motor
drives i.e. dc electric drives and
mainly ac electric drives including
their advantages. The main focus of the paper is on the AC Avalanche drive. The Avalanche drive is a 4-phase ac drive, which works on a similar principle as that of an ac drive; it includes a 4-phase bridge converter unit, dc link choke, capacitive filter and IGBT inverter unit. When the 3-phase ac drive and the Avalanche drive are compared, the Avalanche drive proves more advantageous in terms of smooth control and harmonic reduction.


Keywords:
Electric Drives, Harmonics, Polyphase Analysis, Fourier Series, AC Avalanche Drive, Simulation, PSPICE

Introduction:
An electric drive is a system that
performs the conversion of electric
energy into mechanical energy at
adjustable speeds. It is also called
adjustable speed drive (ASD). Moreover,
the electric drive always contains a
current (or torque) regulation in order to
provide safe current control for the
motor. Therefore, the electric drive
torque/speed is able to match in steady
state the torque/speed characteristics of
any mechanical load. This motor to
mechanical load match means better
energy efficiency and leads to lower
energy costs. In addition, during the
transient period of acceleration and
deceleration, the electric drive provides
fast dynamics and allows soft starts and
stops, for instance.
A growing number of applications
require that the torque and speed must
vary to match the mechanical load.
Electric transportation means, elevators,
computer disk drives, machine tools, and
robots are examples of high-performance
applications where the desired motion
versus time profile must be tracked very
precisely. Pumps, fans, conveyers, and
HVAC are examples of moderate
performance applications where
variable-speed operation means energy
savings.
An electric drive has three main
components:
The electric motor
The power electronic converter
The drive controller

The motor used in an electric drive is
either a direct current (DC) motor or an
alternating current (AC) motor. The type
of motor used defines the electric drive's
classification into DC motor drives and
AC motor drives.

D.C. Electric Motor Drives:
The ease of producing a variable DC
voltage source for a wide range of speed
control made the DC motor drive the
favorite electric drive up to the 1960s.
These are used extensively in adjustable
speed drives and position control
applications. Their speeds below base
speeds can be controlled by armature-
voltage control while speeds above the
base speeds are obtained by field-flux
control. Phase controlled converters and
dc choppers provide an adjustable dc
output voltage from a fixed dc input
voltage. The dc motors used in
conjunction with power electronic
converters are dc separately excited
motors or dc series motors.
Dc drives are classified as:
Single-phase dc drives.
3 phase dc drives.
Chopper drives.
Single-phase dc drives:
It contains a converter unit feeding the load from a single-phase source. The firing angle control of the converter regulates the armature voltage applied to the dc motor armature. The discontinuous armature current produced causes more losses in the armature and poor speed regulation. An inductor is used in series with the armature circuit to reduce the ripple in the armature current, making it continuous for low values of motor speed.
Depending upon the type of power
electronic converter used in an arm
circuit, single-phase dc drives may be
sub-divided as:
1. Single-phase half-wave converter drives
2. Single-phase semi-converter drives
3. Single-phase full converter drives
4. Single-phase dual converter drives.
Three-phase dc drives:
Large dc motor drives are always fed through three-phase converters for their speed control. A three-phase controlled converter feeds power to the armature circuit for obtaining speeds below base speed; another three-phase controlled converter is inserted in the field circuit for getting speeds above base speed. The output ripple frequency of a three-phase converter is higher than that of a single-phase converter. Therefore, for reducing the armature current ripple, the inductance required for a three-phase dc drive is of lower value than that in a single-phase dc drive. As the armature current is mostly continuous, the motor performance in three-phase dc drives is superior to that in single-phase dc drives.
Chopper drives:
When variable dc voltage is to be
obtained from fixed dc voltage, dc
chopper is used. A chopper is inserted in
between a fixed voltage dc source and
the dc motor armature for its speed
control below base speed. In addition,
chopper is easily adaptable for
regenerative braking of dc motors.

Disadvantages of dc drives:
The dc drive is not widely used in industrial applications.
Dc drives are unsuitable for operation in hazardous conditions.

A.C. Electric Motor Drives:

The advances of power electronics
combined with the remarkable evolution
of microprocessor-based controls paved
the way to the AC motor drives
expansion. In the 1990s, the AC motor
drives took over the high-performance
variable-speed applications.
In general, there are two types of ac
drives:
Induction motor drives.
Synchronous motor drives.
Induction motor drives:
Three-phase induction motors are more commonly employed in adjustable-speed drives than three-phase synchronous motors. Three-phase induction motors are of two types, i.e. squirrel cage induction motors (SCIMs) and slip ring (or wound-rotor) induction motors (SRIMs). Stators of both types carry three-phase windings. The rotor of a SCIM is made of copper or aluminum bars short-circuited by two end rings. The rotor of a SRIM carries three-phase windings connected to three slip rings on the rotor shaft.
Speed control of Three-phase
induction motors:
Three-phase induction motors are
admirably suited to fulfill the demand of
loads requiring substantially a constant
speed. Several industrial applications,
however, need adjustable speeds for
their efficient operation. The object of
the present section is to describe the
basic principles of speed control
techniques employed to Three-phase
induction motors through the use of
power electronics converters. The
various methods of speed control
through semiconductor devices are as
under:
1. Stator voltage control
2. Stator frequency control
3. Stator voltage and frequency
control
4. Stator current control
5. Static rotor-resistance control
6. Slip-energy recovery control
Synchronous motor drives:
Synchronous motors have two windings,
one on the stator is three-phase armature
winding and the other on the rotor is the
field winding. The three-phase winding
on its stator is similar to the three-phase
windings on the stator of a three-phase
induction motor. Field winding is
excited with dc and it produces its own
mmf called field mmf. Three-phase
stator winding carrying Three-phase
balanced currents creates its own
rotating armature mmf. The two mmfs
combine together to produce resultant
mmf. The field mmf interacts with the
resultant mmf to produce
electromagnetic torque. A synchronous motor always runs with zero slip, i.e. at synchronous speed. The power factor of a synchronous motor can also be controlled by varying its field current. For the speed
control of synchronous motors, both
inverters and cycloconverters are
employed.
The various types of synchronous
motors are:

1. Cylindrical rotor motors
2. Salient-pole motors
3. Reluctance motors
4. Permanent-magnet motor



The above figure shows the general
circuit of three-phase ac drive,
commonly used for industrial
applications.
Working:
When three-phase AC supply is given,
the capacitor charges up to its peak.
Each diode allows current to flow in
only one direction. For the six diodes in
the three-phase bridge, three are
conducting at a time while the other
three are blocking. When the polarity of
the AC input changes, the conducting
and blocking diode pairs also change.
When a load is applied, the capacitor will begin to discharge. On the next input line cycle, the capacitor draws current through the diodes only when the line voltage is greater than the DC bus voltage, since only then is a diode forward biased. This occurs near the peak of the applied sine wave, resulting in a pulse of current every input cycle around the positive and negative peaks. As load is
applied, the capacitor discharges and the
DC voltage level drops. The width of the
pulse of current is determined in part by
the load on the DC bus.
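To make this charge/discharge picture concrete, the rough sketch below simulates an ideal three-phase bridge feeding a DC-bus capacitor and resistive load; the component values and the 0.5 ohm source impedance are assumptions for illustration, not values from the paper.

    import numpy as np

    # The bridge conducts only when the rectified line voltage exceeds the
    # capacitor voltage, producing short charging-current pulses near the peaks.
    f, C, R_load, R_source = 50.0, 2e-3, 20.0, 0.5     # Hz, F, ohm, ohm (assumed)
    dt, t_end = 1e-5, 0.1
    t = np.arange(0.0, t_end, dt)
    phase = [np.sqrt(2) * 230 * np.sin(2 * np.pi * f * t - 2 * np.pi * k / 3)
             for k in range(3)]
    v_rect = np.max(phase, axis=0) - np.min(phase, axis=0)   # ideal bridge envelope

    v_c, pulses = 0.0, []
    for v in v_rect:
        i_load = v_c / R_load
        i_charge = max((v - v_c) / R_source, 0.0)     # diodes block when v < v_c
        v_c += (i_charge - i_load) * dt / C
        pulses.append(i_charge)

    print(f"DC bus settles near {v_c:.0f} V; "
          f"steady-state charging pulses peak near {max(pulses[len(t)//2:]):.0f} A")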
Advantages of AC drive:
1. For the same rating, ac motors
are lighter in weight as compared
to dc motors
2. AC motors require low
maintenance as compared to dc
motors.
3. AC motors are less expensive as
compared to equivalent dc
motors.
4. AC motors can work in hazardous areas like chemical and petrochemical plants, whereas dc motors are unsuitable for such environments because of commutator sparking.
Disadvantages of AC drives:
1. Power converters for the control of ac motors are more complex.
2. Power converters for ac drives generate harmonics in the supply system and load circuit. As a result, ac motors get derated.

The advantages of ac drives outweigh their disadvantages. As such, ac drives are used for several industrial applications.
Harmonics:
The purpose of this part is to provide some basic information about harmonics, with a simplified explanation showing their effect. A drive draws current from the AC line, and harmonics can be analyzed from the current waveform. If the waveform is close to a perfect sine wave and the current is proportional to the voltage (although the current will be lagging the voltage), then it can be said to contain no harmonics. A perfect sine wave by definition has no harmonics; it contains only one fundamental component at one frequency.
The nonlinear loads such as AC to DC
rectifiers produce distorted waveforms.
Harmonics are present in the waveforms
that are not perfect sine waves, due to
distortion from nonlinear loads. The
distorted waveform can be represented
as a series of sine waves each as an
integral multiple of the fundamental
frequency and of specific magnitude.
The higher order waveforms are called
harmonics. The collective sum of the
fundamental and each harmonic is called
a Fourier series.
Hence, the more the waveform looks like a sine wave, the lower will be the harmonic content. If a waveform is a perfect square wave, it will contain all of the odd-numbered harmonics up to infinity. Even-numbered harmonics can be detected by a lack of symmetry about the X-axis: if the top and bottom halves of the waveform do not look like mirror images of each other, then even harmonics are present. Typically a drive will not cause any even harmonics.



The sources of most even harmonics are arc furnaces, some fluorescent lights, welders and any device that draws current in a seemingly random pattern. Another noteworthy fact is that balanced three-phase rectifier type loads (such as an AC drive) do not produce a third harmonic component. Nor do they produce any harmonic component whose order is a multiple of 3; these are known as triplen harmonics and are not present in most AC drives. From the above figure we can see that no even harmonics or triplens are present. From the 11th harmonic upward the magnitude diminishes to a very low level. We are therefore left with the 5th and 7th orders; these are the problem harmonics for AC drives. If we can reduce these two harmonic components, we will have gone a long way toward meeting any harmonic specification for AC drives.
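This behaviour can be illustrated numerically. For an ideal p-pulse rectifier the characteristic line-current harmonics have orders h = kp ± 1, so a 6-pulse (three-phase) front end shows the 5th and 7th but no triplen or even harmonics; the hedged sketch below checks this on an idealised 120-degree-conduction current, not on measured drive data.

    import numpy as np

    # Idealised 6-pulse line current (120-degree conduction blocks), analysed
    # with an FFT; amplitudes and waveform are assumptions for illustration.
    f0, fs, cycles = 50.0, 12000.0, 10
    t = np.arange(0.0, cycles / f0, 1.0 / fs)
    angle = (2.0 * np.pi * f0 * t) % (2.0 * np.pi)
    i = np.where((angle > np.pi / 6) & (angle < 5 * np.pi / 6), 1.0,
        np.where((angle > 7 * np.pi / 6) & (angle < 11 * np.pi / 6), -1.0, 0.0))

    spectrum = np.abs(np.fft.rfft(i))
    freqs = np.fft.rfftfreq(len(i), 1.0 / fs)
    fund = spectrum[np.argmin(np.abs(freqs - f0))]
    for h in (1, 2, 3, 5, 7, 11, 13):
        mag = spectrum[np.argmin(np.abs(freqs - h * f0))]
        print(f"harmonic {h:2d}: {mag / fund:.2f} of fundamental")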
These harmonics can be further reduced by using the AC drive described in this paper, the A.C. Avalanche Drive, which works with a 4-phase supply as input.

Poly-phase Analysis:
In a three-phase analysis, if we consider the vertices R, Y, B and rotations about an axis through their centroid perpendicular to their plane, then there are three rotations which leave the configuration unchanged. The first rotation, i.e. through 120° in the counter-clockwise direction, sends vertex R into Y, Y into B and B into R; it permutes the vertices cyclically. If the rotation through 120° is represented by a1, that through 240° by a2 and that through 360°, i.e. zero rotation, by a0, then the overall effect of the operations is given by the following relations:

a0 · a0 = a0
a0 · a1 = a1 · a0 = a1
a0 · a2 = a2 · a0 = a2
a1 · a2 = a2 · a1 = a0
a2 · a2 = a1
a1 · a1 = a2

These relations may be more compactly expressed in the form of a group multiplication table, known as a Cayley table. In this table the entry in the i-th row and j-th column is the product ai · aj.



Looking at this table, it is easily verified that the operations a0, a1 and a2 constitute a group, satisfying the group axioms, with their product serving as the pertinent binary operation. Since the product of any two elements is also an element of the set, closure is satisfied. Associativity is apparent, and a0 has the role of identity, i.e. unit element, since a0 · ai = ai · a0 = ai for i = 0, 1, 2.


Finally, the elements a0, a1 and a2 have inverses a0, a2 and a1 respectively, since a0 · a0 = a0, a1 · a2 = a0 and a2 · a1 = a0, i.e. each of these rotations has an inverse which, when multiplied with the original rotation, produces the zero rotation a0.




Similarly, if we consider the symmetry operations of a balanced 4-phase network, it is evident that there are four rotations that send the 4-phase network ABCD into coincidence with itself. These are the rotations through 90°, 180°, 270° and 360° about the centroid in the plane.










Let these rotations in that order be denoted by a1, a2, a3 and a0. If we again understand by the product of two rotations their successive application, then we obtain the following multiplication table, just as for the 3-phase case.




From the above multiplication table, these symmetry operations, i.e. the rotations through 90°, 180°, 270° and 360° (zero rotation), also constitute a group.
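The closure, associativity, identity and inverse properties claimed above can be checked mechanically; the sketch below (illustrative only, composing rotations by index addition modulo n) builds the Cayley tables for the 3-phase and 4-phase rotation groups and verifies the group axioms.

    from itertools import product

    # Rotation a_i is a rotation by i*(360/n) degrees, so a_i . a_j = a_{(i+j) mod n}.
    def cayley(n):
        return [[(i + j) % n for j in range(n)] for i in range(n)]

    def is_group(table):
        n = len(table)
        closed = all(0 <= table[i][j] < n for i, j in product(range(n), repeat=2))
        assoc = all(table[table[i][j]][k] == table[i][table[j][k]]
                    for i, j, k in product(range(n), repeat=3))
        identity = all(table[0][i] == i and table[i][0] == i for i in range(n))
        inverses = all(any(table[i][j] == 0 for j in range(n)) for i in range(n))
        return closed and assoc and identity and inverses

    for n, name in ((3, "3-phase"), (4, "4-phase")):
        print(name, cayley(n), "group:", is_group(cayley(n)))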
In an ac system it is possible to connect two or more individual circuits to a common polyphase source. Though it is possible to have many phases in a polyphase system, the increase in the available power is not significant beyond the three-phase system. The power generated by the machine increases by 41.4% from single-phase to two-phase, and by 50% from single-phase to three-phase. Beyond three phases, the maximum possible increase is only 7%, so an increase beyond three phases doesn't justify the extra complications. More than three phases are used only in exceptional cases. Circuits supplied by six, twelve and more phases are used in high power radio transmitter stations, while two-phase systems are used to supply two-phase servomotors in feedback control systems. The 4-phase system is under research so that it can be used in the ac Avalanche drive. In solving networks having a considerable number of branches, one sometimes experiences great difficulty due to the large number of simultaneous equations. Such complicated networks can be simplified successively by replacing delta by star connections and vice versa in a three-phase system; a similar analysis applies to the 4-phase system, i.e. the cross network and square network shown in the figure, where the resistance of each arm of the cross network is given by the product of the two sides of the square network that meet at its end, divided by the sum of all the square network resistances.




Similarly, the equivalent square resistance between any two terminals is given by the sum of the cross resistances between those terminals plus the product of those two cross resistances divided by the other cross resistance.
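For reference, the familiar three-phase delta-star transformation that this passage generalises can be written as a short routine; the 4-phase square-cross rule described above follows the same pattern. Resistor names and test values below are illustrative.

    # Standard three-phase delta <-> star conversion (textbook formulas).
    def delta_to_star(r_ab, r_bc, r_ca):
        s = r_ab + r_bc + r_ca
        # Each star arm is the product of the two delta sides meeting at that
        # terminal, divided by the sum of all delta resistances.
        return r_ab * r_ca / s, r_ab * r_bc / s, r_bc * r_ca / s   # Ra, Rb, Rc

    def star_to_delta(ra, rb, rc):
        p = ra * rb + rb * rc + rc * ra
        return p / rc, p / ra, p / rb                              # Rab, Rbc, Rca

    print(delta_to_star(3.0, 3.0, 3.0))   # balanced 3 ohm delta -> 1 ohm star
    print(star_to_delta(1.0, 1.0, 1.0))   # and back to a 3 ohm delta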







AC Avalanche Drive:
The ease of producing a variable DC
voltage source for a wide range of speed
control made the DC motor drive the
favorite electric drive up to the 1960s.
Then the advances of power electronics
combined with the remarkable evolution
of microprocessor-based controls paved
the way to the AC motor drive's
expansion. In the 1990s, the AC motor
drives took over the high-performance
variable-speed applications. But these ac motor drives have their own drawbacks, which can be compensated to some extent by a new drive called the A.C. Avalanche Drive. The harmonics, which are considered to be a main disadvantage in major industrial applications, can be further reduced.
Working:
The A.C. Avalanche Drive works on a 4-phase input supply; its constructional features and operational functions are similar to those of a normal ac drive. The diodes used in the converter circuit are replaced with SCRs fired at certain firing angles. Upon application of AC power, the capacitor will charge up to the peak of the applied line voltage through the bridge. Each diode allows current to flow in only one direction. For the eight
SCRs, which act as diodes in the 4-phase
bridge, four are conducting at a time
(one plus and one minus) while the other
four are blocking. When the polarity of
the AC input changes, the conducting
and blocking diode pairs also change.
When a load is applied to the DC bus,
the capacitor will begin to discharge.
With the passing of the next input line
cycle, the capacitor only draws current
through the diodes and from the line
when the line voltage is greater than the
DC bus voltage. This is the only time a
given diode is forward biased. This
occurs at the peak of the applied sine
wave resulting in a pulse of current that
occurs every input cycle around the +/-
peak of the sine wave. As load is applied
to the DC bus, the capacitor bank
discharges and the DC voltage level
drops. A lower DC voltage level means
that the peak of the applied sine wave is
higher than the capacitor voltage for a
longer duration. Thus the width of the
pulse of current is determined in part by
the load on the DC bus.
The above waveforms are obtained by simulating the converter circuits of the three-phase drive and the Avalanche drive using PSPICE software. The waveforms indicate the following advantages of the Avalanche drive.
Advantages:
1. Harmonics can be reduced to its
maximum possible value.
2. Operation using this drive is very
easy as control current is almost
linear, resulting in a smooth
control.
3. The filtering requirements for this drive are lower compared to single-phase and 3-phase drives, as the ripple frequency of the converter output voltage is higher.
4. In a 4-phase system only one
diode can be made to operate to
obtain full wave rectification,
which is not possible in three-
phase.
5. Maintenance is easy.
6. Load performance is superior in
4-phase converter when
compared with the single phase
and 3 phase converters.
7. Comparatively, the speed
regulation is enhanced.
The only disadvantage of this drive is the installation cost required to convert the available three-phase supply to a 4-phase supply. Even this disadvantage could be overcome if a 4-phase supply were available from the generating point.
Acronyms:
IGBT: Insulated Gate Bipolar Transistor
ASD: Adjustable Speed Drive
SCIM: Squirrel Cage Induction Motor
SRIM: Slip Ring Induction Motor
SCR: Silicon Controlled Rectifier
HVAC: Heating, Ventilation and Air Conditioning

Conclusion:
The production unit in any industry
consists of primarily three basic
components mainly electric motors,
transmitters and working machine. A
working machine is a driven machine
that performs the required process. An
electric drive together with its working
machine constitutes an electric drive
system. Most of the production mainly depends on the working machine, so our part lies in the design of a machine that works with maximum efficiency. The AC Avalanche drive described in this paper can be a good replacement for an existing ac drive in applications where the disadvantages of the ac drive, i.e. harmonics, control ripple, etc., must be seriously addressed.
As an exception, six-phase, twelve-phase, twenty-four-phase, etc., supplies are used in a few applications. But these higher phase counts are limited to multiples of three, as the conversion from three-phase to the above-mentioned systems is easy compared to conversion to non-multiples of three, mainly 4-phase. Further research on the same lines may yield more advanced and better technology for upgrading the Avalanche drive.
In India, we use a three-phase, 50 Hz AC supply as the input. There are many applications that require higher frequencies and phase counts for better operation of industrial equipment. Even so, we continue with the same generated input for general applications and use conversion units for particular applications, keeping economy in view.
Finally, we would like to conclude that the Avalanche drive could be employed in India in the industrial applications that suit its advantages, so that the efficiency of production can be improved.


References:
Power Electronics, 3rd edition, 1999, by Dr. P. S. Bimbhra.
IEEE Recommended Practices for Harmonic Control in Electric Power Systems, by Roger C. Dugan & Mark F. McGranaghan, IEEE Std. 519-1992.
Electrical Power Systems Quality, McGraw-Hill Inc., 1996, by H. Wayne Beaty.
Polyphase Circuits, 4th edition, McGraw-Hill Inc., 2004, by Shyammohan S. Palli.
Analysis of Power System Networks, by Dr. L. P. Singh, PhD Thesis, IIT Kanpur.
Fundamentals of Electric Drives, second edition, Narosa Publishing House, New Delhi, 2002, by Gopal K. Dubey.





























ANALYSIS OF FERRORESONANCE AND ITS MITIGATION


B. Ravi Chandra Raju B. Karthik Charan
III EEE III EEE
04711A0249 04711A0218
Email: raviraj_249@yahoo.co.in Email: charkart@yahoo.co.in

NARAYANA ENGINEERING COLLEGE, Nellore.


ABSTRACT
Ferroresonance is a non-linear resonance phenomenon that can affect power networks. The abnormal rates of harmonics and transient or steady state overvoltages and overcurrents that it causes are often dangerous for electrical equipment. Some unexplained breakdowns can be ascribed to this rare, non-linear phenomenon. Ferroresonance is a phenomenon known to cause high currents and voltages in transformers which can result in permanent damage. For this reason, suppression of ferroresonance is needed. Capacitor voltage transformers are commonly used to step down voltages to a level used for protection and control systems. These transformers are equipped with ferroresonance suppression circuits which consist of a saturable reactor in series with a resistor. This circuit will cause constant losses in the transformer during normal operation and increase overall energy costs. The purpose of this paper is to design and construct a feedback control system to detect and dampen ferroresonant oscillations. The design is capable of controlling the amount of resistance inserted by implementing a power electronic circuit. This design maximizes damping and controls power losses.


INTRODUCTION


The term Ferro-resonance, which
appeared in the literature for the first time in
1920, refers to all oscillating phenomena








occurring in an electric circuit which must
contain at least:
a non-linear inductance
(ferromagnetic and saturable),
a capacitor,
a voltage source (generally
sinusoidal),
low losses.
Power networks are made up of a large number of saturable inductances (power transformers, voltage measurement inductive transformers (VT), shunt reactors), as well as capacitors (cables, long lines, capacitor voltage transformers, series or shunt capacitor banks, voltage grading capacitors in circuit-breakers, metalclad substations). They thus present scenarios under which ferroresonance can occur. The main feature of this phenomenon is that more than one stable steady state response is possible for the same set of network parameters. Transients, lightning overvoltages, energizing or deenergizing transformers or loads, occurrence or removal of faults, live works, etc. may initiate ferroresonance. The response can suddenly jump from one normal steady state response (sinusoidal at the same frequency as the source) to another ferroresonant steady state response characterised by high overvoltages and harmonic levels which can lead to serious damage to the equipment. A practical example of such behaviour (surprising for the uninitiated) is the deenergization of a voltage transformer by the opening of a circuit-breaker. As the transformer is still fed through grading capacitors across the circuit-breaker, this may lead either to zero voltage at the transformer terminals or to a permanent highly distorted voltage of an amplitude well over normal voltage. To prevent the
consequences of ferroresonance (untimely
tripping of protection devices, destruction of
equipment such as power transformers or
voltage transformers, production losses,...),it
is necessary to:
understand the phenomenon,
predict it,
identify it and
Avoid or eliminate it.
Little is known about this complex phenomenon as it is rare and cannot be analysed or predicted by the computation methods (based on linear approximation) normally used by electrical engineers. This lack of knowledge means that it is readily considered responsible for a number of unexplained destructions or malfunctions of equipment. A distinction drawn between resonance and ferroresonance will highlight the specific and sometimes disconcerting characteristics of ferroresonance. Practical examples of electrical power system configurations at risk from ferroresonance are used to identify and emphasise the variety of potentially dangerous configurations. Well-informed system designers avoid putting themselves in such risky situations. A predictive study should be undertaken by specialists should doubts persist concerning borderline but unavoidable configurations. Appropriate numerical analysis tools enable prediction and evaluation of the risk of ferroresonance in a power system for all possible values of the system's parameters in normal and downgraded conditions. Practical solutions are available to prevent or provide protection against ferroresonance.



ANALYSIS OF A BASIC
FERRORESONANT CIRCUIT



A basic series ferroresonant circuit is shown above. It includes a series connection of a voltage source, a relatively small circuit resistance R, a saturable magnetic core inductor comprising the unsaturated inductance LM,US (which becomes LM,S above saturation) and the leakage inductance LL, and a capacitance C. The inductor in the circuit typically represents either a power transformer or a voltage transformer rated for continuous voltage 5% to maybe 15% (or higher with some VT applications) above nominal operating voltage, depending on how it is bought and applied, so the inductor is somewhat close to entering saturation. Also, consider that there is no transformer secondary load or primary shunt loading. If only fundamental frequency voltages and currents are involved, the circuit can be converted over to the basic impedance concepts of XL,Total (referred to as XM for brevity) and XC. The resultant highly simplified circuit follows (Basic Circuit for Analysis).
Numerical analysis using special-purpose software or more generic ATP/EMTP-type software is the method required to truly model ferroresonance. However, there may be too many unknowns in a partially saturated multi-phase core transformer (e.g., the steel B-H curve response over a large non-uniform core area, with multiple windings and an oddly distributed capacitance) to exactly model what occurs during ferroresonance. The non-linearity of minor loads that may be present further complicates the analysis. A common measure of excitation is flux density vs. magnetic field strength (B vs. H). However, to make the data useful for most power system studies, one needs to have excitation in terms of V and I, which is then reduced further to V and I at the system fundamental frequency. An interesting method of converting between B-H curves and the V vs. I curve is shown.








The graph is fairly involved but self-
explanatory. Start at a given applied voltage,
v(t). Trace upward to find the corresponding
flux level, φ(t). Note that the voltage is the
derivative of the flux (v = dφ/dt); if there is
no voltage or flux offset from zero, and the
input voltage is a clean sine wave, the flux is
a 90° phase-shifted reproduction of the
voltage sine wave. Thereafter, trace across
to find the corresponding B, then H, and
then the associated current. Finally, plot the
current against the voltage curve to see I vs.
V. Note that the non-linear B-H curve and its
hysteresis strongly distort the excitation
current. It can be seen from this example
that a notable source of error is introduced
by simply modelling the core as an
equivalent XM. The core effective impedance
is fairly complex and has a high harmonic
content.
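As a rough illustration of this graphical construction, the sketch below (a minimal Python example with an assumed single-valued B-H curve, i.e. hysteresis neglected, and assumed core and winding data) derives the excitation current from a sinusoidal applied voltage: the flux is obtained by integrating the voltage, converted to flux density B, mapped through the B-H characteristic to H, and scaled to current.

```python
import numpy as np

# Assumed (illustrative) core and winding data -- not taken from the text above.
N = 1000           # turns
A = 0.01           # core cross-section, m^2
l_core = 1.0       # mean magnetic path length, m
Vrms, f = 230.0, 50.0

t = np.linspace(0.0, 2.0 / f, 4000)                  # two cycles
v = np.sqrt(2) * Vrms * np.sin(2 * np.pi * f * t)

# v = N * dphi/dt  ->  phi = (1/N) * integral(v dt); remove the start-up
# offset so the steady-state flux is a 90-degree-shifted sine, as in the text.
phi = np.cumsum(v) * (t[1] - t[0]) / N
phi -= phi.mean()
B = phi / A

# Single-valued saturating B(H) characteristic, written here as H(B).
def H_of_B(B, B_sat=1.6, H_knee=100.0):
    return H_knee * np.tan(np.clip(B / B_sat, -0.999, 0.999) * np.pi / 2)

H = H_of_B(B)
i_exc = H * l_core / N      # Ampere's law: N*i = H*l

# Plotting i_exc against v (or against time) shows the peaky, harmonic-rich
# excitation current that a single linear XM cannot represent.
```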

DIAGNOSIS ELEMENTS
Ferroresonance is frequently accompanied by
some of the symptoms described below:
- high permanent overvoltages of differential mode (phase-to-phase) and/or common mode (phase-to-earth),
- high permanent overcurrents,
- high permanent distortion of voltage and current waveforms,
- displacement of the neutral point voltage,
- transformer heating (in no-load operation),
- continuous, excessively loud noise in transformers and reactances,
- damage of electrical equipment (capacitor banks, VTs, CVTs, ...) due to thermal effect or insulation breakdown; a characteristic symptom of VT destruction by ferroresonance is a destroyed primary winding and an intact secondary winding,
- apparently untimely tripping of protection devices.
Some of these symptoms are not specific to the
ferroresonance phenomenon. For example,
permanent displacement of the neutral point of
an unearthed neutral system may be the
consequence of a single phase-to-earth fault. An
initial diagnosis is simplified by comparing the
curves of any recordings taken with the typical
ferroresonance waveforms described above.
Faced with the difficulty of diagnosis (no
recordings, a number of possible interpretations
of the symptoms), the first reflex is to analyse the
system configuration while the symptoms are
present, together with the events preceding them
(transformer energizing, industrial-process-
specific operating phase, load rejection...) which
might initiate the phenomenon. The next step is
to determine whether the three conditions
necessary (but not sufficient) for ferroresonance
are fulfilled:
- simultaneous presence of capacitances with non-linear inductances,
- existence in the system of at least one point whose potential is not fixed (isolated neutral, single fuse blowing, single-phase switching...),
- lightly loaded system components (unloaded power or instrument voltage transformers...) or low short-circuit power sources (generators).
If any one of these conditions is not verified,
ferroresonance is highly unlikely. Otherwise
more extensive investigations are required. A
predictive study may be carried out by
specialists, which will require implementation of
methods defined later on in this document. A
comparison with examples of typical power
system situations favorable to ferroresonance
may simplify identification of configurations at
risk.

Typical Circuit Configurations That Have
A Ferroresonance Risk.



Due to the multitude of various sources of
capacitances and non linear inductances in
a real power network and the wide range of
operating conditions, configurations under
which ferroresonance can occur are
endless. Experience has, however, made it
possible to list the main typical
configurations that may lead to
ferroresonance. A few standard examples
are given below.

Voltage transformer energized through
grading capacitance of one (or more)
open circuit-breaker(s).
In EHV, certain switching operations
(padlocking a bus coupler or switched bus
bar circuit-breaker, removal of a fault on a
bus bar section...) can drive voltage
transformers (VT) connected between
phases and earth into ferroresonance.
These configurations can be illustrated by
the circuit in figure 1. Opening of circuit-
breaker D initiates the phenomenon by
causing capacitance C to discharge through
the VT, which is then driven into saturation.
The source delivers enough energy through
the circuit-breaker grading capacitance Cd
to maintain the oscillation. Capacitance C
corresponds to all the capacitances to earth
of the VT and of the connection supplied by
means of the grading capacitances of the
open circuit-breaker(s). Ferroresonance is of
the sub-harmonic type.


Figure-1: ferroresonance of a
voltage transformer connected in series with
an open circuit-breaker.



Voltage transformers (VT) connected to
an isolated neutral system
This earthing system may be deliberately
chosen, or may result from the coupling of an
isolated-neutral emergency source or from a
loss of system earthing. Transient
overvoltages or overcurrents due to switching
operations on the power system (load
rejection, fault clearing...) or to an earth
fault can initiate the phenomenon by driving
into saturation the iron core of one or two of
the VTs of the parallel ferroresonant circuit
in figure 2. Ferroresonance is then observed
both on the phase-to-earth voltages and on
the neutral point voltage (VN). The neutral
point is displaced and the potential of one or
two phases rises with respect to earth, which
may give the impression of a single phase-
to-earth fault in the system. Overvoltage
values may exceed the normal phase-to-phase
voltage under steady-state conditions and
cause dielectric destruction of the electrical
equipment.


Figure 2: ferroresonance of a VT between phase and
earth in an isolated neutral system.


Transformer accidentally energized in
only one or two phases
A few examples of configurations at risk are
given in figure 3. These configurations can
occur when one or two of the source phases
are lost while the transformer is unloaded or
lightly loaded, as a result of a fuse blowing
on an MV power system, of conductor
rupture, or of live works, for example when
commissioning a remote-controlled breaking
cubicle (ACT). The capacitance can be that
of an underground cable or of an overhead
line supplying a transformer whose primary
windings are wye-connected with isolated or
earthed neutral, or delta-connected. For
example, the series ferroresonant circuit is
made up of the series connection of the
phase-to-earth capacitance (between circuit-
breaker and transformer) of the open phase
and the magnetizing impedance of the
transformer. The modes are fundamental,
sub-harmonic or chaotic. The phase-to-phase
and phase-to-earth capacitances, the primary
and secondary winding connections, the core
configuration (three single-phase, free flux or
forced flux), the voltage source system
neutral earthing (solidly earthed, earthed,
isolated) and the supply mode (one or two
phases energized) are all factors involved in
the establishment of a given state. An
isolated primary neutral is more susceptible
to ferroresonance. To avoid such risks, the
use of multi-pole breaking switchgear is
recommended.

Figure 3: examples of unbalanced systems
Voltage transformers and HV/MV
transformers with isolated neutral
Ferroresonance may occur when the HV
and MV neutrals are isolated and unloaded
VTs are connected on the MV side between
phase and earth (see fig. 4a). When an earth
fault occurs on the HV side upstream from
the substation transformer, the HV neutral
rises to a high potential. By capacitive effect
between the primary and secondary,
overvoltages appear on the MV side and
may trigger ferroresonance of the circuit
made up of the voltage source E0, the
capacitances Ce and C0 and the
magnetizing inductance of a VT (see fig. 4b).
Once the HV fault has been removed, the
voltage of the HV neutral, due to a natural
unbalance of the system, may be enough to
sustain the phenomenon. Ferroresonance is
of the fundamental type.

Figure 4: ferroresonance of a VT between
phase and earth with an isolated-neutral
source transformer.

Ferroresonance Suppression Circuit for a Single
Phase Voltage Transformer


Presently, Capacitively Coupled Voltage
Transformers (CCVTs) with built in
ferroresonance suppression circuits are the most
widely used transformers in industry to suppress
ferroresonance. The problem with CCVTs is
that they incur constant losses under normal
system operation. The motivation behind this
project is to produce a detection and mitigation
circuit which will incur negligible losses as well
as detect and dampen ferroresonance. The
mitigation circuit employs a feedback control
system which can determine an optimal
operating point. This operating point maintains
suppression while minimizing the power losses
caused in the suppression circuit. The basic
design is a back-to-back gated thyristor circuit
connected with a series resistance that is inserted
in the transformer secondary. By calculation of a
flux level from the core, the detection circuit is
able to detect when the core is in over-flux
(indicating ferroresonance). Once detected, the
power electronic circuit can be used as a variable
resistance to be inserted into the circuit by
varying the firing angle of the thyristors. This
resistance dampens out the ferroresonance. A
common technique for suppressing
ferroresonance is inserting a resistance into the
circuit. A CCVT is used to transform a high
voltage input into a low-voltage output where
monitoring devices and protection relays can
operate. In case ferroresonance occurs, a
saturable reactor is connected into the secondary
of the CCVT for protection.
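A minimal sketch of the flux-based detection idea described above, assuming the core flux is estimated by numerically integrating the VT secondary voltage and compared against a threshold derived from the rated flux; the sampling rate, winding data and over-flux margin below are illustrative assumptions, not values from the project.

```python
import numpy as np

def core_flux_estimate(v_secondary, dt, turns):
    """Estimate core flux (Wb) by integrating the secondary voltage.

    v = N * dphi/dt, so phi ~= (1/N) * cumulative integral of v.
    The running mean is removed to suppress integrator drift.
    """
    phi = np.cumsum(v_secondary) * dt / turns
    return phi - phi.mean()

def overflux_detected(phi, phi_rated, margin=1.3):
    """Flag possible ferroresonance when the peak flux exceeds margin * rated flux."""
    return np.max(np.abs(phi)) > margin * phi_rated

# Illustrative use with an assumed 60 Hz, 115 V secondary and a 1000-turn winding.
fs, f, Vrms, N = 10_000, 60.0, 115.0, 1000
t = np.arange(0.0, 0.5, 1 / fs)
v = np.sqrt(2) * Vrms * np.sin(2 * np.pi * f * t)

phi_rated = np.sqrt(2) * Vrms / (2 * np.pi * f * N)   # peak flux at rated voltage
phi = core_flux_estimate(v, 1 / fs, N)
print(overflux_detected(phi, phi_rated))  # False for a clean rated-voltage waveform
```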

A saturable reactor is a magnetic circuit element
consisting of a single coil which is wound
around a magnetic core. The presence of a
magnetic core drastically alters the behavior of
the coil by increasing the magnetic flux and
confining most of the flux to the core. Once
the core saturates, additional increases in
current through the magnetizing winding will
not result in further increases of magnetic
flux. If an inductor has a saturated core, no
further magnetic flux results from further
increases in current, and so no voltage is
induced in opposition to the change in
current. In other words, an inductor loses its
inductance (its ability to oppose changes in
current) when its core becomes magnetically
saturated. In a CCVT, the saturable reactor
behaves under normal voltage like an open
switch, drawing virtually no current. As soon
as the voltage rises beyond the knee-point of
the saturable reactor, it turns on a damping
resistor, effectively mitigating the voltage as
seen in Figure 1-3. In this example, the knee
voltage of the saturable reactor is 1.5 p.u.
while the CCVT itself has a knee voltage of
3 p.u. If ferroresonance occurs, the voltage
will saturate at 1.5 p.u. and thus limit the
current. This prevents damaging currents
from entering the CCVT windings and keeps
it out of ferroresonance.



Figure 1 -3 (Capacitive Coupled Voltage
Transformer)
The saturable reactor has the ability to eliminate
any sub-harmonic ferroresonant states. The
saturable reactor model is what is currently used
at Dorsey, but it is not without its problems: a
constant current flows through the damping
resistor at all times, so a loss is always
occurring in the transformer. Another method of
suppression is to use a switched-in resistive
circuit. A switched-in resistance may be used to
dampen ferroresonance after it has occurred;
switching a resistance into the circuit changes
the characteristics of the transformer
ferroresonant circuit and dampens the
oscillations. This resistance can either be
switched in automatically by switchgear, such as
a relay device, or it can be switched in manually
by field staff. These methods are undesirable
because switchgear requires mechanical moving
parts, and such devices may become unreliable
with aging or neglected maintenance. Also, their
reaction to ferroresonance is not quick. A solid-
state switching mechanism would therefore be
desirable: there would be no losses during
normal operation and the method of switching
would be highly reliable.


POWER ELECTRONIC SUPPRESSION
CIRCUITS

A damping suppression circuit with power
electronic components can be used to switch in a
resistance, with greater reliability and no
dependence on manual operations. The power
electronic ferroresonance suppression circuit
(F.S.C.) depends on the use of thyristors.

The basic design of the F.S.C. is two back-to-back
thyristors (SCRs) in series with a damping
resistance; Figure 3-1 shows this basic design.
Both thyristors in the schematic share the same
gate signal from a pulse generator. This
configuration of paralleled thyristors with a joint
gate signal is commonly called a triac. If a
source is connected across the terminals of the
F.S.C., the current passing through the damping
resistor can be controlled. Current conduction is
determined by the firing angle, which in turn is
controlled by the input to the gates of the
thyristors.
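As a rough numerical sketch of how the firing angle sets the current through (and therefore the power dissipated in) the damping resistor, the following assumes a sinusoidal voltage across the F.S.C. terminals and a purely resistive damping branch; the 115 V winding voltage and 10-ohm resistor are illustrative assumptions only.

```python
import numpy as np

def rms_voltage_fraction(alpha_deg, n=100_000):
    """RMS voltage across a resistive load behind a triac fired at angle
    alpha (degrees after each zero crossing), as a fraction of the RMS value
    with full conduction. Evaluated numerically over one half cycle."""
    alpha = np.radians(alpha_deg)
    theta = np.linspace(0.0, np.pi, n)
    v = np.sin(theta)
    v[theta < alpha] = 0.0                     # thyristor blocks until fired
    return np.sqrt(np.mean(v**2) / np.mean(np.sin(theta)**2))

# Illustrative numbers: 115 V RMS across the F.S.C., 10-ohm damping resistor.
Vrms, R = 115.0, 10.0
for alpha in (0, 60, 90, 120, 150):
    P = (rms_voltage_fraction(alpha) * Vrms) ** 2 / R
    print(f"firing angle {alpha:3d} deg -> ~{P:6.0f} W dissipated in the damping resistor")
```

Retarding the firing angle therefore acts like increasing the effective series resistance, which is how a feedback control can trade suppression effectiveness against losses, as described above.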
CONCLUSIONS:
Though ferroresonance may be complex and
hard to analyze, it need not be mysterious.
Ferroresonance has been shown to be the result
of specific circuit conditions, and can be induced
predictably in the laboratory. Power system
ferroresonance can lead to very dangerous and
damaging overvoltages, but the condition can be
mitigated or avoided by careful system design.

It seems a good idea at this point to briefly
remind readers of the events initiating
ferroresonance and of the configurations at
risk.
A few examples of phenomena likely to cause ferroresonance:
- capacitor switching,
- insulation faults,
- lightning,
- transformer switching.
A few configurations at risk deserving particular attention:
- voltage transformer (VT) between phase and earth of an isolated neutral power system,
- long and/or capacitive cables or lines supplying a transformer,
- fuse protection whose blowing results in non-multi-pole breaking,
- unloaded or lightly loaded voltage or power transformer,
- voltage transformer working at saturation limit,
- over-powerful voltage transformer.



REFERENCES:
1. Ralph H. Hopkinson, Ferroresonance During Single-Phase Switching of 3-Phase Distribution Banks, IEEE Transactions on Power Apparatus and Systems (PAS), p289-293, PAS-84, April 1965; discussion June 1965, p514-517.
2. Ralph H. Hopkinson, Ferroresonant Overvoltage Control Based on TNA Tests on Three Phase Delta-Wye Transformer Banks, IEEE Trans. on PAS, p1258-1263, PAS-86, Oct. 1967.
3. Ralph H. Hopkinson, Ferroresonant Overvoltage Control Based on TNA Tests on Three-Phase Wye-Delta Transformer Banks, IEEE Trans. on PAS, p352-361, PAS-87, Feb. 1968.
(Note: Ralph Hopkinson wrote other material on Wye-g/Wye-g transformers, but as best I can tell this material remained as internal GE documents. I have not found where they were published for wide distribution.)
4. S. Prusty, M. Panda, Predetermination of lateral length to prevent overvoltage problems due to open conductors in three-phase systems, IEE Proceedings, p49, Vol 132, Pt. C, No. 1, Jan. 1985.
5. Philippe Ferracci, Ferroresonance, Group Schneider Cahier Technique Series, No 90, http://www.schneider-electric.com/en/expert/e2a.htm
6. George E. Kelley, The Ferroresonant Circuit, AIEE Transactions on PAS, p843-848, PAS-78, Pt III, Jan 1959; discussion p1061.
7. P. E. Hendrickson, I. B. Johnson, N. R. Schultz, Abnormal Voltage Conditions Produced by Open Conductors on 3-Phase Circuits Using Shunt Capacitors, AIEE Transactions on PAS, p1183-1193, PAS-72, Pt III, Dec. 1953.
8. D. R. Smith, S. R. Swanson, J. D. Borst, Overvoltages with Remotely Switched Cable Fed Grounded Wye-Wye Transformers, IEEE Transactions on PAS, p1843-1853, PAS-94, Sept. 1975.
9. R. A. Walling, K. D. Parker, T. M. Compton, L. E. Zimmerman, Ferroresonant Overvoltages in Grounded Wye-Wye Padmount Transformers with Low-Loss Silicon Steel Cores, IEEE Transactions on Power Delivery, p1647-1660, Vol. 8, No. 3, July 1993.
10. Elmo D. Price, Voltage Transformer Ferroresonance in Transmission Substations, Texas A&M 30th Annual Conference for Protective Relay Engineers, April 25-27, 1977.
11. Harold Peterson, book: Transients in Power Systems, Chapter 9: Overvoltages Caused by Open Conductors, John Wiley and Sons, 1951; republished with slight corrections by Dover Publications for GE, 1966.
12. Reinhold Gruenberg, book: Transient Performance of Electric Power Systems, Chapter 48, Saturation of Iron in Oscillatory Circuits, McGraw Hill, 1950; reprinted 169/1970 by MIT Press, ISBN 0 262 180367.
13. M. R. Raven, Chair, IEEE Working Group, Slow Transients Task Force of IEEE Working Group on Modeling and Analysis of System Transients Using Digital Programs, Modelling and Analysis Guidelines for Slow Transients - Part III: The Study of Ferroresonance, IEEE Transactions on Power Delivery, p255-265, Vol 15, No. 1, Jan. 2000; discussion in Trans. on Power Del., Vol 18, No 2, April 2003.
14. Frank S. Young, Ronald L. Schmid, Petter I. Fergestad, "A Laboratory Investigation of Ferroresonance in Cable Connected Transformers," IEEE Transactions on PAS, p1240-1249, PAS-87, May 1968.
15. Eugene C. Lister, Ferroresonance on Rural Distribution Systems, IEEE Transactions on Industry Applications, p105-111, Vol IA-9, No. 1, Jan/Feb 1973.




Application of Particle Swarm Optimization for Economic Dispatch Problem

M.Rajeswari
S.Muthamil Selvan

ABSTRACT
Economic Load Dispatch (ELD) is the
scheduling of generators so as to minimize
the total operating cost subject to equality
and inequality constraints. The transmission
line loss is also to be kept as low as possible,
so the problem is one of multi-objective
optimization. In this paper, economic
operation is solved for a 5-generator system
with non-monotonically increasing cost
functions. The generators are interconnected
through lossy transmission lines. Numerical
results indicate that the Particle Swarm
Optimization (PSO) technique produces
optimal results for the 5-generator system
more effectively than the conventional
Lambda iteration technique. The comparison
of test results between the conventional
technique and the PSO technique is made by
solving the Economic Load Dispatch problem
for the standard IEEE 14-bus system. PSO
executes in the least time while yielding
near-optimal results, and this is the major
merit of the PSO technique over other
evolutionary techniques.
Keywords: Economic Load Dispatch (ELD), Particle Swarm Optimization (PSO)


Economic Load Dispatch (ELD) is an
important daily optimization task in
interconnected power system operation.
It deals with how the real power output of
each thermal generating unit of a large
power system is selected to meet a given
system load while maintaining the generation
capacity inequality constraints and
minimizing operating cost and
transmission line losses. This is one of
the important multi-objective
combinatorial constrained optimization
problems in a power system.
M.Rajeswari Lecturer in the EEE Dept.
of KLN College of Engineering,
Pottapalayam. (Madurai.) T.N

S.Muthamil Selvan is Under Graduate
Student in EEE Dept at Sasurie College
of Engineering, Vjiayamangalam, Erode.










NOTATION
a_i, b_i, c_i : cost coefficients of the i-th unit
B_ij : transmission loss coefficients
|V_i| : voltage of the i-th bus
|V_j| : voltage of the j-th bus
cos φ_i : power factor of the i-th bus
cos φ_j : power factor of the j-th bus
α_bi, α_bj : current distribution factors of branch b with respect to buses i and j
R_b : resistance of branch b
CF_i : cost function of the i-th generator
P_D : total load
P_L : total transmission loss
P_gi : active power output of the i-th unit
P_gimax : maximum generation of the i-th unit
P_gimin : minimum generation of the i-th unit

INTRODUCTION


In numerical methods such as the Lambda-iteration
method and gradient methods for the solution of
ELD problems, an essential assumption is that the
incremental cost curves of the units are
monotonically increasing piecewise-linear
functions, but this assumption is usually not valid
for practical systems. Traditionally, in the economic
dispatch problem, the cost function of each
generator has been approximately represented by a
single quadratic cost function.

Methods such as the Genetic Algorithm,
Artificial Neural Network, Hopfield Neural
Network and Fuzzy System Control have also
been proposed to solve ELD. Normally the
assumed values of Lambda are randomly
generated and the error limits are then
verified; these limits should be as small as
possible. Moreover, there is no bound on the
number of iterations needed to reach the
solution. By using Particle Swarm
Optimization these problems are overcome:
the PSO generates an n-member population
whose size depends on the swarm chosen for
the problem.

ECONOMIC LOAD DISPATCH
(ELD) PROBLEM

The objective of ED is to determine the
generation levels for all units, which
minimize the total fuel cost while
satisfying a set of constraints. It can be
formulated as follows.

a) Fuel Cost
The cost function of the ELD problem is defined as follows:

CF = Σ_{i=1}^{n} CF_i,  n = 1, 2, ..., 5     (1)

CF_i = a_i + b_i P_gi + c_i P_gi^2,  i = 1, 2, ..., n     (2)

where a_i is the fixed cost, b_i the semi-fixed cost, c_i the running cost and P_gi the active power output of the i-th unit.

b) Power Balance Constraint
The total power generation has to be equal to the sum of the load demand and the transmission loss:

Σ_i P_gi = P_D + P_L     (3)

P_L = Σ_i Σ_j P_gi B_ij P_gj     (4)

The loss coefficients can be determined from

B_ij = ( Σ_{b=1}^{NB} α_bi α_bj R_b ) / ( |V_i| |V_j| cos φ_i cos φ_j ),   i = 1, 2, ..., n;  j = 1, 2, ..., n     (5)

where NB is the number of branches in the network and α_bi, α_bj are the current distribution factors of branch b.

c) Capacity Limits Constraint
The power generated by each generator has its own lower and upper limits, which can be expressed as:

P_gimin ≤ P_gi ≤ P_gimax     (6)
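As a compact illustration of how equations (1)-(6) can be evaluated for a candidate dispatch, the Python sketch below uses made-up two-unit cost coefficients, B-coefficients and limits (placeholder values only, not the paper's 5-unit data).

```python
import numpy as np

# Hypothetical two-unit data (placeholder values, not from the paper)
a = np.array([500.0, 400.0])          # fixed cost a_i
b = np.array([5.3, 5.5])              # semi-fixed cost b_i
c = np.array([0.004, 0.006])          # running cost c_i
B = np.array([[3e-5, 1e-5],           # loss coefficients B_ij
              [1e-5, 4e-5]])
Pmin = np.array([50.0, 50.0])
Pmax = np.array([1000.0, 900.0])

def total_cost(Pg):                   # eq. (1)-(2): CF = sum(a_i + b_i*Pgi + c_i*Pgi^2)
    return np.sum(a + b * Pg + c * Pg**2)

def transmission_loss(Pg):            # eq. (4): P_L = sum_i sum_j Pgi * Bij * Pgj
    return Pg @ B @ Pg

def feasible(Pg, Pd, tol=1e-3):       # eq. (3) power balance and eq. (6) capacity limits
    balance = abs(np.sum(Pg) - (Pd + transmission_loss(Pg))) < tol
    return balance and np.all((Pg >= Pmin) & (Pg <= Pmax))

Pg = np.array([150.0, 100.0])
PL = transmission_loss(Pg)
print(total_cost(Pg), PL, feasible(Pg, Pd=np.sum(Pg) - PL))
```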

SIMPLE ALGORITHM OF ELD

Step 1: Pick a set of starting values for P_1, P_2, ..., P_n that sum to the load.
Step 2: Calculate the incremental losses ∂P_loss/∂P_i as well as the total losses P_loss. The incremental losses and total losses will be considered constant until we return to Step 2.
Step 3: Calculate the value of λ that causes P_1, P_2, ..., P_n to sum to the total load plus losses.
Step 4: Compare P_1, P_2, ..., P_n from Step 3 to the values used at the start of Step 2. If there is no significant change in any of the values, go to Step 5; otherwise go back to Step 2.
Step 5: Done.
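A minimal Python sketch of these steps, under simplifying assumptions: quadratic cost functions, transmission losses neglected (so Step 2 is trivial), and bisection on λ to satisfy the load balance. The three-unit coefficients and limits are illustrative placeholders, not the paper's data.

```python
import numpy as np

# Hypothetical quadratic cost data CF_i = a_i + b_i*P + c_i*P^2 (illustrative only)
b = np.array([5.3, 5.5, 5.8])
c = np.array([0.004, 0.006, 0.009])
Pmin = np.array([50.0, 50.0, 50.0])
Pmax = np.array([500.0, 400.0, 300.0])

def dispatch_for_lambda(lmbda):
    """Unit outputs at equal incremental cost: d(CF_i)/dP = b_i + 2*c_i*P = lambda."""
    return np.clip((lmbda - b) / (2 * c), Pmin, Pmax)

def lambda_iteration(Pd, tol=1e-4):
    """Bisect lambda until total generation matches the demand (losses neglected)."""
    lo, hi = np.min(b), np.max(b + 2 * c * Pmax)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.sum(dispatch_for_lambda(mid)) < Pd:
            lo = mid          # not enough generation -> raise lambda
        else:
            hi = mid
    return dispatch_for_lambda(0.5 * (lo + hi))

Pg = lambda_iteration(Pd=700.0)
print(Pg, np.sum(Pg))         # outputs sum (approximately) to the 700 MW demand
```

With losses included, the same loop would recompute P_loss and the incremental losses each time it returns to Step 2, exactly as the algorithm above describes.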


PARTICLE SWARM
OPTIMIZATION AN OVERVIEW

The term Particle Swarm Optimization
(PSO) refers to a relatively new family
of algorithms that may be used to find
optimal solutions to numerical and
qualitative problems. It is easily
implemented in most programming
languages and has proven both very
effective and quick when applied to a
diverse set of optimization problems.

PSO is a population-based evolutionary
computation technique motivated by the
evolution of nature, i.e., the behavior of bird
flocking. In PSO each solution is a
particle in the search space. Each
particle adjusts its flying according to its
own flying experience and its
companions' flying experience.

All the particles have fitness values,
which are evaluated by the fitness
function to be optimized, and have
velocities, which direct the flying of the
particles. PSO is initialized with a group of
random particles and searches for optima by
updating generations.

PSO is a flexible, robust, population-based
stochastic search/optimization
algorithm with implicit parallelism,
which can easily handle non-
differentiable objective functions, unlike
traditional optimization methods. Each
particle's present position is obtained from
its previous position and its present
velocity.

As per the problem-solving method, the
dimension and swarm size of the problem
are fetched. Each particle moves in a
D-dimensional space. The initial population,
called the global population, is generated as
follows:

Population = rand(D, PSize) * (UB - LB) + LB     (7)

The best previous position of the i-th
particle is recorded and represented as
pbest. The index of the best particle
among all the particles in the population is
represented as gbest. The rate of position
change (velocity) for particle i is represented
as V_i. After finding the two best values, the
particle updates its velocity and position with
the following equations:

Velocity = W * Velocity + C1 * rand() .* (pbest - Population) + C2 * rand() .* (gbest - Population)     (8)

where W is the inertia weight. The modified
position of each particle is then calculated
using the present velocity and the distance
from pbest(i) to gbest as:

Present[] = Present[] + Velocity     (9)

PSO PARAMETER CONTROL

There are not many parameters that need to
be tuned in PSO. The following parameters
are subject to variation and their typical
values are given below.

The number of particles: the typical range is
20-40. For many problems 10 particles are
enough to get good results; for some difficult
or special problems 100-200 particles can be
used.

Dimension of particles: this varies with the
problem to be optimized.

Range of particles: this also varies with the
problem to be optimized.

Vmax: it determines the maximum change
one particle can take during an iteration
(here Vmax = 20).

Learning factors: C1 and C2 are usually 2,
although other settings can also be used
(here C1 = C2 = 2).

Inertia weight: suitable selection of the
inertia weight provides a balance between
global and local exploration abilities and
thus requires fewer iterations on average to
find the optimum (Wmin = 0.4 and
Wmax = 0.9):

W = Wmax - [(Wmax - Wmin) / iter_max] * iter     (10)

The stop condition: the maximum number of
iterations the PSO executes and the
minimum error requirement.

PSO ALGORITHM

Step 1: The initial global population is generated within the limits using equation (7).
Step 2: pbest is set to each initial searching point; the best evaluated value among the pbests is set as gbest.
Step 3: New velocities are calculated using equation (8).
Step 4: If velocity < Vmin then velocity = Vmin; if velocity > Vmax then velocity = Vmax.
Step 5: New searching points are calculated using equation (9).
Step 6: Check the capacity limits of each unit.
Step 7: Evaluate the fitness values for the new searching points.
Step 8: If the error is within the limit, stop the program; otherwise return to Step 3.
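The sketch below condenses Steps 1-8 into a small Python loop, using equations (7)-(10) and a quadratic penalty for the power-balance constraint; the three-unit cost data, demand, swarm size and penalty weight are illustrative assumptions (transmission losses are neglected here for brevity).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-unit data and demand (placeholder values, not the paper's system)
a = np.array([500.0, 400.0, 300.0])
b = np.array([5.3, 5.5, 5.8])
c = np.array([0.004, 0.006, 0.009])
LB = np.array([50.0, 50.0, 50.0])
UB = np.array([500.0, 400.0, 300.0])
Pd, penalty = 700.0, 1000.0

def fitness(P):
    """Fuel cost (eq. 1-2) plus a quadratic penalty for the balance constraint (eq. 3)."""
    cost = np.sum(a + b * P + c * P**2, axis=1)
    return cost + penalty * (np.sum(P, axis=1) - Pd) ** 2

D, size, iters = 3, 50, 200
C1 = C2 = 2.0
Wmax, Wmin, Vmax = 0.9, 0.4, 20.0

pop = rng.random((size, D)) * (UB - LB) + LB            # Step 1 / eq. (7)
vel = np.zeros((size, D))
pbest, pbest_fit = pop.copy(), fitness(pop)             # Step 2
gbest = pbest[np.argmin(pbest_fit)]

for it in range(iters):
    w = Wmax - (Wmax - Wmin) / iters * it               # eq. (10)
    vel = (w * vel
           + C1 * rng.random((size, D)) * (pbest - pop)
           + C2 * rng.random((size, D)) * (gbest - pop))    # Step 3 / eq. (8)
    vel = np.clip(vel, -Vmax, Vmax)                     # Step 4: velocity limits
    pop = np.clip(pop + vel, LB, UB)                    # Step 5 / eq. (9), Step 6: unit limits
    fit = fitness(pop)                                  # Step 7
    better = fit < pbest_fit
    pbest[better], pbest_fit[better] = pop[better], fit[better]
    gbest = pbest[np.argmin(pbest_fit)]

print(gbest, np.sum(gbest))   # near-optimal dispatch, total close to the 700 MW demand
```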

SIMULATION RESULTS AND
COMPARISON

The programs are simulated in
MATLAB7 and tested for 14 bus
system. To compare the solution of the
proposed PSO technique, the same data
used in the conventional Lambda
iteration method was used.

The optimal dispatch with the system demand
rising from 250 MW to 1800 MW is shown in
Table 1, which is obtained by the conventional
Lambda iteration method. The results of the
PSO technique are shown in Table 2.

Table 1: Results of the conventional Lambda iteration method
(number of iterations: 37; time: 61.3615 s)

P_D (MW)   P_1 (MW)   P_2 (MW)   Cost
250        150.532    100.025    2740.62
500        267.990    233.510    4790.34
750        407.483    345.065    6929.41
1000       544.027    457.010    9149.97
1250       685.035    565.833    11481.53
1500       821.028    680.027    13923.94
1750       959.033    791.533    16460.06

Table 2: Results of PSO for the ELD problem
(number of iterations: 24; time: 39.8020 s)

P_D (MW)   P_1 (MW)   P_2 (MW)   Cost
250        150.000    100.000    2736.35
500        266.990    233.010    4778.00
750        406.483    344.065    6911.86
1000       543.527    456.510    9140.78
1250       684.535    565.533    11473.95
1500       821.658    679.027    13919.46
1750       958.533    791.533    16455.10
Parameter selection of PSO
technique
Population Size: 50
Upper Limit : 10
Lower Limit : 07
Acceleration Constant : C1=C2=2

DISCUSSION ON RESULT

It is clear that the solutions obtained by
PSO converge to a high-quality solution in
the early iterations. At the same time, it can
be seen that the solving tendency of PSO is
much faster than that of the conventional
Lambda iteration, because the earlier method
produces variation due to mutation. This
shows that the PSO technique has better
convergence efficiency in solving the
optimization problem.

CONCLUSION

This paper presents the application of
Particle Swarm Optimization to the
Economic Dispatch Problem and compares
its results with the conventional Lambda
iteration method. The proposed EDP is
simulated with both PSO and the
conventional Lambda iteration method,
considering loss coefficients and generation
limits. The PSO has superior features, is
easily implemented and has good
computational efficiency. The results show
that the proposed PSO method is capable of
obtaining high-quality solutions efficiently
for the Economic Dispatch Problem.


REFERENCES

1. L. K. Kirchmayer, Economic Operation of Power Systems, Wiley Eastern Limited, 1958.
2. A. J. Wood and B. F. Wollenberg, Power Generation, Operation and Control, John Wiley & Sons, Inc., 1984.
3. C. L. Wadhwa, Electrical Power Systems, New Age International Publishers, 2002.
4. S. P. Ghoshal and R. Roy, A Comparative Study of Genetic Algorithm based Optimization and Particle Swarm Optimization in Economic Load Dispatch, IEEE Transactions on Power Systems, Vol 87, no 01, Dec 2006, pp 3-9.
5. G. Sathes Kumar and T. Jayabarathi, Particle Swarm Optimization for Economic Dispatch with Piecewise Quadratic Cost Function, IEEE Transactions on Power Systems, Vol 86, March 2006, pp 209-212.
6. Zwe-Lee Gaing, Particle Swarm Optimization to Solving the Economic Dispatch Considering the Generator Constraints, IEEE Transactions on Power Systems, Vol 18, no 3, August 2003, pp 1187-1195.
7. T. Jayabarathi and G. Sadasivam, Evolutionary Programming Based Economic Dispatch for Units with Multiple Fuel Options, European Transactions on Electric Power, Vol 10, no 3, May/June 2000, pp 167-170.


RENEWABLE SOURCE
Applications of solar:
AN OVERVIEW OF FUTURE SOURCE OF ELECTRICITY




COLLEGE OF ENGINEERING
(Autonomous)

Gandhi Institute of Technology and Management

Author Info Sheet:

Author: N. Ramaswamy Reddy
Branch: Electrical and Electronics Engineering
Topic Name: APPLICATIONS OF SOLAR
E-Mail ID ramaswamyneelapu@yahoo.com
Contact Phone No: 9948011515








ABSTRACT
Currently, power generation from
renewable energy sources is of global
significance and will continue to grow
during the coming years. Renewable
energy technologies are receiving
increased attention as an attractive
electricity supply option for meeting
electric utility needs in the 1990s and
beyond.
This paper begins with some general
topics and then presents innovative
concepts and applications: solar water-
heating systems, the use of plastics in
residential solar water-heating systems,
powering air-conditioning systems using
solar energy, the use of flat-plate
collectors for residential and commercial
hot water, ventilation air preheating for
commercial buildings using transpired
air collectors, and concentrating and
evacuated-tube collectors for industrial-
grade hot water and thermally activated
cooling.
This paper also describes the use of solar
energy in the field of pollution control,
which can help save our environment to
some extent, and discusses some
advantages and disadvantages of solar
energy. It also presents an application of
nanotechnology in solar systems.
Introduction
The sun has produced energy for
billions of years. Solar energy is the
solar radiation that reaches the earth.
Solar energy can be converted directly or
indirectly into other forms of energy,
such as heat and electricity. The major
drawbacks (problems, or issues to
overcome) of solar energy are: (1) the
intermittent and variable manner in
which it arrives at the earth's surface
and, (2) the large area required to collect
it at a useful rate. Solar energy is used
for heating water for domestic use, space
heating of buildings, drying agricultural
products, and generating electrical
energy.
In today's climate of growing energy
needs and increasing environmental
concern, alternatives to the use of non-
renewable and polluting fossil fuels have
to be investigated. One such alternative
is solar energy. As pollution increases
and the most accessible coal, gas, and oil
go up in smoke, renewable energy looks
increasingly attractive. It does not cause
acid rain, add to the greenhouse effect,
or share the problems of nuclear power.
Solar energy is quite simply the energy
produced directly by the sun and
collected elsewhere, normally the Earth.
The sun creates its energy through a
thermonuclear process that converts
about 650,000,000 tons of hydrogen to
helium every second. The process
creates heat and electromagnetic
radiation. The heat remains in the sun
and is instrumental in maintaining the
thermonuclear reaction. The
electromagnetic radiation (including
visible light, infra-red light, and ultra-
violet radiation) streams out into space
in all directions. Only a very small
fraction of the total radiation produced
reaches the Earth. The radiation that
does reach the Earth is the indirect
source of nearly every type of energy
used today.

HISTORY
Solar energy history is a very short
history, right? After all, we are only now
trying to find renewable resources, right?
It was in the 1970s that we had the
energy crisis, is that not when we started
looking for an alternative form of
energy? In truth, solar energy history
extends much further back than you
might think!
Way back in the late 1830s, Edmund
Becquerel published his findings on how
light can be turned into energy. Of
course, his findings were not really ever
applied. One might say that the true solar
energy history because in the 1860s
when Augusted Mouchout received
funds from the French monarch to work
on a new energy source. Mouchout
created a motor that ran on solar energy
and even a steam engine that worked off
of solar energy. He even used energy
from the sun to make ice! He did this by
connecting his steam engine to a
refrigeration device.
William Adams used mirrors and the sun
to power a steam engine during the
1870s. His design is still in used today. It
is called the Power Tower Concept. In
1883, Charles Fritz turned the rays of the
sun into electricity. In the later 1880s,
Charles Tellier installed a solar energy
system to heat the water in his house!
It was not until the 1950s that Gerald
Pearson, Calvin Fuller, and Daryl
Chaplin (of Bell Laboratories)
discovered how well silicon worked as a
semi-conductor. Silicon is what solar
cells and solar panels are generally made
of today. Solar energy is now a major
influence, and will only continue to be more
so in the future.
THREE MAIN FORMS OF USING
SOLAR ENERGY:
1. Solar Cells (really called
"photovoltaic" or
"photoelectric" cells)
Principle:
These convert light directly into
electricity. In a sunny climate, you can
get enough power to run a 100W light
bulb from just one square metre of solar
panel.This was originally developed in
order to provide electricity for satellites,
but these days many of us own
calculators powered by solar cells.
2. Solar water heating
Heat from the Sun is used to heat water
in glass panels on your roof. This means
you don't need to use so much gas or
electricity to heat your water at home.
Principle:
Water is pumped through pipes in the
panel. The pipes are painted black, so
they get hot when the Sun shines on
them. This helps out your central heating
system, and cuts your fuel bills.
However, in the UK you must remember
to drain the water out to stop the panels
freezing in the winter. Solar heating is
worthwhile in places like California and
Australia, where you get lots of
sunshine.
3. Solar Furnaces
Principle:
Use a huge array of mirrors to
concentrate the Sun's energy into a small
space and produce very high
temperatures. There's one at Odeillo, in
France, used for scientific experiments.
It can achieve temperatures of over
3,000 degrees Celsius.
TYPES OF SOLAR COLLECTORS:
1. flat plate collector
2. parabolic trough collector
3. mirror strip reflector
4. fresnel lens collector
5. flat plate collector with
adjustable mirrors
6. Compound parabolic
concentrator.
FIG. Concentration of sunlight using
(a) parabolic trough collector (b)
linear Fresnel collector (c) central
receiver system with dish collector
and (d) central receiver system with
distributed reflectors
The flat plate collector is a non-concentrating
type of collector, made in rectangular panels.
These are simple to construct and erect and
can collect and absorb both direct and diffuse
solar radiation; they are consequently
partially effective even on cloudy days when
there is no direct radiation.
In a parabolic trough collector, solar
radiation coming from a particular direction
is collected over the area of the reflecting
surface and is concentrated at the focus of
the parabola; if the reflector is in the form of
a trough with parabolic cross-section, the
solar radiation is concentrated along a line.
The mirror strip reflector is a kind of
focusing collector: a number of plane or
slightly curved mirror strips are mounted on
a flat base, with their angles chosen so that
they reflect solar radiation from a specific
direction on to a focal line.
The Fresnel lens collector is a refraction type
of focusing collector; it utilizes the focusing
effect of a Fresnel lens. Flat plate collectors
with adjustable mirrors and compound
parabolic concentrators are also similar types
of concentrating collectors.
Another type, the paraboloidal collector, is a
point-focusing type of collector in which the
entire radiation received is concentrated at a
point.
Photovoltaic cells:
Many of us use electricity during the day for
fans, air coolers, electric stoves, televisions,
etc. All such equipment consumes a lot of
units and leads to a lot of wastage of
electricity. However, we cannot avoid all
these facilities; instead, we can save
electricity by installing photovoltaic cells on
our homes.
Photovoltaic energy is the conversion of
sunlight into electricity through a
photovoltaic cell, commonly called a
solar cell. A photovoltaic cell is a non
mechanical device usually made from
silicon alloys.
Sunlight is composed of photons, or
particles of solar energy. These photons
contain various amounts of energy
corresponding to the different
wavelengths of the solar spectrum.
When photons strike a photovoltaic cell,
they may be reflected, pass right
through, or be absorbed. Only the
absorbed photons provide energy to
generate electricity. When enough
sunlight (energy) is absorbed by the
material (a semiconductor), electrons are
dislodged from the material's atoms.
Special treatment of the material surface
during manufacturing makes the front
surface of the cell more receptive to free
electrons, so the electrons naturally
migrate to the surface.
When the electrons leave their position,
holes are formed. When many electrons,
each carrying a negative charge, travel
toward the front surface of the cell, the
resulting imbalance of charge between
the cell's front and back surfaces creates
a voltage potential like the negative and
positive terminals of a battery. When
the two surfaces are connected through
an external load, electricity flows.
Photovoltaics reduce your electric power
needs: you can become your own power
supply using the most efficient photovoltaic
modules commercially available.
The picture shows how photovoltaic cells
are used to generate electricity and
drive a motor.
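To put rough numbers on claims like running a 100 W bulb from about one square metre of panel, here is a back-of-the-envelope sketch; the irradiance, module efficiency and sun-hour figures are typical assumed values, not measurements from this paper.

```python
# Back-of-the-envelope PV sizing (all inputs are assumed, typical values)
irradiance = 1000.0      # W/m^2, bright midday sun
efficiency = 0.15        # roughly 15 % for common silicon modules
area = 1.0               # m^2 of panel

peak_power = irradiance * efficiency * area      # ~150 W -> enough for a 100 W bulb
sun_hours = 5.0                                  # equivalent full-sun hours per day
daily_energy_wh = peak_power * sun_hours         # ~750 Wh/day from one square metre

print(f"peak power ~{peak_power:.0f} W, daily energy ~{daily_energy_wh:.0f} Wh")
```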


APPLICATIONS OF SOLAR
ENERGY
Saving of electricity is nothing but
generation of electricity. How can we do
this? This is achieved by the following
applications of it.
Roof Integrated Solar Thermal
Collector
Artificial gas made from coal was
available to heat water, but it cost 10
times the price we pay for natural gas
today. And electricity was even more
expensive if you even had any in your
town.
There is a type of flat-plate collector that is
effective even on cloudy days. Nowadays
most of us use water heaters in our houses;
these heaters consume a lot of electricity,
and to save it we can use this type of
collector. Solar water heating systems
deliver an amazing 2100 watts of energy for
every 3 square metres of collector.
SOLUTION FOR PROBLEMS OF
PETROL:
Recently in India we have observed two-
wheelers which run on a battery, but the
only problem with them is that they need a
battery and can be driven for only about 45
minutes at a time. That means we cannot go
for long drives; for this purpose we can use
solar systems so that we can go for long
drives without stopping. Like every vehicle it
too has its share of drawbacks: its speed is
limited to 20-40 km per hour, but then for
some even this is a benefit. Above all, this
vehicle does not cause pollution and is
maintenance free. But how can we fit solar
cells on two-wheelers? Solar equipment
needs a large surface, has a high initial cost,
needs proper safety, and many other
conditions have to be checked. For that
reason we have to design the structure of the
photovoltaic cells in a slightly curved shape
so that they can follow the body of the
vehicle.
Solar charger:
Most of us use cell phones, but we never
consider how much energy we waste in
charging them. For a single person this
saving may not matter, but taken as a whole
it always matters. If we use a solar cell-phone
charger we can save this energy, and saving
energy amounts to generating electricity.
One such model is shown as an example.
For charging batteries in cars:
The small solar module in front on the
Led (photovoltaic) serves for recharging the
12 V batteries. The 12 V system for the
lighting is independent of the batteries for
the electric drive. Indeed, the Led looks very
sleek but nevertheless has a dated drag
coefficient (cw) of 0.42; a modern
construction with cw 0.28 could have 50%
more frontal cross-section, which would be
enough for a full-grown middle-class car.
Solar thermal heating
The major applications of solar thermal
energy at present are heating swimming
pools, heating water for domestic use,
and space heating of buildings. For
these purposes, the general practice is to
use flat-plate solar-energy collectors
with a fixed orientation.
Where space heating is the main
consideration, the highest efficiency
with a fixed flat-plate collector is
obtained if it faces approximately south
and slopes at an angle to the horizon
equal to the latitude plus about 15
degrees.
Solar collectors fall into two general
categories: non-concentrating and
concentrating. In the non-concentrating
type, the collector area (i.e. the area that
intercepts the solar radiation) is the same
as the absorber area. In concentrating
collectors, the area intercepting the solar
radiation is greater, sometimes hundreds of
times greater, than the absorber area.
Where temperatures below about 200 °F
are sufficient, such as for space heating,
flat-plate collectors of the non-concentrating
type are generally used.
Solar pool heating:
Many of us may have pools at home, and it
is quite uncomfortable to exercise in them on
cloudy days; we can then use these flat-plate
collectors to warm our pools. In a solar pool-heating
system pumps pool water through the
solar collector, and the collected heat is
transferred directly to the pool water.
Solar pool-heating collectors operate just
slightly warmer than the surrounding air
temperature and typically use
inexpensive, unglazed, low-temperature
collectors made from specially
formulated plastic materials. Glazed
(glass-covered) solar collectors are not
typically used in pool-heating
applications, except for indoor pools, hot
tubs, or spas in colder climates. In some
cases, unglazed copper or copper-
aluminum solar collectors are used.
Solar Water Heating for Buildings
We can use these solar systems in two ways:
one to heat the house and another to
generate electricity. Most
solar water-heating systems for buildings
have two main parts: (1) a solar collector
and (2) a storage tank. The most
common collector used in solar hot
water systems is the flat-plate collector.
Solar water heaters use the sun to heat
either water or a heat-transfer fluid in the
collector. Heated water is then held in
the storage tank ready for use, with a
conventional system providing
additional heating as necessary. The tank
can be a modified standard water heater,
but it is usually larger and very well
insulated.
IMPLEMENTATION IN
DIFFERENT FIELDS:
We all observe that street lights consume a
large amount of power, which is a burden
for the government sector. If we use
photovoltaic cells on street lights, they help
charge a battery during the day and supply
electricity to the street lights at night. There
is no need for any motors or alternators in
this case, as the photovoltaic cells directly
produce DC; an output of nearly 1000 watts
is sufficient for the street lamps.
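As a rough sizing illustration for the street-lighting idea, the sketch below estimates the panel and battery needed for one lamp; the lamp wattage, operating hours, sun-hours and loss figures are all assumed example values.

```python
# Rough solar street-light sizing (all values are illustrative assumptions)
lamp_power_w = 60.0          # assumed LED street lamp
hours_per_night = 10.0
system_efficiency = 0.7      # battery + charge-controller + wiring losses
sun_hours_per_day = 5.0      # equivalent full-sun hours

nightly_energy_wh = lamp_power_w * hours_per_night                        # 600 Wh per night
panel_w = nightly_energy_wh / (sun_hours_per_day * system_efficiency)     # ~171 W panel
battery_wh = nightly_energy_wh * 2 / 0.5   # two nights of autonomy at 50 % depth of discharge

print(f"panel ~{panel_w:.0f} W, battery ~{battery_wh:.0f} Wh (~{battery_wh/12:.0f} Ah at 12 V)")
```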
wind power with solar power:
We can use both solar energy and wind
energy simultaneously; the only problem
with wind mills is that they cannot produce
a continuous power supply. If we
incorporate solar energy, it becomes easy to
generate continuous power, at least during
the day.
The above picture clearly indicates that
when wind mills are constructed in a
particular area, that area cannot be used
for domestic purposes, so we can place the
solar cells on that free surface. The other
problem with wind mills is that they produce
a lot of noise, which is difficult to bear. This
problem can be reduced by using solar
power, and in addition, by constructing
these solar cells on or near our houses we
are able to avoid transmission losses.

BENEFITS OF SOLAR
SHADING IN ALL TYPES OF
BUILDINGS

Solar shading, including shutters and roller
shutters, contributes to the reduction of the
energy demand of buildings in all seasons:
- during the heating season, by reducing the heating demand, especially in wintertime, due to the extra thermal insulation they provide when in the closed position, and by the optimum use of free solar gains through the windows in autumn and spring with the solar shading in a controlled, open position;
- in summertime, by reducing the cooling energy demand by avoiding solar heat gains due to excessive solar energy entering through the glazed parts of the buildings.
The solar shading and shutter industry
offers a very wide variety of products; only
a few are shown here. In this example, the
top floor is equipped with external fall-arm
awnings, while the ground floor has internal
roller blinds.

Environmental and Health
Benefits

PV is the cleanest renewable energy
resource, with the lowest emissions of
any commercially available technology.
PV scores the highest in public
preferences for electricity sources,
scoring higher than natural gas,
hydropower and even wind power.
Solar power creates no hazards or
barriers to birds, fish or other wildlife. It
is also silent, which can be an advantage
in situations requiring backup power,
such as hospitals and banks.
The clean air benefits from incorporating
solar energy into a Pennsylvania RPS
would be very significant, with substantial
statewide emissions offsets expected by 2010.



Future of Solar Energy
One solution when sun goes down
'What do you do when the sun goes
down?'
The solution is to build an auxiliary
system that will store energy while the
sun is out. However, the problem is that
such storage systems are largely unavailable
today. Simple systems, like water pipes
surrounded by vacuum, do exist; they are
based on the concept that, provided the
pipes are insulated, the water will store
thermal energy.
How to grab sun in all directions
As the Sun moves across the sky, the
mirrors turn to keep the rays focused on the
tower, where oil is heated to 3,000 degrees
Celsius. The heat from the oil is used to
generate steam, which then drives a turbine,
which in turn drives a generator capable of
providing 10 kW of electrical power. Solar
One was very expensive to build, but as
fossil fuels run out and become more
expensive, solar power stations may become
a better option.

Application of nanotechnology in
solar system:
Water-splitting solar panels would have
important advantages over existing
technologies in terms of hydrogen
production. Right now, the primary way
to make hydrogen is to separate it from
natural gas, a process that generates
carbon dioxide and undercuts the main
motivation for moving to hydrogen fuel-
cell vehicles: ending dependence on
fossil fuels.
The current alternative is electrolysis,
which uses electricity to break water into
hydrogen and oxygen, with the two
gases forming at opposite electrodes.
Although electrolysis is costly, it can be
cleaner if the source of the electricity is
wind, sun, or some other carbon-free
source. But if the source of the
electricity is the sun, it would be much
more efficient to use solar energy to
produce hydrogen by a photochemical
process inside the cell itself. By
improving the efficiency of such solar
panels we take an important step toward
this goal.
The solution is to include small amounts
of silicon and cobalt: with these,
nanostructured thin films of iron oxide can
be grown that convert sunlight into the
electrons needed to form hydrogen from
water, and the iron oxide films do this more
efficiently than ever before with this
material.
ROLE OF ENGINEER:
But the approach has been difficult to
implement: it has not delivered on its
promise, mostly because of the complexity of
the systems. The goal is to engineer a
concentrating system that focuses sunlight,
that tracks the movement of the sun to keep
the light on the small solar cell, and that can
accommodate the high heat caused by
concentrating the sun's power by 500 to 700
times, and to make such a system easy to
manufacture.
Advantages
Solar energy is free - it needs no
fuel and produces no waste or
pollution.
In sunny countries, solar power
can be used where there is no
easy way to get electricity to a
remote place.
Handy for low-power uses such
as solar powered garden lights
and battery chargers
Disadvantages
Doesn't work at night.
Very expensive to build solar
power stations.
Solar cells cost a great deal
compared to the amount of
electricity they'll produce in their
lifetime.
Can be unreliable unless you're
in a very sunny climate. In the
United Kingdom, solar power
isn't much use except for low-
power applications, as you need a
very large area of solar panels to
get a decent amount of power.
However, for these applications
it's definitely worthwhile.
EMERGING TECHNOLOGY:

Timothy Fisher is taking a Tiffany's
approach to converting sunlight into
electricity: with a $348,000 grant from the
National Reconnaissance Office, the
assistant professor of mechanical
engineering is exploring the use of
polycrystalline diamond as a replacement for
the silicon solar cells currently used in many
space applications. (The solar wings of the
international space station are the largest
power-generating solar arrays ever flown in
orbit.) "Diamond has a number of potential
advantages for use in outer space," says
Fisher, who will be working on the project
with Weng Poo Kang, an associate professor
of electrical engineering and computer
science. Fisher maintains that diamond films:
- can withstand the high levels of radiation typical of the space environment (by contrast, the performance of silicon cells degrades by about 50 percent after 10 years in orbit);
- can operate at high temperatures, and as a result can be used with low-weight inflatable solar collectors, resulting in an energy system that produces more electricity per pound, a critical factor in space applications;
- have a potential conversion efficiency of 50 percent, as compared to 10 to 15 percent for silicon solar cells.
Conclusion:
Thus, I conclude that by implementing the
vast usage of solar energy all over the
world, up to 50% of our electricity problem
could be solved. By using solar energy for
the heating and cooling of rooms and using
photovoltaic cells in vehicles, we can
probably eliminate pollution in our country
to some extent.




REFERENCES:
NON CONVENTIONAL ENERGY
SOURCES BY G.D. RAI.

AUTOMATION AND CONTROL OF
DISTRIBUTION USING SCADA







Submitted by



S.Mohammed Rafi B.Vijay Kumar
III/IV B.Tech, EEE III/IV B.Tech, EEE
rafi_237@yahoo.co.in balu626@yahoo.co.in
Contact: 9885963955 Contact: 9885996486


JNTU COLLEGE OF ENGINEERING
ANANTAPUR



ABSTRACT:

In every power station/substation certain
measurement, supervision, control,
operation and protection functions are
necessary. Traditionally these functions were
performed manually by system operators
from control rooms. With the progress in
digital electronics, data processing, data
communication and microprocessors, a host
of new devices and systems are being
introduced for power system automation.
SCADA is a computer-based, programmable
and distributed supervisory control and data
acquisition system. It is mainly used for
remote and local supervision and control of
electricity distribution at medium voltage
level. Distribution SCADA supervises the
distribution system.
In this paper we present distribution SCADA
and distribution automation and its benefits.
The application of a SCADA system to
automate the power supply for Hyderabad &
Secunderabad and streamline the APCPDCL
electrical distribution network is presented.
The current situation, present benefits,
long-term benefits and future needs are
explained. Finally, managing distribution is
explained.

Introduction:
Traditionally, the protective system
comprising relays and circuit breakers was
almost independent of the control systems
for tap changer control, voltage control, data
logging, data monitoring and routine
operations. Before 1985, the protective
functions were segregated from control
functions. With traditional electromechanical
relays and the earlier generation of hard-
wired static relays, the functions of
protection systems were limited to sensing
faults and abnormal conditions, giving
alarms, tripping circuit breakers and auto-
reclosing of circuit breakers. Data logging
and some control and supervision functions
were performed manually by system
operators from control rooms. In traditional
substation controls the three functions, i.e.,
protection, control and monitoring, were not
fully integrated.

In modern automatic SCADA systems, the
functions are interlinked by means of digital
processing devices and power line
carrier/radio communication links. Every
power station/substation has a control room;
the relay and protection panels, control
panels and man-machine interface (MMI)
are installed in the control room. With the
development of programmable digital
systems, i.e., microprocessor-based SCADA
systems, the entire supervisory, control and
protective functions can be combined.

The main function of distribution SCADA is
the control and supervision of the distribution
of power to various consumers with minimum
outage and minimum loss.

Supervisory Control and Data
Acquisition (SCADA):
SCADA is a computer-based, programmable
and distributed supervisory control and data
acquisition system. It is used to improve
overall system efficiency in terms of both
capital and energy and to increase the
reliability of service to essential loads. A
Supervisory Control and Data Acquisition
(SCADA) system makes it possible to control
distribution equipment from a remote
location. It provides operating information
and performs these additional services:
Feasibility studies
System design
Database design
Communications
Project management
Dispatcher training
Master Station specifications
RTU specifications

The application areas of SCADA can be
categorized into the following groups:

Small SCADA systems with a selected
number of functions, e.g. for distribution
networks and for electrical networks of
industrial complexes.

Medium-size SCADA and EMS with the
full spectrum of functions for
distribution and subtransmission
networks, and selected functions for
generation.
Large-scale SCADA and EMS with an
extensive and sophisticated range of
functions for transmission networks and
generation.

Distribution SCADA supervises the
distribution system. Similar system
architectures are used for power system
SCADA and load SCADA.

The SCADA process can be classified into three parts:
Input
- Analog: continuous electrical signals, e.g. active power (MW), reactive power (MVAR), voltage (kV), frequency (Hz), etc.
- Digital: switching signals, high (1) or low (0), e.g. breaker closed (high) or open (low), isolator closed (high) or open (low).
Process
- The signals are converted into digital format.
- A protocol is implemented between the master and the slaves.
- It operates with a real-time operating system (RTOS).
Output
- The results are presented in a user-friendly environment.
- Through the displays it is possible to control the substation and the generating station.
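A schematic sketch of this input-process-output flow, using made-up point names and a stand-in poll function rather than any real SCADA protocol; it only illustrates how analog and digital points might be represented and displayed.

```python
from dataclasses import dataclass

@dataclass
class AnalogPoint:          # continuous electrical signals (MW, MVAR, kV, Hz, ...)
    name: str
    value: float
    unit: str

@dataclass
class DigitalPoint:         # switching signals: 1 = closed/high, 0 = open/low
    name: str
    state: int

def poll_rtu():
    """Stand-in for a master-to-RTU poll cycle (no real protocol implemented)."""
    return (
        [AnalogPoint("Feeder-1 Active Power", 12.4, "MW"),
         AnalogPoint("Bus-A Voltage", 33.1, "kV"),
         AnalogPoint("Frequency", 49.98, "Hz")],
        [DigitalPoint("Breaker 52-1", 1), DigitalPoint("Isolator 89-1", 1)],
    )

def display(analogs, digitals):
    """'Output' stage: present the acquired data in operator-friendly form."""
    for p in analogs:
        print(f"{p.name:25s} {p.value:8.2f} {p.unit}")
    for d in digitals:
        print(f"{d.name:25s} {'CLOSED' if d.state else 'OPEN'}")

display(*poll_rtu())
```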

Distribution Automation:
It is an integrated system concept for the
digital automation of distribution
substation, feeder and user functions. It
includes control, monitoring and
protection of the distribution system,
load management and remote metering
of consumer loads. Distribution
automation systems provide utilities the
ability to optimize the operations of their
distribution systems and directly
improve reliability. Adding targeted
distribution automation capabilities can
be especially economical when they are
an extension of existing investments in
SCADA and in system operations.
Distribution automation can optimize
system conditions or solve a specific
problem for a specific area or customer.

The key for distribution automation
success is to target the highest value
results and not choose the wrong
technology. Interest is growing in
distribution automation as utilities look
to extend their automation beyond the
substation and leverage the breadth of
new communication technologies
available.


Current Situation:
The technology to automate and manage the
power distribution system has been evolving
for a few decades now, but has not yet been
implemented widely enough to achieve the
highly desirable benefits of improved
customer service and retention. Today's
market conditions have made distribution
automation a critical part of every utility's
operation. Markets with highly restrictive
reliability standards are forcing utilities to
monitor power quality issues and events as
never before, so storing large amounts of
data is more necessary than ever. Utilities
are covering larger territories with the same
personnel, so it is important to optimize the
use of the communication lines by using
them for multiple applications such as
SCADA, remote IED configuration and video
surveillance.
Different Users, Different Needs:
Traditionally, SCADA systems have
communicated with Remote Terminal
Units (RTUs) located at the substations,
or in Distribution Automation
applications, RTUs were also located in
the field. RTUs were traditionally
connected to status and control points as
well as transducers to measure Voltage,
Current, Power, Frequency and other
electrical parameters. RTUs connected
in this way were only capable of
measuring parameters specific to the
transducer type they were connected to.
Information such as phase angle was not
available, so calculated parameters were
not available unless a dedicated
transducer was installed.

This situation was solved once the "transducerless" RTUs were introduced. As they accept direct AC inputs from voltage and current transformers, transducerless RTUs can measure and calculate any electrical parameter (depending on the RTU's processing capability). With the introduction of electronic devices at the substation, the capability of providing large amounts of data related to the operation of the substations opened up. Depending on your needs, you could:

Communicate directly with your
IEDs without using an RTU or
any sort of Data Concentrator in
between.

Configure any of your IEDs
either locally or remotely using
the same communication line as
the SCADA system.

Retrieve non-SCADA related
information from your IEDs
(events, transients, oscillography,
etc.).

Store an unlimited amount of
historical information on any
relational database (SQL
compatible), either at the
substation or at any remote
location.

Deploy as many local or remote
workstations as you need, so you
can monitor and/or control your
substation from anywhere in the
world.

Segregate the information
generated at the substation for
reporting to different "masters"
(i.e. HMI, SCADA system,
Power Pool, Independent Market
operator, etc.).

Display Video Surveillance (Live
and Recorded) over the SCADA.

Application:
A SCADA system to automate power distribution for Hyderabad & Secunderabad and streamline the APCPDCL electrical distribution network.

APCPDCL (Andhra Pradesh Central
Power Distribution Company Ltd)
commissioned a state of the art SCADA
(Supervisory Control & Data
Acquisition) system supplied by ABB
India to monitor and control the power
distribution network for the twin cities of
Hyderabad and Secunderabad. The
project scope encompassed design,
supply, erection & commissioning of the
SCADA system with a control center at
Erragadda, Hyderabad for 132 kV and
33 kV substations in and around
Hyderabad and Secunderabad. The
SCADA system will provide immediate
benefits to APCPDCL in terms of
availability and quality of power as well
as lower outage and interruption time.
The SCADA
system has been installed to monitor and
control all substations in Hyderabad &
Secunderabad from a centralized
location enabling complete management
of power supply to serve consumers
more efficiently. The system offers
operational advantages by integrating
distribution automation functionalities
like Automatic / Remote meter reading,
Load balancing, trouble call
management etc. to facilitate effective
distribution management.
The system will help APCPDCL increase the uptime of its network and facilitate optimum allocation of resources. It will also facilitate instant remedial action to restore normalcy in case of power outages. Load shedding can also be programmed in a systematic manner to avoid grid collapse and complete blackout. There are 106 substations in Hyderabad and Secunderabad feeding the twin cities. This state-of-the-art SCADA
system will facilitate collection of all
relevant information from these
substations to be sent to the central
computer system installed at the
APCPDCL Control Room. This will be
done over a TDMA backbone and over
digital UHF data radios covering an area
of 1500 sq km. A similar SCADA
solution has already been implemented
by ABB for TNEB serving the city of
Chennai.







Present Benefits:
Manual meter reading is being
replaced with automatic
reporting.

Real time alarms and data give
operators the information they
need to respond quickly. The
Utilities Division can be
proactive in providing quality
information to its customers.

More effective load management.
Improved engineering, planning,
operation and maintenance of
distributed systems.

Effective load shedding with
minimum outage to important
consumers.

Reduced outages and increased
reliability and availability.

Long Term Benefits and Future
Needs:
Power System Operators
need to be able to continue to remotely
and instantaneously, identify electrical
power system failures at any location in
the distribution system. Accurate real
time alarming and historical information
is needed to continually meet the needs
of a diverse community of energy users.
A continuation of the demands for high
reliability and accurate performance and
trending data is paramount.
Outage Records
Post-incident analysis is required to prevent recurrence of similar outages and power failures.

Power Quality
As needed, generally after an event or upon query from building power users, power quality records are requested.
Substation Security: Fire and Door Alarms
Early on in the design of the
SCADA application, it was determined
that fire/smoke detectors were needed in
all station switch gear rooms. This
function, along with door alarm contacts,
provides the system operators with the
ability to respond instantly to these types
of events.
Substation Battery Status
Loss of the critical function
of the battery system can be devastating
for a switchgear breaker unit and
inability to trip a faulted circuit can be a
disaster. Setting low voltage limits that alarm instantly warns the maintenance personnel, who can respond quickly and
avert a major problem.
Switching and Paralleling Operations
Confirmation of the physical
change in operator switch position was
not available prior to the use of remote
SCADA applications.
Substation Primary Transformer
Status
Most substation transformer
status alarms and events are monitored
by the system. The annunciation and
display of alarm conditions ensures
timely investigation of the problem.
Remedial action can prevent future
equipment damage and power outages.

Conclusion:
Distribution SCADA improves overall system efficiency in the use of both capital and energy.
It increases the reliability of service to essential loads.

Improves quality of supply.
Improves continuity of supply.

Distribution automation systems provide utilities the ability to optimize the operations of their distribution systems.

Managers and engineers can view
process data from their desktop whether
the data is on site or miles away.



REFERENCES:
1. DISTRIBUTION AUTOMATION BY WALTER K. EICHELBURG
2. IEEE/PES TRANSMISSION AND DISTRIBUTION CONFERENCE
3. BUSINESS LINE (INTERNET EDITION) FROM THE HINDU GROUP OF PUBLICATIONS

SITES:
1. WWW.APTRANSCO.COM
2. WWW.SAT-AUTOMATION.COM
3. WWW.ERCAP.ORG

















DISTRIBUTION AUTOMATION SYSTEM






COMPONENTS OF SCADA SYSTEM


PAPER PRESENTATION

ON

BATTERY LIFE ESTIMATION OF MOBILE EMBEDDED SYSTEMS

Authors


V.Saichand CH.VishnuChaitanya
III/IV B.Tech (C.S.E). III/IV B.Tech (C.S.E)
Emailid: vsaichand@yahoo.com Chitturu_vishnu@yahoo.co.in


SRI SARATHI INSTITUTE OF ENGINEERING AND TECHNOLOGY



NUZVID,
Krishna District,
Andhra Pradesh.



Abstract
Since battery life directly impacts the extent and duration of mobility, one of the key
considerations in the design of a mobile embedded system should be to maximize the
energy delivered by the battery, and hence the battery lifetime. To facilitate exploration
of alternative implementations for a mobile embedded system, in this paper we address
the issue of developing a fast and accurate battery model, and providing a framework for
battery life estimation of Hardware/Software(HW/SW) embedded systems. We introduce
a stochastic model of a battery, which can simultaneously model two key phenomena
affecting the battery life and the amount of energy that can be delivered by the battery:
the Rate Capacity effect and the Recovery effect. We model the battery behavior
mathematically in terms of parameters that can be related to physical characteristics of
the electro-chemical cell. We show how this model can be used for battery life estimation
of a HW/SW embedded system, by calculating battery discharge demand waveforms
using a power co-estimation technique. Based on the discharge demand, the battery
model estimates the battery lifetime as well as the delivered energy. Application of the battery life estimation methodology to three system implementations of an example TCP/IP network interface subsystem demonstrates that different system architectures can have significantly different delivered energy and battery lifetimes.

1 Introduction

As the need for mobile
computation and communication
increases, there is a strong demand for
design of Hardware/ Software (HW/SW)
Embedded Systems for mobile
applications. Maximizing the amount of
energy that can be delivered by the
battery, and hence the battery life, is one
of the most important design
considerations for a mobile embedded
system, since it directly impacts the extent and duration of the system's mobility. To enable exploration of
alternative implementations for a mobile
system, it is critical to develop fast and
accurate battery life estimation
techniques for embedded systems. In this
paper, we focus on developing such a
battery model, and provide a framework
for battery-life estimation of HW/SW
embedded systems. Previous research on
low power design techniques [1, 2], tries
to minimize average power consumption
either by reducing the average current
drawn by a circuit keeping the supply
voltage fixed or by scaling the supply
voltage statically or dynamically.
However, as shown in this paper,
designing to minimize average power
consumption does not necessarily lead to
optimum battery lifetime. Additionally,
the above techniques assume that the battery subsystem is an ideal source of energy which stores or delivers a fixed amount of energy at a constant output
voltage. In reality, it may not be possible
to extract the energy stored in the battery
to the full extent as the energy delivered
by a battery greatly depends on the
current discharge profile. Hence,
accurate battery models are needed to
specifically target the battery life and the
amount of energy that can be delivered
by a battery in the design of a mobile
system. The lifetime of a battery, and the
energy delivered by a battery, for a given
embedded system strongly depend on
the current discharge profile. If a current
of magnitude greater than the rated
current of the battery is discharged, then
the efficiency of the battery (ratio of the
delivered energy and the energy stored
in the battery) decreases, in other words,
the battery lifetime decreases [3, 10].
This effect is termed as the Rate
Capacity Effect. Additionally, if a
battery is discharged for short time
intervals followed by idle periods,
significant improvements in the
delivered energy seem possible [11, 13].
During the idle periods, also called
Relaxation Times, the battery can
partially recover the capacity lost in
previous discharges. We call this effect
as the Recovery Effect. An accurate
battery model, representing fine-grained
electro-chemical phenomenon of cell
discharge using Partial Differential
Equations (PDE), was presented in [14].
However, it takes prohibitively long
(days) to estimate the battery lifetime for
a given discharge demand of a system.
Hence, the PDE models cannot be used
for design space exploration. Some
SPICE level models of battery have been
developed [6, 7], which are faster than
the PDE model. However, the SPICE
models can take into account the effect
of Rate Capacity only. Based on the Rate
Capacity effect, a system level battery
estimation methodology was proposed in
[4, 5]. Recently, a Discrete-Time battery
model was proposed for high-level
power estimation [8]. Though it is faster
than the previous models, it does not
consider the Recovery effect. In this
paper, we describe a stochastic battery
model, taking into account both the
Recovery effect and the Rate Capacity
Effect. The proposed model is fast as it
is based on stochastic simulation. Also,
by incorporating both Recovery and
Rate Capacity effects, it represents
physical battery phenomena more
accurately than the previous fast models.
We also show how this model can be
used for estimating the battery lifetime
and the energy delivered by the battery
for a HW/SW system, by calculating
battery discharge demand waveforms
using a power co-estimation technique
[9]. Based on the discharge demand, the
battery model estimates the battery
lifetime as well as the delivered energy.
Finally, we demonstrate how this
framework can be used for system level
exploration using a TCP/IP network
interface subsystem. The results indicate
that the energy delivered by the battery
and the lifetime of the battery can be
significantly increased through
architectural explorations. The rest of the
paper is organized as follows. Section 2
motivates the need for an accurate
battery life estimation methodology by
illustrating that the battery life and the
energy delivered by the battery can be
affected significantly by tradeoffs at the
system level. Section 3 provides
background on the physical phenomena inside a battery, which affect the battery lifetime as well as the delivered
energy. The proposed battery model is
described in section 4. The methodology
used to calculate current waveforms is
described in section 5. In section 6, we
demonstrate how the battery life
estimation methodology can be used to
evaluate alternate implementations in the
design of Battery Efficient Systems.
Section 7 concludes the paper and
explores future research.
2 Motivation : Exploration For Battery
Efficient Architectures
In this section, we present the effect of
system architectures on the delivered
energy and the lifetime of battery. Our
investigations motivate the need for fast
and accurate battery life estimation techniques that can be used for system level exploration. We analyze the
performance of an example TCP/IP
network interface subsystem with
respect to the delivered energy and the
lifetime of the battery. The subsystem
consists
of the part of the TCP/IP protocol stack
performing the chksum computation
(Figure 1). Create Packet receives a
packet, stores it in a shared memory, and
enqueues its starting address. IP Chk
periodically dequeues packet
information, erases specific bits of the
packet in memory, and coordinates with
Checksum to verify the checksum value.
Figure 1 shows a candidate architecture
for the TCP/IP system where Create
Packet and Packet Queue are software
tasks mapped to a SPARC processor,
while IP Chk and Chksum are each
mapped to dedicated hardware. Packet
bits are stored in a single shared memory
accessed through a common system bus.

We study the effect of alternate ways of
packet processing by the system on the
battery lifetime as well as the energy
delivered by the battery. In the first
implementation (Sys A), the packets are
processed sequentially as shown in
Figure 2(a). To estimate the battery
lifetime and the energy delivered by the
battery of the implementation, the
current profiles of each component of
the system need to be calculated. Figures
2(b) and (c) show the current demands
(in mA) for some of the components of
the system for Sys A plotted over time
(in ms), calculated using the
methodology described in Section 5. The
cumulative current profile for Sys A is
shown in
Figure 2(d).

Figure 3(a) shows another way of
scheduling the packets.
In this implementation (Sys B), instead
of sequential processing of packets as in
Sys A, the first two packets are
processed in a pipelined manner; for
example, while Chksum is processing
the first packet, Create Packet starts
writing the next packet to memory. The
cumulative current profile for Sys B is
shown in Figure 3(b).

As shown in Table 1, the average current
(and hence average power consumed) is
very similar for both the alternative
system implementations. Table 1 shows
the battery lifetimes and the specific
energy delivered by the battery,
calculated using an accurate battery
model [14] for both implementations,
Sys A and Sys B. The table also reports
the number of packets processed by each
system before the battery used by the
systems is completely discharged. Note
that, though the two discharge demands
have almost equal average current
requirement, the delivered specific
energy and lifetime of the
battery differ significantly.

The results show that the battery life of a
mobile embedded system can be
improved significantly by system level
tradeoffs. For instance, as shown in
Table 1, Sys A can process 119018
packets before the battery used for the system is discharged, whereas Sys B can process 178828 packets using the
same battery. The results also show that
the delivered energy and the lifetime of a
battery can be significantly different for
current demands with the same average
current requirement. The above results
motivate us to develop a fast and
accurate battery model that can be used
in design exploration for battery optimal
design of mobile embedded systems.
The next section provides a brief
background on the operation of a
battery, emphasizing the physical
phenomena that affect the battery
lifetime and the energy delivered by the
battery.
3 Battery Background
A battery cell consists of an anode, a
cathode, and electrolyte that separates
the two electrodes and allows transfer of
electrons as ions between them. During
discharge, oxidation of the anode (Li for
LiIon battery) produces charged ions
(Li+), which travel through the
electrolyte and undergo reduction at the
cathode. The reaction sites (parts of the
cathode where reductions have occurred)
become inactive for future discharge
because of the formation of an inactive
compound. Rate Capacity Effect
(dependency of energy delivered by a
battery on magnitude of discharge
current) and Recovery Effect (recovery
of charged ions near cathode) are two
important phenomena that affect the
delivered energy and the lifetime of a
battery. A short description of the
physical phenomena responsible for
these effects follows. The lifetime of a
cell depends on the availability and reachability of active reaction sites in the cathode. When discharge current is low, the inactive sites (made inactive by previous cathode reactions) are
distributed uniformly throughout the
cathode. But, at higher discharge current,
reductions occur at the outer surface of
the cathode making the inner active sites
inaccessible. Hence, the energy
delivered (or the battery lifetime)
decreases since many active sites in the
cathode remain un-utilized when the
battery is declared discharged. Besides
non-availability of active reaction sites
in the cathode during discharge, the non-
availability of charged ions (lithium ions
for lithium insertion cell) can also be a
factor determining the amount of energy
that can be delivered by a battery [11].
Concentration of the active species
(charged ions i.e. Li+) is uniform at
electrode-electrolyte interface at zero
current. During discharge, the active
species are consumed at the cathode-
electrolyte interface, and replaced by
new active species that move from
electrolyte solution to cathode through
diffusion. However, as the intensity of
the current increases, the concentration
of active species decreases at the cathode
and increases at the anode and the
diffusion phenomenon is unable to
compensate for the depletion of active
materials near the cathode. As a result,
the concentration of active species
reduces near the cathode decreasing the
cell voltage. However, if the cell is
allowed to idle in between discharges,
concentration gradient decreases because
of diffusion, and charge recovery takes
place at the electrode. As a result, the
energy delivered by the cell, and hence
the lifetime, increases. Summarizing, the
amount of energy that can be delivered
from a cell and the lifetime of a cell,
depend on the value of the discharge
current and the idle times in the
discharge demand. In the next
subsection, we define some notations
which will be used in the rest of the
paper.
3.1 Notations and Definitions
A battery cell is characterized by the
open-circuit potential (VOC), i.e., the
initial potential of a fully charged cell
under no-load conditions, and the cut-off
potential (Vcut) at which the cell is
considered discharged. Two parameters
are used to represent the cell capacity:
the theoretical and the nominal capacity.
The former is based on the amount of
energy stored in the cell and is expressed
in terms of ampere-hours. The latter
represents the energy that can be
obtained from a cell when it is
discharged at a specific constant current
(called the rated current, Crated).
Battery data-sheets typically represent
the capacity of the cell in terms of the
nominal capacity. Finally, to measure
the cell discharge performance, the
following two parameters are
considered: Battery Lifetime and
Delivered Specific Energy. Battery
Lifetime is expressed as seconds elapsed
until a fully charged cell reaches the
Vcut voltage. Delivered Specific Energy
is the amount of energy delivered by the
cell of unit weight, i.e., expressed as
watt-hour per kilogram. In the next
section, we describe our basic stochastic
battery-model that captures the Recovery
Effect, and a description of an extension
of the model to incorporate the Rate
Capacity Effect.
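To make these definitions concrete, the following minimal sketch (illustrative only; the class name, field names and the toy discharge curve are ours, not from the paper) computes Battery Lifetime and Delivered Specific Energy from a sampled constant-current discharge.

# Illustrative sketch of the definitions above; names and sample data are ours.
from dataclasses import dataclass

@dataclass
class CellSpec:
    v_oc: float        # open-circuit potential of a fully charged cell (V)
    v_cut: float       # cut-off potential at which the cell is considered discharged (V)
    nominal_ah: float  # nominal capacity at the rated current (ampere-hours)
    weight_kg: float   # cell weight, needed for specific-energy figures

def lifetime_and_specific_energy(cell, voltage_trace, current_a, dt_s):
    """voltage_trace: cell voltage sampled every dt_s seconds under a constant
    discharge of current_a amperes. Returns (lifetime in s, delivered Wh/kg)."""
    energy_wh = 0.0
    for step, v in enumerate(voltage_trace):
        if v <= cell.v_cut:                    # cell declared discharged
            return step * dt_s, energy_wh / cell.weight_kg
        energy_wh += v * current_a * dt_s / 3600.0
    return len(voltage_trace) * dt_s, energy_wh / cell.weight_kg

# Toy example: VOC and Vcut follow the values quoted later in the paper,
# the capacity, weight and linear discharge curve are placeholders.
cell = CellSpec(v_oc=4.3071, v_cut=2.8, nominal_ah=1.2, weight_kg=0.04)
trace = [4.3 - 0.0005 * i for i in range(4000)]
print(lifetime_and_specific_energy(cell, trace, current_a=0.5, dt_s=1.0))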
4 Battery Models
The fine-grained electro-chemical
phenomena underlying the cell discharge
are represented by the accurate model,
based on PDE (Partial Differential
Equations) [14], which involve a large
number of parameters depending on the
type of cell. The set of results that can be
derived through the PDE model is
limited since as the discharge current
and the cut-off potential decrease, the
computation time becomes exceedingly
large. Hence, the accurate PDE model
cannot be used for system-level
exploration of a mobile embedded
system. In the following section, we
present a more tractable parametric
model that captures the essence of the
recovery mechanism.
4.1 Stochastic Battery Model
We model the battery behavior
mathematically in terms of parameters
that can be related to the physical
characteristics of an electro-chemical
cell [15, 16]. The proposed stochastic
model focuses on the Recovery Effect
that is observed when Relaxation Times
are allowed in between discharges. Let
us consider a single cell and track the stochastic evolution of the cell from the fully charged state to the completely discharged state.
We define the smallest amount of
capacity that may be discharged as a
charge unit. Each fully charged cell is
assumed to have a maximum available
capacity of T charge units, and a
nominal capacity of N charge units. The
nominal capacity, N, is much less than T
in practice and represents the charge that
could be extracted using a constant
discharge profile. Both N and T vary for
different kinds of cells and values of
discharge current. We represent the cell
behavior as a discrete time transient
stochastic process that tracks the cell
state of charge. Figure 4 shows a
graphical representation of the process.
At each time unit, the state of charge decreases from state i to state i - n if n charge units are demanded from the
battery. Otherwise, if no charge units are
demanded, the battery may recover from
its current state of charge (i) to a higher
state (greater than i). The stochastic
process starts from the state of full
charge (V =VOC), denoted by N, and
terminates when the absorbing state 0 (V
=Vcut) is reached, or the maximum
available capacity T is exhausted. In
case of constant current discharge are
drained and the cell state goes from N to
0 in a time period equal to N time units.
By allowing idle periods in between
discharges, the battery can partially
recover its charge during the idle times,
and thus we can drain a number of
charge units greater than N before
reaching the state 0. In this model, the
discharge demand is modeled by a
stochastic process. Let us define qi to be
the probability that in one time unit,
called slot, i charge units are demanded.
Thus, starting from N, at each time slot,
with probability qi (i > 0), i charge units
are lost and the cell state moves from state z to z - i (see Figure 4). On the other
hand, with probability q0 an idle slot
occurs and the cell may recover one
charge unit (i.e., the cell state changes
from state z to z+1) or remain in the
same state. The recovery effect is
represented as a decreasing exponential
function of the state of charge of the
battery. To more accurately model real
cell behavior, the exponential decay
coefficient is assumed to take different
values as a function of the discharged
capacity. During the discharge process,
different phases can be identified
according to the recovery capability of
the cell. Each phase f ( f =0,..., fmax)
starts right after df charge units have
been drained from the cell and ends
when the amount of discharged capacity
reaches df+1 charge units. The
probability of recovering one charge unit in a time slot, conditioned on being in state j (j = 1, ..., N - 1) and phase f, is modeled as an exponentially decaying function of the cell's state of charge, scaled by the idle-slot probability q0; gN and gC are parameters that depend on the recovery capability of the battery. In particular, a small value of gN represents a high cell conductivity (i.e., a great recovery capability of the cell), while a large gN corresponds to a high internal resistance (i.e., a steep discharge curve for the cell). The value of gC is related to the cell potential drop during the discharge process and, therefore, to the discharge current. Given the recovery probability, the probability of remaining in the same state of charge during an idle slot while in phase f is the idle-slot probability q0 minus the recovery probability. We assume that gN is a constant, whereas gC is a piecewise constant function of the number of charge units already drawn off the cell, which changes value in correspondence with df (f = 1, ..., fmax). We have d0 = 0 and dfmax+1 = T, while for df (f = 1, ..., fmax) proper values are chosen according to the configuration of the battery. One simulation step of the battery cell, assuming a Bernoulli arrival process as the input discharge demand, is shown in Figure 5.
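The sketch below shows one possible reading of such a simulation step. The exact recovery expression is not reproduced in this excerpt, so the exponential form used here is an assumption chosen to match the description above (a decreasing exponential governed by gN and gC within an idle slot); the parameter values are arbitrary placeholders, not the ones used in the paper.

# Sketch of one slot of the stochastic cell model described above.
# The recovery expression is an assumption consistent with the text, not the
# paper's exact formula; parameter values are placeholders.
import math
import random

def simulate(N, T, q, g_n, g_c_phases, d_thresholds):
    """Bernoulli discharge demand: with probability q one charge unit is
    demanded per slot, otherwise the slot is idle (q0 = 1 - q).
    Returns (slots elapsed, charge units actually delivered)."""
    state = N              # current state of charge (nominal capacity N)
    drained = 0            # charge units drawn so far (bounded by T)
    slots = 0
    while state > 0 and drained < T:
        slots += 1
        if random.random() < q:                 # a charge unit is demanded
            state -= 1
            drained += 1
        else:                                   # idle slot: possible recovery
            phase = sum(drained >= d for d in d_thresholds)      # current phase f
            g_c = g_c_phases[min(phase, len(g_c_phases) - 1)]
            # Assumed form: exponentially decaying recovery chance within an idle slot.
            p_recover = math.exp(-g_n * (N - state) - g_c)
            if state < N and random.random() < p_recover:
                state += 1                       # recover one charge unit
    return slots, drained

random.seed(0)
print(simulate(N=2000, T=3000, q=0.3, g_n=0.0,
               g_c_phases=[0.05, 0.1, 0.5, 2.0], d_thresholds=[800, 1600, 2400]))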
4.2 Validation of the Stochastic Model

In this section we present a comparison
between results obtained through the
stochastic model and those derived from
the PDE model of a dual lithium ion insertion cell. The discharge demand is assumed to be a Bernoulli process with probability q that one charge unit is required in a time slot. Note that in this discharge process the probability of an idle slot (q0) is (1 - q), whereas qi (i > 1) is equal to zero. The PDE model was numerically solved by using a program developed by Newman et al. [17].
Results relate to the first discharge cycle
of the cell; thus, discharge always starts
from a value of positive open-circuit
potential equal to 4.3071 V. We consider
that the cut-off potential is equal to 2.8
V and the current impulse duration is
equal to 0.5 ms. Results obtained from
the stochastic model are derived under
the following assumptions: fmax =3, N
equal to the number of impulses
obtained through the PDE model under
constant discharge, and T equal to the
number of impulses obtained through the
PDE model when q = 0.1. Figure 6
presents the behavior of the delivered
capacity normalized to the nominal
capacity versus the discharge rate (q) at
which the current impulses are drained
for three values of current density: I=90,
100, and 110 A/m2. It can be seen that
the curves obtained from the PDE and
the stochastic models match closely.
For these cases, we assume gN = 0 and vary the parameters gC and df (f = 1, ..., fmax) of the stochastic model according to the considered value of current density. Following this procedure, we obtain a maximum error equal to 4% and an average error equal to 1%.

For simulation, any current discharge demand is converted into an average current over a time period, which is based on the time constant that characterizes the electrochemical phenomena. Given any current demand waveform, we calculate the average current drawn over each time period and convert it to an appropriate number of charge units to be drawn in the battery model. We used a time constant of 0.5.
4.4 Incorporation of Rate Capacity Effect
As we have described earlier, the efficiency of the battery decreases when the discharge current is more than the rated current (Crated). If the efficiency of the battery is μ (0 < μ < 1), then for a demanded current I the actual current drawn is I/μ, not I. For example, if there is a demand of 2 charge units and the efficiency of the battery is 60% at that current level, then we would draw about 3 charge units instead of 2. To incorporate this Rate Capacity effect, we change the number of charge units to be actually discharged. We calculate the actual number of charge units to be discharged by looking up a table, which stores the relationship between the demanded charge units and the actual charge units to be discharged, calculated based on simulation of the PDE model. The basic step of the simulation, incorporating the Rate Capacity effect for any deterministic discharge demand, is described in Figure 7.
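A minimal sketch of this adjustment follows. The efficiency table below is a stand-in for the PDE-derived lookup table mentioned above; its breakpoints and efficiency values are invented for illustration.

# Sketch of the Rate Capacity adjustment: demanded charge units are inflated
# by the battery efficiency at that demand level. The table values below are
# invented placeholders, not the PDE-derived table used in the paper.
ASSUMED_EFFICIENCY_TABLE = [
    (1, 1.00),   # up to 1 charge unit per slot: near rated current, full efficiency
    (2, 0.80),
    (4, 0.60),
    (8, 0.45),
]

def actual_charge_units(demanded: int) -> float:
    """Return the charge units actually drawn from the cell for a given demand."""
    if demanded <= 0:
        return 0.0
    efficiency = ASSUMED_EFFICIENCY_TABLE[-1][1]
    for limit, eff in ASSUMED_EFFICIENCY_TABLE:
        if demanded <= limit:
            efficiency = eff
            break
    return demanded / efficiency        # I/mu rather than I

print(actual_charge_units(2))   # 2 units at 80% efficiency -> 2.5 units drawn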
We present results to validate our enhanced model in Section 6. For estimating the battery lifetime and the energy delivered by the battery of SoC designs, we need current demand waveforms for the system. The next section describes the methodology used to calculate cycle-accurate current demand waveforms for HW/SW mobile systems.

Conclusion:
We have presented a stochastic model of a battery and a framework for estimating the battery life as well as the delivered energy for system-level design space exploration of battery powered mobile embedded systems. The model proposed is fast enough to enable iterative battery life estimation for system level exploration. At the same time, it is very accurate, as validated using an accurate PDE model. In future, we will use the developed framework for exploring how system architectures can be made battery-efficient and for suggesting the optimum battery configuration for any HW/SW embedded system.

References
[1] A. P. Chandrakasan and R. W. Brodersen, Low Power Digital CMOS Design, Kluwer Academic Publishers, Norwell, MA, 1995.
[2] J. Rabaey and M. Pedram (Editors), Low Power Design Methodologies, Kluwer Academic Publishers, 1996.









A Technical Paper
On
DRIVES
By

S. MOHAN & D.CHANDRA MOULI

SRI VENKATESHWARA UNIVERSITY COLLEGE OF ENGINEERING
TIRUPATI.
E-MAIL:mohan_yd2020@yahoo.com
moulienator @gmail.com

ABSTRACT:

The combination of a prime
mover, transmission equipment and
mechanical load is called a drive
.Electric drive is an industrial system
which performs the conversion of
electrical energy to mechanical energy
or vice versa for running various
processes such as: production plants,
transportation of people or goods,
home appliances, pumps etc.

An electric drive provides electrical
retarding and reduces service brake
wear. The system also has many
operational advantages including the
control of wheel slip and slide thus
reducing the tire wears. The system
delivers a smoother ride for the
operator. In electrical drives, most
useful drives are AC drives and DC
drives.

AC motors require virtually no maintenance and are preferred for applications where the motor is mounted in an area not easily reached for servicing or replacement. AC motors are better suited for high speed operation (over 2500 r.p.m.).

DC drives are normally less expensive for most horsepower ratings. DC motors have a long tradition of use as adjustable speed machines, and a wide range of options has evolved for this purpose. DC regenerative drives are available for applications requiring continuous regeneration for overhauling loads. DC motors are capable of providing starting and accelerating torque in excess of 400% of the rated value.





1.INTRODUCTION:
The combination of a prime mover,
transmission equipment and mechanical
load is called a drive .An electrical drive
can be defined as a drive, using an electric
motor as a prime mover, and ultimately
converting electrical energy to mechanical
energy. Conversely electric drive may also
define as the combination of electric motor
with its controlling devices, power
transmission equipment and mechanical
load. The success of an electric drive for
any purpose depends on the correct choice
of the driving motor for the particular
conditions under which it has to operate.
Among several prime
movers such as IC engines
(diesel/petrol engines), turbines, steam
engines or electric motors, the electric
motors are predominantly used in
industrial drives due to their inherent
advantages. A drive may require transmission equipment such as a gearing, belt or chain system to match the speeds of the prime mover and the load. The transmission may also be required sometimes to convert rotary motion to linear motion and vice versa.
2.DC DRIVES - PRINCIPLES OF
OPERATION:
DC drives, because of their simplicity,
ease of application, reliability and
favorable cost have long been a
backbone of industrial applications. A typical adjustable speed drive uses a silicon controlled rectifier (SCR) power conversion section, common for this type of unit. The SCR (also
termed a thyristor) converts the fixed
voltage alternating current (AC) of the
power source to an adjustable voltage,
controlled direct current (DC) output
which is applied to the armature of a
DC motor.
SCR's provide a controllable
power output by "phase angle
control", so called because the
firing angle (a point in time
where the SCR is triggered into
conduction) is synchronized with
the phase rotation of the AC
power source. If the device is
triggered early in half cycle,
maximum power is delivered to
the motor; late triggering in the
half cycle provides minimum
power, as illustrated by Figure 3.
The effect is similar to a very high speed switch, capable of being turned on and off at an infinite number of points within each half cycle. This occurs at
a rate of 60 times a second on a 60 Hz
line, to deliver a precise amount of
power to the motor. The efficiency of
this form of power control is
extremely high since a very small
amount of triggering energy can
enable the SCR (Silicon
Controlled Rectifier) to control a
great deal of power.
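As an illustration of phase angle control, the sketch below evaluates the standard textbook expression for the average DC output of a single-phase, fully controlled SCR bridge feeding a resistive load, Vdc = (Vm/pi)(1 + cos(alpha)); it is not drawn from any particular drive product, and other converter and load combinations obey different expressions.

# Illustrative only: average DC output of a single-phase, fully controlled
# SCR bridge with a resistive load, as a function of firing angle alpha.
import math

def average_dc_voltage(v_rms_line: float, alpha_deg: float) -> float:
    v_peak = math.sqrt(2) * v_rms_line          # Vm
    alpha = math.radians(alpha_deg)
    return (v_peak / math.pi) * (1 + math.cos(alpha))

# Early firing (small alpha) -> high output; late firing -> low output.
for alpha in (0, 30, 90, 150):
    print(alpha, round(average_dc_voltage(230.0, alpha), 1))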


3.DC DRIVE TYPES:
Non Regenerative DC Drives - Non regenerative DC drives are the most conventional type in common usage. In their most basic form they are able
to control motor speed and torque in
one direction only as shown by
Quadrant I in Figure 4. The addition of
an electromechanical (magnetic)
armature reversing contactor or
manual switch (units rated 2 HP or
less) permits reversing the controller
output polarity and therefore the
direction of rotation of the motor
armature as illustrated in Quadrant III.
In both cases torque and rotational
direction are the same.
Regenerative DC Drives -
Regenerative adjustable speed drives,
also known as four-quadrant drives,
are capable of controlling not only the
speed and direction of motor rotation,
but also the direction of motor torque.
This is illustrated by Figure 4.
The term regenerative describes the
ability of the drive under braking
conditions to convert the mechanical
energy of the motor and connected
load into electrical energy which is
returned (or regenerated) to the AC
power source.
When the drive is operating in
Quadrants I and III, both motor
rotation and torque are in the same
direction and it functions as a
conventional non regenerative unit.
The unique characteristics of a
regenerative drive are apparent only in
Quadrants II and IV. In these
quadrants, the motor torque opposes
the direction of motor rotation which
provides a controlled braking or
retarding force. A high performance
regenerative drive is able to switch
rapidly from motoring to braking
modes while simultaneously
controlling the direction of motor
rotation.
A regenerative DC drive is essentially
two coordinated DC drives integrated
within a common package. One drive
operates in Quadrants I and IV, the
other operates in Quadrants II and III.
Sophisticated electronic control
circuits provide interlocking between
the two opposing drive sections for
reliable control of the direction of
motor torque and/or direction of
rotation.
Converter Types - The power
conversion or rectified power section
of a DC drive is commonly called the
converter.


4.DC MOTOR CONTROL
CHARACTERISTICS:
A shunt-wound motor is a direct-
current motor in which the field
windings and the armature may be
connected in parallel across a constant-
voltage supply. In adjustable speed
applications, the field is connected
across a constant-voltage supply and
the armature is connected across an
independent adjustable-voltage supply.
Permanent magnet motors have similar
control characteristics but differ
primarily by their integral permanent
magnet field excitation.
The speed (N) of a DC motor is
proportional to its armature voltage;
the torque (T) is proportional to
armature current, and the two
quantities are independent, as
illustrated in Figure 5.
5.CONSTANT TORQUE
APPLICATIONS:
1. Armature voltage controlled DC
drives are constant torque drives.
2. They are capable of providing rated
torque at any speed between zero and
the base (rated) speed of the motor as
shown by Figure 6.
3. Horsepower varies in direct
proportion to speed, and 100% rated
horsepower is developed only at 100%
rated motor speed with rated torque.
6.CONSTANT HORSEPOWER
APPLICATIONS:
Armature Controlled DC Drives -
Certain applications require constant
horsepower over a specified speed
range. The screened area, under the
horsepower curve in Figure 6,
illustrates the limits of constant
horsepower operation for armature
controlled DC drives. As an example,
the motor could provide constant
horsepower between 50% speed and
100% speed, or a 2:1 range. However,
the 50% speed point coincides with the
50% horsepower point. Any constant
horsepower application may be easily
calculated by multiplying the desired
horsepower by the ratio of the speed
range over which horsepower must
remain constant. If 5 HP is required
over a 2:1 range, an armature only
controlled drive rated for 10 (5 x 2)
horsepower would be required.
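The sizing rule just described can be written down directly; the short sketch below (function name is ours) simply multiplies the desired horsepower by the constant-horsepower speed ratio, reproducing the 5 HP over a 2:1 range example.

# Sketch of the armature-controlled constant-horsepower sizing rule above:
# required drive rating = desired HP x (top speed / bottom speed) of the
# range over which horsepower must remain constant.
def required_drive_hp(desired_hp: float, top_speed: float, bottom_speed: float) -> float:
    return desired_hp * (top_speed / bottom_speed)

print(required_drive_hp(5.0, 100.0, 50.0))   # 5 HP over a 2:1 range -> 10 HP drive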
Field Controlled DC Drives - Another
characteristic of a shunt wound DC
motor is that a reduction in field
voltage to less than the design rating
will result in an increase in speed for a
given armature voltage. It is important
to note, however, that this results in a
higher armature current for a given
motor load. A simple method of
accomplishing this is by inserting a
resistor in series with the field voltage
source. This may be useful for
trimming to an ideal motor speed for
the application. An optional, more
sophisticated method uses a variable
voltage field source as shown by
Figure 6. This provides coordinated
automatic armature and field voltage
control for extended speed range and
constant HP applications. The motor is armature voltage controlled for constant torque-variable HP operation to base speed, where it is transferred to field control for constant HP-variable torque operation to motor maximum speed.

7.AC DRIVES - PRINCIPLES OF OPERATION:
Adjustable frequency AC motor drive controllers, frequently termed inverters, are typically more complex than DC controllers since they must perform two power section functions: conversion of the AC line power source to DC and, finally, an inverter change from the DC to a coordinated adjustable frequency and voltage output to the AC motor. The appeal of the adjustable frequency drive is based upon the simplicity and reliability of the AC drive motor, which has no brushes, commutator or other parts that require routine maintenance, which more than compensates for the complexity of the AC controller. The robust construction and low cost of the AC motor makes it very desirable for a wide range of uses. Also, the ability to make an existing standard constant speed AC motor an adjustable speed device simply by the addition of an adjustable frequency controller creates a very strong incentive for this type of drive.

8.AC CONTROLLER TYPES:
A number of different types of AC motor controllers are currently in common use as general purpose drives: Pulse Width Modulated (PWM), Current Source Input (CSI), and the Load Commutated Inverter (LCI). Each type offers specific benefits and characteristics, but the PWM type has been selected by Fincor Electronics as offering the best combination of simplicity, performance and economy for general purpose applications.
PWM Controllers - The PWM controller converts the AC power source to a fixed DC voltage by a full-wave rectifier. The resultant DC voltage is smoothed by a filter network and applied to a pulse width modulated inverter using high power transistors. The speed reference command is directed to the microprocessor, which simultaneously optimizes the carrier (chopping) frequency and inverter output frequency to maintain a proper volts/Hz ratio and high efficiency throughout the normal speed range. See Block Diagram, Figure 7.
The voltage applied to the motor is a pulsed approximation of a true sinusoidal waveform. This is commonly called a PWM waveform because both the carrier frequency and pulse width are changed (modulated) to change the effective voltage amplitude and frequency. The current waveform very closely follows the shape of a sine wave and therefore provides improved low speed motor performance, efficiency, and minimal motor heating.
9.AC MOTOR CONTROL
CHARACTERISTICS
The synchronous speed of an AC
induction motor is directly
proportional to the applied frequency.

As the applied frequency is changed,
the motor will run faster or slower as
shown by Figure 10. The actual full-
load motor slip (as a percent of the
motor synchronous speed) varies in
inverse proportion to the frequency, where a 3% slip motor at 60 Hz would have a 6% slip at 30 Hz or a 1 1/2 % slip at 120 Hz. Motor speed is limited only
by the maximum inverter output
frequency, load torque requirements,
and the mechanical integrity of the
motor.
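The speed relationships described above can be checked with a short calculation. The sketch below combines the standard synchronous speed relation (120 x frequency / poles) with the inverse slip scaling stated in the text; the motor data are placeholders.

# Illustrative calculation of the speed relationships described above.
# Synchronous speed = 120 * f / poles (standard relation); per the text,
# the full-load slip percentage varies in inverse proportion to frequency.
def shaft_speed_rpm(freq_hz: float, poles: int,
                    rated_slip_pct: float = 3.0, rated_freq_hz: float = 60.0) -> float:
    sync_rpm = 120.0 * freq_hz / poles
    slip_pct = rated_slip_pct * rated_freq_hz / freq_hz   # 3% at 60 Hz -> 6% at 30 Hz
    return sync_rpm * (1.0 - slip_pct / 100.0)

for f in (30.0, 60.0, 120.0):
    print(f, round(shaft_speed_rpm(f, poles=4), 1))   # 846.0, 1746.0, 3546.0 rpm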

10.MOTOR SELECTION
Constant Torque Applications - About
90% of all general industrial machines,
other than fans and pumps, are
constant torque systems where the
machine's torque requirement is
independent of its speed.
1. Standard three-phase AC
motors, designed for fixed
speed operation at standard line
frequency, may be easily
adapted for use with the AC
controller by considering the
following:
a. A slight increase in
motor losses occurs
with inverter power.
b. The motor thermal
capacity must typically
be derated as a function
of the minimum,
continuous operating
speed in accord with
Figure 11, due to the
reduced ventilation
provided by the integral
motor fan. Where the
application requires
100% rated torque at
speeds below 50% of
synchronous speed, a
separately powered ventilation blower, a non-ventilated motor with greater reserve thermal capacity, or a motor with higher rated capacity should be used. When a separately
powered ventilation
blower is used, a
thermostat should be
built into the motor to
prevent damage which
may result from a
failure in the ventilation
system.
2. Any three-phase synchronous
or induction AC motor
designed expressly for
adjustable speed service by
inverter control may normally
be used over its design speed
range with the AC controller.
Variable Torque Applications - The
application of standard AC motors to
adjustable speed variable torque
applications such as centrifugal fans or
pumps is ideal from a motor cooling
standpoint. The torque characteristics
of a variable torque (cubed exponential
horsepower) load are such that the load
falls off rapidly as the motor speed is reduced. The variable torque load eliminates the necessity to derate the
motor due to excessive heat resulting
from diminished motor cooling at
reduced speeds. Figure 12 illustrates
the relationship between speed and
torque in variable torque applications.
Potential Power Savings - Most fan
and pump applications require the
system to run for sustained periods at
reduced outputs by either reducing the
speed of the motor or by mechanically
altering the flow. Figure 13 illustrates
typical energy savings, in percent of
rated power, which can be realized
when using an adjustable frequency
controller to reduce motor speed and
thereby system flow as opposed to a
constant speed motor
which has its system flow varied by an
outlet damper.
Constant Torque Operation - The ability of the AC controller to maintain a constant volts/Hz relationship is ideal from a motor standpoint. This
permits operation of the motor at rated
torque from near standstill to rated
speed.
Figure 14 represents the relationship
between torque, horsepower and motor
speed with a maintained volts/Hz ratio
using a 60 Hz controller for
illustration. A standard 4-pole 460V
motor can be controlled by this method
to its synchronous speed of 1800 RPM.
If the same motor were wound for 50%
of the input voltage (230V), it could be
controlled with constant torque to
double the normal rated speed and
horsepower. The motor would not be
"overvoltaged" because the volts/Hz
ratio could be maintained e.g.: a motor
wound for 230 VAC can supply
constant torque to twice the AC line
frequency when used on a 460V power
source without overvoltaging the
motor because the volts/Hz ratio of
230V/60 Hz is the same as 460V/120
Hz. The horsepower would also double
since the same torque would be
developed at twice the normal rated
speed.
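A short calculation makes the volts/Hz argument explicit; the sketch below simply compares the two ratios from the example in the text.

# Sketch of the volts/Hz reasoning above: a motor wound for 230 V at 60 Hz
# keeps its rated flux on a 460 V source if the frequency is doubled, because
# the volts/Hz ratio is unchanged.
def volts_per_hz(voltage: float, freq_hz: float) -> float:
    return voltage / freq_hz

print(volts_per_hz(230.0, 60.0))    # about 3.83 V/Hz
print(volts_per_hz(460.0, 120.0))   # same ratio -> motor is not overvoltaged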
Caution must be observed when
applying standard motors for
continuous low speed, rated torque
operation. The motor's self-cooling
capability is dependent upon self-
ventilation schemes with efficiency
that is considerably reduced at lower
operating speeds.
Constant Horsepower Operation -
AC motor controllers are also
adaptable to constant horsepower
operation as shown by Figure 15. With
this mode of operation, the volts/Hz
ratio is maintained to a specific
frequency, normally 50 or 60 Hz. At
this point, the voltage is "clamped" at a
constant level while the frequency is
adjusted further to achieve the desired
maximum speed. Since the controller
maximum output voltage is limited to
the voltage of the AC power source,
the volts/Hz ratio must decrease
beyond this point as the frequency
increases. The motor becomes "voltage
starved" above the clamping point and
torque decreases as speed increases,
resulting in constant horsepower
output.
As shown in Figure 15 the drive
provides conventional constant
torque/variable horsepower operation
up to 60 Hz which is equivalent to the
1800 RPM base speed of the 60 Hz
motor. Between 1800 and 3600 RPM,
the drive provides constant
horsepower/variable torque operation.
If constant horsepower is required
between 900 and 3600 RPM (a 4:1
speed range) - using the same 1800
RPM base speed motor, the drive rated
horsepower must be increased since
900 RPM intersects the curve at a
point which is 50% of rated
horsepower.
Constant HP operation (above
synchronous speed) is limited to
induction motors only. In addition, at
some point, typically around three
times base speed for a four-pole
induction motor, the breakdown torque
of the motor prevents further constant
horsepower operation. Synchronous
reluctance motor characteristics
prevent operation in this mode.








11.APPLICATIONS OF
ELECTRICAL DRIVES:
1. The electric drive system is
especially designed to suit the
propeller or rotor research.
2. The electric drive system is capable
of powering three phase AC motors as
large as 75 horsepower.
3. Electric drives are also used in boats
and are capable of driving the boats at
hull speeds comparable with speeds
produced by internal combustion
engines.
4.They are used in vehicles to give
better payload, faster acceleration,
improved mobility, and lower fuel
consumption.













BIBLIOGRAPHY:
1. Electrical Drives by Ion Boldea and Syed A. Nasar
2. Control of Electrical Drives by Werner Leonhard
3. Fundamentals of Electrical Drives by Gopal K. Dubey
DSP CONTROLLERS

















O.FELIX
&
C.PRADHYUMN KUMAR
(III/II B.TECH)
Email:felix4boyz@gmail.com
Phone:9393318807







C.R.ENGINEERING COLLEGE
TIRUPATHI.






CONTENTS:
1. ABSTRACT

2. INTRODUCTION

3. FIELD ORIENTED DRIVES

4. DIRECT TORQUE CONTROL IN
INDUCTION MOTOR DRIVE

5. DSP MOTOR CONTROL IN
i) REFRIGERATION APPLICATIONS
ii) WASHING MACHINE APPLICATIONS

6. CONCLUSION


ABSTRACT:
The era of analog control systems for electric motor drives is
definitely over, and a wide array of digital devices is available.
Traditionally, general-purpose microcontrollers have been the
devices of choice when analog controllers were being replaced with digital ones. To
this day, microcontrollers prevail in the practical drives although, in recent years,
the digital signal processors (DSPs) have been enjoying increased interest and
implementation in control of electric motors. The DSPs still suffer from certain
misconceptions among design engineers who tend to associate them with classic data
processing tasks such as image analysis or speech recognition. However, the newly
emerging class of DSP controllers (DSPCs) is likely to dispel such notions and to
dominate the motion control market in the years to come.
This paper describes architecture of TEXAS instruments DSP
controllers. This paper also gives a detailed explanation of DSP motor control in
domestic refrigeration applications and washing machine applications.






INTRODUCTION:

Embedded DSP motor control is
revolutionizing the appliance industry,
not only by delivering the highest levels
of motor performance, but also by the
method employed to deliver that high
performance. The computing power of a
DSP allows users to exploit software
modeling to implement closed-loop
motor control creating a shift from
hardware to software that yields
significant advantages for motor and
appliance manufacturers, ultimately
changing their relationship.
The latest generation of embedded DSP
motor control ICs enables advanced
control algorithms that, in turn, enable
appliance manufacturers to reduce
development times, costs and risks
while, at the same time, enhance the
efficiency, functionality, noise-reduction
and performance of their products.
While many processor options exist for
electronic motor control, the DSP has
long been acknowledged to be the best
suited for motor control because of its
strength in processing real world signals.
Today, many appliance OEMs and
motor manufacturers are rapidly
adopting cost-effective DSPs as the
motor control technology of choice.
Simple, Quick and Cost Effective
Appliance OEMs have discovered that today's generation of motor control DSPs are as easy to use and as inexpensive as
conventional micro controllers (MCUs),
and they have been able to take full
advantage of the DSP power to
implement more advanced control
algorithms. This significantly reduces
system costs by modeling hardware
functions in software. By eliminating
hardware and focusing on software,
appliance OEMs can modify and
enhance their motor controls faster than
ever and also create a platform of motor
control solutions across an entire
portfolio of white goods including
refrigerators, washing machines and air
conditioners.

To understand the significance of DSP controllers, major trends in today's motor control must be recalled. The fundamental trend is the
ongoing transition from adjustable-speed
dc drives to ac drives. The latter are
predominantly based on induction
motors, but various types of synchronous
and reluctance motors are also making
inroads in industry. Generally, control
algorithms for ac motors are more
computationally intensive than those for
dc machines.

To illustrate the signal
processing requirements in modern ac
drive systems, three common types of
induction motor drives are used as
examples.
Block diagrams of
field-oriented (FO) drives, in which
the induction motor is made to
emulate a dc machine, are shown in
Figs. 1 and 2















In the direct
field orientation scheme in Fig. 1, the
crucial field angle, that is, the angle of
flux vector (typically, rotor flux
vector), is determined directly from
motor signals, usually current,
voltage, and speed. In the case of
indirect field orientation, illustrated in
Fig. 2, the field angle is computed by
integrating the reference synchronous
velocity, which is a sum of the actual
rotor velocity and a reference slip
velocity obtained from the flux and
torque commands [1]. The mathematical
operations performed within each
sampling period of the FO system that
generates reference current signals for
the inverter include:
- numerous multiplications and, in the
indirect FO scheme, division of
variables,
- integration,
- coordinate transformation (requires the square root computation; a generic sketch follows this list),
- sine and cosine calculations.
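The coordinate transformation listed above is commonly realized as a Clarke transform (three-phase to stationary two-phase frame) followed by a Park transform (rotation by the field angle). The sketch below is the generic textbook form of these transforms, not code taken from any particular DSP controller.

# Generic Clarke and Park transforms as used in field-oriented control.
# Textbook sketch only; not code from any specific DSP controller.
import math

def clarke(i_a: float, i_b: float, i_c: float):
    """Three-phase currents -> stationary alpha/beta frame (amplitude-invariant)."""
    i_alpha = (2.0 / 3.0) * (i_a - 0.5 * i_b - 0.5 * i_c)
    i_beta = (1.0 / math.sqrt(3.0)) * (i_b - i_c)
    return i_alpha, i_beta

def park(i_alpha: float, i_beta: float, theta: float):
    """Rotate the stationary-frame vector by the field angle theta -> d/q frame."""
    i_d = i_alpha * math.cos(theta) + i_beta * math.sin(theta)
    i_q = -i_alpha * math.sin(theta) + i_beta * math.cos(theta)
    return i_d, i_q

# Balanced three-phase currents with the field angle aligned to phase a
theta = 0.0
ia, ib, ic = 10.0, -5.0, -5.0
print(park(*clarke(ia, ib, ic), theta))   # -> (10.0, 0.0): pure d-axis current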
The current control in the inverter is
based either on the "bang-bang"
principle or,
increasingly, on more complicated
schemes, usually involving linear
controllers that produce reference
voltage commands. To realize a given
voltage vector in a pulse-width
modulated (PWM) inverter, precise
timing of switching signals for
individual inverter switches is required.
The Space-Vector PWM method, most
popular nowadays, is based on division
of each sampling period into three
unequal parts, whose durations are
computed on-line using simple formulas
involving sine functions.
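One common textbook way of computing those three durations, for a reference vector of normalized magnitude m at angle theta inside a 60-degree sector, is sketched below; it is not the specific routine used by the TMS320 devices.

# Generic Space-Vector PWM dwell-time calculation for one sampling period Ts:
# the period is split between the two active vectors bounding the sector
# (t1, t2) and the zero vectors (t0). Textbook form only.
import math

def svpwm_dwell_times(ts: float, m: float, theta: float):
    """ts: sampling period, m: modulation index (0..1, 1 = inscribed circle),
    theta: angle of the reference vector within the sector (0..60 deg, in rad)."""
    t1 = ts * m * math.sin(math.pi / 3.0 - theta)   # first active vector
    t2 = ts * m * math.sin(theta)                   # second active vector
    t0 = ts - t1 - t2                               # zero vectors fill the rest
    return t1, t2, t0

print(svpwm_dwell_times(ts=100e-6, m=0.9, theta=math.radians(20)))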
Recently, a simple and robust class of
the so-called direct torque control
(DTC) methods has been gaining
popularity, particularly in Europe [2].
A simplified block diagram of a DTC
drive is shown in Fig. 3.








Comparing the control system with those for FO drives in Figs. 1 and 2, the relative simplicity of the DTC scheme can easily be appreciated. The DTC
premises and principles can be
summed up as follows:
- Stator flux is an integral of the stator
EMF; thus the flux magnitude strongly
depends on
the stator voltage,
- Developed torque is proportional to the
sine of angle between the stator and
rotor flux
vectors,
- Reaction of rotor flux vector to
changes in stator voltage is slower and
less pronounced
than that of the stator flux vector.
Consequently, magnitudes of both the
stator flux and developed torque can be
directly
controlled by proper selection of stator
voltage vectors, that is, selection of
consecutive
inverter states.
Operating
algorithms of DTC systems involve
calculations of the stator flux vector and
developed torque from the motor
variables (in this regard, the DTC
scheme is similar to that of the direct FO
drive), as well as a more or less
sophisticated selection and timing of
inverter states. General simplicity of the
DTC approach notwithstanding,
advanced practical DTC schemes are
often quite computation-intensive. For
instance, at low speeds of the drive, the
value of stator resistance plays an
important role in control algorithms of
high-performance DTC drives. This
requires continuous estimation of that
resistance, which is not a simple task [3].
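The stator flux and torque used by such a scheme are commonly estimated from measured voltages and currents as in the generic discrete-time sketch below; the symbols, class name and sample values are ours, not taken from any specific DTC product.

# Generic discrete-time stator flux and torque estimator of the kind used in
# DTC drives: the stator flux is the integral of (v - Rs*i), and the torque
# follows from the cross product of flux and current. Symbols are ours.
import math

class FluxTorqueEstimator:
    def __init__(self, r_s: float, pole_pairs: int, dt: float):
        self.r_s = r_s                  # stator resistance (critical at low speed)
        self.p = pole_pairs
        self.dt = dt
        self.psi_alpha = 0.0
        self.psi_beta = 0.0

    def step(self, v_alpha, v_beta, i_alpha, i_beta):
        self.psi_alpha += (v_alpha - self.r_s * i_alpha) * self.dt
        self.psi_beta += (v_beta - self.r_s * i_beta) * self.dt
        flux_mag = math.hypot(self.psi_alpha, self.psi_beta)
        torque = 1.5 * self.p * (self.psi_alpha * i_beta - self.psi_beta * i_alpha)
        return flux_mag, torque

est = FluxTorqueEstimator(r_s=0.5, pole_pairs=2, dt=100e-6)
print(est.step(v_alpha=310.0, v_beta=0.0, i_alpha=4.0, i_beta=-1.0))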
In addition to the control algorithms,
digital control systems of all types of
practical drives must handle
"housekeeping" routines associated with
turning-on and -off of the system,
personnel interventions, and fault
detection. Those are mostly conditional
(logic) operations.
The second
trend, easily discerned in today's
research literature on adjustable speed
drives, consists in the increasing use of
machine intelligence. The machine
intelligence is employed in various ways
to improve robustness and adaptivity of
control systems and enhance the
dynamics of drives, which are often
characterized by parameter uncertainty
and nonlinear mathematical models.
Research reports on successful
application of artificial neural networks
and fuzzy logic can be found in the
technical literature [4]. Although structurally simple, neural networks and fuzzy or neuro-fuzzy controllers are characterized by high numbers of variables, and their practical realization requires high computing power.
In all three
diagrams in Figs. 1 through 3, speed (or
position) sensors are shown. Elimination
of those sensors, which spoil the
inherent ruggedness of induction
machines, is the third trend in the
modern drive technology. In sensorless
drives, open-loop estimators and closed-
loop observers of rotor speed must be
employed. Those functions place an
additional burden on the digital control
systems of drives.
Texas Instruments'
TMS320x24y series of inexpensive DSP
controllers represents a timely response
of microelectronic industry to the
challenge posed by the increasing signal
processing requirements in electric
motor drives. The series has been
specifically designed with the power
conversion applications in mind. The
"x," either a "C" or an "F," denotes the
program memory type (ROM or flash
EEPROM), while the "y," an integer (0,
1, 2, or 3), indicates individual types of
the DSPCs. Unit prices for the cost
optimized TMS320F241, TMS320C241,
TMS320C242, and TMS320F243 are
less than $5.
The architecture of the Texas
Instruments' DSPCs is shown in Fig. 4.



The DSPCs incorporate a 20 MIPS
fixed-point processor, 544-word RAM,
16-kword ROM or flash EEPROM, and
a specialized "Event Manager" with
three multimode timers, twelve PWM
channels, nine comparators, and
capture/encoder circuitry. Also included
are dual 10-bit ADCs with a total of
sixteen multiplexed channels and
simultaneous conversion capability, a
watchdog timer, and serial peripheral
and serial communications interfaces. In
addition, TMS320x241 and TMS320x243 DSPCs are equipped with a CAN (Controller Area Network protocol)
module. The Event Manager allows
direct control of semiconductor switches
of an inverter employing three types of
pulse width modulation, including the
popular Space Vector PWM method. To
protect the inverter from "shot throughs"
(i.e., simultaneous conduction of two
switches in the same phase), the dead-
band logic is provided too. The hardware
multiplier in the core processor
facilitates computations, and the high
speed of the processor allows extensive
use of look-up tables for such often
needed functions as the sine and cosine.
A single-chip DSPC, rich in useful
peripherals, facilitates implementation of
multiple functions required for the
control of the whole adjustable speed
drive system [5].
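
As an aside on the look-up-table technique mentioned above, the small C sketch below shows the general idea (table size, double-precision storage and function names are arbitrary assumptions; a fixed-point DSPC would normally store the table in a fractional integer format instead).

#include <math.h>

#define PI 3.14159265358979
#define SINE_TABLE_SIZE 256                 /* table resolution: arbitrary choice */

static double sine_table[SINE_TABLE_SIZE];

/* Fill the table once at start-up with one full period of the sine function. */
void sine_table_init(void)
{
    for (int i = 0; i < SINE_TABLE_SIZE; i++)
        sine_table[i] = sin(2.0 * PI * i / SINE_TABLE_SIZE);
}

/* Fast sine: map the angle (in radians) onto a table index instead of
   calling the math library at run time. */
double fast_sin(double angle)
{
    double turns = angle / (2.0 * PI);
    turns -= floor(turns);                  /* wrap into [0, 1) */
    return sine_table[(int)(turns * SINE_TABLE_SIZE)];
}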


Electric motors are the major components in electric appliances such as refrigerators and washing machines. The
energy consumed by the electric motor is
a very significant portion of the total
energy consumed by the machine.
Controlling the speed of the appliance
motor can both directly and indirectly
reduce the total energy consumption of
the appliance. In many major appliances
advanced three phase variable speed
drive systems provide the performance
improvements needed to meet new
energy consumption targets. DSP based
motor control systems offer the control
bandwidth required to make possible the
development of advanced motor drive
systems for domestic appliance
applications.

DSP Motor Control In
Domestic Refrigeration
Applications

System Requirements: Energy efficient
compressors require the motor speed to
be controlled in the range from 1200
rpm to 4000 rpm. In fractional
horsepower applications, the motor of
choice with the highest efficiency is an
electronically controlled three phase
permanent magnet motor.
Motor Control Strategy:
In order to run
the permanent magnet motor efficiently,
it is important to synchronize the
frequency of the applied voltage to the
position of the permanent magnet rotor.
A very effective control scheme is to run
the motor in a six-step commutation
mode with only two windings active at
any one time. In this case, the back emf
on the unconnected winding is a direct
indication of the rotor position. The rotor
position is estimated by matching a set
of back emf waveform samples to the
correct segment of the stored waveform
profile. This technique averages the data
from a large number of samples giving a
high degree of noise immunity. The
control system, outlined below, has an
inner position control loop which adjusts
the angles of the applied stator field to
keep the rotor in synchronization.
Integrator input tracks the motor velocity
when the rotor position error is forced to
zero. The outer velocity loop adjusts the
applied stator voltage magnitude to
maintain the required velocity. The
controller is capable of accelerating the
compressor to its target speed within a
few seconds and can regulate speed to
within 1 percent of its target.
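
The waveform-matching idea can be illustrated with the simplified C sketch below. This is not the appliance firmware; it is an assumed example in which a window of recent back emf samples is compared against each segment of a stored one-cycle profile and the best-matching segment index is taken as the rotor position estimate.

/* Illustrative rotor-position estimate by matching back-emf samples
   against a stored waveform profile (least squared error search). */
#define PROFILE_LEN 360   /* stored profile: one electrical cycle in 1-degree steps (assumed) */
#define WINDOW_LEN   16   /* number of recent back-emf samples compared (assumed)            */

int estimate_position(const double profile[PROFILE_LEN],
                      const double samples[WINDOW_LEN])
{
    int best_idx = 0;
    double best_err = 1e30;

    for (int start = 0; start < PROFILE_LEN; start++) {
        double err = 0.0;
        for (int k = 0; k < WINDOW_LEN; k++) {
            double d = samples[k] - profile[(start + k) % PROFILE_LEN];
            err += d * d;                    /* accumulate squared error           */
        }
        if (err < best_err) {                /* remember the best-matching segment */
            best_err = err;
            best_idx = start;
        }
    }
    return best_idx;                         /* estimated rotor angle in degrees   */
}

Because every estimate combines a whole window of samples, individual noisy readings have little effect, which is the noise immunity the text refers to.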
Motor Control Hardware:
The essential
hardware in a variable speed AC drive
system consists of an input rectifier, a
three-phase power inverter and the
motor control circuits. The motor control
processor calculates the required motor
winding voltage magnitude and
frequency to operate the motor at the
desired speed. A pulse width modulation
circuit controls the on and off duty cycle
of the power inverter switches to vary
the magnitude of the motor voltages.
An analog to
digital converter allows the processor to
sample motor feedback signals such as
inverter bus voltage and current.












FIG1: SENSORLESS CONTROL
SYSTEM
In this sensorless control application, the motor winding back emf signals are sampled in order to estimate the rotor position and drive the motor at its most efficient point of operation. The
voltage signal conditioning consists of
resistive attenuators and passive filters.
A precision amplifier is used to capture
the motor winding current by sensing the
DC bus current. The controller uses this
information to limit the motor starting
current and to shut down the compressor
in overload conditions.
The DSP based
motor IC is the heart of the system. On
power up, the program performs
initialization and diagnostic functions
before starting the motor. The motor is
started in open loop until the back emf
reaches a minimum level, before
switching to running mode. Then, every
PWM cycle, the DSP uses the A/D
converter to sample the motor back emf,
the motor current and the bus voltage.
The internal multiplexer selects the
appropriate back emf signals to be
converted. The control law calculates a new rotor position estimate and the PWM duty cycle required to apply the demanded voltage to the
motor. At particular values of estimated
rotor position angle, the algorithm
selects a new set of active motor
windings by writing to the PWM
segment selection register. The DSP
algorithm also performs diagnostic
functions, monitoring dc bus voltage, the
motor current and speed. In the case of
overload conditions, the drive will be
shut down and attempt to restart after a
short time delay.
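
Summarising the per-cycle sequence described above, a hedged skeleton in C might look as follows; every function name here is an assumption standing in for the real firmware routines, and the sketch only mirrors the order of operations given in the text.

/* Assumed skeleton of the work done once per PWM cycle (illustrative only). */
void pwm_cycle_handler(void)
{
    double bemf  = adc_read_back_emf();        /* channel chosen by the internal multiplexer */
    double i_mot = adc_read_motor_current();
    double v_bus = adc_read_bus_voltage();

    double angle = update_position_estimate(bemf);      /* control law: new rotor angle        */
    double duty  = compute_pwm_duty(angle, v_bus);       /* duty cycle for the required voltage */
    pwm_set_duty(duty);

    if (commutation_point_reached(angle))                /* at particular angle values ...      */
        pwm_select_segment(next_winding_pair(angle));    /* ... switch to a new winding pair    */

    /* Diagnostics: monitor bus voltage, current and speed; restart after overload. */
    if (overload_detected(i_mot, v_bus))
        shutdown_and_schedule_restart();
}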





DSP Motor Control In
Washing Machine
Applications
System Requirements:
The drum in the
horizontal axis washing machine is
driven at speeds between 35 rpm and
1200 rpm. The motor, however, must
run at speeds in the range of 500 rpm to
15,000 rpm. Three phase AC induction
motors provide advantages over
universal motors through the elimination
of brushes and a wider speed range.
To properly control a three-phase AC
induction motor over a wide speed range
both motor current and motor speed
information is required. The speed ripple
and load torque of the washing machine
motor also provides valuable
information about the washing load. The
washing drive described here is for a
500W AC induction motor with a speed
range from 500 to 15,000 rpm.
AC Induction Motor Control
Strategy:
The control of an
AC induction motor (ACIM) is
potentially much simpler than that
required for a permanent magnet ac
motor. The ACIM can be driven in open
loop by a three-phase inverter giving
adequate speed control performance for
many simple pump and fan applications.
However, when a wide speed range and
high dynamic performance is required,
then a field oriented control scheme is
necessary. In this case the flux and
torque currents are independently
controlled to give a performance similar
to that obtainable from a permanent
magnet synchronous motor. In the low
speed range of operation, the flux is kept
constant and torque is directly
proportional to the torque current. In the
high speed range, when the motor
voltages are limited by the DC bus
voltage, the flux is reduced to allow
operation at higher speeds

FIG 3:DIRECT STATOR FIELD
ORIENTED CONTROL SCHEME

A direct stator
field oriented control algorithm is
described in Figure 3. The key motor
variables are the flux and torque
producing components of the motor
currents. The choice of reference frame
is the key element that distinguishes the
various vector control approaches from
one another. In this scheme, a reference
frame synchronized to the rotating stator
flux is selected because of the availability of stator current and DC bus voltage information. A number of other
field oriented schemes require position
information, or stator flux measurements
and are not suitable for this application
where controlled operation close to zero
speed is not required. The Park and Clarke reference frame transformation
functions calculate the effect of the
stator currents and voltages in a
reference frame synchronized to the
rotating stator field. This transforms the
stator winding currents into two quasi
DC currents representing the torque
producing (Iqs) and flux producing (Ids)
components of the stator current. The
stator flux angle is an essential input for
the reference frame transformations. The
stator flux is calculated in the fixed
reference frame by integrating the stator
winding voltages. In this system, the
stator voltage demands to the inverter
are known, so the applied stator voltages
can be calculated from the voltage
demands and the DC bus voltage
measurement. The flux estimation block
uses stator current to compensate for the
winding resistance drop. The outputs of
this block are the stator flux magnitude
and the stator flux angle. There are four
control loops closed in this application.
There are two inner current loops which
calculate the direct and quadrature stator
voltages required to force the desired
torque and flux currents. The Park and Clarke functions transform these voltages
to three AC stator voltage demands in
the fixed reference frame. The outer
loops are the speed and flux control
loops. The flux demand is set to rated
flux for below base speed operation and
is reduced inversely with speed for
above base speed operation in the field
weakening mode. Finally, the torque
loop is the same as in any classical
motion control system.
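
The Clarke and Park transformations mentioned in this scheme follow standard textbook equations; the C sketch below is a generic illustration (amplitude-invariant form, assumed names), converting three measured stator currents into the flux-producing (Ids) and torque-producing (Iqs) components given the stator flux angle.

#include <math.h>

/* Clarke transform: three phase currents -> stationary alpha-beta components
   (amplitude-invariant form). */
void clarke(double ia, double ib, double ic, double *i_alpha, double *i_beta)
{
    *i_alpha = (2.0 / 3.0) * (ia - 0.5 * ib - 0.5 * ic);
    *i_beta  = (ib - ic) / sqrt(3.0);
}

/* Park transform: rotate the alpha-beta components into the frame synchronised
   with the rotating stator flux; theta is the stator flux angle. */
void park(double i_alpha, double i_beta, double theta, double *i_ds, double *i_qs)
{
    *i_ds =  i_alpha * cos(theta) + i_beta * sin(theta);   /* flux-producing component   */
    *i_qs = -i_alpha * sin(theta) + i_beta * cos(theta);   /* torque-producing component */
}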

AC Induction Motor Control
Hardware

The DSP based AC
induction motor system has similar
hardware to the permanent magnet drive
described previously. In this case, the
motor is rated at over 500 W and so
IGBTs are the power devices most
suited. The feedback signals include the
motor currents, the bus voltage and a
pulse train from a digital tachometer.
The motor
winding currents are derived from
sensing resistors in the power inverter
circuit.

AC Induction Motor Control
Software:

One challenge in the
development of the control software was
to run four simultaneous control loops
where the variables have a very wide
dynamic range. A solution, which very
much improved performance, was to use
floating point variables for all the PI
control loops. This extended the
processing time somewhat but was not
found to be a significant burden when
using a 25 MIPS DSP core.
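
A PI loop of the kind referred to here can be sketched generically as below; this is an assumed, simplified implementation with output clamping and basic anti-windup, not the actual washing machine code.

/* Generic floating-point PI controller with output and integral clamping. */
typedef struct {
    float kp, ki;             /* proportional and integral gains */
    float integral;           /* accumulated integral term       */
    float out_min, out_max;   /* output (and integral) limits    */
} pi_ctrl_t;

float pi_update(pi_ctrl_t *c, float reference, float feedback, float dt)
{
    float error = reference - feedback;

    c->integral += c->ki * error * dt;
    if (c->integral > c->out_max) c->integral = c->out_max;   /* simple anti-windup */
    if (c->integral < c->out_min) c->integral = c->out_min;

    float out = c->kp * error + c->integral;
    if (out > c->out_max) out = c->out_max;                   /* clamp the output   */
    if (out < c->out_min) out = c->out_min;
    return out;
}

Four instances of such a structure (the two inner current loops plus the speed and flux loops) would cover the control loops described above.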

The processor can easily handle the
multiple interrupt sources from the A/D
converter, the digital I/O block, the
communications ports and the timer. A number of useful device features, such as the auto-buffered serial port and the single context switch, made the task possible without significant
overhead in pushing or popping a stack.
Finally, the code development was
somewhat simplified by the availability
of motor control library functions in the
internal DSPROM.



CONCLUSION:
It is worth pointing out that
the remarkably low cost of
the DSPCs opens the way to
their successful application
in millions of uncontrolled
drive systems. Such
functions as fault
monitoring, overload
warning, torque estimation,
or vibration detection would
significantly enhance
operational capabilities of
those drives.


References:
[1] A. M. Trzynadlowski, "The Field Orientation Principle in Control of Induction Motors," Kluwer Academic Publishers, Boston, 1994.
[2] G. Buja, D. Casadei, and G. Serra, "DTC-based strategies for induction motor drives," Proc. ISIE'97, pp. 1506-1516.
[3] S. Mir, M. E. Elbuluk, and D. S. Zinger, "PI and fuzzy estimators for tuning the stator resistance in direct torque control of induction motors," IEEE Trans. Power Electron., vol. 13, no. 2, pp. 279-287, 1998.
[4] B. K. Bose, "Expert system, fuzzy logic, and neural network applications in power electronics and motion control," Proc. IEEE, vol. 82, pp. 1303-1323, Aug. 1994.




THANK YOU
S.R.K.R. ENGINEERING COLLEGE
CHINNA AMIRAM, BHIMAVARAM, W.G. DIST




















EFFICIENT ENERGY
MANAGEMENT.

S.V.R.Sai Kumar B.Venkata Krishna

EEE EEE

svrsaikumar@gmail.com bvenkatakrishna@gmail.com








ABSTRACT:
It has been recognized globally that advances in environmental energy
technologies are now playing a critical role in helping organizations face the complex and
rapidly changing dynamics of market scenario world over. These technologies are aimed
at reducing energy consumption, and decreasing harmful emissions to meet the
environmental standards, and in turn enhancing the marketability of products and
processes. Consequent to the energy crisis of the early seventies and the late eighties,
energy conservation assumed tremendous importance in the Indian industry. Improving
energy efficiency may be the most profitable thing that you can do in the short term. How
much you will actually benefit from this opportunity depends on how you approach it.
The paper objectively aims and provides an overview of technologies for energy
management.
Why do we have to save energy?
The answer is very simple: we need more and more energy for our daily activities, so the demand for energy has increased drastically. But generation capacity is limited, so the cost of energy has risen sharply. The energy cost as a percentage of the manufacturing cost has increased several fold in the last decade. For example, the cost of power in 2000 escalated three-fold vis-a-vis the cost in 1991. The profit margins in the energy intensive industries have not been able to bear the brunt of such a steep increase in energy costs.
The increasing trend in energy costs is depicted in the graph below:



Energy costs turned out to be a major operating expense due to an ever-increasing trend
in energy prices. Energy conservation techniques to reduce energy costs were seen as an
immediate and handy tool to enhance competitiveness. The advent of innovative energy
saving devices at affordable costs lent a cutting edge to adopt new techniques in energy
conservation. So in this paper we are going to see those techniques.

INTRODUCTION:
Energy management can be done in three main fields:
1. Power generation.
2. Power transmission.
3. Power utilisation.
We cannot decrease power utilisation because it is not in our hands, so we can manage power or energy in the first two fields. Power generation must be cheap, quick and reliable, and transmission losses have to be reduced. All these tasks lead to energy management techniques.
The application of best energy management techniques is one of the easiest ways
to reduce energy cost, improve profitability and contribute to a cleaner environment. It
has become imperative to devise energy auditing and accounting system as a measure of
energy conservation strategy to become competitive. However, there are many techniques for energy management; we examine them in detail here.

1.Three-Pronged Approach:
Capacity utilization, Fine-tuning and Technology upgradation are the new mantra
which have yielded astounding results in energy management, enabling companies to
build a competitive edge. Any effort towards energy conservation ultimately gets reflected in the specific energy consumption. A study of about 400 companies revealed
that excellent energy efficient companies have achieved lower specific energy
consumptions by religiously adopting the three-pronged approach. Now we see this three-
pronged approach in detail. The Three-Pronged approach is developed based on the data
collected through questionnaire from industries, diagnostic survey visits and through
interaction with the plant personnel. The approaches are shown in the schematic below:

a. Capacity Utilisation
High capacity utilisation is very essential for achieving energy efficiency.
This brings down the fixed energy loss component of the specific energy consumption.
A survey of excellent energy efficient companies shows that 80% of the companies cite capacity utilisation as one of the foremost reasons for a major drop in specific energy consumption. At least 90% capacity utilisation is to be ensured for achieving low specific energy consumption. Achieving high capacity utilisation is also under the control of plant personnel. Hence the first and foremost step for an aspiring energy efficient unit should be to increase capacity utilisation and reduce the specific energy consumption.

b. Fine Tuning of Equipment
This is another opportunity for saving energy. On achieving high capacity
utilisation, the fine-tuning of equipment should be taken up by the energy efficient plants.
Various energy audit studies reveal that 'Fine-tuning', if efficiently done can yield 3 to
10% of energy saving. The greatest incentive for resorting to fine-tuning is that it requires
only marginal investment.

c. Technology Upgradation
Higher capacity utilisation and fine-tuning of equipment have significant energy
saving potential. But quantum jumps in energy saving can be achieved only by
application of new technologies/upgradation of existing technology. Innovation,
improving of existing technology and application of newer technology should be made an
on-going activity in all the sectors of industry. If a company is targeting for 20% savings
and above, the three-pronged approach is to be adopted. In fact, one of the characteristics of excellent energy efficient companies is to have three teams working separately on
Capacity utilization, Fine-tuning of equipment and Technology upgradation with ultimate
focus on energy conservation.

2.Energy Audit Methodology:
Energy audit methodology is a systematic approach to reduce energy
consumption. It covers all forms of energy and energy costs of the system, such as
electrical energy, thermal energy etc. The ultimate aim of the Energy Audit is to reduce
the energy cost per unit of production. A systematic approach to Energy Audit is to
identify all Energy users and ask simple questions related to energy like "What? When?
Why? Where?" Answers to these questions have resulted in identification of innovative
Energy projects in many successful companies.

The next step is to identify the losses and segregate them as "Avoidable" and
"Unavoidable " losses. For a motor with a rated efficiency of 90%, the 10% loss that
comes by design is termed as the "Unavoidable loss". From the viewpoint of operating
personnel, it is not possible to minimize on the 10% loss, unless there are design
improvements at the manufacturing stage. On the other hand, if the same motor is
operating at 80% efficiency, the loss between 90% and 80% i.e. 10% is termed as the
"Avoidable loss". It is possible for the operating personnel to identify the reasons for
inefficiency and take corrective action. The entire focus of the audit will be to reduce
the Avoidable losses. The next logical step is to quantify the losses. Quantification helps
to appreciate the magnitude of the losses and is a powerful starter for the drive
towards implementation.
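
As a small worked illustration of this split (the 100 kW input power is an assumed figure; the 90% and 80% efficiencies are those quoted above), the C snippet below quantifies the unavoidable and avoidable losses.

#include <stdio.h>

int main(void)
{
    double input_kw   = 100.0;   /* assumed electrical input power */
    double rated_eff  = 0.90;    /* design (rated) efficiency      */
    double actual_eff = 0.80;    /* measured operating efficiency  */

    double unavoidable = input_kw * (1.0 - rated_eff);          /* loss by design      */
    double avoidable   = input_kw * (rated_eff - actual_eff);   /* target of the audit */

    printf("Unavoidable loss: %.1f kW\n", unavoidable);         /* 10.0 kW */
    printf("Avoidable loss:   %.1f kW\n", avoidable);           /* 10.0 kW */
    return 0;
}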



3.Alternative Energy:
The easiest way of energy management is the utilisation of the energy content in
the Non-Conventional Energy resources. This also called as alternative energy. These
include Renewable energy also.
The sources for this energy are
SOLAR: Solar energy is the earliest source of energy known to mankind and is also
the origin of other forms of energy used by mankind. Energy from the sun has many
salient features, which make it an attractive option. These include widespread distribution, lack of pollution and a virtually inexhaustible supply. This solar energy can be utilized in two ways: photovoltaic systems and solar thermal
applications.
WIND: Before the large-scale development of electric power, wind power served many countries as a source of power through wind mills. The propulsive power of wind can be used to drive a multi-bladed turbine wheel. India now has the 5th largest installed wind power capacity in the world, which has reached 1870 MW.
TIDAL: The ocean contains renewable energy in the form of temperature gradients,
waves, tides and ocean currents. Tidal, or as it is sometimes called lunar energy has
been known to man from time immemorial. The use of this form of energy goes back
into antiquity. Various devices particularly mills, were set to work by tidal power.
TIDE means a periodic rise of water level in the sea or ocean due to the
gravitational pull of the Moon and Sun.
GEO-THERMAL: Geo-thermal energy is domestic energy resource with cost,
reliability and environmental advantages over conventional energy source. It
contributes both to energy supply, with electrical power generation and direct heat
uses. For generation of electricity, hot water is brought from the under ground
reservoir to the surface through production wells and is flashed to steam in special
vessels by release of pressure. The steam is separated from the liquid and fed to the
turbine engine, which turns a generator. If the reservoir is to be used for direct heat
application, the geo-thermal water is usually fed to a heat exchanger and the heat thus
extracted is used for home heating, green house, vegetable drying, and a wide variety
of other small scale industries.
FUEL CELLS: Fuel cells produce electricity from an electrochemical reaction.
Hydrogen and oxygen fuel cells are efficient, environmentally benign and reliable for
power production. The use of fuel cells has been demonstrated for stationary or
portable power generation and other applications.
BIOGAS: Scientists have developed a novel process to obtain bio-oil from wastes such as wood chips, saw dust, bark, straw and agricultural residues as well as used industrial pallets. This process, called the ENSYN process and known as rapid thermal processing (RTP), involves fast pyrolysis, a special way in which heat is applied to produce new and more useful products. The biomass gasifier programme has been introduced to
get better quality and cost effectiveness. In the area of small-scale biomass
gasification, significant technology development work has made INDIA a world
leader. A total capacity of 55.105MW has so far been installed, mainly for stand-
alone applications.
M.H.D: One of the main developments in the field of energy technology is the Magneto Hydro Dynamic (M.H.D) generator, which converts heat energy directly into electrical energy. This is possible through the study of nuclear and plasma physics, metallurgy, etc. This direct conversion of heat to electricity enables industry to use fuel resources to much better advantage. The principle involved in the M.H.D. generator is Faraday's law of induction, which states that an e.m.f. is induced in a conductor moving in a magnetic field. The conductor may be a solid, liquid or gas.
The study of the dynamics of an electrically conducting liquid or gas interacting with
a magnetic field is called Magneto Hydro Dynamics. An electrically conducting
fluid is passed through a strong magnetic field to get power.

This is about Alternative energy resources.

4.Super Conductivity:
The capacity of super conducting materials to handle large amounts
of current with no resistance and extremely low energy losses can be applied to electric
devices such as motors and generators, and to electricity transmission in power lines. A
super conducting power system would meet the growing demand for electricity with
fewer power plants and transmission lines than would otherwise be needed.
Applications of this super conductivity are wide spread.
Cables: HTS cables can carry three to five times more power than conventional
utility cables, which means they can more easily meet increasing power demands
in urban areas.
Motors: Motors made with super conducting wire will be smaller and more
efficient.
Generators: HTS generators will use super conducting wire in place of iron
magnets, making them smaller and lighter. They may also get more power from
less fuel.
Transformers: HTS transformers are compact, quiet, and use no cooling oil, so
they are much more environmentally acceptable for utility substations located in
high-density urban areas. They can even be installed within large commercial
buildings.
Fault-current limiters: HTS fault-current limiters detect abnormally high current
in the utility grid (caused by lightning strikes or downed utility poles, for
example). They then reduce the fault current so the system equipment can handle
it.
Super conducting magnetic energy storage: SMES systems store energy in a
magnetic field created by the flow of direct current in a coil of super conducting
material that has been cryogenically cooled.


5.General Methods:
- Replacing older process equipment with higher efficiency models,
- Upgrading lighting and heating, ventilation, and air conditioning (HVAC) systems,
- Increasing insulation to reduce fuel requirements,
- Shutting down equipment when not in use,
- Adjusting industrial furnaces to operate at a lower temperature in standby mode,
- Implementing process improvements to increase efficiency (e.g., shifting from batch production to continuous operation to reduce electricity and steam generation requirements),
- Monitoring and controlling energy usage throughout the facility,
- Designing energy efficient buildings and equipment,
- Installing co-generation systems,
- Using environmentally preferable energy sources (e.g., wind and solar energy), and
- Conducting energy audits to identify further areas for improvement.
These measures yield:
- Reduced energy input requirements,
- Reduced waste treatment and disposal costs through increased process efficiency,
- Increased productivity and process efficiency through optimized production cycles, and
- Reduced capital and space requirements.

Conclusion:
Energy management can be a clear winner in building a competitive edge for the
Indian Industry. Besides improving the cost competitiveness and environmental
performance, excellent energy management practices would offer other spin-off benefits
like percolation of cost saving culture across the organization, improved productivity and
improved performance of plant & equipment. Each ecosystem has a specific tolerance
limit. Energy changes continue to occur within and outside the various ecosystems, but it
has its limitations. Loading any system beyond its carrying capacity results in failure of
the system. In case of environmental carrying capacity, overloading of the ecosystems
sometimes leads to devastation. So we need to save energy; that is the main task before mankind today.




ELECTRIC DRIVES


RECENT TRENDS IN ELECTRIC TRACTION
(INTRODUCTION OF A.C. DRIVES)








DEPT OF ELECTRICAL AND ELECTRONICS ENGG,
V. R. SIDDHARTHA ENGINEERING COLLEGE,
VIJAYAWADA.






AUTHORS:

G.V.V.SUDHAKAR, P.MARUTHI,
III/IV B.TECH, III/IV B.TECH,
V.R.SIDDHARTHA ENGG COLLEGE, V.R.SIDDHARTHA ENGG COLLEGE,
VIJAYAWADA. VIJAYAWADA.
sudhakar_smart_3@yahoo.com maruthi_ee@yahoo.co.in

















ABSTRACT:
In general, D.C. series motors are used in traction. But due to considerable advances in modern power electronics, A.C. motors have crept into the electric railways. Since D.C. motors present many disadvantages, the application of A.C. motors has become vital in many respects. This paper gives a glimpse of the electrical design features of a modern 3-phase A.C. loco. Advancements with respect to various auxiliaries are also discussed in brief. Case studies are presented which show that A.C. motor driven locos are far better than D.C. motor driven locos. Further, A.C. motors pave the way for higher flexibility in various respects.
INTRODUCTION:

With the ever increasing thirst for reliability, flexibility, simplicity and efficiency, a new step towards modernization of electric traction is the introduction of 3-phase A.C. motors for traction. Owing to its disadvantages, the applicability of the D.C. motor in traction is limited. With the introduction of A.C. motors those disadvantages were overcome and enough flexibility was achieved in various respects. Moreover, generation and transmission of A.C. power is much easier and involves fewer losses when compared to D.C. power.

The D C traction motor:
The traditional D.C. electric motor driving a train is a simple machine consisting of a case with a fixed electric part (the stator) and a moving electric part, or rotor. As the rotor turns, it turns a pinion which drives a gear wheel. The gear wheel is shrunk onto the axle and drives the wheel. The motion of the motor is created by the interaction of the magnetic fields caused by the currents flowing through the stator and rotor; this interaction causes the rotor to turn and provide the drive.
Nowadays, with the advancements in modern power electronics, the control of A.C. motors has become easy, which has led to the advent of A.C. motors for traction.
The advantages of A.C. drives over D.C. are:
- They are simple in construction, require no mechanical contacts and are lighter in weight.
- A.C. motors can be easily controlled with microprocessors to a fine degree and can regenerate current down to almost a stop, whereas D.C. regeneration fades quickly at low speeds.
- They are robust and easy to maintain.
- A.C. voltage is less hazardous than D.C.
- A finer degree of speed and acceleration control is possible.




MODERN A.C DRIVE BUILT IN LOCO:


The above diagram shows an A.C. motor driven electric loco. Red lines indicate 1-phase A.C. circuits, green lines indicate D.C. circuits and purple lines indicate the 3-phase A.C. circuits. Let us now consider the various electrical components involved and the way they have been modernized.

ADVANCED PANTOGRAPHS:
It is a folding traction current collection
device mounted on the roof of the loco.
Both the powered and non-powered
locomotive feature two pantographs that
operate automatically according to the train's direction. The rearmost
pantograph on each unit rises
automatically when moving forward. To
Change the direction, the pantographs
automatically lower, and the opposite
pantographs rise. Or raise and lower the
pantographs manually regardless of
direction using the CAB-1 Remote
Controller. A break-a-way feature allows
the pantographs to pop off, and be easily
snapped back into place, should they
accidentally hit an obstruction. Nowadays pantographs are sophisticated, aerodynamically designed devices which can operate at high speeds without losing contact. A common problem with pantographs is that they catch the upper regions of the wire and pull it down for a considerable distance. Modern pantographs are therefore fitted with automatic detection and lowering devices: the horns of the pantograph are equipped with frangible pneumatic sensors which, if broken by the wire supports, cause the detector system to lower the pantograph.
MAIN TRANSFORMER UNIT:
High voltage transmission is very economical as it involves fewer losses. Suitable transformers are provided to step down the voltage to system levels. In
modern transformers the secondary of
the transformer is fitted with capacitive
filter units and computer based tap
changers which assures quality power
to the load concerned.
Booster transformer unit: On lines equipped with A.C. overhead transmission lines, special precautions are to be taken to reduce interference with communication lines. If a communication cable is laid alongside rails carrying the return current of the overhead supply, it can have unwanted voltages induced in it. To prevent this, a booster transformer is used to reduce the noise levels in the communication cable.
The middle diagram is a schematic for
the booster transformer (BT) feeding
system. There is now a return conductor, a wire that is close to and parallel to the catenary wire. The return conductor is connected to the rails (and earthed) as shown. Periodically, there are breaks in the catenary where the supply current is forced to flow through one winding of a booster transformer (marked B.T.); the other winding is in series with the return conductor. The 1:1 turns ratio of the BT means that the current in the catenary (Ic) will be very nearly the same as the current in the return conductor (Irw). The current that flows through the loco goes to the rails but then up through a connecting wire to the return conductor and through it back to the substation. Insulated rail joints (marked I.R.J.) are also provided -- this ensures that current
also provided -- this ensures that current
flows in the rails only in the particular
section where the loco is present. At all
other places, the inductive interference
from the catenary current is nearly
cancelled by that from the return current,
thus minimizing the interference effects.
The problem of stray earth currents is
also reduced.

FIG: Arrangement of the booster transformer


One of the disadvantages in this system
is that as a loco passes a booster
transformer, there is a momentary
interruption in the supply (because of the break in the catenary), with the attendant problems of arcing and transients on the line, as well as radio frequency interference.

POWER SUPPLIES: Single
system (AC) Modern electric locos have
some fairly sophisticated electronic
circuitry to control the motors depending
on the speed, load, etc., often after first
converting the incoming 25kV AC
supply to an internal AC supply with
more precisely controlled frequency and
phase characteristics, to drive AC
motors. Some AC locos (WAG-4,
WAM-4) have DC motors, instead.
Some AC locos (WAP-5 and WAG-9,
both designs from ABB) generate 3-
phase AC internally using a thyristor
converter system; this 3-phase supply is
then used to power asynchronous AC
motors. (3-phase AC motors are
somewhat more efficient, and can
generate higher starting torque.): The
overhead catenary is fed electricity at
25kV AC (single-phase) from feeding posts which are positioned at frequent intervals alongside the track. The
feeding posts themselves are supplied
single-phase power from substations
placed 35-60km apart along the route.
The substations are spaced closer (down
to 10-20km) in areas where there is high
load / high traffic. (These substations in
turn are fed electricity at 132kV AC or
so from the regional grids operated by
state electricity authorities.) A Remote
Control Centre, usually close to the
divisional traffic control office, has
facilities for controlling the power
supply to different sections of the
catenaries fed by several substations in
the area.
The power system used should be safe, user-friendly and economical. A.C. systems always use overhead lines and their sections are usually much longer. Each subsection is isolated from its neighbours by a section insulator in the overhead contacts. The subsections are joined by high speed selection switches. In advanced systems, track magnets are usually used to automatically switch the power supply off and on when the train is approaching the neutral section.
ELECTRONICS INVOLVED:
These are different kinds of rectifying
devices used to convert alternating
current to direct current. In the early
days, this could be done only with
vacuum-tube technology as
semiconductor technology could not
handle the high voltages and currents
that need to be switched in an
application such as in a locomotive.
An excitron is a mercury vapour rectifier
which uses a pool of liquid mercury as
its cathode. An arc is maintained
between this cathode and an auxiliary
excitation anode, which maintains a
concentration of ionized mercury in the
tube. Current flows between the main
anode and the cathode by mercury ions
only when the main anode is positive
with respect to the cathode.
An ignitron is also a mercury vapour
rectifier. However, in this an arc is not
maintained continuously between the
anode and the cathode. The arc exists
while the anode is positive but is
extinguished as the polarity switches.
Instead, an igniter electrode of a
semiconductor material such as silicon
carbide is kept partly dipped in the
mercury pool cathode to initiate the arc
when the anode goes positive.
Developments in solid-state devices
have allowed the use of semiconductor
rectifiers instead of the more capricious
and fragile vacuum-tube rectifiers in
today's locomotives.

Control by thyristors: The
thyristor is a development of the diode.
Once it has been gated and the current is
flowing the only way of turning it off is
to send current in the opposite direction.
With this development, controllable
rectifiers became possible. In reality a
smoothing circuit is added to remove
most of the ripple in the wave form and
provides more constant flow. The power
level for the motor is controlled by
varying the gating point in each rectifier cycle. The later in the cycle the thyristor is gated, the lower is the current available to the motor; as the gating is advanced, the amount of current increases until the thyristors are on for the full cycle. This form of control is called phase angle control.
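
The effect of delaying the gating point can be quantified with the standard expression for a single-phase, fully-controlled thyristor bridge feeding a smooth (continuous) load current, Vdc = (2*Vm/pi)*cos(alpha); the C sketch below, with an assumed 230 V supply, shows how the average voltage available to the motor falls as the firing angle alpha is retarded.

#include <stdio.h>
#include <math.h>

#define PI 3.14159265358979

int main(void)
{
    double v_rms = 230.0;                 /* assumed supply voltage (rms) */
    double v_m   = v_rms * sqrt(2.0);     /* peak supply voltage          */

    /* Average output of a single-phase fully-controlled bridge with
       continuous load current: Vdc = (2*Vm/pi) * cos(alpha). */
    for (int alpha_deg = 0; alpha_deg <= 90; alpha_deg += 15) {
        double alpha = alpha_deg * PI / 180.0;
        double v_dc  = (2.0 * v_m / PI) * cos(alpha);
        printf("alpha = %2d deg  ->  Vdc = %6.1f V\n", alpha_deg, v_dc);
    }
    return 0;
}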

With further advancements, thyristors were developed to a stage where they could be turned off by a control circuit as well as turned on by one. This was the gate turn-off, or GTO, thyristor. This means that the forced turn-off circuit could be eliminated in D.C.-fed circuits. Thyristors could now be turned on and off virtually at will, and a single thyristor can be used for controlling a D.C. motor.
Introduction of IGBTs: Further research and new developments led to the introduction of IGBTs, or insulated gate bipolar transistors. An IGBT can be turned off and on like a GTO thyristor, but does not require the high gate currents that a GTO needs to turn off. Modern IGBT devices can handle thousands of amps and have appeared in traction applications. Their principal benefit is that they can switch a lot faster than GTOs, which reduces the current required and the losses owing to heat, thus reducing the size of the units. IGBTs also give a much smoother-sounding acceleration buzz from under the train.
A C DRIVE SYSTEM:
The limits of technology favoured D.C. motors for a considerable period. The progression of power electronics has led to the usage of 3-phase A.C. motors, which are much more efficient than D.C. motors.
In general, the motors used are asynchronous motors; owing to their better speed control and rugged construction, squirrel cage induction motors are used. The synchronous motor was used when GTOs had not yet developed enough to meet traction requirements; these motors are used in France in the TGV Atlantique train. The advantage of the synchronous motor in this application is that the motor produces the reverse voltages needed to turn off the thyristors. But they were quickly overtaken by the second type of A.C. motor, the asynchronous motor.
Drive mechanism: The line voltage is fed to a transformer which steps the voltage down to system levels. The output is taken to the rectifier, which produces a D.C. output. This is then passed to the inverter, which produces the controlled three phases fed to the traction motors. The switching devices are generally GTOs; modern drives use IGBTs, which are 3 to 4 times faster. The controls of these systems are complex, but they are carried out by microprocessors. The control of the voltage pulses and frequency has to be matched with the motor speed.
Control mechanism: The speed of the 3-phase A.C. motor is determined by the frequency of the supply, but the power must also be controlled at the same time. A modern traction motor is controlled by feeding in three A.C. currents which interact to cause the machine to run. The variations of the voltages and frequency are controlled electronically.
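
The frequency/speed relationship behind this control is the familiar synchronous-speed formula Ns = 120*f/p; a minimal constant V/f command generator is sketched in C below, with assumed motor ratings, holding the voltage at its rated value above base frequency (the field-weakening region).

#include <stdio.h>

int main(void)
{
    double rated_v = 415.0;   /* assumed rated line voltage [V] */
    double rated_f = 50.0;    /* assumed rated frequency  [Hz]  */
    int    poles   = 4;       /* assumed number of poles        */

    for (double f = 10.0; f <= 60.0; f += 10.0) {
        /* Keep V/f constant up to rated frequency, then hold V at rated. */
        double v  = (f <= rated_f) ? rated_v * f / rated_f : rated_v;
        double ns = 120.0 * f / poles;               /* synchronous speed in rpm */
        printf("f = %4.0f Hz   V = %5.1f V   Ns = %6.0f rpm\n", f, v, ns);
    }
    return 0;
}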
For the catenary, a microprocessor-based system called 'SCADA' (Supervisory Remote Control and Data Acquisition System) is used for remote control of electric substations and switchgear. A central
SCADA facility (the division control
centre) can control a region extending to
about 200-300km around it. SCADA
allows remote monitoring of electrical
parameters (voltage, current, power
factor, etc.) in real time and remote
operation of switchgear, as well as
automatic fault detection and isolation,
allowing better control of maximum
demand, trouble-shooting, etc. SCADA
replaces an older system that used
electromechanical remote control
apparatus.

In order to reduce the likelihood of
excessive braking, many locomotives
and multiple units are fitted with wheel
slide control systems. The most common
of these operates rather like ABS (anti-lock braking systems) on road
vehicles. The railway systems usually
monitor the rotation of each axle and
compare rotational speeds between pairs
of axles. If a difference appears between
a pair of axles during braking, the brake
is released on those axles until the
speeds equalize, when the brake is re-
applied. All this occurs automatically.
Modern systems also detect too rapid
deceleration of an axle. Another form
of slip/slide detection uses Doppler radar
techniques. This measures the ground
speed of the locomotive against the
revolutions of each wheel set and uses
the detection of a difference to regulate
the control systems
SWITCH GEAR AND
PROTECTION: For most loco
classes, loss of power in the OHE is
sensed automatically and the main
electrical circuit from the pantograph to
the traction equipment is disconnected.
A lamp ('LSS') glows on the control
panel indicating loss of catenary voltage
(in addition, of course, to all the voltage
and current gauges going dead), and in
some locos a warning buzzer also
sounds. The main circuit breakers open,
all the auxiliary equipment is reset, and
the master controller reverts to the zero
notch position automatically. A similar
sequence of events occurs if the wrong
pantograph circuit is used (DC for AC or
the other way around). Of course, once the main circuit breaker is open and the loco is unpowered, it will coast to a halt,
and all that the driver can do is to apply
the brakes to control the stop. The loco
needs to be powered up again as if
starting up from scratch.
An electric loco is always provided
with circuit breakers on the top of the
roof of the engine near the pantograph.
These are of two types, the air blast circuit breaker and the vacuum circuit breaker; the air blast or vacuum is used to extinguish the arc which occurs when the CB is opened. Buchholz relays, earth fault relays, notching relays, no-volt relays, etc. are some of the other protective relays provided.
Case study: The following table shows a comparison based on the performance of A
C and D C locos.




Electric Locomotives, D.C. driven:
Class  | Maker            | Power (hp)        | Speed (km/h) | Weight (tonnes) | Starting TE (kg force)
WCP-1  | SLM / Metro Vick | 2160 (1860 cont.) | 120          | 102-105         | 15240
WCP-2  | SLM / Metro Vick | 2160 (1860 cont.) | 120          | 102-105         | 15240
WCP-3  | HL / GEC         | 2250 (2130 cont.) | 120          | 113             | 10890 (1 hr)
WCP-4  | HL / BB          | 2390              | 120          | 111             | 11300 (1 hr)

Electric Locomotives, A.C. driven:
Class  | Maker            | Power (hp)        | Speed (km/h)                                | Weight (tonnes) | Starting TE (kg force)
WAP-3  | CLW              | 3900 (3760 cont.) | 140                                         | 112             | 22400
WAP-4  | CLW              | 5350 (5000 cont.) | 140                                         | 113             | 30800
WAP-5  | ABB / CLW        | 6000 (5440 cont.) | 160                                         | 79              | 26300
WAP-6  | CLW              | 5350 (5000 cont.) | 160 design, 105 (restricted), 170 (upgrade) | 113             | 35800

Thus the A.C. driven locos are much better in performance and have more output power; with the ever increasing development in the electronics field, speeds of up to 500 km per hour are being achieved.


CONCLUSION: Thus A.C. motors offer better performance, which makes them the natural choice for the future. With the rapid development of modern power electronics, their control has become easier and more flexible. Thus A.C. driven locos will be of greater importance as the locos of the upcoming generation.



New Functional Electrical System Using Magnetic Coils for Power
Transmission and
Control Signal Detection

Functional Electrical System in the treatment
of Paralysis










A. Avinash D.V.R. Phaneendra
III B.Tech, JNTU III B.Tech, JNTU
Anantapur Anantapur
Email: ailas2030@gmail.com Email: manavi.roni@gmail.com



Abstract
In order to improve the Functional Electrical
Stimulation system, we considered new
methods of power transmission to the
electrodes and control of the system.
Concerning power transmission, a method
is proposed to directly supply electric power
to the electrodes by magnetic coupling
without the use of electric wires. Selective
power transmission to two electrodes is
studied experimentally. Power transmissions
at frequencies of 96 kHz and 387 kHz are sent
separately to secondary circuits, and about 5
electrodes can be used. Concerning control of
the system, a new control method that uses
auricular movements is proposed. The effect
of electrical stimulation is examined in the
process of acquiring voluntary auricular
movement. The acquisition method along
with biofeedback or functional electrical
stimulation is effective to some extent.
Index Terms: Biofeedback, functional electrical stimulation, magnetic coils.

1. Introduction

Functional Electrical Stimulation (FES) is a
method that uses computer-controlled electrical
stimulation to restore the movement of limbs paralyzed by spinal cord injury [1]. Implant systems using electrodes that don't penetrate the skin are under development [2][3]. As opposed
to FES using transcutaneous electrodes [Fig.
1(a)], this method [Fig. 1(b)] poses no risk of
infection, and therefore eliminates the need for
a disinfecting process or other extra procedures.
In this study, in an effort to improve the
patient's Quality of Life (QOL), we propose an

improvement over the present FES system by
developing a new method of power transmission
to the electrodes and a new method of
controlling the FES system by the patient. By
using these methods, we propose a new FES
system (Fig. 2).

A. Direct Power Transmission

In the conventional FES implant system, the
internal power receiver supplies the electric
power to the electrodes through electric wires
[Fig. 1(b)]. In this method, there is the
possibility of the wire snapping. Therefore, we
investigated a method to supply electric power
directly to the electrodes without electric





Fig. 1 (a). The transcutaneous electrodes
method



Fig. 1 (b). The conventional implant
electrodes method.

wires to avoid the dangers of snapped wires. Fig.
1(c) shows the proposed direct power
transmission method. The power is transmitted
by a magnetic coupling, where primary
(external) and secondary (internal) coils are
connected to each electrode. An FES system
doesn't activate all of the electrodes all of
the time. Therefore, if it were possible to
transmit the power selectively to only the
working electrodes, we could make the


Fig. 1 (c). The direct power transmission
method.






Fig. 2. The concept of the new FES system
that we propose in this paper.


direct power transmission method more efficient.
We examined the possibility of selective power
transmission to two electrodes (Fig. 3).

B. Control Utilizing Auricular Movement

Concerning the method to control the system,
the conventional FES system utilizes a
mechanical ON/OFF switch, breathing exercises,
and so on. However, these methods tie up the patient's limited remaining movement functions. Therefore, we propose a new method to obtain a control signal by magnetically detecting auricular movement, using a permanent magnet with a diameter of 10 mm and a thickness of 2 mm and a search coil with a diameter of 8 mm and 20
turns (Fig. 4). This method allowed us to obtain a signal (Fig. 5) that could be used to control the FES system. However, not many people can
move their auriculae voluntarily. In this paper,
we examined the effect of electrical stimulation
in the process of achieving voluntary auricular
movement.










Fig. 3. Power transmission to two secondary
coils from a transmission coil.



Fig. 4. Block diagram of detecting auricular
movement utilizing permanent
magnet and search coil.


II. Direct Power Transmission

A. Experimental Method of Selective
Power Transmission

For selective power transmission, we made a
series of resonance circuits that had different
resonance frequencies and that received only
the transmitted power of the corresponding
resonance frequencies. We connected resistors of 1 kΩ as a load and capacitors of different capacitance to the secondary coils A and B. We
then transmitted the power at frequencies of 96
kHz and 387 kHz, respectively, which were the
resonance frequencies of the secondary circuits.
Primary Coil Design: The primary coil was a
solenoid type with diameter of 55 mm, 2200
turns in three layers, and a wire diameter of 1.5
mm.


Fig. 6. Block diagram of biofeedback with
EMG.




Fig. 7. Frequency separation of coil A and B.


Fig. 5. Voltage made by auricular
movement utilizing permanent magnet
and search coil.

Secondary Coil Design: The secondary coil was
a spiral type with diameter of 11.5 mm, coil
thickness of 3 mm, and 200 turns. Bunched
amorphous wires were inserted in the center
of the coil; these had a diameter of 2 mm and a
length of 20 mm.

B. Experimental Results and Discussions

For each of the two coils, Fig. 7 shows the
ratio between the power received at the
resonance frequency of the coil and the power
received at the other frequency used. For coil B, the unnecessary 96 kHz component amounted to only about 3% of the desired 387 kHz component. However, for coil A, the power of the unnecessary component was about 37% of the power of the desired component, indicating that some improvement must be made.
In this experiment, we assumed that the
secondary coils are implanted and limited the
coil volume. Accordingly, the maximum
inductance of the coil was limited. The
resonance frequencies were set only by the
capacitance of the capacitor connected to the coil
in series. The value of Q, which shows the
sharpness of the resonance, Is

Q=(1/R)(L/C)
1/2
where R is
resistance, L is inductance, and C is capacitance.
Therefore,Q2 (Q of coil B) was larger than Q1(Q
of coil A), showing that coil B had good
frequency separation. The measured values of
L1(L of coil A),L2 (L of coil B), C1(C of coil
A), and C2(C of coil B) were 0.9 mH, 0.9 mH,
3.0 nF, and 0.18 nF, respectively. The calculated
values of and were 0.54 and 2.2, respectively. To
make coil A have frequency separation equal to
that of coil B (assuming that the inductance of
the coil is in proportion to the square of the
number of turns), twice the number of turns and
about twice as large a coil would be needed. If a
coil with equal inductance is used, as the
transmission frequencies become higher, the Q value of the secondary circuit would become larger and the interval between frequencies needed to
obtain good selectivity could be narrower. If
frequencies up to 1 MHz are used and coils with
0.9 mH are used at each frequency, about 5
channels of electrodes could be used.
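
The figures quoted above can be checked numerically from f = 1/(2*pi*sqrt(L*C)) and Q = (1/R)*sqrt(L/C); the short C program below is only a verification sketch (not part of the experiment) and reproduces resonance frequencies of roughly 97 kHz and 395 kHz, close to the 96 kHz and 387 kHz used, and Q values of about 0.55 and 2.2, matching the reported 0.54 and 2.2.

#include <stdio.h>
#include <math.h>

#define PI 3.14159265358979

int main(void)
{
    double R  = 1000.0;    /* 1 kOhm load resistance          */
    double L  = 0.9e-3;    /* both coils measured at 0.9 mH   */
    double C1 = 3.0e-9;    /* capacitor of circuit A: 3.0 nF  */
    double C2 = 0.18e-9;   /* capacitor of circuit B: 0.18 nF */

    double f1 = 1.0 / (2.0 * PI * sqrt(L * C1));   /* resonance of circuit A */
    double f2 = 1.0 / (2.0 * PI * sqrt(L * C2));   /* resonance of circuit B */
    double q1 = (1.0 / R) * sqrt(L / C1);          /* Q of circuit A         */
    double q2 = (1.0 / R) * sqrt(L / C2);          /* Q of circuit B         */

    printf("f1 = %.1f kHz, Q1 = %.2f\n", f1 / 1000.0, q1);
    printf("f2 = %.1f kHz, Q2 = %.2f\n", f2 / 1000.0, q2);
    return 0;
}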

III. Control using auricular movement

A. Experimental Method of Achieving of
Voluntary Auricular Movement

Training with electrical stimulation and biofeedback was given to 9 healthy subjects aged 21 to 22. Initially, they could not move their auriculae voluntarily. First, a short period of electrical stimulation (frequency = 20 Hz, pulse duration = 0.2 ms) was applied to all of the
subjects through surface electrodes placed over
the posterior auricular muscle. One stimulation
was given to the subjects to provide a sensation
of the auricular movement. Then, the subjects
were trained by biofeedback [4], [5] and
electrical stimulation. The biofeedback was
carried out using electromyography (EMG) and a
mirror. Fig. 6 shows a block diagram of
biofeedback with EMG. We divided the subjects
into three groups (groups A, B, C) to examine
the effect of the electrical stimulation. The
amplitude of stimulus voltage for Group A was
over the threshold voltage of stimulation that
could develop the movement, and that for Group
B was under this threshold voltage. The subjects
in Group C were trained without electrical
stimulation. This training was performed once a
day up to the 15th day. Subjects who acquired
sufficient auricular movement before the 15th
day stopped training.

B. Experimental Results and Discussions
Seven subjects out of nine (two of Group A, three of Group B, and two of Group C) were able to acquire voluntary movement of the auricula within 15 days. The amplitudes of the smoothed EMG signals tended to be higher in Group A than in the other groups. Subjects of Group C, who were trained without electrical stimulation, could acquire voluntary movement of the auricula as well. Consequently, the electrical stimulation
was effective in giving information about the
location of the muscle and a sensation of the
movement, and the biofeedback was effective to
some extent in helping the subjects learn how to
control the movement after they had acquired it.

IV. CONCLUSIONS

For direct power transmission, we found that
about 5 electrodes can be used. Furthermore, this
method offers the possibility of restoring simple
movement for paralysis victims. Concerning
control by auricular movement, the three
methods used in the experiment proved to be
somewhat effective in helping patients acquire
voluntary auricular movement.



REFERENCES

[1] N. Hoshimiyai, A. Naito, M. Yakima, and Y. Honda, "A multichannel FES system for the restoration of motor functions in high spinal cord injury patients: A respiration-controlled system for multipoint upper extremity," IEEE Trans. Biomed. Eng., vol. 36, no. 7, pp. 499-508, July 1989.
[2] K. Takahashi, N. Hoshimiyai, H. Matsuki, and Y. Honda, "An implantable stimulator for FES with the externally powered system," Iyodenshi to Seitaikogaku, vol. 37, pp. 43-51, 1999.
[3] B. Smith, P. H. Peckham, M. W. Keith, and D. D. Roscoe, "Externally powered, multichannel, implantable stimulator for versatile control of paralyzed muscle," IEEE Trans. Biomed. Eng., vol. BME-34, no. 7, pp. 499-508, July 1987.
[4] H. Hirai and S. Watanabe, "The Biofeedback," Seishin-Shobo, 1975.
[5] S. Kakii, I. Hurmitsu, and H. Kurisu, "A Basic Study on the Biofeedback," The General Institute of Hiroshima Syudo Univ., 1985.






ENERGY CONSERVATION MANAGEMENT




PAPER PRESENTED BY
T.V.SUBRAHMANYA SASANK
&
G.SARVESWARA RAO
EEE
S.R.K.R.ENGINEERING COLLEGE
BHIMAVARAM.
EMAIL:tvssasank@rediffmail.com;phone:9348010203,9440292801





Abstract:
In the present world, energy conservation is a big task for people. Many countries have come forward to solve this problem, but largely in vain, so in this article we explain the various means of managing energy conservation. These are:
Long term measures
DEMAND SIDE MANAGEMENT
ENERGY EFFICIENCY IN BUILDINGS AND ESTABLISHMENTS
ENERGY CONSERVATION BUILDING CODES and others

INTRODUCTION:
The strategy developed to make power available to all by 2012 includes promotion of
energy efficiency and its conservation in the country, which is found to be the least-cost option to bridge the gap between demand and supply. Nearly 25,000 MW of capacity creation through energy efficiency in the electricity sector alone has been estimated in
India. Energy conservation potential for the economy as a whole has been assessed as
23% with maximum potential in industrial and agricultural sectors.
LONG TERM MEASURES
1. Industry specific Task Forces.
2. Notifying more industries as designated consumers.
3. Conduct of energy audit amongst notified designated consumers.
4. Recording and publication of best practices (sector-wise).
5. Development of energy consumption norms.
6. Monitoring of compliance with mandated provision by designated consumers.


STANDARDS AND LABELLING PROGRAMME
A standards and labelling (S&L) programme has been identified as one of the key activities
for energy efficiency improvement. The S&L programme, when in place, would ensure that
only energy-efficient equipment and appliances are made available to consumers. Initially,
the equipment to be covered under the S&L programme are household refrigerators,
air-conditioners, water heaters, electric motors, agricultural pump sets, electric lamps &
fixtures, industrial fans & blowers and air-compressors. Preliminary discussions have
already taken place with manufacturers of refrigerators, air conditioners, agricultural pump
sets, motors, etc., regarding the procedure to fix labels and to set standards for minimum
energy consumption.
DEMAND SIDE MANAGEMENT
Demand side management and increased electricity end-use efficiency can together
mitigate power shortages to a certain extent and drastically reduce the capital needed for
power capacity expansion. The Bureau will assist 5 electric utilities to set up DSM Cells
and will also assist in capacity building of DSM Cell staff. The preparation of
investment-grade feasibility reports on agricultural DSM, municipal water pumping and
domestic lighting in each of the 5 states will also be undertaken by the Bureau under the
DSM programme.
ENERGY EFFICIENCY IN BUILDINGS AND ESTABLISHMENTS
Energy audit studies conducted in several office buildings, hotels and hospitals indicate
an energy saving potential of 20-30%. The potential is largely untapped, partly due to the
lack of an effective delivery mechanism for energy efficiency. Government buildings by
themselves constitute a very large target market. The Government of India is committed
to setting an example by implementing the provisions of the EC Act in all its establishments
as a first initiative. To begin with, the Bureau has begun conducting energy audits in the
Rashtrapati Bhawan, Parliament House, South Block, North Block, Shram Shakti
Bhawan, AIIMS, Safdarjung Hospital, Delhi Airport, Sanchar Bhawan, and Rail Bhawan.
Energy audits in the Rashtrapati Bhawan, PMO, Shram Shakti Bhawan, Sanchar Bhawan
and Rail Bhawan have been completed. New buildings are required to be designed and
built with energy efficiency considerations right from the initial stages. The development
of energy conservation building codes is necessary for this purpose. The codes would be
applicable to commercial buildings constructed after the relevant rules are notified under
the Energy Conservation Act. The Bureau would constitute a Committee of Experts for the
preparation of Energy Conservation Building Codes for different climatic zones.
PROFESSIONAL CERTIFICATION AND ACCREDITATION
A designated consumer under the EC Act, 2001 is required to appoint or designate an
energy manager with prescribed qualifications and also to get an energy audit done by an
accredited energy auditor. It has been decided that the prescribed qualification for an
energy manager will be the passing of a certification examination to be arranged by the
Bureau. Also, regular accreditation is proposed to be given to energy audit firms having a
pool of certified energy auditors. The syllabus and other preparatory activities for
conducting the examination have been finalized, and the first National Level Certification
Examination is scheduled to be conducted in August 2003.
MANUAL AND CODES
In order to standardize energy performance test procedures and adopt uniform codes
while performing energy audits at designated consumers' premises, the Bureau has
undertaken this activity. Initially, twenty energy-intensive types of equipment have been
identified for the development of performance test codes, which will be drafted and
reviewed by experts, validated by field tests and pilot tested by training energy managers
and energy auditors in these codes.
SCHOOL EDUCATION PROGRAMME
Considering the need to make the next generation more aware of the efficient use of
energy resources, it is necessary to introduce these concepts to children during their
school education. In this regard, a Steering Group comprising members from NCERT,
CBSE, school principals and teachers has been constituted, which is assisting in preparing
a detailed project covering review of the existing curriculum, training of teachers,
sensitisation of school principals, practice-oriented programmes and the launch of an
awareness campaign.
DELIVERY MECHANISMS FOR ENERGY EFFICIENCY SERVICES
Although the benefits of energy efficiency are well known and recognised, investments in
energy efficiency have not taken place due to a variety of barriers faced by energy users,
such as risk averseness and lack of motivation for making energy efficiency investments,
low credibility of energy auditors and their services, and lack of confidence in the ability
of energy-efficient equipment to deliver the expected energy savings. An innovative way
of overcoming such barriers is the approach of performance contracting through energy
service companies (ESCOs). The Bureau would be conducting investment-grade audits in
industries, which are proposed to be implemented on a performance-contract basis by
ESCOs.
INDO-GERMAN ENERGY EFFICIENCY PROJECT(PHASE-II)
The Phase-I of the Indo-German Energy Efficiency Project has been successfully
completed by the erstwhile Bureau of Energy Efficiency (BEE) in June 2000. Activities in
the Phase-II of the Project have already begun and the project would be supporting the
thrust areas of the Bureau as mentioned above.
ENERGY CONSERVATION AWARD 2002
To give national recognition, through awards, to industrial units for the efforts undertaken
to reduce energy consumption in their respective units, the Ministry of Power launched the
National Energy Conservation Awards in 1991. BEE provides technical and administrative
support for the Awards Scheme. In the Awards Scheme 2002, for Large and Medium Scale
Industry, applications were invited from 17 industrial sub-sectors, i.e., automobile,
aluminium, cement, chemicals, ceramics, chlor-alkali, edible oil/vanaspati, fertilizers, glass,
integrated steel, mini-steel, paper & pulp, petrochemicals, refractories, refineries, sugar and
textile plants. The automobile sector was included for the first time in the 2002 Awards. The
response from the industries to the year 2002 scheme has been encouraging. In total, one
hundred seventy four (174) industrial units belonging to the above sub-sectors responded,
which is a record for the Award Scheme since its inception.
The award scheme has motivated the participating units to undertake serious efforts in
saving energy and protecting the environment. The data pertaining to the 174 industrial
units indicate that in 2001-2002 these units were able to save collectively 641 million kWh
of electrical energy, which is equivalent to the energy generated by a 122 MW thermal
power station at a PLF of 60%. Besides the above electrical energy savings, the
participating units also saved 1.7 lakh kilolitres of furnace oil, 7.4 lakh metric tonnes of coal
and 3588 lakh cubic metres of gas per year. In monetary terms these units were able to save
Rs.594 crores per year, and the investment of Rs.691 crores was recovered in a 14-month
period. This year, the Awards were given by the Hon'ble Vice President of India.
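As a rough cross-check of the figures quoted above, the short sketch below (plain Python; only the numbers reported in this section are used, and the variable names are ours) converts the 641 million kWh of annual savings into an equivalent generating capacity at a 60% PLF and works out the simple payback period from the reported savings and investment.

```python
# Cross-check of the award-scheme figures quoted above.
# Inputs are the numbers reported in this section; nothing else is assumed.

HOURS_PER_YEAR = 8760

annual_savings_kwh = 641e6      # 641 million kWh saved per year
plf = 0.60                      # plant load factor of 60%
savings_cr_per_year = 594.0     # Rs.594 crores saved per year
investment_cr = 691.0           # Rs.691 crores invested

# Energy = capacity x hours x PLF, so equivalent capacity = energy / (hours x PLF)
equivalent_capacity_mw = annual_savings_kwh / (HOURS_PER_YEAR * plf) / 1000.0

# Simple payback period in months
payback_months = investment_cr / savings_cr_per_year * 12.0

print(f"Equivalent capacity at {plf:.0%} PLF: {equivalent_capacity_mw:.0f} MW")
print(f"Simple payback period: {payback_months:.1f} months")
```

Both results line up with the 122 MW and roughly 14-month figures stated above.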
SUPPLY SIDE ENERGY CONSERVATION
A number of Pilot Projects/Demonstration Projects have been taken up for load
management and energy conservation through reduction of T & D losses in the System.
The schemes under implementation during the year 2002-2003 include:-
Two pilot projects for energy audit study, one in the distribution network of West
Bengal State Electricity Board (WBSEB), sanctioned in 1994-95 with the Ministry of
Power's contribution of Rs.181.03 lakhs, and the other in the distribution network of
Kerala State Electricity Board (KSEB), sanctioned in 1994-95 with the Ministry of
Power's contribution of Rs.114.62 lakhs, have been completed successfully by WBSEB
and KSEB through REC during the current financial year 2002-2003.
A pilot project for installation of 2414 LT Switched Capacitors was sanctioned in
1993-94 with the Ministry of Power's contribution of Rs.199.32 lakhs. The installation of
2370 LT Switched Capacitors (with minor reduced scope of work) has been completed in
2002-2003 in Andhra Pradesh, Haryana, Punjab and Tamil Nadu, through the Rural
Electrification Corporation (REC).
A pilot project for installation of 3000 Amorphous Core Transformers in the distribution
network of various State Electricity Boards, sanctioned in 1993-94 with the Ministry of
Power's contribution of Rs.300.00 lakhs through REC, has been completed successfully
in 2002-2003 with some minor reduced scope of work.
Three pilot projects on Remote Controlled Load Management in the distribution networks
of RRVPNL (sanctioned in 1996-97 with the Ministry of Power's contribution of
Rs.297.50 lakhs, revised to Rs.252.00 lakhs in March 2000), PSEB (sanctioned in 1995-96
with the Ministry of Power's contribution of Rs.443.88 lakhs) and HVPNL (sanctioned in
1997-98 with the Ministry of Power's contribution of Rs.237.22 lakhs) were under
implementation through REC. The projects of RRVPNL and HVPNL were completed
successfully during 2001-2002, while the implementation work of remote controlled load
management in respect of the 132 kV sub-station at Hoshiarpur was completed by PSEB
in 2002-03.
An Energy-cum-System Improvement Project, involving installation of Amorphous Core
Transformers and LT Capacitors in the distribution network serviced by the Cooperative
Electric Supply Society, Sircilla (Andhra Pradesh), sanctioned in 1995-96 with the
Ministry of Power's contribution of Rs.508.00 lakhs, is under implementation through
REC and is likely to be completed shortly.
Two pilot projects, one for the introduction of 500 Electronic Meters with Time of Day
(TOD) facility, sanctioned in 1996-97 with the Ministry of Power's contribution of
Rs.88.00 lakhs, and the other for Energy Conservation and Demand Side Management by
energy-efficient lighting in WBSEB's HQ at Kolkata, sanctioned in 1998-99 with the
Ministry of Power's contribution of Rs.5.55 lakhs, have been completed successfully
during the year 2002-2003.
Office Accommodation Management Framework

Why have it?
The cost of office accommodation is the second-highest recurrent cost component for
Queensland Government agencies (agencies) after staff salary and wages costs, and
represents a significant component of their service delivery costs.
The area of government accommodation managed by the Department of Public Works
exceeds 840,000 square metres, and is located in over 200 government-owned buildings
and over 500 private sector leased buildings. The Department of Public Works manages
a total rental revenue stream in relation to this accommodation exceeding $180 million
and a combined office-building works program of $32 million annually.
Effectively-managed government office accommodation can make a key contribution to
the successful achievement of the Queensland Government's Priorities and the delivery
of government services.
The primary purpose of the Office Accommodation Management Framework (OAMF)
is to support agencies in all aspects of their acquisition, management and utilisation of
office accommodation. In addition, the OAMF:
supports the use of best practice methodologies in accommodation management
and establishes a structured approach to accommodation planning, space
management, fitout, accommodation use and accommodation change;
promotes consistency, equity, cost-efficiency, sustainability and accountability in
all phases of accommodation management and aims to achieve the strategic and
operational alignment of government accommodation with the delivery of
government services to the community; and
is designed to achieve a better workplace for government employees, enhance
delivery of government services and generate savings in the overall cost of
government office and commercial accommodation.
What is it?
The OAMF incorporates processes for office accommodation planning, space
management, fitout and occupancy. The OAMF suite of documents comprises:
the OAMF master document, which defines the purpose, scope and objectives of
the framework and establishes roles and responsibilities and the OAMF's four
operating principles;
guidelines for planning, space management, fitout and occupancy; and
practice notes, agreements, standards, templates and references.
The OAMF was approved by the Government Office Accommodation Committee for
implementation on 19 November 2004.
The OAMF applies to:
Queensland Government departments, as defined in s4A of the Financial
Administration and Audit Act 1977 ;
Queensland Government statutory authorities, that are wholly or partly funded
through the Queensland State Budget; and
Queensland Government commercialised business units.


What is it supposed to achieve?
The implementation of best practice in office accommodation strategic planning and
project implementation has the potential to achieve significant savings as well as enhance
the delivery of government services to all Queenslanders.


What it's about
The Government Energy Management Strategy (GEMS) is a whole-of-Queensland
Government energy efficiency initiative.
Queensland Government agencies consume significant amounts of electricity,
accounting for 930 million kWh in 2004-05. This level of use equates to an annual
electricity bill of $87M and annual emissions of greenhouse gas equivalent to 957 000
tonnes of carbon dioxide (equivalent to 220 000 cars on Queensland's roads). Greenhouse
gas emissions are harmful to the environment, contributing to global warming.
GEMS was launched in December 2003. While its primary focus is energy use, it is also
tackling water consumption.
GEMS seeks to improve government agencies' use of energy and water, producing
financial and environmental benefits.
GEMS implementation is coordinated by a multidisciplinary team within the Built
Environment Research Unit of the Department of Public Works.
The GEMS team works with agencies to:
promote smarter ways of managing energy across Government
incorporate energy management into a whole-of-life approach to asset
management
reduce overall costs to Government and community by being more energy-
efficient
fulfil a commitment to care for the environment.
Benefit to agencies
By using less energy, agencies save money.
GEMS has set a target of reducing the whole-of-government electricity bill by $22
million by 2008.
Agencies taking up the GEMS challenge are rewarded financially. They get to keep the
savings they make on their electricity bill, redirecting that money to fund their own
priority projects.
According to the Australian Greenhouse Office, an office of 200 people could cut its
electricity bill from $42,000 to $5,000 simply by turning off all computers at night and
using energy-star office equipment.
Environmental benefits
Using less energy helps save the environment by reducing greenhouse gas emissions.
A computer left on overnight all year generates the same amount of greenhouse gas as a
car driving from Sydney to Perth.
According to the Australian Greenhouse Office, an office of 200 people could cut their
greenhouse gas emissions from 280 tonnes to 30 tonnes a year by turning off all
computers at night and using energy-star office equipment.

Building Research
Overview
The Department provides advice to government and agencies on all asset management
matters relating to environmental issues in the built environment. The Department also
coordinates or facilitates research and development initiatives to support ecological
sustainability including the Cooperative Research Centre (CRC) for Construction
Innovation and advanced research into integrated online communication system
technologies.
Background
South East Queensland is experiencing the worst drought in over 100 years, where major
reservoir water levels have dropped to historically low levels. The Queensland
Government is leading a number of key water management strategies, including the
conservation of potable water within high water use Government assets.
The Department of Public Works is leading Water Smart Buildings: the Government
Buildings Water Conservation Program, to deliver a strategy and associated policies to
reduce water consumption in new and existing Government-owned commercial buildings,
as well as facilities and parks.
Program aims:
reduce potable water consumption
demonstrate Government leadership in water conservation through the adoption
of best practice technologies and efficiencies, meeting triple bottom line
requirements.
The targeted outcomes below are preliminary key milestones to be met in accordance
with one to ten year strategic plans:
reduce potable water usage by 20% below the original design specifications using
available baseline data in existing targeted high water use Government owned
commercial buildings; and
reduce potable water usage by 20% below the original design specifications or
current industry practice in all targeted new government buildings, facilities and
parks.

Program strategy:
Monitor water usage in commercial office buildings
Evaluate emerging water saving technology
Develop and trial water auditing processes
Encourage other agencies to adopt the audit process
Implement a staged, prioritised program of retrofitting WELS approved water
saving technology in Government owned buildings and facilities.
Develop and implement Water Efficiency Management Plans for 10 high water
use Government owned office buildings
Promote water efficient practices to other agencies, including awareness of best
practice
Implement key demonstration projects using emerging technologies (eg best
practice cooling tower maintenance and management); and
Develop strategies and trials for water recycling and alternative water use
opportunities in new government buildings, facilities and parks.

Current initiatives:
Water consumption auditing processes have been developed for commercial
buildings, schools and police stations.
Water monitoring trials have commenced at 80 George Street and 111 George
Street. Some bathrooms have been fitted with newly developed water-saving
technologies and water usage is being monitored to compare with background
data collected up to the end of June 2006.
A number of high water-use facilities in the Brisbane CBD, Rockhampton and
Toowoomba areas are being targeted with a program of works to improve their
water-efficiency. Water-efficient showers, tapware, toilet systems, vandal-proof
taps and bubblers will be installed.
Medium- to long-term solutions, such as investigation into waste water reuse for
government-owned buildings and facilities and opportunities to reduce water use
in cooling towers, are also being pursued.
Financial models and other incentives to encourage departments to adopt water-
saving practices are also being studied.






ENERGY MANAGEMENT INFORMATION SYSTEM

Presented by
P. Suresh, M. Nasir
III EEE, GPREC
Kurnool


Abstract

This paper begins by giving a brief view of the
Energy Management Information System (EMIS)
and then discusses the hardware components and
architecture involved at different management
levels and in energy accounting centres, and how
an EMIS is customized using efficient software
systems.

INTRODUCTION
An Energy Management Information
System (EMIS) is an important element of a
comprehensive energy management program. It
provides relevant information to key individuals
and departments, enabling them to improve
energy performance. It involves the following:

Developing responsibility for those who
deal with energy bills and have the
authority to change the way energy is
used.
Providing resources where required.
Collecting and analyzing existing energy
use data.
Undertaking energy audits to determine
where, and how efficiently energy is
used.
Implementing energy saving measures.
Regular reporting of savings achieved.



There are two central energy management
strategies.


Conservation: The avoidance of
wasteful energy-use and the reduction in
demand for energy-related services. This is
achieved mainly through the operation of the
asset.

Efficiency: The reduction in
consumption of energy by introducing more
efficient equipment and systems. This is
achieved mainly through the acquisition of
efficient new plant.




What is EMIS?
An Energy Management Information
System is a programme of well-planned actions
aimed at reducing an organization's energy bills
and minimizing the associated detrimental impact.
An EMIS can be characterized by its
deliverables, features, elements and support.
Deliverables include the early detection of poor
performance, support for decision making and
effective energy reporting.
Features of an EMIS include the storage of data
in a usable format, the calculation of effective
targets for energy use, and comparison of actual
consumption with these targets.
Elements include sensors, energy meters,
hardware and software.
Essential support includes management
commitment, the allocation of responsibility,
procedures, training, resources and regular
audits.
An Energy Management Information
System (EMIS) is only one element of a
comprehensive energy management program
(EMP), albeit an important one without which
full benefits will not be achieved and sustained.
A good EMIS should reduce energy use (and
cost) by at least 5 percent.

EMIS HARDWARE COMPONENTS

The major hardware components of any
EMIS are given below.
Field transducers
Power monitor modules
Data concentrators/PLC (programmable
logic controllers)
Communication bus

Field transducers
Transducers convert physical phenomena
into electrical signals. For example,
thermocouples, resistance temperature detectors
(RTDs), thermistors and IC (integrated circuit)
sensors convert temperature into a voltage or
resistance. In each case, the electrical signal is
proportional to the physical parameter it
monitors. These field transducers supply
information about the measured parameters as
standard electrical signals, typically in the 4-20 mA
form. However, this signal is only a carrier of
information; the information content has to be
defined by calibrating the 4-20 mA signal. The
information content of the signal is translated,
or defined, by a programmable logic controller.
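To make the calibration step concrete, the minimal sketch below maps a 4-20 mA loop current onto engineering units; the 0-150 degC span used in the example is a hypothetical calibration, not taken from any particular EMIS product.

```python
def scale_4_20ma(current_ma, lo_eng, hi_eng):
    """Convert a 4-20 mA loop current into engineering units.

    4 mA maps to lo_eng and 20 mA maps to hi_eng; currents well outside
    the 4-20 mA band are treated as a sensor or wiring fault.
    """
    if not 3.8 <= current_ma <= 20.5:          # small tolerance for loop noise
        raise ValueError(f"loop current {current_ma} mA out of range (sensor fault?)")
    fraction = (current_ma - 4.0) / 16.0       # 0.0 at 4 mA, 1.0 at 20 mA
    return lo_eng + fraction * (hi_eng - lo_eng)

# Hypothetical calibration: a temperature transmitter spanning 0-150 degC
print(scale_4_20ma(12.0, 0.0, 150.0))          # mid-scale reading -> 75.0 degC
```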
Power monitor modules

These modules are capable of
monitoring and displaying several electrical
energy parameters such as current, voltage,
power factor, etc. They require electrical signals
from external CTs (current transformers). The
modules are generally located in the local panels
where the electrical parameters are displayed.
They have the capability to communicate with a
programmable logic controller through an RS485
communication link. This kind of communication
facilitates data acquisition by a remote PC.
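The values delivered over such a link typically arrive as 16-bit register contents that the PLC or PC must decode. The sketch below assumes a purely hypothetical register layout (a 32-bit energy counter in 0.1 kWh steps and a power factor scaled by 1000); it is illustrative only and is not the register map of any real power monitor module.

```python
def decode_energy_kwh(reg_hi, reg_lo, scale=0.1):
    """Combine two 16-bit registers (high word first) into a kWh reading.

    Hypothetical layout: a 32-bit energy counter in 0.1 kWh increments
    split across two registers.
    """
    raw = (reg_hi << 16) | reg_lo
    return raw * scale

def decode_power_factor(reg):
    """Hypothetical layout: power factor stored as a signed 16-bit value x 1000."""
    if reg >= 0x8000:                # interpret as two's complement
        reg -= 0x10000
    return reg / 1000.0

print(decode_energy_kwh(0x0001, 0x86A0))   # 100000 counts x 0.1 -> 10000.0 kWh
print(decode_power_factor(950))            # -> 0.95
```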

Programmable logic controllers

The power sensor and power monitor
modules are not capable of generating all energy
parameters. Therefore, additional calculations
are derived using PLC numeric processing
capabilities.
Control processor unit
The CPU is often referred to as a
microprocessor. The basic instruction set is a
high-level programme installed in the ROM.
The programme logic is usually stored in
EEPROM (electrically erasable programmable
read-only memory). The programming device is
connected to the CPU whenever the operator
needs to monitor, troubleshoot, edit or
programme the system.
Input and output modules
This assembly contains slots to receive
various I/O modules. Each rack is given a
unique address recognizable to the CPU. Power
and communication cables are required for
remote installation. The replaceable I/O
modules plug into a back plane that
communicates directly with the CPU. I/O
modules are available in different
configurations and voltages (AC/DC). Special
modules are available to read analog signals,
produce analog outputs and provide
communication capabilities.
Programmable devices

Computer-based programmers typically
use a special communication board installed in
the PC, with an appropriate software
programme.
Communications network
The communication network provides
pathways for data communication between
various hardware components of the EMIS.
There are various types of networking
technologies, ranging from simple serial links
like RS232/RS485 to complex networking
technologies like Industrial Ethernet.
Different management levels involved
in EMIS design

Management decision-making can be
organized according to the level of managerial
responsibilities.
At the higher level, strategic decisions
require wide sources of information and
flexibility in modeling and presenting
ideas in support of their strategic
mission.
At the middle level, tactical matters
require the management to reach into
corporate records and obtain external
data to implement and support the
company strategy. Control of corporate
resources, such as performance
monitoring and budget planning, are part
of the job of tactical management.
At the lower management levels, daily
operations are the chief concern.
Therefore, there is a need to establish the
EMIS requirement at each of the
management levels. This can be ascertained
by carrying out an actual field interview at
different management levels.

Various processes within an
organization are inter-related, and it is therefore
desirable that the information networks within
the organization are similarly inter-related. This
implies, firstly, a knowledge of all information
networks within an organization and, secondly,
the ability to determine the interfaces between
such networks. The advantages of this kind of
integration are:
It allows the systems to be designed in
such a way that they interlock as they
are implemented, thus providing the
required communication between files.
The overview that the system provides
enables the priorities to be established
from a company viewpoint rather than a
technical viewpoint.
However, integration between various
information systems calls for different kinds
of hardware architecture, leading to
additional costs.

Setting up Energy Accounting Centres
(EACs)
The next step in installing an energy
management programme is to identify, along the
energy flow paths of the plant, a series of EACs
(energy accounting centres). These will provide
the requisite breakdown and framework both for
monitoring energy performance and for
achieving targets. An energy accounting centre
might consist of an individual piece of equipment,
a section/department or even a building as a whole.
Determination of the hardware
architecture of the system

Once the energy accounting centers are
identified, the next step is to determine the
hardware architecture. This would involve
identifying the measuring transducers, the
power monitor modules, configuration of the
PLC, selecting the communication buses, and
finally the configuration of the monitoring PC.

Measurement transducers are
installed to measure various
energy parameters.

In general the power monitor
modules are connected on all
major electrical feeders. These
are essentially power monitor
modules capable of monitoring
and displaying all electrical
parameters namely current, real
power, energy etc.

The signals from these transducers are
transmitted to the input card of the PLC. All
power monitor modules are connected in series
to form a communication network. This
network is interfaced with the PC through the
communication processor in the PLC, where
online monitoring and data logging of the
measured parameters is done. The software
package in the central PC controls all functions,
such as acquiring the data, filing it, analyzing it,
and presenting it in appropriate formats.
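A minimal sketch of this monitoring and logging role is shown below; read_parameter is a stand-in for the real PLC/communication-processor interface, and the tag names and one-second sampling period are invented for the example.

```python
import csv
import random
import time
from datetime import datetime

TAGS = ["feeder1_kW", "feeder2_kW", "plant_air_pressure"]   # invented tag names

def read_parameter(tag):
    """Stand-in for a read through the PLC communication processor.

    A real EMIS would query the power monitor network here; this stub
    returns a random number so the sketch is runnable on its own.
    """
    return round(random.uniform(0, 100), 2)

def log_once(path="emis_log.csv"):
    """Sample every tag once and append a timestamped row to the log file."""
    row = [datetime.now().isoformat(timespec="seconds")]
    row += [read_parameter(tag) for tag in TAGS]
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(row)

if __name__ == "__main__":
    for _ in range(3):          # three sample cycles for demonstration
        log_once()
        time.sleep(1)           # sampling period; configurable in practice
```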

Software selection and customization
of the EMIS:

The software used in these systems
spans a broad range of functionality, from
device drivers for controlling specific hardware
to application software packages for developing
the complete system. The type and quality of the
software used to develop the data acquisition
system ultimately determines its overall
flexibility and usefulness.

Once the EMIS is installed, the next
step is to customize the system to generate the
information required for different management
levels. This involves specifying the sampling
period of different energy parameters for energy
monitoring, specifying formulae and other
analytical tools for the specific data analysis,
and determination of the structure of output as
per the requirements of different management
levels.
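One simple way to capture this customization is a configuration table that records, for each management level, the sampling period and the outputs to be generated. The sketch below is purely illustrative; the levels, periods and report names are assumptions rather than part of any standard.

```python
# Hypothetical EMIS customization table: sampling periods and outputs per level.
EMIS_CONFIG = {
    "strategic": {    # senior management
        "sampling_period_s": 3600,
        "reports": ["monthly_energy_cost_trend", "benchmark_vs_targets"],
    },
    "tactical": {     # middle management
        "sampling_period_s": 900,
        "reports": ["weekly_eac_performance", "budget_vs_actual"],
    },
    "operational": {  # shift and plant operators
        "sampling_period_s": 60,
        "reports": ["live_feeder_loads", "alarm_log"],
    },
}

for level, cfg in EMIS_CONFIG.items():
    print(f"{level}: sample every {cfg['sampling_period_s']} s -> {', '.join(cfg['reports'])}")
```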

Energy benchmarking

The information obtained from the
EMIS can be used to evolve a set of internally
based standards, which could be used as a
yardstick to measure performance. It should be
noted that specific energy consumption is a
function of several variables such as CUF
(capacity utilization factor), machine condition,
etc. Therefore, the effect of these variables
will become visible only when monitoring at the
EACs is carried out over a considerable period
of time.

The plant would also be able to develop
some internal yardsticks for different EACs,
which could be used as performance indicators
where regular comparisons of actual specific
energy consumption with the set of internally
based standards are carried out. The difference
between actual specific energy consumption and
these standards will reveal an improvement in
energy-efficiency levels or decline in the
performance level. Necessary action like
carrying out a detailed energy audit can then be
taken up for identifying measures to improve
the energy performance of the plant.
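As an illustration of this benchmarking idea, the sketch below computes the specific energy consumption of an energy accounting centre and compares it with an internally set standard; the figures and the 5% tolerance are assumed for the example.

```python
def specific_energy_consumption(energy_kwh, production_t):
    """Specific energy consumption (SEC) in kWh per tonne of output."""
    return energy_kwh / production_t

def assess_eac(energy_kwh, production_t, standard_kwh_per_t, tolerance=0.05):
    """Compare actual SEC against the internal yardstick for an EAC."""
    sec = specific_energy_consumption(energy_kwh, production_t)
    deviation = (sec - standard_kwh_per_t) / standard_kwh_per_t
    if deviation > tolerance:
        return f"SEC {sec:.1f} kWh/t is {deviation:.1%} above standard -> consider a detailed energy audit"
    return f"SEC {sec:.1f} kWh/t is within {tolerance:.0%} of the internal standard"

# Assumed monthly figures for one energy accounting centre
print(assess_eac(energy_kwh=480_000, production_t=1_000, standard_kwh_per_t=450.0))
```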


DESIGNING AN ENERGY MANAGEMENT INFORMATION SYSTEM

(Flowchart: Start; determine the EMIS requirement at different management levels;
identify integration requirements if integration with other MIS is required; set up energy
accounting centres; decide the hardware architecture of the system; installation and
customization; debugging; run the EMIS, returning to debugging if not OK; End.)

Identifying plant outages:

The EMIS can help in identifying the
reasons behind various breakdowns and
outages. The monitoring software generally
installed in the central PC shows trends and can
also record disturbances of various parameters.
These can be analyzed to identify the reasons
behind breakdowns and outages. Suitable
measures and operating practices can then be
developed to improve plant performance,
thereby reducing monetary losses. This type of
information is generally required at the lower
management levels.

Conclusion

EMIS is a field of vital importance. It
provides an overview of how such a system can
be established at minimum cost, using
well-designed software together with efficient
hardware components, so that analyses can be
drawn taking into consideration past and
ongoing energy use, and so that scope for
saving energy is identified.











































PAPER ON

EVOLUTION OF SCADA TO AUTOMATION

(With latest advancements)











Authors:-

T.ANURADHA B.C.S VIMALA
EEE 4/4 II SEM, EEE -4/4-II SEM,
VNRVJIET, BACHUPALLY. MGIT ENGG COLLEGE,
HYDERABAD, HYDERABAD.
Email:- anu_optimal@yahoo.com








ABSTRACT

This paper describes one of the most advantageous power
system protection systems, named SCADA. It describes the
SCADA system in terms of its architecture, introduction,
components, functionality and the evolution from SCADA to
automation, with a brief explanation of the substation interface,
monitoring state changes in equipment, controlling power
equipment, the operations centre, master stations,
communication advancements, communication pathways and
the application development facilities they provide. The paper
also describes the latest advancements in power system
protection using Intelligent Electronic Devices (IEDs), which
replace RTUs. Some attention is also paid to the advantages of
SCADA over other protection systems.










EVOLUTION OF SCADA TO AUTOMATION

INTRODUCTION
A power system involves very costly equipment. If we do not protect this costly
equipment from over-currents, over-voltages, external faults and other damage, huge
losses are incurred. So we have to protect the power system from these abnormal
conditions.
Generally, the protection systems used in India are inefficient, require more labor, and
require more time to find a problem and make the system work properly. The protection
system named SCADA (supervisory control and data acquisition) provides fast, efficient
and accurate protection of power system equipment.
The electric utility SCADA system is now a mainstay of utility operations. The
long-sought improvements in efficiency promised by upgrading manned substations to
monitored substations have been largely achieved. While Substation Automation is
considered current technology, it is valuable to understand the steps in its technological
evolution and to recognize that some of that history is still in use in utilities today. This
paper will trace the changes from early technology to the Substation Automation System.
It will discuss the functions that exist for SCADA or Automation Systems and identify
the significant changes that have occurred along the way.
Why SCADA is so popular
The major attraction of SCADA to a municipality is the ability to significantly reduce
operating labor costs while at the same time actually improving plant or regional
system performance and reliability. Information gathering within a plant no longer
requires personnel to spend time wandering all over the site, and correspondingly the
frequency of field site inspections required in a regional system can be minimized.
SCADA based alarming is also very reliable since it is in-house and tied directly to
process control.


BASIC VIEW OF SCADA SYSTEM

Supervisory Control and Data Acquisition (SCADA)
SCADA systems consist of one or more computers with appropriate applications software
(Master Stations) connected by a communications system (wire, radio, power line carrier
or fiber optics) to a number of remote terminal units (RTUs) placed at various locations to
collect data, perform remote control and, more recently, perform intelligent autonomous
(local) control of electrical systems and report results back to the remote master(s).
OVERVIEW OF SCADA SYSTEM FUNCTIONS
The SCADA System connects two distinctly different environments. The substation,
where it measures, monitors, controls and digitizes and the Operations Center, where
it collects, stores, displays and processes substation data. A communications pathway
connects the two environments. SCADA system RTUs collect measurements of
power system parameters, transport them to an Operations Center where the SCADA
Master presents them to system operators. But other measurements like tank levels,
pressures and tap positions are common to SCADA systems. These belong to the class
of measurements termed analogs. SCADA system master stations monitor the
incoming stream of analog variables and flag values that are outside prescribed limits
with warnings and alarms to alert the system operator to potential problems. In the
Operations Center a SCADA system has at least one computer, communicating to
substations and/or generating stations collecting data, issuing control commands, and
storing the incoming data. The system operator views data and messages through a set
of displays on view stations. Besides these basic functions the Operations Center
computer archives data and displays selected data sets, such as trends and logs in
special ways for the operators.
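The limit-checking behaviour described above can be sketched in a few lines. The example below is not any vendor's master-station code; the point name and limit values are invented.

```python
from dataclasses import dataclass

@dataclass
class AnalogPoint:
    """One SCADA analog point and its prescribed limits (values are illustrative)."""
    name: str
    low_alarm: float
    low_warn: float
    high_warn: float
    high_alarm: float

def classify(point, value):
    """Return NORMAL, WARNING or ALARM for an incoming measurement."""
    if value <= point.low_alarm or value >= point.high_alarm:
        return "ALARM"
    if value <= point.low_warn or value >= point.high_warn:
        return "WARNING"
    return "NORMAL"

bus_voltage = AnalogPoint("132kV_bus_voltage_kV", 119.0, 125.0, 139.0, 145.0)
for v in (131.5, 140.2, 146.0):
    print(f"{bus_voltage.name} = {v}: {classify(bus_voltage, v)}")
```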

THE OPERATION CENTER
The SCADA system has always focused around activities in the utility operation
center. Whether the system supports transmission, distribution or generation dispatch
functions, or a combination of these, its task remains essentially the same. These
include: collecting data from substations and power plants, storing data, populating
displays and graphics with data, providing a mechanism to operate equipment,
maintaining logs of important activities and alerting system operators to abnormal and
threatening conditions and events.
MASTER STATION
In the evolution of SCADA systems, the operations center control and display
area acquired the role of central controller for the system. All the remote measuring
and controlling elements responded to its queries and commands; they remained
quiescent otherwise. In this environment the operations center is the Master. In an
automation environment this role often changes: the remote sites ship data without
request and the operations center master only behaves as a data repository (server) for
operations centers and corporate users.
The Computer-Based SCADA System
The advent of the digital process control computer made the SCADA system take on its
present form. Computer-driven SCADA systems emerged from the process control
industry, tailored to meet the needs of the electric, oil, gas and water industries.
The SCADA Master Station, which monitors and controls RTUs and their attached
electric apparatus, is no longer a turnkey custom product. The Master Station has a core
program called the operating system. Running on the operating system are the
application programs written by the utility or the SCADA vendor.
REMOTE TERMINAL UNITS
RTUs are special-purpose computers which contain analog-to-digital converters (ADC),
digital-to-analog converters (DAC), digital inputs (status) and digital outputs (control).
Types include transmission automation RTUs, distribution automation RTUs,
programmable logic controllers, electrical transducers and pulse accumulators.
RTU SOFTWARE
RTU software is written by the manufacturer, usually in a high level language such as C,
and is designed to interpret commands (polling requests) from a master computer and to
initiate reports of out of bound conditions.
SCADA PROTOCOLS
A protocol is necessary in order for a SCADA master to create a path for exchanging
information with RTUs; it establishes the necessary conventions, a standard
communications path, and standard data elements. SCADA protocols are broadly of two
types: bit protocols and byte protocols.
The SUBSTATION SCADA ENVIRONMENT
This section will describe the substation environment and the evolution that leads
to Substation Automation. Measuring Systems bridge the physical world to the
Operations Center display. Whether they are electrical measurements associated with
the transportation, generation or distribution of electric energy or physical
measurements of temperatures, pressures, position etc; measuring systems transform
the physical world to the digital world. They are a critical aspect of SCADA and
Automation Systems.
SUBSTATION INTERFACE
A key function for SCADA or Automation systems is measuring activity on the
power system, processing the measurements and reporting that data to an operations
center. Improved capacitor voltage transformers (CVTs) have made revenue-class
measurements possible at lower cost than VTs. Since the CVT is required for
transmission line protection schemes that use power line carrier, this accuracy
improvement simplifies finding voltage sources for SCADA measurements. The latest
technology for sensors is based around light-wave polarization rotation and fiber-optic
transmission techniques. These are emerging technologies and promise very high
accuracy and wide dynamic range.
MEASURING DEVICES.
Along with the voltage and current sensors is the measuring device to convert the
instrument transformer signals into something that can be easily converted to a digital
measurement for transportation to the Operation Center.

Fig2: Measuring device connections to sensors
Here the evolutionary forces have been hard at work. Over the last three decades the
semi-conductor industry has provided components to make dramatic changes in the
conversion process.
Early on, A/D converters were expensive. Most RTUs had only one A/D and the
inputs were switched (multiplexed) to the A/D for conversion periodically on
command from the operations center computer. Such switching also served to protect
the A/D converter from voltage and current transients that can occur on the input
terminals. A steady stream of semiconductor improvements made A/Ds cheaper, more
accurate, more stable and more reliable.

Fig 3: Analog to digital converters
The microprocessor and the low-cost A/D converter have changed the processing of
analog signals in the substation dramatically.
Combining Measuring Functionality With Other Functions
The electric utility industry grew up with separate devices performing protection,
revenue measurement and operations measurements. But as the semiconductor industry
provided new technology for these differing functional products, their suppliers
discovered added market potential simply by providing measurements that fit another
product's functionality.
WHERE HAS THE RTU GONE?
The RTU no longer has to convert the data from analog signals. This function has
been driven down to measuring units and programmable controllers. The RTU's
function becomes that of a gateway to convert protocols and ship the data where it is
needed. Standard protocols make it possible to eliminate the RTU function for
handling power system measurements. A networked automation environment allows
multifunction intelligent electronic devices to serve multiple users transparently.
Hence we can say that IEDs are slowly replacing RTUs.

MONITORING STATE CHANGES IN EQUIPMENT
The movement towards combining functions with IEDs has impacted the method of
collecting state data. Often an IED can collect state data rather than having to wire to
an RTU. A breaker state might be collected from a protective relay, a power monitor
or a PLC which provides some form of substation control for the breaker.


Fig 4: State monitoring
REPLACING REMOTE TERMINAL UNITS (RTUs) WITH
INTELLIGENT ELECTRONIC DEVICES (IEDs)

Have the advancements in substation meter technology created an intelligent
electronic device (IED) that can replace the SCADA system's remote terminal
unit (RTU)? The rapid advances in semiconductor technology have resulted in a
blurring of the common distinctions of substation device functionality, along with a
merging of communications capabilities. Often, as an individual acquires a working
knowledge of data network technology, it is easy to forget the more primitive
technological environment in which the original RTU was created. Thus, it is easy to
forget that the RTU was designed to solve specific operational problems such as
momentary change detection or safety features that restrict the operation of circuit
breakers.

THE METER AS AN RTU

The answer to the question of whether the smart meter can replace the RTU is "yes,
maybe." The "maybe" is because it depends on how the user has implemented the DNP3
protocol.

A SIMPLE EXPLANATION OF THE DNP3 PROTOCOL
DNP3 (Distributed Network Protocol) is a set of communications protocols used
between components in process automation systems. Its main use is in utilities such as
electric and water companies. Usage in other industries is not common, although
technically possible. Specifically, it was developed to facilitate communications
between various types of data acquisition and control equipment. It plays a crucial
role in SCADA systems, where it is used by SCADA Master Stations (aka Control
Centers), Remote Terminal Units (RTUs) and Intelligent Electronic Devices (IEDs). It
is used only for communications between a master station and RTUs or IEDs; ICCP,
the Inter-Control Center Protocol, is used for inter-master-station communications.

Coming back to the topic, consider an industrial customer substation that
contains a bulk power meter and an RTU. The traditional design would have the
substation meter send pulses to the RTU, which would accumulate the energy readings.
At a specified time, the SCADA master would send a freeze command to the RTU and
then download the accumulated energy value. In addition, the RTU could respond to
polls from the SCADA master and send back analog readings of measured quantities
and device status indications. RTUs have also been designed to report changes to values
or status by exception against reference values or indications. Thus, instead of
responding to a periodic poll, the RTU would notify the SCADA master that an
exception had occurred by sending the new value or indication to the SCADA master
without waiting for a request from the SCADA master.
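To illustrate the report-by-exception idea, the sketch below applies a simple deadband to a stream of measurements and reports only significant changes. It is a conceptual illustration only, not an implementation of the DNP3 protocol; the deadband and sample values are invented.

```python
class ExceptionReporter:
    """Report a value only when it moves outside a deadband around the
    last reported value -- the report-by-exception behaviour described above."""

    def __init__(self, deadband):
        self.deadband = deadband
        self.last_reported = None

    def update(self, value):
        """Return the value if it should be reported, otherwise None."""
        if self.last_reported is None or abs(value - self.last_reported) > self.deadband:
            self.last_reported = value
            return value
        return None

reporter = ExceptionReporter(deadband=2.0)          # report changes larger than 2 units
for sample in (100.0, 100.5, 101.9, 103.5, 103.0, 110.0):
    reported = reporter.update(sample)
    if reported is not None:
        print(f"exception report sent: {reported}")
    # otherwise stay quiet until the value changes enough (or the master polls)
```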
RTU LIMITATIONS THAT THE SMART METER DOES NOT HAVE
The two most significant limitations of RTUs are the lack of memory and the inability
to perform logic operations and calculations. Today's smart meter comes with
sufficient memory to store 15-minute interval kW values for 31 days, waveforms for
transient disturbances, and sequences of events. In addition, most smart meters have
quite extensive programming capability using gated logic, which is defined as a
programming approach that uses binary logic rather than a structured programming
language.
One consideration in the use of the stored information in the meter is the recovery of
this information. It must be remembered that the RTU is designed to handle
instantaneous readings and that the RTU, until the advent of DNP3, was not able to
handle file transfers. Even with a DNP3 implementation, one must be sure that the
RTU has the ability to transfer object 70 and that the SCADA master has the ability to
process DNP3 object 70.

SOME ADVANCED FEATURES OF SMART METERS
These new power meters can be viewed as stand-alone IEDs or as communications
devices in mini-SCADA systems.
The new capabilities of the meters as substation IEDs are impressive. These meters
are described by their manufacturers as certified billing-grade revenue meters, capable
of performing as RTUs with I/O and control, a power quality recorder and an EN50160
flicker monitor; certified as compliant with DNP3 level 2 and Modbus; having significant
on-board memory; offering a variety of communications capabilities, including onboard
Ethernet connectivity; and providing a total Web server solution that includes alarming,
data viewing and XML export capability.
DESCRIBED BELOW ARE DETAILS OF THESE IMPORTANT FEATURES:
Detailed Power Quality Recording
For 16-bit waveform and fault recorders, the meter records up to 512 samples per
cycle for an event. Voltage and current are recorded with pre-and post-event analysis.
Both hardware and software triggers are available to activate a waveform recording,
which can be used for tasks such as power quality surveys, fault analysis, and breaker
timing.
Measure and Record Harmonics to the 255th Order
The meter measures harmonics up to the 255th order for each voltage and current
channel. Real-time harmonics are resolved to the 128th order. Percent THD and
K-Factor are also calculated.
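For reference, percent THD can be computed from the harmonic amplitudes as in the sketch below; this is the generic textbook definition, not the meter's internal algorithm, and the sample amplitudes are invented.

```python
import math

def percent_thd(fundamental, harmonics):
    """Total harmonic distortion: RMS of the harmonics relative to the fundamental."""
    return 100.0 * math.sqrt(sum(h * h for h in harmonics)) / fundamental

# Invented example: a 230 V fundamental with small 3rd, 5th and 7th harmonics
print(f"{percent_thd(230.0, [9.2, 6.9, 3.4]):.2f} % THD")
```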
Status Input Triggers
The meter records the waveform at the time of a status change. The input change
and waveform recording are time-stamped to a 1 ms resolution.
Additional Inputs
The meter offers inputs for neutral-to-ground voltage measurements. This allows the
user to analyze rising ground potential, which often damages electrical equipment. The
meter also calculates and measures the neutral current.
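The neutral-current figure mentioned above is, in circuit-theory terms, simply the phasor sum of the three line currents. The sketch below shows this for invented, slightly unbalanced currents; it is not the meter's firmware.

```python
import cmath
import math

def phasor(magnitude, angle_deg):
    """Build a current phasor from an RMS magnitude and an angle in degrees."""
    return cmath.rect(magnitude, math.radians(angle_deg))

# Slightly unbalanced three-phase load (invented RMS values, in amps)
i_a = phasor(100.0, 0.0)
i_b = phasor(95.0, -120.0)
i_c = phasor(105.0, 120.0)

# By Kirchhoff's current law the neutral carries the negative of the phasor sum
i_n = -(i_a + i_b + i_c)
print(f"Neutral current magnitude: {abs(i_n):.2f} A")
```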
Sub-Cycle Transient Recorder.
The meter records sub-cycle transients on voltage and current readings. It monitors
switching noise from capacitors, static transfer switches, SCRs, and many other
devices that can harm power quality.
Reports and Power Quality Analysis
Using artificial intelligence, the meter evaluates all data, rates events for severity,
identifies probable causes, recommends corrective actions or solutions, and prepares
and formats reports of all power quality events.
Expandable External I/O, RTU Capability, Control
These meters offer multiple analog and digital I/O modules. The meters can poll
different I/O devices, log data, and send data to a master station via Modbus or DNP3.
In addition, the meters come with advanced logic and control programming capability
that can be used to specify limits and define logic to trigger operation. This capability
can be used for capacitor control, load shedding, automatic transfer schemes,
transformer monitoring and control, redundant protection, and other applications.
Controlling Power Equipment
SCADA systems are used to control power equipment. This normally includes circuit
breakers, switches, tap changers and peaking generators.
Traditional RTU Control
Traditional SCADA systems have an interface to the control circuits of the power
equipment, driven by an RTU.
Distributed Control
In a distributed SCADA system, control of power equipment is implemented with
IEDs. Rather than having all the control logic and security in a common controller like
the RTU, distributed control places it in multiple controllers.
Advanced Applications
SCADA systems have acquired a large list of advanced applications to execute,
with inputs based on SCADA data. As a result they have been renamed Energy
Management Systems (EMS) or Distribution Management Systems (DMS). EMS advanced
applications include state estimation to refine the incoming data, contingency analysis
to spot the utility's weak spots, and power system simulators that allow operations center
employees to try "what ifs" before making changes to the network. DMS applications
have included trouble call analysis, outage analysis and even crew scheduling.

Fig5: Master station software layering for real time and advanced applications

SCADA COMMUNICATIONS
To connect RTUs and master stations, a wide area network (WAN) is required, typically
composed of more than one media type because a single technology is often not
economically practical for utilities that span several states. The various communication
media used are metallic cable, derived telephone channel technology, two-way land
mobile radio, power line carrier and fiber-optic technology.


COMMUNICATION PATHWAYS
While this discussion has centered around wire-line communications paths, and this is
the predominant media, there are others that bring new options.
Wireless Technologies
Wireless communication services are mushrooming. The explosion of cellular and PCS
phone service offers some SCADA system solutions as well. Current technology makes
it possible to connect to field devices using digital modems and standard wireless
messaging, just like dial-up telephone connections for PCs. Another wireless service,
Cellular Digital Packet Data (CDPD), uses the low-traffic control channel of the
cellular system to transfer data suitable for SCADA systems.
High Speed Services
There are also a number of large-scale, high-speed communications pathways
available. Many utilities own and operate their own communications networks, which
are microwave based. Microwave networks are licensed by the Federal
Communications Commission (FCC). A microwave backbone gives utilities a large
pipe to transport voice and data. To the SCADA system these behave like copper
lines.



Fig 6: Microwave communication system for SCADA.

ADVANTAGES
Reduced waste of human resources.
High level of software re-usability creates a focus on problem solving.
Reduced misunderstandings and errors.
Provides programming techniques usable in a broad area: general industrial
control.
Combines harmoniously different components from different locations,
countries, and projects.
Telemetry can significantly reduce operational expenses and material
requirements in a properly planned system.
As sites are able to be monitored and controlled from the System Control
Computer, the need to drive to the site is greatly reduced.
Provides more supplier independence at control level.
High level of re-usability reduces costs and increases confidence in planning.
More independent towards system / service suppliers.
Usable world wide, over and over again.
Provides a better communication tool between engineers at different locations.


CONCLUSION
In the years that SCADA systems have existed there has been a steady stream
of improvements driven by forces of technological change. This chapter has
discussed many of them. But the next significant leap is the Automation System.
That has embraced the architecture of the information technology network and
offers a completely different way to implement the functions performed in the
substation and Operations Center. This new form offers many potential
enhancements and some interesting new problems.











SRI VENKATESWARA UNIVERSITY
COLLEGE OF ENGINEERING,
TIRUPATI-517502,A.P.


PAPER PRESENTATION

ON

Design And FEA Simulation of Single Wafer Process MEMS Capacitive
Microphones



PRESENTED BY



K.KHADAR VALLI P.VASU
III- B.Tech, III B.Tech,
EEE, EEE,
SVUCE, SVUCE,
TIRUPATI TIRUPATI

E-Mail: vasuinsvu@yahoo.com



















Design And FEA Simulation Of Single Wafer Process MEMS Capacitive
Microphones


Abstract

This paper presents the design, analysis and simulation of a
condenser microphone utilizing a thin poly silicon
diaphragm suspended above a silicon nitride perforated
back plate. The microphone is constructed using a
combination of surface and bulk micromachining
techniques in a single wafer process without the need of
wafer bonding. The finite element analysis (FEA) using the
IntelliSuite MEMS simulation tool and equivalent circuit
method have been used to predict the structure behaviors
and the microphone performance. The simulation result
shows sensitivities of 34.67 mV/Pa for a 2.5 mm² diaphragm
with a bias voltage of 8 V, in good agreement with expected
performance calculations.

Keywords: MEMS, Single Wafer, Capacitive
Microphone, FEA, Sensitivity

1. Introduction

Over the last two decades, extensive efforts have been
devoted to the development of micromachined miniature
microphones [1]. Miniature microphones offer superior
performance in hearing aids, detectaphones, ultrasonic and
precision acoustic measurement, noise and vibration
control, voice communications and electronic products. In
all these areas, the silicon miniature microphone seems to be
very promising [2]. Micromachining has potential
advantages in these applications, such as small size,
excellent mechanical properties and batch fabrication at low
cost. Therefore, it is believed that cheaper and smaller
microphones with high performance can be realized
with micromachining techniques [3].
Most silicon microphones are based on the capacitive
principle because of their high sensitivity, flat frequency
response, small size, low noise level and low power
consumption [4]. The capacitive microphone consists of a
thin, flexible diaphragm and a rigid back plate. In this paper
we use low-stress poly silicon as the diaphragm, a silicon
nitride perforated back plate, an n-type silicon substrate and
the metal contacts. The equivalent-circuit method and
finite-element analysis (FEA) using the IntelliSuite MEMS
simulation tool are applied to simulate the mechanical and
acoustic performance of the microphone.

2. Design of MEMS Capacitive Microphone
The high-sensitivity microphones for the proposed
acoustical sensor are of the capacitive type and can be
fabricated as a single structure using MEMS technology.
The microphone is to be fabricated using a diaphragm, an
air gap and a back plate with acoustical ports, as shown in
Fig. 1. When oppositely biased, the diaphragm and the back
plate constitute an air-core parallel-plate capacitor. When
an acoustic wave strikes the diaphragm, it causes the
diaphragm to vibrate; this changes the capacitance due to
the changing air gap. The capacitance change causes a
time-varying current.
A well-designed subminiature microphone should be easy
to manufacture, should deliver a large output signal despite
its small size, and should have a flat frequency response and
a low noise level. A large microphone output signal is
obtained if the open-loop microphone sensitivity
(S_open = S_m · S_e) is high. The mechanical sensitivity is
defined as the increase in the deflection of the diaphragm's
center, dw, resulting from an increase in the pressure, dP,
acting on the diaphragm: S_m = dw/dP. A high mechanical
sensitivity (S_m) to
dynamic changes in air pressure requires the use of a low
stress material to construct a thin membrane with a large
surface area and a small air gap between the diaphragm and
the back plate. The desired miniaturization is in contrast to
a large membrane area and also, technology sometimes
limits the upper size. Therefore, it might be advantageous to
use an array of smaller microphones instead of one large
one. Low streaming losses in the air gap prevent a reduction
in mechanical sensitivity at higher frequencies. Providing
acoustical ports in the back plate can minimize losses at
high frequencies, due to the compression of air in the air
gap.
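As a rough numerical illustration of the decomposition S_open = S_m · S_e, the short Python sketch below starts from the bias voltage and air gap used in this design (8 V and 3.0 um) and the reported open-loop sensitivity of 34.67 mV/Pa, and backs out the mechanical sensitivity the diaphragm must provide. The plate-capacitor approximation S_e ~ V_b/d_0 is an assumption of the sketch, not a formula stated in this section.

# Minimal sketch of the open-loop sensitivity decomposition
# S_open = S_m * S_e, assuming the plate-capacitor approximation
# S_e ~ V_b / d0 (numbers taken from this design).

V_b = 8.0                   # bias voltage [V]
d0  = 3.0e-6                # air gap [m]
S_open_target = 34.67e-3    # reported open-loop sensitivity [V/Pa]

S_e = V_b / d0              # electrical sensitivity [V/m]
S_m = S_open_target / S_e   # mechanical sensitivity needed [m/Pa]

print(f"S_e    = {S_e:.3e} V/m")        # ~2.7e6 V/m
print(f"S_m    = {S_m:.3e} m/Pa")       # ~1.3e-8 m/Pa (about 13 nm/Pa)
print(f"S_open = {S_m * S_e * 1e3:.2f} mV/Pa")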
The shape of the frequency response of the microphone is
determined by the damping and resonance behavior of the
microphone structure, which depends mainly on the size
and stress of the diaphragm. Damping is caused by losses in
the diaphragm and the viscous losses associated with the air
streaming in and out of the air gap. The perforation of the
back plate to create acoustical ports provides a means to
control streaming losses and therefore the damping
characteristics of the microphone structure. A low damping
property can be obtained using a highly perforated back
plate [5, 6].
The sensitivity and the resonance frequency of the
diaphragm can be optimized by choosing appropriate
structure parameters. The thicker the diaphragm, the higher
the resonance frequency, but the lower the mechanical
sensitivity.
The upper limit of the operating frequency can be adjusted
by the density and size of the holes in the back plate. The
lower limit of the frequency can be adjusted by the size of
the static pressure hole in the diaphragm [7].
The air between the electrodes offers a mechanical
resistance to the membrane deflection. If the back electrode
is realized as a perforated plate, an oscillating membrane
causes air to stream laterally through the air gap towards the
holes. The narrow air gap offers a high resistance to the
streaming air, caused by friction between neighboring layers
of air due to the viscosity of air, according to the following
equation:

    R_gap = (12·η / (π·n·a^2·h^3)) · [ A/2 − A^2/8 − ln(A)/4 − 3/8 ]        (1)

where a is the diaphragm side, h the air gap size, n the
ventilation hole density, η the viscosity of air and A the
surface fraction occupied by the ventilation holes. In order
to realize a low resistance for air streaming from the air gap
to the holes, a highly perforated back plate can be employed.
The distance between adjacent holes in the back plate should
be as small as possible. Furthermore, the radius of each hole
should be as large as possible [8].
In Eq. (1), the air resistance of the air gap, R_gap, has a close
relation to the density of the air holes in the back plate and
to the size of the air gap. Although increasing the air gap
can effectively reduce the air resistance and thus increase
the -3 dB cut-off frequency, the microphone capacitance
may become too small to be measured. Hence increasing
the density of air holes is an effective method [9].
The electrical sensitivity S_e = dV/dS_a, with dS_a being the
change in thickness of the air gap and dV the resulting
change in the voltage across the air gap, can be expressed by
S_e = E_a = V_b / dS_a if the electrical field E_a in the air gap is
assumed homogeneous, where V_b is the bias voltage. S_e
becomes large if V_b is large and dS_a is small. In fact, the
electrical sensitivity increases with the bias voltage.
However, the bias voltage cannot be increased without
limit. At a certain pull-in voltage, the diaphragm collapses
onto the back plate [10]. Therefore, improving the mechanical
sensitivity of the diaphragm becomes the correct approach
towards a high sensitivity of condenser microphones.


Fig. 1 Schematic description of a miniature condenser
microphone with flat diaphragm

At higher frequencies, the air flowing in and out of the air
gap causes the thin back plate to vibrate in phase with the
diaphragm, and results in a decrease of the microphone output
signal [11]. A rigid back plate is a prerequisite of good
performance. A deposited stiff back plate can only be
achieved if the thickness is large, i.e. in excess of at least
5 um. Such a thickness is difficult to achieve by deposition
because internal stresses cause cracks and deposition times
are often long. The use of a stress-free monocrystalline
back plate of approximately 20 um is stiff enough to
prevent a reduction in mechanical sensitivity at higher
frequencies.
In order to prevent the diaphragm and the back plate from
sticking after sacrificial layer release, several supports were
designed. This can reduce stiction by reducing the surface
contact between the diaphragm and back plate, and proved
to be efficient in the release process [12].
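To make the role of the back-plate perforation concrete, the Python sketch below evaluates the air-gap streaming resistance, assuming the Skvor-type form of Eq. (1). The gap height and diaphragm side are taken from Table 1; the hole-area fraction A and the air viscosity are illustrative assumptions.

import math

# Sketch: air-gap streaming resistance R_gap of Eq. (1) (Skvor-type form)
# versus back-plate hole count.  a and h follow the design; eta and A are
# illustrative assumptions.

eta = 1.8e-5      # viscosity of air [Pa*s] (assumed)
a   = 2.5e-3      # diaphragm side [m]
h   = 3.0e-6      # air gap [m]
A   = 0.2         # surface fraction occupied by holes (assumed)

def r_gap(n_per_m2):
    """Air-gap resistance for n holes per unit area of back plate."""
    bracket = A / 2 - A**2 / 8 - math.log(A) / 4 - 3 / 8
    return 12 * eta / (math.pi * n_per_m2 * a**2 * h**3) * bracket

for n_holes in (900, 1280, 1700):          # hole counts used in Fig. 5
    n_density = n_holes / a**2             # holes per m^2
    print(f"n = {n_holes:4d} holes -> R_gap ~ {r_gap(n_density):.3e}")

As expected from Eq. (1), the resistance falls as the hole density rises, which is why a denser perforation pushes the viscous roll-off to higher frequencies.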
3. Simulations

Simulations included performance analyses of the
microphone and mechanical analyses of the structure. The
influence of the viscous damping, the sensitivity and the
frequency response of the microphone were analyzed with
an equivalent circuit method. The fundamental
characteristics of the structure, including mechanical
sensitivity, resonance frequency and static deflection of the
diaphragm, were simulated with the finite-element
method (FEM) using the IntelliSuite MEMS simulation tool.
The design parameters for the microphone are given in
Table 1.

Table 1 - Design Parameters

Parameter                           Value
Diaphragm thickness, t              0.8 um
Diaphragm side length, a            2.5 mm
Air gap thickness, d_0              3.0 um
Young's modulus, E                  160 GPa
Poisson's ratio, v                  0.22
Diaphragm residual stress, σ_r      20 MPa
Diaphragm material                  Poly Si
Back plate thickness, h             1.0 um
Back plate material                 Silicon Nitride
Back plate hole size                um^2

3.1. Microphone performance simulation using
equivalent-circuit method

The performance of the microphone depends on the size
and stress of the diaphragm. Other parameters, such as the
air gap distance and the bias voltage, also affect the
sensitivity. The dynamic behavior of the microphone can be
calculated using an equivalent analog electrical model [13]
as given in Figure 2. The acoustic force, F_sound, and the flow
velocity of air, V_m, are modeled as equivalent voltage and
current sources, respectively. The air radiative resistance is
defined as R_r and the air mass as M_r. The diaphragm
mechanical mass is M_m and its compliance is C_m. The air
gap and back vent losses are represented by the viscous
resistances R_g and R_h, respectively. The air gap compliance
is given by C_a [14].


The electric field in the air gap is E_a = V_b / d_0,
where V_b is the bias voltage between the diaphragm and the
back plate. Therefore, the sensitivity of the microphone is a
function of the frequency. Calculations with numerical
values of the microphone's mechanical elements yield
predictions of the sensitivities and the frequency ranges of
the microphone. The capacitance between the diaphragm
and the back plate can be expressed as:

    C = ε_0 · a^2 / d_0                                        (7)


Fig. 2 Equivalent electrical circuit of the microphone

The first resonant frequency of the diaphragm is [14]:

    f_res = sqrt( (1/(ρ·t)) · ( D/a^4 + T/(2·a^2) ) )          (2)

where a is the diaphragm edge width, T = σ_r·t is the tensile
force, σ_r is the residual stress, t is the diaphragm thickness
and ρ the density of the diaphragm material. D is the flexural
rigidity of the diaphragm, given by [15]:

    D = E·t^3 / ( 12·(1 − v^2) )                               (3)

where E is Young's modulus of elasticity and v is Poisson's
ratio. The total equivalent mechanical impedance, Z_t,
following from the equivalent electrical circuit shown in
Figure 2, can be obtained as:

    Z_t = R_r + jω·(M_r + M_m) + 1/(jω·C_m)
          + (R_g + R_h) / (1 + jω·(R_g + R_h)·C_a)             (4)

The open-circuit output voltage V_o of the microphone, under
the assumption of constant charge on the condenser plates,
is given by:

    V_o = E·x = E·V_m/(jω) = E·F/(jω·Z_t) = E·P·A_eff/(jω·Z_t)  (5)

where E is the electrical field in the air gap, x is the average
diaphragm deflection, V_m = jω·x is the velocity of the
diaphragm, F = P·A_eff is the force on the diaphragm and P is
the sound pressure. Consequently, the sensitivity of the
microphone is obtained by dividing the output signal by the
applied acoustic sound pressure, yielding the following
expression:

    S = V_o / P = E·A_eff/(jω·Z_t) = V_b·A_eff/(jω·d_0·Z_t)     (6)

3.2. Capacitive Microphone simulation using IntelliSuite
MEMS Tool

The analysis was done using the IntelliSuite MEMS design
and simulation tool. The analysis objectives are:
1. To verify the deformation of the diaphragm due to the
   electrostatic attraction force between the diaphragm and
   the back plate and the mechanically applied force.
2. To verify the capacitance between the diaphragm and
   the back plate.
The analysis options are non-linear mechanical analysis, an
accuracy of convergence of 0.001 micron, a maximum of 20
iterations and a maximum mesh size of 2.4% of the X-Y
dimension. The microphone diaphragm has a proposed
thickness of 0.8 um, an area of 2.5 x 2.5 mm^2, an air gap of
3.0 um and a 1.0 um thick silicon nitride back plate with
acoustical ports. An 8.0 V DC bias voltage is applied between
the diaphragm and the back plate. It is possible to increase
the bias voltage until the electrostatic force between the
diaphragm and the back plate is so large that the diaphragm
collapses. The collapse voltage depends on the diaphragm
stiffness.
Fig. 3 shows the simulation setup of the MEMS microphone.
The silicon nitride back plate, the silicon wafer faces and the
four lateral faces of the poly silicon diaphragm are fixed in
X, Y and Z.


Fig. 3 Simulation Setup

4. Results and discussion

Using t = 0.8 um and d = 3.0 um, the calculated frequency
response for the 2.5 x 2.5 mm^2 microphone at different
biases is shown in Fig. 4. The sensitivity decays in the high
frequency range due to the viscous losses in the air gap and
back vent holes.
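For readers who want to reproduce a curve of the kind shown in Fig. 4, the sketch below evaluates the sensitivity magnitude |S(f)| from the equivalent circuit of Eqs. (4)-(6). The lumped element values are illustrative placeholders (roughly scaled so that the flat band lands near the reported 34.7 mV/Pa); they are not the authors' fitted values.

import numpy as np

# Sketch: sensitivity magnitude |S(f)| from the equivalent circuit of
# Eqs. (4)-(6).  All element values below are illustrative placeholders,
# not the authors' fitted values.

V_b, d0 = 8.0, 3.0e-6            # bias voltage [V], air gap [m]
a       = 2.5e-3                 # diaphragm side [m]
A_eff   = a**2                   # effective area [m^2] (assumed)
E       = V_b / d0               # electric field in the gap

R_r, M_r = 1e-3, 2e-9            # air radiation resistance / mass (assumed)
M_m, C_m = 1.2e-8, 2.1e-3        # diaphragm mass [kg] / compliance [m/N]
R_g, R_h = 2e-4, 1e-4            # air-gap and back-vent losses (assumed)
C_a      = 4e-6                  # air-gap compliance (assumed)

f  = np.logspace(2, 5, 201)      # 100 Hz ... 100 kHz
jw = 1j * 2 * np.pi * f

Z_t = (R_r + jw * (M_r + M_m) + 1.0 / (jw * C_m)
       + (R_g + R_h) / (1 + jw * (R_g + R_h) * C_a))      # Eq. (4)
S = E * A_eff / (jw * Z_t)                                 # Eq. (6)

for fi, Si in zip(f[::40], S[::40]):
    print(f"f = {fi:9.1f} Hz   |S| = {abs(Si) * 1e3:7.2f} mV/Pa")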








Fig. 4 Sensitivity frequency response for a microphone at
different biases: (a) V_b = 4 V, (b) V_b = 8 V, (c) V_b = 12 V.

Fig. 5 shows the frequency response for different numbers of
back plate holes, n. It can be seen that the cut-off frequency
increases when the acoustic hole density is increased,
because the viscous losses at high frequencies, due to the
compression of air in the air gap, are decreased. The
calculated resonant frequency of this device is about 26.276
kHz, and the microphone capacitance, C, is 18.446 pF.
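Both of these headline numbers can be checked directly from the Table 1 parameters with Eqs. (2), (3) and (7) as given above. The short sketch below does this; the poly-silicon density is an assumption, since it is not listed in Table 1.

import math

# Cross-check of the quoted capacitance and resonant frequency using the
# Table 1 parameters and Eqs. (2), (3), (7) as given above.
eps0   = 8.854e-12      # permittivity of vacuum [F/m]
a      = 2.5e-3         # diaphragm side length [m]
t      = 0.8e-6         # diaphragm thickness [m]
d0     = 3.0e-6         # air gap [m]
E_mod  = 160e9          # Young's modulus [Pa]
nu     = 0.22           # Poisson's ratio
sigma  = 20e6           # residual stress [Pa]
rho    = 2320.0         # poly-Si density [kg/m^3] (assumed, not in Table 1)

C = eps0 * a**2 / d0                                   # Eq. (7)
D = E_mod * t**3 / (12 * (1 - nu**2))                  # Eq. (3)
T = sigma * t                                          # tensile force per length
f_res = math.sqrt((D / a**4 + T / (2 * a**2)) / (rho * t))   # Eq. (2)

print(f"C     = {C * 1e12:.2f} pF   (paper: 18.446 pF)")
print(f"f_res = {f_res / 1e3:.2f} kHz (paper: 26.276 kHz)")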


Fig.5 Sensitivity frequency response at different number of
back plate holes, (a) n=900, (b) n=1280, (c) n=1700.

Fig. 6 shows diaphragm deformation in the Z-axis. The
maximum Z-axis deformation is -1.17 micron and
maximum sensitivity of microphone is 34.67 mV/Pa.


Fig. 6 Diaphragm Deformation in the Z- axis


Fig. 7 shows the diaphragm deformation in the Z-axis versus
iteration; the convergence criterion of 0.001 micron was
reached at the 14th iteration.
Fig. 8(a) shows the microphone capacitance structure and
Fig. 8(b) shows the microphone capacitance versus iteration.
The finite element analysis (FEA) computed capacitance is
17.38 pF and the calculated value is 18.446 pF. The FEA
simulation shows good agreement with the theoretical
prediction.


Fig. 7 Diaphragm deformation in the Z-axis










(a)















(b)

Fig. 8 (a) Capacitance structure, (b) capacitance
measurement

Fig. 9 shows the stress distribution over the diaphragm
with an initial stress of 20 MPa at an applied pressure of
1 Pa. It can be seen that the stress concentrates at the corners
of the diaphragm. Fig. 10 shows the charge distribution on
the diaphragm and back plate of the microphone at a bias
voltage of 8.0 V.



















Fig. 9 Simulated stress distribution


Fig.10 Simulated charge distribution

5. Conclusions

A single-chip miniature silicon condenser microphone has
been designed with a poly silicon diaphragm, an air gap and
a silicon nitride perforated back plate. The finite element
analysis (FEA) and the equivalent-circuit method have been
used to predict the structure behavior and the microphone
performance. The FEA simulations show good agreement
with the theoretical predictions. The device shows a
sensitivity of 34.67 mV/Pa for a 2.5 x 2.5 mm^2 diaphragm at
8.0 V, with a high frequency response extending to 25.0
kHz.

References

[1] P. R. Scheeper, 1994, A review of silicon
microphones, Sensors and Actuators A, 43: 1-11.
[2] Chen Jing, 2003, On the single-chip condenser
miniature microphone using DRIE and back side etching
techniques, Sensors and Actuators A, 103, 42-47.
[3] Q. Zou, 1997, Theoretical and experimental studies of
single-chip processed miniature silicon condenser
microphone with corrugated diaphragm, Sensors and
Actuators A, 63, 209-215.
[4] Chen Jing, 2002, Single-chip condenser miniature
microphone with a high sensitive circular corrugated
diaphragm, IEEE.
[5] S. Chowdhury, 2002, Design of a MEMS acoustical
beamforming sensor microarray, IEEE Sensors
Journal, Vol. 2, NO. 6.
[6] C. Jing, 2001, Design, simulation and evaluation of
novel corrugated diaphragms based on backside
sacrificial layer etching technique, Technical Report,
Institute of Microelectronics, Tsinghua University,
China.
[7] O. Rusanen, 1998, Adhesives as a thermomechanical
stress source comparing silicones to epoxies, IEEE.
[8] S. Junge, 2003, Simulation of capacitive
micromachined ultrasonic transducers (cMUT) for low
frequencies and silicon condenser microphones using
an analytical model, IEEE, Ultrasonics symposium.
[9] Q. Zou, 1997, Theoretical and experimental studies of
single chip processed miniature silicon condenser
microphone with corrugated diaphragm, Sensors and
Actuators A 63, 209-215.
[10] W. Kronast, 1998, Single chip condenser microphone
using porous silicon as sacrificial layer for the air gap,
IEEE.
[11] M. Ying, 1998, Finite elements analysis of silicon
condenser microphones with corrugated diaphragms,
Elsevier Science,Finite elements in analysis and design
30, 163-173.
[12] C. Jing, 2001, Design and fabrication of a silicon
miniature condenser microphone, Technical Report,
Institute of Microelectronics, Tsinghua University,
China.
[13] S. Chowdhury, G. A. Jullien, 2000, "MEMS acousto-
magnetic components for use in a hearing instrument",
Presented at SPIE's Symposium on Design, Test,
Integration, and Packaging of MEMS/MOEMS, Paris,
France.

















FINGER PRINT SENSOR USING MEMS



Presented By,

PRABAL SEN ASHISH CHAUHAN
E.C.E II/IV B.E E.C.E II/IV B.E
sbr_prabal7@yahoo.co.in
Phone No:9949259605







DEPARTMENT OF E.C.E.
G.M.R INSTITUTE OF TECHNOLOGY
RAJAM
SRIKAKULAM(DIST.),A.P




















ABSTRACT:
The personal identification of a
person using photographs and
personal details is insecure. A
person may change his details any
number of times with today's
technology. But identification of
a person using fingerprints is the
most secure method, due to the fact
that no two persons in the world
have the same fingerprints. So, the
government is planning to
implement this in airports and in
many other sectors.
To identify a person
using fingerprints we either use
semiconductor capacitive sensors
developed using LSI interconnect
technology or a sensing
technique using sensors having a
small array of pixels. Since both of
them have disadvantages, we use a
better fingerprint sensor technology
based on MEMS.

WHAT IS MEMS?
Micro Electro Mechanical
Systems (MEMS) is the integration
of mechanical
elements, sensors, actuators and
electronics on a common silicon
substrate through
micro fabrication technology.
The electronics are fabricated using
Integrated Circuit (IC) process
sequences
(e.g. CMOS, bipolar, or BiCMOS
processes).




This paper describes the concept
and sensing principle of our
fingerprint sensor, which we call
the MEMS fingerprint sensor. Next,
the analytical
model of the MEMS structure is
described. Then, a
fabrication process to stack the
MEMS structures directly on the
sensing circuits is presented. In the
process, a sealing technique, STP
(spin-coating film transfer and hot-
pressing), is applied. Finally, we
discuss the fabrication results and
evaluate the sensor's ability to
obtain fingerprint images
regardless of whether the finger is
dry or wet.

1. Introduction
As the network society
becomes more pervasive, higher
security is required.
Personal identification guarantees that
the person is authenticated as a
proper user of
network terminals, such as mobile
units. Personal identification by
fingerprint has
become attractive because fingerprint
identification is more secure than
conventional
methods based on passwords and
personal identification numbers. As a
key device for
fingerprint identification,
semiconductor capacitive sensors
have been developed using
LSI interconnect technology. The
capacitive fingerprint sensor has an
array of sensor
plates and detects the capacitance
between a finger surface and the
sensor plates.Capacitive fingerprint
sensors have some advantages.
First, they are small and
thin compared to optical
fingerprint sensors, as they are
fabricated on semiconductor
substrates. Second, the
sensing circuits can measure small
capacitances with little noise because
they are directly below the sensor plates.
The sensing circuits can perform various
kinds of functional operations, such as
signal processing, to obtain the best
fingerprint image
corresponding to each individual.
Third, sensor reliability that is high
enough for
practical use has been established.
These advantages originate from the
fact that the
sensing array is stacked on the
sensing circuits on the semiconductor
substrate. However,
since these sensors work on the
principle of capacitance detection,
they are too sensitive
to finger surface conditions and
humidity in the atmosphere. For
example, it is difficult to
obtain clear images from fingers
that are too dry or too moist
without an image
processing step, and it is almost
impossible to obtain an image of a
finger wetted with water.

The sensors can neither be
used in the rain nor in extremely dry
weather. This means that the
applications of fingerprint sensors are
restricted to moderate finger surface
conditions and a moderate atmosphere,
such as indoors.

To overcome the above problems,
another proposed sensing
technique detects the topography of
the finger surface.
These fingerprint sensors have arrays
of small pixels.
Each pixel itself is a capacitive
pressure sensor. When a finger
touches the sensor, each
pixel detects the magnitude of the
pressure from the finger ridges. The
values output from
every pixel compose a fingerprint
image. This sensing technique will
enable the sensor to
detect directly the topography of
the finger surface regardless of the
finger surface
conditions. However, these sensors
did not integrate a sufficiently large
number of small
pixels required for detecting the
topography of a finger. The sensor
array consisted of
16*16 pixels at most. In addition, the
moving parts of one of the sensors
were not sealed,
which made it vulnerable to water that
invaded the cavity structures of the
pixels.




2. MEMS FINGERPRINT
SENSORS
We propose the MEMS
fingerprint sensor as shown in the figure. A
finger touches the
sensor surface, where a large number
of small pixels are arrayed. The
detailed structure of
the sensor is shown in the figure. Each pixel
is separated by a grounded wall in an
area of 50-um
square. The pixels have MEMS
structures stacked on the sensing
circuits. Each MEMS
structure comprises a protrusion, a
cavity, a pair of electrodes, and a
grounded wall. The
upper electrode and sealing layer are
made of metal and dielectrics,
respectively, and are
deformable thin films. The
protrusions are also made of
dielectrics. The upper electrode
is grounded through the grounded
wall.
The sensor works
as follows. The ridge of a finger
surface pushes the protrusion down,
and the protrusion deflects the
upper electrode as shown in the figure. The
protrusion transfers the pressure from
a finger to the center of the upper
electrode.(since the pixels are small
compared to the width of finger
ridges, it is difficult for the ridges to
directly deflect the upper electrodes).
The deflection of the upper electrode
increases the capacitance between it
and the lower electrode. Then, the
capacitance is converted into the
output voltage of the sensing circuit
just under the lower electrode. The
value of the output voltage is
translated into digitized signal levels.
On the other hand, the valley of a
finger surface does not push the
protrusion, and the capacitance is kept
small. Therefore, the capacitance
under a ridge is larger than that
under a valley. These values of the
capacitance are translated into the
digitized signal levels. With this
sensing, the detected signals from all
the pixels generate one fingerprint
image.
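A minimal sketch of this read-out idea is shown below: each pixel's digitized level (larger under a ridge, smaller under a valley) is thresholded to turn the array of pixel values into a binary fingerprint image. The array contents, noise level and threshold are illustrative assumptions; only the 224 x 256 pixel count comes from this paper.

import numpy as np

# Sketch: turning per-pixel digitized levels into a binary ridge/valley
# image.  The fake read-out and the threshold are illustrative assumptions.

rng = np.random.default_rng(0)

rows, cols = 224, 256                       # pixel counts of the sensor chip
x = np.linspace(0, 8 * np.pi, cols)
ridges = (np.sin(x)[None, :] > 0).astype(float)    # stand-in ridge pattern
levels = 40 * ridges + rng.normal(0, 5, (rows, cols))

threshold = 20.0                            # assumed decision level
image = (levels > threshold).astype(np.uint8)      # 1 = ridge, 0 = valley

print("fraction of pixels classified as ridge:", image.mean().round(3))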
We had to design the MEMS
structure and determine the structural
parameters so that the sensor would
produce the maximum capacitance
change for a certain pressure from a
finger surface. We also had to
develop a new sensor fabrication
process because the MEMS structure
is stacked on the sensing circuits and
has a cavity. It is necessary to
fabricate the MEMS structure without
damaging the underlying sensing
circuits and seal the cavities to keep
the water and contaminants out of the
cavities. Next, we describe the
structural design and the fabrication
process of the MEMS fingerprint
sensor.

3. STRUCTURAL DESIGN
In order to check
analytically whether the upper
electrode bends enough, we modeled
the mechanical and electrical
dynamics of the MEMS structure, and
calculated two relationships: that
between pressure from a finger and
bending displacement, and that
between the bending displacement
and capacitance change. Based on
these relationships, the structure
parameters are determined that will
produce sufficient capacitance change
to be detected by the sensing circuits.
3.1 Structure modeling
The MEMS fingerprint
sensor model is shown in the figure. The
origin is set at the center of the upper
electrode. The upper electrode is
assumed to be a square plane. The side
length of the upper electrode is a, and
its thickness is t. The lower electrode
is also a square plane with a side
length of b. The distance between the
upper and lower electrodes is h. We
assume that the pressure p from a
finger is loaded equally on the whole
area of the top surface of the upper
electrode at z = -t/2 in the direction of
the z-axis.
Classical mechanics
describes the bending displacement
w(x, y) when pressure p is loaded.
The bending displacement w(x, y) is
proportional to p·a^4/t^3. The relationship
between w(x, y) and the capacitance
change Δc is given as





The bending displacement
w(x, y) is a function of p·a^4/t^3, and the
capacitance change is a function of
w(x, y) and h. We focus on four
parameters: p, a, t and h. According to
our previous investigation, the
measured maximum pressure of a
finger among 100 fingers is 0.6 MPa.
We consider a range of finger
pressure p from 0 to 2 MPa to be
wide enough. A ridge on a finger
surface is about 100 to 400 um wide,
so the size of each pixel is set to 50-
um square to achieve sufficient
resolution. Considering that the width
of the grounded wall is 8 um, the side
length of the upper electrode a is
42 um. Thus, these values are applied
to the range of p and the value of a. The
thickness of the upper electrode t and
the distance h should be determined
based on the analytical model
described earlier. The capacitance


where ε0 is the permittivity of
vacuum. Next, we determine the
structural parameters of the MEMS
structure so that the capacitance
change Δc would be large enough to
be detected by the sensing circuit.

Though the real structure has a
protrusion, a sealing layer and etch
holes, which are explained later, we
considered the simple model
described above as a first
approximation. A model that
includes those factors is almost
impossible to solve
analytically and needs numerical
calculations based on a finite-
element method. In order to discuss
the relationships between a, b, t and
p, we selected the simple model that
can be expressed by explicit analytical
solutions.
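As a rough numerical companion to this model, the sketch below uses the classical clamped square-plate result w_center ~ 0.00126·p·a^4/D (a textbook coefficient, not a value from this paper) together with a simple parallel-plate estimate of the capacitance change. With the gold-electrode properties and the dimensions given in the next subsection it lands close to the deflections quoted there.

# Sketch: center deflection and capacitance change of one pixel, using the
# classical clamped square-plate coefficient (0.00126) and a parallel-plate
# estimate of the capacitance change.  The coefficient and the average-
# deflection factor are textbook approximations, not values from this paper.

eps0 = 8.854e-12              # permittivity of vacuum [F/m]
E, nu = 7.8e10, 0.44          # gold upper electrode (values from the paper)
a, t, h = 42e-6, 1e-6, 1e-6   # electrode side, thickness, gap [m]

D = E * t**3 / (12 * (1 - nu**2))        # flexural rigidity [N*m]

def center_deflection(p):
    """Small-deflection center displacement of a clamped square plate."""
    return 0.00126 * p * a**4 / D

def delta_c(p, avg_factor=0.25):
    """Parallel-plate capacitance change for an assumed average deflection."""
    w_avg = avg_factor * center_deflection(p)
    return eps0 * a**2 * (1 / (h - w_avg) - 1 / h)

for p in (1e6, 2e6):                      # finger pressures of 1 and 2 MPa
    print(f"p = {p/1e6:.0f} MPa: w_center = {center_deflection(p)*1e6:.2f} um, "
          f"dC = {delta_c(p)*1e15:.1f} fF")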

3.2 MEMS structure analysis for
structure determination

The capacitance change calculation is
explained here. The bending displacement
w(x, y) is numerically calculated as shown
in the figure from the analytical solution of
w(x, y). In the figure, a pressure p of 1 MPa
and a thickness t of 1 um are assumed. The
upper electrode is gold, whose
modulus of elasticity and Poisson's
ratio are 7.8 x 10^10 Pa and 0.44,
respectively. The bending
displacement has a maximum at the
center of the upper electrode, (x, y) =
(0, 0). The relationship between the
pressure p and the bending displacement
w(0, 0) is obtained, and the capacitance
between the upper and lower
electrodes is determined. The
relationship between the capacitance
change Δc and the pressure p is
calculated as shown in the figure for a
thickness t of 1 um. Thus, we can
predict the capacitance change for a
given pressure.



Using the relationship between the
capacitance change and pressure, we
determined the values of t and h so as
to produce a sufficient capacitance
change. We considered three
requirements in determining t and h:
in normal use, when a maximum
pressure of 1 MPa is applied by a
finger, the capacitance change should
be as large as possible; the upper
electrode should not contact the lower
electrode even if a pressure of 2 MPa
is applied as a worst case; and the
structure given by the parameters t
and h should be achievable in our
fabrication process. For the first
requirement, the capacitance change
Δc when the pressure p is 1 MPa was
calculated as a function of t and h. If t
and h are too small, however, the upper
electrode comes into contact with the
lower electrode. In this case, when
h <= w(0, 0) at p = 1 MPa,
the capacitance change Δc cannot be
calculated (this region is shown with
bold lines in the graph). Within the
region where h > w(0, 0) at p = 1 MPa,
we consider the other two
requirements so as to make the
capacitance change as large as
possible. In the figure, the second
requirement is also expressed with the
dotted line of h = w(0, 0) at p = 2 MPa.
We have to choose the most
appropriate t and h from the region
where h > w(0, 0) at p = 2 MPa
is satisfied. Due to the variations of
the film thickness in each fabrication
step, a t or h that is too small is not
preferable. Taking this third
requirement into consideration, we set
h to 1 um and t to 1 um as shown in
the figure. For these values of t and h,
the bending displacement of the upper
electrode at (0, 0) with a pressure of
2 MPa is 0.97 um, as shown in the
figure; the electrode hardly contacts
the lower electrode, even when strong
pressure is applied by a finger. The
capacitance change is several
femtofarads, as shown in the figure,
when a pressure p of 1 MPa is applied.
In this way, the structural parameters
of the MEMS structure are properly
determined by the analytical model
and the numerical calculations.

4. FABRICATION PROCESS
We have developed a CMOS-
compatible MEMS fabrication
process that includes the STP
technique for sealing. In stacking the
MEMS structures, it is important that
the MEMS fabrication process does
not deteriorate the underlying CMOS
sensing circuits. The MEMS
structures must be sealed to prevent
water and contaminants from
entering the cavities. To meet these
requirements, we select low
temperature processes in order not to
damage the underlying CMOS LSI
and adopt a new sealing technique.
We use gold electroplating to form
thick films. STP seals the cavity
structures to protect them from water
and contaminations. The fabrication
process is illustrated in fig.




First, the sensing circuits are
fabricated in the 0.5-um CMOS LSI
and three-metal interconnection
process. Next, seed layers for
electroplating are deposited by
evaporating Au-Ti on the dielectric
film of SiN; the seed layers are each
0.1-um thick. The lower electrodes are
then electroplated and, following that,
the resist pattern is
removed. The 2-um grounded wall is
electroplated in the same way. The
lower electrodes and grounded wall
are patterned by wet-etching of the
seed layers. Then the sacrificial layer
of photosensitive polyimide is spin-
coated. The top of the grounded wall
is exposed by photolithography so
that it makes contact with the upper
electrode that is fabricated in the same
way as the lower electrode is
electroplated on it in the same way as
the lower electrodes. We made etch-
holes at the four corners of the upper
electrode by covering the corners with
resist material patterned like islands.
There are etch-holes 5-um square at
the four corners in each pixel. Then,
the sacrificial layer is etched away
through the etch-holes in order to
make cavities. The etch-stoppers are
gold or titanium. The oxygen radicals
etch away the polyimide (the gold or
titanium of the upper and lower
electrodes and grounded wall do not
react with the oxygen radicals). The
cavities are sealed with a 1-um-thick
sealing layer by using the STP
technique. The technique was developed
for planarization in the multilevel
interconnect process for LSIs. The
process of STP for cavity sealing is
shown in the figure; STP can seal vertical
etch-holes while preventing the
sealing material from flowing into the
cavities. Finally, large protrusions
about 10-um square and 10-um high
are patterned using photosensitive
polyimide, and the polyimide film is
annealed at 310 °C. In this process
flow, the MEMS structure is
fabricated.


5. RESULTS
5.1: Fabricated MEMS fingerprint
sensor
We fabricated the MEMS fingerprint
sensor chip as shown in the figure. The chip
specifications are summarized in
the table. The chip has 224 x 256 = 57,344
pixels in a sensing area of
11.2 mm x 12.8 mm. Each pixel is 50-um
square and has a sensing circuit with
102 transistors. The scanning electron
microscope (SEM) photograph in the figure
shows that the MEMS fingerprint
sensor has a large number of protrusions
50-um apart on the surface. A magnified
image of a pixel cross section
obtained by a focused ion beam (FIB)
is shown in the figure. The MEMS structure
was fabricated properly, as designed.
The upper electrode does not contact
the lower electrode; it is positioned
over the lower electrode with the
proper spacing. The sealing layer
seals the cavity; no sealing material
flowed into it. The MEMS structures
were stacked on the sensing circuits.
The achievement of this MEMS
structure with a cavity is a significant
step forward in enhancing the ability
of fingerprint sensing.
5.2Fingerprint Images
We obtained a fingerprint image with
the MEMS fingerprint sensor as
shown in fig. the image demonstrated
that the structural design was
appropriate and all the pixels and
transistors described in table worked
together. This means that MEMS
structure was fabricated without
destroying the transistors in the
sensing circuits; the MEMS structure
fabrication process is compatible with
the CMOS LSI and metal
interconnection processes. After the
fingerprint in the figure was captured, the
image returned to white, which means that
the upper electrode returned to its initial
state after deflection. We compared our
MEMS fingerprint sensor with a
conventional capacitive fingerprint
sensor. Fingerprint images of a dry
finger are shown in fig. the fingerprint
image from the MEMS fingerprint
sensor in fig is very clear. On the
other hand, the raw image of the same
finger from the capacitive fingerprint
sensor in fig is unclear. The MEMS
fingerprint sensor can clearly capture
the image of a dry finger easily
without any operation to enhance the
quality of the image. We also
captured a fingerprint image of a
finger wetted with water. The MEMS
fingerprint sensor captured the
fingerprint image of a finger wetted
with water. The MEMS fingerprint
sensor captured the fingerprint image
as clearly as it did the normal finger,
as shown in fig. while the capacitance
fingerprint sensor could not as shown
in fig. the water filled the valleys of
the finger surface, which made the
capacitance at the valleys almost
equal to that at the ridges.
Therefore, the image is quite
dark in fig. in contrast; the MEMS
fingerprint sensor could detect the
finger topography directly regardless
of the water in the valleys. These
finger
surface conditions and environmental
conditions, which makes it suitable
for wide practical use.
6. Summary
We proposed a novel MEMS
fingerprint sensor with arrayed cavity
structures stacked on CMOS sensing
circuits. A MEMS structure was
devised to implement a principle for
detecting the topography of a finger.
An analytical model of a MEMS
structure was described, and the
optimized structural parameters were
determined by numerical calculations.
We developed a MEMS sensor
fabrication process that is compatible
with the standard CMOS LSI process.
The process includes the STP
technique to seal the cavities of the
MEMS structure. By using the
process, we were able to stack the
MEMS fingerprint sensor on a CMOS
LSI. It was confirmed that the
fabricated sensor detects the
topography of a finger directly,
regardless of the finger surface
conditions. This MEMS fingerprint
sensor has the potential to widen the
application of fingerprint
identification for various people in
various environments












Conclusion:
Thus the above fingerprint sensing
technique is very reliable and is more
efficient under different surface
conditions than the conventional
optical fingerprint sensor or
capacitive fingerprint sensor. The
sensing circuits perform a variety of
operations, such as signal processing,
to obtain the best fingerprint image;
moreover, noise is greatly reduced
compared to ordinary sensors. The
only drawback is that the cost is very
high. This problem can be overcome
easily, since MEMS technology is in a
phase of rapid development. Thereby,
this fingerprint sensor based on MEMS
can provide a good security system in
the near future.

























References:
N.D. Young, G. Harkin,
R.M. Bunn, Novel fingerprint
scanning array using polysilicon
TFTs on glass and polymer
substrates, vol. 18, pp. 19-20, Jan. 1997.

R.J. D'Souza and K.D. Wise, A very high
density bulk micromachined
capacitive tactile imager,
'97, 1997, pp. 1473-1776.

Theory of plates and shells by
N. Sato, 1999, 17 pp.

A pixel level automatic calibration
circuit scheme for sensing
initialization of a capacitive finger
print sensor LSI, in Symp.

VLSI Circuits Dig. Papers, IEEE
Electron Device Lett.,

vol. 18, pp. 19-20, Jan. 1997, IEEE Solid-
State Circuits,

vol. 33, pp. 133-142, Jan. 1997.

www.sensors.com






AUDISANKARA COLLEGE OF
ENGINEERING&TECHNOLOGY






N.DORASANAMMA, K.JYOTHSNA,
3-YEAR, EEE, 3-YEAR, EEE,
04G21A0214, 04G21A0223,
E_MAIL: PROTO_ANJALI@YAHOO.COM JOSHI_223@YAHOO.CO.IN

































FLEXIBLE AC TRANSMISSION SYSTEMS
SIMULATION AND CONTROL


ABSTRACT


Concepts relating to Flexible AC Transmission Systems (FACTS) are gaining
popularity for enhancing power transfer limits as well as stability characteristics of
power systems. This paper presents an overview of FACTS devices and associated
nonlinear, modern control strategies. Simulation results based on the Electromagnetic
Transients Program (EMTP) are presented for dynamic brakes, devices for rapid
adjustment of network impedance, and subsynchronous resonance (SSR) phenomena.
The paper concentrates on power systems with geographically concentrated
generation and load centers, long transmission lines and potential for loop flow. While
these are characteristics of the Western North American Power System (WNAPS), the
concepts presented can readily be applied to the power system in Southern Africa.





























INTRODUCTION

The research program relating to FACTS has been initiated by the
Electric Power Research Institute in the United States to enhance power systems
operational flexibility and stability by using modern power electronic concepts
.FACTS devices, such as series capacitors, provide the capability to dynamically
adjust network configuration and impedance, thus enhancing steady state power
transfer capability as well as transient stability.

While the use of capacitors and braking resistors has been investigated
for transient stability control for quite some time (1), only recently was feedback
control of these devices introduced to enhance system damping (2). Current research
investigates the feasibility of modern, nonlinear control theory for FACTS
controllers (3). Series capacitor based devices can enhance steady state power transfer
considerably and allow line loadings up to the thermal limit. However, system and
control design needs to address and mitigate secondary effects, such as
subsynchronous resonance.
The present paper gives an overview of FACTS devices and suggests
appropriate control methodologies. Simulation results are used to illustrate power
system performance in the presence of FACTS implementations using modern
power electronics and control theory. The work presented is based on the authors'
ongoing research, which is discussed extensively in EPRI reports (4).



FACTS DEVICES



Devices under investigation for FACTS applications include the following:
1. Static VAR Compensators (SVC) for capacitor based voltage support and stability
enhancement (5).
2. Converter-based SVC methods utilize force-commutated or resonant voltage and
current source converter concepts for var compensation (6). This technique eliminates
capacitor banks, reduces filter requirements and improves dynamic performance.
3. Phase Shifters for voltage and/or current injection to control effective impedance.
Continuous losses in associated transformer windings detract from some phase
shifter technologies. The utilization of power electronics concepts promises improved
performance.
4. Adjustment of Network Impedance to control line power flow. A schematic of a
practical series capacitor-inductor topology under investigation is shown in Fig. 1. This
500 kV system is being installed in the Pacific Northwest of the United States.
Completion is scheduled for 1993 and the installation is expected to provide valuable
insight into the operational aspects of large scale FACTS devices.
5. Dynamic Brakes to dissipate accelerating power during faults. While braking
resistors can be very effective in damping oscillatory modes during transients, steady
state power-flow control is not possible due to rating and economic considerations.
However, next generation brakes using superconducting toroids (7) will offer
increased control flexibility.


Most FACTS devices in planning today utilize phase-controlled
thyristor technology. However, the development of improved gate turn-off
thyristors (GTOs) and the potential capabilities of the MOS-controlled thyristor (MCT)
are expected to allow the incorporation of higher frequency converter topologies into
utility applications. Practical considerations for FACTS device implementation need
to address phase-controlled power converters, and issues relating to system controllers
require information on system status; communication delays between local and central
controllers need to be considered in the resulting control structure.


SYSTEMS CONSIDERATION AND MODELS

FACTS controllers under investigation are based on equivalents of the Western
North American Power System. This system is characterized by concentrated
generation and load centers, severe inter-area modes, long transmission lines limiting
power transfer, as well as loop flows into the southern load area. In order to design
FACTS controllers, simplified system representations are needed, such as the version
shown in Fig. 3, which is suitable for transient stability programs. However, initial
evaluation of control effectiveness without undue computational burden, and more
detailed investigation of controller performance, require yet simpler models, which are
suitable for implementation in the EMTP. Fig. 4 shows a four-generator equivalent
developed for this purpose, which can be used to study inter-area modes. Line and
machine parameters are chosen to obtain tie-line power swings typical of those
experienced in the actual system.

While the EMTP's TACS module can be used for simple control studies, it
does not allow for the mathematical manipulations required in the development of
advanced, nonlinear controllers. The MODELS subroutines of the EMTP were used for this purpose.



CONTROL DESIGN

While linear control of FACTS devices is possible, research at Oregon
State University has concentrated on nonlinear control methodologies, which can be
tailored very well to power system FACTS applications. Control strategies
investigated for FACTS controllers include variable structure control (3), nonlinear
time series adaptive control (9), nonlinear self-tuning control (10) and adaptive neural
control (11).

Generally, the complexity of the power system does not allow control
design and real-time implementation on the basis of a large scale model. Thus,
control design is based on very simple models and implemented on higher order
representations to test for the required robustness. For the development of specific
control algorithms, the four-machine equivalent of Fig. 4 can be reduced further; for
example, Fig. 5 shows an equivalent single machine infinite bus representation suitable
for the investigation of series compensation and inter-area modes between the North
Western (generator) and South Western US (infinite bus).

Using variable structure control (VSC) as an example, the power system
shown in Fig. 5 can be approximated by the following nonlinear model, which is
linear in control:

    Ẋ = A(X) + B(X)·u,

where X is the state vector, u is the scalar control input (for instance, the
capacitance value of a series capacitor) and A(.) and B(.) are appropriate nonlinear
vector functions. Appropriate simplification yields manageable expressions for
control implementation. For example, the VSC results shown in the following
sections are developed on the basis of second order machine models and tested on
11th order machine representations.
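To make the control-affine form concrete, the sketch below applies a simple variable-structure (sliding-mode style) law to a single-machine infinite-bus swing equation, with the series compensation level acting as the control u. The model parameters, sliding surface and gains are illustrative choices only; they are not the controllers developed in the research discussed here.

import numpy as np

# Sketch: variable-structure control of a single-machine infinite-bus
# swing equation, with the series-compensation level u as the control.
# All parameters, the sliding surface and the gains are illustrative.

H, D = 4.0, 1.0                  # inertia constant [s], damping (assumed)
Pm, V, X_line = 0.9, 1.0, 0.5    # mech. power, voltage, line reactance (p.u.)
w0 = 2 * np.pi * 60              # synchronous speed [rad/s]

def f(state, u):
    """State derivatives: X = [delta, dw]; u = fraction of line compensated."""
    delta, dw = state
    Pe = V * V / (X_line * (1.0 - u)) * np.sin(delta)   # electrical power
    return np.array([dw, (w0 / (2 * H)) * (Pm - Pe - D * dw / w0)])

def vsc_control(state, c=0.5, u_max=0.4):
    """Switching law u = u_max when the sliding variable s is positive."""
    delta, dw = state
    delta_eq = np.arcsin(Pm * X_line / V**2)             # nominal angle
    s = c * (delta - delta_eq) + dw
    return float(np.clip(u_max * np.sign(s), 0.0, u_max))

# Simulate a disturbed initial condition with forward-Euler integration.
state, dt = np.array([0.9, 2.0]), 1e-3
for k in range(5000):
    state = state + dt * f(state, vsc_control(state))
    if k % 1000 == 0:
        print(f"t = {k*dt:4.1f} s  delta = {state[0]:.3f} rad  dw = {state[1]: .3f} rad/s")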


CONCLUSION

Modern power electronic concepts coupled with advanced, nonlinear
control techniques can be successfully applied to the power system to improve steady
state and dynamic system characteristics. The control methodologies presented are
developed from simple system equivalents and are verified and tested for robustness
on large scale models. It has been shown that the EMTP can successfully be used in
developing controllers.

The paper illustrates the potential of FACTS using relatively simple
examples based on the requirements of the Western North American Power System.



REFERENCES


OGATA
WWW.SOOPLE.COM
HINGORANI











SRI VENKATESWARA UNIVERSITY COLLEGE
OF ENGINEERING
TIRUPATI
DEPARTMENT OF ELECTRICAL ENGINEERING



This paper is presented by

1.M.KAMALAKAR 2.V.GIREESH KUMAR
s.v.u.college of engineering s.v.u.college of engineering
Room no 1313 Room no 1313
Visweswara block Visweswara block
Svuce hostels Svuce hostels
Tirupati Tirupati
Ph: 9985624343 Ph:9885404539
email:kamalakar_m2002@yahoomail.co.in email:vgk409@yahoo.co.in
kamalakarsvu336@gmail.com



GENETIC OPTIMIZATION
FOR THE DESIGN OF CASCADED CONTROLLERS FOR
DC DRIVES







ABSTRACT - A new design procedure for
cascaded controllers for electrical drives based
on evolutionary algorithms is described in this
paper. Most drives have two separate
controllers for current and speed control,
which are in general designed in two
consecutive steps. In the conventional method,
first the current controller and then the speed
controller is designed. A hybrid evolutionary
algorithm is
developed to test and compare controllers of
different orders. The pair of controllers is
searched simultaneously to achieve the
optimal compromise between cost and
performance indices.


1. INTRODUCTION

Genetic Algorithms (GAs) are
stochastic search techniques deriving
inspiration from the principles of natural
evolution and genetic laws. GAs are versatile
and require minimal a priori assumptions on
the nature of the problem. In contrast with
other search strategies, they do not require
differentiability, continuity or other restrictive
hypotheses on the objective surface to
converge. In fact, GAs have been successfully
applied to many control engineering problems.
Typically, a GA has been used to find the set
of parameters describing either a process
model or a controller that minimizes an
arbitrarily defined index of performance. This
general idea has been recently particularized in
a variety of applications, which differ in the
controller structure and in the way the objective
function is computed.










Recently, Genetic Algorithms (GAs)
have been successfully applied also to electric
drives control [9]. Namely, in [1] a
fuzzy-like Luenberger observer is used to
estimate the stator flux and the speed of an
induction motor. A GA determines the best
pole placement for the observer, minimizing a
cost function related to the time response of
the error between the estimated and actual
state variables. In [2], a GA is used for the
optimal design of a fuzzy controller. The GA
chooses the number and position of the
membership functions and modifies the rule
table. The method does not require
professional expertise or mathematical
analysis of the plant model. Unfortunately,
fuzzy controllers are still difficult to
implement on commercial low-cost micro-
controllers, and may not offer significant
improvements in terms of robustness and
performance: the use of classical PID
controllers is preferred whenever non-linear
techniques are not strictly required. In [3], the
authors utilize a GA-based algorithm to tune
the speed controller of a brushless DC drive.
In their work the authors do not consider the
tuning of the inner current control loop.
Moreover, large and small speed steps are
separately considered, leading to two different
speed regulator designs that are only locally
optimal.

This paper uses a GA to optimize
both the structure and the associated
parameters of two cascaded controllers for a
DC electric drive. The approach has two main
peculiarities with respect to the existing
literature. Firstly, in contrast with conventional
GAs that work on populations of
homogeneous structures, the evolutionary
algorithm hereby proposed has been designed
to test and compare simultaneously controllers
having different orders. In other words, the
GA must find both the numbers and the
locations of the controllers' zeros and poles
providing the maximum fitness. Secondly,
while conventional design strategies first
tune the current controller and then the speed
controller, this algorithm
simultaneously optimizes the two controllers
to obtain the best overall two-cascaded-loop
control system.


2. FUNDAMENTALS ON GAs

A GA works iteratively on a
population of candidate solutions (individuals)
of the problem, preliminarily encoded in
strings of characters (the chromosome
associated to the solution) from a predefined
alphabet (usually binary or real valued). The
fitness of each individual rules the iterative
solution search schema, which is performed in
two subsequent phases. In the first phase, a
subset of individual solutions in the current
population is selected to form a new
intermediate population (the mating pool). The
most common selection operators for this
phase work on a metaphor of the survival
of the fittest principle, i.e. solutions with
higher fitness will have a higher likelihood to be
part of the mating pool. During the second
phase, known as recombination, solutions in
the mating pool are randomly selected to take
part in crossover and mutation operations,
which are performed by special operators
emulating genetic phenomena. In particular,
the crossover works on two randomly selected
solutions (the parents) and forms two new
solutions (children), which inherit some of the
chromosomes of both parents. The mutation
randomly alters a portion of the chromosome
to produce a new solution with a random
similarity to its predecessor. After the random
crossover and mutation have been applied, a
new population (the offspring) is ready for
fitness evaluation. This process is then
iterated, so that new solutions are continuously
created and evaluated until a given stopping
criterion is met. As in all stochastic search
algorithms, the convergence of a GA relies
on the random determination of solutions with
increasingly higher fitness.
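The iterative scheme just described can be summarized in a few lines of code. The sketch below is a generic real-valued GA loop (tournament selection, one-point crossover, Gaussian mutation) minimizing an arbitrary fitness; it is illustrative only and is not the hybrid algorithm developed in this paper.

import random

# Generic GA loop: selection -> crossover -> mutation -> evaluation.
# Illustrative only; not the hybrid evolutionary algorithm of this paper.

def evolve(fitness, n_genes=6, pop_size=50, generations=100,
           p_cross=0.8, p_mut=0.1):
    pop = [[random.uniform(-1, 1) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: binary tournaments form the mating pool.
        pool = [min(random.sample(pop, 2), key=fitness)
                for _ in range(pop_size)]
        # Recombination: one-point crossover plus Gaussian mutation.
        offspring = []
        for i in range(0, pop_size, 2):
            a, b = pool[i][:], pool[(i + 1) % pop_size][:]
            if random.random() < p_cross:
                cut = random.randrange(1, n_genes)
                a[cut:], b[cut:] = b[cut:], a[cut:]
            for child in (a, b):
                offspring.append([g + random.gauss(0, 0.1)
                                  if random.random() < p_mut else g
                                  for g in child])
        pop = offspring
    return min(pop, key=fitness)

# Usage: minimize a toy quadratic fitness (lower is better).
best = evolve(lambda x: sum(g * g for g in x))
print("best individual:", [round(g, 3) for g in best])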


3. DC DRIVE CONTROLLERS DESIGN.

The most common linear
approximation of the DC motor current and
voltage dynamics can be summarized as
follows:

    v_a = R_a·i_a + L_a·di_a/dt + k·ω
    T_e = k·i_a

where v_a, i_a, R_a and L_a are the armature
voltage, current, resistance and inductance,
respectively, k is the torque constant, ω the
rotor speed and T_e the electromagnetic torque
developed by the motor. DC motors are
usually controlled using a cascaded scheme
with an inner current PI controller and an outer
speed PI controller.
TABLE 1 - DC MOTOR PARAMETERS

Rated Current          I_an = 1000 A
Rated Voltage          V_n = 220 V
Rated Speed            ω_n = 50 rad/sec
Torque Constant        k = 3.8 Nm/A
Rotor Inertia          J = 100 kg m^2
Armature Resistance    R_a = 0.03 Ohm
Armature Inductance    L_a = 0.0006 H
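For readers who want to experiment with this plant, the sketch below simulates the motor equations above with the Table 1 parameters under a conventional cascaded scheme (inner current PI, outer speed PI). The PI gains, limits and load torque are illustrative placeholders, not the tuned controllers discussed later in the paper.

import numpy as np

# Sketch: cascaded speed/current PI control of the DC motor model above,
# using the Table 1 parameters.  The PI gains, limits and load torque are
# illustrative placeholders, not the GA-optimized controllers of this paper.
# (No anti-windup here, so expect overshoot after the current limit releases.)

Ra, La, k, J = 0.03, 0.0006, 3.8, 100.0          # Table 1 values
Vn, In = 220.0, 1000.0                            # voltage / current limits

kp_w, ki_w = 400.0, 800.0       # speed PI gains (assumed)
kp_i, ki_i = 0.3, 30.0          # current PI gains (assumed)

dt, T_load, w_ref = 1e-4, 500.0, 50.0
ia = w = int_w = int_i = 0.0

for n in range(int(5.0 / dt)):
    # Outer loop: speed PI produces the current reference (limited).
    e_w = w_ref - w
    int_w += e_w * dt
    i_ref = np.clip(kp_w * e_w + ki_w * int_w, -In, In)

    # Inner loop: current PI produces the armature voltage (limited).
    e_i = i_ref - ia
    int_i += e_i * dt
    va = np.clip(kp_i * e_i + ki_i * int_i + k * w, -Vn, Vn)

    # Plant: v_a = Ra*ia + La*dia/dt + k*w ,  J*dw/dt = k*ia - T_load
    ia += dt * (va - Ra * ia - k * w) / La
    w  += dt * (k * ia - T_load) / J

    if n % 10000 == 0:
        print(f"t = {n*dt:4.1f} s  w = {w:6.2f} rad/s  ia = {ia:7.1f} A")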

Traditionally PI-type regulators are
employed in both control loops due to their
high performance/ simplicity trade-off. Very
often, proportional and integral gains are
chosen using linear analysis methods. The
non-linearities and delays introduced by power
converters, sensors, stiction, current and
voltage limits and the load characteristic are
not considered. Therefore, an on-line fine-
tuning procedure using trial and error methods
is necessary. Manual tuning requires
experienced operators, is time-consuming, and
does not guarantee to find the optimal
controllers. Moreover, PI regulators are
suitable for controlling second order linear systems,
but their performance may deteriorate with
higher-order non-linear systems such as
electric drives. Higher-order controllers can
offer considerable enhancements in this case
but the tuning difficulties may become
prohibitive.

This paper formulates the global
design of a two-loop discrete control system
for electric drives as a search problem. As in
optimal control design problems, we aim to
find the pair of controllers achieving the
maximum satisfaction of a cost-to-
performance merit function. Typically,
optimal control theory for linear systems
requires optimizing the sum of quadratic
norms of the cost and of the output error with
respect to the set point. In this paper the
conventional objective function is extended to
take into account desirable characteristics of
the current and speed responses.

Our objective function is a weighted
sum of several performance indices that are
directly measured on the system's response to
the input signal. Namely, we define the
following fitness (to be minimized):


    f = sum_{j=1..6} γ_j · f_j

where the γ_j are positive weights and the f_j are
six performance indices defined as follows.
The weights γ_j are set heuristically, i.e. by
performing preliminary runs of the GA with
changed weights until the desired trade-off
between indices is achieved. The symbols ω
and ω_ref indicate the motor and reference
speed, respectively. Analogously, i and i_ref
indicate the actual current and its reference
value provided by the speed controller. All
signals are normalized with respect to their
nominal values. The integer n indicates the
number of samples in the simulated interval.

1. Steady state speed response

    f_1 = sum_{j=1..n} |ω_ref(j) − ω(j)| · g(j)

where

    g(j) = 1 if |ω(j) − ω_ref(j)| <= ε_0,  0 otherwise

This index measures the speed
absolute error only along the segments of the
speed response settling around the steady state.
The parameter ε_0 defines the settling band.
2. Disturbance rejection

    f_2 = |ω_ref(n) − ω(n)|

This index measures the absolute
error in the last time sample. It is used to
evaluate the ability to reject the change in load
torque, which is applied in the final part of the
simulation.

3. Speed transient response duration

    f_3 = ((n − n_t) / n) · l(n_t)

where

    n_t = sum_{j=1..n} g(j);    l(n_t) = 0 if n_t >= λ·n,  1 otherwise

This index measures the duration of the
transient condition, i.e. it estimates the sum of
all settling times. The parameter λ defines the
desired steady-state time of the speed response
in per-unit.

4. Current response

    f_4 = sum_{j} |i_ref(j) − i(j)|

The index measures the sum of
absolute current errors.

5. Current overshoot
    f_5 = (i_max − i_s) · m(i_max)

with

    i_max = max_{j=1..n} i(j),   m(i_max) = 0 if i_max <= i_s,  1 otherwise

This index measures the highest
current peak i_max exceeding the maximum
current supply i_s.

6. Current oscillations for null speed
reference
    f_6 = n_{+/-} / n

This index accounts for the small ripples
and oscillations of the current induced by
typical dead-band nonlinearities when the
motor speed tends to zero. The integer n_{+/-}
counts the number of changes of sign in the
current response.
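The six indices combine naturally into a single evaluation routine. The sketch below computes them from simulated speed and current traces; the settling band ε_0, the per-unit settling-time parameter λ, the supply limit i_s and the weights γ_j are illustrative values only.

import numpy as np

# Sketch: composite fitness f = sum(gamma_j * f_j) from simulated traces.
# eps0, lam, i_s and the weights are illustrative values only.

def fitness(w, w_ref, i, i_ref, eps0=0.02, lam=0.5, i_s=1.0,
            gamma=(1.0, 1.0, 1.0, 0.1, 10.0, 1.0)):
    n = len(w)
    err_w = np.abs(w_ref - w)
    g = (err_w <= eps0).astype(float)            # inside the settling band?

    f1 = np.sum(err_w * g)                       # steady-state speed error
    f2 = err_w[-1]                               # disturbance rejection
    n_t = g.sum()
    f3 = ((n - n_t) / n) * (0.0 if n_t >= lam * n else 1.0)   # transient time
    f4 = np.sum(np.abs(i_ref - i))               # current tracking
    i_max = i.max()
    f5 = (i_max - i_s) * (0.0 if i_max <= i_s else 1.0)       # overshoot
    f6 = np.count_nonzero(np.diff(np.sign(i)) != 0) / n       # sign changes
    return float(np.dot(gamma, (f1, f2, f3, f4, f5, f6)))

# Usage with toy normalized traces (1000 samples).
t = np.linspace(0, 1, 1000)
w_ref = np.ones_like(t)
w = 1 - np.exp(-8 * t)
i_ref = np.clip(8 * (w_ref - w), -1, 1)
i = i_ref + 0.01 * np.random.default_rng(1).standard_normal(t.size)
print("fitness =", round(fitness(w, w_ref, i, i_ref), 3))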


4. GAs FOR STRUCTURE AND
PARAMETER OPTIMIZATION

The performance of any GA in terms
of speed of convergence, reliability of the
search and accuracy of the final solution is
strictly related to the selection and
recombination strategies. Here, crossover and
mutation operators have been redefined to deal
with variable order controllers, and their
optimal occurrence rate has been modified to
ensure a simultaneous convergence of the
controller structure and parameters.


Figure 1 describes the encoded
structure of a generic solution. A first part of
the chromosome contains the integer
parameters describing the orders of the
polynomials of the two controllers, and the
remaining part contains the transfer function
gain and the positions of their zeros and poles.
Both controllers are described in discrete time
domain using z-transform. To obtain minimum
phase controllers, all the zeros and poles must
lie within the unit circle. Poles and zeros are
encoded in the chromosome using the mapping
strategy described in [4], which ensures a
correct manipulation of conjugate roots.
Furthermore, both the controllers are
implemented using anti-windup algorithm to
improve the response of the system in
presence of saturation of the actuators.

The crossover and mutation operators
have been redesigned to deal with controllers
of variable structure. The mutation of a
solution consists in a random perturbation of
one (or more) components of the solution. A
mutation of the structure consists in the
random elimination or addition of one or more
random zeros and poles, located within the
search bounds. The probability of adding or
removing more roots decreases with the
number of roots, i.e. the elimination/addition
of a single root has a higher probability than
that of two or more roots.

The design of the PI controllers is briefly
described in the following. The current control
loop has been reduced to a first order system
having a time constant equal to

    τ_ia = 0.1 · L_a / R_a

by placing the current PI time constant at

    τ_ii = J · R_a / k^2

and its proportional gain at

    k_pi = L_a / (τ_ia · R_a)

For the speed control loop the
symmetric optimum criterion [9] has been
used. Accordingly, the speed PI time constant
and proportional gain have been selected as
follows:

    τ_iw = 4 · τ_ia,     k_pw = J · R_a / (2 · k · τ_ia · L_a)

The maximum order of both the controllers to
be designed by the GA was chosen equal to
four. The bounds of the controller gain were
chosen so as to ensure stable behavior. An
initial population of 100 random individuals is
then evolved throughout 200 generations.

Figures 1-2 show the results obtained
with the GA-designed (GA-DES) double loop
system. All the figures show both the reference
set-point signal (either the speed or the
current) and the corresponding output. Due to
the cascaded double loop structure, the current
set point is the output of the speed controller.





5. SIMULATION RESULTS

FIGURE 1(A): SPEED OUTPUT VS SET-
POINT-GA-DES.CONTROL




FIGURE 1(B): CURRENT OUTPUT VS
SET-POINT-GA-DES.CONTROL


FIGURE 2(A): SPEED OUTPUT VS
SET-POINT-GA-DES.CONTROL.





FIGURE 2(B): CURRENT OUTPUT
VS SET-POINT-GA-DES.CONTROL


The best control loop obtained by the
GA encompasses a first order controller for the
current and a second order controller for the
speed. Figures 1(A) and 1(B) show that the
GA-designed double loop enhances both the
speed and current response. Figures 2(A) and
2(B) show that the GA-designed loop avoids
the overshoot but suffers from a residual
steady state error due to a small offset. The
current response is also significantly
improved, since there are no current
oscillations in the low speed region in spite of
the presence of the dead-band. This behavior is
not obtainable with a lower order speed
controller.








6.CONCLUSIONS

In this paper, we used a hybrid
Evolutionary Algorithm to search for the
optimal cascaded two loop control system for
non-linear DC electrical drives. Both control
loops are based on discrete linear controllers
with anti-windup algorithms. In digital micro-
controllers the increased order of a transfer
function does not significantly increase the
computational burden, since only a few more
samples of the controlled variables have to be
stored in the processor memory. Thus the
algorithm was devised to search for the
structure and associated parameters. Using a
slightly higher order for the speed controller,
the GA is able to design a cascaded loop with
an optimal trade-off of merit figures. Namely,
the effects of nolinearities as saturation and
dead-zone, which degrade the results of linear
design techniques, are fully compensated by
the optimized combination of controllers. The
proposed design technique well lends itself to
be directly applied to real drives, rather than to
simulation models. By applying GAs to real
drives, the a priori knowledge of the detailed
model is not indispensable. In principle the
genetic design of higher order controllers can
be more profitable in the case of direct
application to drives.



7. REFERENCES

[1] G. Griva, F. Profumo, L. Rossel, R. Bojoi:
Optimization of Fuzzy-like Luenberger Observer for
High Speed Sensorless Induction Motor Drives
Using Genetic Algorithms, Proceedings of IAS 2000.

[2] Y.S. Zhou, L.Y. Lai: Optimal Design for Fuzzy
Controllers by Genetic Algorithms, IEEE
Transactions on Industry Applications, Vol. 36, No. 1,
January/February 2000, pp. 93-97.

[3] W.G. da Silva, P.P. Acarnley, and J.W. Finch:
Application of Genetic Algorithms to the Online
Tuning of Electric Drive Speed Controllers,
IEEE Transactions on Industrial Electronics, Vol. 47,
No. 1, February 2000, pp. 217-219.

[4] M. Dotoli, G. Maione, D. Naso, B. Turchiano:
Genetic Identification of Dynamical Systems with
Static Nonlinearities, IEEE SMCia/01, Mountain
Workshop on Soft Computing in Industrial
Applications, Virginia Tech, Blacksburg, Virginia,
June 25-27, 2001.

[5] D.B. Fogel: System Identification Through
Simulated Evolution - A Machine Learning Approach
to Modeling, Ginn Press, Needham Heights, MA, 1991.

[6] J.J. Grefenstette: Optimization of Control
Parameters for Genetic Algorithms, IEEE Trans. on
Sys., Man, and Cyb., Vol. 16, No. 1, pp. 122-128, 1986.

[7] J.R. Koza, M.A. Keane, J. Yu, W. Mydlowec,
F.H. Bennett: Automatic synthesis of both the control
law and parameters for a controller for a three-lag
plant with five-second delay using genetic
programming and simulation techniques,
Proceedings of the 2000 American Control
Conference, Vol. 1, pp. 453-459.

[8] C.H. Marrison and R.F. Stengel: Robust Control
System Design Using Random Search and Genetic
Algorithms, IEEE Trans. Automatic Control, Vol. 42,
No. 6, pp. 835-839, 1997.

[9] M.L. Moore, J.T. Musacchio, and K.M. Passino:
Genetic Adaptive Control for an Inverted Wedge,
Proc. of the American Control Conference, San
Diego, California, pp. 400-404, June 1999.

[10] M. Sami Fadali, Y. Zhang, and S.J. Louis: Robust
Stability Analysis of Discrete-Time Systems Using
Genetic Algorithm, IEEE Trans. Systems, Man, and
Cybernetics, Part A: Systems and Humans, Vol. 29,
No. 5, pp. 503-508, 1999.

[11] Y.C. Sim, S.B. Leng and V. Subramaniam: A
Combined Genetic Algorithms-Shooting Method
Approach to Solving Optimal Control Problems, Int. J.
of System Science, Vol. 31, No. 1, pp. 83-89, 2000.

[12] A. Trebi-Ollennu and B.A. White: Multiobjective
fuzzy genetic algorithm optimization approach to
non-linear control system design, IEE Proc.-Control
Theory Appl., Vol. 144, No. 2, pp. 137-142, 1997.

USE OF GIS IN IMPROVING POWER QUALITY

SUBMITTED TO
SREE VENKATESHWARA UNIVERSITY - TIRUPATI
(UNDER THE MODULE: POWER QUALITY)


BY



B. SUDEEPTHI,
III/IV B.Tech.,
DEPARTMENT OF ELECTRICAL & ELECTRONICS ENGINEERING,
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY,
COLLEGE OF ENGINEERING,
ANANTAPUR.




















ABSTRACT
Each Indian consumes 359 units
of electricity a year. We pay Rs. 2.85 per
unit on average. Power is, for us, a
perennially scarce resource. Against a
peak demand of 73,000 MW, we have an
availability of 64,000 MW. This is
despite an installed capacity of over
1,00,000 MW as of 2001. As we race into
the future, peak demand is expected to
reach 1,57,000 MW by 2012.
Bridging this gap is a challenge
to the power sector. Techniques adopted
in the West, like remote monitoring of
meters, might not be the answer. To
adopt high-tech practices that seem to
work in privatized sectors abroad would,
in fact, be technological overkill in
India, where the real problem is that a huge
chunk of generated power is given away
to sectors like agriculture.
Therefore, we should find a
solution and adopt those techniques that
make sense for the Indian situation. There
is an imperative need for solutions to
optimize efficiencies within the existing
systems. The canny course is not to
adopt high-tech practices that seem to
work in privatized sectors abroad, but to
adopt those that make sense for the Indian
situation.

This is where technologies
like GIS (Geographical Information
Systems, the digitization of maps and
spatial information) as well as GPS
(Global Positioning Systems, the use of
satellites to fix co-ordinates on earth)
need to be employed if utilities are to
become both profitable and self-sufficient.
Such high-tech tools are known to improve
power quality and, in the long run, bring
down the level of consumption. There is a
wealth of expertise in the country when
it comes to power utility applications of
GIS/GPS. Globally, distribution
automation by utilities has shown that it
pays for itself in a very short span of
time.
Quite a few of the SEBs and
most of the newly formed distribution
companies are increasingly looking at IT
to provide solutions ensuring efficient
distribution of power across their
respective territories, despite financial
and communications infrastructure
constraints.

There is no silver bullet in sight
to bridge the gap except the sporting
chance that Information Technology
offers.



INTRODUCTION:

At present, in India, the demand
for electric power is 73,000 MW whereas
the supply is only 64,000 MW. This
gap is further expected to increase by at
least 20% this year.

This demand-supply gap can be
bridged in two ways. One way is to set
up new generation plants to increase
generation. But this is ruled out in the
near future on such a large scale because
of inadequate capital mobilization and
many other factors. The other way is to
improve the performance of the sector
by increasing the efficiency of the
system. This includes reducing the losses
due to the poor collection mechanism.

This inferior performance
includes inefficiencies in power
generation, transmission and
distribution, and end-use systems. These
inefficiencies include irrational
tariffs, technical obsolescence, inadequate
policy drivers and others. Transmission
and distribution losses, the world over,
amount to close to 30%. The poor
collection mechanism adds to the
problem. The Government of India,
acknowledging the problem, has set a
target of improving efficiency by
15% by 2007-08.

NEED TO AUTOMATE:

Existing systems have certain
inherent inefficiencies due to their
legacy. For one, almost all systems are
monitored manually. This results in
maintenance work being performed only
during breakdowns. The present system
also does not provide complete and
reliable power system and usage
information that can help trend
forecasting or help the utility in better
management and planning. New
challenges are imposed by the
deregulation of the energy market, greater
environmental concern and the proliferation
of open information systems. The new
energy scenario has compelled every utility
to confirm its position as a provider of
quality power. In addition, the social
pricing for rural and other sectors puts
increasing pressure on utilities to
improve productivity as well as to reduce
operating and maintenance costs in order to
remain financially viable.




IMPROVING DISTRIBUTION
EFFICIENCY:

With the advancement in IT and
telecom, the new millennium has
leapfrogged into a revolution in
networking and communications
systems to offer automation as a
solution to improve distribution
efficiencies. Distribution Management
System (DMS) is such a tool for
enterprise-wide management of an
electric utility system. In other words, it is an
ERP for an electric utility that, when
properly applied, provides for efficient
operations, enhances operational outputs
and translates into economic benefits.
Some of the initiatives in distribution
management include complete
distribution automation, city power
distribution automation and distribution
management with automatic meter-
reading for electrical utilities.
Internationally, power generation,
transmission and distribution are
attracting equal investments. In India,
too, private players have started
investing in various distribution
automation tools.
The major IT-based tools used in
energy accounting and energy flow are
detailed below:
Consumer indexing (GIS/GPS
based)
Remotely monitoring and
operating distribution systems in
a real time mode (SCADA
based)
Consumer analysis tool
Micro-controller based 11 KV
feeder management system (Feeder
automation)
Power networking software for
load flow analysis

GEOGRAPHICAL INFORMATION
SYSTEMS (GIS)

Geographical Information
Systems is the technology of digitizing
the maps and spatial information. Once
mapped, the details of the whole
geographic area are in one's hands and
these can be used for any purpose. GIS
is the only information technology that
understands the concept of location, and
location is embedded in most data in the
form of an address, city, region, district,
state or country. GIS transforms that
locational component into visual maps
that provide users a powerful
information perspective. These maps,
along with the technology of Global
Positioning Systems, find wide
applications in science and engineering
fields.

Paper based maps, aerial
photographs, remote sensing data and
data from other sources can be converted
to GIS formats. Software packages are
available which provide tools to interpret
information from a geographic
viewpoint. The following picture shows
the aerial view of an American
landscape.



In the raster model, a feature is
defined as a set of cells on a grid. All of
the cells on the grid are of the same
shape and size, and each one is identified
by a coordinate location and a value
which acts as its identifier. Features are
represented by a cell or group of cells
that share the same identifier. The raster
model is particularly useful for working
with continuous forms of features such
as soil types, vegetation etc.
DATA FOR GIS:

Data for a GIS comes in two
forms geographic or spatial data, and
attribute or aspatial data. Spatial data are
data that contain an explicit geographic
location in the form of a set of
coordinates. Attribute data are
descriptive sets of data that contain
various information relevant to a
particular location, e.g. depth, height,
sales figures, etc., and can be linked to a
particular location by means of an
identifier, e.g. address, pin code, etc.
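
For illustration only (the records are hypothetical), the following Python sketch shows spatial and attribute data kept separately and linked through such a common identifier.

# Spatial data: identifier -> explicit geographic location (coordinates).
spatial = {
    "DT-101": (13.6288, 79.4192),
    "DT-102": (13.6301, 79.4250),
}

# Attribute (aspatial) data: identifier -> descriptive information.
attributes = {
    "DT-101": {"rating_kVA": 100, "feeder": "F-7"},
    "DT-102": {"rating_kVA": 63, "feeder": "F-7"},
}

# Linking the two data sets through the shared identifier.
for asset_id, coords in spatial.items():
    print(asset_id, coords, attributes.get(asset_id, {}))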

DATA MODELS:

All graphical features on the
earth can be represented by only three
identities that are line, point and
polygon. The layers of data are stored in
the GIS using one of two distinctly
different data models, known as raster
and vector.


In the vector model, a feature is represented
as a collection of begin and end points
used to define a set of points, lines or
polygons which describe the shape and
size of the feature. The vector model is
particularly useful for representing
highly discrete data types. This model is
extensively used in digitizing things
such as transmission lines, distribution
transformers, sub-stations, boundaries
and the like.

A GIS can represent the spatial
variation of a given field property by
means of a cell grid structure in which
the area is partitioned into regular grid
cells (raster GIS) or using a set of points,
lines and polygons (vector GIS). The
spatial nature of a GIS also provides an
ideal structure for modeling. However,
the vector model is used in applications
related to power industry.
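
As a toy illustration (all values hypothetical), the sketch below contrasts the two models: a vector representation of a line feature, and a raster grid in which a feature is the set of cells sharing one identifier value.

# Vector model: an 11 kV line stored as an ordered list of coordinate pairs.
line_11kv = [(0.0, 0.0), (0.5, 1.2), (1.4, 2.0)]

# Raster model: a grid of equal cells, each carrying an identifier value
# (here 1 = vegetation, 2 = a soil type, 0 = background).
raster = [
    [0, 1, 1],
    [2, 2, 1],
    [2, 0, 0],
]

# A raster feature is simply the set of cells sharing the same identifier.
soil_cells = [(r, c) for r, row in enumerate(raster)
              for c, value in enumerate(row) if value == 2]

print("vector line vertices:", line_11kv)
print("raster cells of the soil feature:", soil_cells)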

The steps into which the digitization of
maps for GIS can be divided are:
DATA COLLECTION:
Conventionally, data on networks were
collected and stored in paper format.
This is not only labour intensive and
error prone but also less effective. With the
advent of newer technologies like GPS
and digital cameras, data collection can
be made faster and more accurate, and the
data can be integrated into the GIS more
easily. For newly developed areas where
no data is available, aerial photographs or
satellite images can be used. It is also
possible to use a combination of raster
and vector data.
The important data that are digitized
for power utility applications are:
Creation of seamless digital land
base data based on zones
Digitization of electrical feeder
networks, usually as feeder
single line diagrams, and
boundaries demarcating parts of
the feeder network belonging to
respective countries/cities
Conversion of facility and related
data pertaining to transmission,
sub-transmission, primary
overhead and underground,
secondary, street lighting, poles,
towers, and related plant items,
to digital format.
DATA CONVERSION
There are many types of software
available that convert the collected data
into easy-to-use maps, helping one view,
analyse and interpret information from a
geographic viewpoint. The process of
conversion includes pre-processing,
scaling, and heads-up digitizing of the base
map and electric utilities. Some of the
tools and tasks involved are:
Raster-to-vector conversion of
underground electric utilities
from scanned images (.cit) into
output (.dgn) files using
Microstation J and Field View
software.
Scanning and digitization of
geographical maps to create
detailed land base data in digital
form.
GeoLOGIC is another software
package, provided by Infotech, which
puts the power of GIS on the desktop.
DATA STORAGE:
A GIS stores a representation of the
world in the form of layers connected by
a common geographical frame of
reference. Each of the features on a
layer has a unique identifier which
distinguishes it from the rest of the
features on the layer and allows you to
relate it to relevant information stored in
external databases etc. Different views
and data about the world e.g., streets,
soils, pipes, power cables, etc. can be
captured and stored in the GIS over time
to accommodate the needs of various
different users and to reflect changes in
the landscape over time.
Data on network components can be
stored in the GIS for spatial querying. The
main data stored will be sub-stations,
distribution transformers and distribution
lines and, depending on the level of
sophistication, may even include types and
details of consumer locations. This data
can be collected and entered by field
staff in spreadsheets (e.g., MS Excel)
and later imported into the GIS. This
helps bring down costs, as technicians
with minimum GIS skills can do the bulk of
the data handling operations.
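
A minimal sketch of that workflow is given below; the column names and asset identifiers are hypothetical, and the embedded CSV text stands in for a spreadsheet exported by field staff.

import csv
import io

csv_text = """asset_id,type,lat,lon,rating_kVA
DT-101,distribution_transformer,13.6288,79.4192,100
SS-01,substation,13.6400,79.4100,5000
"""

# Build a simple layer keyed by the unique identifier of each feature.
layer = {}
for row in csv.DictReader(io.StringIO(csv_text)):
    layer[row["asset_id"]] = {
        "type": row["type"],
        "coords": (float(row["lat"]), float(row["lon"])),
        "rating_kVA": float(row["rating_kVA"]),
    }

print(layer["DT-101"])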
DATA MAPPING:

Generating the maps of distribution
system for various criteria using GIS
makes every day querying based on
specific criteria quick, easy and
understandable for engineers.
Fundamentally, the model presents
information as on a map. But it is an
intelligent map, in that each item that can
be seen is supported and described by
other data that the user can access with a
simple click of the mouse button. There
is a great deal of flexibility in the way
that data can be used. Background maps
in a variety of formats can be supported.
Parcel map is a utility application that
allows for easy identification of where a
property is located relative to public
ways and adjacent properties.

A typical digitized map of a
geographical area is shown here.



GIS, as a technology, finds
immense use in mapping wired
networks. Mapping solutions and
location information assist engineers in
network planning, building and
maintenance processes. Accounting for
geographic variables for a single tower
or the entire network, GIS ensures
accurate and cost-effective network
engineering.

HOW IS GIS USED IN POWER?

GIS, along with Global Positioning
Systems (GPS), has wide applications in
power sector.


This is based on an extension of
the GIS-based geographical distribution
network connectivity model and has
been promoted by suppliers of GIS
software as an additional GIS
application. The geographical relation
between the distribution network and the
customer service entrance can be
developed as part of the data take-up.
This, with the addition of the link to the
customer name and address from the
customer information system (CIS)
records, provides the customer-network
link information and enables all
geographically related tasks to be
managed from essentially one
application.

GIS and Remote Sensing provide
effective solutions for distribution
system management and power
transmission. GIS based solutions can
help in:
Selection of suitable areas
Optimum path finding
Profile analysis
Engineering design of towers and
wires
Cost estimation
DISTRIBUTION MANAGEMENT
SYSTEM
FIRST PRIORITY - CONTROL

It is apparent from the majority
of recent DMS implementations and
surveys that have been completed that
the implementation of remote control
provides the fastest and greatest benefits
to distribution operations. Therefore, we
state that the first priority of any DMS is
control. The depth of this control is
again dependent on economics.

With all the required details in its
hands, the distribution utility can control
its distribution system, its crew and work
on its priorities. It can always have an
eye on its properties and look for better
working of its systems.


SECOND PRIORITY - CONTROL ROOM MANAGEMENT

The second priority is to be able
to manage the entire LV and MV
network of combined remotely
controlled switches and remaining
manually operated devices. This is
accompanied by the Control Room
Management function, which employs a
connectivity model of the entire
electrical network, which can be viewed
graphically.

The utility has complete
knowledge of all its networks with the
help of the digitized maps, and this
diagram can be dressed manually to
show manually operated devices as they
are operated by field crews, the
application of tags to show areas of work
or restrictions, and also temporary
network changes.
THIRD PRIORITY - TROUBLE CALL OR OUTAGE MANAGEMENT
Different approaches have been
taken by the industry to tackle outages.

Utilities with very limited
penetration of real-time control but good
customer and network records use the
GIS-technology approach. This
solution is prevalent in places where
distribution sub-stations are smaller and,
except for large downtown networks, the
low voltage feeder system is limited, with
on average between 6 and 10
customers being supplied from one
distribution transformer. This system
structure makes it easier to establish the
customer-network link.
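
For illustration only (the customer and transformer identifiers are hypothetical), the following sketch shows how the customer-network link can point the control room at the transformer most likely affected by a cluster of trouble calls.

from collections import Counter

# Customer-network link: each customer is tied to one distribution transformer.
customer_to_transformer = {
    "C-1001": "DT-101", "C-1002": "DT-101", "C-1003": "DT-101",
    "C-2001": "DT-102", "C-2002": "DT-102",
}

trouble_calls = ["C-1001", "C-1003", "C-2001"]   # calls received during an outage

calls_per_dt = Counter(customer_to_transformer[c] for c in trouble_calls)
suspect, n_calls = calls_per_dt.most_common(1)[0]
print(f"Most calls ({n_calls}) map to {suspect}; inspect that transformer first.")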

The frequency of maintenance
requirements of the link between the
main control station and the sub-stations
is relatively low and can be satisfied
by a batch transaction. The only missing
data is the status of any circuit element
which may be operated under protection,
SCADA, or by the field crews directed
from the control room. This information,
and thus the link to the GIS data model,
must operate at SCADA transaction
speeds. In practice, GIS databases
were not designed for such transactional
speeds and suffered from unacceptable
performance when there was a
significant penetration of real-time
control. The performance was improved
by moving the connectivity model onto a
separate server, mirroring any related
changes to the GIS database. However,
the fast data link with SCADA still has
to be maintained with the dedicated
server. GIS-centric solutions have been
implemented where very limited
penetration of real-time control exists
and the control room management
function is in the majority GIS based.
Thus all network data changes and
extensions are within the frequency of
most GIS database amendments.

The following is a typical map
that a utility uses to locate each
customer, during a trouble call, along
with the other information related to the
outage.



SCADA Centric

In contrast, those with good real-
time systems and extended control use
direct measurements from automated
devices. European systems, with very
extensive secondary systems of up to
400 consumers per distribution
transformer, concentrate on using real-
time systems such as SCADA.

In this system, any information
about an outage will precede any
customer's knowledge of an outage on
the MV network and hence his trouble
call. This is done by field devices
such as Remote Terminal Units (RTUs).
Thus the trouble call process is more to
maintain customer relations and
association with the outage during the
isolation and restoration process.
However, for any fault on the LV
network undetectable by the SCADA, the
trouble call will provide invaluable
information about an LV outage in a
similar manner to the GIS-centric
solution. SCADA-centric solutions also
use a separate server that is synchronized
with the SCADA run-time database in
real time.

In conclusion, both solutions
have moved to using a dedicated server for
the trouble call application to ensure
adequate performance. It is the
penetration level of automation and the
desired detail of LV representation
(connectivity) that is realistic to
maintain that are the main issues. The
only advantage of geographical diagrams
is for crew management. Geographic
background maps can be either loaded as
an additional display or accessed
directly from the GIS through modern
cross-navigational features now readily
available in most DMSs.

A trouble call approach, to be
truly effective, would have to operate
from the LV system, where establishing
the customer-network link is more
challenging. In these cases, the trouble
call response is aimed at maintaining
customer relations as a priority over fault
location, which is achieved through a
combination of system monitoring and
advanced applications.

ADVANTAGES OF GIS

There are many advantages of using the
GIS systems over the other
contemporary techniques.
No new equipment is required
when compared with other
contemporary techniques, for
managing distribution system.
Its use brings down costs as less
skilled people can handle large
amount of data.

System Integration
Integration of the GIS with
Network Analysis Systems to
carry out load flow studies and
other analyses
Integration with SCADA, to
provide better outage
management
Customization and integration
with other management systems,
such as, Financial Energy
Management and Billing
systems.
Digital Pages is one such software which
provides tools for integrating GIS with
other systems.
The following picture shows how
GIS can be made to work with many
applications.


Application Development
Application to automate data
management and map making
processes and to ensure up-to-
date, accurate spatial and
attribute data
Development of an integrated
suite of applications that provides
automated mapping/facilities
management/GIS functions for
mobile mapping and staking,
network maintenance, network
design, work orders, outage
management, engineering
planning, and integration with
customer billing
Development of custom web-
application, with custom data
entry pages to link every
customer's meter number with
the transformer, for tracking
residential and business
customers

THE FUTURE

GIS systems are mostly
applicable in areas where population
density is low. In India, the population
density is very high; still, GIS can be
applied. Other techniques, such as
SCADA, which are generally applied to
populated areas, are equipment
intensive. The narrow streets of India do
not have sufficient infrastructure to
support these systems. As already
pointed out, there are many other points
in favour of this system. GIS, even
though difficult to apply in the Indian
context, does provide support to power
utilities in managing their distribution
systems. With the passage of time, it can
be integrated with other techniques.

There is a growing realization in
India that GIS systems will have a
significant impact on distribution control
applications and the way enterprises
manage, or will manage their business to
stay competitive.


CONCLUSION

As we evaluate the application of
DMS automation to the utility
distribution system, to increase
efficiency, we must concentrate on
economical solutions to the problems.
The solutions must initially be based on
options to solve the business problem,
not on the equipment. As the evaluation
process continues, the means of
implementation, namely communication
systems, software, field devices, and
maintenance and operations, will
eventually be considered. To meet
the increasing demand for power, from
both industry and domestic sectors,
every possible method must be
considered. Distribution automation will
pay for itself. Smart utilities will analyze
their potential benefits and apply
distribution automation. Distribution
Automation, based on IT tools, will be
vital to the power industry in improving
the efficiency of the whole system.
Adapt, don't adopt!














HVDC TRANSMISSION

Case Study:
H.V.D.C Transmission Line

BY

D.NAGAGOPAL REDDY
E.E.E,
(04071A0220)
E-Mail:pulluri.kittu@gmail.com

&

S.NAVEEN KUMAR
E.E.E,
(04071A0225)
E-Mail: naveen_eee_225@yahoo.co.in

VNR VIGNANA JYOTHI
COLLEGE OF
ENGINEERING AND
TECHNOLOGY

ABSTRACT
Modern DC power transmission is
a relatively new technology, which made
a modest beginning in 1954. The advent
of thyristor valves and related
technological improvements over the last
18 years have been responsible for
accelerating the growth of HVDC
systems. The HVDC technology is still
undergoing many changes due to
continuing innovations directed at
improving reliability and reducing the costs
of converter stations.
The present paper discusses
methods of transmission and conversion,
and the various types of converters used in
order to reduce the cost factors of HVDC.
The paper also discusses the past, present
and future prospects and advantages of this
technology.

INTRODUCTION
Electric power transmission was
originally developed with direct current.
The availability of transformers, and the
development and improvement of induction
motors at the beginning of the 20th century,
led to greater appeal and use of a.c.
transmission. Experimental plants were set
up in the 1930s in Sweden and the USA to
investigate the use of Mercury-arc valves in
conversion processes for transmission and
frequency changing.
D.C. transmission became
practical when long distances were to be
covered or where cables were required. The
increase in the need for electricity after the
Second World War stimulated research.
The first commercial HVDC line, built in
1954, was a 98 km submarine cable with
ground return between the island of
Gotland and the Swedish mainland.
Thyristors were applied to D.C.
Transmission in the late 1960s and solid-
state valves became a reality. In 1969, a
contract for the Eel River D.C. Link in
Canada was awarded as the first application
of solid-state valves for HVDC
transmission. Today, the highest functional
D.C. voltage for D.C. transmission is +/-
600 kV for the 785 km transmission line of
the Itaipu scheme in Brazil. D.C.
transmission is now an integral part of the
delivery of electricity in many countries
throughout the world.


WHY USE DC TRANSMISSION?
The question is often asked, "Why
use D.C. transmission?" One response is
that losses are lower, but this is not correct.
The level of losses is designed into a
transmission system and is regulated by the
size of the conductor selected. D.C. and
A.C. conductors, either as overhead
transmission lines or submarine cables, can
be designed for lower losses, but at a higher
expense, since the larger cross-sectional
area will generally result in lower losses but
cost more. When converters are used for D.C.
transmission in preference to A.C.
transmission, it is generally an economic
choice driven by one of the following
reasons:
An overhead D.C. Transmission
line with its towers can be designed
to be less costly per unit length than
an equivalent A.C. line designed to
transmit the same level of electric
power. However the D.C. converter
stations at each end are more costly
than the terminating stations of an
A.C. line and so there is a break-
even distance above which the total
cost of D.C. Transmission is less
than its A.C. Transmission
alternative. The D.C. Transmission
line can have a lower visual profile
than an equivalent A.C. line and so
contributes to a lower
environmental impact. There are
other environmental advantages to a
D.C. Transmission line through the
electric and magnetic fields being
D.C. instead of A.C.
If transmission is by submarine or
underground cable, the breakeven
distance is much less than overhead
transmission. It is not practical to
consider A.C. cable systems
exceeding 50 km but D.C. Cable
transmission systems are in service
whose length is in hundreds of
kilometers, and even distances of
600 km or greater have been
considered feasible.
Some A.C. electric power systems
are not synchronized with neighboring
networks even though the physical
distance between them is quite
small. This is the case in Japan,
where half the country is a 60 Hz
network and the other half a 50 Hz
system. It is physically impossible
to connect the two together by
direct A.C. methods in order to
exchange electric power between
them. However, if a D.C. converter
station is located in each system
with an interconnecting D.C. link
between them, it is possible to
transfer the required power flow
even though the A.C. systems so
connected remain asynchronous.

CONFIGURATIONS
The integral part of an HVDC power
converter is the valve or valve arm. It is
non-controllable if constructed from one
or more power diodes in series, or
controllable if constructed from one or
more thyristors in series. If power flows
from the D.C. valve group into the A.C.
system, it is an inverter. Each valve
consists of many series-connected thyristors
in thyristor modules. The six-pulse valve
group was usual when the valves were of
the mercury-arc type.

Twelve Pulse Valve Group
Nearly all HVDC power converters with
thyristor valves are assembled in a
converter bridge of twelve-pulse
configuration. The figure given below
demonstrates the use of two three-phase
converter transformers, with one D.C.-side
winding as an ungrounded star connection
and the other a delta configuration.
Consequently, the A.C. voltages applied
to the two six-pulse valve groups which
make up the twelve-pulse valve group have
a phase difference of 30 degrees, which is
utilized to cancel the A.C.-side 5th and 7th
harmonic currents and the D.C.-side 6th
harmonic voltage.
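
The standard result behind this cancellation is that an ideal p-pulse converter produces A.C.-side harmonics only of orders np ± 1 and D.C.-side harmonics of orders np; the short sketch below simply lists these orders for 6-pulse and 12-pulse bridges.

# Characteristic harmonic orders of an ideal p-pulse converter:
# AC side: n*p +/- 1, DC side: n*p (n = 1, 2, 3, ...).
def characteristic_harmonics(pulse_number, n_max=4):
    ac = sorted(h for n in range(1, n_max + 1)
                for h in (n * pulse_number - 1, n * pulse_number + 1))
    dc = [n * pulse_number for n in range(1, n_max + 1)]
    return ac, dc

for p in (6, 12):
    ac, dc = characteristic_harmonics(p)
    print(f"{p}-pulse: AC harmonics {ac}, DC harmonics {dc}")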





Thyristor Module
A thyristor or valve module is that part of a
valve comprising a mechanical assembly of
series-connected thyristors and their
immediate auxiliaries, including heat sinks
cooled by air, water or glycol, damping
circuits, and valve-firing electronics. A
thyristor module is usually interchangeable
for maintenance purposes and consists of
electric components as shown in the figure
below.

Sub-Station configuration



The central equipment of a D.C. sub-station
(figure 3) is the thyristor converter, which
is usually housed inside a valve hall. The
figure shows an example of the electrical
equipment required for a D.C. sub-station.
In this example, two poles are represented
(which is the usual case) and this is known
as the Bi-pole configuration. From the
figure, essential equipment in a D.C. sub-
station, in addition to the valve groups,
include the converter transformers. For
higher rated D.C. sub-stations, converter
transformers for 12-pulse operation are
usually comprised of single-phase units,
which is a cost-effective way to provide
spare units for increased reliability. Surge
arresters across each valve in the converter
bridge, across each converter bridge and in
the D.C. and A.C. switch-yards are
coordinated to protect the equipment from
all over-voltages regardless of their source.
They may also be used in non-standard
applications such as filter protection.
Modern HVDC substations use metal-oxide
arresters, and their rating and selection is
made with careful insulation coordination
and design.

HVDC Converter Arrangements
HVDC converter bridges and lines or
cables can be arranged into a number of
configurations for effective utilization.
Converter bridges may be arranged either
mono-polar or bi-polar. The various ways
of HVDC transmission used are shown in
simplified form and include the following:
Back-to-Back. There are some
applications where the two A.C.
systems to be inter-connected are
physically in the same location or
sub-station. No transmission line or
cable is required between the
converter bridges in this case, and
the connection may be mono-polar
or bi-polar.
Transmission between Two
Substations. When it is economical
to transfer electric power through
D.C. transmission cables from one
geographical location to another, a
two-terminal or point-to-point
HVDC transmission is used.
Multi-terminal HVDC
Transmission System. When three
or more HVDC sub-stations are
geographically separated with inter-
connecting transmission lines or
cables, the HVDC transmission
system is multiterminal.
Unit Connection. When D.C.
transmission is applied right at the
point of generation, it is possible to
connect the converter transformer of
the rectifier directly to the generator
terminals, so that the generated
power feeds into the D.C.
transmission lines.
Diode Rectifier. It has been
proposed that in some applications
where D.C. power transmission is in
one direction only, the valves in the
rectifier-converter bridges can be
constructed from diodes instead of
thyristors.



ENVIRONMENTAL
CONSIDERATIONS:
Field and ion effects, as well as
corona effects, characterize the
environmental impact of HVDC
transmission lines. The electric field arises
both from the electrical charge on the
conductors and, for an HVDC overhead
transmission line, from charges on air ions
and aerosols surrounding the conductor.
These give rise to D.C. electric fields due to
the ion-current density flowing through the
air from or to the conductors, as well as due
to the ion density in the air. A D.C.
magnetic field is produced by the D.C.
current flowing through the conductors. Air
ions produced by HVDC lines form clouds
which drift away from the line when blown
by the wind and may come into contact with
humans, animals and plants outside the
transmission line right-of-way or corridor.
The corona effects may produce low levels
of radio interference, audible noise and
ozone generation.

Field and corona effects
The field and corona effects of
transmission lines largely favor D.C.
transmission over A.C. transmission. The
significant considerations are as follows:
For a given power transfer requiring
extra-high voltage transmission, the
D.C. transmission line will have a
smaller tower profile than the
equivalent A.C. tower carrying the
same level of power. This can also
lead to less width of right-of-way
for the D.C. transmission option.
The steady and direct magnetic
field of a D.C. transmission line
near or at the edge of the
transmission right-of-way will be
about the same value in magnitude
as the earths naturally occurring
magnetic field. For this reason
alone, it seems unlikely that this
small contribution by HVDC
transmission lines to the
background geo-magnetic field
would be a cause for concern.
The static and steady electric field
from D.C. transmission at the levels
experienced beneath the lines or at
the edge of the right-of-way have no
known adverse biological effects.
There is no theory or mechanism to
explain how a static electric field at
the levels produced by D.C.
transmission lines could affect
human health. The electric field
level beneath a HVDC transmission
line is of similar magnitude as the
naturally occurring static field,
which exists beneath thunderclouds.
Electric fields from A.C
transmission lines have been under
more intense scrutiny than fields
generated from D.C. transmission
lines.
The ion and corona effects of D.C.
transmission lines lead to a small
contribution of ozone production to
higher naturally occurring
background concentrations.
Exacting long-term measurements
are required to detect such
concentrations.
If ground return is used with
monopolar operation, the resulting
D.C. magnetic field can cause error
in magnetic compass readings taken
in the vicinity of the D.C. line or
cable. This impact is minimized by
providing a conductor or cable
return path (known as metallic
return) in close proximity to the
main conductor or cable for
magnetic field cancellation.

Cost structure
The cost of a HVDC transmission
system depends on many factors, such as
the power capacity to be transmitted, type
of transmission medium, environmental
conditions and other safety, regulatory
requirements etc. Even when these are
available, the options available for optimal
design (different commutation techniques,
variety of filters, transformers etc.) render it
difficult to give a cost figure for a HVDC
system. Nevertheless, a typical cost
structure for the converter stations could be
as follows:



As guidance, an example showing the price
variation for an AC transmission compared
with a HVDC transmission for 2000 MW is
presented below.




Assumptions made in the price
calculations:
For the A.C. transmission, a double
circuit is assumed with a price of 250k
USD/km (each circuit); A.C. sub-stations and
series compensation (above 600 km) are
estimated at 80M USD. For the HVDC
transmission, a bipolar overhead line was
assumed with a price of 250k USD/km, and
the converter stations are estimated at 250M
USD. It is strongly recommended to contact
a manufacturer in order to get a first idea of
costs and alternatives. The manufacturer
should be able to give a budgetary price
based on a few data, such as rated power,
transmission distance, type of transmission,
and the voltage levels of the A.C. networks
to which the converters are going to be
connected.
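
Using only the figures assumed above, a rough break-even distance can be computed; the sketch below is indicative only and ignores losses and the series compensation required above 600 km.

# Rough break-even sketch using the assumed example figures quoted above.
ac_terminal_cost = 80e6            # USD, A.C. sub-stations
ac_line_cost_per_km = 2 * 250e3    # USD/km, double circuit (two circuits)
dc_terminal_cost = 250e6           # USD, HVDC converter stations
dc_line_cost_per_km = 250e3        # USD/km, bipolar overhead line

# Total cost = terminal cost + line cost * distance; break-even where equal.
break_even_km = (dc_terminal_cost - ac_terminal_cost) / \
                (ac_line_cost_per_km - dc_line_cost_per_km)
print(f"Indicative break-even distance: about {break_even_km:.0f} km")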
The choice of the D.C. transmission voltage
level has a direct impact on the total
installation cost. At the design stage, an
optimisation is done to find the optimum
D.C. voltage from the investment and losses
points of view. The costs of losses are also
very important; in the evaluation of losses,
the energy cost and the time-horizon for
utilization of the transmission have to be
taken into account. Finally, the depreciation
period and the desired rate of return (or
discount rate) should be considered.
Therefore, to estimate the costs of an HVDC
system, it is recommended that a life-cycle
cost analysis be undertaken.



However, the break-even distance, power
transfer level criteria and the comparative
cost information should be taken in the
proper perspective, for the following
reasons:
In the present (and future) industry
environment of liberalized,
competitive markets, heightened
efforts are being made to conserve
the environment. In such a
scenario, the alternative to a
transmission system is an on-site
gas-fired combined-cycle power
plant, not necessarily a choice
between an A.C. transmission and an
HVDC one.
Second, system prices for both
A.C. and HVDC have varied
widely for a given level of power
transfer. What this shows,
therefore, is that in addition to the
criteria mentioned above (power
levels, distance, transmission
medium, environmental conditions
etc.), the market conditions at the
time of the project are a critical
factor, perhaps more so than the
numerical comparisons between the
costs of an A.C. or D.C. system.
Third, technological developments
have tended to push HVDC system
costs downward, while environmental
considerations have resulted in
pushing up high-voltage A.C. system
costs. Therefore, for the purposes of
early-stage feasibility analysis of the
transmission system type, it is
perhaps better to consider HVDC
and high-voltage A.C. systems as
equal-cost alternatives.

CONCLUSION
Modern HVDC systems combine
the good experience of the old installations
with recently developed technologies and
materials. The result is a very competitive,
flexible and efficient way of transmitting
electrical energy with a very low
environmental impact.
It is important to remark that an
HVDC system not only transmits electrical
power from one point to another, but also
adds a lot of value which would otherwise
have to be provided by other means if a
conventional A.C. transmission were used.
HVDC systems remain the most
economical and environment-friendly
option for the conventional applications
described above. However, three different
dynamics, namely technology development,
deregulation of the electricity industry
around the world, and a quantum leap in
efforts to conserve the environment, are
demanding a change in thinking that could
make HVDC systems the preferred
alternative to high voltage A.C. systems in
many other situations as well.
It is quite conceivable that with
changed circumstances in the electricity
industry, the technological developments,
and environmental considerations, HVDC
would be the preferred alternative in many
more transmission projects.





(Under the theme: Non-Conventional Energy Sources)





PRESENTED BY
V.Vijay Bhaskar 3/4 E.E.E
R Ravi Kumar 3 /4 E.E.E



















ELECTRICAL AND ELECTRONICS ENGINEERING



MAIL ADDRESS:
veeravijay_4060@yahoo.co.in
raviramineni@yahoo.co.in




ABSTRACT:
The current worldwide emphasis on reducing greenhouse gas (GHG) emissions
provides an opportunity to revisit how energy is produced and used, consistent with the
need for human and economic growth. GHG reduction strategies must include a
greater penetration of electricity into areas, such as transportation, that have been the
almost exclusive domain of fossil fuels. An opportunity for electricity to displace
fossil fuel use is through the electrolytic production of hydrogen. Energy storage is
essential to accommodate low capacity factor non-carbon sources such as wind and
solar.
Electricity can be used directly to power stand-alone hydrogen production
facilities. Electrochemical and process industries frequently flare hydrogen by-
products to the atmosphere. This paper discusses hydrogen power conversion
methods, including fuel cells and combustion technologies. It presents an
overview of some of the practical implementation methods available and the
challenges that must be met. The process and construction of distributing power to
either the AC power system or a DC process bus are examined. This technology is
expected to become cost-competitive as energy prices continue to climb and fuel cell
technology matures.


















INDEX:
INTRODUCTION
HYDROGEN BASICS
HYDROGEN ECONOMY
FUEL CELL ---- WORKING
APPLICATIONS OF FUEL CELL
IN AUTOMOBILES
IN RESIDENTIAL DWELLING
IN FUELING INFRASTRUCTURE
LIMITATIONS
STORAGE
ENVIRONMENTAL BENEFITS
CONCLUSION
REFERENCES






















INTRODUCTION:
The amount of waste hydrogen produced varies widely depending on the process in
a particular plant. This paper examines alternatives for plants that produce four metric
tonnes of hydrogen. Power generation greater than 1 MW is the focus of this paper.
I. HYDROGEN BASICS:
The combustion of hydrogen produces no carbon dioxide (CO2), particulate
or sulphur emissions. As the conventional fuel of choice progressed from wood to
coal, then to oil and natural gas, the percentage of carbon in our fuel has declined and
the percentage of hydrogen has increased. Taking this progression to the extreme, one
could argue that we will eventually be using 100% hydrogen fuel even without the
motivation of environmental benefits.
Hydrogen has higher energy per unit of mass but lower energy per unit
volume. By weight, hydrogen carries about three times the energy of our most common
fuels. The major downside of hydrogen is its poor volumetric energy density, making
storage and transportation a fundamental challenge. To help resolve this problem the
hydrogen industry is currently certifying 1,000 psi hydrogen cylinders. Hydrogen is
actually safer than media reports of the past suggest. The small size of the hydrogen
molecule results in free hydrogen (a leak) dispersing very quickly in the atmosphere,
and its chances of creating an explosion are somewhat less than for conventional fossil
fuel vapors. The great advantage of many electrochemical plants is their continuous
production of hydrogen, which may be utilized continuously to produce power.
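
As a rough numerical illustration (the heating values and densities below are approximate figures from the general literature, not from this paper), the gravimetric and volumetric comparison can be sketched as follows.

# Approximate lower heating values [MJ/kg] and densities [kg/L] (indicative only).
lhv_mj_per_kg = {"hydrogen": 120.0, "gasoline": 44.0}
density_kg_per_l = {"hydrogen_700bar": 0.042, "gasoline": 0.74}

mass_ratio = lhv_mj_per_kg["hydrogen"] / lhv_mj_per_kg["gasoline"]
print(f"Per kg, hydrogen carries about {mass_ratio:.1f} times the energy of gasoline")

vol_h2 = lhv_mj_per_kg["hydrogen"] * density_kg_per_l["hydrogen_700bar"]
vol_gasoline = lhv_mj_per_kg["gasoline"] * density_kg_per_l["gasoline"]
print(f"Per litre: about {vol_h2:.0f} MJ (700 bar H2) vs {vol_gasoline:.0f} MJ (gasoline)")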

HYDROGEN-ECONOMY:
Our dependence on fossil fuels presents fundamental challenges to our
economic security, environmental security and homeland security. We must pursue a
promising pathway to a more secure energy future.
Hydrogen can be produced renewably or from conventional energy sources;
the result is fuel flexibility and energy security. Hydrogen is well matched with renewable
energy technologies like solar and wind power.
Hydrogen fuel cells generate electricity with no conventional pollutants.
Fuel cells also produce less CO2 per unit of work, usually less than conventional
alternatives.
Transitional strategies like hybrid vehicles will help, but because of growth
in vehicle use, we would still need to import as much oil as we import today. We
need a permanent solution.



FUEL CELL:
The fuel cell does not generate energy through burning; rather, it is based on
an electrochemical process. The energy conversion is about twice as efficient as
combustion. There are little or no harmful emissions; the only release is clean water.
Hydrogen, the simplest element, consisting of one proton and one electron, is
plentiful and is exceptionally clean as a fuel. Hydrogen makes up about 90% of the
universe and is the third most abundant element on the earth's surface. Such wealth of
energy would provide an almost unlimited amount of energy at relatively low fuel cost.
But there is a price to pay: the fuel cell core (or stack), which converts oxygen and
hydrogen to electricity, is expensive to build and maintain.
A fuel cell is electrolysis in reverse, using two electrodes separated by an
electrolyte. Hydrogen is presented to the negative electrode (anode) and oxygen to the
positive electrode (cathode). A catalyst at the anode separates the hydrogen into
positively charged hydrogen ions and electrons. In the Proton Exchange Membrane
(PEM) system, the hydrogen ions migrate across the electrolyte to the cathode
compartment, where they combine with oxygen to form water. A single fuel cell
produces 0.6-0.8 volts under load. Several cells are connected in series to obtain
higher voltages.
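
As a simple sizing illustration based on the per-cell figures above (the target voltage and power are arbitrary assumptions), the number of series-connected cells and the stack current can be estimated as follows.

import math

cell_voltage = 0.7       # V per cell under load (within the 0.6-0.8 V range above)
target_voltage = 48.0    # V, assumed bus voltage
target_power = 1000.0    # W, assumed unit size of about 1 kW

n_cells = math.ceil(target_voltage / cell_voltage)
stack_current = target_power / (n_cells * cell_voltage)
print(f"{n_cells} cells in series, drawing about {stack_current:.1f} A at full power")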

Fig. The principle of an electrolyzer, shown left; of a fuel cell, shown right


CHEMICAL REACTIONS:
Anode: H2 → 2H+ + 2e-
Cathode: 1/2 O2 + 2H+ + 2e- → H2O
Overall: H2 + 1/2 O2 → H2O



APPLICATIONS:
IN AUTOMOBILES:
The fuel cell is intended to replace the internal combustion engine of cars,
trucks and buses. Major car manufacturers have teamed up with fuel cell research centers or are
doing their own development. Because of pending technical issues with the fuel cell and the low
cost of the combustion engine, experts predict mass-produced fuel-cell-powered cars to arrive by
2015, or even 2020. Some experts go as far as to say that the commercial viability of the fuel cell
is not yet proven.

Fig... Hyper Car

Large fuel cell plants running at 40,000 kW will likely out-pace the
automotive industry. Such systems could provide electricity to remote
locations within 10 years. Many of these regions have an abundance of fossil
fuel that could be utilized. The stack on these large power plants would last
longer than in mobile applications because of steady use, even operating
temperatures and the absence of shocks and vibrations.

Fuel cells may soon compete with batteries for portable applications, such as
laptops. The energy will be cheaper than that of a conventional battery and
lengthy recharging will become redundant. However, the size and price of
today's portable fuel cells do not yet meet customers' expectations.
IN RESIDENTIAL DWELLING:
A web of energy networks interconnects residential houses. A
few houses have fuel cells. Those houses interchange the hydrogen that is produced
by their fuel processors. All houses are able to join in the interchange of hot water and
electricity. Electricity is interchanged virtually via the conventional grid.

Fig... Example of proposed energy networks in residential houses
The introduction of fuel cells into all three houses is not required, but at least
one house should have a fuel cell. Pipes interconnect the hydrogen storage devices
and hot water tanks respectively. This makes it possible to use equipment belonging
to other houses as if it were one's own. The fuel cells for residential use that will be
supplied to the market are assumed to have a capacity of around 1 kW. The night-time
electricity demand of residential dwellings is usually less than 1 kW. The heat supply
from the fuel cells is also too large, except in winter. Extra electricity or extra heat
(hot water) is expected to be supplied to the neighbouring houses. This realizes virtual
sharing of a fuel cell among houses.

Fig... Energy interchanges of residential houses
New installation of pipes for energy networks among houses is necessary. Note
that residential houses are usually densely constructed and the distance between a
house and the next house is very small in urban areas of Japan. The construction is
expected to be easy and inexpensive if the pipe runs through the houses or their gardens.
A house fills its hot water tank with hot water from its own fuel cell primarily, and
can fill the tank of another house (rent the tank) when its own tank is full.
Sharing a fuel processor with other houses provides a considerable reduction of the
partial-load operation and the start-stop operation which damage the efficiency or the
lifetime. Thus, the proposed system realizes effective utilization of the exhaust heat of
fuel cells and the efficient reformation of hydrogen. The mitigation of CO2 emission
and the energy conservation in the next couple of decades is 2.3 times larger than in the
case where fuel cells are introduced independently.
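
A toy calculation of the night-time interchange idea (all numbers hypothetical) might look as follows.

# Night-time balance of three interconnected houses; house_B has no fuel cell.
fuel_cell_output_kw = {"house_A": 1.0, "house_B": 0.0, "house_C": 1.0}
night_demand_kw = {"house_A": 0.6, "house_B": 0.5, "house_C": 0.4}

surplus = {h: fuel_cell_output_kw[h] - night_demand_kw[h] for h in night_demand_kw}
total_surplus = sum(v for v in surplus.values() if v > 0)
total_deficit = -sum(v for v in surplus.values() if v < 0)

print("per-house surplus (kW):", surplus)
print(f"sharable surplus {total_surplus:.1f} kW covers deficit {total_deficit:.1f} kW:",
      total_surplus >= total_deficit)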
A key impediment to expanded fuel cell vehicle use is the fueling infrastructure.
The use of distributed hydrogen fueling systems is seen as an intermediate pathway to
permit infrastructure development, with a future transition to a hydrogen pipeline
delivery infrastructure. This project leverages the substantial natural gas delivery
infrastructure by using onsite natural-gas-to-hydrogen fueling systems.


FIG... Compact fuel processing system
Several key technologies are being developed in this project. This includes a
highly compact, cost-effective steam methane reformer and fuel-processing
technology developed by GTI. This unit has been adapted to serve as a hydrogen
generator for fueling stations. An additional core effort is the development of a hydrogen
dispenser with an advanced filling algorithm that will permit accurate and complete
filling of compressed hydrogen vehicles under a range of conditions. These advanced
subsystems (reforming, fuel cleanup, compression, storage, and dispensing) will be
incorporated into an integrated and cost-competitive small hydrogen-from-natural-gas
fueling station that will support hydrogen fueling infrastructure development and
expansion.

LIMITATIONS:

The efficiency of a new power source is often compared with that of a diesel engine or
a nickel-cadmium battery, both of which perform well at 100% load factor. This is
not the case with the fuel cell, which operates best at about 30% load. Higher loads
reduce the efficiency considerably. Supplying pure oxygen instead of air improves the
load factor.
The fuel cell is intended to replace the chemical battery. Ironically, it will
promote the battery. Most fuel cell applications need batteries as a buffer to provide
momentary high load currents; the fuel cell will keep the battery charged. For portable
applications, a super-capacitor will improve the loading characteristics and enable
high current pulses.
One of the major limitations of the fuel cell is the high energy cost. While an
internal combustion engine requires an investment of $30 to produce one kilowatt
(kW) of power, the equivalent cost in a fuel cell is a whopping $3,000 (refer to The
cost of portable power). Part of that cost is due to experimental production, since the
fuel cell is not yet mass-produced. The goal is to develop a fuel cell that is on par with a
diesel engine in terms of cost.
Once the current difficulties have been solved, the fuel cell is bound to find
applications that lie beyond the reach of the internal combustion engine. It is said that
the fuel cell is as revolutionary as the will take longer.


STORAGE:
Hydrides, however, store little energy per unit weight. Current research aims to
produce a compound that will carry a significant amount of hydrogen with a high
energy density, release the hydrogen as a fuel, react quickly, and be cost-effective.
Hydrogen may also be stored as a gas, which uses less energy than making liquid
hydrogen. As a gas, it must be pressurized to store any appreciable amount.
For large-scale use, pressurized hydrogen gas could be stored in caverns, gas
fields, and mines. The hydrogen gas could then be piped into individual
homes in the same way as natural gas.



ENVIRONMENTAL BENEFITS:
Claims that hybrid vehicles are just as clean environmentally as fuel cell
vehicles are inaccurate.
Only hydrogen offers the promise of completely removing motor vehicles
from the pollution equation.
Although fossil fuels will be used to produce hydrogen in the medium term,
in the long term hydrogen can be derived largely from renewable sources.
Gasoline, on the other hand, can only be derived from fossil fuels.
Although a Prius hybrid is significantly cleaner than a conventional gasoline
vehicle, it is not cleaner than a hydrogen vehicle where the hydrogen is
derived from clean sources.
Argonne National Laboratory and the NRC evaluated a natural-gas-based H2
FCV and calculated that it emits 60% less greenhouse gases than a conventional
gasoline vehicle and 25% less than a Prius hybrid (2004).








CONCLUSION:
The concept of interconnection of residential dwellings with energy
networks of electricity, heat and hydrogen is presented in this paper. It provides
considerable effects on carbon dioxide mitigation and energy conservation. The
experimental systems are developed to improve the analysis, not only of those effects
but also of the schemes, rules and protocols of energy interchange and the operation of
equipment. The analysis provides valuable data and information.
The automotive industry has made huge investments and continues to
propel this technology forward. Stationary fuel cell deliveries are ramping up with
increased reliability and length of service. High temperature fuel cells, for both co-
generation power plants and smaller power plants for home use, are being produced
in increasing numbers.

Fuel cells offer a high degree of operational efficiency, zero emissions,
and reduced reliance on foreign petroleum. If the technological and infrastructure
barriers can be remedied, fuel cells provide enormous environmental, economic,
and political benefits. If these benefits are to be realized, we must commit to the
technological and infrastructure developments that are required for this advancement.



















BRUSHLESS DC EXCITATION







T. Sivananda Reddy
Roll No: 04071A0244
EEE Dept.
VNR VJIET
e-mail ID: sivananda.thondapu@gmail.com
V. Manikanta
Roll No: 04071A0219
EEE Dept.
VNR VJIET
e-mail ID: Vutukuri_mani@yahoo.com



























ABSTRACT

This paper deals with brushless DC excitation achieved through the proper mechanical placement of the machine parts, with accurate angles between the rotor and the stator parts. Initially, the two-phase supply generated by the permanent magnet ac alternator is fed to the intermediator, which performs the conversion. The current produced by the emf generated in the intermediator is fed directly to the main alternator's field, which is on the rotor shaft; i.e., the commutation occurs on the rotor shaft, and this replaces the job of brushes. Hence there are no brush drops. The uniqueness of this machine is that it does not use power electronic devices, and therefore there is no voltage drop across such devices. Thus the losses are reduced, which makes the machine more efficient.






































CONTENTS

1. Introduction
2. Working
3. Advantages
4. Disadvantages
5. Conclusion





















1. INTRODUCTION

This machine can be divided into three parts:
1. Permanent magnet AC alternator
2. Intermediator
3. Main AC alternator
The permanent magnet AC alternator generates a two-phase supply (for ease of explanation it is shown as a two-phase alternator). The magnitude of the voltage generated by this permanent magnet alternator is low (around 100 V), since the magnetic field produced by the permanent magnet is weak.
This two-phase supply is fed to the intermediator, where the ac supply given to the stator is converted to dc on the rotor. The intermediator can be divided into two parts, (i) a Phase I part and (ii) a Phase II part; this division is made for ease of explanation.
The dc voltage generated on the rotor is fed to the main alternator field, which is on the rotating shaft. Hence there is no requirement of brushes or slip rings to feed the dc supply to the main alternator's field.


2. WORKING

2.1 Permanent Magnet AC alternator
The figure below shows the alternator parts and its windings.
AC Permanent Magnet Alternator (rendered image using Maya 8.0)
When the shaft rotates, alternating voltages are generated in the two windings with a phase difference of 90°. The corresponding voltage waveforms are as shown in the figure below.

This two-phase voltage is fed to the intermediator stator field winding. The Phase 1 voltage (represented in blue) is fed to the Phase I part of the intermediator. Similarly, the Phase 2 voltage (represented in yellow) is fed to the Phase II part of the intermediator.
2.2 The Intermediator
The intermediator consists of two separate parts, as shown in the figures below.

Intermediator Stator
(A 3D image of the intermediator stator showing the Phase I part and the Phase II part, rendered using Maya 8.0)

Intermediator Rotor
(A 3D image rendered using Maya 8.0)

The rotor of the intermediator has two slots holding two conductors; let us name these conductors A and B, as shown in the figure above. As the exciting voltages for the Phase I and Phase II field windings are the ac voltages from the initial alternator, with a phase difference of 90° between them, the corresponding fluxes produced are also alternating and have a phase difference of 90°.
According to the double-field-revolving theory, an alternating flux can be represented by two flux vectors rotating in the clockwise and counter-clockwise directions, each with a magnitude of Φm/2. These flux vectors rotate at synchronous speed. The rotor is also rotating at synchronous speed in the clockwise direction, hence all parts attached to it (i.e. the rotor conductors) also rotate at synchronous speed.
Let us denote the clockwise rotating flux as Φc and the counter-clockwise rotating flux as Φcc. As there is no relative speed between the clockwise rotating flux Φc and the rotor conductors, no emf is induced in the conductors due to Φc.
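A small numerical sketch can confirm this decomposition (this is our own illustration; the amplitude, the 50 Hz frequency and the variable names are assumptions, not values from the paper). It shows that the two counter-rotating vectors of magnitude Φm/2 add up to the pulsating flux, and that in a frame rotating with the rotor the clockwise component is stationary while the counter-clockwise component sweeps past at twice the synchronous speed.

import numpy as np

phi_m, w = 1.0, 2*np.pi*50                 # assumed flux amplitude and supply frequency
t = np.linspace(0.0, 0.04, 2001)           # two supply cycles

pulsating = phi_m*np.sin(w*t)                              # alternating flux on a fixed axis
forward   = (phi_m/2)*np.exp(1j*(w*t - np.pi/2))           # clockwise vector, magnitude phi_m/2
backward  = (phi_m/2)*np.exp(-1j*(w*t - np.pi/2))          # counter-clockwise vector, phi_m/2
print(np.allclose(pulsating, (forward + backward).real))   # True: the two vectors add up

# Viewed from a rotor turning clockwise at synchronous speed w:
forward_r  = forward*np.exp(-1j*w*t)       # stationary -> no relative speed, no emf
backward_r = backward*np.exp(-1j*w*t)      # rotates at 2w -> emf induced at twice supply frequency
speed = np.gradient(np.unwrap(np.angle(backward_r)), t)
print(np.allclose(speed, -2*w, rtol=1e-3)) # True: relative angular speed is 2w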

Different rotor positions and the corresponding flux vectors.
Due to the counter-clockwise rotating flux Φcc, however, an emf is induced in the rotor conductors, since a relative speed of twice the synchronous speed exists between Φcc and the rotor conductors. The emf induced in the rotor conductors is explained using the following figures; the emf is induced in the conductors only when the conductors are under the pole shoe. The figures show different rotor positions, each differing by an angle of 45°.
Case (i): When conductor A moves from (I) to (II), no emf is induced, as there is no air-gap flux linking conductor A.
Case (ii): When conductor A moves from (II) to (IV), an emf is induced in conductor A because it cuts Φcc at twice the synchronous speed. At position (III) the maximum emf is induced in conductor A.
Case (iii): When conductor A moves from (IV) to (VI), no emf is induced, as there is no air-gap flux linking conductor A.
Case (iv): When conductor A moves from (VI) to (VIII), an emf is again induced in conductor A because it cuts Φcc at twice the synchronous speed. At position (VII) the maximum emf is induced in conductor A, and its polarity is the same as in case (ii) because conductor A cuts Φcc in the same direction as in case (ii).
Hence the waveform of the emf induced in conductor A in the Phase I part is as shown below.

Similarly, the waveform of the emf induced in conductor A in the Phase II part is shown in the figure below.
The resultant emf induced in conductor A in the intermediator is therefore the sum of the emf induced in the Phase I part and that induced in the Phase II part. The waveform of the resultant emf induced in the intermediator is given below.
This dc voltage generated in the intermediator is fed to the field winding of the main alternator, which is mounted on the shaft.
2.3 Main Alternator
The main alternator is a general salient-pole alternator with a two-phase distributed winding. This machine generates the required power. A 3D model of the main alternator is shown in the figure below.
Main Alternator (3D image of the main alternator, rendered using Maya 8.0)
3. Advantages
As the name itself indicates, this is a brushless dc exciter, so no brushes are used: there are no brush drops, no brush maintenance, and no friction losses. Hence the machine is practically maintenance free.
Smooth variation of the field excitation can be obtained by placing a rheostat between the permanent magnet ac alternator and the intermediator stator windings. As the current in this circuit is low compared to the main field circuit, the heat dissipation in the rheostat is low. Hence it is efficient.

4. Disadvantages
The design of the alternator is complex. To achieve a high degree of efficiency, great accuracy must be maintained in the angles of the rotor and stator parts during the design.
In this kind of machine, more ripple is created in the output voltage of the intermediator.




5. Conclusion
The design of this alternator is advantageous mainly for alternators of high ratings, such as 250 MW. In this kind of machine the intermediator is very small compared to the main alternator, so the kW per kg of machine weight increases, which also helps in faster cooling of the machine. The disadvantage of higher ripple in the output voltage can be overcome by increasing the number of phases in the permanent magnet ac alternator, which in turn increases the number of phase parts in the intermediator, as illustrated in the sketch below.
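The sketch below is an idealized model (an assumption on our part, not a result from the paper) in which each phase part of the intermediator contributes a full-wave-rectified emf; summing the contributions shows how the ripple falls as the number of phase parts rises.

import numpy as np

theta = np.linspace(0.0, 2*np.pi, 100001)

def ripple_percent(n_phases):
    # Peak-to-peak ripple of the summed rectified emfs, relative to the mean value
    shifts = np.pi*np.arange(n_phases)/n_phases          # phase parts spaced by 180/n degrees
    e = np.abs(np.sin(theta[:, None] - shifts)).sum(axis=1)
    return 100.0*(e.max() - e.min())/e.mean()

for n in (2, 3, 4, 6):
    print(f"{n} phase parts -> ripple of roughly {ripple_percent(n):.1f} %")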

Implementation of Maximum Constant Boost Control of the
Z-Source Inverter for the Induction Motor Drive

Pavan Kumar. D and Lokesh kumar. P, B.tech ,Automation and control of drives
SIDDHARTH INSTITUTE OF ENGG & TECH, PUTTUR, Chittoor District, Andhra Pradesh


Abstract This paper presents an impedance-source (or
impedance-fed) power converter (abbreviated as Z-source
converter) and its control method for implementing dc-to-ac, ac-
to-dc, ac-to-ac, and dc-to-dc power conversion. The Z-source
converter employs a unique impedance network (or circuit) to
couple the converter main circuit to the power source, thus
providing unique features that cannot be obtained in the
traditional voltage-source (or voltage-fed) and current-source (or
current-fed) converters where a capacitor and inductor are used,
respectively. The Z-source converter overcomes the conceptual
and theoretical barriers and limitations of the traditional voltage-
source converter (abbreviated as V-source converter) and
current-source converter (Abbreviated as I-source converter)
and provides a novel power conversion concept. The Z-source
concept can be applied to all dc-to-ac, ac-to-dc, ac-to-ac, and dc-
to-dc power conversion.

To describe the operating principle and control, this
paper focuses on two maximum constant boost control methods
for the Z-source inverter, which can obtain maximum voltage
gain at any given modulation index without producing any low-
frequency ripple that is related to the output frequency. Thus the
Z-network requirement will be independent of the output
frequency and determined only by the switching frequency. The
relationship of voltage gain to modulation index is analyzed in
detail and verified by simulation and experiment.



I. INTRODUCTION

THERE EXIST two traditional converters: voltage-source
(or voltage-fed) and current-source (or current-fed) converters
(or inverters depending on power flow directions). Fig. 1
shows the traditional three-phase voltage-source converter
(abbreviated as V-source converter) structure. A dc voltage
source supported by a relatively large capacitor feeds the main
converter circuit, a three-phase bridge. The dc voltage source
can be a battery, fuel-cell stack, diode rectifier, and/or
capacitor. Six switches are used in the main circuit; each is
traditionally composed of a power transistor and an






antiparallel (or freewheeling) diode to provide bidirectional
current flow and unidirectional voltage blocking capability.
The V-source converter is widely used. It, however, has the
following conceptual and theoretical barriers and limitations.
The ac output voltage is limited below the dc-rail voltage and cannot exceed it; in other words, the dc-rail voltage has to be greater than the ac input voltage. Therefore,
the V-source inverter is a buck (step-down) inverter for dc-to-ac power conversion and the V-source converter is a boost (step-up) rectifier (or boost converter) for ac-to-dc power conversion. For applications where overdrive is desirable and the available dc voltage is limited, an additional dc-dc boost converter is needed to obtain a desired ac output. The additional power converter stage increases system cost and lowers efficiency.
The upper and lower devices of each phase leg cannot be gated on simultaneously, either by purpose or by EMI noise; otherwise, a shoot-through would occur and destroy the devices. The shoot-through problem caused by misgating-on due to electromagnetic interference (EMI) noise is a major killer of the converter's reliability. Dead time to block both upper and lower devices has to be provided in the V-source converter, which causes waveform distortion, etc.
An output LC filter is needed for providing a sinusoidal
voltage compared with the current-source inverter, which
causes additional power loss and control complexity.



Fig. 1. Traditional V-source converter.

Fig. 2 shows the traditional three-phase current-source
converter (abbreviated as I-source converter) structure. A dc
current source feeds the main converter circuit, a three-phase
bridge. The dc current source can be a relatively large dc
inductor fed by a voltage source such as a battery, fuel-cell
stack, diode rectifier, or thyristor converter. Six switches are
used in the main circuit; each is traditionally composed of a
semiconductor switching device with reverse block capability
such as a gate-turn-off thyristor (GTO) and SCR or a power
transistor with a series diode to provide unidirectional current
flow and bidirectional voltage blocking. However, the I-source
converter has the following conceptual and theoretical barriers
and limitations. The ac output voltage has to be greater than
the original dc voltage that feeds the dc inductor or the dc
voltage produced is always smaller than the ac input voltage.
Therefore, the I-source inverter is a boost inverter for dc-to-ac
power conversion and the I-source converter is a buck rectifier
(or buck converter) for ac-to-dc power conversion. For applications where a wide voltage range is desirable, an additional dc-dc buck (or boost) converter is needed. The additional power conversion stage increases system cost and lowers efficiency.
At least one of the upper devices and one of the lower devices have to be gated on and maintained on at any time; otherwise, an open circuit of the dc inductor would occur and destroy the devices. The open-circuit problem caused by misgating-off due to EMI noise is a major concern for the converter's reliability. Overlap time for safe current commutation is needed in the I-source converter, which also causes waveform distortion, etc.
The main switches of the I-source converter have to
block reverse voltage that requires a series diode to be used in
combination with high-speed and high-performance transistors
such as insulated gate bipolar transistors (IGBTs). This
prevents the direct use of low-cost and high-performance
IGBT modules and intelligent power modules (IPMs).



Fig. 2. Traditional I-source converter.

In addition, both the V-source converter and the I-source
converter have the following common problems.
They are either a boost or a buck converter and cannot be a buck-boost converter. That is, their obtainable output voltage range is limited to either greater or smaller than the input voltage.
Their main circuits cannot be interchangeable. In other
words, neither the V-source converter main circuit can be used
for the I-source converter, nor vice versa.
They are vulnerable to EMI noise in terms of reliability.
To overcome the above problems of the traditional V-
source and I-source converters, this paper presents an
impedance-source (or impedance-fed) power converter
(abbreviated as Z-source converter) and its control method for
implementing dc-to-ac, ac-to-dc, ac-to-ac, and dc-to-dc power
conversion. Fig. 3 shows the general Z-source converter
structure proposed.


II. Z-SOURCE CONVERTER

In a traditional voltage source inverter, the two switches of
the same phase leg can never be gated on at the same time
because doing so would cause a short circuit (shoot-through)
to occur that would destroy the inverter. In addition, the
maximum output voltage obtainable can never exceed the dc
bus voltage. These limitations can be overcome by the new Z-
source inverter, shown in Fig. 3, that uses impedance network
(Z-network) to replace the traditional dc link. The Z-source
inverter advantageously utilizes the shoot-through states to
boost the dc bus voltage by gating on both the upper and lower
switches of a phase leg. Therefore, the Z-source inverter can
buck and boost voltage to a desired output voltage that is
greater than the available dc bus voltage. In addition, the
reliability of the inverter is greatly improved because the
shoot-through can no longer destroy the circuit. Thus it
provides a low-cost, reliable, and highly efficient single-stage
structure for buck and boost power conversion.

Fig. 3. .Z-source inverter


In this paper, we will present two control methods to
achieve maximum voltage boost/gain
while maintaining a constant boost viewed from the Z-source
network and producing no low-frequency ripple associated
with the output frequency. This maximum constant boost
control can greatly reduce the L and C requirements of the Z-
network. The relationship of voltage boost and modulation
index, as well as the voltage stress on the devices, will be
investigated.




III.VOLTAGE BOOST, STRESS AND
CURRENT RIPPLE


The voltage gain of the Z-source inverter can be expressed as

    Vo / (Vdc/2) = M·B                                   (1)

where Vo is the output peak phase voltage, Vdc is the input dc voltage, M is the modulation index, and B is the boost factor. B is determined by

    B = 1 / (1 - 2·T0/T) >= 1                            (2)

where T0 is the shoot-through time interval over a switching cycle T, and

    D0 = T0 / T                                          (3)

is the shoot-through duty ratio.
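These relations can be checked with a short numerical sketch (the function and variable names below are ours, not from the paper; the operating point is chosen only for illustration).

def z_source_gain(M, D0, Vdc):
    """Boost factor and peak output phase voltage of a Z-source inverter.
    Implements B = 1/(1 - 2*D0) and Vo = M*B*Vdc/2; valid for D0 < 0.5."""
    B = 1.0 / (1.0 - 2.0 * D0)
    Vo_peak = M * B * Vdc / 2.0
    return B, Vo_peak

# Example: M = 0.8, 30 % shoot-through, 100 V dc input -> B = 2.5, Vo = 100 V peak
print(z_source_gain(M=0.8, D0=0.3, Vdc=100.0))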

A simple boost control method was used to control the shoot-through duty ratio. The Z-source inverter maintains the six active states unchanged, as in traditional carrier-based pulse width modulation (PWM) control. In this case, the shoot-through time per switching cycle is constant, which means the boost factor is a constant. Therefore, under this condition, the dc inductor current and capacitor voltage have no ripples that are associated with the output frequency. For this simple boost control, however, the obtainable shoot-through duty ratio decreases as M increases, and the resulting voltage stress across the devices is fairly high. The maximum boost control method, shown in Fig. 4, turns all of the traditional zero-voltage states into shoot-through states. Under this method, the shoot-through duty cycle D0 varies at six times the output frequency. The voltage boost is inversely related to the shoot-through duty ratio; therefore, the ripple in the shoot-through duty ratio results in ripple in the current through the inductor, as well as in the voltage across the capacitor. When the output frequency is low, the inductor current ripple becomes significant, and a large inductor is required.


Fig. 4 Maximum boost control sketch map

To calculate the current ripple through the inductor, the circuit
can be modeled as in Fig. 5, where L is the inductor in the Z-
source network, Vc is the voltage across the capacitor in the
Z-source network, and Vi is the voltage fed to the inverter.
Neglecting the switching frequency element, the average value
of Vi can be described as


We have




As can be seen from Eq. (4), D0 has maximum value when

and has minimum value when

If we suppose the voltage across the capacitor is constant, the
voltage ripple across the inductor can be approximated as a
sinusoid with peak-to-peak value of



If the output frequency is f, the current ripple through the
inductor will be



As can be seen from Eq. (7), when the output frequency
decreases, in order to maintain the current ripple in a certain
range, the inductor has to be large.



Fig. 5 Model of the circuit


IV. MAXIMUM CONSTANT BOOST CONTROL

In order to reduce the volume and cost, it is important
always to keep the shoot-through duty ratio constant. At the
same time, a greater voltage boost for any given modulation
index is desired to reduce the voltage stress across the
switches. Figure 6 shows the sketch map of the maximum
constant boost control method, which achieves the maximum
voltage gain while always keeping the shoot-through duty
ratio constant. There are five modulation curves in this control
method: three reference signals, Va, Vb, and Vc, and two
shoot-through envelope signals, Vp and Vn. When the carrier
triangle wave is greater than the upper shoot-through
envelope, Vp, or lower than the lower shoot-through envelope,
Vn, the inverter is turned to a shoot-through zero state. In
between, the inverter switches in the same way as in
traditional carrier-based PWM control.

Because the boost factor is determined by the shoot-through duty cycle, the shoot-through duty cycle must be kept the same in order to maintain a constant boost. The basic point is to obtain the maximum B while keeping it constant at all times. The upper and lower envelope curves are periodic, with a frequency of three times the output frequency, and there are two half-periods of both curves in a cycle.



Fig. 6 Sketch map of constant boost control


Fig. 7, Vac/0.5V0 versus M

For the first half-period, (0, π/3) in Fig. 6, the upper and lower envelope curves can be expressed by Eqs. (8) and (9), respectively.



For the second half-period, (π/3, 2π/3), the curves are given by Eqs. (10) and (11), respectively.






Obviously, the vertical distance between these two envelope curves is always constant, namely √3·M; therefore the shoot-through duty ratio is constant and can be expressed as

    D0 = T0/T = 1 - (√3/2)·M

The boost factor B and the voltage gain G can then be calculated as

    B = 1/(√3·M - 1)        G = M·B = M/(√3·M - 1)

The curve of voltage gain versus modulation index is shown in Fig. 7. As can be seen from the figure, the voltage gain approaches infinity when M decreases to √3/3.
This maximum constant boost control can be implemented using third harmonic injection. A sketch map of the third harmonic injection control method, with 1/6 of the third harmonic, is shown in Fig. 8. As can be seen from Fig. 8, Va reaches its peak value √3·M/2 while Vb is at its minimum value -√3·M/2. Therefore, a unique feature can be obtained: only two straight lines, Vp and Vn, are needed to control the shoot-through time when 1/6 (about 17%) of the third harmonic is injected.
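As a hedged illustration of this mechanism, the sketch below builds the 1/6 third-harmonic reference and the two straight-line envelopes at ±√3M/2 described above, compares them with a triangular carrier, and checks that the resulting shoot-through duty ratio equals 1 - √3M/2, the constant value derived in the previous section. The carrier ratio and sample counts are arbitrary choices of ours.

import numpy as np

M = 0.8                                    # modulation index (illustrative value)
Vp, Vn = np.sqrt(3)*M/2, -np.sqrt(3)*M/2   # upper/lower shoot-through envelopes

t = np.linspace(0.0, 1.0, 200001)          # one fundamental period (normalized)
theta = 2*np.pi*t
Va = M*(np.sin(theta) + np.sin(3*theta)/6)        # reference with 1/6 third harmonic
carrier = 4*np.abs((100*t) % 1.0 - 0.5) - 1.0     # triangular carrier in [-1, 1]

# Shoot-through whenever the carrier rises above Vp or falls below Vn
shoot_through = (carrier > Vp) | (carrier < Vn)

print(Va.max(), np.sqrt(3)*M/2)                   # peak of Va equals sqrt(3)*M/2
print(shoot_through.mean(), 1 - np.sqrt(3)*M/2)   # both are about 0.307 for M = 0.8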




Fig.8 Sketch map of constant boost control with third harmonic injection


The shoot-through duty ratio in this case can be calculated as

    D0 = 1 - (√3/2)·M

As we can see, it is identical to that of the previous maximum constant boost control method; therefore, the voltage gain can also be calculated by the same equation. The difference is that in this control method the range of M is increased to 2√3/3. The voltage gain versus M is shown in Fig. 9. The voltage gain can be varied smoothly from infinity to zero by increasing M from √3/3 to 2√3/3 with shoot-through states (solid curve in









Fig. 9) and then decreasing M to zero without shoot-through
states (dotted curve in Fig. 9).



Fig. 9. Vac/0.5V0 versus M


V. VOLTAGE STRESS COMPARISON

The voltage gain G is

    G = M·B = M/(√3·M - 1)

from which we have

    B = √3·G - 1

The voltage stress across the devices, Vs, can therefore be expressed as

    Vs = B·Vdc = (√3·G - 1)·Vdc
The voltage stresses across the devices with the different control methods are shown in Fig. 10. As can be seen from Fig. 10, the proposed method causes a slightly higher voltage stress across the devices than the maximum boost control method, but a much lower voltage stress than the simple boost control method. However, since the proposed method eliminates line-frequency-related ripple, the passive components in the Z-network can be smaller, which is advantageous in many applications.




Fig. 10 Voltage stress comparison of different control methods
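To make the comparison concrete, the short sketch below (our own illustration, not part of the original analysis) evaluates the maximum-constant-boost relations derived above, D0 = 1 - √3M/2, B = 1/(√3M - 1), G = MB and Vs = (√3G - 1)Vdc, for a few modulation indices.

import numpy as np

def max_constant_boost(M, Vdc):
    """Relations of Sections IV and V (valid for M > 1/sqrt(3))."""
    D0 = 1.0 - np.sqrt(3)*M/2.0        # constant shoot-through duty ratio
    B  = 1.0/(np.sqrt(3)*M - 1.0)      # boost factor
    G  = M*B                           # voltage gain
    Vs = (np.sqrt(3)*G - 1.0)*Vdc      # voltage stress across the devices
    return D0, B, G, Vs

for M in (0.7, 0.8, 0.9, 1.0):
    D0, B, G, Vs = max_constant_boost(M, Vdc=100.0)
    print(f"M={M:.1f}  D0={D0:.3f}  B={B:.2f}  G={G:.2f}  Vs={Vs:.0f} V")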














VI. SIMULATION RESULTS

To verify the validity of the control strategies, simulations were conducted. Fig. 11 shows the simulated block diagram. The simulation results for modulation indices M = 0.6, M = 0.8, and M = 1 are shown in Fig. 12, with an input dc voltage of 100 V in each case. The table below lists the theoretical voltage stress and the output line-to-line rms voltage based on the simulation results.

Operating condition       Voltage stress (V)    Output voltage VL-L (V)
M = 0.6, Vdc = 100 V      2549                  342.2
M = 0.8, Vdc = 100 V      259                   150.2
M = 1,   Vdc = 100 V      110.4                 72.19

Table. Theoretical voltage stress and output voltage under different conditions









Z-Source Inverter

Fig. 11. Simulated block diagram: a three-phase bridge of six IGBTs with its gate pulses, the Z-source network, and RL-load voltage and current measurement blocks.

Fig. 12. Output voltage waveforms for M = 0.8 and M = 1.





VII. CONCLUSION

Maximum constant boost control of the Z-source inverter has been presented, which achieves the maximum voltage boost without introducing any low-frequency ripple related to the output frequency. The relationship between the voltage gain and the modulation index was analyzed in detail. The proposed method minimizes the passive component requirements while maintaining low voltage stress. The control method has been verified by simulation and experiment.


REFERENCES

[1] F. Z. Peng and Miaosen Shen, Zhaoming Qian,
Maximum Boost Control of the Z-source Inverter, in
Proc. of IEEE PESC 2004.

[2] D.A. Grant and J. A. Houldsworth: PWM AC Motor
Drive Employing Ultrasonic Carrier. IEE Conf. PE-VSD,
London, 1984, pp. 234-240.

[3] Bimal K. Bose, Power Electronics and Variable
Frequency Drives, Upper Saddle River, NJ: Prentice-Hall
PTR, 2002.

[4] P. T. Krein, Elements of Power Electronics, London,
UK: Oxford Univ. Press, 1998.

[5] W. Leonard, Control of Electric Drives, New York:
Springer-Verlag, 1985


































IMPROVEMENTS TO VOLTAGE SAG RIDE-THROUGH
PERFORMANCE OF AC VARIABLE SPEED DRIVES

R.Vijayalakshmi and G.Madhunika
Department of Electrical & Electronics Engineering
Sri Kalahasteeswara Institute Of Technology
Srikalahasthi.

ABSTRACT
Voltage sags originating in ac supply systems can cause nuisance tripping of variable
speed drives (VSDs) resulting in production loss and restarting delays. In ac VSDs
having an uncontrolled rectifier front-end, the effects of voltage sags on the dc link
causing dc under-voltage or ac over current faults initiate the tripping. This paper
suggests modifications in the control algorithm in order to improve the sag ride-through
performance of ac VSDs. The proposed strategy recommends maintaining the dc link
voltage constant at the nominal value during a sag by utilizing two control modes, viz.
(a) by recovering the kinetic energy available in the rotating mass at high motor speeds
and (b) by recovering the magnetic field energy available in the motor winding
inductances at low speeds. By combining these two modes, the VSD can be configured to
have improved voltage sag ride-through performance at all speeds.

1. INTRODUCTION
Solid State AC Variable Speed Drives
have already become an integral part of
many process plants and their usage is
on the rise in industrial, commercial and
residential applications. It is projected
that, about 50-60% of the electrical
energy generated will be processed by
solid state power electronic devices by
the year 2010 compared to the present
day levels of 10-20%. However, VSDs
are vulnerable to voltage sags. A voltage sag is a momentary reduction of voltage and is usually characterised by its magnitude and duration, with typical magnitudes between 0.1 and 0.9 p.u. and durations ranging from 0.5 cycles to 1 minute. Voltage sags are
reported to be the most frequent cause of
disrupted operations of many industrial
processes. This paper concentrates on
AC VSDs with a three stage topology
(Fig. 1) viz., a diode bridge rectifier



front-end, a dc link capacitor and a
PWM inverter.


Figure1. AC VSD with a VSI
configuration
In VSDs having an uncontrolled rectifier
front-end, variation in the incoming ac
supply voltage is usually reflected in the
dc link behaviour. In the case of a
balanced three-phase sag, the dc link
voltage reduces, leading to an under-
voltage trip. Also when the ac supply
returns to normal conditions, the VSD
can trip due to the ac side over-current as
a result of high charging current of the
dc bus capacitor. In the case of an
unbalanced sag, the ripple in the dc bus
voltage increases and the VSD can trip
especially when the sag magnitude and
the load torque are very high. It is
reported that, a sag of magnitude more
than 20% (i.e. ac voltage falls below 0.8
p.u.) and duration more than 12 cycles is
found to trip VSDs. The impact of an unbalanced sag on VSD performance is less severe, and hence only the ride-through behaviour under a balanced three-phase sag is analysed here.

2. CONVENTIONAL STRATEGIES
2.1 Types of available strategies
Three types of voltage sag mitigation
techniques are reported in literature.
They are, (a) hardware modifications
(eg. increasing ac side inductors, dc bus
capacitance) (b) improvement in power
supply conditions (eg. use of alternative
power supplies such as a motor-
generator set, Uninterruptable Power
Supply) and (c) modifying control
algorithm. In this paper, strategies involving improvements in the control algorithm alone are considered, due to the following advantages: (a) no additional space is required and (b) since only software modifications are involved, the cost increase is relatively negligible.

2.2 Control algorithm based strategies
One control algorithm based technique
which ensures maximum torque
availability to the motor suggests
compensating the modulation index and
stator frequency corresponding to the
instantaneous dc link voltage during a
sag. However, the dc link characteristics
are not improved and the drive can still
trip when a sag occurs. Another strategy suggests maintaining the output of the VSD synchronised with the induction motor flux and operating the motor at zero slip during a sag, so that the machine can be restarted when the normal ac supply returns. Since only minimal power is drawn from the dc link, the rate of dc voltage reduction is low; however, the sag ride-through performance of the VSD then depends on the dc link voltage at the instant of ac supply recovery. Finally, another control strategy recommends maintaining the dc bus voltage at the required level by recovering the kinetic energy available in the rotating mass during a sag. With this type of control, the motor decelerates towards zero speed at a rate proportional to the amount of energy regenerated and the shaft load on the motor. But since the kinetic energy decreases in proportion to the square of the speed, the sag ride-through performance of the VSD under this strategy is highly speed dependent and works well only at high motor speeds (a rough comparison of the energies involved is sketched below). If the voltage sag persists even after the motor has come to a standstill, the capacitor voltage will start to reduce and the VSD will trip due to either an under-voltage or an over-current fault.
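For a rough feel of why this is so, the sketch below compares the recoverable kinetic energy (0.5·J·w^2) with the energy stored in the dc-link capacitor (0.5·C·V^2). The inertia and capacitance values are assumptions chosen only for illustration; the 587 V bus level is the set point quoted later in this paper.

# Rough comparison of the energy reserves available for dc-bus support.
C_dc, V_bus = 1.0e-3, 587.0        # dc-link capacitance [F] (assumed) and bus voltage [V]
J = 0.05                           # combined motor/load inertia [kg*m^2] (assumed)

E_cap = 0.5*C_dc*V_bus**2          # energy stored in the dc-link capacitor
for w in (120.0, 60.0, 20.0):      # rotor speeds [rad/s]
    E_kin = 0.5*J*w**2             # recoverable kinetic energy, proportional to speed squared
    print(f"w = {w:5.1f} rad/s   kinetic = {E_kin:6.1f} J   capacitor = {E_cap:.1f} J")
# At 120 rad/s the kinetic reserve dominates; near standstill it vanishes, so
# kinetic-energy recovery alone cannot support the dc bus at low speeds.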

3. EFFECT OF VOLTAGE SAG ON
AC VSDs
Here, the impact of a symmetrical three-
phase sag on the VSDs controlling a
synchronous reluctance motor (SRM)
and an induction motor (IM) will be
verified. Field orientation control (FOC)
is considered. The VSDs were modeled
in MATLABTM with details as
discussed below.

3.1 Mathematical modeling of AC
VSDs
In field-oriented control of ac motors, the three-phase motor currents are transformed into two orthogonal components in a synchronous frame of reference which moves with respect to the stator coordinates; these are defined as isq, the torque-producing component (quadrature-axis current), and isd, the flux-producing component (direct-axis current). The main difference in the control of IMs compared to SRMs lies in the orientation of the flux axis. In the case of an SRM, the synchronous frame of reference is the same as the rotor axis, which can be tracked by a rotor position sensor, whereas in the case of IMs more complicated computations are involved. The functional block diagrams of the SRM and IM VSDs under field orientation are shown in Figures 2 and 3 respectively. In both cases, the speed reference (wref) and the magnetising current reference (imRref) form the main control inputs. The operation of both VSDs is almost identical and the functions of the various control blocks are as follows:

The Torque / Current Conversion block
calculates the torque producing current
set point (isqset) from the set torque
reference (Tref). The Current Control
block calculates the stator voltage set
points in field coordinates (Vsdref and
Vsqref). The Co-ordinate Transformation
block transforms the selected voltage
references (Vsdref and Vsqref) from the
synchronous coordinates to the stator co-
ordinates. The Switching Vector
Selection block selects the appropriate
operating sequence for the inverter
switches, based on the voltage vector
position in the complex plane.
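As a sketch of the current transformation described above (the paper does not give its equations, so one common amplitude-invariant form is assumed here), the code below maps three phase currents into the isd and isq components in a synchronous frame at flux angle rho.

import numpy as np

def abc_to_dq(ia, ib, ic, rho):
    """Clarke transform to the stationary alpha-beta frame, then Park rotation
    into the synchronous (field) frame at angle rho; amplitude invariant."""
    i_alpha = (2.0/3.0)*(ia - 0.5*ib - 0.5*ic)
    i_beta  = (1.0/np.sqrt(3.0))*(ib - ic)
    isd =  i_alpha*np.cos(rho) + i_beta*np.sin(rho)
    isq = -i_alpha*np.sin(rho) + i_beta*np.cos(rho)
    return isd, isq

# Balanced 10 A currents aligned with the flux axis give isd ~ 10 A and isq ~ 0 A
rho = 0.3
ia, ib, ic = (10*np.cos(rho + k*2*np.pi/3) for k in (0, -1, 1))
print(abc_to_dq(ia, ib, ic, rho))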

3.2 Behaviour of VSDs on a sag
condition
Here, the impact of voltage sag on the
behaviour of both SRM and IM VSDs of
5.5 kW rating will be discussed. A
symmetrical three-phase voltage sag of
0.5 p.u. and duration 1 second was
applied when the motors were running at
a steady-state speed of 120 rad/s
operating with a load torque (TL) of 18
Nm (half the rated load). The inverter
switching frequency was kept at 5 kHz.
The observations are as follows:

3.2.1 Behaviour of SRM VSD during a
sag


The control system ensured that the SRM speed, torque and flux were unaffected during the sag. However, the dc bus characteristics, viz. the capacitor voltage (Vbus) and the capacitor charging current (Iin), are affected the most during the sag (Fig. 4); the double-ended arrow indicates the sag period. It is observed from these figures that initially there is no capacitor charging current (Iin), because the rectifier diodes are reverse biased and the capacitor discharges its stored energy to the motor. Once Vbus falls below the ac supply peaks, the capacitor is charged uniformly by all three phases during the sag. When the ac supply returns to normal, a very high current pulse is observed in Iin; its magnitude increases with the sag magnitude and is usually many times the current rating of the rectifier diodes. This high current pulse results in an overshoot of Vbus, which gradually returns to normal by discharging into the inverter load (Figure 4(a)).

3.2.2 Behaviour of IM VSD during a
sag

When subjected to the three-phase sag,
the behaviour of the IM VSD was found
to be identical to that of the SRM VSD.
The speed and torque performances
are not affected whereas the impact of
the sag is observed in the dc link
characteristics. The capacitor voltage
(Vbus) and the capacitor charging
current (Iin) are shown in Figures 5 (a)
and (b) respectively.

3.3 Reasons for VSD tripping during a
sag
From Figures 4 and 5 it can be observed that the dc bus voltage falls to a low level, depending on the magnitude of the sag and the load, which can cause the VSD to trip due to an under-voltage fault. When the sag condition is over, a very high capacitor recharging current (Iin) results and, in spite of being limited by the circuit impedances, it is usually several times the current handling capacity of the rectifier diodes. In such a case the VSD can trip due to over-current. In order to protect the VSD hardware, the under-voltage trip setting is typically kept between 70% and 85% of the nominal dc voltage. Similarly, the ac over-current trip is usually set in the range of 200% to 250% of the rated motor current. A simple sketch of these trip checks is given below.
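The sketch below encodes the two protection checks just described; the function name and the particular threshold values chosen inside the typical ranges are our assumptions.

def vsd_trips(v_bus, i_ac, v_bus_nominal, i_rated,
              uv_fraction=0.75, oc_multiple=2.0):
    """Return True if either trip condition quoted above is met."""
    under_voltage = v_bus < uv_fraction*v_bus_nominal    # typically 70-85 % of nominal dc
    over_current  = i_ac > oc_multiple*i_rated           # typically 200-250 % of rated current
    return under_voltage or over_current

print(vsd_trips(v_bus=430.0, i_ac=12.0, v_bus_nominal=587.0, i_rated=11.0))  # True (under-voltage)
print(vsd_trips(v_bus=580.0, i_ac=30.0, v_bus_nominal=587.0, i_rated=11.0))  # True (over-current)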

4. PROPOSED CONTROL STRATEGY

Since the nuisance tripping of the VSD during a voltage sag is triggered by the dc bus characteristics, the proposed strategy, which is an extension of the kinetic energy recovery strategy discussed in Section 2.2, recommends maintaining the dc link voltage at the nominal level by recovering both the kinetic energy and the magnetic field energy present in the ac motor, in order to improve the sag ride-through performance.

4.1 Energy levels present in an ac VSD
The typical levels of energy present in
an ac VSD controlling a 5.5 kW motor
are given below:

4.2 Control Sequence and Flow-
Charting
The flowchart illustrating the VSD
control during a voltage sag condition is
shown in Figure 6.


It can be observed that, there are three
distinct situations involved with respect
to the control of the VSD. They are
summarized as follows:

Control Situation 1 (CS1): (No Voltage
sag) VSD operation with normal speed
control.

Control Situation 2 (CS2): (Voltage
sag and motor speed > cut-off speed)
DC bus voltage control by recovering
load kinetic energy.

Control Situation 3 (CS3): (Voltage
sag and motor speed < cut-off speed)
DC bus voltage control by recovering
magnetic field energy.
In order to maintain the dc link voltage at the required level during the sag, additional control loops are necessary within the VSD control system: (a) kinetic energy recovery and (b) magnetic field energy recovery. A sketch of the resulting mode selection is given below.
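The mode selection of Figure 6 can be summarised in a few lines of logic; the sketch below uses assumed names and an assumed cut-off speed purely for illustration.

def select_control_situation(sag_detected, motor_speed, cutoff_speed):
    """Pick the control situation of the flowchart in Figure 6."""
    if not sag_detected:
        return "CS1"   # normal speed control
    if motor_speed > cutoff_speed:
        return "CS2"   # dc-bus control by kinetic energy recovery
    return "CS3"       # dc-bus control by magnetic field energy recovery

print(select_control_situation(False, 120.0, 30.0))   # CS1
print(select_control_situation(True, 120.0, 30.0))    # CS2
print(select_control_situation(True, 10.0, 30.0))     # CS3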

5. RESULTS
The above control strategies were implemented for a case where the VSDs were subjected to a 50% three-phase sag. The results are analysed in the following sections.

5.1 SRM VSD response


It can be seen that the bus voltage is maintained at a level close to the set reference, initially by recovering kinetic energy as long as the motor speed is above the cut-off speed and then by recovering the energy available in the winding inductances (by reducing isd). The motor speed is found to drop more rapidly during the regenerative operation and then to coast at a slower rate during the energy recovery from the motor windings, and hence the torque requirement increases. Once the supply returns to normal, the motor flux returns to its rated level and the motor starts to accelerate towards the set speed. The capacitor charging current (Iin) is found to be within acceptable limits on normal ac supply recovery.

5.2 IM VSD response
When controlled by the proposed strategy, the response of the IM VSD was found to be similar to that of the SRM VSD at high motor speeds (i.e. Control Situation 2). The motor was controlled in regeneration mode and the dc link voltage was maintained around the set level (587 V), while the motor speed reduced towards zero. However, below the cut-off speed, while trying to recover energy from the winding inductances, the operation of the control system was found to differ from that of the SRM.




Figure10. IM VSD response

6. CONCLUSIONS
The proposed strategy was found to
work satisfactorily in the case of the
SRM VSD. It was found that the dc link
characteristics have improved and the
VSD can ride through sags over a wide
speed range. However, in the case of an IM VSD, it is observed that this strategy can ride through a sag only at high motor speeds. It was verified that the magnetic
field energy gets dissipated across the
rotor in the case of an IM. Thus, an open
loop flux reduction at a rate higher than
the rotor time constant will improve the
sag ride-through performance of an IM
VSD at low motor speeds.

REFERENCES

[1] Puttgen, H.B., Rouaud, D., Wung, P.,
Recent Power Quality Related Small to
Intermediate ASD Market Trends, PQA
91, (First International Conference on
Power Quality: End-Use Applications and
Perspective), Oct 15-18, 1991, Paris, France.

[2] Sarmiento, H. G., Estrada, E., A
Voltage Sag Study in an Industry with
Adjustable Speed Drives, Proc. Industrial
and Commercial Power Systems Technical
Conference, Irvine, CA, USA, May 1994, pp
85-89.

[3] Collins Jr., E. R., Mansoor, A., Effects of Voltage Sags on AC Motor Drives, Proc. IEEE Annual Textile, Fiber and Film Industry Technical Conference, Greenville, SC, USA, May 1997, pp 1-7.

[4] Mansoor, A., Collins Jr., E. R., Morgan, R. L., Effects of Unsymmetrical Voltage Sags on Adjustable Speed Drives, Proc. 7th Annual Conference on Harmonics and Quality of Power, Las Vegas, NV, USA, October 1996, pp 467-472.

INFRARED PLASTIC SOLAR CELL
-A product of nanotechnology

Murali.G II ECE Mahesh babu.K , II ECE
Phone no:9949607193 mbk_141@Yahoo.co.in

GMRIT,Rajam
ABSTRACT
Nanotechnology is the nexus of
sciences. Nanotechnology is the engineering
of tiny machines - the projected ability to
build things from the bottom up using
techniques and tools being developed today
to make complete, highly advanced
products. It includes anything smaller than
100 nanometers with novel properties. As
the pool of available resources is being
exhausted, the demand for resources that
are everlasting and eco-friendly is
increasing day by day. One such form is solar energy, and its advent has gone a long way towards solving these problems. As such, solar energy is very useful, but the conventional solar cells that are used to harness it are relatively inefficient and cannot function properly on a cloudy day. The use of nanotechnology in solar cells creates an opportunity to overcome this problem and thereby increase the efficiency. This paper deals with this offshoot of the advancement of nanotechnology, its implementation in solar cells and its advantages over the conventional commercial solar cell.
WHAT IS NANOTECHNOLOGY?
The pursuit of nanotechnology
comprises a wide variety of disciplines:
chemistry, physics, mechanical engineering,
materials science, molecular biology, and
computer science.


In order to continue the miniaturization of integrated circuits well into the present century, it is likely that present-day nanoscale or nanoelectronic device designs will be replaced with new designs that take advantage of the quantum mechanical effects which dominate on the much smaller nanometer scale.
Nanotechnology is often referred to as a general purpose technology, because in its mature form it will have a significant impact on almost all industries and all areas of society. It offers better built, longer lasting, cleaner, safer and smarter products for the home, for communications, for medicine and for industry. These properties of nanotechnology have been made use of in solar cells. Solar energy is an abundant source that is renewable and pollution free. This form of energy has very wide applications, ranging from small household items such as calculators to larger things like two-wheelers, cars etc., all of which make use of a solar cell that converts the energy from the sun into the required form.
1. WORKING OF CONVENTIONAL
SOLAR CELL:
Conventional solar cells, i.e. photovoltaic (PV) cells, are made of special semiconductor materials such as silicon, which is currently the most commonly used. Basically, when light strikes the cell, a certain portion of it is absorbed within the semiconductor material. This means that the energy of the absorbed light is transferred to the semiconductor. The energy knocks electrons loose, allowing them to flow freely. PV cells also all have one or more electric fields that act to force the electrons freed by light absorption to flow in a certain direction. This flow of electrons is a current, and by placing metal contacts on the top and bottom of the PV cell, we can draw that current off for external use. For example, the current can power a calculator. This current, together with the cell's voltage (which is a result of its built-in electric field or fields), defines the power (or wattage) that the solar cell can produce.
Conventional semiconductor solar cells are made from polycrystalline silicon or, in the case of the highest-efficiency ones, from crystalline gallium arsenide.
With this type of solar cell, however, only about 35% of the sun's total energy falling on it can be usefully converted, and the cells do not perform well on cloudy days, which creates a problem. This major drawback led to the development of a new type of solar cell based on nanotechnology. The process involved is almost the same as explained earlier, but the basic difference lies in the wavelengths of sunlight that are absorbed.
Various developments in this field are explained below:

2. INFRARED PLASTIC SOLAR
CELL:
Scientists have invented a plastic solar cell that can turn the sun's power into electrical energy even on a cloudy day.
Plastic solar cells are not new, but existing materials are only able to harness the sun's visible light. While half of the sun's power lies in the visible spectrum, the other half lies in the infrared spectrum. The new material is the first plastic compound that is able to harness the infrared portion. Every warm body emits heat; this heat is emitted even by humans and animals, even when it is dark outside.
The plastic material uses nanotechnology and contains the first-generation solar cells that can harness the sun's invisible infrared rays. This breakthrough leads us to believe that plastic solar cells could one day become more efficient than current solar cells. The researchers combined specially designed nanoparticles called quantum dots with a polymer to make a plastic that can detect energy in the infrared.
With further advances, the new plastic solar cell could allow up to 30% of the sun's radiant energy to be harnessed, compared with only 6% in today's best plastic solar cells.
A large amount of the sun's energy could be harnessed through solar farms and used to power all our energy needs. This could potentially displace other sources of electricity production that emit greenhouse gases, such as coal.
The solar energy reaching the earth is 10,000 times what we consume. If we could cover 0.1% of the earth's surface with solar farms, we could replace all our energy habits with a source of power which is clean and renewable; a rough check of this claim is sketched below.
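The sketch below is only a back-of-envelope check of this claim; the averaged insolation, the present world demand and the 30% harvest fraction are assumptions chosen for illustration.

earth_surface_m2 = 5.1e14              # total surface area of the Earth
farm_area_m2 = 0.001*earth_surface_m2  # 0.1 % coverage, as suggested above
avg_insolation_w_m2 = 200.0            # day/night and weather averaged sunlight (assumed)
harvest_fraction = 0.30                # ~30 % capture projected for the new cells
world_demand_tw = 18.0                 # rough present primary power demand (assumed)

harvest_tw = farm_area_m2*avg_insolation_w_m2*harvest_fraction/1e12
print(f"harvest of roughly {harvest_tw:.0f} TW vs demand of roughly {world_demand_tw:.0f} TW")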
The first crude plastic solar cells have achieved only a fraction of the efficiency of today's standard commercial photovoltaics; the best solar cells, which are very expensive semiconductor laminates, convert at most 35% of the sun's energy into electricity.

2.1. WORKING OF PLASTIC
SOLAR CELL:
The solar cell created is actually a hybrid, comprised of tiny nanorods dispersed in an organic polymer, or plastic. A layer only 200 nanometers thick is sandwiched between electrodes and can at present produce about 0.7 volts. The electrode layers and nanorod/polymer layers can be applied in separate coats, making production fairly easy. And unlike today's semiconductor-based photovoltaic devices, plastic solar cells can be manufactured in solution in a beaker, without the need for clean rooms or vacuum chambers.
The technology takes advantage of recent advances in nanotechnology, specifically the production of nanocrystals and nanorods. These are chemically pure clusters of 100 to 100,000 atoms with dimensions of the order of a nanometer, or a billionth of a meter. Because of their small size, they exhibit unusual and interesting properties governed by quantum mechanics, such as the absorption of different colors of light depending upon their size. The nanorods were made to a reliable size out of cadmium selenide, a semiconducting material.
The nanorods are manufactured in a beaker containing cadmium selenide, aiming for rods of about 7 nanometers diameter to absorb as much sunlight as possible; the length of the nanorods is approximately 60 nanometers. The nanorods are then mixed with a plastic semiconductor called P3HT (poly-3-hexylthiophene), and a transparent electrode is coated with the mixture. The thickness, 200 nanometers, a thousandth the thickness of a human hair, is a factor of 10 less than the micron thickness of semiconductor solar cells. An aluminium coating acting as the back electrode completes the device. The nanorods act like wires. When they absorb light of a specific wavelength, they generate an electron plus an electron hole, a vacancy in the crystal that moves around just like an electron. The electron travels the length of the rod until it is collected by the aluminium electrode. The hole is transferred to the plastic, which is known as a hole carrier, and conveyed to the electrode, creating a current.

2.2 IMPROVEMENTS:
Some of the obvious improvements include better light collection and concentration, which are already employed in commercial solar cells. Significant improvements can also be made in the plastic/nanorod mix, ideally packing the nanorods closer together and perpendicular to the electrodes, using minimal polymer or even none, so that the nanorods transfer their electrons more directly to the electrode. In the first-generation solar cells, the nanorods are jumbled up in the polymer, leading to losses of current via electron-hole recombination and thus lower efficiency. The researchers also hope to tune the nanorods to absorb different colors to span the spectrum of sunlight. An eventual solar cell might have three layers, each made of nanorods that absorb at a different wavelength.
3. APPLICATIONS:
Silicon possesses some nanoscale properties. This is being exploited in the development of a super-thin disposable solar panel poster, which could offer rural dwellers a cheap alternative source of power. Most people living in remote areas are not linked to the national electricity grid and use batteries or run their own generator to supply their power needs. Disposable solar panels can be made in thin sheets, with about 6-10 sheets stacked together and made into a poster, and can help them to some extent in this regard. This poster could be mounted behind a window or attached to a cabinet.
Like paint, the compound can also be sprayed onto other materials and used as portable electricity.
Any chip coated in the material could power a cell phone or other wireless devices.
A hydrogen-powered car painted with the film could potentially convert energy into electricity to continually recharge the car's battery.
One day, solar farms consisting of plastic materials could be rolled across deserts to generate enough clean energy to supply the entire planet's power needs.

4. ADVANTAGES:
Plastic solar cells will be very useful in the coming years because of the large number of advantages they offer. Some of the major advantages are:
They are considered to be 30% more efficient than conventional solar cells.
They are more efficient and more practical in application.
Traditional solar cells are bulky panels, whereas this cell is very compact.
Conventional solar cells are only used for large applications with big budgets, but plastic solar cells are feasible even sewn into fabric, and thus have vast applications.
Flexible, roll-processed solar cells have the potential to turn the sun's power into a clean, green, consistent source of energy.

5. LIMITATIONS:
The biggest problem is cost effectiveness, but that could change with new materials: chemists have found a way to make cheap plastic solar cells flexible enough to paint onto any surface and potentially able to provide electricity for wearable electronics or other low-power devices.
They have a relatively short life span when continuously exposed to sunlight.
They could possibly require higher maintenance and constant monitoring.

6. CONCLUSION:
Plastic solar cells help in exploiting the infrared radiation in the sun's rays. They are more effective than conventional solar cells; the major advantage they enjoy is that they can work even on cloudy days, which is not possible with the former. They are also more compact and less bulky.
Though at present cost is a major drawback, it is bound to be solved in the near future as scientists are working in that direction.
As explained earlier, if solar farms can become a reality, they could possibly solve the planet's problem of depending too much on fossil fuels, without even a chance of polluting the environment.
7. REFERENCES:
1. Nanomaterials: Synthesis, Properties and
Applications: Edelstein, A. S., Cammarata,
R. C., Eds.; Institute of Physics Publishing:
Bristol and Philadelphia, 1996.
2. The Coming Era of Nanotechnology;
1987. Drexler, K. Eric, Doubleday; New
York
3. A gentle introduction to the next big idea-
Mark A. Ratner, Daniel Ratner.
4. Introduction to nanotechnology- Charles
P Poole, Frank J Owens
5. The clean power revolution- Troy
Helming
6. Solar energy-fundamentals, design,
modeling, applications- G.N. Tiwari
7. Thin film solar cells next generation
photovoltaics and its application- Y
Hamakawa
Lunar Solar Energy



P.Anil B. Durga Hari Kiran
3/4 E.E.E. 3/4 E.E.E.
Anil_palli2005@yahoo.com bdurgaharikiran@yahoo.co.uk


1. Abstract
Of all the renewable and non-polluting sources, solar power can become the primary source of commercial power, allowing everyone in the world to achieve the same high standard of living. Over the past 200 years the developed nations have vastly increased their per capita income compared to the other nations. In parallel, the developed nations increased their use of commercial thermal power to ~6.9 kWt/person. In fact, most people in the developing nations use much less commercial thermal power, and most have little or no access to electric power. By the year 2050, people will require at least 20,000 GWe of power, which corresponds to approximately 60,000 GWt of conventional thermal power generation. Such enormous thermal energy consumption will exhaust the economically recoverable deposits of coal, shale, oil, natural gas, uranium and thorium. As a result, conventional systems will become inadequate. Terrestrial renewable systems are always captive to global climate change induced by volcanoes, natural variation in regional climate, industrial haze and possibly even microclimates induced by large-area collectors. Over the 21st century, a global stand-alone system for renewable power would cost thousands of trillions of dollars to build and maintain; energy costs could consume most of the world's wealth. We need a power system that is independent of earth's biosphere and provides abundant energy at low cost. To do this, mankind must collect dependable solar power in space and reliably send it to receivers on earth. The MOON is the KEY.
2. Present and Future Power
Scenario
In 1975 Goeller and Weinberg published a fundamental paper on the relation of commercial power to economic prosperity. They estimated that an advanced economy could provide the full range of goods and services to its population with 6 kWt/person. As technology advances, the same goods and services could be provided by ~2 kWe/person of electric power. There will be approximately 10 billion people in 2050, and they must be supplied with ~6 kWt/person or ~2 kWe/person in order to achieve energy and economic prosperity. Present world capacity for commercial power must therefore increase by a factor of ~5 by 2050, to ~60 TWt or ~20 TWe (T = 10^12), and this output must be maintained indefinitely (the arithmetic is sketched below). Conventional power systems are too expensive for the developing nations: six kilowatts of thermal power now costs ~1,400 $/year per person, which is ~50% of the average per capita income within the developing nations. Other major factors include the limited availability of fossil and nuclear fuels (4,000,000 GWt-Y) and the relatively low economic output from thermal energy (~0.25 $/kWt-h). Humans must transition to solar energy during the first part of the 21st century to extend the newly emerging world prosperity. However, solar and wind power are intermittent and diffuse; their energy output is too expensive to collect, store, and dependably distribute.
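The power figures quoted above follow directly from the per-capita targets; the short sketch below simply reproduces that arithmetic.

people_2050 = 10e9                        # projected population
kwt_per_person, kwe_per_person = 6.0, 2.0

total_thermal_tw = people_2050*kwt_per_person*1e3/1e12    # 60 TWt
total_electric_tw = people_2050*kwe_per_person*1e3/1e12   # 20 TWe
print(total_thermal_tw, total_electric_tw)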
3. Lunar Solar Power Generation
Two general concepts have been proposed for delivering solar power to Earth from space. In one, Peter Glaser of Arthur D. Little, Inc. (Cambridge, MA) proposed in 1968 that a huge satellite in geosynchronous orbit around Earth could dependably gather solar power in space. In the second concept, figure (1), discussed here, solar power would be collected on the moon. In both ideas, many different beams of 12 cm wavelength microwaves would deliver power to receivers at sites located worldwide. Each receiver would supply commercial power to a given region. Such a receiver, called a rectenna, would consist of a large field of small rectifying antennas. A beam with a maximum intensity of less than 20% of noontime sunlight would deliver about 200 W to its local electric grid for every square meter of rectenna area. Unlike sunlight, microwaves pass through rain, clouds, dust, and smoke, and in both scenarios power can be supplied to the rectenna at night. Several thousand individual rectennas strategically located around the globe, with a total area of 100,000 km2, could continuously provide the 20 TW of electric power, or 2 kW per person, required for a prosperous world of 10 billion people in 2050 (the area calculation is sketched below). This surface area is 5% of the surface area that would be needed on Earth to generate 20 TW using the most advanced terrestrial solar-array technology of similar average capacity now envisioned. Rectennas are projected to cost approximately $0.004/kWe-h, which is less than one-tenth of the current cost of most commercial electric energy. This new electric power would be provided without any significant use of Earth's resources. Several types of solar power satellites have been proposed; they are projected, over 30 years, to deliver approximately 10,000 kWh of electric energy to Earth for each kilogram of mass in orbit around the planet.
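The rectenna sizing quoted above can be checked in a couple of lines, using only the 200 W/m2 and 20 TW figures given in the text.

required_power_w = 20e12          # 20 TW of electric power for 10 billion people
rectenna_w_per_m2 = 200.0         # delivered to the grid per square meter of rectenna
area_km2 = required_power_w/rectenna_w_per_m2/1e6
print(f"total rectenna area of roughly {area_km2:,.0f} km^2")   # ~100,000 km^2, as stated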
To sell electric energy at $0.01/kW-h,
less than $60 could be expended per
kilogram to buy the components of the
power satellites, ship them into space,
assemble and maintain them, decommission
the satellites, and finance all aspects of the
space operations. To achieve this margin,
launch and fabrication costs would have to
be lowered by a factor of 10,000. Power
prosperity would require a fleet of
approximately 6,000 huge solar-power
satellites. The fleet would have more than
330,000 km2 of solar arrays on orbit and a
mass exceeding 300 million tonnes. By
comparison, the satellite payloads and rocket
bodies now in Earth geosynchronous orbit
have a collective surface area of about 0.1
km2. The mass launch rate for a fleet of
power satellites would have to be 40,000
times that achieved during the Apollo era by
both the United States and the Soviet Union.
A many-decade development program
would be required before commercial
development could be considered.
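To make the cost margin explicit, the short sketch below (an added illustrative calculation, using the $0.01/kW-h price and the 10,000 kW-h/kg delivery figure quoted above) shows the gross revenue available per kilogram placed in orbit, against which the less-than-$60/kg budget must fit.

    # Gross revenue per kilogram of satellite mass, using the figures quoted above.
    energy_per_kg_kWh = 10_000     # ~10,000 kW-h delivered per kg in orbit over 30 years
    price_per_kWh = 0.01           # $0.01 per kW-h selling price

    revenue_per_kg = energy_per_kg_kWh * price_per_kWh
    print(f"Gross revenue per kg in orbit: ${revenue_per_kg:.0f}")
    # -> $100/kg, of which the text allows < $60/kg for hardware, launch,
    #    assembly, maintenance, decommissioning, and financing.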


4. Lunar Solar Collectors
Fortunately, in the Lunar Solar
Power (LSP) System, an appropriate,
natural satellite is available for
commercial development. The surface of
Earth's moon receives 13,000 TW of
absolutely predictable solar power. The
LSP System uses 10 to 20 pairs of bases
(one of each pair on the eastern edge and
the other on the western edge of the
moon, as seen from Earth) to collect on
the order of 1% of the solar power
reaching the lunar surface. The collected
sunlight is converted to many low-intensity
beams of microwaves and
directed to rectennas on Earth. Each
rectenna converts the microwave power to
electricity that is fed into the local electric
grid. The system could easily deliver the 20
TW or more of electric power required by
10 billion people. Adequate knowledge of
the moon and practical technologies have
been available since the late 1970s to collect
this power and beam it to Earth. Successful
Earth-moon power beams are already in use
by the Arecibo planetary radar, operating
from Puerto Rico. This radio telescope
periodically images the moon for mapping
and other scientific studies with a radar
beam whose intensity in Earth's atmosphere
is 10% of the maximum proposed for the
LSP System. Each lunar power base would
be augmented by fields of solar converters
located on the back side of the moon,
500 to 1,000 km beyond each visible
edge and connected to the earthward
power bases by electric transmission
lines.





The moon receives sunlight continuously
except during a full lunar eclipse, which
occurs approximately once a year and
lasts for less than three hours. Energy
stored on Earth as hydrogen, synthetic
gas, dammed water, and other forms
could be released during a short eclipse.
Each lunar power base consists of tens
of thousands of power plots (figure 2)
distributed over an elliptical area to form
a fully segmented, solar-powered
phased-array radar. Each demonstration
power plot consists of four major
subsystems. Solar cells collect sunlight,
and buried electrical wires carry the
solar energy as electric power to
microwave generators.
These devices convert the solar electricity
to microwaves of the correct phase and
amplitude and then send the microwaves to
screens that reflect microwave beams
toward Earth. Rectennas located on Earth
between 60° N and 60° S can receive power
directly from the moon approximately 8
hours a day. Power could be received
anywhere on Earth via a fleet of relay
satellites in high-inclination, eccentric orbits
around Earth (figure 1). A given relay
satellite receives a power beam from the
moon and retransmits multiple beams to
the several rectennas it serves. This enables the
region around each rectenna to receive
power 24 hours a day. The relay satellites
would require less than 1% of the surface
area needed by a fleet of solar-power
satellites in orbit around Earth.
Synthetic-aperture radars, such as those
flown on the Space Shuttle, have
demonstrated the feasibility of
multibeam transmission of pulsed power
directed to Earth from orbit. Relay
satellites may reflect the beam, or may
receive the beam, convert it in frequency
and phasing, and then transmit a new
beam to the rectenna. A retransmitter
satellite may generate several beams and
simultaneously service several rectennas.
The orbital reflector and retransmitter
satellites minimize the need on Earth for
long-distance power lines. Relay satellites
also minimize the area and mass of power-handling
equipment in orbit around Earth,
thereby reducing the hazards of orbital
debris to space vehicles and satellites.


5. Fabrication of Thin Film
Crystalline Silicon Solar Cells
The silicon-film process is proprietary,
and only a very general outline of it is
described here. The generic process consists
of ceramic formation, metallurgical-barrier
formation, polycrystalline layer
deposition, emitter diffusion and contact
fabrication. The conductive ceramic
substrate is fabricated from selected low-cost
materials. The metallurgical barrier
prevents substrate impurities from
entering and contaminating the active thin
silicon layer, and its randomly textured,
highly reflecting surface improves light
trapping. A suitably p-type doped 30-100
micron active layer is deposited from a
liquid solution. Phosphorus and aluminum
impurity gettering are used for bulk quality
improvement. Cells with large areas of
240, 300 and 700 cm2 have been developed,
and a cell with an area of 675 cm2 has
demonstrated record efficiencies of
11.6-17.7%.
Processing of lunar surface material yields
by-products such as silicon, iron and TiO2.
These products can be used as raw
materials for solar cell fabrication. A
compound called anorthite is used for
extracting the above-mentioned components
by carbothermal reduction of the anorthite.
Carbon compounds can also be used to
extract oxygen, Fe, and TiO2 from lunar
ilmenite. The iron is used for interconnects
and the TiO2 for anti-reflection coatings.


6. Microwave
For direct microwave wireless power
transmission to the surface of the earth, a
limited range of transmission frequencies is
suitable. Frequencies above 6 GHz are
subject to atmospheric attenuation and
absorption, while frequencies below 2 GHz
require excessively large apertures for
transmission and reception. Efficient
transmission requires that the beam have a
Gaussian power density. The transmission
efficiency η_b of a Gaussian beam is related
to the aperture sizes of the transmitting and
receiving antennas by η_b ≈ 1 - exp(-τ^2),
with τ = π·D_t·D_r / (4·λ·R), where D_t is the
transmitting array diameter, D_r is the
receiving array diameter, λ is the wavelength
of transmission and R is the range of
transmission. Frequencies other than
2.45 GHz, particularly 5.8 GHz and 35
GHz are being given greater attention as
candidates for microwave wireless
power transmission in studies and
experiments. The mass and size of
components and systems for the higher
frequencies are attractive. However, the
component efficiencies are less than for
2.45 GHz, and atmospheric attenuation,
particularly with rain, is greater.
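As an illustration of this aperture-range trade-off, the short Python sketch below evaluates the Gaussian-beam transmission efficiency for an assumed, purely hypothetical pair of aperture diameters on a 2.45 GHz Earth-moon link; the numbers are examples only, not design values from the paper.

    import math

    def beam_efficiency(d_tx_m, d_rx_m, freq_hz, range_m):
        # Gaussian-beam transmission efficiency: eta_b ~ 1 - exp(-tau^2),
        # with tau = pi * Dt * Dr / (4 * lambda * R).
        wavelength = 3.0e8 / freq_hz
        tau = math.pi * d_tx_m * d_rx_m / (4.0 * wavelength * range_m)
        return 1.0 - math.exp(-tau * tau)

    # Hypothetical example: 30 km transmitting aperture on the moon,
    # 4 km rectenna on Earth, 2.45 GHz (~12 cm), Earth-moon range ~384,000 km.
    eff = beam_efficiency(30_000.0, 4_000.0, 2.45e9, 3.84e8)
    print(f"Beam transmission efficiency: {eff:.2f}")   # -> about 0.98 for these apertures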
7. Cost Forecasting
To achieve low unit cost of
energy, the lunar portions of the LSP
System are made primarily of lunar
derived components. Factories, fixed and
mobile, are transported from the Earth to
the Moon. High output greatly reduces
the impact of high transportation costs
from the Earth to the Moon. On the
Moon the factories produce 100s to
1,000s of times their own mass in LSP
components. Construction and operation
of the rectennas on Earth constitutes
greater than 90% of the engineering
costs. Any handful of lunar dust and
rocks contains at least 20% silicon, 40%
oxygen, and 10% metals (iron,
aluminum, etc.). Lunar dust can be used
directly as thermal, electrical, and radiation
shields, converted into glass, fiberglass, and
ceramics, and processed chemically into its
elements. Solar cells, electric wiring, some
micro-circuitry components, and the
reflector screens can be made out of lunar
materials. Soil handling and glass
production are the primary industrial
operations. Selected micro circuitry can be
supplied from Earth. Use of the Moon as a
source of construction materials and as the
platform on which to gather solar energy
eliminates the need to build extremely large
platforms in space. LSP components can be
manufactured directly from the lunar
materials and then immediately placed on
site. This eliminates most of the packaging,
transport, and reassembly of components
delivered from Earth or the Moon to deep
space. There is no need for a large
manufacturing facility in deep space. The
LSP System is the only likely means to
provide 20 TWe of affordable electric power
to Earth by 2050. According to Criswell, the
1996 lunar solar power reference design for
20,000 GWe is shown in table (1). It is also
noted that the total mass investment for
electricity from lunar solar energy is less
than that for terrestrial solar energy systems:
Terrestrial thermal power system: 310,000 tonnes/GWe
Terrestrial photovoltaic: 430,000 tonnes/GWe
Lunar solar power: 52,000 tonnes/GWe
8. Merits of LSP
There are two main technical reasons for
preferring the LSP System. First, unlike
Earth, the moon is the ideal environment for
large-area solar converters: the solar flux to
the lunar surface is predictable and
dependable; there is no air or water to
degrade large-area thin-film devices; solar
collectors can be made that are unaffected
by decades of exposure to solar cosmic rays
and the solar wind; and sensitive circuitry
and wiring can be buried under a few tens
of centimeters of lunar soil, completely
protected against solar radiation and
temperature extremes. Secondly, virtually
all the LSP components can be made from
local lunar materials.
The high cost of transportation to and
from the moon is offset by sending
machines and small factories to the moon
that produce hundreds to several thousand
times their own mass in components and
supplies. Lunar materials will also be used
to reduce the cost of transportation between
the Earth and the moon and to provide
supplies.
8.1 Additional Features of LSP
The design and demonstration of
robots to assemble the LSP components and
construct the power plots can be done in
parallel. The crystalline silicon solar cells
can be used in the design of robots, which
will further decrease the installation cost.
8.2 Economical Advantages of LSP and
Crystalline Silicon Solar Cell
Crystalline silicon solar cells almost
completely dominate world-wide solar cell
production. Excellent stability and
reliability, plus continuous development in
cell structure and processing, make it very
likely that crystalline silicon cells will
remain in this position for the next ten
years. Laboratory solar cells, processed by
means of sophisticated micro-electronic
techniques using high-quality FZ-Si
substrates, have approached energy
conversion efficiencies of 24%.

1) Solar converter
2) Microwave generator
3) Microwave reflector
4) Mobile factory
5) Assembly units
6) Habitat / Manufacturing units

9. Conclusion

The LUNAR SOLAR POWER
(LSP) system will establish a permanent
two-planet economy between the earth
and the moon. The LSP System is a
reasonable alternative to supply earth's
needs for commercial energy without the
undesirable characteristics of current
options. The system can be built on the
moon from lunar materials and operated
on the moon and on Earth using existing
technologies. More-advanced production
and operating technologies will
significantly reduce up-front and
production costs. The energy beamed to
Earth is clean, safe, and reliable, and its
source, the sun, is virtually inexhaustible.


10. References
[1] Alex Ignatiev, Alexandre Freundlich,
and Charles Horton., "Electric Power
Development on the Moon from In-Situ
Lunar Resources", Texas Center for
Superconductivity and Advanced Materials
University of Houston, Houston, TX 77204
USA.
[2] Criswell, D. R. and Waldron, R. D.,
"Results of analysis of a lunar-based power
system to supply Earth with 20,000 GW of
electric power", SPS 91 Power from Space,
Paris/Gif-sur-Yvette, 27 to 30 August 1991,
pp. 186-193.
[3] Criswell, D. R., "Lunar solar power:
utilization of lunar materials and economic
development of the moon".
[4] Criswell, D. R., "Solar Power via the
Moon".
[5] Kulcinski, G. L., "Lunar Solar Power
System", Lecture 35, April 26, 2004.
[6] Kulcinski, G. L., "Lunar Solar Power
System", Lecture 41, April 30, 2004.



APPLICATION OF AGENT BASED SYSTEM USING
MEMS TECHNOLOGY


AIRBAG CONTROL SYSTEM USING
MEMS ACCELEROMETERS


SONA COLLEGE OF TECHNOLOGY

SUBMITTED BY

UMMIDI SAI PALLAVI V.PRIYADHARSHINI
FINAL YR,B.E(EEE) FINAL YR,B.E(EEE)
pallavisai@gmail.com priyadharshini@gmail.com




ABSTRACT:
The airbag control system using MEMS
accelerometers is an advancement in the
automotive industry. Micro Electro
Mechanical Systems (MEMS) integrate
sensors, electronics and actuators on a
common platform using micro-fabrication
technology. We present this paper with the
motive of increasing public awareness of
driving safety in India.
This paper presents a detailed view of
how MEMS accelerometer technology is
applied to the detection of a car collision
and the inflation of the airbags to cushion
the passenger from the impact. The airbag
control system responds within 0.025 s
after a collision; this instant response is
mainly contributed by the airbag sensing
system, which uses an ADXL capacitive
dual-axis accelerometer chip.
This mechanism can be extended to an
Accident Spot Detection System (ASDS)
for detecting highway accidents from a
central control room immediately, since it
is difficult for developing countries like
India to work with global positioning
satellites (GPS). The literature survey
presented here highlights the effectiveness
of airbags and the increase in demand for
MEMS shipments. We have further
presented a brief report on the initiatives
taken by the Government of India in the
field of MEMS. The increase in reliability
and reduction in price of the airbag system
have helped to bring about its near-universal
inclusion in cars and light trucks.



CONTENTS

INTRODUCTION

AIR BAG CONTROL SYSTEM USING MEMS ACCELEROMETERS

CAPACITIVE ACCELEROMETER

o PRINCIPLE OF CAPACITIVE ACCELEROMETER

o CONSTRUCTION OF CAPACITIVE ACCELEROMETER

o ADXL202/210 (Accelerometer IC Chip)

MERITS

BRIEF LITERARY SURVEY

INDIAN INITIATIVES

CONCLUSION







INTRODUCTION:

MEMS: Next Silicon Revolution?
After Information Technology,
MEMS may be the next technological
forte of our country. We are a late
starter, but still not out of the race. A base
technology (semiconductor processing)
and a starting level of R&D awareness
already exist in our country.
MEMS is an acronym for Micro
Electro Mechanical System. MEMS
integrate sensors, electronics and
actuators on a common platform using
micro-fabrication technology. The
sensor collects information from the
environment and provides it to the circuit.
The electronic circuit processes this
information and gives control signals to
the actuators, which then manipulate the
environment for the desired purpose. In
this way, MEMS are destined to bring a
revolution in technology by making
systems more intelligent.
Since the 1960s, MEMS have found wide
applications in various fields such as
automobiles, optics, radio frequency and
the life sciences. MEMS are unavoidable
where compactness is a desired feature.
MEMS accelerometers find their
biggest use in automobiles, mainly in
airbag safety systems, to detect the
collision impact and inflate the airbags to
protect the passengers.
Why are MEMS accelerometers
necessary in an airbag control system?
The first and, to date, leading
mass-market application of MEMS
motion sensors is their use in the
automotive market. Vehicle
manufacturers were interested in a cheap,
accurate and reliable airbag release
trigger. Today, virtually all airbag
mechanisms rely on MEMS
accelerometers in their operation.
The increase in reliability and
reduction in price of the airbag system
helped bring about its near-universal
inclusion in cars.




AIR BAG CONTROL SYSTEM USING MEMS ACCELEROMETERS:

Before discussing the airbag
sensing system, let us review the principle
of airbag protection. An airbag is an
automotive safety restraint system which
is built into the steering wheel and
instrument panel of our cars.
Airbags are assemblies consisting
of the airbag itself (made of nylon),
inflator modules, sensor housing, electrical
connectors, the airbag retainer, and the cover
labeled with SRS. The driver's-side
bag is mounted in the center of the
steering wheel, while the passenger
airbag is mounted in the top of the dash on
the passenger side of the vehicle.
Airbags are made to inflate for
collisions that occur within a 60-degree arc
in front of the vehicle, with severities
comparable to striking a fixed wall at about
25 miles per hour. Upon a crash, sensors set
off an igniter in the center of the airbag
inflator. Sodium azide pellets in the inflator
ignite and release gases that consist
primarily of nitrogen. The gas then passes
through a filter, which removes ash and
any solid particles, into the bag, causing it
to inflate.
According to investigations, almost
the whole of a collision occurs within
0.125 second of the initial impact. The
airbag is designed to inflate in less than
0.04 second, or 40 milliseconds (ms). In a
collision, the airbag begins to fill within
0.03 second. By 0.06 second, the airbag
is fully inflated and cushions the
occupant from the impact. The airbag then
deflates (depowers) 0.12 second after
absorbing the forward force. The entire
event, from initial impact to full
deployment of the airbag system, takes
about 55 ms, about half the time it takes
to blink an eye [2].










Fig 1. Schematic of airbag deployment after collision [3]

The fast response of the airbag
protection system within such a short
time after a collision is mainly
attributable to the airbag sensing
system: the accelerometer, which is used
on a moving vehicle to measure
acceleration. When a crash happens, the
speed drops drastically from high speed
to zero within milliseconds, yielding a
huge deceleration of up to several
thousand times the gravitational
acceleration (g). Fast and precise
acceleration readings from a moving
vehicle are therefore very important for
evaluating the vehicle's motion.
Today, integrated circuits with a
built-in sensing unit are already applied as
accelerometers in airbag protection
systems. Many different accelerometers
can be used for the measurement of
acceleration, but capacitive types are the
most common commercial products in
the present automotive market.
MEMS accelerometers commonly control
side airbags. Because the fire decision
must be made quickly, there is no time to
wait for the propagation of the sensor's
signal through the car's chassis, so the
satellite sensor must be placed close to the
airbag it controls. In addition, because
there is virtually no crush zone between
the impact and the accelerometer, the
measurement range must be higher than
that of the center-console accelerometers.
As a result, many vehicles outfitted with
side airbags may add two to four more
MEMS accelerometers for this function.










FIGURE: Positions of ADXL Chip ( Satellite Sensor) & Airbag

Front-looking crash sensors placed just
behind the front bumper are being added
to some models to help determine the
severity of the frontal crash. The
acceleration signature of the front-
looking sensor is compared with that of
the center console accelerometer,
allowing the airbag module controller to
modulate the inflation rate of the airbag
to match the deceleration rate of the car.
Here, too, a high g-range and compact size
are important factors in this application.









CAPACITIVE ACCELEROMETERS:

PRINCIPLE OF CAPACITIVE
ACCELEROMETER:
The output of a parallel-plate
capacitor depends on the gap between its
movable and fixed plates. If vibration
alters the gap between the plates, the
capacitance also changes. This change
in capacitance becomes a measure of
acceleration. This is the principle of the
capacitive accelerometer.
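As a rough numerical illustration of this principle (an added sketch with hypothetical plate dimensions, not values from any datasheet), the parallel-plate relation C = eps0*A/d shows how a small change in gap translates into a measurable change in capacitance:

    # Parallel-plate capacitance C = eps0 * A / d, and its change when the gap moves.
    EPS0 = 8.854e-12          # permittivity of free space, F/m

    def plate_capacitance(area_m2, gap_m):
        return EPS0 * area_m2 / gap_m

    area = (200e-6) ** 2      # hypothetical 200 um x 200 um plate
    gap0 = 2e-6               # hypothetical 2 um rest gap
    dx = 0.1e-6               # 0.1 um displacement caused by acceleration

    c_rest = plate_capacitance(area, gap0)
    c_moved = plate_capacitance(area, gap0 - dx)
    print(f"Rest capacitance:   {c_rest * 1e15:.1f} fF")
    print(f"Under acceleration: {c_moved * 1e15:.1f} fF "
          f"(change of {(c_moved - c_rest) * 1e15:.2f} fF)")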


CONSTRUCTION OF CAPACITIVE
ACCELEROMETER
The basic structure of the
capacitive accelerometer sense element is
shown in Figure. The sense element
wing is a flat plate of nickel supported
above the substrate surface by two torsion
bars attached to a central pedestal. The
structure is asymmetrically shaped so that
one side is heavier than the other,
resulting in a center of mass that is offset
from the axis of the torsion bars. When
an acceleration produces a moment
around the torsion bar axis, the plate or


wing is free to rotate, constrained only by
the spring constant of the torsion bars.


Schematic of sensor element

On the substrate surface, beneath the
sense element wing, two conductive
capacitive plates are symmetrically
located on each side of the torsion bar
axis. The upper wing and the two lower
capacitor plates on the substrate form two
air gap variable capacitors with a
common connection thus creating a fully
active capacitance bridge. When the
wing rotates about the torsion bar axis,
the average distance between the wing
and one capacitor plate decreases,
increasing the capacitance for that plate,
while the distance to the other plate
increases, decreasing its capacitance.



ADXL202/210:

The ADXL202/ADXL210 are low-cost,
low-power, complete dual-axis
accelerometers with a measurement
range of either ±2 g or ±10 g on a single
monolithic IC. They contain a polysilicon
surface-micromachined sensor and signal-conditioning
circuitry to implement an
open-loop acceleration measurement
architecture. The ADXL202/ADXL210
can measure both dynamic acceleration
(e.g., vibration) and static acceleration
(e.g., gravity). The outputs are digital
signals whose duty cycles (ratio of
pulse width to period) are proportional to
the acceleration in each of the two sensitive
axes. These outputs may be measured
directly with a microprocessor counter,
requiring no A/D converter or glue logic.

The ADXL202/ADXL210 is available in
a hermetic 14-lead surface-mount
CERPAK, specified over the 0°C to
+70°C commercial or -40°C to +85°C
industrial temperature range.





PIN DESCRIPTION:







THEORY OF OPERATION

For each axis, an output circuit
converts the analog signal to a duty-cycle-modulated
(DCM) digital signal that can
be decoded with a counter/timer port on a
microprocessor.
The accelerometer measures static
acceleration forces such as gravity,
allowing it to be used as a tilt sensor. The
sensor is a surface-micromachined
polysilicon structure built on top of the
silicon wafer.
Polysilicon springs suspend the
structure over the surface of the wafer
and provide a resistance against
acceleration forces. Deflection of the
structure is measured using a differential
capacitor that consists of independent
fixed plates and central plates attached to
the moving mass. An acceleration will
deflect the beam and unbalance the
differential capacitor, resulting in an
output square wave whose amplitude is
proportional to acceleration. Phase-sensitive
demodulation techniques are
then used to rectify the signal and
determine the direction of the
acceleration. The output of the
demodulator drives a duty-cycle
modulator (DCM) stage through a
32 kilo-ohm resistor.
At this point a pin is available on
each channel to allow the user to set the
signal bandwidth of the device by adding
a capacitor. This filtering improves
measurement resolution and helps
prevent aliasing. After being low-pass
filtered, the analog signal is converted to
a duty-cycle-modulated signal by the
DCM stage. A single resistor sets the
period for a complete cycle (T2), which
can be set between 0.5 ms and 10 ms (see
Figure 12). A 0 g acceleration produces a
nominally 50% duty cycle.
The acceleration signal can be
determined by measuring the lengths of
the T1 and T2 pulses with a counter/timer
or with a polling loop using a low-cost
microcontroller. An analog output voltage
can be obtained either by buffering the
signal from the XFILT and YFILT pins, or
by passing the duty-cycle signal through
an RC filter to reconstruct the dc value.
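A minimal sketch of this duty-cycle decoding follows; it assumes a nominal ADXL202-style scale factor of 12.5% duty-cycle change per g (an assumed nominal figure; a real design would use the calibrated value), with T1 and T2 measured by a counter/timer.

    # Decode acceleration from the DCM output: duty cycle = T1 / T2,
    # where 0 g corresponds to a nominal 50% duty cycle.
    DUTY_PER_G = 0.125        # assumed nominal scale: 12.5% duty-cycle change per g

    def acceleration_g(t1_us, t2_us, zero_g_duty=0.5, scale=DUTY_PER_G):
        duty = t1_us / t2_us
        return (duty - zero_g_duty) / scale

    # Example: T2 = 1000 us period, T1 = 562.5 us high time -> 56.25% duty cycle.
    print(f"{acceleration_g(562.5, 1000.0):+.2f} g")   # -> +0.50 g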





The ADXL202/ADXL210 will
operate with supply voltages as low as
3.0 V or as high as 5.25 V.


MERITS:
MEMS Accelerometers technology has
created a host of sensors that sport
attractive features and characteristics for
air bag controlling. Those include:
Small size: MEMS sensors are
fabricated using integrated circuit
technology, generating sensors sized at a
few hundred microns.
Reliability: The mechanical properties
of silicon, the base material for
MEMS sensors, are excellent. As a result,
MEMS sensors work without
performance degradation over very long
periods of time, leading to low cost
of ownership and operation.
Ruggedness: MEMS sensors can
operate in a wide range of environments.
Cost: Improvements in manufacturing
technologies and constantly increasing
production volumes have resulted in the
availability of sensors covering a range of
dynamic parameters at attractive
price/performance ratios.

FUTURE APPLICATION :
This control system can be extended to
detect accidents on highways from a
central control room, in what is called an
Accident Spot Detection System (ASDS).
The trigger switch of the airbag control
system is connected to a transmitter
which transmits a signal to a modified
receiver in the milestones that already
exist on highways. This received signal is
then relayed to the control room, so that a
rescue team can reach the accident spot
immediately.

BRIEF LITERARY SURVEY

CENSUS FOR AIR BAG EFFECTIVENESS (FRONT CRASHES) (1998)

Car drivers: 61 percent fatality reduction
Car passenger: 62 percent fatality reduction
Light truck drivers: 66 percent fatality reduction



WORLD WIDE SHIPMENT OF MEMS PRODUCTS ($US Million)

Application Sector        2000     2004
IT peripheral             8,700   13,400
Automotive                1,260    2,350
Industrial Automation     1,190    1,850

Figure 1: Cost vs Performance Graph.


INDIAN INITIATIVES:

With its outstanding manpower, a
few semiconductor foundries, many
research labs and low production costs,
India is uniquely positioned to excel in
the field of MEMS. After Information
Technology, MEMS may be the next
technological forte of the country.
The government has initiated a
national level programme called National
Programme on Smart Materials(NPSM)
to put coordinated efforts in this
direction. Semiconductor Complex
Limited (SCL), Chandigarh, is a
government of India Enterprise involved
in IC designing & fabrication. It has been
selected as the national MEMS foundry
by NPSM.

LOGICAL CONCLUSIONS &
PERSPECTIVES:

With the increase in public
awareness of driving safety in India,
airbag protection systems with advanced
accelerometers can be implemented to
enhance safe driving. The progress of
automotive and electronics technologies
gives the microelectronic accelerometer
much greater accuracy, faster response,
more functions and better stability. We
hope that this airbag safety system will be
successfully implemented in India to a
great extent.
MEMS have a tremendous future in
replacing the components of many
commercial products used today.
Furthermore, this enabling technology
promises not only to transform major
industries but to create entirely new
categories of products. An almost infinite
number of radical applications are possible
because of its potential: nearly limitless
functionality, an infinitesimally small form
factor, a phenomenal price/performance
ratio, and an architecture that lends itself to
mass production, all of which are
advantages that drove the success of the
integrated circuit. MEMS will be an
indispensable factor for advancing
technology in the 21st century, in India as
well as the world.

BIBLIOGRAPHY:

1. Frost & Sullivan Inc., "New technologies
cause automotive sensor markets to thrive",
May 3, 1999.

2. Electronics For You (Sept. 2003), "How
MEMS makes it all possible".

3. J. C. Lotters et al., "Sensitive differential
capacitance to voltage converter for sensor
applications", IEEE Transactions on
Instrumentation & Measurement, vol. 48,
no. 1, pp. 89-96, 1999.

4. National Highway Traffic Safety
Administration: Safety Fact Sheet,
November 1999.

5. The New Indian Express (Education
Express), 31st Dec. 2004.

6. Datasheet of ADXL202/210, Analog
Devices.

7. R. Puers and S. Reyntjens,
"Characterization of a miniature silicon
micromachined capacitive accelerometer",
J. Micromechanics & Microengineering,
vol. 8, no. 2, pp. 127-133, June 1998.




























MEMS for smart systems An emerging technology

Lakshmi srinivas pendyala
3\4 BE
GITAM VISAKHAPATNAM
Pendyala.sreenivas@gmail.com

Konidala sravani
3 \4 BE
GITAM VISAKHAPATNAM
konidala87@gmail.com



ABSTRACT: Micro-Electro-Mechanical
Systems (MEMS) are finding more and
more applications in microwave systems,
as the products utilize robust processes
from the semiconductor industry to make
a wide variety of electronic devices
smaller, more reliable and cheaper to
manufacture. In simple terms, MEMS is
the creation of mechanical structures
with semiconductor technology. A brief
introduction to smart systems and smart
materials is given, and two applications
of MEMS, one in surgery and the other
in microphones, are introduced.

Introduction to smart sensors
Smart systems trace their origin to a
field of research that envisioned devices
and materials that could mimic nature.


The essential idea is to produce non-
biological systems that will achieve the
optimum functionality observed in
biological systems through emulation of
their adaptive capabilities and integrated
design.


By definition, smart materials and smart
structures and by extension smart
systems consist of systems with
sensors and actuators that are either
embedded in or attached to the system to
form an integral part of it.



The system and its related components
form an entity that will act and react in a
predicted manner, and ultimately behave
in a pattern that emulates a biological
function. The human body is the ideal or
ultimate smart system. One of the first
attempts to use the smart materials
technology involved materials
constructed to do the work of
electromechanical devices (eg. MEMS).
Since then, many types of sensors and
actuators have been developed to
measure or excite a system. This
technology is still in its infancy and the
scientific community is just beginning to
scratch the surface of its potential. With
a bit of imagination one can see
enormous benefits to society.

Smart materials
Smart or intelligent materials are
materials that have the intrinsic and
extrinsic capabilities, first, to respond to
stimuli and environmental changes and,
second, to activate their functions
according to these changes. The stimuli
could originate internally or externally.
Since its beginnings, materials science
has undergone a distinct evolution: from
the use of inert structural materials to
materials built for a particular function,
to active or adaptive materials, and
finally to smart materials with more
acute recognition, discrimination and
reaction capabilities. To encompass this
last transformation, new materials and
alloys have to satisfy a number of
fundamental specifications.

New Material Requirements
To achieve a specific objective for a
particular function or application, a new
material or alloy has to satisfy specific
qualifications related to the following
properties:
Technical properties, including
mechanical characteristics such
as plastic flow, fatigue and yield
strength; and behavioral
characteristics such as damage
tolerance and electrical, heat and
fire resistance;
Technological properties,
encompassing manufacturing,
forming, welding abilities,
thermal processing, waste level,
workability, automation and
repair capacities;
Economic criteria, related to raw
material and production costs,
supply expenses and availability;
Environmental characteristics,
including features such as
toxicity and pollution;
Sustainable development criteria,
implying reuse and recycling
capacities.

If the functions of sensing and actuation
are added to the list, then the new
material/alloy is considered a smart
material.




Introduction to MEMS:
The term microelectro-mechanical
systems (MEMS) refers to a collection
of microsensors and actuators which
can sense their environment and have
the ability to react to changes in that
environment with the use of
microcircuit control. Such systems
include, in addition to the conventional
microelectronics packaging, antenna
structures for command signals
integrated into microelectromechanical
structures to perform desired sensing and
actuating functions. A system may also
need micropower supply, microrelay,
and microsignal-processing units.
Microcomponents make the system
faster, more reliable, cheaper, and
capable of incorporating more complex
functions.

The combination of silicon-based
microelectronics and micromachining
technology allows the system to gather
and process information, decide on a
course of action, as well as control the
surrounding environment, which in turn
increases the affordability, functionality
and performance of products using the
system. Due to this increase in value,
MEMS are expected to drive the
development of "smart" products within
the automobile, scientific, consumer
goods, defense and medical industries.

How MEMS work:
The sensors gather information by
measuring mechanical, thermal,
biological, chemical, magnetic and
optical signals from the environment.
The microelectronic ICs act as the
decision-making piece of the system, by
processing the information given by the
sensors. Finally, the actuators help the
system respond by moving, pumping,
filtering or somehow controlling the
surrounding environment to achieve its
purpose.
Three key pieces are used in MEMS
development:
Deposition processes are processes that
result in the deposit of thin films on the
substrate. Deposition occurs due to
various chemical or physical reactions
and can have many forms. After the
films are deposited, they are locally
etched by either lithography or etching
processes.
Lithography is the transfer of a pattern to
a photosensitive material by selective
exposure to a radiation source.
Etching processes are used on the films
deposited (mentioned above) to form the
functional MEMS structure. Etching
either occurs when a liquid is added that
will dissolve the material (wet etching)
or by dry etching, when the substrate is
put into a reactor in order to break the
gas molecules into ions which react with
the material being etched.


MEMS Applications
MEMS have uses within the automobile,
scientific, consumer goods, defense and
medical industries. Some examples
include:
Pressure, temperature, chemical
and vibration sensors
Light reflectors
Switches
Accelerometers (for airbags,
pacemakers and games)
Microactuators for data storage
and read/write heads
All-optical switches
Applications of these devices are
emerging in a wide variety of industries,
such as automobiles (airbag
sensors/exploders), healthcare
(intravenous blood pressure monitors),
and consumer products (thermistors of
all kinds). Here we will discuss their
applications in the surgical field and in
microphones.



MEMS in the surgical field:
Much as how MEMS has transformed
the sensor industry in the last quarter of
the 20th century, surgery has also been
advancing. New technologies and
procedures have been focusing on
minimizing the invasiveness of surgical
procedures. To better understand how
MEMS devices can improve surgery, it
is necessary to look at this evolution in
surgery. Surgery is the treatment of
diseases or other ailments through
manual and instrumental means. A large
incision would be made in the patient
allowing the surgeon full access to the
surgical area. This is called open
surgery and is referred to as a first-
generation technique. The surgeon has a
full and direct view of the surgical area,
and is able to put his hands directly into
the patient. This enables the surgeon to
come into contact with organs and tissue
and manipulate them freely. This is the
traditional surgical technique and most
surgeons were trained in this manner.
While the large incision gives the
surgeon a wide range of motion to do
very fine controlled operations, it causes
a lot of trauma to the patient. In
September 1985, Muhe performed the
first laparoscopic gall bladder removal
surgery with a fiber optic scope, and the
second generation of surgical procedures
was born. This advanced technique is
commonly called minimally invasive
surgery,(MIS) . In most of these
surgical procedures, the majority of
trauma to the patient is caused by the
surgeons incisions to gain access to the
surgical site rather than the procedure
itself.

Intuitive Surgical da Vinci robotic system.

While MIS has many advantages to the
patient, such as reduced postoperative
pain, shorter hospital stays, quicker
recoveries, less scarring, and better
cosmetic results, there are a number of
new problems for the surgeon. The
surgeon's view is now restricted and
does not allow him to see the entire
surgical area with his eyes. While the
operation is being performed, he must
look at a video image on a monitor
rather than at his hands. This is not very
intuitive and disrupts the natural hand-eye
coordination we have all been
accustomed to since childhood. The
video image on the monitor is also only
2-D, and the resulting loss of binocular
vision eliminates the surgeon's depth
perception. While performing the
procedure, the surgeon does not have
direct control of his own field of view. A
surgical assistant holds and maneuvers
the endoscopic camera. The surgeon has
to develop his own language to
command the assistant to position the
scope appropriately, which often leads to
orientation errors and unstable camera
handling, especially during prolonged
procedures. Since the images from the
camera are magnified, small motions
such as the tremor in a surgical
assistant's hand or even their heartbeat
can cause the surgical team to
experience motion-induced nausea.

To combat these endoscopic problems,
some surgeons choose to manipulate the
endoscope themselves. This restricts
them to using only one hand for delicate
surgical procedures and makes the
procedures even more complicated.
Performing a minimally invasive
procedure has been likened to writing
your name holding the end of an
eighteen-inch pencil. The difficulties
encountered by the surgeon cause
degradation in surgical performance
compared to open surgery, which limits
surgeons to performing only simpler
surgical procedures. In an attempt to
address some of these shortcomings and
allow the surgeon more control during
operations, a third generation of surgical
procedures, robotic surgery, was
developed. Although these types of
procedures are commonly referred to as
robotic surgery, the operations
themselves are not completely
automated and are still carried out by a
surgeon.

Multi-degrees-of-freedom end effector.

For this reason, robotic surgery is also
referred to as computer-aided or
computer-assisted surgery. The
technology was originally developed for
telerobotic applications in the late 1980s
for the Defense Advanced Research
Projects Agency (DARPA) by
researchers at SRI International, Menlo
Park, CA. The vision was to allow
surgeons at remote command centers to
operate on injured soldiers in the
battlefield.

Current robotic surgery systems have a
number of benefits over conventional
MIS. The figure shows an Intuitive
Surgical da Vinci robotic system. In this
arrangement, the surgeon sits comfortably
at a computer console, and a three-armed
robot takes his place over the patient.
One arm holds an
endoscope while the other two hold a
variety of surgical instruments. The
surgical team can also look at a video
monitor to see what the surgeon is
seeing. The surgeon looks into a stereo
display and manipulates joystick
actuators located below the display. This
simulates the natural hand-eye alignment
he is used to in open surgery .Since
computers are used to control the robot
and are already in the operating room,
they can be used to give the surgeon
superhuman-like abilities. Accuracy is
improved by employing tremor
cancellation algorithms to filter the
surgeon's hand movements. A wide
variety of surgical instruments, or end
effectors, are available, including
graspers, cutters, cauterizers, staplers,
etc. These advances allow surgeons to
perform more complex procedures such
as reconstructive cardiac operations
like coronary bypass and mitral valve
repair that cannot be performed using
minimally invasive techniques. MEMS-
based technologies with their small size,
improved performance, and ability to
interface with existing robotic computer
systems will help fuel this growth rate.

MEMS Microphones
Recent refinements in MEMS processing
have resulted in the batch fabrication
of low cost, high performance,
miniaturized condenser microphones. In
certain applications, this device offers
specific advantages over traditional
ECMs (Electret Condenser Microphone).
Starting with a silicon wafer,
semiconductor materials are deposited
and removed to form a capacitor. Just as
with a conventional electret microphone,
it consists of a flexible diaphragm, a stiff
backplate and damping holes with an
electrical charge on the backplate. The
diaphragm is in close proximity to the
backplate, forming a capacitor. When
sound pressures impinge on the
diaphragm, it moves, changing the
capacitance between the plates. This
variance is measured and outputted
as an electrical signal.
Apart from being made of silicon, the
largest difference between an ECM and
a
silicon microphone involves how the
charge is maintained on the backplate.
With ECMs, the charge on the backplate
(typically 200 V - 300 V) is implanted at
the manufacturer. If for any reason the
charge is reduced or removed, the
dynamic response of the microphone
quickly degrades. More often than not,
this is caused by excessive heat. This is
why ECMs are not specified over 85°C
and cannot be soldered to a printed
circuit
board through automated surface mount
processes. A silicon microphone does
not
have a charge when it leaves the factory.
A charge of 12 V is pumped onto the
backplate via a CMOS circuit. The chip
maintains this charge whenever the
microphone is activated.


MEMS microphone: 3-D coupled-field analysis

The Packaging Challenge
The design requirements for the silicon
microphone's package were challenging:
to achieve small size, low cost and high-volume
reproducibility, with the
flexibility to accommodate new designs.
Secondly, the package had to provide
protection from typical problems in
portable applications such as EMI
(Electro-Magnetic Interference). The
package developed meets all of these
requirements and exceeds several of
them. The microphone assembly (CMOS
and MEMS die) is encased in a housing
composed of readily available
composites, which has been plated
with metal to create a Faraday cage.
Filtering capacitors are mounted on the
substrate for EMI protection. This
approach also provides the design
flexibility to change the size of the
package as the MEMS microphone die
evolves, to integrate a waterproof cover,
to utilize custom CMOS chips with
filters, signal amplifiers or analog-to-digital
converters, or to change the size
or location of the acoustic port to alter
the microphone's performance.

The following figure shows the
unpackaged MEMS Microphone die.



The next figure shows the MEMS microphone
in a metal package with a PCB substrate.




Conclusion:
Applications of MEMS technology in
microwave components and
subsystems are growing very rapidly.
This technology is very attractive for
insertion in communications satellites,
where size, weight, and cost reductions
are essential. The implications of this
technology for these demanding systems
were discussed and
examples of candidate satellite MEMS
components were shown.
MEMS is both a beneficiary of and a
driver for improvements in computation.
Today, it is the affordability of
computation that is driving information
systems out into the physical world,
creating the demand for the physical
interfaces made possible by MEMS
technology. Tomorrow, it will be the
increasingly functional and affordable
interfaces made possible by MEMS that
will create opportunities for the
deployment of information systems in
places and applications that today are not
possible or useful. The evolution and
maturity of MEMS in the coming
years will be driven less by captive
fabrication facilities and process
development and more by innovative,
aggressive electromechanical systems
design. MEMS is poised to take
full advantage of advances in
information technology and
couple them to advances in other
disciplines to drive a fundamentally new
approach to electromechanical system
design and fabrication. By merging
sensing and actuation with computation,
MEMS will not only invest existing
systems with enhanced capabilities and
reliability but also will make possible
radically new devices and systems
designs that will exploit the
miniaturization, multiplicity, and
microelectronics of MEMS.


References:
1. A. Pisano, "MEMS Overview", DARPA
presentation, Summer 1997.

2. Buckley, S., "Automation Sensing
Mimics Organisms", Sensors, June 1985;
and Desilva, C. W., Control Sensors and
Actuators, Prentice-Hall, Inc.,
Englewood Cliffs, New Jersey, USA, 1989.

3. Actuators for Control, Precision
Machinery and Robotics, Editor
Hiroyasu Funakubo, Gordon and Breach
Science Publishers, New York, 1991.

4. L. E. Larson, "Microwave MEMS
Technology for Next-Generation
Wireless Communications",
1999 IEEE MTT-S IMS Digest,
Anaheim, CA.

5. Measures, R. M., Alavie, T., Maaskant,
R., Karr, S., Huang, S., Grant, L., Guha-Thakurta,
A., Tadros, G. and Rizkalla, S.,
"Fibre Optics Sensing for Bridges",
Proceedings, Conference on
Developments in Short and Medium Span
Bridge Engineering, Halifax, August
1994.

6. Akhras, G., "How Smart a Bridge Can
Be?", Presentation at the 1998
Canada-Taiwan Workshop on Medium
and Long-Span Bridges, Taipei,
9-11 March, 1998.

8. Dufault, F. and Akhras, G.,
"Applications of Smart Materials and
Structures in Medical Applications",
Proceedings, 3rd CanSmart Workshop
on Smart Materials and Structures,
Sep. 2000, pp. 149-159.

9. Akhras, G., Editor, Proceedings,
CanSmart Workshops on Smart
Materials and Structures,
St-Hubert, Quebec, Canada, Sep. 1998
and Sep. 1999.

10. W. Tang, "MEMS at DARPA",
DARPA presentation, 1999.






A
Paper presentation
on
Neural Networks Controlled PWM Inverters

Submitted
By



D.V.R.Phaneendra A.Avinash
III/IV B-Tech III/IV B-Tech
Adm: 04001A0201 Adm: 04001A0205
Email:phani_mani01@yahoo.com Email:avi_0525@yahoo.com




Neural Networks Controlled PWM Inverter

Abstract
A high-performance PWM inverter using a neural
network controller is presented in this paper.
Owing to the learning ability and robustness
of neural networks, the switching pattern of the
PWM inverter can be optimized under both
linear and nonlinear load conditions. Thus, a
high-quality, low-harmonics sinusoidal output
voltage is expected to be obtained. The proposed
PWM inverter provides feasible characteristics,
and the memory size needed for hardware
implementation is small, since the learned
information is distributed in parallel among the
neural network's weights. Simulation results for
the PWM inverter are presented to demonstrate
its excellent performance.

1. Introduction
PWM inverters are more and more widely employed
in uninterruptible power supplies (UPS), variable-speed
motor drives, and induction heating. The
carrier-modulated sine PWM and programmed PWM
techniques are usually used for inverter design.
The programmed PWM technique computes optimal
switching patterns and reduces selected
harmonics. In a carrier-modulated sine PWM inverter,
the switching angles are determined by the
comparison between a triangular waveform and a
desired sinusoidal waveform. The drawback of these
techniques is that an output voltage with high
harmonic content is obtained under non-linear load
conditions, and the transient response to a load
disturbance is usually slow.

The development of semiconductor power
devices and microcomputers has removed many
constraints on the practical implementation of modern
control techniques. Many fast-response control
techniques have been developed for non-linear loads
and load disturbances. However, these control
algorithms are not straightforward, and very complex
computation processes are involved.
Neural networks have parallel
processing, learning ability, robustness and
generalization capabilities, which are very suitable for
a PWM controller. A non-linear model can be
developed by a neural network such that the controller
is suitable for non-linear systems. Conventional
inverters have some ability to suppress harmonics,
but large harmonics still exist under non-linear load
conditions.
A novel control technique using a neural network
is proposed in this paper. The inverter has excellent
characteristics under both linear and non-linear load
conditions. The memory size needed for hardware
implementation is small. Fault tolerance is another
characteristic of neural networks, and the controller
also works well even under disturbed conditions.

Fig 1. Full Bridge Inverter Circuit







(Positive half cycle / negative half cycle)
A full-bridge PWM inverter circuit is shown in
Fig. 1, which can be considered as the plant of a
closed-loop control system. An L-C filter is used to
filter out the high-order harmonics in the inverter
output and to obtain a high-quality sinusoidal
waveform. The inverter output Vi may take three
voltage levels, -Vdc, 0, or +Vdc, as shown in
Fig. 2. The turn-on width ΔT within each sampling
interval T determines the output waveform of the
PWM inverter; the turn-on pulse is centered in the
interval T. One period of the reference sine wave is
equally divided into N intervals. The sinusoidal
reference output waveform and the corresponding
PWM switching pattern, for N = 20, are illustrated
in Fig. 3.
Fig. 2 Inverter Output states



Fig.3 Inverter Output pattern and reference
Voltage (N = 20)






2. Structure of Neural Networks

PWM Inverter Plant

Neural Network for lnverter
The first attempt is to use the back-propagation
algorithm because it is the simplest multilayer neural
network training method. The impact of this algorithm
on neural network research was undoubtedly enormous.
The basic neuron block diagram is shown in Fig. 4. The
activation function used in the input layer can be a
sigmoid or a linear function. The relation between the
input and the output is briefly described below:
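The original equations (1)-(5) are not reproduced here; as an added illustration (not the authors' code), the following Python sketch shows a standard sigmoid-neuron feed-forward pass for the 3-3-1 architecture described below, with arbitrary demonstration weights.

    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def neuron(inputs, weights, bias):
        # Weighted sum of the inputs followed by the sigmoid activation.
        return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

    def forward(x, hidden_w, hidden_b, out_w, out_b):
        # Feed-forward pass of a 3-3-1 network: 3 inputs, 3 hidden neurons, 1 output.
        hidden = [neuron(x, w, b) for w, b in zip(hidden_w, hidden_b)]
        return neuron(hidden, out_w, out_b)

    # Arbitrary demonstration weights; the inputs are (time t, V_k, V_k-1).
    hidden_w = [[0.2, -0.4, 0.1], [0.5, 0.3, -0.2], [-0.1, 0.6, 0.4]]
    hidden_b = [0.0, 0.1, -0.1]
    out_w, out_b = [0.7, -0.5, 0.3], 0.05

    print(forward([0.1, 0.8, 0.75], hidden_w, hidden_b, out_w, out_b))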








Fig. 4 Multilayer Neural Network for Inverter


Fig. 5 Block diagram of NNController for Inverter






The calculation of the neural network is described in
equations (1)-(5). The neural network's weights have
to be updated until the total output error is less than
the expected value.
The input of the neural network has a decisive
impact on the performance of the PWM controller, so
the selection of input data is very important. The time
t, the present voltage V_k, and the previous sampled
voltage V_k-1 are chosen as the inputs, and the output
is the turn-on time ΔT in each sampling interval. The
architecture is 3-3-1 (input-hidden-output layers), and
the guiding principle is to use as few hidden neurons
as possible in order to reduce the calculation time of
the neural network. The block diagram of the neural
network controller is illustrated in Fig. 5.
The back-propagation algorithm is used to train the
neural network. The training data are selected
randomly from the optimal output of the PWM. The
neural network is trained off-line, and the trained
weights of the neurons are then employed for on-line
control. The controller's operation and robustness
are governed by these weights.
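As an added, hedged sketch of this off-line training step (not the authors' implementation; the learning rate, epoch count and the single training pair shown are placeholders), a minimal back-propagation loop for the 3-3-1 network could look as follows.

    import math, random

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def train_331(samples, lr=0.1, epochs=5000):
        # Train a 3-3-1 sigmoid network by back-propagation.
        # samples: list of ((t, v_k, v_k_1), target_turn_on_time) pairs scaled to [0, 1].
        rnd = random.Random(0)
        w_h = [[rnd.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(3)]
        b_h = [0.0, 0.0, 0.0]
        w_o = [rnd.uniform(-0.5, 0.5) for _ in range(3)]
        b_o = 0.0
        for _ in range(epochs):
            for x, target in samples:
                # Forward pass.
                h = [sigmoid(sum(w * xi for w, xi in zip(w_h[j], x)) + b_h[j])
                     for j in range(3)]
                y = sigmoid(sum(w * hj for w, hj in zip(w_o, h)) + b_o)
                # Backward pass: gradients of the squared output error.
                d_y = (y - target) * y * (1.0 - y)
                d_h = [d_y * w_o[j] * h[j] * (1.0 - h[j]) for j in range(3)]
                # Weight and bias updates.
                for j in range(3):
                    w_o[j] -= lr * d_y * h[j]
                    for i in range(3):
                        w_h[j][i] -= lr * d_h[j] * x[i]
                    b_h[j] -= lr * d_h[j]
                b_o -= lr * d_y
        return w_h, b_h, w_o, b_o

    # Placeholder training pair: inputs (t, V_k, V_k-1) -> normalized turn-on time.
    weights = train_331([((0.1, 0.8, 0.75), 0.6)])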

3. Simulation
In order to verify the performance of the proposed
controller, the neural network PWM controller is
employed to develop a simulation model. The system
parameters are given in Table 1, and the parameters
of the neural network are listed in Table 2.



A. Linear load
Fig. 6(a) and (b) show the inverter output voltage for
the proposed scheme at load resistances of 5 ohm and
10 ohm. The simulated output waveforms, which are
similar to those obtained by a sine-PWM inverter, are
very close to sinusoidal. Table 3 gives the fundamental
amplitude and the percentage of THD (Total Harmonic
Distortion) under these test conditions.

Fig. 6. Simulated output voltage under linear load:
(a) R = 5 ohm, (b) R = 10 ohm.

Table 3. Percentage of THD under linear load.
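Since THD is the figure of merit used in Tables 3 and 4, a small added Python sketch (illustrative only, using an FFT rather than anything from the paper) shows how THD can be computed from a sampled output-voltage waveform.

    import numpy as np

    def thd_percent(samples, fundamental_hz, sample_rate_hz, n_harmonics=20):
        # THD = sqrt(sum of harmonic amplitudes squared) / fundamental amplitude.
        spectrum = np.abs(np.fft.rfft(samples))
        bin_width = sample_rate_hz / len(samples)
        fund_bin = int(round(fundamental_hz / bin_width))
        fundamental = spectrum[fund_bin]
        harmonics = [spectrum[k * fund_bin] for k in range(2, n_harmonics + 1)
                     if k * fund_bin < len(spectrum)]
        return 100.0 * np.sqrt(sum(h ** 2 for h in harmonics)) / fundamental

    # Example: a 50 Hz sine with a small 3rd-harmonic component added.
    fs, f0 = 10_000, 50
    t = np.arange(0, 1.0, 1.0 / fs)
    v = np.sin(2 * np.pi * f0 * t) + 0.01 * np.sin(2 * np.pi * 3 * f0 * t)
    print(f"THD = {thd_percent(v, f0, fs):.2f} %")   # ~1 %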




B. Non-linear load
Fig. 7 shows the output waveforms under non-linear
loads with phase control at angles of 45 and 90
degrees. Table 4 gives the fundamental amplitude and
the percentage of THD under the non-linear loads. The
results show that the voltage dip can be compensated
very quickly under non-linear load conditions. It is
very clear from Tables 3 and 4 that the percentage of
THD is below 1% and 5% under the linear and
non-linear loads, respectively.

Fig. 7. Simulated output voltage for non-linear load:
(a) phase angle = 45 degrees, (b) phase angle = 90 degrees.

Table 4. Percentage of THD under non-linear load.

4. Conclusion

A neural network controller is proposed for a single-phase
PWM inverter to produce a high-quality, low-distortion
sinusoidal voltage. The performance of the
proposed controller is confirmed by computer
simulations. The results show that
very low output-voltage THD, good voltage
regulation, and a fast transient response can be
achieved even under non-linear load conditions.
The major disadvantage of this method is that
complex calculations are involved in generating an
optimal switching pattern for the controller. Owing to
the rapid development of neural network VLSI chips,
this drawback will be overcome, and the design of
neural network controllers will become more and
more popular. It is predicted that neural networks
will be a design fashion in the future.





NEURAL NETWORK CONTROLLER
FOR
UPS INVERTER APPLICATIONS













Chandra Babu Naidu. R



















BY

Madhusudhan Reddy. B


E-mail:

Chandra0206@gmail.com
Abstract:

A UPS inverter is a device which supplies
power in the absence of the mains supply,
and it must be controlled by a controller
for efficient operation. A PI controller
with optimized parameters is first
considered for controlling the UPS. Since
it is suitable only for linear loads, and
since the Total Harmonic Distortion
(THD) it produces is higher, the
introduction of a new controller in place
of the PI controller is essential.
So, a Neural Network controller is
incorporated in place of the PI controller;
it gives low Total Harmonic Distortion
(THD) as well as a good sinusoidal output
voltage, and it is designed for both linear
and non-linear loading conditions.

INTRODUCTION :
Neural Networks (NN) have been
employed in many applications in recent
years. An NN is an interconnection of a
number of artificial neurons that simulates
a biological brain system; it has the
ability to approximate arbitrary function
mappings and can achieve a high degree
of fault tolerance.
An NN used in system control can be
trained either on-line or off-line. When it
is trained on-line, the weights and biases
of the NN are adaptively modified during
the control process, so it has better
adaptability to nonlinear operating
conditions. The most popular training
algorithm for a feed-forward NN is
back-propagation; it is attractive because
it is stable, robust and efficient.
When the NN is trained off-line, the
weights and biases are fixed during the
control process, but the NN is still a
nonlinear system that has much better
robustness than a linear system. Moreover,
the forward calculation of the NN involves
only addition, multiplication, and
sigmoidal-function wave shaping, which
can be implemented with simple, low-cost
analogue hardware. The fast response and
low-cost implementation of the off-line-trained
NN are suitable for UPS inverter
applications.
The inputs of the NN were time, present
output voltage, and last sampled output
voltage. The limited information might not
be enough to ensure a sinusoidal output
voltage under various loading conditions.
Example patterns are obtained
from a simulated controller, which has an
idealized load-current reference. After
training, the NN is used to control the
inverter on-line. Simulation results show
that the proposed NN controller can
achieve low THD under nonlinear loading
conditions.
Additionally, for comparison
purposes, a PI controller with optimized
parameters is built.

UNINTERRUPTIBLE POWER SUPPLY

Some equipment requires a better quality of power supply than conventional equipment. Examples are computers, process control systems, communication links, hospital equipment, etc. This equipment is called critical because it is very sensitive to the quality of the power supply for its operation, for its own protection, and for the continuity of a process or the transfer of information. Critical equipment cannot tolerate a power failure even for a moment; such a failure not only upsets operating programs but can also damage the equipment.

Uninterruptible Power Supply (UPS) systems are necessary for satisfactory operation of critical equipment because the commercial power supply cannot guarantee the required standard. UPS systems can meet almost all the requirements for normal operation of critical equipment under all normal and abnormal conditions of the commercial supply. This section explains the various types of UPS systems and voltage control techniques.
TYPES OF UPS SYSTEMS :
There are four types of UPS
systems such as
i. Off-Line Line Preferred Systems
ii. Off-Line Line Interactive System
iii. On-Line Inverter Preferred System
iv. Hybrid type UPS

UPS INVERTER :

DC-to-AC converters are known as inverters. The function of an inverter is to change a dc input voltage into a symmetrical ac output voltage of the desired magnitude and frequency. The output voltage can be fixed or variable, at a fixed or variable frequency. A variable output voltage can be obtained by varying the input voltage while keeping the gain of the inverter constant. If, on the other hand, the dc input voltage is fixed and not controllable, a variable output voltage can be obtained by varying the gain of the inverter, which is normally accomplished by pulse-width modulation (PWM) control within the inverter. The inverter gain may be defined as the ratio of the ac output voltage to the dc input voltage.
ARCHITECTURE OF UPS
INVERTER
A UPS inverter typically consists
of a dc power source, a full bridge (or half
bridge) PWM inverter and an LC filter, as
shown in fig.

Fig : UPS Inverter System
The full-bridge inverter, which is the core of the system, chops the DC input into a series of PWM pulses according to the modulation signal um. The function of the second-order LC filter is to remove the high-frequency components of the chopped output voltage ui. Rf represents the resistance of the filter inductor. The Effective Series Resistance (ESR) of the filter capacitor is ignored since it has only a small effect within the frequency range of concern. The DC power source is considered an ideal constant-voltage supply. The load shown in the figure can be of any type: resistive, inductive, capacitive, or nonlinear.

VOLTAGE CONTROL OF
INVERTERS
In many industrial applications, it is
often required to control the output voltage
of inverters
i. To cope with the variation of dc
input voltage
ii. For voltage regulation of inverters
iii. For the constant volts/frequency
control requirement.

There are various techniques to vary
the inverter gain. The most efficient
method of controlling the gain is to
incorporate pulse-width-modulation
(PWM) control within the inverters .
The commonly used techniques are
1. Single pulse-width modulation
2. Multiple pulse-width modulation
3. Sinusoidal pulse-width modulation.
Here we incorporate the sinusoidal
pulse-width modulation technique
which reduces the total harmonic
distortion.
LINEAR MODEL OF UPS
INVERTER
Because the switching frequency is usually several orders of magnitude higher than the fundamental frequency of the AC output, the dynamics of the PWM inverter can be ignored. Thus, the UPS inverter can be modeled as a simple proportional gain block. The figure shows a linear model of the UPS inverter system.

Fig : Linear model of UPS Inverter

Here the proportional gain of the inverter is K = Vdc/Vc, where Vdc is the voltage of the DC power source and Vc is the peak voltage of the triangular carrier wave.
CONTROLLERS

Without any controller, the UPS inverter produces an unsatisfactory, distorted sinusoidal waveform, so a controller is needed to obtain the desired sinusoidal waveform. Various controllers can be used to achieve the desired output. A PI controller gives a sinusoidal waveform for linear loads but produces distortion for nonlinear loads. A Neural Network controller gives a better sinusoidal waveform for both linear and nonlinear loading conditions.
PI CONTROLLER

The PI controller produces an output signal consisting of two terms: one proportional to the error signal and the other proportional to the integral of the error signal. The transfer function of the PI controller is Gc(s) = Kp(1 + 1/(Ti*s)), where Kp is the proportional gain and Ti is the integral time.

Fig Block diagram of PI control system.

Fig : UPS Inverter with PI controller
There is a feed-forward signal from the reference voltage, which is found to reduce the steady-state error and to provide high tracking accuracy to the reference. Here, the outer feedback loop compares the output voltage with the reference input, while the inner feedback loop is taken from the capacitor current. A discrete-time sketch of this structure is given below.
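As an illustration only, the following is a minimal discrete-time Python sketch of a PI voltage loop with a reference feed-forward term and an inner capacitor-current feedback. The sample time Ts and the gains Kp, Ti and Kc are hypothetical values chosen for the example; this is not the authors' implementation.

# Minimal sketch (not the paper's implementation): PI voltage control
# with reference feed-forward and an inner capacitor-current loop.
# Ts, Kp, Ti and Kc are placeholder values chosen only for illustration.
class PIWithFeedforward:
    def __init__(self, Kp=0.5, Ti=0.002, Kc=0.1, Ts=1e-4):
        self.Kp, self.Ti, self.Kc, self.Ts = Kp, Ti, Kc, Ts
        self.integral = 0.0  # accumulated integral of the voltage error

    def update(self, v_ref, v_out, i_cap):
        """Return the modulation signal um for one sampling period."""
        error = v_ref - v_out                 # outer loop: output-voltage error
        self.integral += error * self.Ts      # integral part of the PI action
        pi_term = self.Kp * (error + self.integral / self.Ti)
        feedforward = v_ref                   # feed-forward of the reference voltage
        inner = self.Kc * i_cap               # inner capacitor-current feedback
        return feedforward + pi_term - inner

# Example use with arbitrary numbers:
ctrl = PIWithFeedforward()
um = ctrl.update(v_ref=311.0, v_out=300.0, i_cap=1.5)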

NEURAL NETWORK
CONTROLLER
Artificial Neural Network (ANN)
is an information processing paradigm that
is inspired by the way biological nervous
systems, such as the brain, process information. The key element of this
paradigm is the novel structure of the
information processing system. It is
composed of a large number of highly
interconnected processing elements
(neurons) working in unison to solve
specific problems. An ANN is configured
for a specific application, such as pattern
recognition or data classification, through
a learning process. Learning in biological
systems involves adjustments to the
synaptic connections that exist between
the neurons.
ARTIFICIAL NEURAL NETWORKS

An artificial neuron is a concept whose components have a direct analogy with the biological neuron. The figure below shows the structure of an artificial neuron model.

Fig : The neuron model

Each input signal flows through a gain or weight, called a synaptic weight or connection strength, whose function is analogous to that of the synaptic junction in a biological neuron. The weights can be positive (excitatory) or negative (inhibitory), corresponding respectively to acceleration or inhibition of the flow of electrical signals in a biological cell. The summing node accumulates all the weighted input signals and passes the sum to the output through the activation function, which is usually nonlinear and is analogous to the axon in a biological neuron.
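A minimal Python sketch of this neuron model (weighted sum followed by a sigmoidal activation); the weights, bias and inputs are arbitrary values chosen purely for illustration.

import math

def artificial_neuron(inputs, weights, bias=0.0):
    """Weighted sum of the inputs passed through a sigmoidal activation."""
    net = sum(x * w for x, w in zip(inputs, weights)) + bias  # summing node
    return 1.0 / (1.0 + math.exp(-net))                       # nonlinear activation

# Example with arbitrary values: two excitatory weights and one inhibitory weight.
y = artificial_neuron([0.5, 1.0, -0.2], [0.8, 0.3, -0.5], bias=0.1)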
ARCHITECTURE OF NEURAL
NETWORKS
a. FEED-FORWARD NETWORKS
Feed-forward ANNs, shown in the figure below, allow signals to travel in only one direction, from input to output. There is no feedback (no loops), i.e. the output of any layer does not affect that same layer. Feed-forward ANNs tend to be straightforward networks that associate inputs with outputs. They are extensively used in pattern recognition.

Figure: A simple feed-forward network
b. FEEDBACK NETWORKS
Feedback networks can have signals travelling in both directions by introducing loops into the network. They are very powerful and can become extremely complicated. Feedback networks are dynamic: their 'state' changes continuously until they reach an equilibrium point, where they remain until the input changes and a new equilibrium needs to be found. Feedback architectures are also referred to as interactive or recurrent, although the latter term is often used to denote feedback connections in single-layer organizations.
III) NETWORK LAYERS
The common type of artificial neural network consists of three groups, or layers, of units: a layer of "input" units is connected to a layer of "hidden" units, which is connected to a layer of "output" units.
The activity of the input units represents the raw information that is fed into the network. The activity of each hidden unit is determined by the activities of the input units and the weights on the connections between the input and the hidden units. The behavior of the output units depends on the activity of the hidden units and the weights between the hidden and output units.
The weights between the input and hidden units determine when each hidden unit is active, and so by modifying these weights, a hidden unit can choose what it represents.
BACK-PROPAGATION ALGORITHM

In order to train a neural network to perform some task, we must adjust the weights of each unit in such a way that the error between the desired output and the actual output is reduced. This process requires that the neural network compute the error derivative with respect to the weights (EW); in other words, it must calculate how the error changes as each weight is increased or decreased slightly. The back-propagation algorithm is the most widely used method for determining the EW.

The algorithm computes each EW by first computing the EA, the rate at which the error changes as the activity level of a unit is changed. For output units, the EA is simply the difference between the actual and the desired output. To compute the EA for a hidden unit in the layer just before the output layer, we first identify all the weights between that hidden unit and the output units to which it is connected. We then multiply those weights by the EAs of those output units and add the products. This sum equals the EA for the chosen hidden unit. After calculating all the EAs in the hidden layer just before the output layer, we can compute in like fashion the EAs for other layers, moving from layer to layer in a direction opposite to the way activities propagate through the network; this is what gives back-propagation its name. Once the EA has been computed for a unit, it is straightforward to compute the EW for each incoming connection of the unit: the EW is the product of the EA and the activity through the incoming connection.
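The EA/EW procedure described above can be sketched for a one-hidden-layer network as follows. This is a generic illustration in Python/NumPy with arbitrary network size, data and learning rate; it is not the network used in the paper.

import numpy as np

rng = np.random.default_rng(0)

# Tiny network: 3 inputs -> 4 hidden units (sigmoid) -> 1 output (linear).
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, target, lr=0.05):
    """One weight update using the EA/EW rules described in the text."""
    global W1, b1, W2, b2
    # Forward pass.
    h = sigmoid(W1 @ x + b1)          # hidden activities
    y = W2 @ h + b2                   # output activity
    # EA for the output unit: difference between actual and desired output.
    ea_out = y - target
    # EA for each hidden unit: its outgoing weights times the output EAs,
    # scaled by the local derivative of the sigmoid activation.
    ea_hidden = (W2.T @ ea_out) * h * (1.0 - h)
    # EW for each connection: EA of the unit times the activity feeding it.
    W2 -= lr * np.outer(ea_out, h); b2 -= lr * ea_out
    W1 -= lr * np.outer(ea_hidden, x); b1 -= lr * ea_hidden
    return float(0.5 * (y - target)[0] ** 2)

loss = backprop_step(np.array([0.2, -0.4, 0.7]), target=np.array([1.0]))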
OBTAINING EXAMPLE PATTERNS UNDER LINEAR LOADING CONDITIONS

The example patterns are obtained using the idealized load-current-reference model shown in the figure. There is a feed-forward signal from the reference voltage, which is found to reduce the steady-state error and provide high tracking accuracy to the reference.

Fig Idealized load-current feedback control scheme

The generated signal consists of two components: one is the sinusoidal feed-forward signal, and the other is a compensation signal produced by the feedback loops. In this way, the neural network can be trained to generate only the compensation signal as its desired output, which results in more effective training and better control performance.
OBTAINING EXAMPLE PATTERNS UNDER NONLINEAR LOADING CONDITIONS

Many electrical loads nowadays are nonlinear, so it is essential to maintain the performance of the UPS inverter under nonlinear loading conditions. The nonlinear load in this project is chosen to be a diode bridge rectifier with an output filter; the input current of this nonlinear load is the output current of the inverter. The idealized load-current feedback control scheme for nonlinear loads is shown in the figure below.

Fig : Idealized load-current feedback control scheme used to obtain example patterns for nonlinear loads

Here a sinusoidal voltage is fed to the nonlinear load model to obtain a load-current reference. The actual load current is compared with this reference, and the error signal is used as the controller input. The parameters of the controller are determined from simulations to produce an output with a low THD and a small enough steady-state error.
STRUCTURE AND TRAINING OF
NEURAL NETWORK
All the example patterns
obtained from the simulations are put
together to form a database that contains
the information about the control law. A
proper neural network is then used to
learn the control law. The neural network
should be as simple as possible to reduce
the calculation time. The neural network
control scheme for UPS inverter is shown
in fig below.

Fig Neural Network Control Scheme for
UPS Inverter
The inputs to the neural network are the capacitor current, the load current, the output voltage, and the error between the reference voltage and the output voltage.
The training of the NN is automated by a computer program that presents randomly selected example patterns from the pattern database to the NN a large number of times. At each presentation, the weights and biases of the NN are updated using the back-propagation algorithm until the mean square error between the desired output and the actual output of the NN is less than a predefined value.
The following is a summary of the design steps for the proposed NN controller for UPS inverter applications:
1. Build the simulated controller with the idealized load-current reference for the inverter.
2. For each of the loading conditions, tune the parameters of the controller to optimal values. Then collect the output voltage, load current, and capacitor current as the inputs of the NN and the compensation signal as the desired output of the NN.
3. Select an NN structure that is simple and yet sufficient to model the simulated controller based on the pattern database.
4. Train the NN using MATLAB software tools (a minimal sketch of such a training loop is given below).
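Purely for illustration, the following Python sketch shows the kind of off-line training loop summarized in the steps above, assuming a pattern database of (inputs, desired compensation signal) pairs. The network size, learning rate, stopping threshold and random stand-in data are placeholders; the actual training described in the paper was done with MATLAB tools.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pattern database: each row holds (capacitor current, load current,
# output voltage, voltage error) -> desired compensation signal.  Random numbers
# stand in for the values that would come from the simulated controller.
patterns_in = rng.normal(size=(200, 4))
patterns_out = rng.normal(size=(200, 1))

# One hidden layer with tanh units, linear output.
W1, b1 = rng.normal(scale=0.3, size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(scale=0.3, size=(1, 8)), np.zeros(1)
lr, target_mse = 0.01, 1e-3

for presentation in range(5000):                # "a large number of times"
    k = rng.integers(len(patterns_in))          # randomly selected example pattern
    x, d = patterns_in[k], patterns_out[k]
    h = np.tanh(W1 @ x + b1)                    # forward pass
    y = W2 @ h + b2
    e = y - d                                   # output error
    # Back-propagation update of the weights and biases.
    delta_h = (W2.T @ e) * (1.0 - h ** 2)
    W2 -= lr * np.outer(e, h);       b2 -= lr * e
    W1 -= lr * np.outer(delta_h, x); b1 -= lr * delta_h
    # Stop once the mean square error over the database is below the threshold
    # (recomputed every pass here for clarity, not efficiency).
    H = np.tanh(patterns_in @ W1.T + b1)
    mse = float(np.mean((H @ W2.T + b2 - patterns_out) ** 2))
    if mse < target_mse:
        break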
MATHEMATICAL MODELS AND RESULTS

The complete details of the UPS inverter were presented above. This section presents the mathematical models of the UPS inverter in MATLAB and the corresponding results.
LOAD MODEL :

The loads used for taking data are a resistive load, an R-L load, an R-C load and a diode bridge load. Fig 4.1 shows the mathematical model for the R-L and R-C loads, and a further figure shows the mathematical model for the diode bridge rectifier load with output capacitor and resistor.
The diode is described as

id = 0,              for ud < 0.7
id = (ud - 0.7)/0.1, for ud >= 0.7

where id is the instantaneous forward current in the diode and ud is the instantaneous forward voltage across the diode.

Figure:MATLAB load models
of R-L load and R-C load


Figure : Diode bridge rectifier load model
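A direct Python rendering of this piecewise diode description, given only to make the model concrete; the 0.7 V threshold and the 0.1 ohm slope are the values quoted above.

def diode_current(ud):
    """Piecewise diode model used in the load simulations: blocked below
    0.7 V, otherwise conducting through an effective 0.1-ohm resistance."""
    return 0.0 if ud < 0.7 else (ud - 0.7) / 0.1

# Example: a forward voltage of 0.9 V gives a forward current of 2 A.
i = diode_current(0.9)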

SPWM GENERATOR
The SPWM generator is described
by the following equation in MATLAB

Where Vdc is the voltage of DC source,
um is the instantaneous voltage of the
modulating signal and uc is the
instantaneous voltage of the triangular
carrier wave. Fig 4.3 shows the
mathematical model of SPWM generator
using MATLAB and Fig 4.4 shows the
output of the SPWM generator.

fig : SPWM generator.


Fig SPWM generator output
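The SPWM generator equation itself is not reproduced above, but a common bipolar sinusoidal-PWM formulation compares the modulating signal um with the triangular carrier uc and outputs +Vdc or -Vdc accordingly. The Python sketch below assumes that standard formulation, with arbitrary frequencies and amplitudes rather than values from the paper.

import numpy as np

def spwm(t, Vdc=400.0, f_mod=50.0, f_carrier=5000.0, m=0.8):
    """Bipolar sinusoidal PWM: output is +Vdc while the modulating sine
    exceeds the triangular carrier, and -Vdc otherwise (illustrative only)."""
    um = m * np.sin(2 * np.pi * f_mod * t)                      # modulating signal
    uc = 2.0 * np.abs(2.0 * (f_carrier * t % 1.0) - 1.0) - 1.0  # triangular carrier in [-1, 1]
    return np.where(um > uc, Vdc, -Vdc)

t = np.linspace(0.0, 0.02, 20000)   # one 50 Hz fundamental period
ui = spwm(t)                        # chopped inverter output before the LC filter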

UPS INVERTER WITHOUT
CONTROLLER

Fig UPS Inverter Without controller
for R load

Fig UPS Inverter Without controller
output for R load

Fig a : UPS Inverter without controller
for diode bridge rectifier

Fig b : UPS Inverter without controller
output for diode bridge rectifier

The figures show that the output of the uncontrolled UPS inverter does not track the reference sine wave, so a controller is required to obtain a quality sine wave.

UPS INVERTER WITH PI
CONTROLLER

Fig UPS Inverter with PI controller
for R load

Fig UPS Inverter with PI controller
output for R load

Fig UPS Inverter with PI controller
output for diode bridge rectifier load

BLOCK DIAGRAM OF NEURAL
NETWORKS
The block diagram of neural
network and its hidden layers are shown in
fig a and fig b respectively.

Fig a : Block diagram of Neural Networks


Fig b : Hidden layers of the Neural
Networks
UPS INVERTER WITH NEURAL
NETWORK CONTROLLER

Fig UPS Inverter with Neural Network
controller for R load

Fig UPS Inverter with Neural Network
controller output for R load

Fig UPS Inverter with Neural Network
controller for diode bridge rectifier
load

Fig UPS Inverter with Neural Network
controller output for diode bridge rectifier
load

CONCLUSION :

Simulation results show that the proposed Neural Network controller can produce a quality sine wave with low Total Harmonic Distortion (THD) under both linear and nonlinear loading conditions. They also show that the proposed Neural Network controller achieves superior performance compared with the PI controller, especially under rectifier-type (nonlinear) loading conditions.





















A
Paper presentation
On

NEURAL NETWORKS IN POWER SYSTEMS

Submitted
By



E.Srinivasu, III/IV B-Tech, Adm: 04001A0230, Email: srenu.shekar@gmail.com
M. Raju, III/IV B-Tech, Adm: 04001A0228, Email: raj_jnta28@yahoo.com


ABSTRACT: This paper introduces neural networks and describes applications of ANNs to the improvement of power system voltage stability. As power systems grow in size and interconnection, their complexity increases, and rising costs due to inflation and increased environmental concerns have forced transmission as well as generation systems to be operated closer to their design limits. Artificial Neural Networks (ANNs) are therefore emerging as an Artificial Intelligence tool which gives fast, approximate but acceptable solutions in real time, since they mostly use parallel processing for computation. This paper deals with the development of an ANN architecture which provides solutions for monitoring and control of voltage stability in the day-to-day operation of power systems.



1. What is a Neural Network?
An Artificial Neural Network (ANN) is
an information processing paradigm that
is inspired by the way biological nervous
systems, such as the brain, process
information. The key element of this
paradigm is the novel structure of the
information processing system. It is
composed of a large number of highly
interconnected processing elements
(neurones) working in unison to solve
specific problems.
1.1Why use neural networks?
Neural networks, with their remarkable
ability to derive meaning from
complicated or imprecise data, can be
used to extract patterns and detect trends
that are too complex to be noticed by
either humans or other computer
techniques.
1.2 Human and Artificial Neurones -
investigating the similarities
1.2(a) How the Human Brain Learns?
Much is still unknown about how the
brain trains itself to process information,
so theories abound. In the human brain,
a typical neuron collects signals from others through a host of fine structures called dendrites.

Components of a neuron

The synapse

The neuron sends out spikes of electrical activity through a long, thin strand known as an axon, which splits into thousands of branches. At the end of each branch, a structure called a synapse converts the activity from the axon into electrical effects that inhibit or excite activity in the connected neurones. When a neuron receives
excitatory input that is sufficiently large
compared with its inhibitory input, it
sends a spike of electrical activity down
its axon. Learning occurs by changing
the effectiveness of the synapses so that
the influence of one neuron on another
changes.
1.2(b) From Human Neurones to
Artificial Neurones
However because our knowledge of
neurones is incomplete and our
computing power is limited, our models
are necessarily gross idealisations of real
networks of neurones.

The neuron model
2. An engineering approach-A
simple neuron
An artificial neuron is a device with
many inputs and one output. The neuron
has two modes of operation; the training
mode and the using mode. In the training
mode, the neuron can be trained to fire
(or not), for particular input patterns. In
the using mode, when a taught input
pattern is detected at the input, its
associated output becomes the current
output. If the input pattern does not
belong in the taught list of input patterns,
the firing rule is used to determine
whether to fire or not.

A simple neuron
3. ANN Application to Power System
Voltage Stability Improvement:

Power Systems operate most of the time
under quasi-steady state. Disturbances
also occur in power systems. Examples
of such disturbances are sudden change
in load demand, generator failure or
change in transmission system
configuration due to faults and line
switching. Voltage stability is concerned
with the ability of a power system to
maintain acceptable voltages in the
system under normal conditions and
after being subjected to a disturbance. A
system enters a state of voltage
instability when a disturbance causes a
progressive and uncontrollable decline in
voltage. Fundamentally, voltage
instability is caused by the system
inability to meet reactive power demand.
Instability can occur following simple
reactive load increase, or following
contingencies which may bring in
increased losses in transmission lines,
higher reactive requirements from
devices such as induction motors, or
diminished reactive supply from static
capacitors. The objective of an ECC is to ensure secure and economic operation of the power system. The challenge of optimizing power system operation, while maintaining system security and quality of supply to customers, is increasing.
3.1Artificial Neural Network
Architecture:

ANNs are composed of simple elements
operating in parallel. These elements are
inspired by the functioning of biological
nervous systems. The key element of
ANN paradigm is the novel structure of
the information processing system. It is
composed of a large number of highly
interconnected processing elements that
are analogous to neurons and are tied
together with weighted connections that
are analogous to synapses. The back-propagation algorithm is used for training the neural network.

3.2Algorithm for improvement of
voltage stability

A minimization algorithm for improving the voltage stability margin, based on the L-index [3] and employing a non-linear least-squares optimization technique, is presented. The control variables considered are switchable VAR compensators, OLTC transformers and generator excitation. A prototype ANN for monitoring and control of the power system voltage stability margin has been developed. The proposed ANN tries to improve the voltage stability margin using SVCs, generator excitation and OLTC transformers as controllers under different loading conditions for a practical EHV Indian power system, and encouraging results have been obtained.
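The paper does not reproduce the L-index computation, so the following is stated as an assumption rather than the authors' exact method: one widely used formulation (due to Kessel and Glavitsch) evaluates, for each load bus j, L_j = |1 - sum_i F_ji V_i / V_j| over the generator buses i, with F = -inv(Y_LL) Y_LG obtained from a partition of the bus admittance matrix, and a bus is closer to voltage collapse as L_j approaches 1. A small NumPy sketch with made-up admittances and voltages:

import numpy as np

def l_index(Ybus, gen_idx, load_idx, V):
    """Kessel-Glavitsch L-index for each load bus (illustrative sketch).
    Ybus: complex bus admittance matrix; V: complex bus voltages (p.u.)."""
    Y_LL = Ybus[np.ix_(load_idx, load_idx)]
    Y_LG = Ybus[np.ix_(load_idx, gen_idx)]
    F = -np.linalg.solve(Y_LL, Y_LG)          # F = -inv(Y_LL) @ Y_LG
    Vg, Vl = V[gen_idx], V[load_idx]
    return np.abs(1.0 - (F @ Vg) / Vl)        # L_j close to 1 -> near collapse

# Toy 3-bus example with arbitrary admittances and voltages (not from the paper).
Ybus = np.array([[ 10-30j, -5+15j, -5+15j],
                 [ -5+15j, 10-30j, -5+15j],
                 [ -5+15j, -5+15j, 10-30j]])
L = l_index(Ybus, gen_idx=[0], load_idx=[1, 2],
            V=np.array([1.0+0j, 0.95-0.02j, 0.93-0.03j]))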


4. WHAT IS FUZZY LOGIC?

FL is a problem solving control system
methodology that lends itself to
implementation in systems ranging from
simple, small, embedded micro-
controllers to large, networked, multi-
channeled PC or workstation-based data
acquisition and control systems. Fuzzy
logic is the way the human brain works,
and we can mimic this in machines so
they will perform somewhat like humans
(not to be confused with Artificial
Intelligence, where the goal is for
machines to perform exactly like
humans). Fuzzy logic control and
analysis systems may be electro-
mechanical in nature, or concerned only
with data, for example economic data, in
all cases guided by "If-Then rules" stated
in human language.

4.1: The Fuzzy Logic Method:

The fuzzy logic analysis and control method is therefore:
1. Receiving one, or a large number, of measurements or other assessments of conditions existing in some system we wish to analyze or control.
2. Processing all these inputs according to human-based, fuzzy "If-Then" rules, which can be expressed in plain-language words, in combination with traditional non-fuzzy processing.
3. Averaging and weighting the resulting outputs from all the individual rules into one single output decision or signal, which decides what to do or tells a controlled system what to do. The output signal eventually arrived at is a precise-appearing, defuzzified, "crisp" value.
The following is the Fuzzy Logic Control/Analysis Method diagram (a small code sketch of these three steps is also given below):
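A minimal Python sketch of the three steps above, using made-up triangular membership functions and two hypothetical rules for a single error input; the membership shapes, rules and output values are illustrative assumptions only.

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_control(error):
    # Step 1: receive the measurement (here, a single error value).
    # Step 2: evaluate fuzzy "If-Then" rules.
    #   Rule 1: IF error is negative THEN output = -1.0
    #   Rule 2: IF error is positive THEN output = +1.0
    w_neg = tri(error, -2.0, -1.0, 0.0)
    w_pos = tri(error,  0.0,  1.0, 2.0)
    # Step 3: weighted-average defuzzification into one crisp output value.
    total = w_neg + w_pos
    return 0.0 if total == 0.0 else (w_neg * -1.0 + w_pos * 1.0) / total

u = fuzzy_control(0.4)   # crisp control signal for an error of 0.4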

4.2: The World's First Fuzzy Logic
Controller
In England in 1973 at the University of
London, a professor and student were
trying to stabilize the speed of a small
steam engine the student had built. They
had a lot going for them, sophisticated
equipment like a PDP-8 minicomputer
and conventional digital control
equipment. But, they could not control
the engine as well as they wanted.
Engine speed would either overshoot the
target speed and arrive at the target
speed after a series of oscillations, or the
speed control would be too sluggish,
taking too long for the speed to arrive at
the desired setting, as in Figure 1, below.

The steam engine speed control graph
using the fuzzy logic controller appeared
as in Figure 2,




The speed approached the desired value
very quickly, did not overshoot and
remained stable. The MAMDANI PROJECT made use of four inputs:
boiler pressure error, rate of change of
boiler pressure error, engine speed error
and rate of change of engine speed error.
There were two outputs: control of heat
to the boiler and control of the throttle.
They operated independently. Following
is a system diagram, Figure 3, for a
"getting acquainted with fuzzy" project
that provides speed control and
regulation for a DC motor. The motor
maintains "set point" speed, controlled
by a stand-alone converter-controller,
directed by a BASIC fuzzy logic control
program in a personal computer.
5. CONCLUSION:
Neural networks are very well suited for real-time systems because of their fast response and computation times, which are due to their parallel architecture. Neural networks also contribute to other areas of research such as neurology and psychology. Finally, even though neural networks have huge potential, we will only get the best out of them when they are integrated with computing, AI, fuzzy logic and related subjects.
REFERENCES:
1. Aleksander, I. and Morton, H., An Introduction to Neural Computing.



NEURAL NETWORK CONTROLLER
FOR UPS INVERTER APPLICATIONS

PAPER PRESENTED BY : B.V.BASAVESWARA RAO &
B.MADHUSUDHANA REDDY.

YEAR : IV B.TECH

BRANCH : ELECTRICAL & ELECTRONICS
ENGINEERING (E.E.E)

COLLEGE : NARAYANA ENGINEERING COLLEGE
NELLORE.

UNIVERSITY : JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY, HYDERABAD.

E-MAIL : bvsr_203@rediffmail.com
madhu.barusu@gmail.com


ABSTRACT :
A UPS inverter is a device that supplies power in the absence of the mains supply and must be controlled for efficient operation. A PI controller with optimized parameters is first used to control the UPS. Because it is suitable only for linear loads and yields a relatively high Total Harmonic Distortion (THD), a new controller is needed in place of the PI controller. Therefore, a Neural Network controller is used instead of the PI controller; it gives low THD as well as a good sinusoidal output voltage and is designed for both linear and nonlinear loading conditions.

INTRODUCTION :

Neural Networks (NNs) have been employed in many applications in recent years. A NN is an interconnection of a number of artificial neurons that mimics a biological brain system; it can approximate arbitrary function mappings and achieve a high degree of fault tolerance.

A NN used in system control can be trained either on-line or off-line. With on-line training, the weights and biases of the NN are adaptively modified during the control process, so the controller adapts better to nonlinear operating conditions. The most popular training algorithm for a feed-forward NN is backpropagation, which is attractive because it is stable, robust and efficient. With off-line training, the weights and biases are fixed during the control process, but the NN is still a nonlinear system with much better robustness than a linear controller. Moreover, the forward calculation of the NN involves only addition, multiplication, and sigmoidal wave shaping, which can be implemented with simple, low-cost analogue hardware. The fast response and low-cost implementation of the off-line trained NN make it suitable for UPS inverter applications.

If the inputs of the NN are only time, the present output voltage, and the last sampled output voltage, this limited information might not be enough to ensure a sinusoidal output voltage under various loading conditions. Example patterns are therefore obtained from a simulated controller that uses an idealized load-current reference. After training, the NN is used to control the inverter on-line. Simulation results show that the proposed NN controller can achieve low THD under nonlinear loading conditions. Additionally, for comparison purposes, a PI controller with optimized parameters is built.

UNINTERRUPTIBLE POWER SUPPLY

Some equipment requires a better quality of power supply than conventional equipment. Examples are computers, process control systems, communication links, hospital equipment, etc. This equipment is called critical because it is very sensitive to the quality of the power supply for its operation, for its own protection, and for the continuity of a process or the transfer of information. Critical equipment cannot tolerate a power failure even for a moment; such a failure not only upsets operating programs but can also damage the equipment.

Uninterruptible Power Supply (UPS) systems are necessary for satisfactory operation of critical equipment because the commercial power supply cannot guarantee the required standard. UPS systems can meet almost all the requirements for normal operation of critical equipment under all normal and abnormal conditions of the commercial supply. This section explains the various types of UPS systems and voltage control techniques.

TYPES OF UPS SYSTEMS:
There are four types of UPS
systems such as
i. Off-Line Line Preferred
Systems
ii. Off-Line Line Interactive
System
iii. On-Line Inverter Preferred
System
iv. Hybrid type UPS

UPS INVERTER:

DC-to-AC converters are known as inverters. The function of an inverter is to change a dc input voltage into a symmetrical ac output voltage of the desired magnitude and frequency. The output voltage can be fixed or variable, at a fixed or variable frequency. A variable output voltage can be obtained by varying the input voltage while keeping the gain of the inverter constant. If, on the other hand, the dc input voltage is fixed and not controllable, a variable output voltage can be obtained by varying the gain of the inverter, which is normally accomplished by pulse-width modulation (PWM) control within the inverter. The inverter gain may be defined as the ratio of the ac output voltage to the dc input voltage.
ARCHITECTURE OF UPS
INVERTER:
A UPS inverter typically consists
of a dc power source, a full bridge (or
half bridge) PWM inverter and an LC
filter, as shown in fig.

Fig : UPS Inverter System
The full-bridge inverter, which is the core of the system, chops the DC input into a series of PWM pulses according to the modulation signal um. The function of the second-order LC filter is to remove the high-frequency components of the chopped output voltage ui. Rf represents the resistance of the filter inductor. The Effective Series Resistance (ESR) of the filter capacitor is ignored since it has only a small effect within the frequency range of concern. The DC power source is considered an ideal constant-voltage supply. The load shown in the figure can be of any type: resistive, inductive, capacitive, or nonlinear.

VOLTAGE CONTROL OF
INVERTERS
In many industrial applications, it is
often required to control the output
voltage of inverters
i. To cope with the variation of dc
input voltage
ii. For voltage regulation of
inverters
iii. For the constant volts/frequency
control requirement.
There are various techniques to
vary the inverter gain. The most
efficient method of controlling the gain
is to incorporate pulse-width-
modulation (PWM) control within the
inverters .
The commonly used techniques
are
1. Single pulse-width modulation
2. Multiple pulse-width modulation
3. Sinusoidal pulse-width
modulation.
Here we incorporate the
sinusoidal pulse-width modulation
technique which reduces the total
harmonic distortion.

LINEAR MODEL OF UPS
INVERTER
Because the switching frequency is usually several orders of magnitude higher than the fundamental frequency of the AC output, the dynamics of the PWM inverter can be ignored. Thus, the UPS inverter can be modeled as a simple proportional gain block. The figure shows a linear model of the UPS inverter system.

Fig : Linear model of UPS Inverter

Here the proportional gain of the inverter is K = Vdc/Vc, where Vdc is the voltage of the DC power source and Vc is the peak voltage of the triangular carrier wave.

CONTROLLERS

Without any controller, the UPS inverter produces an unsatisfactory, distorted sinusoidal waveform, so a controller is needed to obtain the desired sinusoidal waveform. Various controllers can be used to achieve the desired output. A PI controller gives a sinusoidal waveform for linear loads but produces distortion for nonlinear loads. A Neural Network controller gives a better sinusoidal waveform for both linear and nonlinear loading conditions.

PI CONTROLLER

The PI controller produces an output signal consisting of two terms: one proportional to the error signal and the other proportional to the integral of the error signal. The transfer function of the PI controller is Gc(s) = Kp(1 + 1/(Ti*s)), where Kp is the proportional gain and Ti is the integral time.

Fig Block diagram of PI control system.

Fig : UPS Inverter with PI controller

There is a feed-forward signal from the reference voltage, which is found to reduce the steady-state error and to provide high tracking accuracy to the reference. Here, the outer feedback loop compares the output voltage with the reference input, while the inner feedback loop is taken from the capacitor current.

NEURAL NETWORK
CONTROLLER
Artificial Neural Network (ANN)
is an information processing paradigm
that is inspired by the way biological
nervous systems, such as the brain, process information. The key
element of this paradigm is the novel
structure of the information processing
system. It is composed of a large number
of highly interconnected processing
elements (neurons) working in unison to
solve specific problems. An ANN is
configured for a specific application,
such as pattern recognition or data
classification, through a learning
process. Learning in biological systems
involves adjustments to the synaptic
connections that exist between the
neurons.

ARTIFICIAL NEURAL NETWORKS

An artificial neuron is a concept whose components have a direct analogy with the biological neuron. The figure below shows the structure of an artificial neuron model.

Fig : The neuron model

Each input signal flows through a gain or weight, called a synaptic weight or connection strength, whose function is analogous to that of the synaptic junction in a biological neuron. The weights can be positive (excitatory) or negative (inhibitory), corresponding respectively to acceleration or inhibition of the flow of electrical signals in a biological cell. The summing node accumulates all the weighted input signals and passes the sum to the output through the activation function, which is usually nonlinear and is analogous to the axon in a biological neuron.

ARCHITECTURE OF NEURAL
NETWORKS
a. FEED-FORWARD NETWORKS
Feed-forward ANNs, shown in the figure below, allow signals to travel in only one direction, from input to output. There is no feedback (no loops), i.e. the output of any layer does not affect that same layer. Feed-forward ANNs tend to be straightforward networks that associate inputs with outputs. They are extensively used in pattern recognition.

Figure: A simple feed-forward network

b. FEEDBACK NETWORKS
Feedback networks can have signals travelling in both directions by introducing loops into the network. They are very powerful and can become extremely complicated. Feedback networks are dynamic: their 'state' changes continuously until they reach an equilibrium point, where they remain until the input changes and a new equilibrium needs to be found. Feedback architectures are also referred to as interactive or recurrent, although the latter term is often used to denote feedback connections in single-layer organizations.
III) NETWORK LAYERS
The common type of artificial neural network consists of three groups, or layers, of units: a layer of "input" units is connected to a layer of "hidden" units, which is connected to a layer of "output" units.
The activity of the input units represents the raw information that is fed into the network. The activity of each hidden unit is determined by the activities of the input units and the weights on the connections between the input and the hidden units. The behavior of the output units depends on the activity of the hidden units and the weights between the hidden and output units.
The weights between the input and hidden units determine when each hidden unit is active, and so by modifying these weights, a hidden unit can choose what it represents.

BACK-PROPAGATION
ALGORITHM
In order to train a neural network to
perform some task, we must adjust the
weights of each unit in such a way that
the error between the desired output and
the actual output is reduced. This
process requires that the neural network
compute the error derivative of the
weights (EW). In other words, it must
calculate how the error changes as each
weight is increased or decreased slightly.
The back propagation algorithm is the
most widely used method for
determining the EW.
The algorithm computes each EW
by first computing the EA, the rate at
which the error changes as the activity
level of a unit is changed. For output
units, the EA is simply the difference
between the actual and the desired
output. To compute the EA for a hidden
unit in the layer just before the output
layer, we first identify all the weights
between that hidden unit and the output
units to which it is connected. We then
multiply those weights by the EAs of
those output units and add the products.
This sum equals the EA for the chosen
hidden unit. After calculating all the EAs
in the hidden layer just before the output
layer, we can compute in like fashion the
EAs for other layers, moving from layer
to layer in a direction opposite to the
way activities propagate through the
network. This is what gives back
propagation its name. Once the EA has
been computed for a unit, it is straightforward to compute the EW for each incoming connection of the unit: the EW is the product of the EA and the activity through the incoming connection.

OBTAINING EXAMPLE
PATTERNS UNDER LINEAR
LOADING CONDITIONS
The example patterns are
obtained by using idealized load-current-
reference model shown in fig. There is a
feed forward signal from the reference
voltage, which is found to have
advantages of reducing steady-state error
and providing a high tracking accuracy
to the reference.

Fig Idealized load-current feedback
control scheme
The generated signal consists of two components: one is the sinusoidal feed-forward signal, and the other is a compensation signal produced by the feedback loops. In this way, the neural network can be trained to generate only the compensation signal as its desired output, which results in more effective training and better control performance.

OBTAINING EXAMPLE
PATTERNS UNDER NON LINEAR
LOADING CONDITIONS
Many electrical loads nowadays
are nonlinear. It is therefore essential to
maintain the performance of the UPS
inverter under nonlinear loading
conditions. The nonlinear load in this
project is chosen to be a diode bridge
rectifier with an output filter; the input current of this nonlinear load is the output current of the inverter. The idealized load-current feedback control scheme for nonlinear loads is shown in the figure below.

Fig : Idealized load-current feedback
control scheme
to obtain example patterns for non
linear loads
Here a sinusoidal voltage is fed to the nonlinear load model to obtain a load-current reference. The actual load current is compared with this reference, and the error signal is used as the controller input. The parameters of the controller are determined from simulations to produce an output with a low THD and a small enough steady-state error.
STRUCTURE AND TRAINING OF
NEURAL NETWORK
All the example patterns
obtained from the simulations are put
together to form a database that contains
the information about the control law. A
proper neural network is then used to
learn the control law. The neural
network should be as simple as possible
to reduce the calculation time. The
neural network control scheme for UPS
inverter is shown in fig below.

Fig Neural Network Control Scheme
for UPS Inverter
The inputs to the neural network are the capacitor current, the load current, the output voltage, and the error between the reference voltage and the output voltage.
The training of the NN is automated by a computer program that presents randomly selected example patterns from the pattern database to the NN a large number of times. At each presentation, the weights and biases of the NN are updated using the back-propagation algorithm until the mean square error between the desired output and the actual output of the NN is less than a predefined value.
The following is a summary of the design steps for the proposed NN controller for UPS inverter applications:
1. Build the simulated controller with the idealized load-current reference for the inverter.
2. For each of the loading conditions, tune the parameters of the controller to optimal values. Then collect the output voltage, load current, and capacitor current as the inputs of the NN and the compensation signal as the desired output of the NN.
3. Select an NN structure that is simple and yet sufficient to model the simulated controller based on the pattern database.
4. Train the NN using MATLAB software tools.

MATHEMATICAL MODELS AND RESULTS

The complete details of the UPS inverter were presented above. This section presents the mathematical models of the UPS inverter in MATLAB and the corresponding results.
LOAD MODEL :

The loads used for taking data are a resistive load, an R-L load, an R-C load and a diode bridge load. Fig 4.1 shows the mathematical model for the R-L and R-C loads, and a further figure shows the mathematical model for the diode bridge rectifier load with output capacitor and resistor.
The diode is described as

id = 0,              for ud < 0.7
id = (ud - 0.7)/0.1, for ud >= 0.7

where id is the instantaneous forward current in the diode and ud is the instantaneous forward voltage across the diode.

Figure:MATLAB load
models of R-L load and R-C load


Figure : Diode bridge rectifier load
model

SPWM GENERATOR
The SPWM generator is
described by the following equation in
MATLAB

Where Vdc is the voltage of DC source,
um is the instantaneous voltage of the
modulating signal and uc is the
instantaneous voltage of the triangular
carrier wave. Fig 4.3 shows the
mathematical model of SPWM
generator using MATLAB and Fig 4.4
shows the output of the SPWM
generator.

fig : SPWM generator.


Fig SPWM generator output
UPS INVERTER WITHOUT
CONTROLLER

Fig UPS Inverter Without controller
for R load

Fig UPS Inverter Without controller
output for R load

Fig a : UPS Inverter without
controller for diode bridge rectifier

Fig b : UPS Inverter without
controller output for diode bridge
rectifier

The figures show that the output of the uncontrolled UPS inverter does not track the reference sine wave, so a controller is required to obtain a quality sine wave.

UPS INVERTER WITH PI
CONTROLLER

Fig UPS Inverter with PI controller
for R load

Fig UPS Inverter with PI controller
output for R load

Fig UPS Inverter with PI controller
output for diode bridge rectifier load

BLOCK DIAGRAM OF NEURAL
NETWORKS
The block diagram of neural
network and its hidden layers are shown
in fig a and fig b respectively.

Fig a : Block diagram of Neural
Networks


Fig b : Hidden layers of the Neural
Networks


UPS INVERTER WITH NEURAL
NETWORK CONTROLLER


Fig UPS Inverter with Neural
Network controller for R load



Fig UPS Inverter with Neural
Network controller output for R load


Fig UPS Inverter with Neural
Network controller for diode bridge
rectifier load


Fig UPS Inverter with Neural
Network controller output for diode
bridge rectifier load

REFERENCES:
1. Muhammad H. Rashid, Power Electronics: Circuits, Devices, and Applications, Second edition.
2. P. C. Sen, Modern Power Electronics, First edition, 1998.
3. Christos Stergiou and Dimitrios Siganos, Neural Networks.
4. N. M. Abdel-Rahim and J. E. Quaicoe, "Analysis and design of a multiple feedback loop control strategy for single-phase voltage-source UPS inverters," IEEE Trans. Power Electron., vol. 11, pp. 532-541, July 1996.
5. M. J. Ryan, W. E. Brumsickle, and R. D. Lorenz, "Control topology options for single-phase UPS inverters," IEEE Trans. Ind. Applicat., vol. 33, pp. 493-501, Mar./Apr. 1997.
6. B. J. Kim, J. H. Choi, J. S. Kim, and C. H. Choi, "Digital control scheme of UPS inverter to improve the dynamic response," in Proc. IEEE Elect. Comput. Eng. Can. Conf., 1996, pp. 318-321.
IMPROVEMENTS TO VOLTAGE SAG RIDE-THROUGH
PERFORMANCE OF AC VARIABLE SPEED DRIVES

R.Vijayalakshmi and G.Madhunika
Department of Electrical & Electronics Engineering
Sri Kalahasteeswara Institute Of Technology
Srikalahasthi.

ABSTRACT
Voltage sags originating in ac supply systems can cause nuisance tripping of variable
speed drives (VSDs) resulting in production loss and restarting delays. In ac VSDs
having an uncontrolled rectifier front-end, the effects of voltage sags on the dc link
causing dc under-voltage or ac over current faults initiate the tripping. This paper
suggests modifications in the control algorithm in order to improve the sag ride-through
performance of ac VSDs. The proposed strategy recommends maintaining the dc link
voltage constant at the nominal value during a sag by utilizing two control modes, viz.
(a) by recovering the kinetic energy available in the rotating mass at high motor speeds
and (b) by recovering the magnetic field energy available in the motor winding
inductances at low speeds. By combining these two modes, the VSD can be configured to
have improved voltage sag ride-through performance at all speeds.

1. INTRODUCTION
Solid State AC Variable Speed Drives
have already become an integral part of
many process plants and their usage is
on the rise in industrial, commercial and
residential applications. It is projected that about 50-60% of the electrical energy generated will be processed by solid-state power electronic devices by the year 2010, compared with present-day levels of 10-20%. However, VSDs are vulnerable to voltage sags. A voltage sag is a momentary reduction of voltage, usually characterised by its magnitude and duration, with typical magnitudes between 0.1 and 0.9 p.u. and durations ranging from 0.5 cycles to 1 minute. Voltage sags are reported to be the most frequent cause of disrupted operation of many industrial
processes. This paper concentrates on
AC VSDs with a three stage topology
(Fig. 1) viz., a diode bridge rectifier



front-end, a dc link capacitor and a
PWM inverter.


Figure1. AC VSD with a VSI
configuration
In VSDs having an uncontrolled rectifier
front-end, variation in the incoming ac
supply voltage is usually reflected in the
dc link behaviour. In the case of a
balanced three-phase sag, the dc link
voltage reduces, leading to an under-
voltage trip. Also when the ac supply
returns to normal conditions, the VSD
can trip due to the ac side over-current as
a result of high charging current of the
dc bus capacitor. In the case of an
unbalanced sag, the ripple in the dc bus
voltage increases and the VSD can trip
especially when the sag magnitude and
the load torque are very high. It is
reported that, a sag of magnitude more
than 20% (i.e. ac voltage falls below 0.8
p.u.) and duration more than 12 cycles is
found to trip VSDs. The impact of
unbalanced sag is less severe on the
VSD performance and hence the ride-
through behaviour when subjected to a
balanced three-phase sag alone is
analysed here.

2. CONVENTIONAL STRATEGIES
2.1 Types of available strategies
Three types of voltage sag mitigation techniques are reported in the literature: (a) hardware modifications (e.g. increasing the ac-side inductors or the dc bus capacitance), (b) improvement of power supply conditions (e.g. use of alternative power supplies such as a motor-generator set or an Uninterruptible Power Supply), and (c) modification of the control algorithm. In this paper, strategies involving improvements in the control algorithm alone are considered, due to the following advantages: (a) no additional space is required, and (b) only software modifications are involved, so the cost increase is relatively negligible.

2.2 Control algorithm based strategies
One control algorithm based technique
which ensures maximum torque
availability to the motor suggests
compensating the modulation index and
stator frequency corresponding to the
instantaneous dc link voltage during a
sag. However, the dc link characteristics
are not improved and the drive can still
trip when a sag occurs. Another strategy
suggests maintaining the supply output
of the VSD synchronised with the
induction motor flux and operate the
motor at zero slip during a sag so that
the machine can be restarted when
normal ac supply returns. Since only a
minimal power is drawn from the dc
link, the rate of dc voltage reduction is
low. However, the sag ride-through
performance of the VSD depends on the
dc link voltage at the instant of ac supply
recovery. Finally, another control
strategy recommends maintaining the dc
bus voltage at a required level by
recovering the kinetic energy available
in the rotating mass during a sag. With
this type of control, the motor
decelerates towards zero speed at a rate
proportional to the amount of energy
regenerated and the shaft load on the
motor. But, since the kinetic energy
decreases proportional to the square of
the speed, the sag ride-through
performance of the VSD under this
strategy is highly speed dependent and it
works well only at high motor speeds. If
the voltage sag persists even after the
motor has come to standstill, the
capacitor voltage will start to reduce and
the VSD will trip due to either under-
voltage or over-current faults.

3. EFFECT OF VOLTAGE SAG ON
AC VSDs
Here, the impact of a symmetrical three-
phase sag on the VSDs controlling a
synchronous reluctance motor (SRM)
and an induction motor (IM) will be
verified. Field orientation control (FOC)
is considered. The VSDs were modeled in MATLAB with the details discussed below.

3.1 Mathematical modeling of AC
VSDs
In field oriented control of ac motors, the
three phase motor currents are
transformed into two orthogonal
components in a synchronous frame of
reference which moves with respect to
the stator coordinates, and they are
defined as isq, the torque producing
component (quadrature axis current) and
isd, the flux producing component
(direct axis current). The main difference
in the control of IMs as compared to the
SRMs is due to the orientation of the
flux axis. In the case of an SRM, the
synchronous frame of reference is the
same as the rotor axis, which can be kept
track by a rotor position sensor whereas,
in the case of IMs, more complicated
computations are involved. The
functional block diagrams of SRM and
IM VSDs under field orientation are
shown in Figures 2 and 3 respectively. In
both cases, the speed reference (wref)
and the magnetising current reference
(im Rref) forms the main control inputs.
The operation of both VSDs is almost
identical and the functions of various
control blocks are as follows:

The Torque / Current Conversion block
calculates the torque producing current
set point (isqset) from the set torque
reference (Tref). The Current Control
block calculates the stator voltage set
points in field coordinates (Vsdref and
Vsqref). The Co-ordinate Transformation
block transforms the selected voltage
references (Vsdref and Vsqref) from the
synchronous coordinates to the stator co-
ordinates. The Switching Vector
Selection block selects the appropriate
operating sequence for the inverter
switches, based on the voltage vector
position in the complex plane.
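As background for the coordinate transformation mentioned above, the following Python sketch shows the standard Clarke (abc to alpha-beta) and Park (alpha-beta to dq) transforms used in field-oriented control. It is a generic textbook formulation, not code taken from the paper, and the sample currents and rotor-flux angle are arbitrary.

import numpy as np

def abc_to_dq(ia, ib, ic, theta):
    """Clarke transform followed by a Park rotation: returns (isd, isq),
    the flux-producing and torque-producing current components."""
    # Clarke transform (amplitude-invariant form).
    i_alpha = (2.0 / 3.0) * (ia - 0.5 * ib - 0.5 * ic)
    i_beta = (2.0 / 3.0) * (np.sqrt(3.0) / 2.0) * (ib - ic)
    # Park rotation into the synchronous frame at angle theta.
    isd = i_alpha * np.cos(theta) + i_beta * np.sin(theta)
    isq = -i_alpha * np.sin(theta) + i_beta * np.cos(theta)
    return isd, isq

# Example with arbitrary phase currents (A) and flux angle (rad).
isd, isq = abc_to_dq(10.0, -4.0, -6.0, theta=0.8)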

3.2 Behaviour of VSDs on a sag
condition
Here, the impact of voltage sag on the
behaviour of both SRM and IM VSDs of
5.5 kW rating will be discussed. A
symmetrical three-phase voltage sag of
0.5 p.u. and duration 1 second was
applied when the motors were running at
a steady-state speed of 120 rad/s
operating with a load torque (TL) of 18
Nm (half the rated load). The inverter
switching frequency was kept at 5 kHz.
The observations are as follows:

3.2.1 Behaviour of SRM VSD during a
sag


The control system has ensured that the
SRM speed, torque and flux were
unaffected during the sag condition.
However, the dc bus characteristics, viz.
the capacitor voltage (Vbus) and the
capacitor charging current (Iin), are
affected the most during the sag (Fig.4).
The double-ended arrow indicates the
sag period. It is observed from the above
figures that initially there is no flow of
capacitor charging current (Iin) because
the rectifier diodes are reverse biased
and the capacitor discharges the stored
energy to the motor. Once Vbus becomes
less than the ac supply peaks, the
capacitor is charged uniformly by all the
three phases during the sag. When the ac
supply returns to normal, a very high
current pulse is observed in Iin with its
magnitude increasing with the sag
magnitude; it is usually many times the current rating of the rectifier diodes.
This high current pulse results in the
over shoot of Vbus which gradually
returns to normal by discharging to the
inverter load (Figure 4 (a)).

3.2.2 Behaviour of IM VSD during a
sag

When subjected to the three-phase sag,
the behaviour of the IM VSD was found
to be identical to that of the SRM VSD.
The speed and torque performances
are not affected whereas the impact of
the sag is observed in the dc link
characteristics. The capacitor voltage
(Vbus) and the capacitor charging
current (Iin) are shown in Figures 5 (a)
and (b) respectively.

3.3 Reasons for VSD tripping during a
sag
From Figures 4 and 5, it can be observed
that, the dc bus voltage reaches a low
level depending on the magnitude of the
sag and the load, which can cause the
VSD to trip due to an under-voltage
fault. When the sag condition is over,
very high capacitor recharging current
(Iin) results and in spite of being limited
by the circuit impedances, it is usually
several times the current handling
capacity of the rectifier diodes. In such a
case, the VSD can trip due to the over-
current. In order to protect the VSD
hardware, the under voltage trip setting
is typically kept between 70% and 85%
of the nominal dc voltage. Similarly, the
ac over-current trip is usually set in the
range of 200% to 250% of the rated
motor current.
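Purely as an illustration of these trip criteria, a small Python check using mid-range settings from the bands quoted above (75% under-voltage, 225% over-current); the specific numbers are assumptions for the example, not recommendations from the paper.

def check_trip(v_bus, i_in, v_nominal, i_rated,
               uv_fraction=0.75, oc_multiple=2.25):
    """Return the fault type that would trip the VSD, or None.
    The under-voltage and over-current settings are illustrative mid-range values."""
    if v_bus < uv_fraction * v_nominal:
        return "dc under-voltage trip"
    if i_in > oc_multiple * i_rated:
        return "ac over-current trip"
    return None

# Example: a nominal 560 V dc bus sagging to 380 V with a 12 A rated input current.
fault = check_trip(v_bus=380.0, i_in=10.0, v_nominal=560.0, i_rated=12.0)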

4. PROPOSED CONTROL STRATEGY

Since the nuisance tripping of the VSD
during a voltage sag is triggered by the
dc bus characteristics, the proposed
strategy, which is an extension of the
control strategy proposed in,
recommends maintaining the dc link
voltage at the nominal level by
recovering the kinetic as well as
magnetic field energy present in the ac
motor in order to improve the sag ride-
through performance.

4.1 Energy levels present in an ac VSD
The typical levels of energy present in
an ac VSD controlling a 5.5 kW motor
are given below:

4.2 Control Sequence and Flow-
Charting
The flowchart illustrating the VSD
control during a voltage sag condition is
shown in Figure 6.


It can be observed that, there are three
distinct situations involved with respect
to the control of the VSD. They are
summarized as follows:

Control Situation 1 (CS1): (No Voltage
sag) VSD operation with normal speed
control.

Control Situation 2 (CS2): (Voltage
sag and motor speed > cut-off speed)
DC bus voltage control by recovering
load kinetic energy.

Control Situation 3 (CS3): (Voltage
sag and motor speed < cut-off speed)
DC bus voltage control by recovering
magnetic field energy.
In order to maintain the dc link voltage
at the required level during the sag,
additional control loops are necessary
within the VSD control system. They are:
(a) Kinetic energy recovery
(b) Magnetic field energy recovery
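The following Python sketch mirrors the mode selection in the flowchart and the three control situations above; the cut-off speed value and the function name are hypothetical placeholders, not values taken from the paper.

def select_control_mode(sag_detected, motor_speed, cutoff_speed=30.0):
    """Choose the VSD control situation during operation (illustrative only).
    cutoff_speed (rad/s) is an assumed placeholder value."""
    if not sag_detected:
        return "CS1: normal speed control"
    if motor_speed > cutoff_speed:
        return "CS2: dc bus voltage control by recovering load kinetic energy"
    return "CS3: dc bus voltage control by recovering magnetic field energy"

# Example: a sag while the motor is still running at 120 rad/s selects CS2.
mode = select_control_mode(sag_detected=True, motor_speed=120.0)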

5. RESULTS
The above control strategy was implemented for a case where the VSDs were subjected to a 50% three-phase sag. The results are illustrated and analysed in the following sections:

5.1 SRM VSD response


It can be seen that the bus voltage is
maintained at a level close to the set
reference, initially by recovering the
kinetic energy as long as the motor
speed is above the cut-off speed, and then
by recovering the energy available in the
inductances (by reducing isd). The motor
speed drops more rapidly due to the
regenerative operation and then coasts
at a slower rate during the energy
recovery from the motor windings; hence
the torque requirement increases.
Once the supply returns to normal, the
motor flux reaches its rated level and the
motor starts to accelerate towards the set
speed. The capacitor charging current
(Iin) is found to be within acceptable
limits on normal ac supply recovery.

5.2 IM VSD response
When controlled by the proposed
strategy, the response of the IM VSD
was found to be similar to that of the
SRM VSD at high motor speeds (i.e. Control
Situation 2). The motor was controlled
in regeneration mode and the dc link
voltage was maintained around the set
level (587 V). The motor speed was
found to reduce towards zero. However,
below the cut-off speed, while trying to
recover energy from the winding
inductances, the operation of the control
system was found to be different from
that of the SRM VSD.




Figure 10. IM VSD response

6. CONCLUSIONS
The proposed strategy was found to
work satisfactorily in the case of the
SRM VSD. It was found that the dc link
characteristics improved and the
VSD can ride through sags over a wide
speed range. However, in the case of an
IM VSD, it is observed that this strategy
can ride through a sag only at high motor
speeds. It was verified that the magnetic
field energy gets dissipated across the
rotor in the case of an IM. Thus, an open-
loop flux reduction at a rate faster than
that dictated by the rotor time constant
will improve the sag ride-through
performance of an IM VSD at low motor speeds.

REFERENCES

[1] Puttgen, H.B., Rouaud, D., Wung, P.,
Recent Power Quality Related Small to
Intermediate ASD Market Trends, PQA
91, (First International Conference on
Power Quality: End-Use Applications and
Perspective), Oct 15-18, 1991, Paris, France.

[2] Sarmiento, H. G., Estrada, E., A
Voltage Sag Study in an Industry with
Adjustable Speed Drives, Proc. Industrial
and Commercial Power Systems Technical
Conference, Irvine, CA, USA, May 1994, pp
85-89.

[3] Collins Jr., E. R., Mansoor, A., Effects
of Voltage Sags on AC Motor Drives, Proc.
IEEE Annual Textile, Fiber and Film
Industry Technical Conference, Greenville,
SC, USA, May 1997, pp 1-7.

[4] Mansoor, A., Collins Jr., E. R., Morgan,
R. L., Effects of Unsymmetrical Voltage
Sags on Adjustable Speed Drives, Proc. The
7th Annual Conference on Harmonics and
Quality of Power, Las Vegas, NV, USA,
October 1996, pp 467-472.

POWER QUALITY IMPROVEMENT USING
RESONANT CIRCUIT


AUTHORS:

G.VARUN REDDY
3/4EEE
GITAM, VIZAG
Email : gvarunreddy.25@gmail.com
Ph: 9885079934


ABSTRACT

Characteristics of a high-
frequency modified series-parallel
resonant converter operating in high
power factor mode are presented. The
high-order series resonant converter
presented in this paper has very good
characteristics, viz., high efficiency
and narrow variation in switching
frequency for good regulation. This
converter uses a capacitive output filter.
Fixed-frequency control is used to
regulate the magnitude of the output
voltage and power. An active control
of the series-parallel resonant converter is
used for improving the input line current
wave-shape; because of this, the converter
has a very high power factor. Design
criteria that incorporate transformer
non-idealities are developed and are
employed in the construction of a high-
voltage prototype converter.
Simulation results for the converter so
designed and experimental results for
a 1 kW, 1 kV converter are presented to
verify the performance of the
proposed converter for varying load
conditions. The converter operates in
lagging power factor mode for the
entire load range.




P.DEEPAK MANOHAR
3/4EEE
GITAM, VIZAG
Email: deepak_mnhr@yahoo.co.in
Ph: 9885847272




Index Terms
High-frequency resonant
converter, high voltage, fixed-
frequency control, power quality.

I INTRODUCTION
A number of active power line
conditioners (APLCs) have been
proposed in the literature. Most of
these configurations are
PWM-type and hard-switched. They are
subjected to very high switching
losses, switching noise, switching
stresses and electromagnetic
interference (EMI), especially at high
frequency, which reduces the
efficiency of the converter. To
overcome these problems, the quasi-
resonant converter (QRC) with a high
power factor of the input line current
has been reported in the recent
literature. The switching losses and the
stresses of these converters are
significantly reduced in comparison
with PWM converters. However, it is
difficult to use a high-frequency
transformer and to operate it at high
power levels.
In order to overcome the above
disadvantages, improvement of the input
power factor and reduction of the total
harmonic distortion (THD) in the input
line current using a full-bridge
configuration of a resonant converter is
necessary [1-7]. Most of the schemes
are series, parallel, or series-parallel
resonant converters. A series resonant
converter has a voltage regulation
problem at light loads, a parallel
resonant converter has lower
efficiency at light loads, and an LCC-type
series-parallel resonant converter takes
on the properties of a parallel resonant
converter at loads below 50 % of the
full load. A modified resonant
converter [4] can be used as an AC-
DC converter, but its major drawback
is high component stress at the peak
of the ac input, especially at
full load. The high-order series
resonant converter presented in [5] has
very good characteristics, viz., high
efficiency and narrow variation in
switching frequency for good
regulation. It is a good candidate for
high-voltage dc applications, but has
high voltage stress on the output HF
rectifier diodes. M. J. Schutlen [6]
supports the parallel and series-
parallel resonant converter operated in
a high power factor mode even with
no active control of the input line current.
However, it requires a large output
capacitive filter to store the second
harmonic (120 Hz ripple for a 60 Hz
supply frequency). Use of such
a large capacitive filter leads to an input
line current with a nearly quasi-square
waveform. The waveform of this
current contains lower-order harmonics
and contributes to losses in
the system. This reduces the efficiency
of the converter.
In this paper, an improvement
in power factor on the ac line side of an AC-
DC converter using a high frequency
(HF) modified series-parallel resonant
converter (MSPRC) is proposed. The
scheme consists of an uncontrolled
diode bridge rectifier followed by a
small dc link capacitor Cin (i.e., a high-
frequency bypass) connected to the
MSPRC. The input line current of this
converter is a quasi-square wave, which
leads to a THD of more than 100% at
loads below 25 % of full load. Therefore,
with the use of active control of the
input line current of the ac-to-dc
MSPRC, the line current waveform
becomes practically sinusoidal and the
power factor of the circuit is very
high. The HF transformer non-
idealities (i.e., leakage inductance and
winding capacitance) are considered
in the basic operation of the circuit. The
winding capacitance of the HF
transformer, when taken into
consideration in the analysis, contributes
to an improvement in the predicted
performance of the converter in terms
of efficiency. Therefore, this converter
is well suited for high-voltage dc
applications such as distributed power
systems, as well as standalone converters
for a variety of applications.

The block diagram and circuit
diagram of the proposed ac-to-dc
converter employing the MSPRC with a
capacitive output filter are shown in
Fig.1 and Fig.2 respectively. A
small LC filter is used at the input of the
line rectifier to reduce the high-frequency
switching noise and ripple. The
idealized operating waveforms of the
MSPRC are shown in Fig.3. The
proposed converter consists of an
uncontrolled diode full-bridge rectifier
followed by a small dc link capacitor
Cin (i.e., a high-frequency bypass)
connected to the MSPRC. Therefore,
the input voltage to the MSPRC is a
rectified line voltage. The HF transformer
non-idealities are considered while
designing the MSPRC. In addition,
the use of this tank circuit reduces the
component stresses. Therefore, the
overall performance of the proposed
resonant converter is improved in
terms of efficiency and power
factor. A useful analytical technique,
based on classical complex ac-circuit
analysis, is suggested for designing the
modified series-parallel resonant tank
circuit. Fixed-frequency control (200
kHz) is used to regulate the output
voltage. Closed-loop control is
used to meet the output ripple
specifications. The closed-loop gain
should be chosen properly to
achieve a good transient response.
This ac-to-dc converter is simulated
using the TUTSIM package, and an
experimental prototype unit is
designed and fabricated using high-
frequency MOSFET switches.

III DESIGN OF TANK CIRCUIT
Based on classical complex
ac-circuit analysis, the MSPRC is
designed. The MSPRC is operated just
above the resonant peak of the converter
gain to maintain zero-voltage
switching for the duty-ratio control.
The ac equivalent circuit of the proposed
MSPRC is shown in Fig.4, and the
important equations relating to the
resonant tank circuit are given as
follows:


where Lleak and Cw are the leakage
inductance of the primary winding and
the capacitance of the secondary winding
of the HF transformer, respectively.


If a sinusoidal current is drawn
from the ac input line, the power
delivered by the converter to the
output filter capacitor and load has the
wave shape of a double-frequency
sinusoid with a peak value equal to
twice the average power delivered by
the converter. The converter is therefore
designed, in the same way as a dc-to-dc
converter [7], to deliver twice the
average output power at the peak of the
minimum line voltage (115 V rms).
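As a quick check of this statement (an added illustration, not part of the original text): with a unity power factor input, the instantaneous power drawn from a single-phase line is

p(t) = V_m I_m \sin^2(\omega t) = \frac{V_m I_m}{2}\bigl(1 - \cos 2\omega t\bigr) = P_o\bigl(1 - \cos 2\omega t\bigr),

where V_m and I_m are the peak line voltage and current, \omega the line angular frequency and P_o the average output power. The power therefore pulsates at twice the line frequency and peaks at 2P_o, which is why the tank and devices are sized for twice the average output power at the minimum line voltage.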
Design curves such as the
minimum kVA rating of the tank circuit
per kW of output power, the minimum
inverter output current, the maximum
efficiency and the normalized output
voltage are considered in the design of
the converter. One such optimum set
of normalized voltage curves for the
converter is shown in Fig.5. The
optimum design values, referred to
Fig.4, are given below. The design
specifications of the fabricated ac-to-dc
converter circuit are as follows:


where n is the transformer turns
ratio (primary winding to secondary
winding), R'L is the load resistance
referred to the primary side of the HF
transformer, and Vd is the forward
voltage drop of the HF rectifier diodes.
Using equations (1), (2), (3), and (5),
the optimum component values
calculated are:



V RESULTS
The simulation and
experimental waveforms of the ac input
line voltage (Vin) and input line
current (Iin) at different load
conditions are shown in Fig.6 and
Fig.7 respectively. The output voltage
Vo is regulated at 975 V. This is due
to the fact that the component values
used were not exactly the same as those
obtained from the design, and also the
losses in the semiconductor devices
were not taken into account in the
design. A decrease in load requires a
decrease in duty ratio (from 0.48 at full
load to 0.31 at 10% of full load) to
maintain a constant output voltage. The
performance of the converter was
investigated under closed-loop
conditions, with active control of the
input line current. It is seen from the
experimental and simulation results that
the power factor is very high with
active control of the input line current.
The overall efficiency of the ac-to-dc
converter is 93% at full load.
Table 1 shows the power factor with
and without active control of the input
line current. It is observed from the
experimental waveforms and Table 1
that the maximum power factor is
99.97% at full load.



















In this paper, the HF transformer
non-idealities (i.e., leakage
inductance and winding capacitance)
are considered as useful elements of
the resonant tank circuit. The resonant
converter has been designed for 1000
W as given above. This converter has a
high power factor (99.96%) at full
load and low total harmonic distortion.
This is true even with no active control
of the input line current. It is
noteworthy from the results that the
input line current decreases with
decreasing load current, resulting in a
high efficiency (93%) at full load.
Owing to the HF operation of the
resonant converter and the high-voltage
application, only a small output
filter capacitor is required to reduce
the ripple in the output dc voltage.
Also, due to the action of the resonant
converter and the small value of the
output filter capacitor, the magnitude of
the capacitor charging current is kept
low. Thus, the input line current is
sinusoidal in nature and the converter
has a high power factor. The input line
current waveform is slightly distorted at
light load (10% of full load). This is
because the component values used
were not exactly the same as those
obtained from the design and the
losses in the semiconductor devices
were not taken into account in
the design. The MSPRC is a strong
candidate for high-voltage ac-dc
applications such as distributed power
systems, stand-alone converters, high-
speed dc motor drives, and other
power electronic systems.
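To put the reported figures in context (an added illustration, not part of the original text): for a sinusoidal supply voltage, the input power factor can be written as

\mathrm{PF} = \frac{\cos\phi_1}{\sqrt{1 + \mathrm{THD}^2}},

where \phi_1 is the displacement angle of the fundamental current and THD is the total harmonic distortion of the line current. A quasi-square current with a THD near 100% limits the power factor to roughly 0.7 even at unity displacement factor, whereas a nearly sinusoidal current with a THD of a few percent allows power factors above 0.99, consistent with the 99.96-99.97% values reported above.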



Fig.7 Experimental waveforms for the
AC-to-DC resonant converter: input
voltage Vin and input current iin, with
variable frequency control.
(a) at full load, Vin (100 V/div), iin (4
A/div); (b) at 50 % of full load, Vin
(100 V/div), iin (2 A/div); and (c) at
10 % of full load, Vin (100 V/div), iin
(0.5 A/div). Time scale: 5 ms/division.


VI REFERENCES
[1] R. L. Steigerwald, A comparison
of half-bridge resonant converter
topologies, IEEE Trans. on Power
Electronics, vol.3, no.2, pp.174-182,
April 1988.

[2] A. K. S. Bhat and M. M. Swamy,
Analysis and design of a parallel
resonant converter including the effect
of a high frequency transformer,
IEEE Trans. on Industrial
Electronics, vol.37, no.4, pp.297-306,
Aug. 1990.

[3] A. K. S. Bhat, Analysis and
design of a series-parallel resonant
converter, IEEE Trans. on Power
Electronics, vol.8, no.1, pp.1-11,
Jan. 1993.

[4] H. M. Suryawanshi and S. G.
Tarnekar, Modified LCLC-type
series resonant converter with
improved performance, IEE Proc.-
Electr. Power Appl., vol.143, no.5,
Sept. 1996, pp.354-360.

[5] H. M. Suryawanshi and S. G.
Tarnekar, Resonant converter in high
power factor, high-voltage dc
applications, IEE Proc.-Electr.
Power Appl., vol.145, no.4,
July 1998, pp.307-314.


Title : Power Quality Contracts in a
Competitive Electric Utility Industry
Authors : N.Narendra Chowdary & Mohin Ahamad Syed
Address : Chadalawada Ramanamma Engineering College,
Renigunta Road, Tirupathi.
Email Id : nuthalapati.narendra@gmail.com
: moinahamed245@yahoo.co.in.

TECHNICAL PAPER PRESENTATION

ON
Power Quality Contracts in a
Competitive Electric Utility Industry
BY
N.Narendra chowdary
&
Mohin Ahamed Syed




Abstract

Who will be responsible for the
quality of power being delivered in the
deregulated utility industry? The
characteristics and sensitivity of end use
equipment within customer facilities
ultimately define power quality
requirements. Improving the energy
efficiency and productivity of industrial
and commercial facilities can sometimes
result in the use of technology that
either causes power quality problems or
is sensitive to power quality variations.
Historically, utilities have only gotten
involved in power quality problems that
they caused to their customers. In the
deregulated industry the concepts of
utilities and their customers are blurred.



What are the power quality
requirements at the interface between the
transmission company and the
distribution company? What is the base
level of power quality that must be
supplied by the distribution company to
the end use customers? What kinds of
enhanced power quality services can the
energy service company offer to the end
use customers? The answers to all of
these questions must be developed in
terms of the contracts between the
different entities resulting from the
utility industry deregulation. This paper
describes some of the requirements for
these contracts and ways of measuring
performance to evaluate compliance
with the contracts.


THE NEW MODEL

The traditional model for an
electric utility is a vertically integrated,
regulated company that includes
generation of electricity, transmission
systems, distribution systems, and retail
services. The prices and service terms
are set by uniform, regulated tariffs
that are approved and controlled by a
utility regulatory commission. The
company is allowed to obtain a specified
rate of return on the capital investments
in generation, transmission, and
distribution required to provide reliable
electric service to all the customers.
Deregulation is resulting in
important structural changes in the
utility industry. In this new structure,
customers will have the option to
purchase electricity from a variety of
retail marketers, just as you can select
from a variety of suppliers for your long
distance telephone service. In this
model, the price and terms of the electric
service will be determined in the
competitive marketplace and will be
negotiated on a case-by-case basis. The
retail marketers will have to arrange for
the required power supply to meet their
contractual requirements, the physical
delivery of the electricity (transmission
and distribution), and any other services
that may be part of their contracts with
customers.
The retail marketer is just one of
the separate entities that results from the
restructuring. Power production
companies (GENCOs or IPPs) will sell
power in contracts to bulk power traders.
Regional transmission network operators
(TRANSCOs) will provide transmission
access to get the power to the
distribution systems supplying the
customers. Finally, distribution
companies (DISTCOs) will provide the
final delivery of the electricity to
individual end use customers. These
distribution companies, or lines
companies, will likely still be the sole
providers of delivery services, and the
access charges and terms for these
services will probably still be regulated.

One interesting result of
the new structure is the opportunity for
unbundled customer services as new
revenue-producing opportunities.
Utilities are creating unregulated
subsidiaries at a tremendous pace in
order to tap this potential market for
customer services. This business could
be part of the retail energy marketing
business, or it can be a separate energy
service company (ESCO). Evaluation
of power quality concerns and
implementation of power quality
improvement technologies are very clear
opportunities for these businesses. Note
that these businesses do not have any
geographic boundaries. When
technologies and expertise are
developed to offer a range of services,
the services can be offered worldwide,
not just in a traditional service territory.
Power quality enhancement is one of
these services.

This new model for the
electric utility industry creates many
needs for contracts between the different
entities involved. The contracts will have
to address issues of reliability and power
quality, as well as the obvious issues
of price and delivery requirements.
POWER QUALITY & RELIABILITY

Obviously, reliability will
be a key issue in this restructured utility
industry. We must not allow the
introduction of competition to create
disincentives for maintaining a reliable
electric system. In fact, there is
significant discussion of performance-
based rate structures for the distribution
companies or lines companies. But what
is reliability?
As competition takes hold,
defining reliability and standardizing
methods of reporting will be one of the
most important tasks facing industry
regulators. Methods for characterizing
all power quality variations are needed
so that system performance can be
described in a consistent manner from
one utility to another and one system to
another. Electric utilities already have
standardized reliability indices [1] that
are used to report on system
performance. The indices are based on
sustained interruptions that last longer
than a certain period (1-5 minutes,
depending on the state and the utility).
However, there are many power quality
variations other than sustained
interruptions which can cause
misoperation of customer equipment.
These include sags, swells, harmonic
distortion, transient overvoltages, and
steady-state voltage variations.
The Electric Power Research
Institute (EPRI) has been developing a
set of indices which provide a more
complete picture of system performance
to address the broader definition of
reliability [2]. All or some subset of
these indices can be used in contracts
between suppliers and customers or
between companies representing
different parts of the system (e.g.
transmission and distribution) to define
expected system performance.
The extent of the system being evaluated
will depend on the particular agreement
being evaluated (see Figure 1).
One of the most important indices
relates to the system voltage sag
performance. Voltage sags are
typically the most important power
quality variations affecting industrial and
commercial customers. The IEEE Gold
Book [3] (Standard 493-1990) already
includes voltage sags in the definition of
reliability:
Economic evaluation of
reliability begins with the establishment
of an interruption definition. Such a
definition specifies the magnitude of
the voltage dip and the minimum
duration of such a reduced-voltage
period that results in a loss of
production or other function of the
plant process.



Figure 1: Application of Reliability/
Power Quality Indices at Different
Levels of the System
The most basic index for
voltage sag performance is the System
Average RMS (Variation) Frequency
Index, SARFIx. SARFIx
represents the average number of
specified short-duration rms variation
events that occurred over the
monitoring period per customer served
from the assessed system. For SARFIx,
the specified disturbances are those rms
variations with a voltage magnitude less
than x for voltage drops or a magnitude
greater than x for voltage increases.
SARFIx is defined as follows:


SARFIx is calculated in a similar
manner as the System Average
Interruption Frequency Index (SAIFI)
value that many utilities have calculated
for years [2]. The two indices are,
however, quite different. SARFIx
assesses system performance with
regard to short-duration rms variations,
whereas SAIFI assesses only sustained
interruptions. SARFIx can be used to
assess the frequency of occurrence of
sags, swells, and short-duration
interruptions. Furthermore, the inclusion
of the index threshold value, x, provides
a means for assessing sags and swells of
varying magnitudes. For example,
SARFI70 represents the average number
of sags below 70% experienced by the
average customer served from the
assessed system. SARFI can be broken
down into sub-indices by the causes of
the events or by the durations of the
events. For instance, it may be useful to
define an index related to voltage sags
that are caused by lightning-induced
faults. Indices have been defined for
subcategories associated with
instantaneous, momentary, and
temporary voltage sags, as defined in
IEEE 1159 [4].
It is also useful to introduce the
concept of aggregated events. Multiple
voltage sags often occur together due to
reclosing operations of breakers and
characteristics of distribution faults.
Once a customer process is impacted by
a voltage sag, the subsequent sags are
often less important. To account for this
effect, SARFIx uses an aggregate event
method that results in only one count for
multiple sags within a one-minute period
(aggregation period).
These indices can be estimated
based on the historical fault performance
of transmission and distribution lines,
but system monitoring is required for
accurate assessment of performance at
specific system locations. Many utilities
have already installed extensive
monitoring systems to help characterize
system performance on a continuous
basis. Some utilities have installed
monitoring systems to track performance
at specific customers as part of the
contractual requirements associated with
serving these customers [5].
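As a minimal sketch of how SARFIx could be computed from monitoring records (an illustration only, using the usual definition of the index as the number of affected customers per qualifying event divided by the total customers served; the data format, names and the simplified one-minute aggregation are assumptions):

# Hypothetical SARFI-x calculation from rms-variation event records.
# Each event: (timestamp_seconds, residual_voltage_pu, customers_affected).

def sarfi(events, x_pu, total_customers, aggregation_s=60.0):
    """Average number of rms variations below x per customer served,
    counting clustered events within the aggregation window only once."""
    qualifying = sorted(e for e in events if e[1] < x_pu)
    counted = 0.0
    last_t = None
    for t, _v, customers in qualifying:
        if last_t is None or (t - last_t) > aggregation_s:
            counted += customers          # one aggregate event per cluster
        last_t = t
    return counted / total_customers

# Example: three sags, two of them 10 s apart (counted once), 5000 customers served.
events = [(0.0, 0.62, 1200), (10.0, 0.55, 1200), (3600.0, 0.40, 800)]
print(sarfi(events, x_pu=0.70, total_customers=5000))   # SARFI-70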
Other indices are also being
defined to characterize steady-state
voltage regulation, unbalance, flicker
levels, harmonic distortion (voltage and
current), and transient disturbance
performance. Steady-state variations are
characterized by statistical distributions
that define the percentage of time the
values are within specified ranges.
The 99% and 95% probability levels for
these distributions provide good indices
for evaluation. Disturbances, like voltage
sags and transients, are characterized by
the expected number of events per
period of time that exceed specified
thresholds. Different thresholds are used
for different applications.
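As an illustration of how such steady-state indices could be derived from monitoring data (a sketch only; the sampling interval, data format and index names are assumptions, not a standardized definition):

import numpy as np

def steady_state_indices(rms_voltage_pu):
    """95th/99th percentile style indices for a series of rms voltage samples
    (e.g. 10-minute aggregated values over the reporting period)."""
    v = np.asarray(rms_voltage_pu, float)
    return {
        "CP95_deviation": float(np.percentile(np.abs(v - 1.0), 95)),  # deviation not exceeded 95% of the time
        "CP99_deviation": float(np.percentile(np.abs(v - 1.0), 99)),
        "within_+/-5%": float(np.mean((v > 0.95) & (v < 1.05))),      # fraction of time inside the range
    }

# Example: a week of 10-minute rms values around 1.0 pu.
samples = 1.0 + 0.02 * np.random.randn(7 * 24 * 6)
print(steady_state_indices(samples))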
REQUIREMENTS FOR POWER
QUALITY CONTRACTS
The requirements of particular power
quality contracts and the concerns that
must be addressed will depend on the
parties involved and the characteristics
of the system. These are typical areas
that will be addressed in a power quality
contract:
* Reliability/power quality concerns to
be evaluated
* Performance indices to be used
* Expected level of performance
* Penalty for performance outside the
expected level and/or incentives for
performance better than the expected
level (financial penalties,
performance-based rates, shared
savings, etc.)
* Measurement/calculation methods to
verify performance
* Responsibilities of each party in
achieving the desired performance
* Responsibilities of the parties for
resolving problems
In the following sections, the most
important concerns that should be
addressed are summarized for each
type of contract. Under each concern,
important factors that should be included
in the contract are described. Figure 2
illustrates some of the important
relationships where power quality
considerations will be included in the
contracts.



Figure 3. Power quality contracts in the new electric utility industry structure.





Contracts between TRANSCO and
DISTCO or Direct Serve Customer

Contracts between transmission
companies and distribution companies
(or large direct serve customers) will
define the power quality requirements
and responsibilities at the distribution
substation interface between the two
systems.
Voltage Regulation, Unbalance,
Flicker
The steady state
characteristics of the voltage being
supplied by the transmission system
should be described. Responsibilities for
the voltage regulation between the
two companies will be defined. Control
of flicker levels requires limits on both
parties. The supplying transmission
system is responsible for the overall
flicker levels in the voltage. However,
the distribution company or the direct
serve customer is responsible for
controlling fluctuating load
characteristics. This is particularly
important for contracts between the
transmission company and a large arc
furnace customer.

Harmonic Distortion
The transmission supply
company is responsible for the quality of
the voltage being supplied. The
distribution company or end use
customer is responsible for the
harmonic loading of their system. This
will typically be defined in terms of
harmonic current limits at the point of
common coupling, as described in
IEEE 519-1992 or similar standard.

Transient Voltages
Many utilities are applying capacitor
banks at the transmission level for
system voltage support and to improve
power transfer capabilities. Switching
these capacitor banks creates transient
voltages that can impact distribution
systems and end use customers due to
magnification at sensitive loads.[6,7]
With integrated utilities in the United
States, these problems are usually
solved with switching control in the
transmission capacitor banks
(synchronous closing or closing
resistors). Requirements for control of
switching transients should be defined at
the point of common coupling (the
supply substation) in a power quality
Contract. Transients from capacitor
switching should be limited to very low
levels (e.g. less than 1.1 pu) at this point
due to their potential for causing
problems at lower voltages.

Voltage Sags and Interruptions
Expected voltage sag and interruption
performance at the point of common
coupling should be defined. It is
important to recognize that voltage sags
can be caused by faults on the
transmission system or faults on the
downline distribution system. Contracts
that include voltage sag limits between
utilities and large customers supplied
from the transmission system have
already been implemented by some
utilities [5]. Payments or rate structures
that provide compensation for voltage
sag and interruption performance outside
of the specified levels should be defined.

Contracts between DISTCO and End
Users (or End User Representative)
The power quality requirements at the
point of common coupling between the
distribution system and end use
customers must be defined. In some
cases, the end user may be a customer of
the distribution company. In other
cases, the end user may be represented
by a retail marketer or an energy service
company. The basic power quality
requirements at this interface will
probably be defined by regulations.
However, opportunities for performance-
based rates or enhanced power quality
service from the distribution system will
create the need for more creative
contracts.

Voltage Regulation, Unbalance,
Flicker
These steady state characteristics
of the voltage being supplied will be
defined. Customer requirements to
control fluctuating loads, unbalanced
loads, and motor starting will be part of
this contract.
Harmonic Distortion
IEEE 519-1992 describes the split of
responsibility between the customer and
the distribution system supplier in
controlling harmonic distortion levels.
The distribution company is responsible
for the voltage distortion and the
customer is responsible for the harmonic
currents being created by nonlinear loads
within the facility.
Transient Voltages
Capacitor switching transients could be
important due to their impact on
sensitive loads. The distribution system
supplier should control the capacitor
switching transient magnitudes, but the
customer should be responsible for
avoiding magnification problems created
by power factor correction capacitors
within the facility. Basic requirements
and responsibilities for surge
suppression should also be defined to
avoid problems with high-frequency
transients associated with lightning.
Voltage Sags and Interruptions
The contract should define expected
voltage sag and interruption
performance. This is an area where
enhanced performance options may be
offered in cases where it may be more
economical to improve performance
through modifications or power
conditioning equipment applied at
the distribution system level.

Contracts between RETAILCO or ESCO
and End User
The retail energy marketer or the energy
service company will have separate
contracts with the end use customers.
These contracts may be much more
creative and complicated than the
contract that defines the basic
distribution company interface with the
customer. The energy service company
may offer a whole range of services for
improving the power quality, efficiency,
and productivity that dictate the contract
requirements. A couple of categories that
are of particular interest are
discussed here.

Enhanced Power Quality
Requirements to Improve
Productivity
At this level, the power quality
requirements must be defined in terms of
the characteristics of the equipment
being used within the facility. The power
quality may be defined in terms of the
performance of the process, rather
than the electrical characteristics of the
voltage and the current. The energy
service company can take responsibility
for the interface with the distribution
company and provide the necessary
power conditioning to assure proper
operation of the facility. Payment terms
for this power quality guarantee can be
in terms of shared savings from
improved productivity (similar to many
contracts that specify payments to energy
service companies from the shared
savings of energy efficiency
improvements) or they can be fixed
payments based on the power quality
improvement requirements.

Power Factor and Harmonic
Control
The supplying distribution system
will have requirements for harmonic
control that must be met by the
customer. The tariffs for the distribution
system supply will probably also include
power factor penalties. The retail energy
marketer or the energy service company
will have to deal with these
requirements, possibly integrating
harmonic control and power factor
correction with power conditioning
equipment for voltage sag and transient
control.

Contracts between DISTCO and
Small IPP
Deregulation also creates more
opportunities for small independent
power producers (IPPs) to generate and
sell electricity. Many of these smaller
producers may be located on distribution
systems, creating a need to define
the power quality requirements for this
interface (along with protection and
reliability requirements). The power
quality contracts will define the
expected power quality that the IPP
can expect at the interface (similar to the
contract with end users) and will define
the requirements for the IPP in terms of
the quality of the generated power.
Important areas to consider for the IPP
requirements are the power fluctuations
(e.g. startup for motor/generator
systems, power fluctuations for wind or
photovoltaic systems), harmonic
characteristics of the generated
current, power factor characteristics,
and balance.

SUMMARY
Basic power quality requirements
will have to be regulated. As
deregulation takes over the industry, the
temptation to let the level of service and
investment in the system deteriorate is
obvious. Regulators will want to prevent
this by requiring some basic level of
quality and reliability. Indices are
being developed and standardized to
facilitate the characterizing of power
quality levels on the system (IEEE
1159-1992 provided a starting point by
standardizing the definitions). EPRI
recently completed a two-year monitoring
project to provide benchmark indices
describing power quality levels on
distribution systems in the United
States [8]. The Europeans have already
started the process with the Euro
norms [9] that define levels of power
quality that can be expected in a number
of important categories (harmonics,
flicker, regulation, unbalance,
disturbances). Utilities will have to
report power quality performance
statistics and make sure that the
performance does not
significantly deteriorate over time. The
regulations governing power quality will
be part of the overall regulations for
operating the distribution part of the
electricity supply business (often called
the lines company). This will require
more system monitoring and analytical
tools to predict performance as part of
the system design process by the
distribution companies. Opportunities
for a wide variety of services related to
power quality and energy efficiency
improvement are already being created
for retail energy marketing companies
and energy service companies. These
new services and rate structures will
require standard definitions and
performance measures that can be used
in contracts and performance
evaluations.




REFERENCES
[1] C. M. Warren, The Effect of Reducing
Momentary Outages on Distribution
Reliability Indices, IEEE Transactions on
Power Delivery.
[2] D. L. Brooks, R. C. Dugan, M.
Waclawiak, A. Sundaram, Indices for
Assessing Utility Distribution System RMS
Variation Performance, IEEE Trans. Power
Delivery, 1997.
[3] ANSI/IEEE Std 493-1990, IEEE
Recommended Practice for Design of
Reliable Industrial and Commercial Power
Systems (Gold Book).
[4] IEEE Std 1159-1992, IEEE Recommended
Practice for Monitoring Electric Power
Quality.
[5] Sabin, D. D., Sundaram, A., Quality
Enhances Reliability, IEEE Spectrum,
vol. 33, no. 2, February 1996, pp. 34-41.
[6] G. Hensley, T. Singh, M. Samotyj, M.
McGranaghan, and R. Zavadil, Impact of
Utility Switched Capacitors on Customer
Systems, IEEE Transactions on Power
Delivery, pp. 862-868, April 1992.
[7] G. Hensley, T. Singh, M. Samotyj, M.
McGranaghan, and T. Grebe, Impact of
Utility Switched Capacitors on Customer
Systems, Part II - Adjustable Speed Drive
Concerns, IEEE Transactions on Power
Delivery.


A CONTROL STRATEGY FOR UNIFIED POWER
QUALITY CONDITIONER BASED ON
INSTANTANEOUS ACTIVE AND REACTIVE POWERS.










Electrical & Electronics Engineering

Sir C.R.Reddy College of Engineering














N.SATYANARAYANA T.SIVA CHAITANYA
E-mail: n.satyanarayana_079@yahoo.co.in
chaitu_thanneru@yahoo.co.in



Abstract:

One of the serious problems in electrical systems is the
increasing number of electronic components in the devices that are
used by industry as well as residences. These devices, which need
high-quality energy to work properly, are at the same time the most
responsible for the injection of harmonics into the distribution
system. Therefore, devices that soften this drawback have been
developed. One of them is the unified power quality conditioner
(UPQC). This paper presents a control strategy for a Unified Power
Quality Conditioner, to be used in three-phase
three-wire systems. The UPQC device combines a shunt-active
filter together with a series-active filter in a back-to-back
configuration, to simultaneously compensate the supply voltage and
the load current. One of the control strategies considered for the
shunt-active filter guarantees sinusoidal, balanced and minimized
source currents even under unbalanced and/or distorted system
voltages; it is known as the Sinusoidal Fryze Currents strategy. This
control strategy was then extended to develop a dual control strategy
for the series-active filter. This paper deals with the integration
principles of shunt current compensation and series voltage
compensation, both based on instantaneous active and non-active
powers, directly calculated from the a-b-c phase voltages and line
currents. Simulated results from the literature are presented to
validate the proposed control strategy.

Index Terms: Active Filters, Active Power Line Conditioners,
Instantaneous Active and Reactive Power, Sinusoidal Fryze
Currents, Sinusoidal Fryze Voltages.

I. Introduction:

ONE of the serious problems in electrical systems is the
increasing number of electronic- components of devices that are
used by industry as well as residences. These devices, which need
high-quality energy to work properly, at the same time, are the most
responsible ones for injections of harmonics in the distribution
system. Therefore, devices that soften this drawback have been
developed. One of them is the unified power quality conditioner
(UPQC), as shown in Fig.1. It consists of a shunt-active filter
together with a series-active filter. This combination allows a
simultaneous compensation of the load currents and the supply
voltages, so that compensated current drawn from the network and
the compensated supply voltage delivered to the load are sinusoidal,
balanced and minimized. The series and shunt-active filters are
connected in a back-to-back configuration, in which the shunt
converter is responsible for regulating the common DC-link voltage.

Fig1. General Configuration of UPQC

In the 30s of the last century, Fryze [1] proposed a
set of active and non-active (reactive) power definitions in the
time domain. From these concepts, Tenti et al [2] developed a
control strategy for shunt-active filters that guarantees
compensated currents in the network that are sinusoidal even if
the system voltage at the point of common coupling (PCC)
already contains harmonics. However, this control strategy does
not guarantee balanced compensated currents if the system
voltage itself is unbalanced (i.e. it contains a fundamental
negative-sequence component). In [3], this drawback was
overcome, by the addition of a positive sequence voltage
detector in the shunt-active filter controller. This control circuit
determines the phase angle, frequency and magnitude of the
fundamental positive sequence voltage component. This new
control strategy has been denominated as the Sinusoidal
Fryze Currents control strategy.

This work exploits the use of that positive-sequence
voltage detector to develop a new control strategy for the series-
active filter. It is based on a dual minimization method for
voltage compensation, together with a synchronizing circuit
(PLL circuit). The synchronizing circuit is responsible for detecting
the fundamental frequency, as well as the phase angle, of the
positive-sequence voltage component. The dual minimization
method is responsible for accurately determining the magnitude of
this voltage component. This control strategy is denominated
here as the Sinusoidal Fryze Voltages control strategy.
Further, this paper presents the integration of the Sinusoidal
Fryze Currents and the Sinusoidal Fryze Voltages control
strategies into a UPQC controller.

II. The UPQC Controller:

Fig. 2 shows the complete functional block diagram
of the UPQC controller. The part shown in Fig. 2(a) is
responsible for determining the compensating current references
for PWM control of the UPQC shunt converter, whereas the
other part, shown in Fig. 2(b), generates the compensating voltage
references for the PWM series converter. Next, each functional
block of Fig. 2 will be detailed.


Fig 2: The functional diagram of the UPQC Controller
(a). Shunt UPQC Converter, (b). Series UPQC
Converter


A. The Positive-Sequence Voltage Detector:

A positive-sequence voltage detector [the V+1 voltage
detector block in Fig. 2(a)] is implemented in terms of "minimized
voltages". It combines a dual principle of voltage minimization with
a phase-locked loop circuit (PLL circuit), as shown in Fig. 3. The
PLL circuit used is detailed in the next section. In fact, this dual
principle of voltage minimization is used here for extracting
"instantaneously" the fundamental positive-sequence component
(V+1 in phasor notation, or va1, vb1, vc1 as instantaneous values,
the outputs of Fig. 3) from a generic three-phase voltage. The
distorted and unbalanced voltages vas, vbs, vcs of the power supply
are measured and given as inputs to the PLL circuit.

As shown in the next section, it determines the signals
ia1, ib1, ic1, which are in phase with the fundamental positive-
sequence component (V+1) contained in vas, vbs, vcs. Thus, only the
magnitude of V+1 is missing. The fundamental characteristic of the
PLL used allows the use of a dual expression for determining the
active voltages, in the form

\begin{bmatrix} v_{ap} \\ v_{bp} \\ v_{cp} \end{bmatrix}
= \frac{v_{as}\,i_{a1} + v_{bs}\,i_{b1} + v_{cs}\,i_{c1}}
       {i_{a1}^{2} + i_{b1}^{2} + i_{c1}^{2}}
\begin{bmatrix} i_{a1} \\ i_{b1} \\ i_{c1} \end{bmatrix}          (1)

Equation (1) is used as an artifice to extract the V+1 component
from vas, vbs, vcs. The reason is that the signals ia1, ib1, ic1 are three
symmetric sine functions with unity amplitude, which
correspond to an auxiliary fundamental positive-sequence
current I+1 that is in phase with V+1. Hence, the average value of
the "three-phase instantaneous power", 3 V+1 I+1 cos φ, is
maximum (it would be zero if V+1 and I+1 were orthogonal), and the
average signal Rbar in Fig. 3 comprises the total amplitude of
V+1. Therefore, it is possible to guarantee that the signals va1,
vb1, vc1 are sinusoidal and have the same magnitude and phase
angle as the fundamental positive-sequence component of the
measured system voltage.
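A minimal numerical sketch of equation (1) follows (an illustration only, not the authors' implementation); the unit-amplitude signals ia1, ib1, ic1 are assumed to come from the PLL described in the next section:

import numpy as np

def active_voltages(v_s, i_1):
    """Equation (1): project the measured voltages onto the unit-amplitude
    positive-sequence currents delivered by the PLL.
    v_s, i_1: arrays [a, b, c] of instantaneous values."""
    v_s, i_1 = np.asarray(v_s, float), np.asarray(i_1, float)
    g = np.dot(v_s, i_1) / np.dot(i_1, i_1)   # scalar gain of the projection
    return g * i_1                            # v_ap, v_bp, v_cp

# Example with a balanced voltage already in phase with the PLL signals.
wt = 0.3
i1 = np.array([np.sin(wt), np.sin(wt - 2*np.pi/3), np.sin(wt + 2*np.pi/3)])
vs = 100.0 * i1                     # purely positive-sequence, amplitude 100
print(active_voltages(vs, i1))      # returns the same 100-amplitude signals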

B. The PLL Circuit:
The PLL circuit used, shown in Fig. 4, can operate
satisfactorily under highly distorted and unbalanced system
voltages. Its inputs are vab = vas - vbs and vcb = vcs - vbs, and its
outputs are ia1, ib1, ic1. The algorithm is based on the instantaneous
active three-phase power expression p3 = vab·ia + vcb·ic.

The current feedback signals ia(ωt) = sin(ωt) and
ic(ωt) = sin(ωt + 2π/3) are built up by the PLL circuit, just
using the time integral of the output ω of the PI controller. Note
that they have unity amplitude and ic(ωt) leads ia(ωt) by 120°.
Thus, they represent a feedback from a positive-sequence
component at frequency ω. The PLL circuit can reach a stable
point of operation only if the input p3 of the PI controller has
a zero average value (p3_bar = 0) and has minimized low-
frequency oscillating portions p3_tilde (p3 = p3_bar + p3_tilde).
Once the circuit is stabilized, the average value of p3 is zero
and, with this, the phase angle of the positive-sequence system
voltage at fundamental frequency is reached. At this condition,
the auxiliary currents ia(ωt) = sin(ωt) and ic(ωt) = sin(ωt + 2π/3)
become orthogonal to the fundamental positive-sequence
components of the measured voltages vas and vcs, respectively.
Therefore, ia1(ωt) = sin(ωt - π/2) is in phase with the
fundamental positive-sequence component contained in vas.
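A simplified discrete-time sketch of this PLL structure is given below (an illustration under the equations above, not the authors' implementation; the controller gains, loop signs and sampling step are assumptions and would need tuning, and the frequency state w should be initialized near the nominal angular frequency):

import numpy as np

def pll_step(v_ab, v_cb, theta, w, dt=1e-4, kp=30.0, ki=3000.0):
    """One update of the PLL driven by p3 = v_ab*i_a + v_cb*i_c.
    theta: PLL angle (rad); w: frequency state of the PI controller (rad/s)."""
    i_a = np.sin(theta)                     # unit-amplitude feedback currents
    i_c = np.sin(theta + 2*np.pi/3)
    p3 = v_ab*i_a + v_cb*i_c                # instantaneous three-phase power
    w += -ki * p3 * dt                      # integral action drives the average of p3 to zero
    theta = (theta + (w - kp*p3) * dt) % (2*np.pi)   # integrate the frequency to get the angle
    return theta, w

# At lock, i_a1 = sin(theta - pi/2) is in phase with the positive-sequence voltage.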

Fig 3. The V+1 Voltage Detector.


Fig 4. The Synchronizing Circuit (PLL Circuit)

C. The DC Voltage Regulator:

The dc voltage regulator is used to generate a control
signal Gloss, as shown in Fig. 2(a). This signal forces the shunt active
filter to draw additional active current from the network, to
compensate for losses in the power circuit of the UPQC.
Additionally, it corrects dc voltage variations caused by abnormal
operation and transient compensation errors. Fig. 5 shows the dc
voltage regulator circuit. It consists only of a PI-Controller [G(s) =
Kp + KI/s], where, for normalized inputs, Kp = 0.50 and KI = 80.


Fig5. The DC Voltage Regulator

Fig. 6 shows in detail the functional block named
"Current Minimization" in Fig. 2(a). It determines the instantaneous
compensating current references, which should be synthesized by
the shunt PWM converter of the UPQC. It has the same kernel as
the Generalized Fryze Currents method widely used, as in
[4], [5] and [6]. The inputs of the controller are the load
currents ial, ibl, icl, the control voltages va1, vb1, vc1 determined
by the V+1 detector, and the dc voltage regulator signal Gloss.

The conductance G determined in Fig. 6
corresponds to the active current of the load. In other words, it
comprises all current components that can produce active
power with the voltages va1, vb1, vc1. A low-pass, fifth-order
Butterworth filter is used to extract the average value of G,
which is denominated Gbar. Now, since va1, vb1, vc1
comprise only the V+1 component, Gbar must correspond to the
active portion of the fundamental positive-sequence component
(I+1) of the load current.

Fig.6 The Current Minimization control algorithm

The control signal Gcontrol is the sum of Gbar and
Gloss, which, together with the control voltages va1, vb1, vc1, is
used to determine the currents iaw, ibw, icw. These control signals
are pure sinusoidal waves in phase with va1, vb1, vc1 and include
the magnitude of the positive-sequence load current
(proportional to Gbar) and the active current (proportional to
Gloss) that is necessary to compensate for losses in the UPQC.
Since the shunt active filter of the UPQC
compensates the difference between the calculated active
current and the measured load current, it is possible to
guarantee that the compensated currents ias, ibs, ics drawn from the
network are always sinusoidal, balanced and in phase with the
positive-sequence system voltages. This characteristic represents a
great improvement over the Generalized Fryze Currents
control strategy.
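An illustrative sketch of the shunt compensating-current calculation described above follows (names are assumptions, and the fifth-order Butterworth averaging of the paper is replaced here by an externally supplied average value):

import numpy as np

def load_conductance(i_load, v1):
    """Instantaneous conductance G of the load seen by the v1 voltages."""
    v1 = np.asarray(v1, float)
    return float(np.dot(i_load, v1) / np.dot(v1, v1))

def shunt_references(i_load, v1, g_bar, g_loss):
    """Compensating current references for the shunt converter.
    i_load, v1: arrays [a, b, c]; g_bar: filtered load conductance;
    g_loss: dc-link regulator output."""
    g_control = g_bar + g_loss
    i_active = g_control * np.asarray(v1, float)   # i_aw, i_bw, i_cw: desired source currents
    return np.asarray(i_load, float) - i_active    # shunt filter injects the difference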

D. The Damping Control Algorithm:

In a UPQC configuration, instability problems due to
resonance phenomena may occur. In order to enhance the overall
system stability, an auxiliary circuit can be added to the controller
of the series active filter. The basic idea consists in increasing the
harmonic damping, acting as a series resistance that is effective only
at harmonic frequencies other than the fundamental one. This
damping principle was first proposed by Peng [7], in terms of
components defined in the pq Theory, and used by Aredes [8] and
Fujita [9]. This damping control algorithm, now written in terms of
abc variables (in the phase mode), can be seen in Fig. 7.
The inputs to the damping circuit are the source currents
ias, ibs, ics (compensated currents), which flow through the
series transformers of the UPQC, and the voltages determined by
the V+1 voltage detector, va1, vb1, vc1. The active and non-active
instantaneous powers are determined by using equations (2) and
(3):


Fig.7. Damping control algorithm in terms of abc
variables

P = v_{a1} i_{as} + v_{b1} i_{bs} + v_{c1} i_{cs}
Q = v_{aq} i_{as} + v_{bq} i_{bs} + v_{cq} i_{cs}          (2)

where

v_{aq} = (v_{b1} - v_{c1})/\sqrt{3},\quad v_{bq} = (v_{c1} - v_{a1})/\sqrt{3},\quad v_{cq} = (v_{a1} - v_{b1})/\sqrt{3}          (3)

Note that the voltages vaq, vbq, vcq are obtained from the
fundamental positive-sequence voltages va1, vb1, vc1. Therefore,
it is possible to guarantee that the voltages vaq, vbq, vcq are still
sinusoidal and lag the voltages va1, vb1, vc1 by 90°, respectively. A
conductance G and a susceptance B are determined from the
calculated active and non-active instantaneous powers, as
shown in Fig.7. Then, high-pass, fifth-order Butterworth filters
are used to extract the oscillating parts of that conductance and
susceptance.
The auxiliary currents iap, ibp, icp and iaq, ibq, icq are determined
as follows:
i_{ap} = G_{osc}\,v_{a1},\quad i_{bp} = G_{osc}\,v_{b1},\quad i_{cp} = G_{osc}\,v_{c1}          (4)

i_{aq} = B_{osc}\,v_{aq},\quad i_{bq} = B_{osc}\,v_{bq},\quad i_{cq} = B_{osc}\,v_{cq}          (5)

Damping signals (the harmonic components still present in the
source currents) are determined as described in (6):

i_{ah} = i_{ap} + i_{aq},\quad i_{bh} = i_{bp} + i_{bq},\quad i_{ch} = i_{cp} + i_{cq}          (6)

Finally, the multiplication of the damping signals iah, ibh,
ich by a gain K determines the damping voltages vah, vbh, vch
that will be added to the compensating voltage references of the
series active filter of the UPQC, as will be explained in the next
section. Thus, the gain K acts as a harmonic resistance to damp
resonance phenomena.
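A compact sketch of equations (2)-(6) is shown below (illustrative only; since Fig. 7 is not reproduced here, the way G and B are obtained from the powers and the replacement of the high-pass filters by externally supplied average values are assumptions, and all names are hypothetical):

import numpy as np
SQRT3 = np.sqrt(3.0)

def damping_voltages(i_s, v1, g_avg, b_avg, k):
    """Damping voltages v_ah, v_bh, v_ch from the source currents i_s and the
    positive-sequence voltages v1 = [v_a1, v_b1, v_c1]."""
    v1 = np.asarray(v1, float); i_s = np.asarray(i_s, float)
    vq = np.array([v1[1]-v1[2], v1[2]-v1[0], v1[0]-v1[1]]) / SQRT3   # eq. (3)
    p = np.dot(v1, i_s)                   # eq. (2), active power
    q = np.dot(vq, i_s)                   # eq. (2), non-active power
    g_osc = p / np.dot(v1, v1) - g_avg    # oscillating part of the conductance (assumed form)
    b_osc = q / np.dot(vq, vq) - b_avg    # oscillating part of the susceptance (assumed form)
    i_h = g_osc*v1 + b_osc*vq             # eqs. (4)-(6), damping (harmonic) currents
    return k * i_h                        # gain K acts as a harmonic resistance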

E. Compensating Voltages Calculation:

The block diagram that determines the
compensating voltages vac, vbc, vcc [Fig. 2(b)], which are
synthesized by the series PWM converter, is shown in Fig. 8.
The inputs are the control voltages determined by the V+1 voltage
detector: va1, vb1, vc1, the source voltages: vas, vbs, vcs, and the
damping voltages: vah, vbh, vch.
The compensating voltages are:

v_{ac} = (v_{a1} - v_{as}) + v_{ah},\quad v_{bc} = (v_{b1} - v_{bs}) + v_{bh},\quad v_{cc} = (v_{c1} - v_{cs}) + v_{ch}          (7)

Fig.8. Compensating voltages calculation.

Ideally, the compensated voltages delivered to the critical load will
comprise only the fundamental positive-sequence component (va1,
vb1, vc1) of the supply voltage vs. Therefore, the power factor is
ideal, the voltages delivered to the load are sinusoidal and balanced,
and it is possible to guarantee that the source currents will be
sinusoidal, balanced and minimized even under unbalanced and/or
distorted system voltages.

RESULTS:

Some of the simulated results are presented, based on the
literature, for a three-phase six-pulse thyristor rectifier with 0.2 A
dc current (20 %) used as a non-linear load. The results are given on
per-unit bases. Thus, 1 V (phase to ground) and 1 A (line current)
were used as the bases of the system, and a balanced, 1 V,
three-phase voltage source is used.
The shunt-active filter and the series-active filter start
operation at 0.2 s. The total simulation time is 0.8 s. The thyristor
rectifier is connected at t = 0.1 s. An inductor and a resistor, whose
values correspond to 0.1 % of the system base impedance, compose
the source impedance. In this case, the short-circuit power at the
load terminal is equal to 10 p.u. The small high-pass filters used to
mitigate switching-frequency harmonics at the series and shunt
PWM converters are R = 0.6 Ω and C = 170 µF. Although this seems
a high capacitance, it corresponds to 5 % of the system base
impedance. A capacitor of 2400 µF is used at the dc link of the
UPQC. The reference voltage is equal to 4.5 V. To give an idea of
the capacitor's dimension, the unit capacitor constant (UCC) is
calculated by the following equation:

UCC = \frac{\tfrac{1}{2}\,C\,v^{2}}{S} = \frac{\tfrac{1}{2}\times 2400\,\mu\mathrm{F}\times(4.5)^{2}}{3\times 1\times 1} \approx 8.1\ \mathrm{ms}          (8)
Fig. 9 shows the load, shunt and source currents ial,
iac, ias, before and after the start of the shunt-active filter. After
the start of the shunt-active filter, the source current becomes
almost sinusoidal. It may be noticed that the time the
source currents take to reach the steady state is quite small.
This demonstrates that the proportional and integral gains of the
dc voltage regulator are well dimensioned.


Fig.9. Load currents, current of the shunt active filter and
source current.

Fig. 10 shows the supply voltage vas
(uncompensated, left side of the UPQC), the compensating
voltage vac of the UPQC, and the compensated voltage vaw
delivered to the critical load, before and after the start of the
series-active filter. The vaw voltage, after the start of the series-
active filter, becomes almost sinusoidal.


Fig.10. Supply voltage, compensating voltage,
and the compensated voltage delivered to the
critical load.

Fig. 11 shows the source currents ias, ibs, ics, the
compensated voltages vaw, vbw, vcw, and the current ias together
with the voltage vaw repeated in a separate graphic, before and
after the UPQC energization. It may be seen that, when the
UPQC starts its operation, the source currents as well as the
compensated voltages become almost sinusoidal and balanced.
The source current ias and the compensated voltage vaw are
almost in phase after the start of the UPQC. This confirms that the
proposed control strategy is useful in a three-phase three-wire
system where the system voltages are unbalanced and distorted and
the load currents have a high content of harmonics.

CONCLUSIONS:

A control strategy for a Unified Power Quality
Conditioner based on instantaneous active and reactive powers for
three-phase three-wire systems has been explained. In case of use in
three-phase four-wire systems, there is the necessity of compensating
the neutral current; in that case, a three-phase four-wire
PWM converter is necessary. The computational effort to develop
this control strategy is less as compared with pq-Theory-based
controllers, since the (0-d-q) transformation is avoided. For three-
phase three-wire systems, the performance of the approach is
comparable with that of controllers based on the pq Theory, without
loss of robustness even if operating under distorted and unbalanced
system voltage conditions.


Fig.11 Source currents, compensated voltages, and the compensated
voltage vaw together with the source current ias.
















References:

[1] S. Fryze, Wirk-, Blind- und Scheinleistung in elektrischen
Stromkreisen mit nichtsinusförmigem Verlauf von Strom und
Spannung, ETZ-Arch. Elektrotech., vol. 53, 1932, pp. 596-599,
625-627, 700-702.
[2] L. Malesani, L. Rosseto, P. Tenti, Active Filter for
Reactive Power and Harmonics Compensation, IEEE PESC
1986, pp. 321-330.
[3] Luís F. C. Monteiro, M. Aredes, A Comparative Analysis
Among Different Control Strategies for Shunt Active Filters,
Proc. (CDROM) of the V INDUSCON - Conferência de
Aplicações Industriais, Salvador, Brazil, July 2002, pp. 345-350.
POWER QUALITY IMPROVEMENT USING
RESONANT CIRCUIT

Ch.meher gayatri N. swathi
III EEE III EEE
Gitam Gitam
E Mail:gayatrichigurupati@gmail.com E Mail: swathi3086@gmail.com

ABSTRACT

Characteristics of a high-frequency modified series-parallel resonant converter
operating in high power factor mode are presented. The high-order series resonant
converter presented in this paper has very good characteristics, viz., high efficiency
and narrow variation in switching frequency for good regulation. This converter uses a
capacitive output filter. Fixed-frequency control is used to regulate the magnitude
of the output voltage and power. An active control of the series-parallel resonant converter
is used for improving the input line current wave-shape; because of this, the converter has a
very high power factor. Design criteria that incorporate transformer non-idealities are
developed, and are employed in the construction of a high-voltage prototype converter.
Simulation results for the converter so designed and experimental results for a 1 kW,
1 kV converter are presented to verify the performance of the proposed converter for
varying load conditions. The converter operates in lagging power factor mode for the
entire load range.
Index Terms
High-frequency resonant converter, high voltage, fixed-frequency control,
power quality.
I INTRODUCTION
A number of active power line conditioners (APLCs) have been proposed in
the literature. Most of these configurations are PWM-type and hard-switched. They are
subjected to very high switching losses, switching noise, switching stresses and
electromagnetic interference (EMI), especially at high frequency, which reduces
the efficiency of the converter. To overcome these problems, the quasi-resonant
converter (QRC) with a high power factor of the input line current has been reported
in the recent literature. The switching losses and the stresses of these converters are
significantly reduced in comparison with PWM converters. However, it is difficult to
use a high-frequency transformer and to operate it at high power levels.
In order to overcome the above disadvantages, improvement of the input power
factor and reduction of the total harmonic distortion (THD) in the input line current using
a full-bridge resonant converter configuration is necessary [1-7]. Most of the
schemes are series, parallel, or series-parallel resonant converters. A series resonant
converter has a voltage regulation problem at light loads, a parallel resonant converter
has lower efficiency at light loads, and an LCC-type series-parallel resonant converter
takes on the properties of the parallel resonant converter at loads below 50% of full load.
A modified resonant converter [4] can be used as an AC-DC converter, but its major
drawback is high component stress at the peak of the ac input, especially at
full load. The high-order series resonant converter presented in [5] has very good
characteristics, viz., high efficiency and narrow variation in switching frequency for
good regulation. It is a good candidate for high-voltage dc applications, but has high
voltage stress on the output HF rectifier diodes. M. J. Schutten [6] reports that the
parallel and series-parallel resonant converters operate in a high power factor mode
even with no active control of the input line current. However, this requires a large output
capacitive filter to store the second harmonic (120 Hz ripple for a 60 Hz supply
frequency). Use of such a large capacitive filter leads to an input line current with a nearly
quasi-square waveform. This current contains lower-order
harmonics and contributes to losses in the system, reducing the efficiency of the
converter.

In this paper, an improvement of the power factor on the ac line side of an AC-DC
converter using a high-frequency (HF) modified series-parallel resonant converter
(MSPRC) is proposed. The scheme consists of an uncontrolled diode bridge rectifier
followed by a small dc link capacitor Cin (i.e., a high-frequency bypass) connected to
the MSPRC. The input line current of this converter is a quasi-square wave, which leads
to a THD of more than 100% at loads below 25% of full load. Therefore,
with active control of the input line current of the ac-to-dc MSPRC, the line
current waveform becomes practically sinusoidal and the power factor of the circuit is very
high. The HF transformer non-idealities (i.e., leakage inductance and winding
capacitance) are incorporated into the basic operation of the circuit. Taking the winding
capacitance of the HF transformer into account in the analysis improves the predicted
performance of the converter in terms of efficiency.
Therefore, this converter is well suited for high-voltage dc applications such as
distributed power systems, as well as for standalone converters for a variety of applications.
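
As a rough numerical illustration of why the uncontrolled quasi-square line current is problematic, the sketch below (added here for illustration; the idealized rectangular current pulses and the chosen conduction widths are assumptions, not data from the prototype) computes the THD of such a current for a few conduction widths, showing how it exceeds 100% when the conduction interval becomes narrow at light load.

```python
import numpy as np

def quasi_square_thd(conduction_deg, n=7200):
    """THD of an idealized quasi-square line current that conducts for
    'conduction_deg' degrees in the middle of each half cycle."""
    theta = 2 * np.pi * np.arange(n) / n
    half = np.radians(conduction_deg) / 2
    i = np.zeros(n)
    i[np.abs(theta - np.pi / 2) < half] = 1.0        # positive pulse
    i[np.abs(theta - 3 * np.pi / 2) < half] = -1.0   # negative pulse
    i_rms = np.sqrt(np.mean(i ** 2))
    # Fundamental component by Fourier projection at the line frequency.
    a1 = 2 * np.mean(i * np.cos(theta))
    b1 = 2 * np.mean(i * np.sin(theta))
    i1_rms = np.hypot(a1, b1) / np.sqrt(2)
    return np.sqrt(i_rms ** 2 - i1_rms ** 2) / i1_rms

# Assumed conduction widths: wide at heavy load, narrow at light load.
for width in (120, 60, 40):
    print(f"conduction {width:3d} deg -> THD = {100 * quasi_square_thd(width):5.1f} %")
```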



The block diagram and the circuit diagram of the proposed ac-to-dc converter
employing the MSPRC with a capacitive output filter are shown in Fig.1 and Fig.2,
respectively. A small LC filter is used at the input of the line rectifier to reduce the high-
frequency switching noise and ripple. The idealized operating waveforms of the MSPRC
are shown in Fig.3. The proposed converter consists of an uncontrolled diode full-
bridge rectifier followed by a small dc link capacitor Cin (i.e., a high-frequency
bypass) connected to the MSPRC. The input voltage to the MSPRC is therefore a rectified
line voltage. The HF transformer non-idealities are considered while designing the
MSPRC. In addition, the use of this tank circuit reduces the component stresses.
Therefore, the overall performance of the proposed resonant converter is improved in
terms of efficiency and power factor. A useful analytical technique, based on
classical complex ac-circuit analysis, is suggested for designing the modified series-
parallel resonant tank circuit. Fixed-frequency control (200 kHz) is used to regulate
the output voltage. Closed-loop control is used to meet the output ripple
specifications, and the closed-loop gain should be chosen properly to achieve good
transient response. The ac-to-dc converter is simulated using the TUTSIM package, and
an experimental prototype unit has been designed and fabricated using high-frequency
MOSFET switches.

III DESIGN OF TANK CIRCUIT
Based on the classical complex ac-circuit analysis, MSPRC is designed.
MSPRC is operated just above the resonant peak of converter gain to maintain zero-
voltage switching for the duty ratio control. AC equivalent circuit of proposed
MSPRC is shown in Fig.4, and the important equations relating to the resonant tank
circuit are given as follows:




where L_leak and C_w are the leakage inductance of the primary winding and the
winding capacitance of the secondary winding of the HF transformer, respectively.
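
The tank-circuit equations referred to above appear only as figures in the original and are not reproduced here. Purely as a generic, hedged illustration of how the transformer non-idealities enter such a design, the sketch below folds an assumed leakage inductance into the series branch of an LCC-type tank and evaluates the resulting resonant frequencies and characteristic impedance; all component values are placeholders, not the paper's design values.

```python
import math

# Placeholder component values (illustrative only, not the paper's design values).
L_s    = 55e-6   # external series tank inductance, H
L_leak = 5e-6    # transformer primary leakage inductance, H (treated as part of the tank)
C_s    = 15e-9   # series tank capacitance, F
C_w    = 1e-9    # reflected transformer winding capacitance, F (parallel element)

L_eff = L_s + L_leak
f0_series = 1.0 / (2 * math.pi * math.sqrt(L_eff * C_s))   # series resonant frequency
Z0 = math.sqrt(L_eff / C_s)                                # characteristic impedance
# With the parallel capacitance included (open-load case), the resonance moves higher:
f0_upper = 1.0 / (2 * math.pi * math.sqrt(L_eff * C_s * C_w / (C_s + C_w)))

print(f"series resonance  : {f0_series / 1e3:7.1f} kHz")
print(f"characteristic Z0 : {Z0:7.1f} ohm")
print(f"upper resonance   : {f0_upper / 1e3:7.1f} kHz")
```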


If a sinusoidal current is drawn from the ac input line, the power delivered by
the converter to the output filter capacitor and load has the waveshape of a double-
frequency sinusoid with a peak value equal to twice the average power delivered by
the converter. The converter is therefore designed, in the same way as a dc-to-dc
converter [7], to deliver twice the average output power at the peak of the minimum
line voltage (115 V rms). Design curves such as the minimum kVA rating of the tank
circuit per kW of output power, the minimum inverter output current, the maximum
efficiency and the normalized output voltage are considered in the design of the
converter. One such optimum normalized voltage curve for the converter is shown in
Fig.5. The optimum design values, referred to Fig.4, are given below. The design
specifications of the proposed ac-to-dc converter circuit fabricated are as follows:



where n is the transformer turns ratio (primary winding to secondary winding), R'_L
is the load resistance referred to the primary side of the HF transformer, and V_d is the
forward voltage drop of the HF rectifier diodes. Using equations (1), (2), (3), and (5),
the optimum component values calculated are:

V Results
The simulation and experimental waveforms of the ac input line voltage (Vin) and
input line current (Iin) at different load conditions are shown in Fig.6 and Fig.7,
respectively. The output voltage Vo is regulated at 975 V, slightly below the designed
1 kV; this is due to the fact that the component values used were not exactly the same
as those obtained from the design, and losses in the semiconductor devices were not
taken into account in the design. A decrease in load requires a decrease in duty ratio
(from 0.48 at full load to 0.31 at 10% of full load) to maintain a constant output voltage.
The performance of the converter was investigated under closed-loop conditions, with
active control of the input line current. It is seen from the experimental and simulation
results that the power factor is very high with active control of the input line current.
The overall efficiency of the ac-to-dc converter is 93% at full load. Table 1 shows the
power factor with and without active control of the input line current. It is observed
from the experimental waveforms and Table 1 that the maximum power factor is
99.97% at full load.






In this paper, the HF transformer non-idealities (i.e., leakage inductance and
winding capacitance) are considered as useful elements of the resonant tank circuit.
The resonant converter has been designed for 1000 W as given above. This converter
has a high power factor (99.96%) at full load and low total harmonic distortion, even
with no active control of the input line current. It is noteworthy from the results that
the input line current decreases with decreasing load current, resulting in a high
efficiency (93%) at full load. Owing to the use of the resonant converter at HF
operation and for high-voltage applications, only a small output filter capacitor is
required to reduce the ripple in the output dc voltage. Also, due to the action of the
resonant converter and the small output filter capacitor, the magnitude of the capacitor
charging current is kept low; thus, the input line current is sinusoidal in nature and the
converter has a high power factor. The input line current waveform is slightly
distorted at light load (10% of full load), because the component values used were not
exactly the same as those obtained from the design and losses in the semiconductor
devices were not taken into account in the design. The MSPRC is a strong candidate
for high-voltage ac-dc applications such as distributed power system stand-alone
converters, high-speed DC motor drives, and other power electronics systems.



Fig.7 Experimental waveforms of the AC-to-DC resonant converter for input voltage
Vin and input current iin, with variable frequency control. (a) at full load: Vin (100
V/div), iin (4 A/div); (b) at 50% of full load: Vin (100 V/div), iin (2 A/div); (c) at
10% of full load: Vin (100 V/div), iin (0.5 A/div). Time scale: 5 ms/division.
VI REFERENCES
[1] R. L. Steigerwald, A comparison of half-bridge resonant converter
topologies, IEEE Trans. on Power Electronics, vol. 3, no. 2, pp. 174-182, April 1988.

[2] A. K. S. Bhat and M. M. Swamy, Analysis and design of a parallel resonant
converter including the effect of a high frequency transformer, IEEE Trans. on
Industrial Electronics, vol. 37, no. 4, pp. 297-306, Aug. 1990.

[3] A. K. S. Bhat, Analysis and design of a series-parallel resonant converter, IEEE
Trans. on Power Electronics, vol. 8, no. 1, pp. 1-11, Jan. 1993.

[4] H. M. Suryawanshi and S. G. Tarnekar, Modified LCLC-type series resonant
converter with improved performance, IEE Proc.-Electr. Power Appl., vol. 143,
no. 5, Sept. 1996, pp. 354-360.

[5] H. M. Suryawanshi and S. G. Tarnekar, Resonant converter in high power
factor, high-voltage dc applications, IEE Proc.-Electr. Power Appl.,
vol. 145, no. 4, July 1998, pp. 307-314.

[6] M. J. Schutten, R. L. Steigerwald, M. H. Kheraluwala, Characteristics of load
resonant converters operated in a high-power factor mode, IEEE Trans. on Power
Electronics, vol. 7, no. 2, pp. 304-314, April 1992.

[7] Issa Batarseh, Resonant converter topologies with three and four energy storage
elements, IEEE Trans. on Power Electronics, vol. 9, no. 1, pp. 64-73, April 1994.

RELIABILITY ANALYSIS OF ELECTRIC UTILITY BY SCADA SYSTEMS
B. UDAYABHASKAR, A. BHASKAR, SRI VASAVI ENGG. COLLEGE,
TADEPALLIGUDEM, E-mail: udaya_alisha@yahoo.com, Ph: 9441054777
ABSTRACT
God created the world; man created the electric world. The invention of
electricity ushered mankind into the electric era.
However, man is still in search of satisfactory transmission and distribution of this
electric power through economic and secure operation, and out of this evolution came
the SCADA system (SCADA stands for Supervisory Control and Data Acquisition).
This paper presents an analysis of SCADA system reliability in terms of its
expected, aggregate contribution to load curtailment on the power system, together
with a quantitative comparison of alternative SCADA system options.
INTRODUCTION
In seeking to quantify SCADA reliability in terms of SCADA's
contribution to load curtailment, we infer the need to analyse a system model
encompassing both the SCADA and power systems.
Analysis of Power System reliability provides a means of determining
when power system failures or load curtailments occur. SCADA failures are
assumed to occur independently of load curtailments and occur when a power
system Operator is unable to retrieve data from or issue controls to primary plant
associated with one or more busses.
SCADA power system reliability analysis
The system model is a detailed representation of the master station, the SCADA
data communications network, the remote terminal units and the wide area data
network (WAN).
Interest in SCADA reliability arose out of Trans Power's adoption of a
new, centralized subtransmission SCADA architecture, shown in Figure 1. This relies on
a PC full-graphics terminal located remotely at each Area Control Centre (ACC)
and connected to the National Control Centre (NCC).
This work was undertaken in response to concern over the apparent
reduction in reliability caused by the removal of a master station and the consequent
dependence on a single master. Hence a comparison of the two architectures is given
as an example of the application of the composite reliability analysis algorithm.

ALGORITHM ANALYSIS
Based on the algorithm, the objective of the analysis is to calculate the
expected amount of unsupplied energy attributable to a SCADA failure. Taking an
annual reference period, we denote J to be the random variable representing the joint
unsupplied energy in annual system minutes. Hence we want to derive E[J], the
expected value of J. For a given system model, a single simulation will provide a
sample as follows, assuming a constant load:

j = (joint load curtailments over the simulation / total load) x (525600 / number of samples in the simulation)

Then making a number of simulation runs will give an estimate of E[J] from

E[J] = (sum of j over all simulation runs) / (number of runs)
The quantity, joint load curtailments, is derived from the algorithm
illustrated in Figure 2 and described below.
1. Generate a SCADA and power system state based on individual component
reliabilities and determine which busses the operator has control of.
2. If there are no power system failures then there will be no load
curtailments.
3 and 4. If there is sufficient available generation to meet the load then it may
be possible to reschedule it subject to circuit constraints. This is done using
a dc load flow.
5. This step is based on the linear-programming-based optimal power flow
(OPF) method described in (2). The OPF minimizes load curtailment by
rescheduling generation subject to branch and generation capacities.
6. Check whether load curtailment and SCADA control failure have
occurred at the same bus or busses. This is perhaps a harsh simplification.
7. Accumulate the total power not supplied at busses where SCADA control
has also failed.
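
The sampling procedure above can be sketched in a few lines of code. The fragment below is a deliberately simplified illustration, not the authors' implementation: it replaces the dc load flow and OPF of steps 3-5 with assumed per-bus curtailment probabilities, applies the coincidence test of step 6, and converts the accumulated curtailment into annual system minutes using the 525600 minutes-per-year factor from the expression for j. All bus data are invented placeholders.

```python
import random

# Assumed per-bus model: (peak load in MW, prob. of load curtailment at the bus
# in any sampled state, prob. that SCADA control of the bus is unavailable).
BUSES = {
    "bus_1": (120.0, 0.020, 0.002),
    "bus_2": (85.0,  0.035, 0.001),
    "bus_3": (60.0,  0.050, 0.002),
}
TOTAL_LOAD = sum(mw for mw, _, _ in BUSES.values())
MINUTES_PER_YEAR = 525_600

def one_simulation(n_samples=100_000, rng=random):
    """Return one sample j of joint unsupplied energy, in annual system minutes."""
    joint_curtailment = 0.0
    for _ in range(n_samples):
        for mw, p_curtail, p_scada_fail in BUSES.values():
            # Step 6: count only coincident curtailment and SCADA control failure.
            if rng.random() < p_curtail and rng.random() < p_scada_fail:
                joint_curtailment += mw
    return (joint_curtailment / TOTAL_LOAD) * (MINUTES_PER_YEAR / n_samples)

runs = [one_simulation() for _ in range(20)]
print(f"estimated E[J] = {sum(runs) / len(runs):.2f} system minutes per year")
```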
Communications and SCADA system
In order to use the IEEE RTS as an example, a SCADA system must be
designed for it. In line with our intention of looking at Area SCADA
architectures, the RTS is split into two areas as shown in the figure. An area control
centre is located at the site of busses 8, 9, 10 and 11, and a national control centre is
located somewhere in Area 2. Then, using the communication availability categories
defined in Table 1, a possible communication network is defined. Figure 5 and Figure
6, along with Table 2, define the reliability component models for the centralized and
distributed SCADA architectures, respectively.
It is normal for system availability to be specified as a single number such
as 99.8% (3).
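
A single availability figure such as 99.8% is normally built up from the availabilities of the elements along the communications path: elements in series multiply, while redundant (parallel) elements combine as one minus the product of their unavailabilities. The short sketch below illustrates that combination with assumed element availabilities; it does not use the values of Table 2.

```python
from functools import reduce

def series(*availabilities):
    """All elements must work: availabilities multiply."""
    return reduce(lambda a, b: a * b, availabilities)

def parallel(*availabilities):
    """Any one redundant element is enough: 1 - product of unavailabilities."""
    return 1.0 - reduce(lambda a, b: a * b, (1.0 - x for x in availabilities))

# Assumed path: RTU -> modem -> duplicated WAN links -> front end -> master station.
rtu, modem, wan_link, front_end, master = 0.9995, 0.9990, 0.998, 0.9995, 0.9998
path = series(rtu, modem, parallel(wan_link, wan_link), front_end, master)
print(f"end-to-end availability: {100 * path:.3f} %")
```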
Power system model
The analysis algorithm requires a dc load flow model of the power system.
The load shedding philosophy chosen for the IEEE RTS is defined by the load shedding
weights in Table 1; loads with the lowest weight will be shed first if possible.
To illustrate this, consider the extreme example of assigning a high weight
to all busses in Area 1 of the IEEE RTS. This would cause a disproportionate
number of load curtailments to occur at Area 2 busses, which in turn would result in
a significantly lower probability of a joint failure event in Area 1.
Table 1: SCADA RTU communications availability criteria

Bus category    Bus MW inflow (T)       Load shed weight    Comms availability
1               T >= 5 pu               1.3                 99.997%
2               2 pu <= T < 5 pu        1.2                 99.95%
3               0.5 pu <= T < 2 pu      1.1                 99.8%
4               T < 0.5 pu              1.0                 98.6%

where T = bus MW inflow.


RTS Calculations and Results
As well as calculating the system reliability at normal component
availabilities, the sensitivity of joint system minutes to backend availability was
also calculated.





Economics Interpretation
These results show that the centralised architecture, at normal component
availability levels, is around 5.5 times more expensive than the distributed
architecture and is also more sensitive to back end availability.

TRANS POWER EXAMPLE
Trans Power has replaced two distributed Area SCADAs with centralized
SCADA systems. It is likely that the comparative performance of these
architectures, found in the IEEE RTS results described in the previous section,
remains valid.
This followed the model construction and analysis detail described
previously for the IEEE RTS with the following additions.
1. The model comprised an Area SCADA covering 13 busses within New
Zealand's North Island grid (the dc power flow model for this system
comprises 116 busses).
2. Much of the load curtailment in Trans Power's system results from supply
transformer trippings.
3. Based on this, the composite model predicted around 39 system minutes,
compared with an actual value of around 10 system minutes. It is likely
that this would lead to an overestimate of reliability worth.
4. A single power system load value of approximately 60% of full load was
used.
Conclusion
A method of evaluating SCADA system reliability has been presented in
the paper. This provides an aggregate assessment of system reliability which can
define reliability in absolute cost terms. This cost enables a direct, quantitative
comparison of alternative SCADA system options, as shown by the examples described.
REFERENCES
1. IEEE Magazine or transmission of power systems.
2. Conference on Probabilistic Methods Applied to electric power system.

PAPER PRESENTATION ON
SATELLITE POWER SYSTEM (SPS)





BY
A.C.K. RAJESH
&
L.LOKESH

DEPARTMENT OF ELECTRICAL ENGINEERING
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY
ANANTAPUR-515002



ADDRESS:

A.C.K.RAJESH L.LOKESH
ADMISSION NO: 04001A0220 ADMISSION NO: 04001A0229
ROOM NO: 303 ROOM NO: 305
ELLORA HOSTEL ELLORA HOSTEL
JNTUCEA

E-MAIL:

1)Rajesh_ack@yahoo.co.in

2)Lokesh_lavu@yahoo.co.in
















ABSTRACT
This paper reports on the futuristic
advances in power transmission through
microwaves. Sun is a limitless source of
Energy. A Space Power Satellite (SPS)
orbiting round the earth traps solar
energy and generates electric power
using photovoltaic cells of sizable area.
SPS transmits the generated power via a
microwave beam to the receiving
rectenna site on earth. A rectenna
(RECtifying anTENNA) comprises a
mesh of dipoles and diodes for absorbing
microwave energy from a transmitter
and converting it into electric power. We
can in fact directly convert solar energy
into electrical energy with the use of
solar cells, but this process will be
affected by day/night cycles, weather,
and seasons. We are aware of the fact
that light is an electromagnetic wave.
Light rays never diffuse in space & if by
any means these rays can be transmitted
from space to earth then it will be a
perfect solution for our desired need of
24 hrs power supply. The 21st century
endeavors and approaches for
establishing human race in space can
come true only if the basic requirement
of human beings is satisfied i.e. 24hrs
power, which can be efficiently served
by rectenna. This paper presents the
concept & evolution of satellite power
system, SPS2000 (a research work by
ISAS) and the impact of Microwave
Power Transmission (MPT) on space
plasma. In near future conventional
power sources cannot meet total power
demand, for which SPS is a best
solution.

CONTENTS
1. INTRODUCTION
2. SOLAR POWER
SATELLITE
3. DC TO MICROWAVE
GENERATION
4. RECTENNA
5. DEVELOPMENT OF
SOLAR POWER SATELLITE,
SPS2000
6. SPACE SETTLEMENT
7.CONCLUSION
1.INTRODUCTION
The post-war history of research on free-
space power transmission is well
documented by William C. Brown, who
was a pioneer of practical microwave
power transmission. It was he who first
succeeded in demonstrating a
microwave-powered helicopter in 1964.
A power conversion device from
microwave to DC, called a rectenna, was
invented and used for the microwave-
powered helicopter. The first rectenna
was composed of 28 half-wave dipoles
terminated in a bridge rectifier using
point-contact semiconductor diodes.
Later, the point contact semiconductor
diodes were replaced by silicon
Schottky-barrier diodes, which raised the
microwave-to-DC conversion efficiency
from 40 % to 84 %. The highest record
of 84 % efficiency was attained in the
demonstration of microwave power
transmission in 1975 at the JPL
Goldstone Facility. Power was
successfully transferred from the
transmitting large parabolic antenna dish
to the distant rectenna site over a
distance of 1.6 km. The DC output was
30 kW.

An important milestone
in the history of microwave power
transmission was the three-year study
program called the DOE/ NASA
Satellite Power System Concept
Development and Evaluation Program,
started in 1977. The extensive study of
the SPS ended in 1980, producing a 670
page summary document. The concept
of the SPS was first proposed by P. E.
Glaser in 1968 to meet both space-based
and earth-based power needs. The SPS
will generate electric power of the order
of several hundreds to thousands of
megawatts using photovoltaic cells of
sizable area, and will transmit the
generated power via a microwave beam
to the receiving rectenna site. Among
many technological key issues, which
must be overcome before the SPS
realization, microwave power
transmission (MPT) is one of the most
important key research issues. The
problem involves not only the
technological development of
microwave power transmission with
high efficiency and high safety, but also
the scientific analysis of the microwave
impact on the space plasma environment.
2.SOLAR POWER SATELLITE:
figure 2.1
The concept of the Solar Power Satellite
(SPS) is very simple. It is a gigantic
satellite designed as an electric power
plant orbiting in the Geostationary Earth
Orbit (GEO). It consists of mainly three
segments.
1) Solar energy collector to convert the
solar energy into DC (direct current)
electricity
2) DC-to-microwave converter,
3) large antenna array to beam the
microwave power to the ground.
The solar collector can be either
photovoltaic cells or a solar thermal
turbine. The DC-to-microwave converter
of the SPS can be either a microwave
tube system and/or a semiconductor
system, or their combination. The third
segment is a gigantic antenna array. The
SPS system has the advantage of
producing electricity with much higher
efficiency than a photovoltaic system on
the ground. Since the SPS is placed in
GEO, there is no atmospheric
absorption, the solar input power density
is about 30% higher than the ground
solar power density, and power
is available 24 hours a day without being
affected by weather conditions. It is
confirmed that the eclipses would not
cause a problem on a grid because their
occurrences are precisely predictable.
3.DC TO MICROWAVE
GENERATION AND
AMPLIFICATION
The technology
employed for generating microwave
radiation is extremely important for the
SPS system. It should be highly
efficient, very low noise, and have an
acceptable weight/power ratio. A
microwave energy transmitter often uses
2.45 GHz in the ISM band. There are
two types of microwave generators and
amplifiers, the microwave tube and the
semiconductor amplifier. These have
contrasting electric characteristics. The
microwave tube, e.g. the magnetron, can
generate and amplify high-power
microwaves (kilowatts and above) at a
high voltage (kilovolts and above), and it
is very economical. The semiconductor
amplifier generates low-power
microwaves (below 100 W) at a low
voltage (below fifteen volts), and it is
currently still expensive. The microwave
tube has higher efficiency (over 70%)
while the semiconductor has lower
efficiency (below 50%). The weight of
the MPT system is also important for
reducing the transportation cost of the
SPS. The microwave tube is lighter than
a semiconductor amplifier when
compared by power-to-weight ratio
(kg/kW), because the microwave tube
can generate and amplify higher power
microwaves than the semiconductor
amplifier can.

4.RECTENNA

RECTifying anTENNA rectifies
received microwaves into DC current. A
rectenna comprises of a mesh of dipoles
and diodes for absorbing microwave
energy from a transmitter and converting
it into electric power. Its elements are
usually arranged in a mesh pattern,
giving it a distinct appearance from most
antennae. A simple rectenna can be
constructed from a schottky diode placed
between antenna dipoles as shown in
Fig. 1. The diode rectifies the current
induced in the antenna by the
microwaves. Rectennae are highly
efficient at converting microwave energy
to electricity. In laboratory
environments, efficiencies above 90%
have been observed with regularity. In
future rectennas will be used to generate
large-scale power from microwave
beams delivered from orbiting SPS
satellites.
Brief introduction of Schottky
Barrier Diode:
A Schottky barrier diode is different
from a common P/N silicon diode. The
common diode is formed by joining a
P-type semiconductor with an N-type
semiconductor, i.e., a junction between
two semiconductors; a Schottky barrier
diode, however, is formed by joining a
metal with a semiconductor. When the
metal contacts the semiconductor, a
layer of potential barrier (the Schottky
barrier) is formed at their contact
surface, which shows a rectifying
characteristic. The semiconductor
material is usually n-type (occasionally
p-type), and the metal is generally
chosen from metals such as platinum or
tungsten. A sputtering technique joins
the metal and the semiconductor.
A Schottky barrier diode is a majority
carrier device, while a common diode is
a minority carrier device. When a
common PN diode is switched from
conduction to blocking, the excess
minority carriers at the junction must be
removed, which results in a time delay.
The Schottky barrier diode itself has no
minority carriers, so it can switch from
conduction to blocking much faster than
a common P/N diode; its reverse
recovery time Trr is very short, typically
below 10 ns. In addition, the forward
voltage drop of the Schottky barrier
diode is around 0.6 V, lower than that
of the common PN diode (about 1.1 V).
The Schottky barrier diode is therefore a
comparatively ideal diode. For example,
for a diode carrying a limited current of
1 ampere, the conduction losses of the
two devices are:
Schottky barrier diode: P = 0.6 V x 1 A = 0.6 W
Common PN diode: P = 1.1 V x 1 A = 1.1 W
The difference in efficiency is therefore
considerable. Besides, the PIV of the
Schottky barrier diode is generally far
smaller than that of the PN diode; on the
basis of the same unit, the PIV of the
Schottky barrier diode is probably 50V
while the PIV of the PN diode may be as
high as 150V. Another advantage of the
Schottky barrier diode is a very low
noise index that is very important for a
communication receiver; its working
scope may reach 20 GHz.
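
The conduction-loss comparison above extends directly to other load currents. The sketch below simply evaluates P = Vf x I for the two forward-drop figures quoted above over an assumed range of currents; it is an illustration, not a datasheet calculation.

```python
# Approximate forward voltage drops quoted above.
V_F_SCHOTTKY = 0.6   # volts
V_F_PN = 1.1         # volts

def conduction_loss(v_forward, current_a):
    """Conduction loss of a diode approximated as P = Vf * I."""
    return v_forward * current_a

for i in (0.5, 1.0, 2.0, 5.0):   # assumed load currents, in amperes
    p_sch = conduction_loss(V_F_SCHOTTKY, i)
    p_pn = conduction_loss(V_F_PN, i)
    print(f"I = {i:4.1f} A  Schottky: {p_sch:4.2f} W  PN: {p_pn:4.2f} W  "
          f"saving: {100 * (p_pn - p_sch) / p_pn:.0f} %")
```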

Development of a Functional
System Model of the Solar
Power Satellite, SPS2000

SPS2000 is a strawman model of solar
power satellites with microwave power
output of 10 MW, which was proposed
by the SPS working group of the
Institute of Space and Astronautical
Science (ISAS). The primary objective
of SPS2000 research is to show whether
SPS could be realized with the present
technology and to find out technical
problems. The conceptual study of
SPS2000 is now being carried out under
the assumption that the first construction
will be started before the beginning of
the twenty-first century. SPS2000 transforms the
DC power generated by huge solar
arrays to microwave power at 2.45 GHz
and transmits it to the rectennas on the
earth while it moves from west to east in
an equatorial low earth orbit (LEO) of
1100 km altitude. Transmission is
possible when the rectenna can be in the
field of view of the controllable
microwave beam from SPS2000.
Therefore, SPS2000 should always
detect the location of the rectenna and
direct a microwave beam toward the
rectenna. In order to perform the beam
scan, the spacetenna should have a
function of a phased-array antenna. We
discuss a configuration of spacetenna of
SPS2000. On the basis of the spacetenna
proposed above, we design a functional
system model of SPS2000 as a
demonstration model and construct
microwave circuits employing silicon
(Si) semiconductors since there are
many advantages in Si technology
compared with others in terms of cost
reduction, robustness of the system and
extraterrestrial resources.



Solar Power Satellite of
SPS2000:
The general configuration of SPS2000
has the shape of a triangular prism, as
shown in Figure 2. The power
transmission antenna, the spacetenna, is
built on the bottom surface facing the
earth, and the other two surfaces are
used to deploy the solar panels. SPS2000
moves on an equatorial LEO at an
altitude of 1100km. The choice of the
orbit minimizes the transportation cost
and the distance of power transmission
from space. The spacetenna is
constructed as a phased-array antenna. It
directs a microwave power beam to the
position where a pilot signal is
transmitted from a ground-based
segment of power system
(RECTENNA). Therefore, the
spacetenna has to be a huge phased-array
antenna in size with a retro directive
beam control capability. So, microwave
circuits are connected to each antenna
element and driven by DC power
generated in the huge solar panels. A
frequency of 2.45 GHz is assigned to
transmit power to the earth. Figure 2 also
shows a scheme of microwave beam
control and rectenna location. SPS2000
can serve exclusively the equatorial
zone, especially benefiting
geographically isolated lands in
developing nations.


Figure 3 illustrates a configuration of the
Spacetenna. The Spacetenna has a
square shape whose dimension is 132
meters by 132 meters and which is
regularly filled with 1936 segments of
sub array. The sub array is considered to
be a unit of phase control and also a
square shape whose edges are 3 meters.
It contains 1320 units of cavity-backed
slot antenna element and DC-RF circuit.
Therefore, there will be about 2.6
million antenna elements in the
spacetenna.
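
The element count quoted above follows directly from the stated geometry. A quick arithmetic check, added here as an illustration:

```python
side_m = 132.0              # spacetenna edge length, metres
subarray_side_m = 3.0       # edge of one phase-control sub array, metres
elements_per_subarray = 1320

subarrays_per_side = round(side_m / subarray_side_m)     # 44
n_subarrays = subarrays_per_side ** 2                    # 44 * 44 = 1936
n_elements = n_subarrays * elements_per_subarray         # about 2.6 million

print(f"sub arrays per side : {subarrays_per_side}")
print(f"total sub arrays    : {n_subarrays}")
print(f"antenna elements    : {n_elements:,} (~{n_elements / 1e6:.1f} million)")
```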





Figure 4 illustrates a block diagram of
the spacetenna. The spacetenna is
composed of pilot signal receiving
antennas followed by detectors finding
out the location of the rectenna on the
earth, power transmission antenna
elements and phase control systems. The
left and right hand sides in Fig.4
correspond to parts of power
transmission and direction detection,
respectively. The antenna elements
receiving the pilot signal have a
polarization perpendicular to that of the
antenna elements used for power
transmission, so as to effectively reduce
interactions between the two sets of elements.
Moreover, the pilot signal frequency and
a frequency for the energy transmission
are different from each other. Using two
kinds of frequency for the power
transmission and the pilot signal
prevents each other from interfering and
makes it possible to find out the accurate
direction of a specified rectenna.
6.SPACE SETTLEMENT :
Space settlement is a unique concept for
colonization beyond the Earth. While
most thinking regarding the expansion of
the human race outward into space has
focused on the colonization of the
surfaces of other planets, the space
settlement concept suggests that
planetary surfaces may not be the best
location for extraterrestrial colonies.
Artificial, closed-ecology habitats in free
orbit would seem to have many
advantages over any planetary home
(Earth included).

FIGURE 6.1
This is a solar-powered mass-driver, an
electromagnetic linear accelerator. It can
be utilized as a reaction engine, which
can use literally anything for fuel (even
ground-up chunks of Space Shuttle
External Tanks). The mass-driver has
been assembled from components lifted
by several shuttle flights, and soon will
be ready to begin hauling cargo for a
small moon base to lunar orbit.
FIGURE 6.2
There, a lunar shuttle soft-lands cargo
for the moon base onto the surface. The
cargo includes small habitats, solar
arrays, mining equipment, and
components for the assembly of another
mass-driver on the surface. This mass-
driver will be used as a catapult to
launch lunar ores to a point in space
where they can be collected.

FIGURE 6.3
A Solar Power Satellite (SPS) with a
thoroughly energized Earth in the
background. One of the first things we
will begin doing once we are using space
resources is constructing a SPS, a vast
solar array which gathers the constant
solar power in orbit and beams energy to
Earth in the form of a safe, low-density
microwave beam.
FIGURE 6.4
On Earth, the beam is intercepted by a
rectenna several miles across, where it is
converted back into electricity. The
electricity is then converted to AC and
fed into the power grid. The goal is to
undersell power generated by fossil fuels
or nuclear energy.
FIGURE 6.5
The rectennas will be huge, but the land
underneath need not go to waste. Since
the array absorbs the microwaves, but
allows sunlight and rainfall through, the
land could be used for farming or
ranching. Or, as in this case, the rectenna
could be built as a vast set of
greenhouses, feeding millions.
CONCLUSION
In the near future the rectenna will
stand as a milestone among
non-conventional energy resources.

SRI VENKATESWARA UNIVERSITY COLLEGE OF
ENGINEERING,
DEPARTMENT OF ELECTRICAL AND ELECTRONICS ENGINEERING,
TIRUPATI-517502.



PAPER PRESENTATION
ON
SCADA WITH GIS

Presented by:
M.BALAJI
&
K.MUNI SWAMY

Address for communication:

M.BALAJI
III-B.Tech,II-Semester,
Roll no:104302
S.V.U.college of engineering,
Tirupati-517502,
Chitoor (D.T), A.P.
E-mail: macha.balaji@gmail.com
Phone: 986671525
K.MUNI SWAMY
III-B.Tech, II-Semester,
Roll no: 104312
S.V.U.college of engineering,
Tirupati-517502,
Chitoor (D.T), A.P.
E-mail:muni_eee@yahoo.com
Phone: 9948721638






SCADA WITH GIS


Abstract
The aim of this paper is to introduce a
new technique to control the pilferage
and power theft using interface of
SCADA with GIS system. The SCADA
system will continuously get the real
time readings of all electrical parameters
at monitored points on feeders. These
parameters include Voltage, Angle,
Power Factor, Active Power, Reactive
Power and Energy. The system shall also
get the status of various switching
devices like circuit breakers, switches
and isolators. It will also get the
transformer parameters like tap position,
etc.

Electronic meters will be installed at HT
consumers. These meters will be
equipped with the interface for
communications with the SCADA
system. SCADA system will be
communicating with the meters using an
industry standard protocol. Meter
readings shall be used to monitor the
load and for detection of attempts to
tamper with the meter. As soon as a
tamper is detected the meter/consumer
shall be tagged on the GIS system. The
information shall be passed on to the
vigilance groups for physical check, to
take further action.























INTRODUCTION:

"The total power generation in the
country was around one lakh MW of
which billing was done only for 55,000
MW and the rest 45,000 MW was going
as pilferage and power theft. Out of
45,000 MW, the annual power theft was
around 30,000 MW causing a financial
loss of Rs 20,000 crore to the nation's
exchequer every year - A report"

The power sector plays a vital role in
the economic development of a country.
The growth of industries, agriculture
and infrastructure depends on the state
of the power sector. In India,
transmission and distribution losses
account for approximately 35-40%,
which is much higher than in any
developed country.

Geographic information system (GIS)
technology can be used for scientific
investigations, resource management,
and development planning. For example,
a GIS might allow emergency planners
to easily calculate emergency response
times in the event of a natural disaster,
or a GIS might be used to find wetlands
that need protection from pollution.

Several State Electricity Boards (SEB), to
identify the areas incurring high losses, have
performed detailed Energy Audit.

As the nature of the loss is both technical
and commercial, it becomes more difficult to
differentiate the loss in between these two
factors. This can only be possible by making
a detailed study of the system. As pilferage
takes place mostly at the LT level hence it
becomes crucial to carry out the study up to
consumer level. The losses in the physical
system like line losses, transformation
losses form the technical losses and
removing the technical losses from total
losses will give us the commercial losses in
the system. Commercial losses come from a
variety of sources, all of which have in
common that energy was delivered but not
paid for. The potential sources of
commercial loss or the theft of utility service
could be a direct connection from a feeder
or wire bypassing the meter, tampering with
meters or meter reader fraud. This paper
emphasizes the role of GIS for identifying
the network areas, which could be facing the
problem of power pilferage.
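
The split between technical and commercial losses described above amounts to a simple energy balance per feeder or distribution transformer. The sketch below illustrates that calculation and flags areas with unusually high commercial losses for vigilance follow-up; the feeder names, energy figures, estimated technical-loss percentages and the 10% flagging threshold are all assumptions made for the example, not data from any utility.

```python
# Assumed monthly energy audit data per feeder (all values in MWh).
feeders = {
    # name: (energy input, energy billed, estimated technical loss %)
    "Feeder-A": (1000.0, 870.0, 7.0),
    "Feeder-B": (800.0,  560.0, 8.0),
    "Feeder-C": (650.0,  600.0, 6.5),
}

print(f"{'Feeder':10s} {'total loss %':>12s} {'technical %':>12s} {'commercial %':>13s}")
for name, (e_in, e_billed, tech_pct) in feeders.items():
    total_loss_pct = 100.0 * (e_in - e_billed) / e_in
    commercial_pct = total_loss_pct - tech_pct   # losses not explained by the network
    flag = "  <- investigate" if commercial_pct > 10.0 else ""
    print(f"{name:10s} {total_loss_pct:12.1f} {tech_pct:12.1f} {commercial_pct:13.1f}{flag}")
```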


1)SCADA System

What is SCADA:

SCADA stands for Supervisory Control
And Data Acquisition. As the name
indicates, it is not a full control system,
but rather focuses on the supervisory
level. As such, it is a purely software
package that is positioned on top of
hardware to which it is interfaced, in
general via Programmable Logic
Controllers (PLCs), or other commercial
hardware modules.


Supervisory Control and Data
Acquisition is a commonly used industry
term for computer-based systems
allowing system operators to obtain real-
time data related to the status of an
electric power system and to monitor
and control elements of an electric
power system over a wide geographic
area.







Overview of SCADA System
Functions.

The SCADA System connects two
distinctly different environments. The
substation, where it measures, monitors,
controls and digitizes; and the
Operations Center, where it collects,
stores, displays and processes substation
data. A communications pathway
connects the two environments.
Interfaces to substation equipment and a
conversions and communications
resource complete the system. The
substation terminus for traditional
SCADA system is the Remote Terminal
Unit (RTU) where the communications
and substation interface interconnect.
Figure 1.1 below illustrates this concept.





SCADA system RTUs collect
measurements of power system
parameters, transport them to an
Operations Center where the SCADA
Master presents them to system
operators. Predominantly, these are real
and reactive power flows (watts and vars),
voltages and currents. But other
measurements like tank levels, pressures
and tap positions are common to
SCADA systems. These belong to the
class of measurements termed analogs.
Almost anything that can be viewed as a
continuous variable over a range fits this
category. Analog data is refreshed
periodically so that the operator can be
assured the data on his screen is relevant.
The refresh rate is often dependent on
the characteristics of the data being
viewed.
SCADA system master stations monitor
the incoming stream of analog variables
and flag values that are outside
prescribed limits with warnings and
alarms to alert the system operator to
potential problems. Data is screened for
bad (i.e., out of reasonability limits)
data as well. SCADA systems also
collect the state of power equipment
such as circuit breakers and switches.
This data is presented to the system
operator, usually on graphical displays,
to give the operator a view of the
connectivity of the power system at any
given moment. Various state change
reporting techniques have been used to
report such changes for the system
operator. These include flagging
momentary changes, counting changes
and time tagging them with varying
degrees of resolution (sometimes as
short as one millisecond).
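
The limit checking and reasonability screening described above reduces, for each analog point, to comparing the scanned value against configured thresholds. The sketch below illustrates that logic; the point names, limits and scan values are assumptions for the example and do not represent any actual SCADA master implementation.

```python
from dataclasses import dataclass

@dataclass
class AnalogPoint:
    name: str
    lo_reasonable: float   # below this the value is treated as bad data
    hi_reasonable: float   # above this the value is treated as bad data
    lo_alarm: float
    hi_alarm: float

    def classify(self, value: float) -> str:
        if not (self.lo_reasonable <= value <= self.hi_reasonable):
            return "BAD DATA"
        if value < self.lo_alarm or value > self.hi_alarm:
            return "ALARM"
        return "NORMAL"

# Assumed points and limits (illustrative only).
points = [
    AnalogPoint("bus_voltage_kV", 0.0, 300.0, 207.0, 242.0),
    AnalogPoint("line_flow_MW", -500.0, 500.0, -180.0, 180.0),
]
scan = {"bus_voltage_kV": 247.3, "line_flow_MW": 612.0}   # one incoming scan

for p in points:
    print(f"{p.name:15s} = {scan[p.name]:7.1f} -> {p.classify(scan[p.name])}")
```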

2) GIS System
What is a GIS?
A GIS is a computer system capable of
capturing, storing, analyzing, and
displaying geographically referenced
information; that is, data identified
according to location. Practitioners also
define a GIS as including the
procedures, operating personnel, and
spatial data that go into the system.
A geographic information system (GIS)
is a computer-based tool for mapping and
analyzing things that exist and events
that happen on Earth. GIS technology
integrates common database operations
such as query and statistical analysis
with the unique visualization and
geographic analysis benefits offered by
maps. These abilities distinguish GIS
from other information systems and
make it valuable to a wide range of
public and private enterprises for
explaining events, predicting outcomes,
and planning strategies.
GIS is a technology that can transform
the way you do business. GIS
technology allows you to see your
business information in a whole new
way, through maps, and discover
relationships you didn't know existed.
A geographic information system (GIS)
uses computers and software to leverage
the fundamental principle of geography:
that location is important in
people's lives. GIS helps a retail
business locate the best site for its next
store and helps agencies track
environmental degradation. It helps
route delivery trucks and manage road
paving. It helps marketers find new
prospects, and it helps farmers increase
production and manage their land more
efficiently. GIS takes the numbers and
words from the rows and columns in
databases and spreadsheets and puts
them on a map. Placing your data on a
map highlights where you have many
customers if you own a store, or multiple
leaks in your water system if you run a
water company. It allows you to view,
understand, question, interpret, and
visualize your data in ways simply not
possible in the rows and columns of a
spreadsheet. And with data on a map,
you can ask more questions. You can ask
where, why, and how, all with the
location information on hand. You can
make better decisions with the
knowledge that geography and spatial
analysis are included.

Integrate data in various formats and
from many sources using GIS

3) Role of GIS in preventing
power pilferage


Geographical Information Systems
(GIS) are an emerging technology being
increasingly used for efficient
management in the power sector.
Several electric power corporations,
particularly in the developed countries,
have adopted GIS for developing
efficient distribution information and
management systems. Application of
GIS in the power sector involves
converting existing maps into digital
format, attaching attribute data, and
keeping them updated. GPS receives signals from
satellites and provides location
information speedily and economically.
This tool is very useful in updating the
database on day-to-day basis. The up-to-
date spatial database is useful for
managing various functions like outage,
load analysis, asset valuation and
eventually in efficient management of
distribution network. Analyses such as
the selection of suitable areas, the
optimum path finding, the profile
analyses, the engineering design of
towers and wires, and the cost estimation
can be done using GIS. GIS can make
the information easily updatable and
accurate and hence can cater to the needs
of maintaining large power
infrastructure. GIS can effectively
manage information on the distribution
of electricity to customers and
information describing the attributes of
each customer such as location and
electricity use. With the use of GIS,
power companies can collect and store a
large amount of data that can be readily
accessed and analyzed. The strength of
GIS lies in integrating data and preparing
it for analysis or modeling; its ability to
tie together data from various sources
makes it an important tool for planning
and decision making.

Application of GIS in Power
Utilities
GIS is at the core of a full gamut of
electrical utility applications: customer
information systems, work management,
distribution management, meter order
processing, load and feeder planning,
and outage management. Electric
companies are already finding GIS very
useful in the management of
distribution. In the power distribution
sector, GIS in combination with AM
(Automated Mapping) and FM
(Facilities Management) is
revolutionizing this sector.
AM/FM/GIS in the power sector is used
for:
- Study, analysis and design of the electrical distribution system
- Applications for designing the electrical supply system for new residential developments
- Process automation, in order to provide customers with high-quality service
- Redesigning work procedures in electric utilities
- Mapping and analysis of electric distribution circuits
- Tightening the leakages, real and procedural, that result in enormous losses in the transmission and distribution chain
- A GIS system integrated with a consumer billing system can be used very effectively in detecting power pilferage
- Pilferage detection can be done at consumer, distribution transformer, feeder or substation level
- Analysis of patterns of individual consumption over GIS can help in identifying the sources of pilferage at subscriber level
- A GIS system integrated with SCADA can be used in detecting power thefts by HT consumers

Role of SCADA interfaced GIS
system in identifying potential
thefts in H.T. distributions

The proposed solution is interface of
SCADA with GIS system. The SCADA
system will continuously get the real
time readings of all electrical parameters
at monitored points on feeders. These
parameters include Voltage, Angle,
Power Factor, Active Power, Reactive
Power and Energy. The system shall also
get the status of various switching
devices like circuit breakers, switches
and isolators. It will also get the
transformer parameters like tap position,
etc.


Electronic meters will be installed at HT
consumers. These meters will be
equipped with the interface for
communications with the SCADA
system. SCADA system will be
communicating with the meters using an
industry standard protocol. Meter
readings shall be used to monitor the
load and for detection of attempts to
tamper with the meter. As soon as a
tamper is detected the meter/consumer
shall be tagged on the GIS system. The
information shall be passed on to the
vigilance groups for physical check, to
take further action. The system can be
graphically illustrated in Figure 4.
In its stride towards "Power for All by
2012", the Ministry of Power has decided
to deploy Geographical Information
Systems (GIS), Global Positioning
System (GPS) and Remote Sensing (RS)
to improve its distribution network,
restoration services as well as to harness
the hydro power potential in the North
Himalayan region and in Northeastern
India. Recent ranking survey of potential
hydro sites conducted by the Central
Electricity Authority (CEA) had
extensively used GIS in its report. Power
Grid has charted an ambitious plan to
add about 60,000 circuit km of
transmission lines by 2012. To facilitate
this, construction of high capacity inter-
regional transmission lines and power
highways, culminating in a national grid
is envisaged. However, conventional
methods of survey have proved to be
inaccurate and Power Grid is looking at
new methods like remote sensing, aerial
techniques and GPS.

Conclusion:

In this paper a GIS solution for
preventing the power pilferages has been
presented. It can be concluded from the
above discussion:
A GIS system integrated with
consumer billing system can be
very effectively used in detecting
the power pilferage.

Pilferage detection can be done
at consumer, distribution
transformer, and feeder or
substation levels.

The accuracy of the result
depends on the accuracy of the
loading pattern considered during
the evaluation of technical losses
and the accuracy of the meter
readings.

Analysis of patterns of individual
consumption over GIS can help
in identifying the sources of
pilferage at subscriber level.

GIS system integrated with
SCADA can be used in detecting
power thefts by HT consumers.
MICRO CONTROLLER BASED
SINGLE PHASE INDUCTION MOTOR
CONTROL

Presented by:
J RAJA PAVAN KUMAR
A.MOHANA RAO
THIRD YEAR

ELECTRICAL AND ELECTRORNICS ENGINEERING
SIR C R REDDY COLLEGE OF ENGINEERING,
ELURU-7, W.G DIST,
A.P., INDIA,

E-mail ids:
jrajapavan@yahoo.co.in
mohanarao_eee@yahoo.com.





ABSTRACT:

This paper describes a new control technique
of the single-phase a.c. induction motor. It is a
low-cost, high-efficiency drive capable of
supplying a single-phase a.c. induction motor
with a PWM modulated sinusoidal voltage.
The circuit operation is controlled by a tiny
MC68HC908QT4 MCU.
The device is aimed at replacing the
commonly used triac phase-angle control
drives. The circuit is capable of supplying a
single-phase a.c. induction motor (or a general
a.c. inductive/resistive load) with a varying a.c.
voltage. As with triac control, the
voltage applied to the load can be varied from
zero to the maximum value. On the other hand, it
uses a pulse width modulation (PWM)
technique and, compared with the phase-angle
control used for triacs, produces much
lower harmonic pollution.
Because the circuit is aimed at low-cost,
low/medium-power applications, it does not
use a conventional converter topology
(expensive) to produce the output voltage
waveform. It directly modulates the mains a.c.
voltage. Compared with the converter, it
requires a lower number of active and passive
power components. Thus, the price of the drive
can be kept at a reasonable level.

In summary, the device takes advantage of both
the low price of the phase angle control and the
low harmonic content and high efficiency that
we can get with standard converter topology.
The drives based on this new control technique
are targeted for use in consumer and industrial
products where the system cost is a
consideration.
1. Background of Single-Phase Drives
Control
Single-phase motors are widely used in home
and industrial appliances. The main advantage
of these motors is their ability to operate from a
single-phase power supply; therefore, they can
be used wherever single-phase power is
available. There are also other reasons for their
popularity: low manufacturing cost, reliability,
and simplicity. However, compared with three-
phase systems, they offer lower efficiency.
1.1 Single-Phase Motor
Because the single-phase motor is supplied
from a single-phase current source, the motor
produces an alternating magnetic field. At zero
speed, the torque produced is zero. To start
such a motor, it is necessary, besides a main
winding, to have an auxiliary winding to help
to generate the phase-shifted magnetic field.
Typical construction of the single-phase motor
is shown in Figure 1-1. The auxiliary winding
is placed in quadrature with the main winding.
The current flowing to the auxiliary winding
has to be phase shifted from the current
flowing through the main winding. There are
several ways to do this.
Usually there is a capacitor connected in series
with the auxiliary winding. Thus, we can
generate magnetic fields of the main and
auxiliary windings shifted by 90°. Such a field
appears to rotate in the same way as the field in
a three-phase motor.

It causes the motor to start rotating. The
capacitor and auxiliary winding may be
disconnected when the motor reaches 75% of
nominal speed. Usually a centrifugal switch is
used. Then the motor continues running with
the main winding. This configuration produces
high starting torque and is ideal for
applications such as compressors, refrigerators,
etc. If high starting torque is not required,
the starting capacitor may also be left
connected in normal operation; then no
centrifugal switch is required. The capacitor
and the auxiliary winding have to be designed
for continuous operation. The starting torque of
such a motor is low, making the motor suitable
for use with low-inertia loads such as fans and
blowers.
The simplest method to start a single-phase
motor is to use a shaded-pole motor. This
construction does not require a starting winding
and capacitor. The stator poles have a short
copper ring placed around a small portion of
each pole. The magnetic field of the shaded
part of the pole is delayed in relation to that
from the non-shaded part. The magnetic field
appears to be rotating and the motor can start
rotating. The shaded-pole motors are very
simple and inexpensive when produced in high
volumes. Against these advantages, they also
have a number of disadvantages: very low
efficiency (below 20%), low starting torque
and high slip. They therefore have a limited
range of use, e.g. in small home appliances
such as fans or blowers.
1.2 Varying Speed of Single-Phase Motors
In many applications it may be desirable to
change the speed of the motor e.g. if we want
to control the air-flow of a ventilator. Then it is
useful to use some techniques for varying a.c.
induction motor speed. The speed of the single-
phase a.c. induction motor can be adjusted
either by applying the proper supply voltage
amplitude and frequency (Called volt-per-hertz
control) or by the changing of supply voltage
amplitude with constant frequency (slip
control).



1.3 Conduction Angle Control
Because of its very low cost and simplicity, the
most popular technique of supplying single-
phase a.c. motors is the conduction angle
control. To carry out this control, a triac device
is used. The conduction angle is adjusted by
changing the switching instant of the triac
device. In this way, the conduction angle can
be varied from 180° to 0°. The R.M.S. voltage
value is a function of the conduction angle.
Basic principles of this technique are shown in
Figure 1-2. Using the conduction angle control,
we can adjust only the output voltage
amplitude and not the frequency, therefore only
the slip control can be used.
This method represents a cost-effective
solution. It is the most used technique for low-
cost appliances, widely used in modern
consumer products. For the low cost, we pay a
high price with this method. It produces very
high harmonic content in both motor and
supply current waveforms. The effects of these
are low efficiency of the drive and transmission
lines, acoustic noise, and electromagnetic
interference.For these reasons, the conduction
angle control is not preferred for the latest
designs. The high harmonic pollution does not
comply with strict EMI/EMC regulations.
1.4 Converter Topology
The three-phase drives are usually based on
complex converter topology. The converter
topology can also be used for supplying single-
phase a.c. motors. The block diagram of such a
drive is shown in Figure 1-3.








In
a conventional converter, the a.c. line voltage is
converted into D.C. voltage using the diode
bridge rectifier at the input. The D.C. voltage is
then filtered by the filter capacitor in the D.C.
link. Finally, the D.C. voltage is converted
back to an a.c. voltage of the desired amplitude
and frequency by the inverter. The inverter is
usually implemented as a single-phase bridge.
PWM modulation is used to generate an output
voltage waveform. The advantage of this
topology is the ability to generate both the
amplitude and frequency of the output voltage
independently. The converter systems have one
big disadvantage: high cost. They require
numerous active and passive power
components. Another problem is the diode
bridge rectifier at the input: the current drawn
from the a.c. line by this system is non-
sinusoidal, with high and narrow current
spikes. To eliminate this, power factor
correction circuits need to be implemented,
which increases the overall system cost.
Converter systems are therefore not suitable for
low-cost, high-volume products; they fit best
in high-power, high-efficiency drives.
1.5 Circuits for Single-Phase A.C. Induction
Motor Control
The new circuits are designed to provide a cost-effective solution for driving a.c. induction motors with a variable R.M.S. voltage. The circuits modulate the a.c. line voltage directly: the input sinusoidal waveform is chopped by a bi-directional switch and applied to the load. The switching frequency is fixed at 16 kHz. The R.M.S. value of the output voltage is controlled by the duty cycle of the PWM modulation signal, while the frequency of the first harmonic of the output waveform remains the same as the line frequency. The circuits are therefore suitable for applications where a voltage of variable amplitude and constant frequency is required (e.g. slip control of a.c. induction motors).
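Under the simplifying assumption of an ideal high-frequency chopper (switching losses and the motor's own filtering ignored), the output r.m.s. voltage scales with the square root of the duty cycle, while the line-frequency fundamental scales linearly with it. The short sketch below, not part of the original design, tabulates this relation:

#include <math.h>
#include <stdio.h>

/* Sketch under the assumption of an ideal high-frequency chopper: with duty
 * cycle D the output r.m.s. voltage is Vrms*sqrt(D), and the line-frequency
 * fundamental is D*Vrms. Switching losses and load filtering are ignored. */
int main(void)
{
    const double vrms_line = 230.0;   /* example line voltage */

    for (double d = 0.0; d <= 1.001; d += 0.25)
        printf("duty %.2f: output %6.1f V r.m.s., fundamental %6.1f V r.m.s.\n",
               d, vrms_line * sqrt(d), vrms_line * d);
    return 0;
}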
Two practical circuits for supplying single-phase a.c. induction motors have been described. Both circuits are based on the same topology, which is shown in Figure 1-4.
The topology uses two bidirectional switches.
A bidirectional switch is a device capable of
switching on and off both polarities (i.e.
positive and negative) of an a.c. voltage and
current. The a.c. load is connected to the a.c.
line by means of the bidirectional switch S1.
According to the input control signal, switch
S1 connects and disconnects the load to/from
the a.c. line. Thus, the input sinusoidal
waveform is chopped and the R.M.S. value of
the voltage applied to the load is controlled. An
example of the chopped line voltage is shown
in the diagram.
In instances when the load is disconnected
from the a.c. line, the current flowing through
the load needs to be freewheeled. Therefore the
other bidirectional switch S2 is connected
across the load. It is switched on when S1 is
switched off, and vice versa. That means that
switches S1 and S2 have to be switched
complementarily.


The bidirectional switches S1 and S2 are both
implemented as a rectifying bridge. The input
terminals of the rectifying bridge are connected
to the load. The output terminals (rectified
side) have a power transistor (IGBT, MOSFET
or bipolar) connected across them. When the
power transistor is off, current cannot flow
through the rectifying bridge and the
bidirectional switch is in an off-state. When the
power transistor is on, output terminals are
short-circuited, and current can flow through
the rectifying bridge. The bidirectional switch
is in an on state. The switching state of the
bidirectional switch is determined by the power
transistor. The function of the bidirectional
switch is shown in Figure 1-5.
The power transistor is controlled by the electric signal applied across its gate (base) and source (emitter) electrodes. In the circuit topology shown in Figure 1-4, the source electrodes of the power transistors in switches S1 and S2 are not referenced to the same potential. Therefore, it is not possible to obtain the control signal for one switch simply by inverting the signal of the other. This paper presents two different approaches to solve this task and properly control switches S1 and S2.















Fig-2.1: Power Supply + MCU

3. Implementation
The application control is implemented on the MC68HC908QT4 microcontroller, a member of the small and low-cost Nitron family.
To minimize external components, the MCU uses an internal oscillator. The generated internal clock frequency is 12.8 MHz, resulting in a bus speed (internal clock divided by 4) of 3.2 MHz. The untrimmed internal clock tolerance is less than 25%; the application uses the OSCTRIM register to trim the clock tolerance to less than 5%.
The MCU pin functional assignment is as follows:
PTA0 - PWM signal generation
PTA1 - Green LED control, Bootloader TxD pin
PTA2 - Bootloader RxD pin
PTA3 - Start/Stop switch
PTA4 - Speed command from potentiometer P1
The software dataflow diagram is shown in
Figure 3-1.








Fig-3.1: Flow chart, main routine and application state machine
3.1 Initialization
After reset, the MCU is initialized and the peripherals are configured. The MCU pins are configured according to their assigned functions. The timer interface module is configured to generate buffered PWM; the prescaler register and timer modulo register are set to generate a fixed output PWM frequency of 16 kHz. The ADC is set to run in continuous conversion mode. It periodically scans the speed command, set by an analog voltage (in the range 0 to 5 V) connected to channel 2. Finally, application variables are initialized and interrupts are enabled.
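As a rough cross-check of the timing figures above (and only that; the register-level code for the MC68HC908QT4 timer module is not reproduced here), the timer modulo for a 16 kHz PWM from a 3.2 MHz bus clock works out to 200 counts:

#include <stdio.h>

/* Sketch of the PWM timing arithmetic only; the values are computed from the
 * clock figures quoted in the text, not taken from the original firmware. */
#define BUS_CLOCK_HZ   3200000UL   /* 12.8 MHz internal clock / 4 */
#define PWM_FREQ_HZ    16000UL     /* fixed switching frequency   */

int main(void)
{
    unsigned long modulo = BUS_CLOCK_HZ / PWM_FREQ_HZ;      /* timer modulo = 200 */
    unsigned duty_percent = 35;                              /* example speed command */
    unsigned long compare = modulo * duty_percent / 100;     /* channel compare value */

    printf("timer modulo  = %lu counts\n", modulo);
    printf("compare value = %lu counts for %u%% duty\n", compare, duty_percent);
    return 0;
}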
3.2 Main Routine
After initialization, the application enters an endless background loop, which is interrupted by interrupt service routines. In the background, a so-called software timer is executed. The SWCounter variable is the time base of the software timer; it is incremented in the interrupt service routine called on timer module overflow. The period at which the software timer executes background tasks is defined by the EXEC_CONST constant, set to approximately 10 ms. On every compare of the software timer, the following actions are performed:
the ADC data register is read for the actual speed command
the application state machine is executed
the new PWM duty-cycle is set
The application state machine can be seen in Figure 3-1. The application has two states, APP_STOP and APP_RUN. The transition from one state to the other is made according to bits in the AppControl control byte. After reset, the application enters the APP_STOP state regardless of the Start/Stop switch position. To start the motor, the switch must first be toggled to Stop and then back to Start. This safety feature prevents the motor from starting unexpectedly when the application is reset.
The maximum rate of PWM duty-cycle change (increment/decrement) is determined by the speed ramp function. The rate of change is defined by the RAMP_INCREMENT constant, as illustrated in the sketch below.
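A minimal sketch of this background scheduling and ramp logic is given below. The structure follows the description above, but the variable handling and the exact values of EXEC_CONST and RAMP_INCREMENT are assumptions, not the original firmware:

#include <stdint.h>

#define EXEC_CONST      160u   /* ~10 ms worth of 16 kHz timer overflows (assumed value) */
#define RAMP_INCREMENT  2u     /* max duty-cycle change per 10 ms tick (assumed value)   */

volatile uint16_t SWCounter;       /* incremented in the timer overflow ISR */
volatile uint8_t  speed_command;   /* latest ADC reading of the speed pot   */
static   uint8_t  duty;            /* duty cycle actually applied           */

void background_loop(void)
{
    uint16_t next = EXEC_CONST;
    for (;;) {
        if ((int16_t)(SWCounter - next) >= 0) {      /* software-timer compare */
            next += EXEC_CONST;
            uint8_t target = speed_command;          /* read speed command      */
            /* speed ramp: limit the rate of duty-cycle change */
            if (duty + RAMP_INCREMENT < target)      duty += RAMP_INCREMENT;
            else if (duty > target + RAMP_INCREMENT) duty -= RAMP_INCREMENT;
            else                                     duty  = target;
            /* the state machine and the PWM update would be called here */
        }
    }
}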
3.3 Interrupts
The timer overflow interrupt calls an interrupt
service routine. In the ISR, the SWCounter
variable is incremented to provide a time-base
for the software counter.
3.4 PWM Generation
The required speed sets the duty cycle of the PWM. The PWM can use either positive or negative logic. The control circuit with feedback signals uses negative logic: if the PWM output is set to logic 1, power switch S1 is off; if set to logic 0, power switch S1 is on. The freewheeling circuit with direct MCU control uses positive logic (1 = switch S1 on, 0 = switch S1 off). The output PWM logic is defined by the PWM_POLARITY constant in main.h.
If the motor is stopped, the power switch is fully closed; if the motor is running at full speed, switch S1 is fully open. According to the output logic, the PWM duty-cycle is then either 100% or 0%.
In the case of a 0% or 100% duty-cycle, the PWM pin is held constantly at a high or low level. For example, in the case of a freewheeling control circuit with feedback signals: if the motor is stopped, the output pin is held at a high level (logic 1); if the motor is running at full speed, the output pin is held at a low level (logic 0). In the case of a freewheeling circuit with direct MCU control, the pin output is set the other way round.
The transition to and from the fully-closed and fully-open modes is handled in the background loop. The duty-cycle limit process limits the minimum and maximum duty-cycles generated by the PWM. The minimum generated PWM duty-cycle is set by the DEADTIME constant. The maximum duty-cycle is set by the DUTYMAXL and DUTYMAXH constants: DUTYMAXL sets the threshold for the transition from fully-open to PWM modulation, and DUTYMAXH sets the threshold for the transition from PWM modulation to fully-open.
The timer module is configured to run in
buffered PWM mode. The timer module
channel registers (0 or 1) are written
alternately.
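The duty-cycle limiting described above can be sketched as a small hysteresis function. The constant values below are placeholders rather than the ones defined in the application's main.h:

#include <stdint.h>

/* Hedged sketch of the duty-cycle limiting described in the text.
 * DEADTIME is the minimum generated duty-cycle; DUTYMAXL/DUTYMAXH form a
 * hysteresis band for leaving and entering the fully-open mode.
 * All three values are illustrative assumptions. */
#define DEADTIME    5u     /* % - minimum PWM duty-cycle                 */
#define DUTYMAXL   92u     /* % - fully-open -> PWM modulation threshold */
#define DUTYMAXH   97u     /* % - PWM modulation -> fully-open threshold */

typedef enum { MODE_PWM, MODE_FULLY_OPEN } pwm_mode_t;

uint8_t limit_duty(uint8_t requested, pwm_mode_t *mode)
{
    if (*mode == MODE_PWM && requested >= DUTYMAXH)
        *mode = MODE_FULLY_OPEN;                 /* stop modulating, S1 stays on */
    else if (*mode == MODE_FULLY_OPEN && requested <= DUTYMAXL)
        *mode = MODE_PWM;                        /* resume PWM modulation        */

    if (*mode == MODE_FULLY_OPEN)
        return 100u;
    if (requested < DEADTIME)
        return DEADTIME;                         /* enforce minimum duty-cycle   */
    return requested;
}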
4. Conclusion
New EMC standards bring strict limits for
higher harmonic pollution. A conduction angle
control technique, which was popular for a
long time in industrial and home appliances,
does not comply with these new standards. The
manufacturers will have to look for new
solutions to replace triacs in the near future,
and keep the manufacturing costs at a
reasonable level.
The presented circuits are intended to be such a solution. As with triac control, they are able to supply a single-phase a.c. induction motor of any kind (or a general a.c. inductive/resistive load) with a varying a.c. voltage. The voltage applied to the load can be varied from zero to the maximum value. The cost of a drive based on this new technique is kept low; it is therefore suitable mainly for low-cost designs where triacs and conduction angle control were used in the past.
The proposed circuits overcome the drawbacks of conduction angle control: harmonic pollution, low efficiency, and acoustic noise caused by non-sinusoidal motor current. The PWM modulation considerably
reduces the harmonic content of the line
current. Only high harmonics, which
correspond to the switching frequency at 16
kHz, are present and can be filtered using a
simple EMC filter at the input. Current of the
load remains sinusoidal throughout the range of
the output voltage.
The main advantages of the proposed solution are:
Line current harmonic content reduction: the harmonic content of the line current is significantly reduced, mainly compared with the most common triac conduction angle control. PWM modulation is used, and the higher harmonic content injected into the a.c. line is low.
Low cost: the circuit offers a significant reduction in system cost compared with other commonly used techniques based on PWM modulation. The proposed solution takes advantage of high-end converter topology while keeping the overall cost at a reasonable level.
No limitation of cos φ: the circuit is not limited by the cos φ (power factor) of the supplied loads. The feedback signals that control the load current freewheeling are designed to work under all conditions on the a.c. load. The proper signal is generated for all combinations of a.c. load voltage and current polarities. The proposed circuit is capable of driving all types of common a.c. loads (inductive or resistive).
Minimal number of power components: the proposed circuit requires a low number of power components.

























SOLAR POWER SATELLITE










K.Sushmitha                                P.Prathibha Sravanthi
II year EEE                                II year EEE
KSRM COLLEGE OF ENGG                       KSRM COLLEGE OF ENGG
Id no: 305376                              Id no: 305348
Email: sushmitha.summi@gmail.com           Email: prathibha.sravanthi@gmail.com

























ABSTRACT
Energy development is the ongoing effort to provide abundant and accessible energy through knowledge, skills and constructions. For years humanity has dreamed of a clean, inexhaustible energy source. This dream has led many people to do what, in retrospect, seems obvious: look upward toward nature's "fusion reactor", the sun. The sun powers the biosphere, which is to say that the energy used by almost all plants and animals comes from the sun. So why not use solar energy to power industry, transportation, and the home as well? Promoted as early as 1968 by Peter Glaser, then a NASA scientist, solar power satellites can be built to convert direct solar radiation, received in the full, unobstructed intensity possible in space, to direct current (DC) electrical power. Such collectors are known as solar power satellites (SPS). The solar energy collected by an SPS would be converted into electricity, then into microwaves. The microwaves would be beamed to the Earth's surface, where they would be received and converted back into electricity by a large array of devices known as a rectifying antenna, or rectenna.
Introduction
Can you ever imagine life without lights, fans, cars, computers and television, or a life of fetching water from the well and river? This is what life would have been like had man not discovered the uses of energy, both renewable and non-renewable resources. Renewable resources are of different types, e.g. solar energy, wind energy, tidal energy, geothermal energy, etc. Now let us focus on solar energy, as it is one of the most abundant forms of energy available.
Power
"For a successful technology, reality must take precedence over public relations, for nature cannot be fooled." - Richard Feynman
It has already been said that the rationale for going into space, apart from the fact that the human race must extend its limits and explore and then conquer space, has to do with retrieving energy, mainly the Sun's energy. About 80% of the total energy demanded by our society is supplied from fossil fuels, and 90% of the CO2 which is the major cause of the greenhouse effect comes from combustion. It is now widely accepted that the only way to reduce the environmental risks while sustaining economic growth is to develop a large-scale alternative energy system which is ecologically benign.
A scientific venture must be pursued when it follows a certain logic and the solution is correct, even if the technology for proper utilization is not currently available. Such is this case. Although the human race would perhaps not be able in the very immediate future to exploit the untapped potential of solar energy, it is certainly a direction that must be followed. Exclusive dependence on fossil fuels will inevitably lead to energy shortages. (see Introduction)
It must be remembered that this scheme was one of the main determinants in choosing the location of the space colony. The libration points along the Earth's path were chosen primarily for their constant exposure to sunshine.
Solar Energy here on Earth
Why should we go into space to get solar energy and not profit directly from it here on Earth? The answer is twofold.
Solar Power Satellites
A possible scheme for producing power on a large scale contemplates placing giant solar modules alongside the colony, where energy generated from sunlight would be converted to microwaves and beamed to antennas on Earth for reconversion to electric power. On the ground, the microwave power is rectified and converted to commercial electric power.
To produce as much power as five large nuclear power plants (1 billion watts each), several square kilometres of solar collectors, weighing more than 5 million kg, would have to be assembled in the settlement. An earth-based antenna 5 miles in diameter would be required for reception. These vast assemblies are often referred to as Solar Power Satellites (SPS). The concept of the SPS is revolutionary, with a high potential to solve global environmental problems: it uses the limitless solar energy, it utilizes the space outside of the Earth's ecological system, and it has no by-product waste. Even though one of its panels could never be deployed, Skylab effectively demonstrated the use of solar energy in space.
A large-scale receiving antenna, the rectenna, is necessary to collect the microwave power from space.
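A rough order-of-magnitude check of these figures, assuming the ~1.37 kW/m2 solar constant in orbit together with the 15% cell efficiency and 0.22 kg/m2 unit weight quoted later in this paper:

#include <stdio.h>

/* Rough cross-check of the collector size quoted above; purely an
 * order-of-magnitude sketch, the input figures are assumptions or values
 * taken from elsewhere in this paper. */
int main(void)
{
    const double target_w   = 5.0e9;    /* five 1-GW plants               */
    const double flux_w_m2  = 1370.0;   /* solar constant in orbit        */
    const double efficiency = 0.15;     /* a-Si cell efficiency           */
    const double kg_per_m2  = 0.22;     /* collector unit weight          */

    double area_m2 = target_w / (flux_w_m2 * efficiency);
    printf("collector area ~ %.1f km2\n", area_m2 / 1e6);        /* ~24 km2   */
    printf("collector mass ~ %.1f million kg\n",
           area_m2 * kg_per_m2 / 1e6);                           /* ~5.4e6 kg */
    return 0;
}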
The Atmosphere:
The benign atmosphere protects us from the intensity of the sun's rays, which are filtered by our gaseous cover. The same protective effect which shields us and allows life on Earth also prevents us from fully receiving the Sun's energy. It is estimated that, on average, between 0.1 and 0.2 kW/m2 of solar energy can be received from the Sun on the Earth's surface. In near-Earth space the quantity of energy that can be collected is approximately ten times as much, that is, around 1 to 2 kW/m2 on average. This first reason is obviously decisive.
The Earth's rotation: But even if extra-sensitive solar panels could be engineered, there is another problematic factor that complicates full utilization of the sun's energy. The rotation of the Earth, as we very well know, gives rise to days and nights, which means that for about 12 hours a day, on average, no sunlight hits a given point on the surface of our planet. Because of this, solar energy devices have to store heat for the night period, and great pains are taken to ensure that minimum heat is lost. None of these problems is met in space, where sunshine is constant and of far greater intensity.
Generating electricity
Apart from using the sun's energy to
supply the Earth, the colonists would
benefit from the abundance of energy
for their own home processes.
Solar energy can be directly converted
into electricity by means of
photoelectric cells. These cells
produce an electrical voltage as long
as light shines on them .
The photoelectric effect consists in
the formation and liberation of
electrically charged particles in
matter when it is irradiated by light or
other electromagnetic radiation. The
term photoelectric effect designates
several types of related interactions.
In the external photoelectric effect,
electrons are liberated from the
surface of a metallic conductor by
absorbing energy from light shining
on the metal's surface. The effect is
applied in the photoelectric cell, in
which the electrons liberated from
one pole of the cell, the
photocathode, migrate to the other
pole, the anode, under the influence
of an electric field.
Solar power satellite concept
The sun powers the biosphere,
which is to say that the energy used
by almost all plants and animals
comes from the sun. So why not use
solar energy to power industry,
transportation, and the home as well?
Well, a principal difficulty with solar
power is that the sun doesn't always
shine on a particular location: half the
time the earth blocks the sun, and for
much of the remaining time clouds
and fog do. But what if the solar
energy were collected by a set of
satellites above the Earth's atmosphere?
Then we might obtain solar power for
24 hours every day of the year. This is
the idea behind solar-power satellites.
A satellite with solar panels to
convert light energy into electricity can
be put into orbit. Indeed, most satellites
in orbit today are powered by solar
panels. But how can we get the energy
from the satellite back to earth? Clearly
it would be impossible to use the
electric lines we use for long-distance
power transmission on earth. This is
where microwaves come in. The idea is
that a satellite be equipped with a
microwave generator, so that the
electrical energy from the solar panels
can be converted into a microwave
beam. Then the microwave beam can
be directed to antennas on the surface
of the earth, which would convert the
microwaves back to electrical energy.
The energy could then either be used at
the site of the antenna or injected into
the electric-power network.
It was during the late 1960s that the engineer Peter Glaser first had the notion of solar power satellites. The principle of transmitting power by microwaves had already been demonstrated, though not put into practice. (Microwaves in practical devices, such as radar systems and long-distance telephone relays, were used to convey information.) To convey information, the received signal need be no stronger than about one nanowatt (one billionth of a watt). Glaser's idea was to put the solar-power satellites in geosynchronous orbits, so that each would hover over a single location on the earth. This meant, however, that the satellites had to be very high (36,000 kilometers, or about 22,000 miles), and this in turn meant that the antenna on the satellite and the receiving antenna on the ground had to be extremely large (a kilometer or more in diameter). The idea did not seem practical, and after some initial funding by the U.S. Department of Energy and NASA there was little interest in pursuing the technology.
Today, however, the situation has changed because of the very large number of communications satellites in low orbits. It might be possible to make these satellites dual-purpose: solar-energy collectors as well as communications devices. Because of the much lower orbits, the antennas on the satellites and on the ground need not be nearly so large. A drawback, however, is that satellites in low-earth orbit circle the earth rapidly (about every 90 minutes) and therefore do not provide a connection for a very
long time. There are also other
concerns. One is that the transmission
down to the ground might be
interrupted by clouds and weather.
Another is the safety of the people and
animals near the receiving antennas
who might be exposed to the
microwave radiation. Today, the
viability of solar-power satellites as a
long-term solution to our energy needs
is being investigated by government
agencies and individual companies in
many countries.
Current Solar Power Satellite Designs
Solar satellites of various shapes and sizes have been designed by NASA, aerospace firms and independent engineers since the 1960s.
They range in size from a hundred
meters to more than five kilometers in
diameter. Their basic components are
(1.) solar cells to convert sunlight into
electricity, (2.) a framework to hold the
cells and their support equipment, (3.)
devices which will convert the
electricity into radio waves or laser
beams able to safely send the power
down to Earth, and (4.) receiving
antennas on Earth to convert the
beams back into electricity and feed it
into standard power grids.
One solar satellite variation generates electricity from an orbiting, high-tech boiler and turbine system. Huge reflectors concentrate sunlight on the boiler, and lasers or radio waves would transmit the energy to receiving antennas on Earth.
Solar satellites will orbit 22,300 miles above the Earth, in the same orbit used by today's communications satellites. At that height it takes 24 hours for an object to circle the planet, so from the Earth's surface the com satellites (or solar satellites) appear to stay directly overhead all day and night. They never pass into the Earth's shadow. That's important to solar-powered communications satellites, and even more important to solar satellites. This orbit will allow solar satellites to send their electricity down to a specific spot on Earth 24 hours a day for decades at a time.
Most of the components for these designs already exist, having been in production on other commercial projects for decades. Also, plants and animals have been grown under the weak beams these solar satellites will produce, with no physical damage. Environmental protection is not an issue, as we will explain later. The reason solar satellites haven't been deployed isn't a technology issue; it's economics.
POWER GENERATION AND
POWER LINE
Solar Cell: The following baseline data used for the solar cell unit is based on the current performance of ground-use a-Si solar cells and their possible evolution in the near future. Further details of the solar cells under test will be presented.
Conversion Efficiency: 15 %
Unit Weight: 0.22 kg/m2
Specific Power: 950 W/kg
Thickness: 0.2 mm
Array Module: A subarray is composed of 12 solar cell units. The array module, composed of 110 subarrays, is a mechanical element for assembly. Each array module generates 180 A at 1 kV. The weight of each array module is 270 kg. Forty-five array modules are assembled in each wing: northeast, southeast, northwest, and southwest.
Power Collection and Distribution:
The Wing Summing Bus collects the
electric power from the array modules.
Each bus line has hot and return bus
cables. The bus lines are
insulated copper plates 1 mm thick.
They get wider as they approach the
center of the SPS2000 satellite to
keep the joule loss per surface area
constant. The Wing Summing Bus
Lines are connected to the Central
Bus Lines (322), which are interfaced
with the spacetenna system. The
Central Bus Lines are insulated
copper plates 0.7 mm thick by 100
mm wide. The Bus Lines are
mechanically attached to the truss
pipes using insulated adapters. The
power loss in the bus lines is 7 % in
total. The total weight of the power
lines is approximately 11,000 kg.
Power Transmission System
Power transmission from the satellite to a rectenna is made by a 2.45 GHz microwave beam emitted from the spacetenna, the antenna onboard the satellite, which is provided with retrodirective beam control capability. Using a principle similar to that of the U.S. Reference System, the electrical and mechanical design of this system is made simpler by employing a square shape and a single power level. The detailed design of the spacetenna will be shown. This makes the microwave beam broad, and results in relatively inefficient power transmission and an increase in microwave exposure outside the rectennas. However, in this case the microwave power level is much lower than in the case of the Reference System, and well below international safety standards. The beaming angle, as large as 60 degrees in this case, makes this requirement more important than in the case of the Reference System.
Spacetenna Design: Antenna characteristics are shown in Table 2.
Table 2: Spacetenna Characteristics
Electrical Characteristics
Frequency: 2.45 GHz
Beam control: Retrodirective
Beam scanning angle: ±30 degrees (east-west), ±16.7 degrees (north-south)
Power distribution: constant
Power density: 574 W/m2
Max. power density on ground: 0.9 mW/cm2
Input power to spacetenna: 16 MW
Transmitting power: 10 MW
Mechanical Characteristics
Shape and dimension: 132 m x 132 m square
Mass: 134.4 ton
Number of array modules: 88
Number of subarrays: 1936
Number of antenna elements: 2,547,776
Number of pilot receivers: 7,744
Rectenna and Electricity Supply
The rectenna is an antenna comprising a mesh of dipoles and diodes that absorbs microwave energy from a transmitter and converts it into electric power. Microwaves are received with about 85% efficiency, over an area around 5 km (3.1 miles) across.

The rectennas will be huge, but the land underneath need not go to waste. Since the array absorbs the microwaves but allows sunlight and rainfall through, the land could be used for farming or ranching. Or, as in this case, the rectenna could be built as a vast set of greenhouses, feeding millions.
Rectenna Technology: For SPS2000, two basic rectenna designs have been considered to date: the high-efficiency "wire mesh reflector", supported on a rigid frame above the ground, and the low-cost "magic carpet", which could be pegged to the ground. Power collection, conditioning and energy storage will be provided according to customers' requirements.
Rectenna system:
SPS2000 rectenna systems may be
developed for different purposes,
such as a small-scale, low-cost
system; a full-size maximum-output
system; a system intended to be
developed later into a commercial
system. At least one SPS2000
rectenna site will be used as an SPS
operation research center. Rectennas
may deliver power into an existing
grid, or operate independently.
Rectenna site conditions: To deliver power for the maximum length of time, rectennas will be at least 1200 km apart. Rectenna construction and operation will have environmental and economic impacts, which will need to be analyzed for each site.
Magic carpet: material pegged to the ground.
5,000 MW Receiving Station (Rectenna): this station is about a mile and a half long.
Launch costs
Without a doubt, the biggest problem
for the SPS concept is the currently
immense cost of all space launches.
Current rates on the Space Shuttle run
between $3,500 and $5,000 per pound
($8,000/kg and $11,000/kg), depending
on whose numbers are used. In either
case the concept of building a structure
some kilometers on a side is clearly out
of the question. Development of a
vehicle that can launch 100-ton loads at
less than $400/kg is likely to be
necessary.
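To put these launch rates in perspective, the sketch below multiplies an example 5-million-kg collector mass (the figure mentioned earlier in this paper) by the quoted dollars-per-kilogram rates; it is only illustrative arithmetic:

#include <stdio.h>

/* Illustrative launch-cost arithmetic using the per-kilogram rates quoted
 * above; the 5-million-kg payload is the collector mass mentioned earlier
 * in the paper and serves only as an example. */
int main(void)
{
    const double mass_kg = 5.0e6;
    const double rates[] = { 11000.0, 8000.0, 400.0 };  /* $/kg: Shuttle high, low, target */

    for (int i = 0; i < 3; i++)
        printf("at $%6.0f/kg: launch cost ~ $%.1f billion\n",
               rates[i], mass_kg * rates[i] / 1e9);
    return 0;
}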
Gerard O'Neill noted this problem in
the early 1970s, and came up with the
idea of building the SPS's in orbit with
materials from the Moon. The costs of
launch from the Moon are about 100
times lower than from Earth, due to the
lower gravity. However this concept
only works if the number of satellites
to be built is on the order of several
hundred, otherwise the cost of setting
up the production lines in space and
mining facilities on the Moon are just
as huge as launching from Earth in the
first place. However it appears that
O'Neill was more interested in
coming up with a justification for his
space habitat designs than any
particular interest in the SPS concept
on its own.
More recently the SPS concept has
been suggested as a use for a space
elevator. The elevator would make
construction of an SPS considerably
less expensive,
possibly making them competitive
with conventional sources. However
it appears unlikely that even recent
advances in materials science,
namely carbon nanotubes, can reduce
the price of construction of the
elevator enough in the short term.

Solar Satellite Power Costs
Earth-built components for a single solar satellite will weigh from several thousand tons to several hundred thousand tons, depending on the design and power output needed. The largest versions could supply power to an entire city, state or province, while the smaller versions could supply individual factories with heavy electrical needs, like aluminum smelters. Communications satellites weigh from a few hundred pounds to over ten tons. Launching them on today's unmanned rockets costs from $3,000 to $5,000 per pound (of satellite). Manned launches cost ten times as much. Several start-up launch companies hope to drop the unmanned launch cost to $1,000 per pound during this decade, but that's not nearly low enough.
The cost of electricity in the U.S. varies from region to region, depending on how it's produced. Hydropower from dams is commonly the cheapest and nuclear is often the most expensive. The cost at the point of generation ranges from 4 cents to about 10 cents per kilowatt-hour, which includes a profit of less than one cent per kilowatt-hour. A few more cents are added to cover various taxes and the cost of transporting it over power lines to where it's needed.

The cost of fuel for all other
Earth-based power plants fluctuates
widely over the 30-40 year life of the
plant. It can often exceed
construction costs, and the suspected
environmental damage from carbon-
based fuels is well known. Nuclear
fuel causes less direct damage, but
has higher environmental risks.
Disposal costs of depleted nuclear
fuel and the costs of tearing down old
nuclear plants are extremely high.
An average U.S. home or apartment
uses about 1,000-2,000 kilowatt
hours a month, and a city of 250,000
with factories, stores, homes and
streetlights might need as much as a
billion kilowatt hours a month.
Proposed solar satellites will generate
from a few million to a few billion
kilowatts each, depending on their
size.
Solar satellites will generate about one kilowatt hour of electricity for each kilogram (2.2 pounds) of the satellite's weight. A lot of this weight will be low-cost frames, but a lot will also be higher-cost solar cells, electronics and guidance systems. If solar satellite components cost an average of $100 per pound to manufacture and (optimistically) $1,000 per pound to carry to orbit, they'd have to sell their power for 30-50 cents per kilowatt-hour to pay off these costs in 30 years.

That doesn't include the cost of launching the assembly and maintenance crews into space at much higher rates, and launching and operating the living quarters for these crews. Even if solar satellite assembly robots were used, you'd need people in orbit to maintain, repair and refuel the robots. The launch costs and maintenance costs for these crews could add another $200 per pound to solar satellite costs over a 30-year period.

Several innovative designs have been proposed which unfurl sheets of solar cells like umbrellas after they're launched, then allow them to automatically connect themselves piece by piece into huge structures in orbit. These designs will reduce - but certainly not eliminate - the solar satellite manpower needs. Some 20% of communications satellites fail in orbit because of electrical problems, fuel shortages or because their solar panels fail to open as planned, even though this industry has 40 years of experience behind it.

The Space Island Space Hardware section below will explain how we'll make it possible for solar satellites to be launched, assembled and operated cheaply enough to profitably sell their power for ten cents per kilowatt-hour.


The shape of the satellite looks like a saddle-back roof. The roof is formed by solar panels, and the spacetenna is built on the bottom plane to transmit microwaves to the ground. If the lifetime of an SPS is 20 years and it delivers 5 gigawatts to the grid, that corresponds to 5,000,000,000 / 1000 = 5,000,000 kilowatt-hours every hour, which multiplied by $0.05 per kWh gives $250,000 revenue per hour. $250,000 x 24 hours x 365 days x 20 years = $43,800,000,000.
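The revenue arithmetic above can be reproduced directly; the snippet below assumes continuous operation at the quoted 5 GW output and $0.05 per kWh:

#include <stdio.h>

/* Reproduces the revenue figure quoted above: a 5 GW SPS sold at $0.05/kWh
 * over a 20-year life, assuming continuous operation. */
int main(void)
{
    const double power_kw      = 5.0e9 / 1000.0;   /* 5 GW -> 5,000,000 kW */
    const double price_per_kwh = 0.05;
    const double hours         = 24.0 * 365.0 * 20.0;

    double revenue_per_hour = power_kw * price_per_kwh;      /* $250,000 / hour */
    double lifetime_revenue = revenue_per_hour * hours;      /* ~$43.8 billion  */

    printf("revenue per hour : $%.0f\n", revenue_per_hour);
    printf("20-year revenue  : $%.1f billion\n", lifetime_revenue / 1e9);
    return 0;
}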
In order to be competitive, the SPS
must surmount some
extremely formidable barriers. Either
it must cost far less to deploy, or it
must operate for a very long period of
time. Many proponents have
suggested that the lifetime is
effectively infinite, but normal
maintenance and replacement due to
meteorite impacts makes this unlikely.
A potentially useful comparison for the SPS is the construction of a ground-based solar power system that generates an equivalent amount of power. Such a system would require a large solar array built in a well-sunlit area, the Sahara Desert for instance. However, an SPS also requires a large ground structure: the rectenna on the ground is much larger than the area of the solar panels in space. The ground-
only solar array would have the advantages of costing
considerably less to construct, and
would require no significant
technological advances. However,
such a system has a number of
significant disadvantages as well.
Night time at a terrestrial solar station reduces the average amount
of electricity produced by more than
50%, since no power at all is
generated during the night and the
Sun's angle is low in the sky during
much of the day. Some form of energy storage would be required to continue providing power through the night, such as pumped-storage hydroelectricity, which is both expensive and inefficient. Weather conditions would also interfere greatly with power collection, and could cause much greater wear and tear on the solar collectors than the environment of Earth orbit; a sandstorm could cause devastating damage, for example. Beamed microwave power allows one to send
a solar generating station in the
Sahara would primarily provide
power to the surrounding area where
there is not significant demand
(Alternately, the power could be used
on-site to produce chemical fuels for
transportation and storage). Many
advances in construction techniques
that make the SPS concept more
economical could make a ground-based
system more economical as well. For
instance,
many of the SPS plans are based on
building the framework with automated
machinery supplied with raw materials,
typically aluminum. Such a system
could just as easily be used on Earth,
no shipping required.
However, it should be noted that Earth-
based construction already has access
to extremely cheap human labor that
would not be available in space, so
such construction techniques would
have to be extremely competitive.
Current work
NASDA (Japan's national space agency) has been researching this area steadily for the last few years. In the 1990s, Japanese researchers flew a small airplane powered by microwaves beamed up from the ground. Indeed, because the island nation has no energy resources of its own, Japanese officials have announced plans to have their first solar power satellite in operation by the year 2040. WPT, however, also has great potential for non-terrestrial applications, including electrically propelled spaceships for interplanetary (within the Solar System) as well as interstellar transport (at sub-light speeds), by providing beamed power for space propulsion systems, such as those using space sails. In 2001, plans were announced to perform additional research and prototyping by launching an experimental satellite with a capacity between 10 kilowatts and 1 megawatt of power. The Japanese continued to study the idea of SPS throughout the 1980s. In 1995, NASA began a "Fresh Look" study and set up a research, technology, and investment schedule. The NASA Fresh Look report concluded that SPS could be competitive with other energy sources and deserves further study; research aimed at an SPS system of 250 MW would cost around $10 billion and take 20 years. The National Research Council found the research worthwhile but underfunded to achieve its goals.
CONCLUSION
Global energy demand continues to grow, along with worldwide concerns over fossil fuel pollution, the safety of nuclear power and waste, and the impact of carbon-burning fuels on global warming. As a result, space-based solar power generation may become an important source of energy in the 21st century, with possible power generation of 5 to 10 gigawatts per station. If the largest conceivable space power station were built and operated 24 hours a day all year round, it could produce the equivalent output of ten 1-million-kilowatt-class nuclear power stations. If microwave beams carrying power could be beamed uniformly over the earth, they could even power cell phones, and the supply would be more reliable than ground-based solar power. Today the situation has changed because of the very large number of communications satellites in low orbits; it might be possible to make these satellites dual-purpose, solar-energy collectors as well as communications devices. Because of the much lower orbits, the antennas on the satellites and on the ground need not be nearly so large. Thus, the viability of solar-power satellites as a long-term solution to our energy needs is being investigated by government agencies and individual companies in many countries.



SOLAR POWER SATELLITES

WIRELESS TRANSMISSION OF POWER






G.C.N.Chandrasekhar Gupta M. Sadiq Ali
guptagcn@gmail.com mohammadsadiqali@yahoo.com

II B.TECH II B.TECH

Electronics & Communication Engineering













N.B.K.R. Institute of Science & Technology
Vidyanagar, Nellore (DT).












ABSTRACT:

Can't we use solar power at night? This question may look somewhat absurd, since there is obviously no meaning to "using solar power at night"! Nowadays we use solar power to generate electricity with solar panels mounted on the earth. But in outer space the sun always shines brightly: no clouds block the solar rays, and there is no night time. Solar collectors mounted on an orbiting satellite would thus generate power 24 hours per day, 365 days per year. If this power could be relayed to earth, then the world's energy problems might be solved forever.

We propose a new method for
power generation in which the solar
power is converted into microwaves
through satellites called Solar Power
Satellites (SPS) and it is received
using a special type of antennae
called rectenna, mounted on earth
surface.

The concept of free-space power propagation is not new; it has been a topic of discussion for nearly four decades. In this paper we explain the same for the generation and reception of electrical power using rectennas. Rectennas are a special type of antenna that can convert incoming microwave radiation into electricity, and this electricity can be sent to grids for storage and future usage.

The paper first discusses the history of free-space power transmission and gives a brief introduction to the rectenna concept. The important component of the rectenna, the Schottky barrier diode, is explained. Then the functional model for the Solar Power Satellite is explained. The importance of solar energy is explained both in terms of cost and its eco-friendly nature. The paper is concluded by explaining our model of a simple rectenna, which can be readily built using components from the laboratory.

SPS: a great idea!
It is probably well known that we are running out of fossil fuel. Most of the energy sources we are using are non-renewable. Oil and gas are not expected to last longer than about fifty years, whereas coal will probably last another two or three centuries. Uranium and nuclear plants will not last forever either. So, in order to provide the generations to come with energy, we have to find a way to use unlimited sources. And this is where SPS gets into action: it provides a solution using one of the most renewable and unlimited sources available, the Sun.
Still, someone might ask: why use SPS and not solar panels on the surface of the earth? With SPS, problems such as daylight and bad-weather conditions, which one has to deal with when using solar panels, do not exist. Neither does the need for storage in order to have a continual provision of energy, which matters especially considering our inability to store energy adequately on earth. With SPS, the maximum energy loss due to eclipses is only about a hundred and twenty hours a year.
Furthermore, the energy received by the rectennas on earth is ten times more than that received by solar panels of the same surface area. The solar panels used on the surface of the earth prevent sun beams from passing through them and consequently prevent any kind of cultivation of the land under them. But with SPS the photovoltaic cells are in space, so we have no problem with the area needed. In addition, the rectennas on earth are semi-transparent, allowing sunlight to go through them and making cultivation of the soil possible. So we have no waste of space.
Probably in the future there will be other sources of energy, such as fusion; in fact we are already using renewable sources like hydroelectricity and geothermal energy. SPS might be one of several renewable energies we will use in the future.
Now that we have seen several reasons why SPS could be a great project, let's keep our feet on the ground. There are still some problems to solve before we can see the first SPS working.

INTRODUCTION TO SPS:
The Solar Power Satellite (SPS) concept would place solar power plants in orbit above Earth, where they would convert sunlight to electricity and beam the power to ground-based receiving stations. The ground-based stations would be connected to today's regular electrical power lines that run to our homes, offices and factories here on Earth.
Why put solar power plants in space? The sun shines 24 hours a day in space, as if it were always noontime at the equator with no clouds and no atmosphere. Unlike solar power on the ground, the economy isn't vulnerable to cloudy days, and extra generating capacity and storage aren't needed for our nighttime needs. There is no variation of power supply during the course of the day and night, or from season to season. The latter problems have plagued ground-based solar power concepts, but the SPS suffers none of the traditional limitations of ground-based solar power.
INTRODUCTION TO RECTENNA:
A rectenna is a rectifying antenna, a special type of antenna that is used to directly convert microwave energy into DC electricity. Its elements are usually arranged in a mesh pattern, giving it a distinct appearance from most antennae.
A simple rectenna can be constructed from a Schottky diode placed between antenna dipoles. The diode rectifies the current induced in the antenna by the microwaves. Schottky diodes are used because they have the lowest voltage drop and therefore waste the minimum power.
Rectennae are highly efficient at converting microwave energy to electricity: in laboratory environments, efficiencies above 90% have been observed with regularity. Some experimentation has been done with inverse rectennae, converting electricity into microwave energy, but efficiencies are much lower, only in the area of 1%. Due to their high efficiency and relative cheapness, rectennae feature in most microwave power transmission designs.
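As a rough idea of the physical scale of the rectenna elements, the free-space wavelength at the 2.45 GHz frequency used throughout this paper is about 12 cm, so each half-wave dipole is only a few centimetres long. A quick, purely illustrative calculation:

#include <stdio.h>

/* Back-of-the-envelope sizing for rectenna dipole elements at 2.45 GHz;
 * end effects and substrate loading are ignored. */
int main(void)
{
    const double c = 3.0e8;        /* speed of light, m/s */
    const double f = 2.45e9;       /* SPS beam frequency  */

    double wavelength = c / f;
    printf("wavelength        = %.1f cm\n", wavelength * 100.0);      /* ~12.2 cm */
    printf("half-wave dipole  = %.1f cm\n", wavelength * 100.0 / 2);  /* ~ 6.1 cm */
    return 0;
}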
BLOCK DIAGRAM OF SPS MODEL:
The satellites would be placed in so-called "geostationary" or "Earth-synchronous" orbit, a 24-hour orbit which is synchronized with Earth's rotation, so that a satellite placed there stays stationary overhead as seen from its receiving antenna. (Likewise, today's communications satellites are put into geostationary orbit, and each TV satellite dish on the ground is pointed towards one satellite "stationary" in orbit.) The receiving antenna is called a "rectenna" (pronounced "rektenna"). Geostationary orbit is very high, 36,000 km (22,500 miles) above the surface of the Earth. It is far above the range of the Space Shuttle, which has a maximum range of about 1000 km (600 miles) above Earth's surface.
The SPS will consist of a large sheet of solar cells mounted on a frame of steel-reinforced lunarcrete or astercrete. The solar cells produce electricity from sunlight with no moving parts. The only moving parts on the satellite are the transmitter antenna(s), which slowly track the ground-based rectenna(s) while the solar cell array keeps facing the sun. Each transmitter antenna is connected to the solar cell array by two rotary joints with slip rings.
The transmitter on the SPS is an array of radio tubes (klystrons), waveguides, and heat radiators. They convert the electricity from the SPS solar cell power plant into a radio or microwave beam.
The ground-based rectenna consists of an array
of antennas and standard electronics to convert
the energy into regular AC electricity which
can then be supplied into today's power lines.



MODELS OF TRANSMITTER AND
RECTENNA RECEIVER:



TRANSMISSION OF POWER:
The rectenna consists of an array of dipole antennas connected to diodes to convert the radio-frequency energy to DC voltage, which is then converted to regular AC electricity and wired to homes, factories, etc. While DC-to-AC conversion can occur at the rectenna, if the consumers are a long distance away, e.g. in another state, it may be more efficient to transmit over DC power lines and then convert to AC at a local power grid.
The efficiency of the SPS is often stated in terms of "DC-to-DC efficiency", i.e. from the DC input at the solar cells to the DC output of the rectenna. The DC-to-DC efficiency is generally estimated at 63%, with losses shown in the figure below.
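The 63% figure is the product of the individual conversion stages. Since the loss-budget figure itself is not reproduced here, the stage efficiencies in the sketch below are illustrative assumptions (only the 85% rectenna figure is the one quoted earlier); they merely show how such a chain multiplies out to roughly 63%:

#include <stdio.h>

/* Hedged sketch only: the stage efficiencies below are assumed values, not
 * the paper's actual loss budget; their product comes to roughly 63%. */
int main(void)
{
    const double stages[] = { 0.85,   /* DC -> microwave conversion (assumed)   */
                              0.87,   /* beam collection at the rectenna (assumed) */
                              0.85 }; /* microwave -> DC rectification (quoted)  */
    double total = 1.0;
    for (int i = 0; i < 3; i++)
        total *= stages[i];

    printf("overall DC-to-DC efficiency ~ %.0f %%\n", total * 100.0);  /* ~63 % */
    return 0;
}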

The satellites can have a useful lifetime
of many decades. As space development
takes off, they are more likely to become
obsolete than to be taken out of service
due to problems. Old SPSs can be either
upgraded (e.g., the transmitter or the
solar array) or sold off to a less
developed country and moved to that
country's space in geosynchronous orbit.
Compared to today's energy sources
Currently, the world gets about 95% of its energy from coal, oil and natural gas, and almost all of the other 5% from nuclear power and hydroelectric dams.
The SPS concept appears to have the inherent promise of being a most economical source of electric power for our economies, relative to today's electricity sources and all other energy sources seriously projected for the foreseeable future.
The mass of the rectenna would be a little bit less than that of a coal-fired power plant with the same output, assuming a safe, low-power-density SPS beam, and the satellite in space is about a tenth the mass of a coal-fired power plant, to give you a picture of what we're dealing with.
Once built, the SPS and rectenna would
continuously supply energy passively with no
pollution. In contrast, a coal fired plant of equal
power output to an SPS would have to burn
tonnages of coal in excess of 20 TIMES the
combined weight of the SPS and its ground-
based rectenna, and also mine, transport,
process and dispose of the ash of these
tonnages, each and every year!! This is a
massively expensive operation, yet it is the
least expensive electricity source today which
can reliably supply electrical energy in
quantities large enough for our demands.
A nuclear power plant is much more complex
than a coal plant, i.e., composed of an even
greater variety of specialized and expensive
components. A nuclear power plant is about
twice as massive as a rectenna, considering just
the power plant and not all the facilities
required for nuclear fuel mining, transport,
purification, enrichment, rod fabrication, spent
fuel temporary storage, reprocessing and
disposal facilities. Nuclear power plants have
very high front-end capital costs (especially
with ever-changing safety regulations and the
need for nuclear safety), but lower operating
costs compared to coal-fired plants. Nuclear
power also has long pending nuclear waste
disposal issues.
Source | Cost per megawatt | Suppliable on large scale? | Environmentally challenging? | Versatility | Limits
Coal | low | yes | acid rain, CO2, mining, waste | electric | ~none
Oil | low (pre-peak) | yes (peaks ~2010) | acid rain, CO2, spills | transport | exhaustible
Natural gas | low (pre-peak) | yes (peaks ~2025) | CO2, some acid rain | electric, heat, LNG for transport | ~exhaustible
Hydroelectric | low | no (~4%) | flooding | electric | econ. sites
Nuclear fission | low | yes | radiation, terrorism | electric | none
Fusion | high | far future | radiation | electric | none
Photovoltaic (ground-based) | medium | sunny daytimes (unreliable) | OK | electric | sunny, south
Solar space heating | low | n/a | OK | space heat | sunny winters, new structures
Solar thermal electric (ground) | medium | sunny daytimes (unreliable) | OK | electric | sunny south
Solar thermal ind. heat (ground) | medium | sunny daytimes (unreliable) | OK | thermal | sunny south
Wind energy | OK | no | loud | electric | some coasts
Alcohol fuels | livable | OK | OK | transport | ~OK
Biomass gas | livable | no | OK | gas | ~OK
Ocean thermal | high | no | OK | electric | OK
Geothermal | medium | no | brine, sulfur, toxic metals | electric | few sites

ENVIRONMENTAL EFFECTS - THE SPS MICROWAVE BEAM
This section puts the SPS beam into a bigger perspective in a general sense. The SPS beam is basically a radio beam. When people hear the word "microwave", they think of a microwave oven. The SPS beam does not need to have a power intensity anywhere near that of a microwave oven; current designs have an intensity hundreds of times less above the rectenna, indeed a power intensity about one-tenth that of sunlight. Also, a microwave oven is designed to operate at a frequency which is absorbed by water (which is why dry stuff doesn't heat up well in a microwave). The SPS will operate at a frequency designed NOT to be absorbed by water in the atmosphere, and to pass through clouds and rain.
Microwave frequencies are harmlessly used in communications, using different microwave frequencies which avoid absorption by water in the atmosphere so that they travel a long distance. The SPS beam will use a frequency tuned for minimal absorption by the atmosphere, clouds and storms. Microwaves cause heating, but not much else. The intensity would not be anything near that inside a microwave oven; a microwave oven operates at power densities hundreds of times higher.
The reason why microwave "radiation" is safer is that it is of much lower frequency and energy than the ultraviolet light you receive from sunlight outdoors, and than the x-rays coming from your TV and computer screen. X-ray and ultraviolet radiation are ionizing radiation, which can disrupt molecules in the body. Microwave radiation from the SPS is even less potent than the infra-red radiation from heaters and stoves. If the SPS beam were significantly absorbed by biota, it would produce only heating, and usually not significant heating.
EFFECT OF SPS ON COMMUNICATION:
Effects of the SPS on communications are another "environmental" topic that has been studied by a number of professionals. Frequencies near the proposed 2.45 GHz SPS beam frequency are currently being used by some communications sectors, but they're a tiny segment of the communications services. Those operating at 2.45 GHz would probably want to switch frequencies, though ameliorative measures are feasible for frequencies close to 2.45 GHz in areas not near a rectenna. The SPS can be made to not interfere with other communications in general.
A major factor in setting the "baseline" SPS beam power density to peak at 23 milliwatts per square centimeter was its effect on the ionosphere, a layer of the upper atmosphere used to bounce some kinds of traditional communications, e.g. long-range radio, TV (non-satellite and non-cable), and microwave relay of telephone calls. Only the spots of the ionosphere above rectennas would be affected, and those spots may not work as well in reflecting (actually, refracting) these kinds of communications back down to the Earth's surface if the beam intensity is pushed too high. Whether the SPS would significantly affect users of those kinds of communications in certain areas has yet to be determined, but it was later thought that the 23 mW/cm2 setting was considerably lower than the threshold for significant effects on the ionosphere. However, with fiber optics, larger satellites, and the internet, those traditional ionosphere-reliant methods of communications are being phased out.
Experiments were conducted transmitting a 2.45 GHz beam up through the ionosphere (using the large Platteville, Colorado and Arecibo, Puerto Rico high-frequency transmission dishes) and taking many measurements over time. (171-177) "Experiments have shown that the limit is too low, and theory now suggests that the threshold is soft. The current consensus is that the limit of 23 mW/cm2 can be at least doubled, and perhaps more, pending further tests." (178) Further, it was concluded that atmospheric heating could be reduced by 80% by switching to the 5.8 GHz frequency.
There are 10 different bandwidths used for communications: EHF, SHF, UHF, VHF, HF, MF, LF, VLF, VF and ELF. The SPS beam falls within the UHF band.
There would be interference with some communications operating at the 2.45 GHz frequency of the SPS beam and some of its harmonics. 2.45 GHz falls within the UHF part of the communications spectrum. The SPS beam of 2.45 GHz is located in the microwave area of the radio communications spectrum, near the TV and FM radio frequencies, and falls within the 2.3 GHz to 2.5 GHz bandwidth allocated for police, taxi, citizen's band, mobile, radiolocation, amateur, amateur-satellite and ISM (industrial, scientific and medical) applications -- the UHF band. The vast majority of these other applications do not use 2.45 GHz but use other frequencies near or somewhat near to 2.45 GHz. Many of these applications (except FM and UHF TV) have also been allocated frequencies in other bandwidths (outside UHF) by the authorities.
However, analytic studies concluded: "With the exception of sensitive military and research systems, equipment more than 100 km [60 miles] from a rectenna site should not require modification or special design to avoid degradation in performance," (157) and that conventional mitigative techniques would even permit operation of almost all devices at the rectenna boundary by filtering, nulling, minor circuit modifications and other mitigative techniques (158-162), which would cost between 0.1 and 5% of the unit cost to apply. Tests confirmed the effectiveness of these mitigative techniques. Sensitive military equipment would generally not be affected as long as the closest rectenna was more than 400 km (250 miles) away.
Pacemakers and other medical electronic devices would not be affected.
In summary, some kinds of communications would be adversely affected by the 2.45 GHz SPS beam, but the vast majority of today's communications would be unaffected. However, the benefits to communications due to large-scale space development would be immense.
COMMUNICATION
SATELLITES BENEFITS:
Looking at the effects of SPSs on
communications satellites is a waste of time
unless we consider the revolutionary effects of
large scale space infrastructure associated with
SPSs. If SPSs are put into place, satellite
communications will boom due to the related
space-based manufacturing.
Currently, satellites are small with weak
transmission powers and small reception
antennas. Satellites are not constructed in space
at all, but are small, compact objects built on
Earth and deployed in space.
Space development will bring about large
satellite platforms and "Orbiting Antenna
Farms" (OAFs), allowing many times the
number of satellites to be placed in
geostationary orbit, and solving crowding
problems. Bigger antennas in space mean
smaller footprints on Earth, mitigating
interference and allowing multiple use of each
frequency. Larger power sources and larger
antennas also make for clearer signals and
smaller Earth ground stations. New
frequencies currently not used due to partial
atmospheric absorption will become usable.
Linking satellites together on large platforms
will lead to enhanced services. Fuel propellants
from nonterrestrial materials will be used to
ferry up satellites, provide station-keeping in
geostationary orbit, and extend the life of
satellites (e.g., selling old satellites to less
developed countries).
As for satellites in lower orbits passing
through the beam on occasion, "Improved
electromagnetic shielding and other minor
modifications would be expected to
eliminate or substantially reduce effects to
allow normal performance." (169) The
microwave beam is NOT like the ultraviolet and
other "ionizing" radiation which you get from
walking in the sun.
Make your own rectenna!

The components required to construct a rectenna
are:
SCHOTTKY DIODE
LIGHT EMITTING DIODE
MICROWAVE SOURCE (i.e., A
MICROWAVE OVEN)
It is very easy to construct: you just have to
connect the "plus" side of the LED to the
"minus" side of the Schottky diode. Still, you
have to be careful not to bend the Schottky
diode and to leave its connectors straight so that
they can act as an antenna.
Now that we have a rectenna, let us try it. We
have a closed microwave oven, which we turn
on, and when we move our rectenna around the
oven, the LED lights up. How is this possible?
Initially the electrons in the conductor are at
rest. When microwaves at about 2.45 GHz
leaking from the oven strike the conductor, the
electrons are excited and driven into the
Schottky diode. The diode does not let them
flow back, so the negative charge density
increases on one side; because of this, electrons
flow out of the other side and into the LED,
which makes it glow. From this experiment we
can conclude that we have obtained wireless
power, which makes the LED glow.
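To get a feel for the power levels such a tabletop rectenna might see, the short sketch below applies the free-space Friis relation at 2.45 GHz. Every numeric value in it (leakage power, antenna gains, distance) is an illustrative assumption rather than a measurement, and near-field effects at such a short range are ignored.

import math

# Rough free-space (Friis) estimate of the power a small dipole-sized
# rectenna might intercept at 2.45 GHz. All numbers are assumptions.
C = 3.0e8                          # speed of light, m/s
FREQ = 2.45e9                      # Hz
WAVELENGTH = C / FREQ              # about 12 cm

def friis_received_power(p_tx_w, g_tx, g_rx, distance_m):
    # Gains are linear (a half-wave dipole is about 1.64).
    return p_tx_w * g_tx * g_rx * (WAVELENGTH / (4 * math.pi * distance_m)) ** 2

p_rx = friis_received_power(p_tx_w=5e-3, g_tx=1.64, g_rx=1.64, distance_m=0.05)
print(f"wavelength: {WAVELENGTH * 100:.1f} cm")
print(f"estimated received power: {p_rx * 1e3:.2f} mW")

With these assumed numbers the intercepted power is a fraction of a milliwatt, which is the right order of magnitude to make an LED glow faintly after rectification.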
Conclusion
SPS: what the future energy
production could become
From all the reasons above we can conclude
that the SPS is a major way to meet the demand
for electricity. If we develop this project, we
need not worry about the exhaustion of the
natural resources used in the generation of
power. If this project succeeds, it will be a boon
to the next generations.





SPECIAL MACHINES
[MAGNETIC LEVITATED TRAIN]





K.L.N. COLLEGE OF ENGINEERING
POTTAPALAYAM 630 611













PRESENTED BY,
SENTHIL NATHAN.N
SANTHANARAJ.D
Email_ Id
Senthilnathan _5 @yahoo.co.in
Santhanaraj_eee@yahoo.com





















ABSTRACT

A super high-speed transport system with a non-adhesive drive system that is independent of wheel-
and-rail frictional forces has been a long-standing dream of railway engineers. Maglev, a combination
of superconducting magnets and linear motor (which is one of the special machines) technology,
realizes super high-speed running, safety, reliability, low environmental impact and minimum
maintenance. In this paper we shall look at the high-speed rail traction application of the linear
induction motor.


We hope that our paper does full justice to the objective at which we have aimed.






























INTRODUCTION

SPECIAL MACHINES

Electrical machines which have special applications are often referred to as special machines. These include
machines whose stator coils are energized by electronically switched currents. Some of them are:

Stepper Motors
Variable-reluctance stepper Motors
Permanent Magnet stepper Motor
Hybrid stepper Motor
Permanent-Magnet DC Motor
Low Inertia DC Motor
Shell-type Low Inertia DC Motor
Printed-circuit(disc) DC Motor
Single-Phase Synchronous Motors
Reluctance Motor
Hysteresis Motor
Linear Induction Motor
Servomotor
DC Servomotor
AC Servomotor
Synchros
Universal motor

In a brief manner, we shall look at the linear induction motor and its special application in
high-speed rail traction.

LINEAR INDUCTION MOTOR

It is a special type of induction motor which gives linear motion instead of the rotational
motion of a conventional induction motor. It operates on the same principle as a
conventional induction motor: whenever there is relative motion between the field
and short-circuited conductors, currents are induced in the conductors, which result in electromagnetic
forces, and under the influence of these forces, according to Lenz's law, the conductors try to move in
such a way as to eliminate the induced currents. In a conventional induction motor, the movement
of the field is rotary about an axis, so the movement of the conductors is also rotary. But in a linear
induction motor, the movement of the field is rectilinear, and so is the movement of the conductors.
In its simplest form, a linear induction motor consists of a field system having a 3-
phase distributed winding placed in slots. The field system may be a single-primary system or a double-
primary system. The secondary of the linear induction motor is normally a conducting plate made of
either copper or aluminum, in which currents are induced by the travelling field. Either member can be the stator, the
other being the runner, in accordance with the particular requirement imposed by the duty for which the
motor is intended.



PROPERTIES OF A LINEAR INDUCTION MOTOR

These properties are identical to those of a standard rotating machine.
Synchronous speed:
It is given by
    vs = 2 * w * f
where
    w  = width of one pole pitch (m)
    vs = linear synchronous speed (m/s)
    f  = supply frequency (Hz)
Slip:
It is given by
    S = (vs - v) / vs
where
    vs = linear synchronous speed
    v  = actual speed
Thrust or force:
It is given by
    F = P2 / vs
where
    F  = thrust or force (N)
    P2 = active power supplied to the rotor (W)
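As a quick numerical illustration of the three relations above, the Python sketch below evaluates them for an assumed 0.25 m pole pitch, 50 Hz supply and 50 kW of rotor power; these numbers are examples, not data from this paper.

# Worked example of the LIM relations vs = 2*w*f, S = (vs - v)/vs, F = P2/vs.
# All numerical values are assumed for illustration.

def synchronous_speed(pole_pitch_m, supply_hz):
    return 2.0 * pole_pitch_m * supply_hz            # m/s

def slip(vs, v):
    return (vs - v) / vs

def thrust(p2_watts, vs):
    return p2_watts / vs                             # newtons

w, f = 0.25, 50.0                                    # pole pitch (m), supply frequency (Hz)
vs = synchronous_speed(w, f)                         # 25 m/s, i.e. 90 km/h
v = 22.5                                             # assumed actual speed, m/s
print(f"vs = {vs:.1f} m/s, slip = {slip(vs, v):.2f}")
print(f"thrust at P2 = 50 kW: {thrust(50e3, vs) / 1e3:.1f} kN")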


APPLICATIONS OF LINEAR INDUCTION MOTOR

Linear induction motors can be used in conveyors, travelling cranes, haulers, electromagnetic
pumps, high-speed rail traction, etc. Such motors may be employed in applications in which the field
system moves and the conducting plate remains stationary, such as travelling crane motors. These
motors can also be used in applications where the field system remains stationary and the conducting
plate moves, such as automatic sliding doors in electric trains, metallic belt conveyors, etc. They can be
used on trolley cars for internal transport in workshops, as booster accelerators for moving heavy trains
from rest, up inclines or on curves, or as propulsion units in marshalling yards in place of
shunting locomotives. The linear induction motor provides an excellent source of motive power for
magnetically suspended trains. It is superior to the conventional rotary motor for speeds over
200 km/h.


AMONG THE VARIOUS APPLICATIONS OF THE LINEAR INDUCTION MOTOR, WE SHALL
LOOK BRIEFLY AT HIGH-SPEED RAIL TRACTION.






MAGLEVS (Magnetically Levitated Trains)

The principle of a maglev train is that it floats on a magnetic field and is propelled by a linear induction
motor. The trains follow guidance tracks with magnets. These trains are often referred to as magnetically
levitated trains, which is abbreviated to maglev.

WORKING PRINCIPLE OF MAGLEV:
Maglev is a system in which the vehicle runs levitated (floating about 10 mm above) from the guide way
(corresponding to the rail tracks of conventional railways) by using electromagnetic forces between
superconducting magnets on board the vehicle and coils on the ground.
It is propelled by the guide way itself, rather than by an onboard engine, by changing magnetic
fields. Once the train is pulled into the next section, the magnetism switches so that the train is pulled
on again. The electromagnets run the length of the guide way.




Principle of lateral guidance
The levitation coils facing each other are connected under the guide way, constituting a loop. When a
running Maglev vehicle, that is, a superconducting magnet, is displaced laterally, an electric current is
induced in the loop, resulting in a repulsive force acting on the levitation coils on the side nearer the car
and an attractive force acting on the levitation coils on the side farther from the car. Thus, a running
car is always kept at the center of the guide way.








Principle of propulsion
A repulsive force and an attractive force induced between the magnets are used to propel the vehicle
(superconducting magnet). The propulsion coils located on the sidewalls on both sides of the guide
way are energized by a three-phase alternating current from a substation, creating a shifting magnetic
field on the guide way. The on-board superconducting magnets are attracted and pushed by the
shifting field, propelling the Maglev vehicle.

Commercial service:
In the mid-1980s, Britain was the first country to introduce a maglev service. It linked two
terminals at Birmingham airport, was about 400 meters long and had a top speed of about 10 mph (16 km/h).
However, it was recently replaced with a bus service due to the difficulty of obtaining spare parts.
Germany is the only country with solid plans for a maglev railway, which will link Berlin with
Hamburg in 2005. This high-speed line is called the Transrapid project.
Japan has built two maglev test lines, the first in the 1960s and the second in 1996. The first was to test the
basic theory of maglev; the second is pursuing more advanced goals such as high-speed tests with the
MLX01, which set a speed record of 550 km/h (344 mph) in early 1998.


SCM (Super Conducting Magnet)
The SCM (superconducting magnet) is the core element of the superconducting Maglev. Two SCMs are
mounted on each bogie. The SCM features high reliability and high durability. The cylindrical unit at
the top is a tank holding liquefied helium and nitrogen. The bottom unit is an SC coil alternately
generating N poles and S poles. At one end of the tank is the integrally attached on-board refrigerator,
which serves to re-liquefy the helium gas once it has been vaporized by regular heat absorption and external
disturbances.
Guide way Line
The guide way consists of a structure corresponding to the conventional track and ground coils
corresponding to the conventional motor. It is a vital element of Maglev. The following methods of
installing the ground coils for propulsion, levitation, and guidance on the guide way are adopted:
1. Beam Method
2. Panel Method
3. Direct Attachment Method
4. New Method









Beam Method
In the beam method, the sidewall portion is constituted solely of concrete beams. The
entire process, from beam manufacturing to installation of the ground coils, takes place at the on-site
factory (provisional yard). A finished beam is transported to the work site within the guide way, to be
placed on two concrete beds set up there in advance.
Panel Method
In a factory set up on-site (provisional yard), the concrete panel is produced and fitted with
ground coils. The finished assembly is carried to the work site, where it is fixed, with 10 bolts, to the
concrete sidewall erected there in advance.
Direct-Attachment Method
At the work site, in the tunnels or on the bridges, a concrete sidewall portion is produced. At the
same site the finished sidewall is directly fitted with the ground coils. With no need for a factory or
transport vehicle, this method is economically superior to the other two, but its drawback is that it
allows only slight adjustments of individual ground coils to correct irregularities.
New Method
The former three types of sidewall were adopted as the guide way structure to evaluate their
functions and clarify their merits and defects. We developed the new type of guide way structure taking
advantage of the merits identified in the evaluation results. We placed emphasis on improving
the efficiency of installing sidewalls on the concrete roadbed, as a means to reduce construction and
maintenance costs. We considered the shape of the sidewall in all aspects of efficient
installation, and eventually adopted an inverted-T-shaped sidewall.

POWER CONVERSION:
At a substation, the voltage received from the utility company is transformed, with inverters
controlling the magnitude and frequency of the current that runs in the ground coils and thereby
adjusting the train speed.
Inverter unit
The inverter installed at the substation for power conversion is a facility that transforms the power
supplied by the utility company at commercial frequency into power at the frequency required for train
operation. In Japan, inverters are provided in three sets, one for each phase, rated 38 MVA
for the north line and 20 MVA for the south line. Depending on the train speed, the north line inverters
give a frequency output of 0-56 Hz (550 km/h) and the south line inverters give a frequency output of
0-46 Hz (450 km/h). The operation control system at the test center formulates run curves, which in
turn instruct the drive control system at the substation for power conversion.
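Since the vehicle speed is tied to the inverter output frequency through the linear-motor relation vs = 2*w*f introduced earlier, the quoted frequency figures can be checked with a short sketch; the pole pitch used here is back-calculated from those figures and is therefore an inference, not a published specification.

# Sketch: relate inverter output frequency to linear synchronous speed, vs = 2*w*f.
# The pole pitch is inferred from "56 Hz -> 550 km/h" and is an assumption.

def speed_kmh(pole_pitch_m, freq_hz):
    return 2.0 * pole_pitch_m * freq_hz * 3.6

pole_pitch = (550.0 / 3.6) / (2.0 * 56.0)      # back out the pitch from 550 km/h at 56 Hz
print(f"implied pole pitch: {pole_pitch:.2f} m")
print(f"north line at 56 Hz: {speed_kmh(pole_pitch, 56):.0f} km/h")
print(f"south line at 46 Hz: {speed_kmh(pole_pitch, 46):.0f} km/h")   # close to the quoted 450 km/h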

PRELIMINARY STEPS
The following are the major test items to take place on the Test Line:
Confirmation for possibilities of safe, comfortable, and stable run at 500 km/h;
Confirmation of reliability and durability of the vehicle, wayside facilities, and equipment as
well as the Superconducting magnets
Confirmation of structural standards including the minimum radius of curvature and the
steepest gradient;



Confirmation of center-to-center track distance for safety of trains passing each other;
Confirmation of vehicle performance in relation to tunnel cross-section and to pressure
fluctuations in the tunnels;
Confirmation of performance of the turnout facilities;
Confirmation of environmental impact;
Establishment of multiple-train operation control systems;
Confirmation of operation and safety systems and track maintenance criteria;
Establishment of inter-substation control systems;
Pursuit of economic issues, construction and operation costs.


ADVANTAGES OF MAGLEV:
A floating train sounds high-tech, and maglevs do offer certain benefits over conventional steel-wheel-on-
steel-rail railways. The primary advantage is maintenance: because the train floats along, there is no
contact with the ground and therefore no need for any moving parts. As a result there are no
components that would wear out, which means that in theory trains and track would need no maintenance at
all. The second advantage is that because maglev trains float, there is no rolling friction; note that there will
still be air resistance. A third advantage is less noise: because there are no wheels running along the track, there
is no wheel noise. However, noise due to air disturbance still occurs. The final advantage is speed: as a
result of the three previously listed advantages, it is more viable for maglev trains to travel extremely fast, i.e.
500 km/h or 300 mph. Although this is possible with conventional rail, it is not economically viable.
Another advantage is that the guide way can be made a lot thicker in places, e.g. after stations and
going uphill, which would mean a maglev could get up to 300 km/h (186 mph) in only 5 km where
conventional rail currently takes 18 km. Also, greater gradients would be acceptable.

A POSSIBLE SOLUTION:
A solution could be to put normal steel wheels onto the bottom of a maglev train, which would allow
it to run on normal railway track once it was off the floating guide way.

MAGLEVS ARE REALLY MORE ENVIRONMENTALLY FRIENDLY:
In terms of energy consumption, maglev trains are slightly better off than conventional trains. This is
because there is no wheel-on-rail friction. That said, the vast majority of the resistive force at high speed is
air resistance (often amounting to several tons), which means the energy efficiency of a maglev is only
slightly better than that of a conventional train.
German engineers also claim that a maglev guide way takes up less room, and because greater
gradients are acceptable there are not so many cuttings and embankments, meaning a new guide way
would be less disruptive to the countryside than a new high-speed conventional railway.































THANK YOU













WIRELESS POWER TRANSMISSION AND RECEPTION
USING SPS & RECTENNA




BY

IBRAHIM BAIG CH.BHARATH KUMAR
Reg.no-Y4EC420 Reg.no-Y4EC410
ECE ECE
E-MAIL:ibruece@gmail.com EMAIL:bharath.chennupati@gmail.com





WIRELESS TRANSMISSION

BAPATLA ENGINEERING COLLEGE

BAPATLA.




ABSTRACT




The search for inexhaustible energy resources to satisfy long-term needs is a high
priority. Solar Power Satellites answer mankind's energy needs in the 21st century.
We can, in fact, directly convert solar energy into electrical energy with the use of
solar cells, but sunlight is not available at night. If the need arises for a
24-hour power supply, we are helpless. The solution is wireless power transmission
from space through a system consisting of an SPS (Solar Power Satellite) and a
RECTENNA (RECtifying anTENNA), using microwaves. The principal advantage of
the space location is independence from weather and the day-night cycle, and the system is
pollution free. Modern techniques enable us to create a platform in space carrying
solar batteries and generators that convert the electric current they produce into an
electromagnetic wave beam. On the Earth, another antenna (the rectenna) receives the
beam and converts it into electric current again. Space solar power stations are
costly because of the great size of their radiating and receiving antennas. It is shown
that a correct choice of the field distribution on the radiating antenna allows us to
increase the wireless power transmission efficiency and to lessen its cost. The antenna
and rectenna sizes are chosen such that the rectenna is situated in the antenna's
Fresnel area (not in the far field as in ordinary radio communication). This
paper thoroughly describes the construction of the spacetenna and rectenna to increase
the effectiveness of WPT.








INTRODUCTION


Compared to today's energy sources, the SPS and rectenna system is an economically
competitive large-scale energy source, and in fact appears to offer a much less expensive
energy source once significant space-based infrastructure is established. In addition, the
SPS and rectenna system has strong advantages in terms of environmental issues.

OVERALL VIEW

The overall configuration of the spacetenna is a triangular prism with a length of 800 m
and sides of 100 m, as shown in the figure. The main axis lies in the north-south direction,
perpendicular to the direction of orbital motion. The transmitting antenna on the
horizontal under-surface faces the Earth, and the other two sides of the prism carry solar
arrays. The faces of the prism are embedded with photovoltaic cells.


These photovoltaic cells would convert sunlight into electrical current, which would, in
turn, power an onboard microwave generator. The microwave beam would travel
through space and the atmosphere. On the ground, an array of rectifying antennas, or
rectennas, would collect these microwaves and extract electrical power, either for
local use or for distribution through conventional utility grids.

The spacetenna has a square shape, 132 meters by 132 meters, which is
regularly filled with 1936 subarray segments. The subarray is the unit of phase control;
it is also square, with edges of 3 meters, and contains 1320 cavity-backed slot antenna
elements with their DC-RF circuits. Therefore, there will be
about 2.6 million antenna elements in the spacetenna.
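The subarray and element counts quoted above are easy to cross-check; the lines below simply reproduce the arithmetic.

# Cross-check of the spacetenna figures quoted above.
side_m, subarray_m = 132, 3
subarrays = (side_m // subarray_m) ** 2                  # 44 x 44 = 1936 subarrays
elements_per_subarray = 1320
total_elements = subarrays * elements_per_subarray
print(subarrays)                                          # 1936
print(f"{total_elements / 1e6:.2f} million elements")     # about 2.56 million, i.e. "about 2.6 million"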




The spacetenna is composed of pilot-signal receiving antennas, followed by
detectors finding the location of the rectenna on the earth, power transmission antenna
elements and phase control systems. The left and right hand sides in the figure below
correspond to the power transmission and direction detection parts, respectively. The antenna
elements receiving the pilot signal have a polarization perpendicular to that of the antenna elements
used in the power transmission, so as to effectively reduce interaction between the two sets of antenna
elements. Moreover, the pilot signal frequency and the frequency used for the energy
transmission are different from each other. Using two different frequencies for the power
transmission and the pilot signal prevents them from interfering with each other and
makes it possible to find the accurate direction of a specified rectenna.



A rectenna can be considered as a base station for a geostationary satellite. Microwaves
at 2.45 GHz are used to transmit power from the satellite to the rectenna. It
consists of a mesh array of dipole antennas connected to diodes to convert the
radio-frequency energy to DC voltage, which is then converted to regular AC electricity
and wired to homes, factories, etc. A simple reflector plane could be added to the mesh
to improve the efficiency to 50%.

Orbit Selection

An 1100 km altitude equatorial orbit will be used. This choice minimizes the transportation
cost and the distance of power transmission from space.

The system power is defined by the microwave power transmitted from the satellite, not
by the power received on earth. It also has to be in low Earth orbit in order to be low-
cost. If it were in GEO (that is, 35,800 kilometers from Earth), then the transmitting
antenna would have to be 40 times larger than a LEO one, or else the receiving antenna
would have to be 40 times larger. Consequently the satellite has to orbit above the
equator in order to have frequent transmission opportunities. It transmits up to 10 MW
of radio-frequency power. Ten satellites placed evenly around the orbit would require
only nine minutes of storage capacity to provide continuous power.
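The claim that ten evenly spaced satellites in this orbit need only a few minutes of storage can be sanity-checked from the orbital period. The sketch below uses standard two-body constants for the Earth, which are textbook values assumed here rather than figures taken from this paper.

import math

# Orbital period of an 1100 km circular orbit and the spacing, in time,
# between ten evenly placed satellites. Constants are standard textbook values.
MU_EARTH = 3.986e14          # m^3/s^2
R_EARTH = 6.371e6            # m

r = R_EARTH + 1.1e6          # orbital radius for an 1100 km altitude
period_min = 2 * math.pi * math.sqrt(r ** 3 / MU_EARTH) / 60
gap_min = period_min / 10    # time spacing between consecutive satellites

print(f"orbital period: {period_min:.1f} min")     # roughly 107 min
print(f"satellite spacing: {gap_min:.1f} min")     # roughly 11 min
# Ground storage only has to bridge part of this spacing interval, which is
# consistent in order of magnitude with the nine minutes quoted above.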


Field Distribution

The basic drawback of a WPT system is that an essential part of the radiated energy does
not reach the given area of space because of diffraction expansion of the wave beam. For an
equiphased field distribution, the focused field falls off from its peak toward the edges, as
described by a function of the form [cos(pi*x/2a)]^v.

The peak distribution of the field at v=1 is close to cos(pi*x/2a), and at v=2 it is
[cos(pi*x/2a)]^2, which is similar to a GAUSSIAN field distribution. For a circular
aperture, this sort of field distribution falls off toward the edges, which is an ineffective use of
the antenna size. The way out is to accommodate the receiver with irregular sub-apertures,
each of which gives a field distribution that is uniform and equal in amplitude.

The increase of WPT effectiveness obtained with a non-equidistant antenna array over a discrete
radiating antenna is expressed by a factor A, where 2b is the length of the receptor. The size of A is increased
if the amplitude of the radiated field falls off toward the edges and the receiver lies within the
Fresnel area. However, the efficiency of WPT systems depends not only on the size of
A but also on the active surface of the radiating antenna. The factor of surface utilization for
a square aperture is a function of U(x), the field distribution of the radiator, and Um(x), the maximum allowable field.
We require a high (xA^2) factor of energy transfer while preserving good utilization of the
active surface of the antenna. Hence it is necessary to have a high product, which may be
termed the generalized criterion for energy transfer.
Thus, for a discrete step distribution, we need to have concentrated sub-
apertures in the centre and their gradual thinning toward the edges. All the sub-
apertures are similar and have a uniform distribution of the field with equal
amplitude, which may reach the maximum admissible value. The optimal distribution form
may be reached with large radiating sub-aperture clusters in the places which correspond to
high field intensity, and a relieved sub-aperture density at the edges of the antenna. This
construction allows both of the coefficients A and x to approach unity.
As a result, the efficiency of the WPT system will be essentially increased.
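To make the trade-off concrete, the sketch below numerically compares a uniform aperture with the cos(pi*x/2a) and cos^2(pi*x/2a) tapers mentioned above, using the standard one-dimensional aperture-taper efficiency (the square of the integral of U divided by 2a times the integral of U^2). Whether this is exactly the surface-utilization factor the paper has in mind is an assumption.

import math

# Compare the taper efficiency of a uniform aperture with the cos and cos^2
# distributions discussed above (aperture normalized to a = 1). The formula
# used is the standard 1-D aperture-taper efficiency; treating it as the
# paper's "surface utilization factor" is an assumption.

def taper_efficiency(u, a=1.0, n=20001):
    dx = 2 * a / (n - 1)
    xs = [-a + i * dx for i in range(n)]
    s1 = sum(u(x) for x in xs) * dx          # integral of U
    s2 = sum(u(x) ** 2 for x in xs) * dx     # integral of U^2
    return s1 ** 2 / (2 * a * s2)

profiles = {
    "uniform": lambda x: 1.0,
    "cos":     lambda x: math.cos(math.pi * x / 2),
    "cos^2":   lambda x: math.cos(math.pi * x / 2) ** 2,
}
for name, u in profiles.items():
    print(f"{name:8s}: {taper_efficiency(u):.3f}")
# uniform ~1.00, cos ~0.81, cos^2 ~0.67: tapered (quasi-Gaussian) distributions
# use the aperture area less effectively, which is the trade-off discussed above.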


Advantages

1. Unaffected by the day-night cycle, weather or seasons. Optimised designs may
enable a 24-hour power supply.
2. It is an eco-friendly, renewable and maintenance-free energy resource, unlike
conventional fuels.
3. As the equipment is positioned in space, it occupies no land area and remains
unaffected by harsh weather conditions. Rectennas can share land with farms.
4. The spacetenna could direct energy to any rectenna on earth within the range of its
steering angle, which could satisfy the energy requirements of all the equatorial
countries.
5. Waste heat is re-radiated back into space, instead of warming the biosphere.
6. It will be a boon as we are running out of fossil fuels.

Conclusion
Synthesizing the wireless power transmission, it can be concluded that to make the
SPS concept commercially viable, it becomes a priority to improve its efficiency and the
cost per watt. This can be achieved by:
1. Placing the rectenna within the Fresnel area of the transmitter.
2. Placing the transmitting antenna in LEO to reduce installation cost and the
distance of transmission.
3. Using a discontinuous equidistant array with a quasi-Gaussian distribution.
4. Using a discontinuous non-equidistant array with a uniform distribution.












































The Structure and Application of
Flexible SCADA

Category: SCADA
Category: Computer applications in electrical
engineering.


Authors: MOHIN AHAMED SYED
&
PADMANABHA REDDY .P

College: Chadalawada Ramanamma Engineering
College Tirupati.



E-mail: moinahamed245@yahoo.co.in


















Abstract--This paper puts forward an
interactive, flexible SCADA system in
which information is shared, based on an
analysis and summary of the
shortcomings of earlier SCADA systems. It not only
solves the interaction of data and
functions among the information islands, but
also solves the function coordination and
data exchange of distributed application
systems in different locations. The paper
proposes the concept of general data
acquisition, discusses the structure and
the supporting technologies, and
describes the application of flexible
SCADA using the example of a fault
decision support system (FDSS). The
proposed flexible SCADA extends and
develops the data sources of traditional
SCADA and enriches the data
foundation for power system advanced
analysis software.
Index Terms--Comprehensive management center,
Fault decision, Flexible SCADA,
General data acquisition, Information
islands

II. INTRODUCTION
Supervisory Control And Data
Acquisition (SCADA) is the basis and
core of electrical grid dispatching
automation, and it has been widely used
in the supervision, operation and
control of power systems. Power systems
build various other independent
application systems to meet different
demands, for example, dispatching
operation order expert systems, protective
& fault management information systems
(PFMIS), production management
information systems and intelligent power
grid maintenance scheduling systems; yet
only EMS can utilize SCADA data
nowadays. These many information
islands [1] not only lower
efficiency and degrade data consistency,
because operators must continuously
switch between systems and re-enter data
during operation, but also prevent many
new functions from making general use
of the above data or interacting with
existing application software, which has
seriously restricted the advancement of
power grid automation. These new
functions are as follows:
a) Comprehensive fault analysis and
decision in the power grid
The application of EMS advances
dispatching operation from an experiential
mode to an analytical mode [17]; it
excels at power grid security and
economic analysis in the normal state,
but it has no fault-analysis function,
because SCADA cannot provide the
self-contained fault data needed by an FDSS.
At present, PFMIS can only collect and
analyze relay and fault-recorder
information after a fault; it mainly
serves relay engineers and provides no
help in dealing with faults in real time [2].
b) Online distributed dispatching
operation and management
The management information system (MIS)
is gradually developing into DMIS [3],
which distributes the dispatching data
across many servers according to function.
DMIS not only reduces network
communication traffic, but also
effectively improves data security.
One of the pivotal problems is the
interface between the existing dispatching
automation system and DMIS: DMIS
needs real-time data and the EMS static data
model, or it can only be a
word-processing tool.
c) Power grid distributed
decomposition and coordination
calculation
Because a dispatching center cannot
obtain data from the neighboring power grids, the
online analysis calculations in EMS for the
external power grid basically adopt static
equivalents, including the WARD
equivalent, the radialized equivalent and the
independent-electrical-source method.
Since the operating state of the external power grid
is changeable, the computational results can
have considerable error with respect to the actual
situation, which seriously hinders the application of EMS.
Reference [4] proposes a
decomposition and coordination computational mode for
the power grid, in which the superior
dispatching center sends
the external power grid equivalent
model and data to the inferior dispatching
center; this method can improve
accuracy to a large extent.
d) Enlargement and supplement of
other functions
With the development of technology,
much research is based on wide-area
information and large power grid
security analysis and control, for example,
stability analysis and wide-area
protection systems based on phasor
measurement units (PMU), secondary or
tertiary voltage control, and applications
for power quality improvement,
economic operation, condition-based maintenance,
and electricity market support systems.
The above applications may need the
necessary data exchange. In summary,
the data acquisition and data exchange
provided by traditional SCADA cannot
keep pace with the development of power grid
automation [5,6]. The wide-area flexible
SCADA introduced in this paper can
solve not only the interaction problem of
data and application functions among the
information islands, but also the
function coordination and data
exchange for distributed application
systems in different places.
III. STRUCTURE OF THE
FLEXIBLE SCADA
A. Concept of the flexible
SCADA
The kernel of flexible SCADA is to
provide a flexible and effective platform
for data exchange and function
coordination. Flexible means that the
SCADA system is not point-to-point
communication over a fixed path, but
communication that can take place
between (or among) any two (or
more) points at any time. The main
functions can be described as follows:
a) providing data exchange and data
sharing among independent systems
b) providing function coordination and
result interaction among independent
systems
c) providing data exchange and data
sharing among different locations in a distributed
system
d) providing function coordination and
result interaction among different locations in
a distributed system
An application system not only can
obtain data from the traditional SCADA,
but also can acquire data registered
by other application systems,
and can even coordinate functions with the
other systems and obtain their result
data. This concept enlarges
the scope of data acquisition of
traditional SCADA, which not only
acquires data from devices, but also
extends the scope to conclusion data [7].
The flexible SCADA preserves the
application results of the existing dispatching
automation and can eliminate the hidden
danger of information islands.
B. Structure of flexible SCADA
a) Whole structure
The whole structure of flexible SCADA
is shown in Figure 1. Owing to its advantage of
real-time operation, the traditional
SCADA is a part of the flexible
SCADA [8,9]. The flexible SCADA
covers all levels of dispatching centers
and substations in geographical scope.
A local CMC, built on a local area
network (LAN), is set up inside each
dispatching center or substation, and
the CMCs are equal in grade to ensure
the flexibility of interaction [10].
The CMCs in the dispatching centers and
substations are connected by fiber over a
wide area network (WAN). The CMCs
distributed in the various locations mainly
have the following functions:
uniform data management for the local
independent systems, convenient for data
sharing; the capability of delegating
demands and functions of external systems
on behalf of the local independent systems;
and a security protection mechanism --
information among the local independent
systems, dispatching centers and
substations can be exchanged, but the
interaction must be validated and
delegated by the CMCs in the
different locations.

Fig.1 Structure of Flexible SCADA
system
b) The substation side of flexible
SCADA
The substation side of flexible SCADA
advances SCADA systems in two important
ways. As a substation device, the CMC in
the substation is capable of gathering and
organizing data from a variety of different
devices. These provide operational
information that has been either too costly
or technically infeasible to gather with the
traditional SCADA arrangement of master
and remote. As a powerful processing
engine, the CMC in the substation runs
algorithms and derives data that
historically have been developed in the
master station. These data are more
effectively developed by the substation
controller and are available to personnel at
the substation. As part of the larger scheme
of utility communications, the CMC in the
substation provides a gateway function to
process and transmit data from the
substation to the WAN. The SCADA
system, as well as the other users, provides
the necessary filtering to extract data of
interest and importance to the various
software applications. In brief, the
substation side of flexible SCADA is
composed of the remote host computer (or
RTU) and the CMC in the substation. The
remote host computer (or RTU) realizes
the functions of traditional SCADA. The
CMC in the substation provides data and
application functions for local storage,
local processing, transfer on demand and
interaction. It includes: the protective and
fault management information system in
the substation; the partial fault analysis
system for this substation and nearby
substations; and the production
management information system in the
substation.
c) Dispatching side of flexible SCADA
The dispatching side of flexible SCADA is
composed of the traditional SCADA and
the CMC on the dispatching side, which
hosts the application systems based on the
whole dispatching data set, for example,
EMS, DTS, the electricity management
system, load control in the distribution
network, power grid stabilization control,
and reactive voltage management [11].
Excessive and low-transfer-speed data
remain distributed at the data source (the
substation); when we analyze a fault, we
can call up partial data or result data.
The system has the following functions:
PFMIS for the power grid; the fault
analysis and decision system for the power
grid; and the production management
information system for the power grid.
C. The data-sharing mode of flexible
SCADA
At the heart of this integrated system is
the concept of a distributed database.
Unlike the centralized database of most
existing SCADA systems, the flexible
SCADA database exists in dynamic
pieces that are distributed throughout the
WAN. A modification to any of the
interconnected elements is immediately
available to all users. The key to the CMC
is its database functionality. In this
facility, all data reside in a single location,
whether they come from an EMS, a
DMIS, or are derived by the CMC itself.
This database acts as a data server and
provides data to the various data
concentrators and data processing
applications.
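A minimal sketch of the data-sharing idea just described is given below: each CMC holds its own piece of the database and pushes changes to its subscribers so that a modification is immediately visible to interested users. The class and method names are invented for illustration and do not correspond to the API of any real SCADA product.

from collections import defaultdict

# Minimal publish/subscribe sketch of the distributed CMC database idea.
# All names here are invented for illustration.
class CMC:
    """A comprehensive management center holding one piece of the database."""
    def __init__(self, name):
        self.name = name
        self.data = {}                          # local tags -> values
        self.subscribers = defaultdict(list)    # tag -> callbacks

    def subscribe(self, tag, callback):
        self.subscribers[tag].append(callback)

    def update(self, tag, value):
        """Local write; every subscriber is notified of the change at once."""
        self.data[tag] = value
        for callback in self.subscribers[tag]:
            callback(self.name, tag, value)

dispatch_log = []
substation = CMC("substation-A")
substation.subscribe("breaker.52A.status",
                     lambda src, tag, val: dispatch_log.append((src, tag, val)))
substation.update("breaker.52A.status", "OPEN")
print(dispatch_log)    # [('substation-A', 'breaker.52A.status', 'OPEN')]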
D. The pivotal technologies
The first is the system model standard.
The various devices in an electric power
system have public and private attributes
in different applications. A uniform data
structure and data dictionary should be
designed in the interest of data sharing
across the various application systems.
The attributes of a device should form the
base of its basic data, and the attributes
can be expanded to avoid data
redundancy. The standard adopts the
unified modeling language (UML) based
on the common information model (CIM)
and follows the International
Electrotechnical Commission (IEC) 61970,
IEC 61850 and IEC 61968 standards
[12][13]. The second is the software
component model. This model needs to
establish the mechanisms of kernel
management, internal agency services and
real-time communication, to adopt a
distributed middleware platform based on
the common object request broker
architecture (CORBA), and to provide an
object-oriented, heterogeneous, distributed
and standardized support platform. The
last is the mode of communication. The
wide-area ring network laid with optical
fiber between the substations and the
dispatching center can make full use of the
network's performance. The technologies
of the component object model (COM)
and the distributed component object
model (DCOM) are adopted in the
software implementation, which ensures
the transparency and high efficiency of
process communication and interfaces.
The main network uses optical fiber,
supports long transfer distances, and the
ring structure has high reliability. [14]
IV. APPLICATION IN THE POWER
GRID FAULT DECISION SYSTEM
One of the differences between flexible
SCADA and traditional SCADA is that
the necessary protection and fault record
information for fault analysis is provided
by PFMIS [15]. Previously, the collection
and maintenance of power system fault
data were conducted manually, which
could neither meet the requirements of on-
line fault disposal nor manage and
maintain the data. The flexible SCADA
can easily collect and manage the fault
information and, based on this, perform
advanced analysis and data mining. When
a fault occurs in the power grid, a large
amount of information is generated within
a few minutes. If this information is sent
to the dispatching center without
processing, it will not only cause
information blockage, but the dispatcher
will also be unable to analyze and deal
with the information in a short time,
worsening the impact of the fault.
The power grid fault decision system
based on flexible SCADA is composed of
the following parts:
following parts:
a) One is the local fault analysis system
which is set in CMC of substation side.
It has the following main functions: To
confirm the priority and transfer mode of
fault information, avoid information
blockage. The information having
upmost priority is voluntary transferred,
and the data having lower priority wait
for the evocation of dispatching center or
the substation. Local treatment of local
information. Thinking procedure which
checking signal and analyzing fault state
of worker in substation is simulated to
deal the need of dispatcher or the result
of fault analysis system of dispatching
side with the local information for the
more step.
b) The second is the overall fault analysis
system, which is set up in the CMC of the
dispatching center. It has the following
main functions: integrated analysis of the
real-time information of the total power
grid -- the data are composed of the power
flow data and switch states from SCADA
before and after the fault, together with the
protective information from the substation
CMCs, and are used to identify the
possible faulted element; coordinated
analysis of the intermediate results from
the substations and of recalled data --
when the fault or the protection behaviour
cannot yet be confirmed, detailed relay
information is requested selectively; and
analysis of fault records, which is carried
out in the substation, with attention paid to
the intermediate results. The fault analysis
system continues to work as new
information arrives; if it cannot find the
faulted element, it repeats the process.
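As a minimal illustration of the priority and transfer-mode idea in item a), the sketch below sends top-priority fault information upstream at once while holding lower-priority records until they are recalled. The priority values and message strings are invented for illustration and are not part of the system described in this paper.

import heapq

# Sketch of the priority/transfer-mode idea in item a): top-priority fault
# information is pushed to the dispatching center immediately, while
# lower-priority records wait locally until they are recalled.
class LocalFaultBuffer:
    SEND_AT_ONCE = 0                    # highest priority, pushed spontaneously

    def __init__(self, send):
        self.send = send                # callback that ships a record upstream
        self.held = []                  # lower-priority records, kept as a heap

    def record(self, priority, message):
        if priority == self.SEND_AT_ONCE:
            self.send(message)          # ship only the essentials to avoid blockage
        else:
            heapq.heappush(self.held, (priority, message))

    def on_recall(self, n=10):
        """The dispatching center (or a neighboring substation) recalls held data."""
        return [heapq.heappop(self.held)[1] for _ in range(min(n, len(self.held)))]

sent = []
buf = LocalFaultBuffer(send=sent.append)
buf.record(0, "breaker trip, line L1")               # transferred immediately
buf.record(2, "fault recorder waveform, line L1")    # waits to be recalled
print(sent)               # ['breaker trip, line L1']
print(buf.on_recall())    # ['fault recorder waveform, line L1']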
V. CONCLUSIONS
The flexible SCADA described in this
paper is an enlargement and development
of traditional SCADA; it widens the data
sources and enriches the data foundation
of power system advanced software.
Compared with the traditional SCADA, it
has the following peculiarities [16]:
broad data sources -- it can collect not
only the normal-state information of the
power grid but also dynamic data and
fault data, making it possible to monitor
and control dynamic processes and to
analyze the faulted element after a fault;
data management and intelligent analysis
functions in the substation -- by providing
intermediate results to other substations
and the dispatching center, it is an
important step toward implementing
distributed control of the power system;
and information interaction on demand --
it is not necessary to centralize all
processing functions in the control center,
and an independent system can access the
information islands seamlessly.
VI. REFERENCES
Periodicals:
[1] Xin Yaozhong, "Several problems
discussed in electric power information
technology," Electric Power Information
Technology, pp. 20-23, Jul. 2003.
[2] GAO Zhanjun, PAN Zhencun and
BIAN Pen, "Modeling of relay
protection and fault information system,"
Relay, vol. 33, pp. 50-53, Feb. 2005.
[3] ZHOU Ming, REN Jianwen and LI
Genyin, "A multi-agent based
dispatching operation instructing system
in electric power systems," Proceedings
of the CSEE, vol. 24, pp. 58-62, Apr.
2004.
[4] Zhang Hongbo, Zhang Boming and
Sun Hongbin, "A decomposition and
coordination dynamic power flow
calculation for multi-area interconnected
system based on asynchronous
iteration," Automation of Electric Power
Systems, vol. 24, pp. 1-5, Dec. 2003.
[5] Robert H. McClanahan, "SCADA
and IP: is network convergence really
here?" IEEE Industry Applications
Magazine, pp. 29-36, Mar./Apr. 2003.
[6] WANG Mingjun, "Development of
dispatching automation technology in
China -- from SCADA to EMS," Power
System Technology, vol. 28, pp. 44-46,
Feb. 2004.
[7] Tomomichi Seki, Toshibumi Seki and
Tatsuji Tanaka, "Flexible network
integrated supervisory control for
power systems based on distributed
objects," Electrical Engineering in
Japan, vol. 136, no. 4, 2001.
[8] LIN Rongdong, HUNG Bin and
CHEN Feng, "Construction of data
center in dispatching automation
system," Automation of Electric Power
Systems, vol. 27, pp. 79-81, Mar. 2003.






[ELECTROMAGNETIC SUSPENSION SYSTEM]



N.A.Mohamed Sarfraz Deedat
M.B. Mohamed Haseeb
Final year B.E.
Velammal Engineering College
Chennai- 600 066



E-mail Address:
youcanwin99@yahoo.co.in
Contact no: 9840529580
HYBRID SUSPENSION SYSTEM USING FUZZY LOGIC CONTROL
ELECTRO MAGNETIC SUSPENSION SYSTEM (EMSS)

ABSTRACT

The Electro Magnetic Suspension System (EMSS) is an independent, revolutionary suspension
system that reacts instantly to changing road conditions by negating the up and down
movement of the wheels. Our paper proposes to combine the features of superior control
and greater comfort in a single system. To achieve this, we have designed a system that
uses a Linear Electromagnetic Motor (LEM), mounted on each wheel to push and
pull the wheel without jostling the car body. The electromagnetic motors use input from
sensors throughout the vehicle, designed to adapt to our system so as to react to bumps
and potholes instantaneously. This is done by exerting a downward force to extend the wheel
into potholes while keeping the car level constant. As the wheel pops back up onto the
road, the suspension recaptures nearly all the energy expended. We have designed a
powerful control algorithm, controlled by a fuzzy logic unit, so as to provide greater
momentum and reliability to the system. A detailed model of the LEM has been created, in
which we have focused on stress distribution and displacement analysis under working
conditions. We have also simulated a customized fuzzy logic controller for our system, and
beyond smoothing out bumpy roads, the system also improves handling, virtually eliminating
body roll in tight turns and minimizing pitching motion during braking and acceleration, as
depicted in the performance studies.













INTRODUCTION :-
The LEM, which forms the heart of our system, is mounted on each wheel of
the automated vehicle. Suspension systems in general are comprised of front and rear
suspension modules that bolt to the underside of the vehicle. Our suspension system takes
advantage of this configuration by creating replacement front and rear independent
suspension modules. Our automotive suspension focuses on two main aspects, namely
passenger comfort and vehicle control. Comfort is provided by isolating the passengers
from road disturbances, and control is achieved by keeping the car body from rolling and
pitching excessively and by maintaining good contact between the tire and the road. We have
determined the optimum possible performance of an automotive suspension by stress
analysis of the LEM using Finite Element Analysis (FEA) and also by graphically analyzing
the displacement of the motor under practical conditions.

STRUCTURE OF EMSS :-
Our suspension system includes a linear electromagnetic motor and power
amplifier at each wheel, and a set of control algorithms. This proprietary combination of
suspension hardware and control software makes this innovation possible.



Block diagram of the electromagnetic suspension system: train of sensors -> suspension control unit -> linear electromagnetic motor.





MODULES OF EMSS :-
LINEAR ELECTROMAGNETIC MOTOR (L.E.M) :-
A linear electromagnetic motor is installed at each wheel of a vehicle. This
forms the heart of our suspension system. Inside the linear electromagnetic motor are
magnets and coils of wire. When electrical power is applied to the coils, the motor retracts
and extends, creating motion between the wheel and car body.
One of the key advantages of an electromagnetic approach is speed. LEM responds
quickly enough to counter the effects of bumps and potholes, maintaining a comfortable
ride. Additionally, the motor has been designed for maximum strength in a small package,
allowing it to put out enough force to prevent the car from rolling and pitching during
aggressive driving maneuvers.


The LEM offers easy two-point mounting. The only electrical connections to the
motor are for power and control. The front corner suspension module features simple strut
and link construction. The LEM is used as a telescoping suspension strut along with a two-
piece lower control arm. A torsion bar spring connected to one end of the lower arm
supports the weight of the vehicle. The wheel damper keeps the tire from bouncing.
Representation of the linear electromagnetic motor
POWER AMPLIFIER :-
The power amplifier delivers electrical power to the motor in response to
signals from the control algorithms. The amplifiers are based on switching amplification
technologies. The regenerative power amplifiers allow power to flow into the linear
electromagnetic motor and also allow power to be returned from the motor.
When our suspension encounters a pothole, power is used to extend the motor and
isolate the vehicle's occupants from the disturbance. On the far side of the pothole, the
motor operates as a generator and returns power back through the amplifier. Hence the
suspension designed by us requires less than a third of the power of a typical vehicle's air
conditioning system.
CONTROL ALGORITHMS :-
Our suspension system is controlled by a set of mathematical algorithms.
These control algorithms operate by observing sensor measurements taken from around
the car and sending commands to the power amplifiers installed in each corner of the
vehicle. The goal of the algorithms is to allow the car to glide smoothly over roads and to
eliminate roll and pitch during driving thereby providing both control and comfort.

























Schematic representation of EMSS: input parameters (comfort, battery charge, speed of vehicle, air drag, dimensions of irregularity, steering sensor) are fed from the sensor train to the controller, which drives the acceleration and gear control actuator, the brake actuator, the linear electromagnetic motor actuator and the display.
Fuzzy control:
Speed control based on sensor data:
Block diagram: reference inputs and the dimension of irregularity feed fuzzy controller 1, which drives the speed controller and the throttle and gear box actuation.
The inputs to the controller are the dimensional irregularity of the potholes and the velocity
of the car from the throttle and gear box actuation.
Rule Base:
The total number of rules required, taking all the input parameters into
consideration, is 5000.
S.No.  Parameter under consideration   No. of levels
1.     Comfort                          2
2.     Battery charge                   2
3.     Speed of vehicle                 5
4.     Steering sensor                 10
5.     Dimensions of irregularity       5
6.     Air drag                         5

So,
Total no. of rules = 2 x 2 x 5 x 10 x 5 x 5 = 5000
The fuzzy logic simulation is done by considering two main factors, namely:
Speed of vehicle
Dimensions of irregularity
Hence the simulation is done using 25 rules.



The following are the design parameters.
1. If velocity is very slow and irregularity is zero, then throttle is large positive.
2. If velocity is slow and irregularity is small positive, then throttle is small positive.
3. If velocity is optimum and irregularity is moderate positive, then throttle is small
positive.
4. If velocity is very large and irregularity is large positive, then throttle is large negative.

Velocity specifications
Irregularity          Very slow  Slow  Optimum  Fast  Very fast
Zero                  LP         SP    ZR       SN    LN
Small positive        LP         SP    ZR       SN    LN
Moderate positive     SP         SP    SN       LN    LN
Large positive        ZR         SN    LN       LN    LN

Abbreviations: SP - small positive; SN - small negative; LP - large positive; LN - large negative; ZR - zero reference.
Fig: Range of membership functions (output axis from -60 to +60).
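To show how such a rule table can be exercised, the sketch below implements a minimal Mamdani-style controller over the 25 rules, using triangular membership functions and weighted-average defuzzification. The input ranges (velocity 0-150 km/h, irregularity 0-10 cm), the output centres (-60 to +60, matching the figure) and the fifth irregularity row are assumptions added for illustration.

# Minimal Mamdani-style sketch of the rule table above, using triangular
# membership functions and a weighted average of output centres.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Assumed input ranges: velocity 0-150 km/h, irregularity 0-10 cm.
VEL = {"very_slow": (-1, 0, 37.5), "slow": (0, 37.5, 75), "optimum": (37.5, 75, 112.5),
       "fast": (75, 112.5, 150), "very_fast": (112.5, 150, 151)}
IRR = {"zero": (-1, 0, 2.5), "small": (0, 2.5, 5), "moderate": (2.5, 5, 7.5),
       "large": (5, 7.5, 10), "very_large": (7.5, 10, 11)}
THROTTLE = {"LN": -60, "SN": -30, "ZR": 0, "SP": 30, "LP": 60}   # assumed output centres

# Rows: irregularity, columns: velocity -- taken from the rule table above
# (the "very_large" irregularity row is an assumed extension of that table).
RULES = {
    "zero":       ["LP", "SP", "ZR", "SN", "LN"],
    "small":      ["LP", "SP", "ZR", "SN", "LN"],
    "moderate":   ["SP", "SP", "SN", "LN", "LN"],
    "large":      ["ZR", "SN", "LN", "LN", "LN"],
    "very_large": ["ZR", "SN", "LN", "LN", "LN"],
}
VEL_ORDER = ["very_slow", "slow", "optimum", "fast", "very_fast"]

def throttle(velocity_kmh, irregularity_cm):
    num = den = 0.0
    for irr_name, row in RULES.items():
        for vel_name, out in zip(VEL_ORDER, row):
            w = min(tri(velocity_kmh, *VEL[vel_name]), tri(irregularity_cm, *IRR[irr_name]))
            num += w * THROTTLE[out]
            den += w
    return num / den if den else 0.0

print(f"{throttle(20, 1):+.1f}")    # slow speed, small bump  -> positive throttle
print(f"{throttle(130, 8):+.1f}")   # high speed, large bump  -> strongly negative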

WORKING :-
The linear electromagnetic motor consists of magnets and wire coils. The
electrical connections are for power and control. Up front there is a modified strut
layout and at the rear a double linkage, with the electromagnetic motor housing attached
between the vehicle's body and each wheel and then connected to the computer controls. When
electric power is applied to the coils, the motor housing retracts and extends, thus creating
motion between the wheel and the car body.
The motor can respond quickly to wheel motions prompted by road bumps and
potholes and then counteract that wheel movement to prevent it from being transmitted
through the car's structure. The LEM has the ability to extend (as if into a pothole) and
retract (as if over a bump) with much greater speed than a fluid damper. These lightning-
fast reflexes and precise movements allow the wheel's motion to be so finely
controlled that the body of the car remains level, regardless of the wheel level.
The LEM can also counteract the body motion of the car while accelerating, braking and
cornering, giving the driver a greater sense of control and the passengers greater comfort.
To facilitate a smooth ride, wheel dampers inside each wheel hub smooth out small
road imperfections, isolating even those nuances from the passenger compartment.
Torsion bars take care of supporting the vehicle, allowing our system to concentrate on
optimizing handling, fuel economy and ride dynamics. The amplifier is a regenerative
design that uses the compression force to send power back.

Design of sensors :

To apply sonar range sensors in the navigation of our car, we need to obtain dense
maps of the environment, with the regions in the maps classified as empty, occupied or
unknown with sufficient accuracy for autonomous navigation. This includes providing input
to our fuzzy controller and obstacle avoidance.
ULTRASONIC SENSOR SYSTEM :
The main sensor used in our proposed design is the region of constant depth
detector (RCDD). The RCDD sonar sensor array consists of a ring of several pairs of
Polaroid ultrasonic transducers. It is mounted at the centre of each link of the planar
manipulator. In 3-D, two rings of sensors are positioned centrally on two adjacent sides of
each link. Two transducers are employed so that, by using both as receivers for a single
sonar pulse, differential time-of-flight data can be obtained.

The ability of ultrasonic sensors to detect an obstacle depends on factors like the
orientation, reflectivity and curvature of the surface of the obstacle toward the sensor, and
also on the threshold used for the detection of received echoes. Each sonar range
measurement is interpreted as providing information about probably empty and somewhere-
occupied volumes on the road subtended by the sonar beams. This occupancy information is
modeled by probability profiles in which empty, unknown and occupied regions are
represented. The accuracy of the sonar map is improved by taking sets of range
measurements from multiple views. To approximate the ultrasonic measurements we use
segmentation.
PRACTICAL STUDY ANALYSIS :
Analysis of LEM:








Wheel Assembly Stress Analysis
Assembly Of LEM :








Performance Analysis When Encountering A Speed Breaker







SEGMENTATION :
Ultrasonic points are difficult to approximate. This is due to their small values and
their sensitivity to continuity tests. Consider a finite number of segments corresponding to
the different sides of the car. A set of measurements is selected with a gradient not exceeding an upper
limit. Some sensors, up to a certain percentage, can be without measurements, and a minimum
number of accepted measurements can be imposed. A segment approximating the resulting
measurements in the least-squares sense is then calculated. This method produces approximate
information on the orientation of obstacles other than plane obstacles. dmin is the
minimal distance perpendicular to the segment from the link of the sensor. The distance
d used as one of the inputs to the fuzzy controller can be obtained by decreasing dmin by
the minimum approach distance d0:
    d = dmin - d0
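The segmentation step above can be illustrated with a few lines: fit a line to a set of sonar range points in the least-squares sense, take the perpendicular distance dmin from the sensor to that line, and subtract d0. The sample points and the value of d0 are assumptions made for illustration.

import math

# Least-squares segment fit to sonar points and the controller input d = dmin - d0.
# Sample points and d0 are assumed for illustration; a non-vertical obstacle is assumed.

def fit_line(points):
    """Least-squares fit of y = m*x + c to (x, y) points."""
    n = len(points)
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sxx = sum(p[0] ** 2 for p in points); sxy = sum(p[0] * p[1] for p in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
    c = (sy - m * sx) / n
    return m, c

def perpendicular_distance(m, c, x0=0.0, y0=0.0):
    """Distance from the sensor at (x0, y0) to the line y = m*x + c."""
    return abs(m * x0 - y0 + c) / math.sqrt(m ** 2 + 1)

sonar_points = [(-0.4, 1.02), (-0.2, 1.01), (0.0, 0.99), (0.2, 1.00), (0.4, 0.98)]
m, c = fit_line(sonar_points)
d_min = perpendicular_distance(m, c)
d0 = 0.25                                # assumed minimum approach distance, meters
print(f"dmin = {d_min:.2f} m, d = {d_min - d0:.2f} m")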

VEHICLE PERFORMANCE :-

Vehicles equipped with the electromagnetic suspension system have been
tested on a variety of roads and under many different conditions, demonstrating the comfort
and control benefits drivers will encounter during day-to-day driving. In addition, the
vehicles have undergone handling and durability testing at independent proving grounds.
When test drivers execute aggressive cornering maneuvers like a lane change, the
elimination of body roll is appreciated immediately. Similarly, drivers quickly notice the
elimination of body pitch during hard braking and acceleration. Professional test drivers
have reported an increased sense of control and confidence resulting from these
behaviours.

When test drivers take the EMSS-equipped car over bumpy roads, they report that
the reduction in overall body motion and jarring vibrations results in increased comfort and
control. Our system uses only about a third of the power of a vehicle's air conditioning
system. The following figures indicate the difference between normal suspension systems
and our balanced suspension system.


BODY ROLL WHILE CORNERING
Two vehicles of the same make and model are shown performing an aggressive
cornering maneuver. The vehicle on the left has the original factory-installed suspension
and the vehicle on the right has the Electro Magnetic Suspension System.

BODY MOTION BUMP COURSE
The vehicle on the left has the original factory-installed suspension and the vehicle
on the right has the Electro Magnetic Suspension System.

DOUBLE LANE CHANGING
OUR CONTRIBUTIONS :-
Our major contributions in this paper, which offers both control and comfort, are:
Design of a powerful control algorithm.
Adaptive fuzzy logic controller design.
Modeling of the LEM, with stress and displacement analysis, which makes our suspension
system balanced and innovative.
The goal of our control algorithms is to allow the car to glide smoothly over roads. We
have included a wheel damper at each wheel to keep the tire from bouncing as it rolls down
the road. Unlike conventional dampers, which transmit vibrations to the vehicle occupants
and sacrifice comfort, the wheel damper in our suspension system operates without
pushing against the car body, maintaining passenger comfort. To support the need for
computational speed, we have chosen electromagnetic motors: after analyzing conventional
and variable spring/damper systems as well as hydraulic approaches, none had the
combination of speed, strength and efficiency necessary to provide the desired
results. So we have used a modified strut layout at the front, and the rear suspension modules
use a double linkage.
ADVANTAGES OF OUR APPROACH :-

Our Suspension system has significant advancements in four key
disciplines: linear electromagnetic motors, power amplifiers, control algorithms, and
computation speed. The amplifiers employed by us are based on switching amplification
technologies. The motor has been designed for maximum strength in a small package,
allowing it to put out enough force to prevent the car from rolling and pitching during
aggressive driving maneuvers. Since the system employs Fuzzy logic, the accuracy and
reliability are high. The good internal adaptiveness and coordination of the various components
involved in the system is shown with the help of stress distribution and displacement
analysis.

FUTURE PROSPECTS :-

Many of these techniques can be updated and implemented to form the
emerging suspension system of the future.

Sensors in the vehicle can be used to create electronic profiles of roads.
These profiles can be collected in a database and shared with other vehicles
via satellite.
Thus, for a given destination, the optimum route can be traced.









FUTURE IMPLEMENTATION OF EMSS: CARS FITTED WITH EMSS exchange road-profile data via SATELLITE with a CENTRALISED GROUND DATABASE
CONCLUSION :-
For the first time, our suspension system demonstrates the ability to combine
in one automobile a much smoother ride than any luxury sedan and less roll and pitch than
any sports car. This performance results from a balanced combination of suspension
hardware and control algorithms with the reliability and decision making capability of the
Fuzzy logic controller. This integrated, robust and practically feasible technology will be the
future of automobiles and will arguably become one of the best when it comes to fuel
economy and cost.

BIBLIOGRAPHY :-
1. Tires, Suspension and Handling by John C. Dixon, 2004.
2. Electromagnetic Suspension Systems by Ronald K. Jurgen, 2005.
3. Measurement and Subjective Evaluation of Vehicle Handling by Bergman W., 1998.
4. Theory of Ground Vehicles by J. Wong, 1999.




THE UNIFIED POWER FLOW
CONTROLLER





BY

HARITHA. V. V. S. S.

NAGA LAKSHMI DEVI. M



E-MAIL:

seethaharitha@gmail.com

nagalakshmidevi@gmail.com





ABSTRACT:

Nowadays there are many problems in the power system due to
increasing demand. FACTS controllers are emerging as viable and
economic solutions to the problems of large interconnected AC
networks, which can endanger system security. These devices are
characterized by their fast response, absence of inertia, and minimum
maintenance requirements. Thyristor-controlled equipment requires
passive elements (reactors and capacitors) of large ratings. In contrast,
an all solid state device using GTOs (gate turn-off thyristor valves) leads
to a reduction in equipment size and has improved performance.

The Unified Power Flow Controller (UPFC) is an all-solid state
power flow controller that can be used to control the active and reactive
power in the line independently in addition to control of local bus
voltage. The unique capability of the Unified Power Flow Controller is
that it maintains prescribed real and reactive power flow in the line and
independently controls them at both the sending- and the
receiving-ends of the transmission line.

In this paper, we present a control scheme for the UPFC to
improve stability and damping. UPFC is able to control both the
transmitted real power and, independently, the reactive power flows at
the sending- and the receiving-end of the transmission line. The paper
describes the basic concepts of the proposed generalized P and Q
controller and compares it to the more conventional power flow
controllers.
INTRODUCTION:
The Unified Power Flow
Controller (UPFC) was proposed
for real-time control and dynamic
compensation of ac transmission
systems, providing the necessary
functional flexibility to solve many
of the problems faced by the utility
industry.
The UPFC primarily injects
a voltage in series with the line
whose phase angle can vary
between 0 and 360 degrees with respect to the
terminal voltage and whose magnitude
can be varied (depending on the
rating of the device). Hence, the
device must be capable of both
generating and absorbing both real
and reactive power. This can be
achieved by using two Voltage
Source Converters (VSCs)
employing GTOs (gate turn-off
thyristor valves) as shown in fig 1.
Fig. 1 - UPFC Configuration

The two converters are
operated from a common dc link
provided by a DC storage capacitor.
Converter 2 is used to inject the
required series voltage via an
injection transformer. The basic
function of converter 1 is to supply
or absorb the real power demanded
by Converter 2 at the common dc
link. Converter 1 can also generate
or absorb controllable reactive
power, thus, providing shunt
compensation for the line
independently of the reactive power
exchanged by Converter 2. Thus,
the UPFC can be modeled by a
controllable voltage source Vser in
series with the line and a
controllable current source Ish in
shunt as shown in Fig 2.

As explained above, the
UPFC basically has three
controllable parameters the
magnitude and angle of the series
injected voltage and the magnitude
of the shunt reactive current. The
internal control systems provide the
gating signals to the converter
valves so as to operate them to
provide the command series voltage
and simultaneously draw the
desired shunt reactive current. The
external controls on the other hand,
decides the reference values of the
series voltage and shunt reactive
current. These values can be set to
some constant values or dictated by
an outer feeder control loop to meet
specific requirements. Automatic
control of the shunt reactive power
to regulate the bus voltage is well
known for SVC and STATCON.
The same principle can be extended
to the control of the shunt reactive
current of the UPFC. However, the
control of the series injected voltage
can be achieved in different ways to
meet various objectives. The focus
of this paper is on the control of the
series injected voltage in steady
state and under disturbances to
damp power oscillations and
improve stability.
BASIC PRINCIPLE OF P AND Q
CONTROL:
Consider Fig 3. At (a) a simple
two machine (or two bus ac inter-tie)
system with sending-end voltage
Vs, receiving-end voltage Vr, and
line (or tie) impedance X (assumed,
for simplicity, inductive) is shown.
At (b) the voltages of the system in
the form of a phasor diagram are shown
with transmission angle δ and |Vs|
= |Vr| = V. At (c) the transmitted
power P {P = (V*V/X) sin δ} and
the reactive power Q = Qs = Qr
{Q = (V*V/X)(1 - cos δ)} supplied at
the ends of the line are shown
plotted against angle δ. At (d) the
reactive power Q = Qs = Qr is
shown plotted against the
transmitted power P corresponding to the
stable values of δ (i.e. 0 ≤ δ ≤
90°).
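As a quick numerical illustration of these relations (our own sketch, not part of the original paper), the short Python fragment below evaluates P and Q over the stable range of the transmission angle; the per-unit values of V and X are hypothetical.

import numpy as np

V = 1.0          # sending- and receiving-end voltage magnitude, p.u.
X = 0.5          # line (tie) reactance, p.u.

delta = np.deg2rad(np.arange(0, 91, 15))      # stable range 0..90 degrees
P = (V * V / X) * np.sin(delta)               # transmitted real power
Q = (V * V / X) * (1 - np.cos(delta))         # reactive power at each line end

for d, p, q in zip(np.rad2deg(delta), P, Q):
    print(f"delta = {d:5.1f} deg   P = {p:.3f} p.u.   Q = {q:.3f} p.u.")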





Fig. 3. Simple two machine system (a),
related voltage phasors (b), real and
reactive power versus transmission
angle (c), and sending-end/receiving-
end reactive power versus transmitted power (d).
Consider Fig 4. where the
simple power system of Fig 3. is
expanded to include UPFC. The
UPFC is represented by a
controllable voltage source in series
with the line which, as explained in
previous section, can generate or
absorb reactive power that it
negotiates with the line, but the real
power it exchanges must be
supplied to it, or absorbed from it
by the sending-end generator. The
voltage injected by the UPFC in
series with the line is represented by
phasor Vpq having magnitude Vpq
(0 ≤ Vpq ≤ 0.5 p.u.) and angle between
0° and 360° measured from the given
phase position of phasor Vs as
illustrated in the figure. The line
current, represented by phasor Is,
flows through series voltage source,
Vpq, and generally results in both
reactive and real power exchange.
In order to represent the UPFC
properly, the series voltage source
is stipulated to generate only the
reactive power Qpq it exchanges
with the line. Thus, the real power
Ppq it negotiates with the line is
assumed to be transferred to the
sending-end generator as if a perfect
coupling for real power flow
between it and the sending-end
generator existed. This is in
agreement with the UPFC circuit
structure in which the dc link
between the two constituent
inverters establishes a bi-directional
coupling for real power flow
between the injected series voltage
source and the sending-end bus. As
Fig. 4 implies, in the present
discussion it is further assumed for
clarity that the shunt reactive
compensation capability of the
UPFC is not utilized. That is, the
UPFC shunt inverter is assumed to
be operated at unity power factor,
its sole function being to transfer
the real power demand of the series
inverter to the sending-end
generator. With these assumptions,
the series voltage source, together
with the real power coupling to the
sending-end generator as shown in
fig 4, is an accurate representation
of the basic UPFC.


It can be readily observed in fig 4
that the transmission line sees
Vs+Vpq as the effective sending-
end voltage. Thus it is clear that the
UPFC affects the voltage (both its
magnitude and angle) across the
transmission line and therefore it is
reasonable to expect that it is able to
control, by varying the magnitude
and angle of Vpq, the transmittable
real power as well as the reactive
power demand of the line at any
given transmission angle between
the sending-end and receiving-end
voltages.







Control Of Series Injected
Voltage:
The series injected voltage can be
adjusted to meet a required P and Q
demand in the transmission line.
The series injected voltage can be
decomposed into two components:
a component in phase with the
sending (receiving) end voltage
which mainly affects the reactive
power flow and a component in
quadrature with the sending
(receiving) end voltage which
mainly affects the real power flow.
These components can be
controlled to meet the required
power demand.
An alternative to using either
the sending-end or the receiving-
end voltages as reference is to use
current as the reference. The
injected voltage can be split into
two components: one component in
phase with the current and the other
in quadrature with the current.
Inserting a component of voltage in
phase with the current is equivalent
to inserting a resistance (positive or
negative) in the line and inserting a
voltage component in quadrature
with current is equivalent to
inserting a reactance (capacitive or
inductive). The controller discussed
in this paper is designed to control
the magnitudes of the two
components of the series injected
voltage: Vser1 in phase with the
current and Vser2 in quadrature
with the current, independently to
regulate P and Q at the receiving-
end. In addition, the control scheme
is aimed at damping the power
swings and maintaining stability
after a disturbance.
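The decomposition into current-referenced components can be illustrated with a few lines of Python (our own sketch, not the paper's code); the phasor values used below are hypothetical.

import cmath

def decompose(v_inj, i_line):
    """Return (Vser1, Vser2): components of v_inj in phase with and in
    quadrature with the line-current phasor i_line."""
    unit_i = i_line / abs(i_line)               # unit phasor along the current
    vser1 = (v_inj * unit_i.conjugate()).real   # in-phase component (acts like an inserted resistance)
    vser2 = (v_inj * unit_i.conjugate()).imag   # quadrature component (acts like an inserted reactance)
    return vser1, vser2

v_pq = cmath.rect(0.2, cmath.pi / 3)     # injected voltage, 0.2 p.u. at 60 degrees
i_s  = cmath.rect(1.0, -cmath.pi / 6)    # line current, 1.0 p.u. at -30 degrees
print(decompose(v_pq, i_s))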
Controller for UPFC:
If the sending end voltage is
Vs and the receiving end voltage is
Vr, and injected voltage is assumed
to be made of two voltage sources
whose magnitudes are Vser1 and
Vser2 in series, the line with the
UPFC can be represented by Fig 5.,
if the power demand at the
receiving end (PR and QR ) and the
receiving end voltage Vr are
specified, the current required to
meet this demand and the voltage
Vs2 can be computed from
Vr Ir* = Pr + j Qr (1)
Vs2 = Vr + j Ir Xr (2)
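A hedged numerical sketch of equations (1) and (2), assuming the usual complex-power convention Vr·conj(Ir) = Pr + jQr; the per-unit values below are hypothetical and are not taken from the paper.

Pr, Qr = 1.0, 0.3          # receiving-end real and reactive power demand, p.u.
Vr = 1.0 + 0.0j            # receiving-end voltage phasor, p.u.
Xr = 0.4                   # line reactance between Vs2 and Vr, p.u.

S = Pr + 1j * Qr
Ir = (S / Vr).conjugate()          # from Vr * conj(Ir) = Pr + j*Qr, equation (1)
Vs2 = Vr + 1j * Ir * Xr            # equation (2)

print(f"Ir  = {Ir.real:.4f} {Ir.imag:+.4f}j p.u.")
print(f"Vs2 = {Vs2.real:.4f} {Vs2.imag:+.4f}j p.u., |Vs2| = {abs(Vs2):.4f} p.u.")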
The magnitude of the in-
phase component, Vser1, is
controlled to maintain the
magnitude of Vs2 at the value
obtained from (2) and the
magnitude of the quadrature
component, Vser2, controlled to
meet the required power demand
PR. PR and QR demands are
decided by the changing conditions
in the system and can be varied
according to the load conditions at
any given point of time. However,
during a contingency the constant
power control is not desirable in the
interest of stability. Hence, the
power flow in the line has to be
suitably modulated to improve
stability and damp the oscillations.



Controller Structure for
UPFC:
Controller for Vser1:
The in phase component is used
to regulate the magnitude of voltage
Vs2. The controller structure is as

Fig. 6(a)-Controller for Vser1
shown in Fig 6a. In the figure, Vs2ref is
the value of the desired magnitude of
voltage Vs2 obtained from equation
(2). Tmeas is the time constant that
represents the delay in measurements. A
simple integral controller is used for
the control of Vser1. A positive voltage
insertion corresponds to a negative
series resistance. During a
contingency, Vs2ref can be varied.




Controller for Vser2:
Vser2 is controlled to meet
the real power demand in the line.
The controller structure is shown in
Fig 6b. Peo is the steady state
power, Dc and Kc are constants to
provide the damping and
synchronizing powers in the line, sm
is the generator slip, Tmeas is the
measurement delay and Pline is the
actual power flowing in the line. It
is to be noted that a positive voltage
injection corresponds to a
capacitive voltage. A washout
circuit is provided to eliminate any
steady-state bias in the controller.
A few points are to be noted
in the above controller structure.
Setting of Dc and Kc to zero results
in a constant power controller
where the injected voltage is
controlled so as to maintain the line
power at Peo. Peo itself can be
changed so as to obtain different
steady state power flows depending
on changing network conditions.
The constants Dc and Kc have to be
carefully chosen so that the system
is neither overdamped nor
underdamped. The gain of the
integral controller should be tuned
properly to prevent too frequent
hitting of the limits that would give
an undesirable response.
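A minimal discrete-time sketch of the constant-power mode described above (the Dc = Kc = 0 case); the washout and the damping/synchronizing terms of Fig. 6(b) are omitted, and the gain, measurement lag, step size and output limits are hypothetical. This is our own illustration, not the authors' controller.

def vser2_constant_power(p_line_samples, Peo, Ki=2.0, Tmeas=0.02, dt=0.001, limit=0.5):
    """Integral control of the quadrature voltage Vser2 toward line power Peo."""
    p_meas = p_line_samples[0]
    vser2 = 0.0
    history = []
    for p_line in p_line_samples:
        p_meas += dt / Tmeas * (p_line - p_meas)   # first-order lag models the measurement delay Tmeas
        vser2 += Ki * (Peo - p_meas) * dt          # integral action drives Pline toward Peo
        vser2 = max(-limit, min(limit, vser2))     # limits represent the converter rating
        history.append(vser2)
    return history

print(vser2_constant_power([0.8, 0.9, 1.1, 1.2, 1.0, 0.95], Peo=1.0)[-1])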


COMPARISON OF UPFC TO
THE TCPAR:
1. The TCPAR consists of a shunt-
connected excitation transformer, a
series insertion transformer and a
thyristor switch arrangement.
The UPFC consists of two
switching converters and a shunt
transformer.
2. In the TCPAR, the total VA
exchanged by the series insertion
transformer appears at the primary
of the excitation transformer as a
load demand. Thus, both the real
and reactive power that the phase angle
regulator supplies to, or absorbs
from, the line appear as a load demand on the AC system.
UPFC itself generates the
reactive power part of the total VA
it exchanges as the result of the
series voltage injection and it
presents only the real power part to
the AC system as a load demand.
3. The UPFC has a wider range
for real power control and facilitates
the independent control of the
receiving end reactive power
control over a broad range.

COMPARISON OF UPFC AND
TCSC:
1. TCSC is an actively
controlled but functionally passive
impedance. If the current through
the line is zero then both P and Q
are zero irrespective of the value of
Xc.
UPFC is an actively
controlled voltage source, it can
force upto 0.5p.u. real power flow
in either direction and also control
reactive power exchange between
the sending and receiving end
buses.
2. The TCSC is a series impedance,
and thus the compensating voltage
it produces is proportional to the
line current, which is itself a function
of the line voltage.
The UPFC is a voltage source; the
maximum compensating voltage it
produces is independent of the line
current.
3. The range of the TCSC for real
power control remains a constant
percentage of the power transmitted
by the uncompensated line at all
transmission angles. The actual
change in transmitted power
progressively increases with
increase in δ and reaches that of
the UPFC at δ = 90°.
4. The maximum transmitted power
of 1.5 p.u. obtained with the TCSC
at full compensation is associated
with a 1.5 p.u. reactive power demand
at the receiving end.
The same 1.5 p.u. power transmission is
achieved with only a 1.0 p.u. reactive power
demand when the line is
compensated with the UPFC.
5. UPFC has superior power
flow control characteristics
compared to TCSC.
6. The UPFC cannot produce series
resonance with the line reactance,
while the TCSC can produce series
resonance with the line reactance.

CONCLUSION:
The UPFC being a very
versatile device can be used for fast
control of active and reactive power
in the line. In this paper, we have
proposed a control scheme for the
series injected voltage of the UPFC,
wherein, the injected voltage is split
into components in phase with and
in quadrature with the line current.
This control scheme provides a
locally measurable control signal as
opposed to using the sending end
voltage as control signals which
would require either synthesis or
telemetry of these signals, unless
the UPFC is located close to them.
The component in phase with the
current is used indirectly to control
the reactive power, by voltage
regulation of the UPFC receiving-
end bus, and the quadrature
component is used to control the
real power flow in the line. By
addition of damping and
synchronizing torques during
contingencies, it is possible to
modulate the real power flow so as
to damp the power oscillations very
fast and also improve transient
stability.

References:
1. L. Gyugyi et al., "The unified power flow controller: a new approach to power transmission control," IEEE Trans. on Power Delivery, vol. 10, no. 2, pp. 1085-1097, April 1995.
2. L. Gyugyi, "Unified power flow concept for flexible AC transmission systems," IEE Proceedings-C, July 1992.














TRIPLE MODULAR REDUNDANT PLC
(Controlling nuclear power station)












By,

R.Sridhar and N.Manikandan
III ECE
Mepco Schlenk Engg. College
Sivakasi 626005

E-mail : ntmanikandan@hotmail.com
jjxantony@gmail.com
Phone : 04562 235965,66,68


INTRODUCTION:

The programmable logic controller (PLC) or programmable
controller can be classified as a solid state member of the computer
family. The programmable controller is an industrial computer in which
control devices such as limit switches, pushbuttons,
photoelectric sensors, flow switches or pressure switches, to name a few,
provide incoming control signals into the unit; an incoming control signal
is called an input.
Incoming control signals, or inputs, interact with instructions
specified in the user program, which tells the PLC how to react to the
incoming signals. The user program also directs the PLC on how to
control field devices like motor starters, pilot lights and solenoids. A
signal going out of the PLC to control a field device is called an output.
A programmable controller is a digitally operated electronic
system, designed for use in an industrial environment, which uses a
programmable memory for the internal storage of user-oriented instructions
for implementing specific functions such as logic, sequencing, timing,
counting, and arithmetic to control, through digital or analog inputs and
outputs, various types of machines or processes. Both the PLC and its
associated peripherals are designed so that they can be easily integrated
into an industrial control system and easily used in all their intended
functions.


WHY USE A PLC:

Gain complete control of the manufacturing process.
Achieve consistency in manufacturing.
Improve quality and accuracy.
Work in difficult or hazardous environment.
Increase productivity
Shorten the time to market
Lower the cost of quality, scrap and rework.
Offer greater product variety.
Quickly change over from one product to another
Control inventory.






TRIPLE MODULAR REDUNDANT PLC:
(Advanced trend in PLC family)

This safety controller is designed with triple modular redundant
(TMR) hardware. This hardware architecture is combined with
software implemented fault tolerance (SIFT) to achieve extremely high
operational availability and function-on-demand performance.
It is certified for use in safety applications up to and including SIL
level 3, such as process and emergency shutdown, critical control and
fire and gas protection. M/s ABB has successfully designed a highly
modular system, for use by integrators familiar with PLC design and
integration practices.
The three key aspects are:
Triple Modular Redundant architecture-TMR
Software implemented Fault Tolerance-SIFT (with HIFT output
voter)
Online hot repair facility


Figure-2. Triguard SC300E TMR architecture: input termination feeding an input module (three microcontrollers with outputs to processors A, B and C, plus a hot repair module), processors A, B and C, and an output module (three microcontrollers plus a hot repair module) whose outputs pass through a 2-out-of-3 HIFT voter to the field.
Product Architecture-Triguard:
(Trade name of M/s ABB's system)

Triguard's triplicated architecture has been developed and refined
over 20 years to produce the SC300E. The principles of the SC300E's
TMR architecture are shown in Figure-2.

Fault Tolerant:

Fault tolerance in the SC300E system is achieved by using
triplicated processors, triplicated I/O circuitry, redundant power supplies
and intelligent diagnostics. Three independent processors are used,
connected to each other by read only communication links. These
communication links allow comprehensive diagnostic routines to make
available system status data such that each processor can determine its
own health with respect to the rest of the system. Any faults detected can
then be alerted and isolated, while maintaining the system's functionality.
The triplicated I/O circuitry functions in much the same way, each
I/O module having three circuits on one board.
Redundant power supplies provide the processor and I/O module
power (excluding the field). Each module connected to these supplies
monitors the health of the power supply system as part of its diagnostic
capability. Full system functionality is available after a failure of a power
supply. The heart of the SC300E's diagnostic capability is the operating
system. The SIFT operating system functions independently in each of
the three processors and is responsible for both application execution and
diagnostics. Each processor presents data required by the diagnostics or
the application to its two neighbors for comparison by allowing the
processors to synchronize whenever such data becomes available. This
will typically happen many times during an application logic execution
cycle and typically consists of data such as input states, output states,
logic results and diagnostic states of its two neighbors. With this data
each processor then correlates and corrects its memory view of the
current state of the system using a 2oo3 (2-out-of-3) software vote,
logging any discrepancies.
This means that Triguard SC300E has a fully triplicated
architecture from input module to output modules. All input and output
modules interface to three isolated communications buses, each being
controlled by one of the three processor modules.
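The 2-out-of-3 software vote can be illustrated with a short Python function (our own sketch, not ABB's code): each processor keeps the majority value and logs which channel, if any, disagrees with it.

def vote_2oo3(a, b, c):
    """Return the majority value and a list of channels that disagree with it."""
    if a == b or a == c:
        majority = a
    else:
        majority = b          # b == c; for non-boolean data a three-way disagreement would be logged as a fault
    discrepancies = [name for name, val in (("A", a), ("B", b), ("C", c))
                     if val != majority]
    return majority, discrepancies

# Example: channel C presents a discrepant input state.
print(vote_2oo3(True, True, False))   # -> (True, ['C'])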




THEORY OF OPERATION:

At the input modules, field signals are filtered
and then split, via isolating circuitry, into three identical signal-
processing paths. Each path is controlled by a microcontroller that
coordinates signal status reporting to its respective processor, via one of
the I/O communications buses. Once each processor has a copy of the
input state, it votes on that data against the input states presented by the
other two processors. The voted result is then used in the application
logic. Once the application has been processed, the results of the
application logic are again compared with the other two processors.
These voted results are then written to the output cards by the processors.
The output modules share the same microcontroller architecture as the
input cards; a single result is presented to the field by passing the three
processor signals through a hardware 2oo3 voting circuit.

Processor modules:
The SC300E processors have been designed around Intel
microprocessors. Key features of the processor modules are:
Intel 32 bit processor.
Support for up to 1 Mb of battery-backed static RAM for
application logic storage.
Error detection and correction circuitry is used to monitor all data
accessed from the RAM. Onboard lithium batteries provide a back-
up supply to the RAM for up to 6 months in the event of system
power failure.
Up to 2 Mb of EPROM programmed with the Triguard SC300E's real-
time operating system. EPROM can also be used for the storage of
application logic.
Buffered I/O communication bus via a special 96-way DIN41612
connector permitting the live insertion of a processor module.
Real-time calendar clock circuitry used for data logging to a
resolution of 10 ms.
The size of a processor's sequence-of-events log is typically 5000
events.
Two 8 Mbps, read-only, serial communications links used by a
processor to read I/O and diagnostic information from its two
processor neighbors.
Front panel mounted RS232 serial communications port used for
engineering diagnostic purposes.
Watchdog circuitry and front panel mounted control switches and
indicators.

DIGITAL INPUT MODULES:





















Figure-3 shows the input circuit for a TMR digital input module. In the
TMR circuit in Figure-3, field input signals are split on entry between
three identical channels. Each channel comprises a filter that removes
noise from the input signal, followed by an opto-isolating circuit, followed
by a signal-processing circuit. The input circuits are supervised by the
microcontrollers, which report the status of the input signals and their
associated test signals, upon demand, to the processors. Front panel LED
indicators are driven by the outputs of the isolating circuits and reflect the
status of the field input loops. An LED is illuminated if the field loop is
energized, i.e. field contact closed = LED on. Each of the
microcontrollers supervises signal test circuits that ensure that the signal
paths can be switched to the off state. Failure of a signal path test will
extinguish the healthy LED.







Figure-3. TMR digital input circuit: each of the 32 input signals is split into three channels, each comprising a noise filter, isolator and signal-processing circuit with a test path, supervised by its own microcontroller; a driver and isolator light the signal status LED.

DIGITAL OUTPUT MODULES:


All 32-channel digital output modules are solid state and include
monitoring circuitry to confirm the integrity of the output field loop. The
function of a digital output module with a solid state drive is illustrated in
figure-4.
Output state commands are received by a digital output module's
microcontrollers via their respective I/O buses. Each microcontroller
controls two solid state isolated switches, which form part of a six-
element, 2oo3 voting network per module output. Voltage and current
monitoring circuits are connected across each upper switch in the voting
network, providing feedback to the microcontroller to confirm that an
output switched to an energized field load is not open or short
circuit, and that an output to a de-energized field load is not open circuit. Front
panel LED indicators are connected across the outputs of the voting
networks and reflect their state: an LED is illuminated if the field loop is energized,
i.e. output switch closed = LED on. For 2oo3-to-energize or 2oo3-to-de-
energize output modules, testing of the output voting networks is
coordinated by the microcontrollers. When all outputs are in the healthy
condition and the microcontrollers confirm no faults present, the
processors will instruct each microcontroller in turn to switch its output to
the opposite state and confirm correct operation of its output switch.
Digital output modules used for ESD (emergency shutdown)
applications have normally energized outputs and their voting network is
switched to 3-2-0 operation, i.e. 2oo3 voting reverting to 2oo2 in the
presence of two concurrent faults.
Digital output modules for use in fire and gas applications may
have their voting switched to 3-2-1 operation, i.e. outputs can continue to
be controlled by a single processor (non-certified).
Normally de-energized (energize-to-trip) field loads
are monitored for open-circuit faults.
If outputs are required to be fully supervised, open- and short-circuit
monitoring of the de-energized load can be achieved with the addition of
an active termination card (non-certified).
A fault-tolerant control system identifies and compensates for
failed control-system elements without process interruption. A high-integrity control system
such as the Tricon is used in critical process applications that require a
significant degree of safety and availability.

ELEMENTS OF PROGRAMMING:

A program is the highest-level executable logic element within a
Tristation project. It is an assembly of programming language elements
(function blocks, functions, and data variables) that work together to
allow a programmable control system to achieve control of a machine or
process. Each program is uniquely identified by a user-defined type
name. One Tristation project supports multiple programs.




FUNCTION BLOCKS:

A function block is a logic element which yields one or more results
and is uniquely identified by a user-defined type name. To use a function
block in a program, an instance of the function block type must first be
declared. Each instance is identified by a user-defined instance name. All
the data associated with a specific instance of a function block is retained
from one evaluation of the block to the next.

FUNCTIONS:

A function is a logic element which yields exactly one result and is
uniquely defined by a user-defined type name. Unlike the function block,
the data associated with a function is not retained from one evaluation to the
next. Functions do not have to be instanced.

DATA TYPES:

A data type defines the size and characteristics of variables declared in a
program, function or function block. Examples of data types are BOOL,
DINT and REAL.

SHARED LIBRARIES:

Shared libraries contain predefined function blocks and functions
that can be used to develop programs as well as other function blocks and
functions.

PROGRAMMING LANGUAGES:

Functional Block Diagram (FBD):

A graphical language that corresponds to circuit diagrams. FBD elements
appear as blocks that are wired together to form circuits. The wires transfer
binary and other types of data between elements.





Structured Text (ST):
A high-level, textual programming language that is similar to PASCAL. ST
allows the user to create Boolean and arithmetic expressions, as well as
programming structures such as conditional (IF...THEN...ELSE)
statements. Functions and function blocks may be invoked in ST.

Ladder Diagram:
A graphical language that uses a standard set of symbols for representing
relay logic. The basic elements are coils and contacts, which are connected
by links. Links are different from the wires in FBD in that they transfer
only binary data between the elements.


CAUSE & EFFECTS MATRIX PROGRAMMING LANUAGE
(CEMPLE):

A high-level graphical language that provides a two-dimensional matrix in
which the user can easily associate a problem in a process with one or
more corrective actions. The problem is known as the cause and the action as the effect.
The matrix associates a cause with an effect at the intersection of the
cause row and the effect column.

ADVANTAGES OF TMR PLC SYSTEM:

No single point of failure.
Ability to operate with 3, 2 or 1 main processor before shutdown.
Fully implemented and transparent triplication.
Comprehensive system diagnostics.
Complete range of I/O modules for safety-critical points with a
limited need for availability.
Remote I/O up to 12 kilometers (7.5 cable miles) away.
Simple, on-line module repair.
Unsurpassed reliability and availability










APPLICATIONS:



NUCLEAR PLANT CONTROL WITH PLC

In this application we consider three points:
Control of the temperature of the furnace.
Control of the flow of water as the coolant.
Condition to change the steam into water (condenser).

In the control of the temperature, a thermocouple is the best
temperature-measuring device. In this application the process operates at a
high temperature, so an optical cable is used in the measurement of
the temperature.
In the control of the flow of the water, we can use valves as a
helping medium. In this measurement an ordinary conducting rod is used as a
sensor.
In this control there are two conditions:
There should not be too high a flow of water.
There should not be too low a flow of water.

When there is a high temperature, there is a need for a high flow of
water, so the valve condition is set to HIGH.
When there is a low temperature, there is a need for a low flow of
water, so the valve condition is set to LOW.
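A toy illustration of these two valve conditions (our own sketch, with hypothetical temperature thresholds; it is not part of the PLC application described here):

def coolant_valve_state(temp_c, high_setpoint=320.0, low_setpoint=280.0):
    if temp_c >= high_setpoint:
        return "HIGH"      # high temperature: more coolant flow needed
    if temp_c <= low_setpoint:
        return "LOW"       # low temperature: reduce coolant flow
    return "HOLD"          # between the limits, keep the present valve position

for t in (250.0, 300.0, 350.0):
    print(t, coolant_valve_state(t))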
Block diagram: the heat process and the flow process, each with a desired set point and a controller acting between the input and the output.
In the control of converting the steam into water, there should be a
proper conversion of the steam. This should be properly controlled by special
control devices such as water-measuring sensors (for example floats); if there is
any problem in the flow of water, there should be an indication by an
alarm or an attempt by the system to control it by itself.


Operation:

In the operation of the nuclear plant control there must be control of the
measurement of heat (THERMOCOUPLE). The heat is
produced by the splitting of atoms (fission reaction).
This heat is then converted to steam by transferring the heat
through pipes of water.
This steam is passed through a turbine; the steam
rotates the turbine and this produces the electric current.
The steam is then sent to a condenser so as to change the steam
into water, which is then returned to be converted into steam again; this is a cyclic
process.
This flow is controlled by a valve driven by a controller (P,
PI, PID); apart from these controllers there are other controllers such as
pneumatic and hydraulic controllers.


CONCLUSION:

PLC is widely used in large industries application like automobile
industries, boiler& turbine, satellite applications, etc. PLC is used as an
aiding tool for fulfilling critical process requirements in the above areas.
TMR-PLC is the current trend in industrial area which is used in the most
stringent environment. This increases the system availability &reliability.
Also, this is considered as the most powerful system for emergency
shutdown application.





Tuning of PID controller using Fuzzy Logic
AUTHORS: P.PAVANKUMAR
&
SK.YASEEN
EMAIL: p_pavan04@yahoo.com
&
yaseen_raoshan@rediffmail.com


III /IV Electrical and Electronics Engineering




NARASARAOPETA ENGINEERING COLLEGE

NARASARAOPETA





ABSTRACT

The best-known controllers used in industrial control process are proportional
integral and derivative (PID) controllers because of their simple structure and robust
performance in wide range of operating conditions. The design of such controllers
requires specifications of three parameters viz., proportional gain, integral and derivative
time constants. PID controllers are particularly adequate for the process whose dynamics
can be modeled by first and second order systems.
A novel methodology, based on fuzzy logic, for the tuning of proportional-
integral-derivative (PID) controllers is presented. A fuzzy inference system is adapted to
determine the value of the weight that multiplies the set-point for the proportional action,
based on the current output error and its time derivative. In this way, both the overshoot
and the rise time in set-point following can be reduced. The values of the proportional
gain and the integral and derivative time constant are determined according to the well-
known Ziegler-Nichols formula, so that good load disturbance attenuation is also
assured. The methodology is shown to be effective for a large range of processes and is
valuable to be adopted in industrial settings since it is intuitive, it requires only a small
extra computational effort, and it is robust with regard to parameter variations. The
tuning of the parameters of the fuzzy module can be easily done by hand or by means of
an auto-tuning procedure based on genetic algorithms.
1.INTRODUCTION:
In the process control systems it is necessary to control the plant, i.e. in the sense
of rise time, settling time and maximum overshoot. For the control of the plant many
controllers are designed. In the industry, PID controllers are widely used controllers
because of their simple structure. Normally, PID controllers are mostly suited for first
and second order systems.
Tuning is the process of adjusting the parameters of PID controller. This work
consists of adjusting the parameters of PID controller with Ziegler-Nichols method based
on ultimate gain and ultimate period. Ultimate gain is the gain above which the system
merges to the state of instability. Ultimate gain and ultimate period also some times are
calculated by Routh criteria, but this method is not applicable to all the plants. The
general procedures for calculating the ultimate gain and ultimate period is, set the
proportional gain in the loop by removing the integral and derivative gains, and increase
the proportional gain until the response obtained has the constant gain. The
corresponding gain at the point is called ultimate gain and the period corresponding to the
response called ultimate period. From ultimate gain and ultimate period, the parameters
K
P,
K
I
and K
D
are calculated; but the Ziegler-Nichols method alone didnt give the good
response for the load disturbance.
In this work, the fuzzy set-point weight method uses parameters obtained from the
Ziegler-Nichols method. The fuzzy set-point weight method consists of fuzzifying the set
point corresponding to the proportional gain, and the fuzzified value is multiplied with the
proportional gain. The parameters are adjusted by the Ziegler-Nichols method based
on ultimate gain and ultimate period. The inputs to the fuzzy inference system are the error
and the change in error; the output of the fuzzy inference system is obtained from fuzzy
rules based on the MacVicar-Whelan matrix.
2. Ziegler-Nichols tuned PID control:
The Ziegler-Nichols tuning formula is based on the empirical knowledge of
the ultimate gain ku and ultimate period tu, as shown in Table 1.
Table 1: Ziegler-Nichols tuning formula

                      PID              PI
Proportional gain     kp = 0.6 ku      kp = 0.45 ku
Integral time         Ti = 0.5 tu      Ti = 0.85 tu
Derivative time       Td = 0.125 tu    -

The PID controller is usually implemented as follows:
u(t) = kp e(t) + ki ∫ e(t) dt + kd [d/dt (e(t))] ----------(1)
where kp, ki, and kd are the proportional, integral and derivative constants respectively.
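The conversion from the ultimate gain ku and ultimate period tu to the controller settings of Table 1 and equation (1) can be written as a small helper (our own sketch; the test values of ku and tu are hypothetical):

def ziegler_nichols_pid(ku, tu):
    kp = 0.6 * ku            # proportional gain (Table 1, PID column)
    ti = 0.5 * tu            # integral time
    td = 0.125 * tu          # derivative time
    ki = kp / ti             # parallel-form integral gain used in equation (1)
    kd = kp * td             # parallel-form derivative gain
    return {"kp": kp, "ti": ti, "td": td, "ki": ki, "kd": kd}

# Hypothetical ultimate gain and period obtained from a proportional-only test.
print(ziegler_nichols_pid(ku=30.0, tu=1.41))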

The performance of the Ziegler-Nichols tuned control system varies significantly
with the process dead-time. For PID control, when the dead time is small, the set-point
response is well tuned in terms of speed of response, as shown in Fig. 2. The overshoot in the set-
point response is, however, excessive and it often needs to be reduced to 10% or 20%,
depending on the application. The closed loop step response of the system is given in Fig. 1. Its
performance indices are as given below:
% overshoot = 83%   Rise time = 1.08 s   Settling time = 18.2 s

Fig. 1
Fig. 2 Response of conventional (Z-N tuned) PID controller: y(t) versus time in seconds



3. PID CONTROLLER TUNING WITH FUZZY INFERENCE
SYSTEM
The best-known controllers used in industrial control processes are proportional
integral derivative controllers because of their simple structure and robust performance in
a wide range of operating conditions. Hence, PID controllers provide, in an industrial
environment, a cost/benefit performance that is difficult to beat with other kinds of
controllers. PID controllers are particularly suitable for pure first or second order
processes. Unfortunately, real systems have, in general, significant characteristics such as
high order, nonlinearities and dead time, and they can be affected by noise and load
disturbances. Tuning a controller means adjusting the parameters of the controller in order
to satisfy the required conditions such as rise time, settling time, peak overshoot etc. The
design of a PID controller requires specification of three parameters: proportional gain,
integral time constant, and derivative time constant. The tuning of the parameters is a
crucial issue, and many tuning formulae, such as Ziegler-Nichols, often fail to
achieve satisfactory performance. It is highly desirable to increase the capabilities of PID
controllers by adding new features. A fuzzy inference system is used here to increase the
capabilities of the PID controller depending on the current operating conditions of
the controlled system.
3.1 PID CONTROLLER:
The typical PID control law in its standard form is
u(t) = Kp [ e(t) + (1/Ti) ∫ e(t) dt + Td de(t)/dt ] ----------- (2)
where e = yr - y is the system error (difference between the reference input and
the system output), u the control variable, Kp the proportional gain, Td the derivative time
constant and Ti the integral time constant. The above equation can be written as
u(t) = Kp e(t) + Kd de(t)/dt + Ki ∫ e(t) dt - - - - - (3)
The transfer function of a PID controller has the following form:
Gc(S) = Kp + Ki/S + Kd S -------- (4)
where Kd, Ki and Kp are the derivative, integral and proportional gains.
Another useful equivalent form of the controller is
Gc(S) = Kp (1 + 1/(Ti S) + Td S) ---------- (5)
where Ti = Kp/Ki and Td = Kd/Kp.
3.2 PID TUNING WITH FUZZY INFERENCE SYSTEM:
Specifically, to reduce the overshoot, the set-point for the proportional action can
be weighted by means of a constant parameter b < 1, so that we have the following more
general expression:
u(t) = Kp (b yr(t) - y(t)) + Kd de(t)/dt + Ki ∫ e(t) dt --------- (6)
b(t) = w + f(t) --------- (7)
where w is a positive constant parameter less than or equal to 1 and f(t) is the output
of the fuzzy inference system. In this way a two-degrees-of-freedom controller is
implemented.
However, the use of set-point weighting generally leads to an increase in the rise
time, since the effectiveness of the proportional action is somewhat reduced. This
significant drawback can be avoided by using a fuzzy inference system to determine the
value of the weight b(t) depending on the current value of the system error e(t) and its
time derivative de(t)/dt. Simply put, the idea is that b has to be increased when
the convergence of the process output y(t) to yr(t) has to be speeded up, and decreased
when the divergence of y(t) from yr(t) has to be slowed down. For the sake of
simplicity, the methodology is implemented in such a way that the output of the fuzzy
module is added to a constant parameter w, resulting in a coefficient b(t) that multiplies
the set-point. The overall control scheme is shown below.

Fig. 3. Overall Control Scheme of the PID controller with fuzzy inference system

The two inputs of the fuzzy inference system, the system error e(t) and its derivative
de(t)/dt, are in the range [-1, 1], on which the membership functions are defined. The output f
is also defined in the range [-1, 1].
The rule matrix for f(t) is given by



Table 2 Rule matrix for f(t)
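A minimal sketch of the set-point-weighted control law of equations (6)-(7) (our own illustration, not the authors' Simulink model). Since the rules of Table 2 are not reproduced above, the fuzzy module output f is stubbed with a simple bounded surrogate; the weight w, the surrogate gains and the step size are hypothetical, while Kp, Ti and Td are the fine-tuned values quoted later in this paper.

def fuzzy_weight(e, de, w=0.6):
    f = max(-0.4, min(0.4, 0.5 * e + 0.2 * de))   # placeholder for the FIS output, bounded in [-0.4, 0.4]
    return w + f                                   # equation (7): b(t) = w + f(t)

def fuzzy_setpoint_pid_step(yr, y, e_prev, integ, kp, ki, kd, dt):
    e = yr - y
    de = (e - e_prev) / dt
    b = fuzzy_weight(e, de)
    integ += e * dt
    u = kp * (b * yr - y) + ki * integ + kd * de   # equation (6)
    return u, e, integ

u, e, integ = fuzzy_setpoint_pid_step(yr=1.0, y=0.2, e_prev=0.0, integ=0.0,
                                      kp=36.6, ki=36.6 / 0.702, kd=36.6 * 0.706, dt=0.01)
print(round(u, 3))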

This method consists of fuzzifying the set-point weight, leaving the other three
parameters fixed; they are calculated by the Ziegler-Nichols method.

3.3 MODEL CALCULATIONS:
For the given plant,
G(S) = 1 / (S^3 + 6S^2 + 5S)
The parameters of the PID controller are calculated by the Ziegler-Nichols formula:
KP = 18
TI = 0.706
Td = 0.306
The plant has one integrator (pole at origin) and two poles in the left half of the s-plane.
Fine tuning of the PID controller parameters using the trial-and-error method is still
required to obtain the target. After fine tuning the parameters are
KP = 36.6
TI = 0.702
Td = 0.706

3.4 RESULTS:
In this section, an attempt is made to design a fuzzy logic PID controller with
improved performance over the normal PID controller w.r.t. rise time, settling time and
maximum overshoot, which consists of tuning the parameters KP, Td and TI respectively. For
calculating the parameters, the Ziegler-Nichols formula based on ultimate gain and ultimate
period is used.
Fig. 5 shows the fuzzy response of the plant. From Fig. 5 it is observed that
the response of the plant with fuzzy PID control has less rise time, settling time and
maximum overshoot as compared with the response obtained with the normal PID controller.
The closed loop step response of the system is given in Fig. 4. Its performance indices
are as given below:
% overshoot = 32.5%   Rise time = 0.95 s   Settling time = 10 s

Fig. 4. Overall Simulink model of the PID controller with fuzzy logic (fuzzy set-point weighting with constant w and the fuzzy logic controller) alongside the conventional PID controller, both driving the plant 1/(s^3 + 6s^2 + 5s).
Fig. 5 Simulation output of the Fuzzy PID controller compared with the normal PID controller: y(t) versus time in seconds.


Table 3. Test results of Fuzzy Logic and Conventional controller

                      Fuzzy Logic    Conventional controller
Overshoot             32.5%          83%
Rise time (sec)       0.95           1.08
Settling time (sec)   10             18.2


















4. CONCLUSION:
In this work, a comparison between different methodologies regarding the tuning
of PID controllers by means of fuzzy logic has been presented. The proposed method uses
fuzzy rules and reasoning to determine the PID controller parameters. From the results it
is concluded that the use of fuzzy logic gives smooth control of the plant
operation.
From fig. 4 it is observed that the use of fuzzy set-point weighting, in conjunction
with the Ziegler-Nichols method for the tuning of the parameters of the PID controller, leads to
a significant improvement in the step response of the plant and preserves the good
performance in attenuation of the load disturbance. Hence this approach seems to be
particularly appropriate for adoption in an industrial environment, since it requires small
computational effort and it is easy to apply, intuitive and robust.

REFERENCES

[1] P. E. Wellstead and M. B. Zarrop, "Self-tuning Systems: Control and Signal
Processing", John Wiley & Sons, 1991.

[2] K. Astrom and T. Hagglund, PID Controllers: Theory, Design and Tuning. New
York: ISA, 1995.

[3] Saiful Akhyar and Sigeru Omatu, "Self-Tuning PID Control by Neural
Networks", Proceedings of 1993 International Joint Conference on Neural Networks.

[4] S.-Z. He, S. Tan, and F.-L. Xu, "Fuzzy self-tuning of PID controllers," Fuzzy Sets
Syst., vol. 2, pp. 37-46, 1993.

[5] Antonio Visioli, "Fuzzy Logic Based Set-Point Weight Tuning of PID Controllers,"
IEEE Trans. on Systems, Man and Cybernetics, vol. 29, Nov. 1999.

[6] W. K. Ho, C. C. Hang, and J. Zhou, "Self-tuning PID control of a plant with under-
damped response with specifications on gain and phase margins," IEEE Trans. Contr.
Syst. Technol., vol. 5, pp. 446-452, 1997.

[7] W. K. Ho, C. C. Hang, and L. S. Cao, "Tuning of PID controllers based on gain and
phase margins specifications," Automatica, vol. 31, no. 3, pp. 497-502, 1995.

[8] "A Survey of PID Controller Design Based on Gain and Phase Margins" (Invited
Paper), International Journal of Computational Cognition
(http://www.YangSky.com/yangijcc.htm), Volume 2, Number 3, Pages 63-100,
September 2004.



































GITAM INSTITUTE
VISHAKAPATNAM








Wavelet Modeling of Motor Drives Applied to the
Calculation of Motor Terminal Over voltages




















Ch.Meher Gayatri, N.swathi
III year EEE, III year EEE,
: EMAIL ID:gayatrichigurupati@gmail.com EMAIL ID:swathi3086@yahoo.com




ABSTRACT:

The wavelet is a powerful tool that has emerged for the calculation of power
system transients. In this paper, a wavelet motor drive model is developed and applied to
the over voltage problem which occurs with high dv/dt inverters. This technique is
fundamentally different from Spice, which is a time-domain method. Wavelet models of
typical passive parameters are derived and then applied to the problem of motor terminal
over voltages, which occurs with long cable lengths. It is shown that simulation
with wavelet modeling is faster in the solution of this complex power system
simulation problem.

I.INTRODUCTION

Induction machines fed by variable-speed drives through long cables are used in
many industrial applications where the motor and inverter must be at separate locations.
Fig. 1 shows a motor-inverter system composed of an inverter, cable, motor, and load.


Fig 1.Typical structure of motor inverter system
Most industrial motor drives use pulse-width-modulated voltage-source inverters
switched by power semiconductor switching devices. A high switching frequency of the
power electronic device is required to improve current waveform quality and reduce
audible noise. In recent years, switching frequencies of 2-20 kHz have become common with
IGBT technology,
while allowing for power levels of over 200 kW.
The high switching frequency and high rate of voltage rise (dv/dt), of 0-600 V in
less than 0.1 µs, has adverse effects on the motor turn insulation and can contribute to
bearing currents. AC motor transient over voltages resulting from the motor-cable
response to inverter pulse voltages have steadily increased in magnitude as
semiconductor rise times and fall times have decreased from the GTO, to the BJT, and
presently to the IGBT.

Cable lengths had to exceed 1000-2000 ft for GTO drives and 500-1000 ft for BJT
drives before producing over voltages at the motor terminals. Presently, IGBT drives may
create over voltages that exceed the motor ratings for cable lengths as low as 50-200 ft.
Long cable lengths also contribute to a damped high frequency ringing at the
motor terminals due to the distributed nature of the cable leakage inductance and
coupling capacitance (LC). This high-frequency oscillation results in over voltages that
further stress the motor insulation. The voltage reflections are a function of the inverter
output pulse rise time and the length of the motor cables, which behave as a transmission
line for the inverter output pulses. It has been found
that the pulses travel at approximately half the speed of light (150-200 m/µs or about 500 ft/µs),
and if the pulses take longer than half the rise time to travel from the inverter to the
motor, then a full reflection will occur at the motor terminals and the pulse amplitude will
approximately double.
The modeling and simulation of this problem and other transients has been based
on Matlab, Spice or other time-domain methods. The computation time using these
packages can be large, so an alternative method, the wavelet transform, is proposed here
to examine its effectiveness in solving the motor terminal over voltage problem.
The wavelet modeling of power systems and its application to the solution of
differential equations is beginning to attract attention. It is possible to use wavelets to
solve a class of power engineering circuit problems relating to the calculation of
transients in the system. The wavelet solution of a differential equation is an example of
the use of multiresolution analysis. It is based on the discrete-time-domain approximation of
linear power system components, while the discrete wavelet transform is applied to
achieve the wavelet-domain approximation. The wavelet-domain impedance and
equivalent circuits are, thus, formed to solve the linear power system circuit.
This paper first presents the wavelet modeling method. Wavelet models of
basic circuit components are given. Finally, the wavelet modeling is applied to the
overvoltage problem that exists with the long-cable problem in high dv/dt motor drives.


II. WAVELET TRANSFORM AND MODELING

The wavelet transform has been used in the field of signal processing as well as
for the study of power transients, because of its ability to deal with nonstationary signals.
In the solution of the long-cable problem, the model used is the equivalent circuit,
which consists of resistors, inductors,and capacitors. For wavelets to be used in this
problem, it is necessary to transform all these models to the wavelet domain.After the
wavelet-domain solution is obtained, the inverse transform is used to get the time-domain
solution.
The wavelet transform is being used in the solution of differential equations and
opens a new field to its applications to power system transients. The basic transients in a
typical Power system can be formulated as ordinary differential equations(ODEs).
Based on this, a new wavelet model is provided to solve power system transients. The
discrete wavelet transform (DWT) is the discrete form of the wavelet transform. A signal
vector X has its image Y through a linear operator T:
Y = T X (2.1)
Let WX be the wavelet coefficients of the vector X, and WY be the wavelet coefficients
of the vector Y, so that WX = W X and WY = W Y, where W is the DWT matrix. Then
this operator is represented in the wavelet domain as
WT = W T W^(-1) (2.5)
and
WY = WT WX (2.6)
Equations (2.5) and (2.6) represent the wavelet-domain representation of the discrete
time domain (2.1). For fast numerical computation, matrix WT is calculated from the
DWT. The wavelet transform of each column of matrix T is calculated and stored as the
column of WT, and then the wavelet transform of each row of the resulting matrix is
computed to obtain the WT matrix.
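A short numerical sketch of this operator mapping (our own illustration, not taken from the paper): an orthonormal Haar matrix W plays the role of the DWT, a linear operator T is carried into the wavelet domain as WT = W T W^T, and transforming back recovers the time-domain result. The operator and signal below are hypothetical.

import numpy as np

def haar_matrix(n):
    """Orthonormal Haar transform matrix of size n (n must be a power of two)."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        top = np.kron(h, [1.0, 1.0])
        bottom = np.kron(np.eye(h.shape[0]), [1.0, -1.0])
        h = np.vstack([top, bottom]) / np.sqrt(2.0)
    return h

n = 8
W = haar_matrix(n)
T = np.tril(np.ones((n, n)))          # a simple discrete running-sum operator as an example of T
X = np.random.default_rng(0).standard_normal(n)

WX = W @ X                            # wavelet coefficients of X
WT = W @ T @ W.T                      # wavelet-domain image of the operator T (W orthogonal, so W^-1 = W^T)
WY = WT @ WX                          # apply the operator in the wavelet domain

print(np.allclose(W.T @ WY, T @ X))   # inverse transform recovers Y = T X -> True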
The basic elements of a typical electrical power system can be represented as
resistors, inductors, and capacitors.

A. Resistor Transient Model
For a resistor, the V-I characteristic is represented as
V = R I
Its transient model in the wavelet domain is
WV = R WI
in which WV and WI are the DWT coefficients of the voltage V and current I.

B. Inductor Transient Model
For an inductor, the V-I characteristic is represented as
v(t) = L di(t)/dt
Its transient model in the wavelet domain is
WV = L WD_T WI + WVL0
where WD_T is the transient differential operator, and L WD_T is the wavelet-domain
inductor impedance. WVL0 is the wavelet coefficients of the vector VL0 related to the
initial current.


C. Capacitor Transient Model
The time-domain equation of a capacitor is
i(t) = C dv(t)/dt
Its wavelet-domain equivalent is
WV = (WIN_T / C) WI + WVC0
where WIN_T is the transient integral operator, and WIN_T / C is the wavelet capacitor
impedance. WVC0 is the wavelet coefficients of the vector VC0, which is related to the
initial voltage.





III. APPLICATION TO SIMPLE CIRCUIT TRANSIENTS

A typical power system can be modeled as different combinations of R, L, and C.
An RLC series circuit with a dc power source is given in Fig. 2. The dc source is
switched in at time t = 0.
The wavelet model of the RLC series circuit with zero initial conditions is given in
Fig. 3. Its V-I equation is the following:
(R + L WD_T + WIN_T / C) WI = WVD
It shows that the original time-domain differential equations are transformed into a
linear equation in the wavelet domain. Conventional linear circuit theory can be applied
to solve the equations easily. The problem with the wavelet modeling
method is that it involves computations on a large-dimension matrix if done simultaneously, thus
resulting in a considerable increase of computation time. It is found that with a constant
dc voltage value, such as VD in the above circuit, the wavelet transform coefficients
WVD show significant values only in a very small percentage of the total coefficients.
This greatly reduces the computation time in the matrix operation since only a small
number of columns in the matrix need to be computed.


Fig. 2. RLC series circuit with a dc source.

Fig. 3. Wavelet model of RLC series circuit with zero initial conditions.

Fig. 4. Wavelet model of RLC series circuit with initial conditions.

The wavelet model of the RLC series circuit with initial conditions is presented
in Fig. 4. Its V-I equation is the following:
(R + L WD_T + WIN_T / C) WI + WVL0 + WVC0 = WVD
Fig. 5. (a) Inductor current and capacitor voltage using wavelet modeling.
(b) Inductor current and capacitor voltage using wavelet modeling.

In this case, on the left-hand side of the equation, there are additional terms resulting
from the initial conditions of the inductor and capacitor. WVL0 is the wavelet
coefficients of the vector VL0. It is clear that this case requires more computations
than the zero-initial-condition case. The solution of the RLC series circuit is carried out
using the methods above.


The inductor current and capacitor voltage are shown in Fig. 5. Compared with the
Pspice simulation results in Fig. 6, it is obvious that the accuracy is comparable.

IV. LONG-CABLE OVERVOLTAGE

A. Long-Cable Model

Fig. 7 shows a one-phase equivalent model of the long cable to simulate the
overvoltage at the motor terminals. The cable used in this model is AWG #14, and the
switching frequency of the PWM inverter is 2 kHz. The cable model used in the
simulation is a three-sector lumped model. The output of the PWM inverter is shown in
Fig. 8. The motor terminal voltage response to a single PWM
pulse is shown in Fig. 9. Compared with the experimental results of the motor terminal
voltage for a single PWM pulse in Fig. 10, the wavelet method shows its accuracy.

B. Effect of Cable Length on the Overvoltage

Fig. 11 shows the overvoltage at the motor terminal for a single PWM pulse with
the cable length changed to 250 ft, with all other cable parameters kept unchanged. At
first glance, there is no significant difference between this voltage waveform and the one
for the 100-ft cable, although the ringing frequency in Fig. 9 is higher than that in Fig. 11.
To explore the difference, the motor terminal voltage ringing caused by a single PWM
pulse on a 500-ft cable is shown in Fig. 12, where the frequency of the oscillation is lower still.
The oscillation frequency is inversely proportional to cable length. The cable length is also a
dominant factor in determining the terminal voltage damping: generally, the longer the
cable, the longer the time taken for the transients to decay.



Fig. 6.(a) Capacitor voltage using Pspice simulation. (b) Inductor current and
capacitor voltage from Pspice simulation.
Fig. 7. Long-cable simulation model.



Fig. 8. Voltage from the PWM output.


Fig. 9. Overvoltage at the motor terminal with a long cable of 100 ft.



Fig. 10. Voltage at the motor terminal for 100-ft cable (measurement).

Fig. 11. Overvoltage at the motor terminal with long cable of 250 ft.

C. Effects of Rise Time

In the case of Fig. 9, the power semiconductors in the PWM inverter are assumed
to be ideal and the effects of the rise time of the switches are not included in the
simulation. In fact, the rise time of the power electronics has a considerable effect on the
overvoltage of the motor terminals.

Fig. 12. Overvoltage at the motor terminal with long cable of 500 ft.

Fig. 13. Overvoltage at the motor terminal of a 100-ft cable with rise time = 0.2 µs.

Fig. 9 shows the motor terminal voltage for switches with a zero rise time. In Fig.
13, the motor terminal voltage is shown for a rise time of 0.2 µs. The voltage waveform
for a rise time of 2 µs is shown in Fig. 14. By comparing Fig. 14 with Fig. 9, it is obvious
that the peak voltage decreases with an increase in the rise time; that is, the peak
voltage increases with an increase in dv/dt. Thus, the effects of the rise time of the power
electronic devices should be carefully considered when they are used in industrial applications.

D. Simulation Results
All the wavelet simulations presented in the sections above are based on the
condition of a single PWM pulse as the input to the long-cable circuit. As stated in the
wavelet model of the circuit elements, the selection of the step size determines the
dimension of the matrices related to the model. For the 100-ft cable, the cable oscillation
frequency will be 1.32 MHz. According to the Nyquist Rule, the sampling frequency
must be larger than 2.64 MHz for an effective representation. The step size must therefore be
smaller than 0.37 µs. For a simulation over 100 µs, the transform matrices will be of order 270.
The actual simulation includes many operations of matrix inversion and multiplication. It
takes more than 1 h on a 200-MHz Pentium computer to obtain the result.
In (2.10) and (2.15), the initial inductor current and the initial capacitor voltage
are used in the wavelet domain characteristic equation of inductor and capacitor. The
simulation is divided into short periods.







This reduces the order of the matrices in the simulation, thus reducing simulation time.
The final value of each inductor current and capacitor voltage for each period is used as
the initial condition in the next period of simulation.



Fig. 14. Overvoltage at the motor terminal of a 100-ft cable with rise time = 2 µs.

For a square matrix of order 270, its inversion takes on the order of 270 x 270 = 72 900
operations. If there are only eight sampling points in each period, the required computation is
reduced to approximately 35 inversions of matrices of order 8, which requires about 560 operations
in total (the factor of 35 is derived from 270/8, plus a few operations after the matrix inversion).
The first method therefore requires at least 130 times more operations than the second method.
To illustrate this principle, the circuit in Fig. 2 was simulated using Matlab and
Simulink. The resulting waveforms can be seen in Figs. 5(a) and (b) and 6(a) and (b).
Fig. 15 shows the motor terminal voltage for a 100-ft-long cable during a 20-ms
period. The step size T is 0.05 µs and there is a total of 40 000 data points. The direct use
of the wavelet model in (2.8), (2.10), and (2.15) would make the implementation
prohibitive. With the selection of 8 points per period, the dimension of the matrices is
reduced to 8. After including the method from Fig. 7, the total computation time is
reduced.
Wavelet modeling needs extensive computation and loses its effectiveness
when used to solve systems with high dimension or high resolution. However, it has
been shown in this paper that, with the selection of a short basic computation period, the
dimension of the matrices in the simulation, and hence the computation time, can be greatly reduced.
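A sketch of the period-splitting idea discussed in this section is given below: the simulation is run window by window, and the final inductor current and capacitor voltage of one window are carried over as the initial conditions of the next. The function solve_period is a placeholder for the per-window wavelet solver (e.g., the linear solve sketched earlier applied to 8 samples at a time).

```python
import numpy as np

def simulate_in_short_periods(solve_period, source_windows, i_l0=0.0, v_c0=0.0):
    """Run the wavelet simulation window by window.

    solve_period(v_src, i_l0, v_c0) is assumed to solve one short window
    (e.g. 8 samples) and return (i_l, v_c), the inductor-current and
    capacitor-voltage samples for that window.  The final values of each
    window become the initial conditions of the next one.
    """
    i_hist, v_hist = [], []
    for v_src in source_windows:           # e.g. the PWM source split into 8-sample blocks
        i_l, v_c = solve_period(v_src, i_l0, v_c0)
        i_hist.append(i_l)
        v_hist.append(v_c)
        i_l0, v_c0 = i_l[-1], v_c[-1]      # carry over as next initial conditions
    return np.concatenate(i_hist), np.concatenate(v_hist)
```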



Fig. 15. Overvoltage at the motor terminal in 20 ms.

V. CONCLUSION

Wavelet modeling has been used to solve the long-cable overvoltage problem.
It has been shown that, with the proposed method, wavelet modeling is effective in the
solution of this power system problem.





Multiresolution Watermark Based On Wavelet
Transform For Digital Images
-by
B.SANTOSH KUMAR,
M.VIJAY BHASKAR,
V R Siddhartha Engineering College,
Vijayawada,
Email id:bsk_santosh79@rediff.com
vijaybhaskarece@yahoo.co.in
phone no:9985793376
Abstract:

The overabundant and easily accessible digital
data on the internet has made it the most
vulnerable information store, subject to piracy.
Digital watermarking is a tool developed to fight
piracy. The rapid expansion of the internet in the
recent years has rapidly increased the availability
of digital data such as audio, images and videos
to the public. As we have witnessed in the past
few months, the problem of protecting
multimedia information becomes more and more
important and a lot of copyright owners are
concerned about protecting any illegal
duplication of their data or work. Some serious
work has to be done to maintain the availability
of multimedia information and at the same time
protecting the intellectual property of creators,
distributors or simple owners of such data. This
is an interesting challenge and is probably the
reason why so much attention has been drawn





toward the development of digital images
protection schemes. Of the many approaches
available to protect visual data, digital
watermarking is probably the one that has
received the greatest attention. The idea of robust
watermarking of images is to embed information
within the image in a form imperceptible to the
human visual system, but in a way that resists
attacks such as common image processing
operations. The goal is to produce an image that
looks exactly the same to the human eye but still
allows its positive identification in comparison
with the owner's
key if necessary. This paper attempts to first
introduce the general idea behind digital
watermarking as well as some of its basic
notions. It is followed by describing some
applications of watermarking techniques and the
difficulties faced in this new technology. The
paper ends with an overview on some copyright
protection techniques, involving watermarking
and it is also seen as to why digimarks have
become an important research subject of late.

Basics of Watermarking
The increasing amount of applications using
digital multimedia technologies has accentuated
the need to provide copyright protection to
multimedia data. Digital watermarking is a
method that has received a lot of attention in the
past few years. A digital watermark can be
described as a visible or, preferably, invisible
identification code that is permanently embedded
in the data. It means that it remains present
within the data after any decryption process. A
general definition can be given:
"Hiding of a secret message or information
within an ordinary message and the extraction
of it at its destination."
Complementary to encryption, it allows some
protection of the data after decryption. As we
know, encryption procedure aims at protecting
the image (or other kind of data) during its
transmission. Once decrypted, the image is not
protected anymore. By adding watermark, we
add a certain degree of protection to the image
(or to the information that it contains) even after
the decryption process has taken place. The goal
is to embed some information in the image
without affecting its visual content. In the
copyright protection context, watermarking is
used to add a key in the multimedia data that
authenticates the legal copyright holder and that
cannot be manipulated or removed without
impairing the data in a way that removes any
commercial value. Figure 1 is a general
watermarking scheme intended to give an idea of
the different operations involved in the
process.


The first distinction that one needs to make in the
study of watermarking for digital images is the
notion of visible watermarks versus invisible
ones. The first ones are used to mark, obviously
in a clearly detectable way, a digital image in
order to give a general idea of what it looks like
while preventing any commercial use of that
particular image. The purpose here is to forbid
any unauthorized use of an image by adding an
obvious identification key, which removes the
image's commercial value. On the other hand,
invisible watermarks are used for content and/or
author identification in order to be able to
determine the origin of an image. They can also
be used in unauthorized image's copies detection
either to prove ownership or to identify a
customer. The invisible scheme does not intend
to forbid any access to an image but its purpose
is to be able to tell whether a specified image has
been used without the owner's formal consent or
if the image has been altered in any way. This
approach is certainly the one that has received
the most attention in the past couple of years. In
that line of thought, it is possible to differentiate
two ways of embedding invisible information in a digital image.
In Spatial Domain
The first watermarking scheme that was
introduced, works directly in the spatial domain.
By some image analysis operations (e.g. Edge
detection), it is possible to get perceptual
information about the image, which is then used
to embed a watermarking key, directly in the
intensity values of predetermined regions of the
image. This fairly simple technique provides a
simple and effective way of embedding an
invisible watermark into an original image, but it
does not show robustness to common image
alterations.
In Transform Domain
Another way to produce a high-quality
watermarked image is to first transform the
original image into the frequency domain by the
use of Fourier, Discrete Cosine or Wavelet
transforms for example. With this technique, the
marks are not added to the intensities of the
image but to the values of its transform
coefficients. Then, inverse-transforming the
marked coefficients forms the watermarked
image. The use of frequency-based transforms
gives direct access to the frequency content of
the image; therefore, characteristics of the human
visual system (HVS) can be taken into account
more easily when it is time to decide the
intensity and position of the watermarks to be
applied to a given image.
Types Of Digital Watermarks
Two types of digital watermarks may be
distinguished, depending upon whether the
watermark appears visible or invisible to the
casual viewer. Visible watermarks are used in
much the same way as their bond paper
ancestors, where the opacity of paper is altered
by physically stamping it with an identifying
pattern. This is done to mark the paper
manufacturer or paper type. One might view
digitally watermarked documents and images as
digitally "stamped". Invisible watermarks, on the
other hand, are potentially useful as a means of
identifying the source, author, creator, owner,
distributor or authorized consumer of a
document or image. For this purpose, the
objective is to permanently and unalterably mark
the image so that the credit or assignment is
beyond dispute. In the event of illicit usage, the
watermark would facilitate the claim of
ownership, the receipt of copyright revenues, or
the success of prosecution.
VISIBLE VS. INVISIBLE
WATERMARKS
Visible and invisible watermarks both serve to
deter theft but they do so in very different ways.
Visible watermarks are especially useful for
conveying an immediate claim of ownership.
The main advantage of visible watermarks, in
principle at least, is that they virtually eliminate
the commercial value of the document to a
would-be thief without lessening the document's
utility for legitimate, authorized purposes.
Invisible watermarks, on the other hand, are
more of an aid in catching the thief than
discouraging the theft in the first place. Though
neither exhaustive nor definitive, Table 1 shows
some anticipated primary (p) and secondary (s)
benefits to digital watermarking.




Table 1. Anticipated primary (p) and secondary (s) benefits of digital watermarking.

Purpose                                        Visible   Invisible
Validation of intended recipient                  -          p
Non-repudiable transmission                       -          p
Deterrence against theft                          p          p
Diminish commercial value without utility         -          p
Discourage unauthorized duplication               p          s
Digital notarization and authentication           s          p
Identify source                                   p          s

TABLE 1 CAPTION: Visible watermarks
diminish the commercial value of the
document or image. Invisible watermarks
increase the likelihood of successful
prosecution. Invisible watermarks may also
act as a deterrent if the perpetrator is aware of
their possible use.
REQUIREMENTS OF
WATERMARKS
To be effective in the protection of the
ownership of intellectual property, the invisibly
watermarked document should satisfy several
criteria:
1. the watermark must be difficult or
impossible to remove, at least without
visibly degrading the original image,
2. the watermark must survive image
modifications that are common to
typical image-processing applications
(e.g., scaling, color requantization,
dithering, cropping, and image
compression),
3. an invisible watermark should be
imperceptible so as not to affect the
experience of viewing the image, and
4. for some invisible watermarking
applications, watermarks should be
readily detectable by the proper
authorities, even if imperceptible to the
average observer. Such decodability
without requiring the original, un-
watermarked image would be necessary
for efficient recovery of property and
subsequent prosecution.
FEATURES OF DIGITAL WATERMARKING
As the name suggests, the fragile watermarks are
meant to disappear if the image (or other data
used) is corrupted. It can be really useful in
judiciary process, for example, where it is
imperative to be certain that what is used is
genuine. In contrast, robust watermarks are
designed to resist attacks (malicious or not) in
order for the key to still be detectable after it has
undergone them. The range of applications,
which includes copyright protection, calls for very
different specific characteristics. Nevertheless, it
is possible to list a set of basic requirements for
robust watermarks.
Those are a generalization, but should allow the
reader to understand, once again, the main
difficulties involved in the design of digimarks
systems and should also act as natural
preliminaries to the following paragraph
discussing the difficulties of that technology.
Perceptual Transparency : Use characteristics
of the HVS to assure that the watermark is not
visible under typical viewing conditions.
Basically, it means that a watermarked image
should not seem any different from the original
(unwatermarked) one; i.e. one should not notice
any degradation in the perceived quality. Other
types of watermarks are meant to be visible, but
in most applications they are not and this is why
we treat transparency as a basic requirement of
digital watermarking.

Robustness: The watermark can still be detected
after the image has undergone attacks (malicious
or not) such as compression, halftoning, etc. Ideally, the
amount of image distortion necessary to remove
the watermark should degrade the desired image
quality to the point of becoming commercially
valueless.

Capacity : Allows insertion of multiple,
independently detectable watermarks in an
image.
Security: The basis of watermarking security
should lie on Kerckhoffs's assumption that the
method used to encrypt the data is known to
the unauthorized party. It
means that watermarking security can be
interpreted as encryption security leading
directly to the principle that it must lie mainly in
the choice of the embedded key.

Specificity : Watermark should be universal, i.e.
applicable to images as well as audio and video
media. In fact, that has been found not to be true;
the general concept might be the same in
multiple applications but, in watermarking, one
size does not fit all.

As it might already be obvious to an attentive
reader, a lot of difficulties are encountered while
trying to define the ideal watermarking scheme
for a particular application.
Particular attention must be paid not to
downgrade the quality of the source image too
much by the insertion of the digital watermark
(i.e., the transparency requirement).
Digital watermarking algorithms usually use the
lower-order bit planes of the original image, but
so do intentional disturbance algorithms. The
availability of watermark readers allows copyright
thieves to determine whether a watermark exists
in the image or not.
In highly compressed JPEG images, there is a
limited amount of data that can be used to insert
digital watermarks. The introduction of artifacts
(such as blockiness) by compression techniques
usually destroys watermarks easily.
Robustness and transparency are two
fundamentally opposed requirements; a tradeoff
between the two must therefore be made.
Those form only a brief overview of some
problems involved in digital watermarking
technologies. Of course, as in any area of
research, unexpected troubles always arise either
with the underlying principles or during the
technical evolution of the concept.
Attacks On Watermarks
There are several kinds of malicious attacks,
which result in a partial or even total destruction
of the embed identification key and for which
more advanced watermarking scheme should be
employed
Active attacks : Here, the hacker tries
deliberately to remove the watermark or simply
make it undetectable. This is a big issue in
copyright protection, fingerprinting or copy
control for example
Passive attacks : In this case, the attacker is not
trying to remove the watermark, but simply
attempting to determine if a given mark is
present or not. As the reader should understand,
protection against passive attacks is of utmost
importance in covert communications, where the
simple knowledge of the presence of a watermark
is often more than one wants to reveal.
Collusion attacks : In collusive attacks, the goal
of the hacker is the same as for the active attacks
but the method is slightly different. In order to
remove the watermark, the hacker uses several
copies of the same data, each containing a
different watermark, to construct a new copy
without any watermark. This is a problem in
fingerprinting applications (e.g., in the film
industry), but it is not widespread because the
attacker must have access to multiple copies of
the same data, and the number of copies needed
can be quite large.
Forgery attacks : This is probably the main
concern in data authentication. In forgery
attacks, the hacker aims at embedding a new,
valid watermark rather than removing one. By
doing so, he can modify the protected data as he
wants and then re-implant a new valid key to
replace the destroyed (fragile) one, thus making
the corrupted image seem genuine.
Those four types of malicious attacks are
only a categorization of what has been
encountered in the past. Of course, a lot of new
attacks can be designed and it is impossible to
know what will come out next from hackers'
imagination. To those malicious attacks, one
must add all signal processing operations
involved in the transmission or storage of data,
which can naturally degrade the image and alter
the watermarked information to the point of not
being detectable anymore. In fact, the
identification and classification of attacks, as
well as the implementation of a standard benchmark
for robustness testing, is of great importance and
will be a key issue in the future development of
watermarking.
Description
Let us discuss an example of a watermarking
method proposed by Ward. This first-generation
technique, called Multiresolution Watermark for
Digital Images, works in the wavelet domain.
The modus operandi is pretty simple since it
consists of adding weighted pseudo-random
codes to the large coefficients at the high and
medium frequency bands of the discrete wavelet
transform of an image. Even if it might first
appear theoretically straightforward, the actual
implementation is a little more complicated than
it seems and great effort will be made here to
clarify it. First of all, one must know why the
wavelet approach is used here. The characteristics
of this transform domain are well suited for
masking considerations, since it is well localized
both in time and frequency. Besides, wavelet
transforms match the multi-channel model of the
human visual system (HVS), so one can set a
numerical limit on the alteration of the wavelet
coefficients in order to stay under the HVS
just-noticeable difference (JND), beyond which
our eyes start to become aware of the
modifications in the image. Moreover, wavelet
transforms are part of upcoming compression
standards (such as JPEG-2000), so wavelet-based
techniques would allow a much easier and more
optimized way to include a copyright protection
device in the compression code itself.
A second important aspect of the technique
used here is the way to introduce a watermark in
an image. A watermark should be introduced in
perceptually significant regions of the data in
order to remain robust. However, by doing so,
we risk altering the image perceptibly. The
technique described here follows this requirement
to a certain degree, but tries to make the
introduced watermark as invisible as possible
while showing good robustness. As the reader
will understand, we have chosen to ensure
transparency of the watermark and, at the same
time, keep robustness by embedding the
information only within the high and medium
frequencies, while keeping the third-resolution
low frequencies, to which the HVS is most
sensitive, untouched. The implemented technique
uses a non-oblivious watermark. In copyright
protection, it is reasonable to assume that the
owner or the user who wants to verify the
presence of a particular watermarking key has
access to the original unwatermarked image.
This type of watermarking scheme is called
non-oblivious (or private) watermarking, in
opposition to oblivious (or public) techniques,
such as copy protection, where the original image
is not accessible.
This approach has several advantages. First, the
multiresolution and hierarchical approach saves
computational load in detection. Then, it allows
good transparency as it
uses characteristics of our visual system. As we
know, the HVS is more sensitive to small
perturbations in the lower frequency bands, so
the low frequencies are kept intact. This assures
that the watermarked image will be as close as
possible to the original one. It has to be noted
that, from a signal processing point of view, the
requirements of transparency and robustness
conflict with each other. The developed method is
well suited, because it allows a certain tradeoff
between the two. Finally, and as already stated,
the use of wavelet transform matches the
upcoming image or video compression
standards, which means that such a
watermarking scheme could easily be included
as a part of JPEG-2000 or other wavelet-based
standard. Before proceeding with the description
of the encoding-decoding scheme, let us revisit the
goals set at the beginning of the paper. In the
first place, we wanted to implement a perceptual
watermarking scheme for digital images based on
wavelet transforms. Then we wanted to test the
robustness of our implementation against
different types of attacks, such as common image
processing operations.
Watermark Encoder
The first part of the watermarking process is the
encoder. The first step is to decompose the image
into three frequency bands using three resolutions
of Haar wavelets. The bank of filters in Figure 2
represents the octave-band structure of the Haar
decomposition, which gives the pyramid-structured
frequency localization shown in Figure 3.
Before we go on, we must point out that a
downsampling operation is performed after every
filtering stage. It must be understood that the
choice of the Haar wavelet in our system was
made for simplicity. We had in mind to
investigate the influence of the choice of the
wavelet function on our results, but, in order to
test the robustness truthfully, we had to give up
that idea in favor of the addition of extra
robustness testing procedures. The next operation
is to add a pseudo-random sequence N, drawn
from a Gaussian distribution with zero mean and
unit variance, to the coefficients of the medium
and high frequency bands (i.e., all the bands
except the lowest one, which is represented by the
top left corner in Figure 3). The normal
distribution is used because it has been proven to
be quite robust to collusion attacks. In order to
weight the watermark according to the magnitude
of the wavelet coefficients, we used one of the
two following relations between the original
coefficients y[m,n] and the watermarked
coefficients y'[m,n]:

y'[m,n] = y[m,n] + alpha * (y[m,n])^2 * N[m,n]     (1)

y'[m,n] = y[m,n] + alpha * |y[m,n]| * N[m,n]       (2)

It has to be mentioned that relations (1) and (2),
even though they are mathematically different,
have the same goal, which is to put more weight
on the watermark added to high-value wavelet
coefficients. The parameter alpha controls the
strength of the watermark; it is, in fact, a good
way to choose between good transparency, good
robustness, or a tradeoff between the two.
Finally, the two-dimensional inverse wavelet
transform is computed to form the watermarked
image. Figure 4 gives a good idea of the main
components of the encoder that we
have implemented for our project.
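The sketch below illustrates the embedding step using relation (2), with the PyWavelets package assumed for the Haar decomposition. The pseudo-random key is generated from a seed that stands in for the owner's key, and the exact band weighting of the original method may differ.

```python
import numpy as np
import pywt

def embed_watermark(image, alpha=0.1, seed=0, levels=3, wavelet="haar"):
    """Additive wavelet-domain watermark, following relation (2):
    y'[m,n] = y[m,n] + alpha * |y[m,n]| * N[m,n],
    applied to all detail bands (the lowest-frequency band is left untouched)."""
    rng = np.random.default_rng(seed)              # the seed plays the role of the owner's key
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    marked = [coeffs[0]]                           # keep the approximation band intact
    noise_bands = []
    for (cH, cV, cD) in coeffs[1:]:
        marked_level, noise_level = [], []
        for band in (cH, cV, cD):
            N = rng.standard_normal(band.shape)    # zero-mean, unit-variance sequence
            marked_level.append(band + alpha * np.abs(band) * N)
            noise_level.append(N)
        marked.append(tuple(marked_level))
        noise_bands.append(noise_level)
    watermarked = pywt.waverec2(marked, wavelet)   # inverse DWT forms the marked image
    return watermarked, noise_bands
```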
Decoder
At the other end of the communication channel, a
decoder is used to extract the watermark
information from the received image. Upon
reception of the supposedly watermarked image,
the algorithm first isolates the signature included
in this image by comparing the DWT coefficients
of the image with those of the original
(non-watermarked) one. The next operation
consists of comparing the known key with the
extracted signature by computing the
cross-correlation at the first resolution level (i.e.,
the highest-frequency coefficients). The watermark
is declared detected if there is a peak in the
cross-correlation corresponding to a positive
identification. If there is no central peak, the
decoder adds the second resolution level (i.e., the
bottom left square in the pyramid structure of
Figure 3) to the computation, aiming to find a
peak. Once again, if there is a peak, the watermark
is declared detected; if not, we go to the third
resolution, and so on, until we reach the ninth
resolution level. The main advantage of this
technique is that, while allowing good detection
even in the presence of corruption, it keeps the
level of false positive detections to a minimum,
since the extracted signature has to pass a
positive identification step before the watermark
is declared detected. The detector thus aims at
ensuring maximum exactitude in the detection of
the owner's identification key and, as said
previously, at minimizing the number of false
positive detections. The results presented later on
should convince the reader of the performance of
our decoder.
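A corresponding detection sketch is shown below. It extracts the signature as the difference between the DWT coefficients of the received and original images and accumulates bands from the finest resolution downwards until the normalized correlation with the known key shows a peak; the threshold value is illustrative only.

```python
import numpy as np
import pywt

def detect_watermark(received, original, noise_bands, levels=3, wavelet="haar", thresh=0.1):
    """Hierarchical detection sketch: compare the extracted signature with the
    known key band by band, starting at the finest resolution, and declare the
    watermark detected as soon as the normalized correlation is high enough."""
    c_rec = pywt.wavedec2(received.astype(float), wavelet, level=levels)
    c_org = pywt.wavedec2(original.astype(float), wavelet, level=levels)
    sig_parts, key_parts = [], []
    # coeffs[-1] is the finest (highest-frequency) level in pywt's ordering
    for lvl, (rec_lvl, org_lvl) in enumerate(zip(c_rec[:0:-1], c_org[:0:-1])):
        key_lvl = noise_bands[-(lvl + 1)]
        for band_r, band_o, N in zip(rec_lvl, org_lvl, key_lvl):
            sig_parts.append((band_r - band_o).ravel())   # extracted signature
            key_parts.append(N.ravel())                   # owner's key for that band
        sig = np.concatenate(sig_parts)
        key = np.concatenate(key_parts)
        rho = float(sig @ key) / (np.linalg.norm(sig) * np.linalg.norm(key) + 1e-12)
        if rho > thresh:
            return True, rho            # correlation peak found: watermark detected
    return False, rho
```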
Robustness Testing
For simplicity, we focus only on the results of
attacks due to widely used image processing
operations. By doing so, we make sure that our
system could be used in the transmission of
images. Since this is an introduction to the
watermarking subject, it gives a general idea of
the kind of problems involved in the process and
shows how simple image processing operations
can affect the functioning of a watermarking
method. Furthermore, it must be pointed out that
one of the main problems to be solved in the field
of digital watermarking is the absence of a
general testing benchmark. Such an examination
procedure has to be implemented and generalized
in order to be able to compare different
watermarking techniques. Nevertheless, in order
to test the robustness of any implementation, here
are three testing procedures. The characteristics
of the chosen attacks can be summarized as follows (Fig. 6):

* JPEG : Introduces blocking artifacts in
the compressed images.
* Halftoning: It is used for printing; it
gives the impression of grey levels while, in fact,
there are only two levels (black & white).
* Filtering: Of course, a lot of different
kinds of filters are used in all areas of signal
processing. As samples, we have used low-pass,
averaging and unsharp filters.

Applications
1. Finger Printing.
2. Indexing.
3. Copyright Protection.
4. Broadcast monitoring.
5. Data Authentication and Data Hiding.




The Future of Digital Watermarking

The field of digital watermarking is still evolving
and is attracting a lot of research interest. The
watermarking problem is inherently more
difficult than the problem of encryption, since it
is easier to execute a successful attack on a
watermark. In cryptography, a successful attack
often requires deciphering an enciphered
message. In the case of digital watermarking,
merely destroying the watermark, usually by
slightly distorting the medium containing it, is a
successful attack, even if one cannot decipher or
detect any hidden message contained in the
medium.
Thus far, the SDMI has been unsuccessful in
its attempts to devise a secure watermarking
scheme for audio files, and StirMark has been
able to defeat all the commercially available
watermarking systems for image files. These
types of schemes still have the potential to
provide copyright protection, if more robust
watermarks can be developed along with
carefully designed protocols governing their use.
However, the technology must be combined with
proper legal enforcement, and industry standards
in order to be ultimately successful.

References
1. I. J. Cox, M. L. Miller, and J. A. Bloom, "Watermarking Applications and Their Properties."
2. A. H. Paquet and R. K. Ward, "Wavelet-Based Digital Watermarking."
A TECHNICAL PAPER PRESENTATION ON
BIOMETRICS
A RENEWED SYSTEM IN CUSTOM SECURITY

By
CH.SIDDARTHA SAGAR
&
M.PRANEETH








Sri Venkateswara University College of
Engineering,
TIRUPATI 517502.


Address:

CH.SIDDARTHA SAGAR (III ECE),
M.PRANEETH(III ECE),
Room No: 1213,
Visweswara Block,
S.V.U.C.E Hostels,
Tirupati.
PIN -517502.



E-mails:siddartha_437ece@yahoo.co.in














ABSTRACT


Biometrics refers to the automatic
identification of a person based on his/her
physiological or behavioral characteristics. This
method of identification is preferred over
traditional methods involving passwords and PINs
for various reasons: (i) the person to be
identified is required to be physically present at
the point of identification; (ii) identification based
on biometric techniques obviates the need to
remember a password or carry a token. With the
increased use of computers as vehicles of
information technology, it is necessary to restrict
access to sensitive/personal data. By replacing
PINs, biometric techniques can potentially prevent
unauthorized access to or fraudulent use of ATMs,
cellular phones, smart cards, desktop PCs,
workstations, and computer networks. PINs and
passwords may be forgotten, and token-based
methods of identification like passports and
driver's licenses may be forged, stolen, or lost.
Thus biometric systems of identification are
enjoying a renewed interest. Various types of
biometric systems are being used for real-time
identification; the most popular are based on face
recognition and fingerprint matching. However,
there are other biometric systems that utilize iris
and retinal scans, speech, facial thermograms, and
hand geometry.


Biometric-based solutions are
able to provide for confidential financial
transactions and personal data privacy. The need
for biometrics can be found in federal, state and
local governments, in the military, and in
commercial applications. Enterprise-wide
network security infrastructures, government IDs,
secure electronic banking, investing and other
financial transactions, retail sales, law
enforcement, and health and social services are
already benefiting from these technologies.



INTRODUCTION

Biometrics refers to automated methods
of recognizing a person based on a physiological
or behavioral characteristic. Among the features
measured are the face, fingerprints, hand geometry,
handwriting, iris, retina, vein, and voice.
Biometric technologies are becoming the
foundation of an extensive array of highly secure
identification and personal verification solutions.
As the level of security breaches and transaction
fraud increases, the need for highly secure
identification and personal verification
technologies is becoming apparent.
Biometric-based authentication
applications include workstation, network, and
domain access, single sign-on, application logon,
data protection, remote access to resources,
transaction security and Web security. Trust in
these electronic transactions is essential to the
healthy growth of the global economy. Utilized
alone or integrated with other technologies such
as smart cards, encryption keys and digital
signatures, biometrics are set to pervade nearly all
aspects of the economy and our daily lives.
Utilizing biometrics for personal authentication is
becoming convenient and considerably more
accurate than current methods (such as the
utilization of passwords or PINs). This is because
biometrics links the event to a particular
individual (a password or token may be used by
someone other than the authorized user), is
convenient (nothing to carry or remember),
accurate (it provides for positive authentication),
can provide an audit trail and is becoming
socially acceptable and inexpensive.
There are two different ways to
resolve a person's identity: verification and
identification. Verification (Am I whom I claim I
am?) involves confirming or denying a person's
claimed identity. In identification, one has to
establish a person's identity (Who am I?). Each
one of these approaches has its own complexities
and could probably be solved best by a certain
biometric system.
A biometric system is essentially

a pattern recognition system which makes a
personal identification by determining the
authenticity of a specific physiological or
behavioral characteristic possessed by the user.
An important issue in designing a practical
system is to determine how an individual is
identified. Depending on the context, a biometric
system can be either a verification system or an identification system.

APPLICATIONS:


Biometrics is a rapidly evolving
technology, which has been widely used in
forensics such as criminal identification and
prison security. Recent advancements in
biometric sensors and matching algorithms have
led to the deployment of biometric authentication
in a large number of civilian applications.
Biometrics can be used to prevent unauthorized
access to ATMs, cellular phones, smart cards,
desktop PCs, workstations, and computer
networks. It can be used during transactions
conducted via telephone and Internet (electronic
commerce and electronic banking). In
automobiles, biometrics can replace keys with
key-less entry and key-less ignition.

FINGERPRINT MATCHING:


Among all the biometric
techniques, fingerprint-based identification is the
oldest method, which has been successfully used
in numerous applications. Everyone is known to
have unique, immutable fingerprints. A
fingerprint is made of a series of ridges and
furrows on the surface of the finger. The
uniqueness of a fingerprint can be determined by
the pattern of ridges and furrows as well as the
minutiae points. Minutiae points are local ridge
characteristics that occur at either a ridge
bifurcation or a ridge ending. Fingerprint
matching techniques can be placed into two
categories:

Minutiae-based


Correlation based.


Minutiae-based techniques first



find minutiae points and then map their
relative placement on the finger. However,
there are some difficulties when using this
approach. It is difficult to extract the
minutiae points accurately when the
fingerprint is of low quality. Also this
method does not take into account the global
pattern of ridges and furrows. The
correlation-based method is able to overcome
some of the difficulties of the minutiae-based
approach. However, it has some of its own
shortcomings. Correlation-based techniques
require the precise location of a registration
point and are affected by image translation
and rotation.











Fingerprint matching based on
minutiae has problems in matching different sized
(unregistered) minutiae patterns. Local ridge
structures cannot be completely characterized by
minutiae. An alternate representation of
fingerprints has been tried, which will capture
more local information and yield a fixed length
code for the fingerprint. The matching will then
hopefully become a relatively simple task of
calculating the Euclidean distance between the

two codes.
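A minimal sketch of such fixed-length-code matching is shown below; the threshold is arbitrary and would have to be tuned on real data.

```python
import numpy as np

def match_fingercodes(code_query, code_enrolled, threshold=0.5):
    """Fixed-length code matching sketch: two fingerprints are declared to match
    when the Euclidean distance between their codes falls below a threshold."""
    d = np.linalg.norm(np.asarray(code_query, float) - np.asarray(code_enrolled, float))
    return d < threshold, d
```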


Algorithms have been developed which are more robust to noise in fingerprint
images and deliver increased accuracy in real time. A commercial fingerprint-based
authentication system requires a very low False Reject Rate (FRR) for a given False Accept Rate
(FAR). This is very difficult to achieve with any one technique. Methods have been investigated to
pool evidence from various matching techniques to increase the overall accuracy of the system. In
a real application, the sensor, the acquisition system and the variation in the performance of the
system over time are very critical.

FINGERPRINT CLASSIFICATION:

Large volumes of fingerprints are collected and stored every day in a wide range of
applications including forensics, access control, and driver license registration. An automatic
recognition of people based on fingerprints requires that the input fingerprint be matched
with a large number of fingerprints in a database (the FBI database contains approximately 70
million fingerprints!). To reduce the search time and computational complexity, it is desirable to
classify these fingerprints in an accurate and consistent manner so that the input fingerprint is
required to be matched only with a subset of the fingerprints in the database.

Fingerprint classification is a technique to assign a fingerprint to one of several
pre-specified types already established in the literature, which can provide an indexing
mechanism. Fingerprint classification can be viewed as a coarse-level matching of the
fingerprints. An input fingerprint is first matched at a coarse level to one of the pre-specified
types and then, at a finer level, it is compared to the subset of the database containing that type
of fingerprints only. An algorithm has been developed to classify fingerprints into five classes,
namely, whorl, right loop, left loop, arch, and tented arch. The algorithm separates the number of
ridges present in four directions (0 degrees, 45 degrees, 90 degrees, and 135 degrees) by filtering
the central part of a fingerprint with a bank of Gabor filters. This information is quantized to
generate a FingerCode, which is used for classification. The classification is based on a
two-stage classifier, which uses a K-nearest neighbor classifier in the first stage and a set of
neural networks in the second stage. The classifier was tested on 4,000 images in the NIST-4
database. For the five-class problem, a classification accuracy of 90% is achieved. For the
four-class problem (arch and tented arch combined into one class), a classification accuracy of
94.8% has been achieved. By incorporating a reject option, the classification accuracy can be
increased to 96% for the five-class classification and to 97.8% for the four-class classification
when 30.8% of the images are rejected.

FINGERPRINT IMAGE ENHANCEMENT:

A critical step in automatic fingerprint matching is to automatically and reliably extract
minutiae from the input fingerprint images. However, the performance of a minutiae extraction
algorithm relies heavily on the quality of the input fingerprint images. In order to ensure that
the performance of an automatic fingerprint identification/verification system will be robust with
respect to the quality of the fingerprint images, it is essential to incorporate a fingerprint
enhancement algorithm in the minutiae extraction module. A fast fingerprint enhancement
algorithm has been developed which can adaptively improve the clarity of ridge and furrow
structures of input fingerprint images based on the estimated local ridge orientation and
frequency. The performance of the image enhancement algorithm has been evaluated using the
goodness index of the extracted minutiae and the accuracy of an online fingerprint verification
system. Experimental results show that incorporating the enhancement algorithm improves both
the goodness index and the verification accuracy.

SPEAKER RECOGNITION:

The speaker-specific characteristics of speech are due to differences in physiological
and behavioral aspects of the speech production system in humans. The main physiological
aspect of the human speech production system is the vocal tract shape. The vocal tract is
generally considered as the speech production organ above the vocal folds, which consists of the
following: (i) laryngeal pharynx (beneath the epiglottis), (ii) oral pharynx (behind the tongue,
between the epiglottis and velum), (iii) oral cavity (forward of the velum and bounded by the
lips, tongue, and palate), (iv) nasal pharynx (above the velum, rear end of the nasal cavity), and
(v) nasal cavity (above the palate and extending from the pharynx to the nostrils). The shaded
area in the following figure depicts the vocal tract.

The vocal tract modifies the spectral content of an acoustic wave as it passes through it,
thereby producing speech. Hence, it is common in speaker verification systems to make use of
features derived only from the vocal tract. In order to characterize the features of the vocal
tract, the human speech production mechanism is represented as a discrete-time system of the
form depicted in the following figure.

The acoustic wave is produced when airflow from the lungs is carried by the trachea
through the vocal folds. This source of excitation can be characterized as phonation, whispering,
frication, compression, vibration, or a combination of these. Phonated excitation occurs when the
airflow is modulated by the vocal folds. Whispered excitation is produced by airflow rushing
through a small triangular opening between the arytenoid cartilages at the rear of the nearly
closed vocal folds. Frication excitation is produced by constrictions in the vocal tract.
Compression excitation results from releasing a completely closed and pressurized vocal tract.
Vibration excitation is caused by air being forced through a closure other than the vocal folds,
especially at the tongue. Speech produced by phonated excitation is called voiced, that produced
by phonated excitation plus frication is called mixed voiced, and that produced by other types of
excitation is called unvoiced.

It is possible to represent the vocal tract in a parametric form as the transfer function
H(z). In order to estimate the parameters of H(z) from the observed speech waveform, it is
necessary to assume some form for H(z). Ideally, the transfer function should contain poles as
well as zeros. However, if only the voiced regions of speech are used, then an all-pole model for
H(z) is sufficient. Furthermore, linear prediction analysis can be used to efficiently estimate the
parameters of an all-pole model. Finally, it can also be noted that the all-pole model is the
minimum-phase part of the true model and has an identical magnitude spectrum, which contains
the bulk of the speaker-dependent information.

The above discussion also underlines the text-dependent nature of the vocal-tract
models. Since the model is derived from the observed speech, it is dependent on the speech. The
following figure illustrates the differences in the models for two speakers saying the same
vowel.

CHOICE OF FEATURES:

The LPC features were very
popular in the early speech-recognition and
speaker-verification systems. However,
comparison of two LPC feature vectors requires
the use of computationally expensive similarity
measures such as the Itakura-Saito distance and
hence LPC features are unsuitable for use in real-
time systems. Furui suggested the use of the
cepstrum, defined as the inverse Fourier
transform of the logarithm of the magnitude
spectrum, in speech-recognition applications. The
use of the cepstrum allows for the similarity
between two cepstral feature vectors to be
computed as a simple Euclidean distance.
Furthermore, Atal has demonstrated that the
cepstrum derived from the LPC features results in
the best performance in terms of FAR and FRR
for a speaker verification system. Consequently,
we have decided to use the LPC derived cepstrum
for our speaker verification system.
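A small numpy sketch of the cepstral feature computation described above (inverse Fourier transform of the log-magnitude spectrum) and the simple Euclidean comparison is given below; the windowing and the number of retained coefficients are illustrative choices.

```python
import numpy as np

def real_cepstrum(frame, n_coeffs=13):
    """Cepstrum of one windowed speech frame: inverse Fourier transform of the
    log-magnitude spectrum; only the first few coefficients are kept as features."""
    spectrum = np.fft.rfft(frame * np.hamming(len(frame)))
    log_mag = np.log(np.abs(spectrum) + 1e-12)   # avoid log(0)
    cep = np.fft.irfft(log_mag)
    return cep[:n_coeffs]

def cepstral_distance(c1, c2):
    """Similarity between two cepstral feature vectors as a simple Euclidean distance."""
    return float(np.linalg.norm(np.asarray(c1) - np.asarray(c2)))
```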









SPEAKER MODELING:

Using cepstral analysis as described in the previous section, an utterance may be represented
as a sequence of feature vectors. Utterances spoken by the same person but at different times
result in similar yet different sequences of feature vectors. The purpose of voice modeling is to
build a model that captures these variations in the extracted set of features. There are two types
of models that have been used extensively in speaker verification and speech recognition
systems: stochastic models and template models. The stochastic model treats the speech
production process as a parametric random process and assumes that the parameters of the
underlying stochastic process can be estimated in a precise, well-defined manner. The template
model attempts to model the speech production process in a non-parametric manner by retaining
a number of sequences of feature vectors derived from multiple utterances of the same word by
the same person. Template models dominated early work in speaker verification and speech
recognition because the template model is intuitively more reasonable. However, recent work in
stochastic models has demonstrated that these models are more flexible and hence allow for
better modeling of the speech production process. A very popular stochastic model for modeling
the speech production process is the Hidden Markov Model (HMM). HMMs are extensions of
the conventional Markov models, wherein the observations are a probabilistic function of the
state, i.e., the model is a doubly embedded stochastic process where the underlying stochastic
process is not directly observable (it is hidden). The HMM can only be viewed through another
set of stochastic processes that produce the sequence of observations. Thus, the HMM is a
finite-state machine, where a probability density function p(x | s_i) is associated with each state
s_i. The states are connected by a transition network, where the state transition probabilities are
a_{ij} = p(s_i | s_j). A fully connected three-state HMM is depicted in the following figure.

For speech signals, another type of HMM, called a left-right model or a Bakis model, is
found to be more useful. A left-right model has the property that, as time increases, the state
index increases (or stays the same); that is, the system states proceed from left to right. Since
the properties of a speech signal change over time in a successive manner, this model is very
well suited for modeling the speech production process.

PATTERN MATCHING:

The pattern matching process involves the comparison of a given set of input feature vectors
against the speaker model for the claimed identity and computing a matching score. For the
Hidden Markov models discussed above, the matching score is the probability that the model
generated the given set of feature vectors.

A SPEAKER VERIFICATION SYSTEM:
FACE LOCATION:

The face retrieval problem, known as face detection, can be defined as follows: given an
arbitrary black and white, still image, find the location and size of every human face it contains.
There are many applications in which human face detection plays a very important role: it
represents the first step in a fully automatic face recognition system, it can be used in image
database indexing/searching by content, in surveillance systems and in human-computer
interfaces. It also provides insight on how to approach other pattern recognition problems
involving deformable textured objects. At the same time, it is one of the harder problems in
pattern recognition.

An inductive learning detection method has been designed that produces a maximally
specific hypothesis consistent with the training data. Three different sets of features were
considered for defining the concept of a human face. The performance achieved is as follows:
an 85% detection rate, a false alarm rate of 0.04% of the number of windows analyzed, and a
1-minute detection time on a 320 x 240 image on a SunUltrasparc.

INTEGRATING FACES AND FINGERPRINTS FOR PERSONAL IDENTIFICATION:

An automatic personal identification system based solely on fingerprints or faces is often
not able to meet the system performance requirements. Face recognition is fast but not reliable,
while fingerprint verification is reliable but inefficient in database retrieval. A prototype
biometric system has been developed which integrates faces and fingerprints. The system
overcomes the limitations of face recognition systems as well as fingerprint verification systems.
The integrated prototype system operates in the identification mode with an admissible response
time. The identity established by the system is more reliable than the identity established by a
face recognition system. In addition, the proposed decision fusion scheme enables performance
improvement by integrating multiple cues with different confidence measures. Experimental
results demonstrate that the system performs very well.

MULTIBIOMETRICS:

A MULTIMODAL BIOMETRIC SYSTEM USING FINGERPRINT, FACE AND SPEECH:

A biometric system which relies only on a single biometric identifier in making a personal
identification is often not able to meet the desired performance requirements. Identification
based on multiple biometrics represents an emerging trend. A multimodal biometric system has
been introduced which integrates face recognition, fingerprint verification, and speaker
verification in making a personal identification. This system takes advantage of the capabilities
of each individual biometric and can be used to overcome some of the limitations of a single
biometric. Preliminary experimental results demonstrate that the identity established by such an
integrated system is more reliable than the identity established by a face recognition system, a
fingerprint verification system, or a speaker verification system. It meets the response time as
well as the accuracy requirements.

















BY

C.ANITHA
(anithac_in@yahoo.com)

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING
SRIKALAHASTHISWARA INSTITUTE OF TECHNOLOGY
SRIKALAHASTI.
TABLE OF CONTENTS:

ABSTRACT
INTRODUCTION
CLASSIFICATIONS
PROPERTIES
CHARACTERISTICS
APPLICATIONS OF BIOMETRIC DEVICES
FUTURE PROSPECTS
CONCLUSION

ABSTRACT

In the modern world, there is an ever-increasing need to authenticate and identify individuals
automatically. Securing personal privacy and deterring identity theft are national priorities.
Biometrics, the physical traits and behavioral characteristics that make each of us unique, are a
natural choice for identity verification. It is an emerging technology that promises an effective
solution to our security needs. It can accurately identify or verify individuals based upon their
unique physical or behavioral characteristics. It is a key that can be customized to an individual's
access needs, opening doors for one while keeping others out. We can use a biometric to access
our home, our account, or to invoke a customized setting for any secure area or application. In
this paper we explore the various types of biometric authentication techniques and their
deployment potential. We take a look into the emerging technologies in this field and note their
potential applications and future prospects.

INTRODUCTION

The art and science of biometrics is all about coming up with an all-purpose personal identifier.
Automated recognition of a person based on a physiological or behavioral characteristic is the
basic fact underlying it. Biometric authentication is the "automatic", "real-time", "non-forensic"
subset of the broader field of human identification. Humans recognize each other according to
their various characteristics. For example, we recognize others by their face when we meet them
and by their voice as we speak to them. Identity verification in computer systems has
traditionally been based on something that one has or one knows. These, however, tend to get
lost or stolen. To achieve more reliable verification or identification, something that really
characterizes the given person should be used for this purpose.
A biometric system is essentially a pattern recognition system that establishes a person's
identity by comparing the binary code of a uniquely specific biological or physical
characteristic to the binary code of the stored characteristic. This is accomplished by
acquiring a live sample from an individual who is requesting access. The system then
applies a complex and specialized algorithm to the live sample, converts it into a binary code
and then compares it to the reference sample to determine the individual's access status.

CLASSIFICATIONS:

There are various ways to classify biometric systems and devices. Let us take a look at some of
them. First, we look at the classification of systems based on their functions. Biometric systems
can be used in two different modes:
Verification, and
Identification.
Identity verification occurs when the user claims to be already enrolled in the system, and the
biometric data obtained from the user is compared to the data stored in the database.
Identification, on the other hand, occurs when the identity of the user is a priori unknown. The
user's data is matched against all the records in the database. Identification is technically more
challenging and costly. Its accuracy decreases as the size of the database grows. For this reason,
records in large databases are categorized according to a sufficiently discriminating characteristic
in the biometric data. Subsequent searches for a particular record are then performed within a
small subset only. This lowers the number of relevant records per search and increases the accuracy.
Now we take a look at the classification of biometric devices. These are classified according to
two distinct functions:
Positive Identification: To prove one is enrolled in the system.
Negative Identification: To prove one is not enrolled in the system.

These functions are "duals" of each other. In the first function, the present person is linked with
an identity previously registered, or enrolled, in the system. The second function, establishes that
the present person is not already present in the system. The purpose of this negative identification
system is to prevent the use of multiple identities by a single person. If a negative identification
system fails to find a match between the submitted sample and all the enrolled templates, an
"acceptance" results. A match between the sample and one of the templates results in a
"rejection".
ERRORS:
No biometric system can verify the identity of a person absolutely. It cannot give simple yes/no
answers. Instead, it measures how similar the current biometric data is to the record stored in the
database, and makes a decision according to the probability that the two biometric samples come
from the same person. No biometric system is flawless. There are two kinds of errors present in
any biometric system:
False rejection: A legitimate user is rejected because the system does not find his current
biometric data similar enough to the master template stored in the database.
False acceptance: An impostor is accepted as a legitimate user because the system finds
the impostor's biometric data similar enough to the master template of a legitimate user.
The decision depends on a similarity margin, or threshold. If this margin is too small, the system
will reject legitimate persons (false rejection), while if it is too large, malicious persons will be
accepted by the system (false acceptance).
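As a rough illustration of this trade-off, the decision step can be thought of as comparing a similarity score against a tunable threshold. The sketch below is only illustrative; the function name and the 0-to-1 score scale are assumptions, not part of any particular biometric product.

```python
# Sketch: threshold-based biometric match decision (illustrative only).
# A real system derives the similarity score from a specialised matcher;
# here the score is simply assumed to lie between 0.0 and 1.0.

def decide(similarity: float, threshold: float) -> str:
    """Accept the claimant if the live sample is similar enough to the template."""
    return "accept" if similarity >= threshold else "reject"

# A strict (high) threshold lowers false acceptances but raises false rejections;
# a lax (low) threshold does the opposite.
print(decide(0.82, threshold=0.90))  # a legitimate user scoring 0.82 is rejected
print(decide(0.82, threshold=0.60))  # an impostor scoring 0.82 would be accepted
```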
PROPERTIES:
Any human physiological or behavioral characteristics can become a biometric provided the
following properties are fulfilled:
Universality: Every person should have the characteristic. There are mute people, people
without fingers or with injured eyes; all of these must be taken into account.
Uniqueness: No two persons should have the same biometric characteristics. Identical
twins cannot be easily distinguished by face recognition and DNA-analysis systems.
Permanence: The characteristics should be invariant with time. A person's face changes
significantly with time, and the signature and its dynamics may change as well.
Performance: It is the accuracy, resources and environmental conditions required to
achieve the desired results. The accuracy of some signature dynamics systems is as low
as 75% and the verification decision takes over one second.
Acceptability: This indicates to what extent people are willing to accept the biometric
system. Face recognition systems are not personally intrusive, but there are countries
where taking pictures of persons is not acceptable. The retina scanner requires an infrared
light beam directed through the cornea of the eye. This is rather invasive and only a few
users accept this technology.
CHARACTERISTICS:
There are two major categories of biometric technologies according to what they measure:
Behavioral characteristics
Keystroke dynamics,
Voice,
Gait, and
Signature dynamics.
Physical characteristics
Fingerprint,
Face,
Retina,
Iris,
Vein pattern, and
Hand and finger geometry.
We now look into each of the aforementioned characteristics and delve into their underlying
technology and applications.
KEYSTROKE DYNAMICS:
Keystroke dynamics, also referred to as typing rhythms, is one of the most unusual and
innovative biometric technologies in use today. It is an automated method of examining an
individual's keystrokes on a keyboard. This technology examines such dynamics as speed,
pressure, total time taken to type particular words, and the time elapsed between hitting certain
keys.
The algorithms are still being developed to improve robustness and distinctiveness. It works well
with users that can "touch type". The main advantage in applying keystroke dynamics is that the
device used in this system, the keyboard, is unobtrusive and does not detract from one's work.
Enrolment as well as identification goes undetected by the user. Keystroke dynamics has many
applications in the computer security arena, like restricting root level access to the master server
hosting a key database.
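As a minimal sketch of how such timing data might be compared (the feature representation and tolerance below are assumptions, not a description of any commercial product), enrolment can average the delays between successive keystrokes of a pass phrase, and verification can check how far a fresh attempt deviates from that profile:

```python
# Sketch of keystroke-dynamics matching (assumed representation):
# each sample is a list of delays, in milliseconds, between successive keystrokes
# of a fixed pass phrase. Enrolment averages several samples into a profile.

def enrol(samples: list[list[float]]) -> list[float]:
    """Average the per-position delays over the enrolment samples."""
    n = len(samples)
    return [sum(col) / n for col in zip(*samples)]

def matches(profile: list[float], attempt: list[float], tolerance_ms: float = 40.0) -> bool:
    """Accept if the mean absolute timing deviation is within the tolerance."""
    deviation = sum(abs(p - a) for p, a in zip(profile, attempt)) / len(profile)
    return deviation <= tolerance_ms

profile = enrol([[120, 95, 180, 140], [110, 100, 170, 150], [125, 90, 185, 145]])
print(matches(profile, [118, 97, 178, 142]))   # likely the enrolled typist
print(matches(profile, [300, 60, 90, 400]))    # likely someone else
```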
VOICE:
Voice identification is a hybrid between behavioral and physiological biometric. A voice
identification system, like other biometric technologies, requires that a "voice reference template"
be constructed so that it can be compared against subsequent voice identifications. To construct
the "reference template" an individual must speak a given pass phrase several times as the system
builds the template based on numerous characteristics, including: cadence, pitch, tone, shape of
larynx, dynamics, and waveform. There are five specific forms of voice identification
technologies that are currently available or under development:
1. Speaker Dependent,
2. Speaker Independent,
3. Discrete Speech Input,
4. Continuous Speech Input, and
5. Natural Speech Input.

A major concern for voice identification systems is how to account for the variations in one's
voice each time identification occurs. The rate and pitch at which an individual speaks is not
always the same.
Most applications of voice identification today fall under the industries of call-answering and
contact-management services. Some designs have wall-mounted readers whilst others integrate
voice verification into conventional telephone handsets. Speaker verification works with a
microphone or with a regular telephone handset, although performance increases with higher
quality capture devices.
There are many advantages to using voice identification. It provides eyes and hands-free
operation, is reliable, flexible, and has a good data accuracy rate. In the future, voice
identification will not only be used for text dictation but to run applications and control
predetermined commands. It has also been estimated that if voice identification technology
continues to progress as it has, keyboards will become obsolete in a decade.
GAIT:
Gait recognition can be used to monitor people without the need for their cooperation. It can spot
people who are moving around in suspicious ways, which may include repetitive walking patterns
or movements that don't appear natural given their physicality. It could also be used in
conjunction with other biometric systems to verify people's identities and weed out impostors.
Though still in its infancy, the technology is growing in significance. At present, gait recognition
is much less diagnostic than other methods, but it can act as a filter and a screening tool in
conjunction with other biometric methods.
Gait recognition can be achieved by computer vision or with the help of a radar system. The
former uses visually-based systems that use video cameras to analyze the movements of each
body part - the knee, foot, shoulder, and so on, while the latter uses a radar-based system. As an
individual walks towards the system, they're bombarded with invisible radio waves. Each
individual's walking speed and style will make those waves bounce back a little differently. These
new identification methods hold promise as tools in the war on terrorism and in medical
diagnosis. Basic changes in someone's walking pattern can be an early indicator of the onset of
Parkinson's disease, multiple sclerosis and normal pressure hydrocephalus (NPH).
SIGNATURE DYNAMICS:
Signature identification, also known as Dynamic Signature Verification (DSV), analyses the way
a user signs his name. It analyses two different areas of an individual's signature: the specific
features of the signature and specific features of the process of signing, like speed, pen pressure,
directions, stroke length, and the points in time when the pen is lifted from the paper. Signature
identification devices can also analyze the "static" image of one's signature. In this case the
device captures the image of one's signature and saves it for future comparisons to the stored
template. To account for the change in one's signature over time, signature identification systems
adapt to any slight variances over time. The way a dynamic signature identification system
accomplishes this is by recording the time, history of pressure, velocity, location and acceleration
of a pen each time a person uses the system.
With a good amount of practice, a person might be able to duplicate the visual image of someone
else's signature but it is difficult if not impossible to duplicate "how" that person signs their name.
The healthcare industry is aggressively adopting signature identification for the submission of
new drug applications.
FINGERPRINT:
Fingerprint identification is perhaps the oldest of all the biometric techniques. The skin on the
inside of a finger is covered with a pattern of ridges and valleys. Every individual is believed to
have unique fingerprints. The papillary ridges in the fingerprint pattern are not continuous lines
but lines that end, bifurcate, or form an island. These special points are called minutiae and,
although in general a fingerprint contains about a hundred minutiae, for a positive identification
12 minutiae are all that have to be identified, according to the 12-point rule. The fingerprint
scanners are based on a variety of techniques such as:

Optical sensors with CCD or CMOS cameras,
Ultrasonic sensors,
Solid state electric field sensors,
Solid state capacitive sensors, and
Solid state temperature sensors.
Fingerprint scanners currently cannot make the distinction between a real, living finger and a
dummy created from silicone rubber or any other material. Comparing all biometric verification
possibilities, fingerprint scanners are perhaps one of the least secure means of verification. It
is the only system where the biometrical characteristic can be stolen without the owner noticing it
or reasonably being able to prevent it. But, fingerprint verification is good for in-house systems,
where the system operates in a controlled environment and adequate explanation and training can
be given to its users.
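The 12-point rule mentioned above can be illustrated with a small sketch. It assumes the minutiae of the live scan have already been aligned with the template, which a real matcher must do itself; the coordinate representation and tolerances are illustrative assumptions.

```python
# Sketch of the 12-point rule on pre-aligned minutiae (illustrative only).
# Each minutia is (x, y, angle_degrees); a real matcher must also handle
# rotation, translation and ridge-type information.
from math import hypot

def paired(m1, m2, dist_tol=10.0, angle_tol=15.0) -> bool:
    """Two minutiae pair up if they are close in position and orientation."""
    (x1, y1, a1), (x2, y2, a2) = m1, m2
    d_angle = abs(a1 - a2) % 360
    d_angle = min(d_angle, 360 - d_angle)          # wrap-around angular distance
    return hypot(x1 - x2, y1 - y2) <= dist_tol and d_angle <= angle_tol

def is_match(template, live, required=12) -> bool:
    """Positive identification once at least `required` minutiae pair up."""
    used = set()
    count = 0
    for m in live:
        for i, t in enumerate(template):
            if i not in used and paired(m, t):
                used.add(i)
                count += 1
                break
    return count >= required

template = [(10, 12, 30), (40, 44, 90), (22, 75, 180)]
print(is_match(template, template, required=3))    # trivially True with a low bar
```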
FACE:
Face recognition uses the visible physical structure of the face and analyses the spatial geometry
of distinguishing features in it to identify an individual. It is accomplished in a five-step process:
1. An image of the face is acquired by digitally scanning an existing photograph or by using
a camera to acquire a live picture of the subject.
2. Software is then employed to detect the location of the face in the acquired image.
3. The identifying features of the face are extracted and analyzed.
4. This template is then compared with those in a database of known faces.
5. Depending on the extent of match, a positive or negative result is declared.


The face is registered as a biometric signature after being normalized, so that it is in the same
format (size, resolution, view, etc.) as the signatures on the system's database. A matcher
compares the normalized image with the set of normalized signatures on the system's database. A
measure of similarity or difference is computed for each comparison of normalized signatures.
Principal component analysis, elastic graph matching, neural networks and distortion-tolerant
template matching are some of the techniques used for face recognition. The better the quality of
the captured image, the better the system performs. The development of a hybrid system that takes
advantage of more than one approach is the present task at hand. Facial recognition has
been used in projects to monitor card counters, shoplifters in stores, and criminals.
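To illustrate the comparison step, here is a small sketch of the principal-component-analysis approach mentioned above, using NumPy. It assumes the faces have already been detected, normalized to the same size and flattened into vectors; the number of components and the distance measure are assumptions made for illustration.

```python
# Sketch of the PCA ("eigenface") comparison step, assuming face images have
# already been detected, cropped and flattened to equal-length vectors.
import numpy as np

def build_subspace(faces: np.ndarray, k: int = 20):
    """faces: (n_images, n_pixels). Returns the mean face and top-k principal axes."""
    mean = faces.mean(axis=0)
    centred = faces - mean
    # SVD of the centred data gives the principal components as rows of vt.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean, vt[:k]

def project(face: np.ndarray, mean: np.ndarray, axes: np.ndarray) -> np.ndarray:
    """Map a face vector into the low-dimensional PCA subspace."""
    return axes @ (face - mean)

def best_match(probe, gallery_codes, mean, axes):
    """Return the index of the enrolled template closest in the PCA subspace."""
    p = project(probe, mean, axes)
    dists = [np.linalg.norm(p - g) for g in gallery_codes]
    return int(np.argmin(dists)), float(min(dists))
```

A small gallery would be enrolled once with build_subspace and project; each live capture is then projected and compared, with the distance itself thresholded to declare a positive or negative result.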
RETINA:
A retina-based biometric involves analyzing the layer of blood vessels situated at the back of the
eye. It projects a low-intensity infrared light through an optical coupler to the back of the eye and
onto the retina to scan its unique patterns. Infrared light is used due to the fact that the blood
vessels on the retina absorb the infrared light faster than surrounding eye tissues. The infrared
light with the retinal pattern is reflected back to a video camera. The video camera captures the
retinal pattern and converts it into data that is only a few bytes in size. Retinal scanning can be
quite accurate but does require the user to look into a receptacle and focus on a given point. The
user must be standing very still within inches of the device. This is not particularly convenient if
one wears glasses or is concerned about having close contact with the reading device.

Retinal identification has several disadvantages such as:
Susceptible to disease damage (e.g. Cataracts),
Viewed as intrusive and not user friendly, and
High amount of user and operator skill required.
However, retinal identification continues to be one of the best biometric performers on the market
with low false reject rates, a nearly zero percent false acceptance rate, small data template, and
quick identity confirmations.
IRIS:
The iris is the coloured ring of textured tissue that surrounds the pupil of the eye. Each iris has a
unique structure featuring a complex pattern. This can be a combination of specific characteristics
known as corona, crypts, filaments, freckles, pits, furrows, striations, and rings. Iris recognition
involves analyzing features found in the iris using a special grey-scale camera at the distance of
10 - 40 cm from the camera. The iris is an excellent choice for identification: it is formed
randomly, stable throughout one's life, not very susceptible to wear and injury, and it contains a
pattern unique to the individual. Indeed, an individual's right and left iris patterns are completely
different, and so are the iris patterns of identical twins.
In the iris, there are over 400 distinguishing characteristics, or Degrees of Freedom (DOF), that
can be quantified and used to identify an individual. Approximately 260 of those are used or
captured in a "live" iris identification application. These identifiable characteristics include:
contraction furrows, striations, pits, collagenous fibers, filaments, crypts (darkened areas on the
iris), serpentine vasculature, rings, and freckles. Due to these unique characteristics, the iris has
six times more distinct identifiable features than a fingerprint. The iris scanner does not need any
special lighting conditions or any special kind of light (unlike the infrared light needed for the
retina scanning).



Iris identification can be broken down into the following fundamental steps:
1. The user stands in front of the system, while a wide-angle camera calculates the position
of his eye.
2. A second camera zooms in on the eye and takes a black and white image.
3. After the system has the iris in focus, it overlays a circular grid on the image of the iris
and identifies the areas where light and dark fall.
4. The system then recognizes a pattern within the iris and generates 'points' within
the pattern into an 'eye print'.
Finally, the captured image or 'eye print' is checked against a previously stored 'reference
template' in the database and a decision is made.
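A minimal sketch of that final comparison step follows, assuming the system has already reduced each iris to a fixed-length binary 'eye print'. Comparing such codes by their fractional Hamming distance is a common approach in the iris-recognition literature; the threshold used here is an illustrative assumption.

```python
# Sketch: comparing binary iris codes by fractional Hamming distance.
# The encoding step (grid overlay and light/dark analysis described above)
# is assumed to have produced equal-length bit strings.

def hamming_fraction(code_a: str, code_b: str) -> float:
    """Fraction of bit positions at which the two eye prints differ."""
    assert len(code_a) == len(code_b)
    differing = sum(a != b for a, b in zip(code_a, code_b))
    return differing / len(code_a)

def same_iris(code_a: str, code_b: str, threshold: float = 0.32) -> bool:
    # The 0.32 threshold is an illustrative assumption, not a standard value.
    return hamming_fraction(code_a, code_b) < threshold

print(same_iris("1011001110100101", "1011001010100101"))  # 1 of 16 bits differ -> True
print(same_iris("1011001110100101", "0100110001011010"))  # all bits differ     -> False
```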
VEIN PATTERN:
Vein pattern identifies an individual by the unique pattern of veins on his palm. Palm vein
patterns are unique from one person to the next and, except for the size, they do not change with
time. Here the palm is first illuminated by an infra-red light. The veins just beneath the skin of the
palm absorb this light and show up as a dark pattern, giving a picture of the veins in the palm. Using an algorithm, a
pattern is extracted from this picture and checked against patterns already stored in the system. If
there is a match, the person's identity is confirmed.
HAND AND FINGER GEOMETRY:
One of the oldest biometric methods after fingerprint recognition, hand geometry has been in use
for over three decades. It involves analyzing and measuring physical characteristics of the user's
hand and fingers from a three-dimensional perspective. Spatial geometry is examined as the user
puts his hand on the sensor's surface and uses guiding poles between the fingers to properly place
the hand and initiate the reading. Finger geometry usually measures two or three fingers.
Hand geometry is essentially based on the fact that virtually every individual's hand is shaped
differently and over the course of time the shape of a person's hand does not change significantly.
The basic principle of operation behind the use of hand geometry is to measure the physical
geometric characteristics of an individual's hand. From these measurements a profile or 'template'
is constructed which is then used to compare against subsequent hand readings of the user.
Hand geometry is a well-developed technology that has been thoroughly field-tested and is easily
accepted by users. It offers a good balance between performance-characteristics and ease of use.
The system is employed where robustness and low-cost are of primary concern. Accuracy can be
high if desired, whilst flexible performance tuning and configuration can accommodate a wide
range of applications. Hand geometry applications are finding their way into mainstream
industries including child day care centers, health clubs, and universities.




APPLICATIONS OF BIOMETRIC DEVICES:
Leading car manufacturers use fingerprint recognition as a requirement for ignition of the
engine.
Disney World uses a fingerprint scanner to verify season-pass holders entering the theme
park.
GM and Hertz use voice identification technology to protect their computer facilities.
Net Nanny has developed a user identification system using keystroke dynamics.
British Employment Services use signature verification to verify an individual who is
claiming benefits.
Pentonville Prison in England employs signature identification to prevent prisoners from
signing off food against other prisoners' accounts.
Heathrow airport uses iris recognition to check the identity of travellers entering the UK.
Iris scanners were used in the Winter Olympics in Nagano to identify biathlon
participants before they were granted access to their rifles.
The US Immigration and Naturalization Service currently allow international travellers to
bypass long lines at busy airports by using an automated kiosk with a hand geometry
recognition system.
Child day care centers use hand geometry systems to verify the identity of parents.
Hospitals use hand geometry systems to monitor payroll accuracy and access control.
Fujitsu has developed a mouse that can verify a person's identity by recognizing his
pattern of blood veins.
FUTURE PROSPECTS:
Biometry is one of the most promising and life-altering technologies in existence today. It is all
set to change the way we live in the future. Some of the emerging biometrics technologies in the
near future are:
1. Ear shape identification - measures the shape and geometry of the ear. Ears have more
identification richness than any other part of the human body except fingerprints. They do
not change significantly from the time the subject reaches adult age.
2. Body odour identification - body odour can be digitally recorded for identification.
3. Body salinity identification exploits the natural level of salinity in the human body. An
electric field is passed through the body on which data can be carried.
4. EEG Fingerprint - the baseline brainwave pattern of every individual is unique and thus
could feasibly meet the qualifications of a biometric.
5. DNA matching - the "Ultimate" biometric technology that would produce proof-positive
identification of an individual. This technology is still not considered a "biometric"
technology and is years away from any kind of implementation.
CONCLUSION:
Even though the accuracy of the biometric techniques is not perfect yet, there are many mature
biometric systems available in the market today. Many successful applications of biometric
technology currently exist. This technology has proven capable of decreasing costs and increasing
convenience for both users and system administrators. Further, these systems are capable of
increasing both privacy and identity security. There is no reason why these devices could not
currently be used within the financial services community for internal applications and
infrastructure protection, such as for access control to sensitive areas and computers. The major
impediment to universal implementation at the consumer level is the wide variety of competing,
vendor-proprietary devices, all without general standardization. This cacophony of devices,
however, further serves as a protection to privacy, preventing any one measurement from being
used to access non-communicating systems.
Utilized alone or integrated with other technologies such as smart cards, encryption keys and
digital signatures, biometrics is set to pervade all aspects of the economy and our daily lives.
Utilizing biometrics for personal authentication is convenient and more secure than current keys,
passwords and PINs. With rapid progress in electronic and Internet commerce, there is a growing
need to authenticate the identity of a person for secure transaction processing. These technologies
are the foundation of an extensive array of highly secure, fast, accurate and user-friendly
identification and personal verification solutions.
REFERENCES:
The Biometric Consortium - www.biometrics.org
National Institute of Standards and Technology - www.nist.gov
The Biometric Catalog - www.biometricscatalog.org
ITPro - www.computer.org













Bluetooth Security

By


G.V.KAVYA D.RAMYA KRISHNA
Cse(2/4) Cse(2/4)
gottumukkala_kavya@yahoo.com d_ramya12@yahoo.co.in
ph: 9885424967 ph: 9985273688


Abstract:

In the past we have seen technologies like PC Card, PCI, USB, and
IrDA introduced to enrich the mobile computing environment.
Bluetooth technology is the latest such initiative to add a connectivity
feature to mobile systems.

Bluetooth provides a short range wireless communication between
devices making it convenient for users and thus eliminating the need
for messy cables. According to the Bluetooth Special Interest Group
(2006), "Bluetooth wireless technology is the most widely supported,
versatile, and secure wireless standard on the market today."
Bluetooth operates in the open 2.4 GHz ISM band and is "now found in
a vast array of products such as input devices, printers, medical
devices, VoIP phones, whiteboards, and surveillance cameras.
[However], the proliferation of these devices in the workplace exposes
organizations to security risks" (Detecting Bluetooth Security
Vulnerabilities, 2005).

The research presented in this paper deals with the performance
analysis of Bluetooth technology. This study focuses on the mechanism
of Bluetooth bonding and authentication: when two units want to
communicate in a secure way, they need to be paired with each other.
In the pairing process the units exchange keys and authenticate each
other. We also describe Bluetooth security weaknesses and the holes
they leave open to an attacker, the typical behaviour of such attackers,
the countermeasures against them, and some of the vulnerabilities and
risks associated with the technology. Finally, we discuss how to protect
information shared via Bluetooth and how to educate people in the
effective and secure use of Bluetooth.

What is Bluetooth?

In the past, the only way to connect computers together for the purpose
of sharing information and/or resources was to connect them via
cables. This can be not only cumbersome to set up, but it can also get
messy very quickly. Bluetooth provides a solution to
this problem by providing a cable-free environment. According to the
official Bluetooth website, www.bluetooth.com, Bluetooth wireless
technology is a short-range communications technology intended to
replace the cables connecting portable and/or fixed devices while
maintaining high levels of security. The key features of Bluetooth
technology are robustness, low power, and low cost. The Bluetooth
specification defines a uniform structure for a wide range of devices to
connect and communicate with each other. The idea behind Bluetooth
technology was born in 1994, when a team of researchers at Ericsson
Mobile Communications "initiated a feasibility study of universal
short-range, low-power wireless connectivity as a way of eliminating
cables between mobile phones and computers, headsets and other
devices" (Bialoglowy, 2005). In 1998, this group evolved into the
Bluetooth Special Interest Group (SIG). Along with Ericsson, other
founding members included Nokia, Intel, IBM and Toshiba. Today,
"the SIG is comprised of over 4,000 members who are
leaders in the telecommunications, computing, automotive, music,
apparel, industrial automation, and network industries,

and a small group of dedicated staff in Hong Kong, Sweden, and the
USA" (Bluetooth SIG, 2006). Many people wonder where the name "Bluetooth" came
from. According to the Bluetooth SIG (2006), the name
"Bluetooth" is taken from the 10th century Danish King Harald
Blatand - or Harold Bluetooth in English. During the formative stage of
the trade association a code name was needed to name the effort. King
Blatand was instrumental in uniting warring factions in parts of what is
now Norway, Sweden, and Denmark - just as Bluetooth technology is
designed to allow collaboration
between differing industries such as the computing, mobile phone, and
automotive markets. The code name stuck.

How Bluetooth Works
Bluetooth can be used to connect almost any device to another
device. Bluetooth can be used to form ad hoc networks of several (up to
eight) devices, called piconets (Vainio, 2000). When Bluetooth
devices first connect, there is a piconet master that initiates the
connection, and the others are slave devices. "One piconet can have a
maximum of seven active slave devices and one master device. All
communication within a piconet goes through the piconet master. Two
or more piconets together form a scatternet, which can be used to
eliminate Bluetooth range restrictions." (Haataja, 2006) It is not
possible to be a master of two different piconets because a piconet is a
group of devices all synchronized to a hopping sequence set by the
master. For that reason, any devices that share a master must be on the
same piconet. "A scatternet environment requires that different piconets
have a common device (a so-called scatternet member) to relay data
between the piconets." (Haataja, 2006) As stated on the Bluetooth SIG
website, "Bluetooth technology operates in the unlicensed
industrial, scientific and medical (ISM) band at 2.4 to 2.485 GHz,
using a spread spectrum, frequency hopping, full-duplex signal at a
nominal rate of 1600 hops/sec. The 2.4 GHz ISM band is available and
unlicensed in most countries." Bluetooth devices within a 10 to 100
meter (or 30 to 300 foot) range can share data with a throughput of 1
Mbps for Version 1.2 and up to 3 Mbps for Version 2.0 + Enhanced
Data Rate (EDR).

Data is transmitted between Bluetooth devices in packets across the
physical channel, which is subdivided into time units known as slots. As
described in an article in JDJ, the world's leading Java resource, the
radio layer is the physical wireless connection. To avoid interference
with other devices that communicate in the ISM band, the modulation
is based on fast frequency hopping. Bluetooth divides the 2.4 GHz
frequency band into 79 channels, 1 MHz apart (from 2.402 to 2.480
GHz), and uses this spread spectrum to hop from one channel to
another, up to 1,600 times per second (Mikhalenko, 2006). Bluetooth
SIG further explains that,
within a physical channel, a physical link is formed between any two
devices that transmit packets in either direction between them. In a
piconet physical channel there are restrictions on which devices may
form a physical link. There is a physical link between each slave and
the master. Physical links are not formed directly between the slaves in
a piconet. (Bluetooth SIG, 2006) Profiles are used with Bluetooth so
that devices can communicate with each other and so that there is
interoperability between vendors. These profiles define behaviors of
the Bluetooth devices and "the roles and capabilities for specific types
of applications" (Mikhalenko, 2005). Each profile specification
contains information on the following
topics:
Dependencies on other profiles
Suggested user interface formats
Specific parts of the Bluetooth protocol stack used by the
profile. To perform
each task, each profile uses particular options and parameters at each
layer of the
stack. (Bluetooth SIG, 2006)
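The channel arithmetic quoted above (79 channels, 1 MHz apart, starting at 2.402 GHz, roughly 1,600 hops per second) can be illustrated with a short sketch. The real hop sequence is derived from the master's address and clock as defined in the Bluetooth specification; the random selection below is only a stand-in for it.

```python
# Sketch: the 79 Bluetooth channels quoted above (2.402-2.480 GHz, 1 MHz apart).
# The real hop sequence is derived from the master's address and clock as defined
# in the Bluetooth specification; random.randrange below is only a stand-in.
import random

CHANNELS = 79
BASE_GHZ = 2.402

def channel_frequency_ghz(channel: int) -> float:
    """Map a channel index to its carrier frequency (1 MHz spacing)."""
    if not 0 <= channel < CHANNELS:
        raise ValueError("Bluetooth uses channels 0..78")
    return BASE_GHZ + channel * 0.001

# Roughly 1,600 hops per second means a new channel every 625 microseconds.
for _ in range(5):
    ch = random.randrange(CHANNELS)
    print(f"hop to channel {ch:2d} at {channel_frequency_ghz(ch):.3f} GHz")
```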


Bluetooth Security
Security has played a major role in the invention of Bluetooth. The
Bluetooth SIG has put much effort into making Bluetooth a secure
technology and has security experts who provide critical security
information. In general, Bluetooth security is divided into three
modes: (1) non-secure; (2) service level enforced security; and (3) link
level enforced security. In non-secure mode, a Bluetooth device does not
initiate any security measures. In service-level enforced security mode,
"two Bluetooth devices can establish a nonsecure Asynchronous
Connection-Less (ACL) link. Security procedures, namely
authentication, authorization and optional encryption, are initiated
when a L2CAP (Logical Link Control
and Adaptation Protocol) Connection-Oriented or Connection-Less
channel request is made" (Haataja, 2006). The difference between
service level enforced security and link level enforced security is that
in the latter, the Bluetooth device initiates security procedures before
the channel is established. As mentioned above, Bluetooth's security
procedures include authorization, authentication and optional
encryption. Authentication involves proving the identity of a computer
or computer user, or in Bluetooth's case, proving the identity of one
piconet member to another. Authorization is the process of granting or
denying access to a network resource. Encryption is the translation of
data into secret code. It is used
between Bluetooth devices so that eavesdroppers cannot read its
contents. However, even with all of these defense mechanisms in
place, Bluetooth has been shown to have some security risks. The next
section of this paper will describe some of these vulnerabilities
associated with Bluetooth technology.


Bluetooth Vulnerabilities and Security Risks:
Blue jacking is the process of sending unsolicited
messages, or business cards, to Bluetooth-enabled devices. This
does not involve altering any data from the device, but
nonetheless, it is unsolicited. Devices that are set in non-
discoverable mode are not susceptible to blue jacking. In order
for blue jacking to work, the sending and receiving devices must
be within 10 meters of one another. While this method has been
widely used for promotional purposes, Bluetooth device-owners
should be careful never to add the contact to their address book.
While bluejacking is usually not done with malicious intent,
repetitive bogus messages can be annoying to the user, and in
some cases, can render the product inoperable. This can also
open the door to a variety of social engineering attacks.
Bluesnarfing is a method of hacking into a Bluetooth-
enabled mobile phone and copying its entire contact book,
calendar or anything else stored in the phone's memory. By
setting the device to non-discoverable, it becomes significantly
more difficult to find and attack the device. However, "the
software tools required to steal information from Bluetooth-
enabled mobile phones are widely available on the Web, and
knowledge of how to use them is growing" (Kotadia, 2004).
Companies such as Nokia and Sony Ericsson are making sure
new phones coming to market will not be susceptible to
bluesnarfing.
The backdoor attack involves establishing a trust
relationship through the "pairing" mechanism, but ensuring that it
no longer appears in the target's register of paired devices. In
this way, unless the owner is actually observing their device at
the precise moment a connection is established, they are unlikely
to notice anything untoward, and the attacker may be free to
continue to use any resource that a trusted relationship with that
device grants access to. "This means that not only can data be
retrieved from the phone, but other services, such as modems, or
Internet, WAP and GPRS gateways, may be accessed without the
owner's knowledge or consent." (The Bunker, 2003)
The Cabir worm is malicious software that uses Bluetooth
technology to seek out available Bluetooth devices and send
itself to them. According to the Bluetooth SIG (2006), "The Cabir
worm currently only affects mobile phones that use the Symbian
Series 60 user interface platform and feature Bluetooth wireless
technology. Furthermore, the user has to manually accept the
worm and install the malware in order to infect the phone."
Although this may be the case, this shows that it is achievable to
write mobile viruses that spread via Bluetooth and may cause
other hackers to explore the possibilities of writing Bluetooth
viruses. The Mabir worm is essentially a variant of the Cabir
worm where it uses Bluetooth and Multimedia Messaging
Service messages (MMS) to replicate.


Conclusion :
Bluetooth wireless is constantly growing in popularity because of the
convenience of exchanging information between mobile devices. As
Bluetooth usage rises, so do the security risks associated with the
technology. Advantages of Bluetooth include "the ability to
simultaneously handle both data and voice transmissions [which]
enables users to enjoy [a] variety of innovative solutions such as a
hands-free headset for voice calls, printing and fax capabilities, and
synchronizing PDA, laptop, and mobile phone applications"
(Bluetooth SIG, 2006). Bluetooth users should familiarize themselves
with Bluetooth security issues before using Bluetooth devices, and
especially before they bring these devices into the work place.
References

Bialoglowy, Marek (2005). Bluetooth Security Review, Part 1.
Security Focus. Retrieved from
http://www.securityfocus.com/print/infocus/1830 on July 1,
2006.

DATA SECURITY AND AUTHENTICATION


Authors:

Mr. Sunil Vijay kumar, G. Mr. Arun kumar, S.
Associate Professor, Mtech CS II year,
Dept Of Computer Science, roll no: 05091D0501,
RGMCET Nandyal. RGMCET Nandyal.
E-mail sunilvkg@yahoo.com E-mail ranaarunkumar_1@yahoo.co.in
Contact :09849190236 contact : 09393871382


Abstract

The project entitled DATA
SECURITY AND AUTHENTICATION
(USING STEGANOGRAPHY) deals with
implementing security during transmission of
data using cryptography and steganography. A
steganalysis is performed and security is
implemented. The project implements both
forms of security, comparing plain
steganography against a combined
cryptography-steganography approach. The goal of
steganography is to hide messages inside
other harmless messages in a way that does
not allow any enemy to even detect that there
is a second secret message present.
In the first phase, the end user identifies an
image which is going to act as the carrier of the
data. The data file is also selected and then, to
achieve greater speed of transmission, the
data file is compressed and embedded in the
image file. The image, if intercepted by a third
party, will open
in any image previewer but will not display
the data. This protects the data from being
visible and hence keeps it secure during
transmission. The user at the receiving end
uses another piece of code to retrieve the data
from the image.


1. Introduction

The system implements steganography as
the technique to hide data and provide security
between the sender and the receiver on the
network.

There are two types of hackers:
1. One with the intention of knowing data .

2. The other with the intention of damaging the
data

Here we are more interested in the data
embedded inside the image and least worried
about the image data itself. The goal of
steganography is to hide messages inside
other harmless messages and maintain the
secrecy of the communication taking place.
Authentication is the process by which a
computer, computer program, or another user
attempts to confirm that the computer, computer
program, or user from whom the second party
has received some communication is, or is not,
the claimed first party.
Steganography, from the Greek, means
covered, or secret writing, and is a long-
practised form of hiding information.
Steganography's intent is to hide the existence
of the message, while cryptography scrambles a
message so that it cannot be understood.

The advantage of steganography is that
it can be used to secretly transmit messages
without the fact of the transmission being
discovered. However, steganography has a
number of disadvantages as well. Unlike
encryption, it generally requires a lot of
overhead to hide a relatively few bits of
information. Also, once a steganographic
system is discovered, it is rendered useless. This
problem, too, can be overcome if the hidden
data depends on some sort of key for its
insertion and extraction. In fact, it is common
practice to encrypt the hidden message before
placing it in the cover message. Most
steganographers like the extra layer of
protection that encryption provides: if the
hidden message is found, it is at least as
protected as possible.

2. History of Steganography
Our earliest records of steganography come from
the Greek historian Herodotus. When the Greek
tyrant Histiaeus was held as a prisoner by king
Darius in Susa during the 5th century BCE, he
had to send a secret message to his son-in-law
Aristagoras in Miletus. Histiaeus shaved the
head of a slave and tattooed a message on his
scalp. When the slave's hair had grown long
enough he was dispatched to Miletus.
Invisible inks have always been a popular
method of steganography. Ancient Romans used
to write between lines using invisible inks based
on readily-available substances such as fruit
juices, urine and milk. When heated, the
invisible inks would darken, and become
legible. Invisible inks were used as recently as
World War II. An early researcher in
steganography and cryptography was Johannes
Trithemius (1462-1526), a German monk. His
first work on steganography, Steganographia,
described systems of magic and prophecy, but
also contained a complex system of
cryptography.

3. Aims And Objectives:

The main aim is implementing a system that
can assure security to the data transmitted,
resisting to the attacks from the hackers on the
network.

The objectives include:
1. Secure transmission of data on the network.
2. Data to be made invisible using
steganography.
3. To detect damage, malicious scripts or
viruses that may have altered the data on the
network.

4. Main Contents:

The main modules that are included are

4.1 Security and Login Module.
The module identifies the users who can use
this application. It prompts for a login and
password before enabling access to the other
modules.

4.2 CRC generation and verification Module.
In the generation module CRC is
calculated on the basis of the data being
transmitted. The CRC is then stored back into
the same file. At the receiving end the CRC is
calculated for the data & then compared to the
CRC that was generated at the transmitting end.
The module indicates if the received file is
corrupted, damaged or altered in the process of
transmission.
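A minimal sketch of this generate-then-verify idea is shown below. The paper uses the CRCSET utility; the sketch instead uses CRC-32 from Python's zlib module purely to illustrate the round trip, and the 4-byte footer layout is an assumption.

```python
# Sketch of CRC generation and verification (the project uses the CRCSET
# utility; zlib.crc32 here only illustrates the same generate-then-verify idea).
import zlib

def attach_crc(payload: bytes) -> bytes:
    """Append a 4-byte CRC-32 of the payload to the payload itself."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def verify_and_strip(message: bytes) -> bytes:
    """Raise if the received data was corrupted or altered; otherwise return the payload."""
    payload, received = message[:-4], int.from_bytes(message[-4:], "big")
    if zlib.crc32(payload) != received:
        raise ValueError("CRC mismatch: file corrupted, damaged or altered")
    return payload

sent = attach_crc(b"secret report contents")
print(verify_and_strip(sent) == b"secret report contents")   # True
```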

4.3 Compression and Decompression
Module.
The module compresses the data before it is
transmitted in order to reduce the file transfer
time. The compression is performed after the CRC is
generated for the data that is to be
transmitted. At the receiving end the file is
decompressed to regenerate the file that contains the
CRC and data. A new file is generated on
decompression.

4.4 Embedded and De-embedded Module.
In this module an image file is
identified along with the data to be embedded. The
compressed data and CRC file is then appended
to the end of the image file without affecting the
image data or the header information of that
image. Information about the size of the file, the
file name, etc. is also written and stored
in the image file. The software stores the data in
a format that is not recognizable even when
viewed from a text or document editor. At the
receiving end the module identifies the location
of the data, picks up the size of the file and the filename,
reads the data to the length of the data file,
and regenerates the text file that was embedded
within the image file.
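The following sketch illustrates this append-style embedding (and the Append Binary idea used in the proposed system described later): the compressed payload plus its CRC is written after the image bytes together with a length footer so the receiver can locate and verify it. The footer layout and file handling are assumptions for illustration, not the project's exact format.

```python
# Sketch of append-style embedding: compressed data (plus CRC) is written after
# the image bytes with an 8-byte length footer so the receiver can locate it.
# Image viewers ignore trailing bytes, so the picture still displays normally.
import zlib

def embed(image_path: str, data: bytes, out_path: str) -> None:
    """Write image bytes, then compressed data + CRC, then the payload length."""
    payload = zlib.compress(data)
    payload += zlib.crc32(payload).to_bytes(4, "big")
    with open(image_path, "rb") as f:
        image = f.read()
    with open(out_path, "wb") as f:
        f.write(image + payload + len(payload).to_bytes(8, "big"))

def extract(stego_path: str) -> bytes:
    """Locate the hidden payload via the length footer, verify its CRC, decompress it."""
    with open(stego_path, "rb") as f:
        blob = f.read()
    length = int.from_bytes(blob[-8:], "big")
    payload = blob[-8 - length:-8]
    body, crc = payload[:-4], int.from_bytes(payload[-4:], "big")
    if zlib.crc32(body) != crc:
        raise ValueError("hidden data corrupted or tampered with")
    return zlib.decompress(body)

# Hypothetical usage (file names are assumptions):
# embed("photo.bmp", b"secret text", "photo_with_data.bmp")
# print(extract("photo_with_data.bmp"))
```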

4.5 User interface and Manual Module.
A user-friendly environment is provided that allows
keyboard shortcuts and mouse interaction, with
frame-oriented windows that allow easy
navigation, file selection and guided control
flow. Online help that explains the procedures
for operating the system is provided. The
application is fully menu driven and is supported
by dialogues.


5. Analyzing Drawbacks Of Earlier Systems:

The earlier existing system had the
following drawbacks.
1. There is a propagation delay in the system.
2. There is a limitation on the amount of data
that we can send; the system fails unless the
image size is at least four times greater than the
data size.
3. We can send only normal images in (.jpeg,
.bmp, .gif) formats.
4. Limitations exist on the data (data < [image / 4]),
and no animations are allowed in the images.
6. Proposed System Implementation:

In the proposed system, we can
add data to the image and then send it to the
receiver. This is done by appending the binary
form of the data to the image. The Append Binary
algorithm is used to accomplish this purpose. There are no
alterations in the image clarity, and no one knows that
there is secret information appended to the image
being transmitted. When the image is opened it
previews only the image content; the data
associated with it is not viewed. This is the importance
of the Append Binary algorithm. Only the intended
receiver can know the starting position where
the data exactly begins in the image. The data is
de-embedded and is converted to text format from
the binary format. Thus the receiver gets the
original text data sent to him by the authorized
sender.

7. Advantages Of Proposed System:

1. There is no limitation on the amount of data
that we can send along with the image.
2. The system can carry as much data as needed;
the data may be smaller than, equal to, or even
larger than the image.
3. To reduce the size of the data for faster
transmission, we can compress the text data and
append it to the image in binary format.
4. Any type of image is allowed, even
animated images.

8. Process Carried Out:

In the technique implementation, the identified
data undergoes the following sequence :-

8.1 Identify a text file.
8.2 Identify any target container, such as image,
text, audio, video etc file
8.3 Generate the CRC for the data being
transmitted using CRCSET.
8.4 CRCSET is an anti-virus utility. Before
transmitting the data, the utility calculates the
CRC value and attaches it to the data. At the
receiving end, the CRC is recalculated
and, based on that calculated value, the file is
accepted or rejected.
8.5 Encrypt the data using a local algorithm
8.6 Associate a password with the encrypted
data to provide additional security.
8.7 Implement steganography using a local
algorithm and embed the data into the target
container.
8.8 Perform the reverse process at the
destination to retrieve the actual data if not
hacked or tampered.


9. Event Flow Diagram:

[Figure: event flow of the system - on the transmit side the text data is CRC-stamped, compressed and embedded into the image file; the image is transmitted; on the receive side the data is retrieved from the image, decompressed and CRC-verified to recover the text.]

10. Result:

The system thus developed is secure and
can resist hackers on the network. It conveys
the secret information hidden in the image
only to the authorized recipient.


11. Acknowledgements:

I am grateful to my internal guide
Mr.Sunil Vijay kumar and external guide
Mr.Prasantha Padhy working for CMTES,
Hyderabad; for giving me guidelines to proceed
in my project work. I thank my project guide at
CMTES, Mr. Anil K. Menon, who has helped
me to reach a successful solution throughout.
I also extend my thanks to the Head
of the Department of Computer Science at
RGMCET, Nandyal, who stood by the side of the
students and encouraged them to participate in
the conferences being held.


12. Discussion And Conclusions:

It is a remarkable feature of
steganography that it hides the very fact that
communication is happening. The recipient of the message will
get the exact message sent by the sender. The
user is assured that the information sent by
him reaches the desired recipient only. If at any
point the data is hacked, the CRC check will
point out that the information has been modified. Thus
a secure and authenticated system is developed.

13. Keywords:

Cryptography, Steganography,
Steganalysis, Stegosystems, Authentication,
Security.




14. References

[1] William Stallings, Network Security and
Cryptography, 4th edition.
www.williamstallings.com/
[2] Steganos, explained at
http://www.steganos.com/
[3] http://www.cryptography.com/
[4] Atul Kahate, Network Security and
Cryptography, TMH Publications.
Authors:
T. Devendra Reddy and B. Naveen Singh
III CSE II Sem., Sri Kalaheeswara Institute of Tech.,
Sri Kalahasthi.
E-mail:devamails2020@yahoo.com

Data Mining and Compression
Abstract:
The increasing processing power and sophistication of analytical tools and techniques
have resulted in the development of what is known as data mining. Today data mining
is often defined independently of statistics, though mining data for patterns and
predictions is really what statistics is all about. Some of the techniques that are classified
under data mining, such as CHAID and CART, really grew out of the statistical profession
more than anywhere else, and the basic ideas of probability, independence, causality
and overfitting are the foundation on which data mining is built.
In modern organizations, the user of data is often completely removed from the data
source. Many people only need read access to data, but still need very rapid access to a
larger volume of data than can conveniently be downloaded to the desktop. Often such
data comes from multiple databases. Because many of the analyses performed are
recurrent and predictable, software vendors and systems support staff have begun to
design systems to support these functions. At present there is a great need to provide
decision makers, from middle management upward, with information at the correct level
of detail to support decision making. Compression techniques are applied to the raw data,
which is processed to have a minimal length. Data warehousing, online analytical processing
(OLAP) and data mining provide this functionality. Data mining and compression play
a key role in the extraction and portability of data. These two aspects successfully
cope with the rising demands of present-day business.
Introduction
Data mining, the extraction of
hidden predictive information from large
databases, is a powerful new technology
with great potential to help companies
focus on the most important information
in their data warehouses. Data mining
tools predict future trends and behaviors,
allowing businesses to make proactive,
knowledge-driven decisions. Data mining
tools can answer business questions that
traditionally were too time consuming to
resolve.
Most companies already collect
and refine massive quantities of data.
Data mining techniques can be
implemented rapidly on existing
software and hardware platforms to
enhance the value of existing
information resources, and can be
integrated with new products and
systems as they are brought on-line.
Data mining tools can analyze massive
databases to deliver answers to questions
such as, "Which clients are most likely
to respond to my next promotional
mailing, and why?". Illustrate its
relevance to todays business
environment as well as a basic
description of how data warehouse
architectures can evolve to deliver the
value of data mining to end users.
The Foundations of Data Mining
Data mining techniques are the result
of a long process of research and product
development. This evolution began
when business data was first stored on
computers, continued with
improvements in data access, and more
recently, generated technologies that
allow users to navigate through their
data in real time. Data mining takes this
evolutionary process beyond
retrospective data access and navigation
to prospective and proactive information
delivery. Data mining is ready for
application in the business community
because it is supported by three
technologies that are now sufficiently
mature:
Massive data collection
Powerful multiprocessor
computers
Data mining algorithms
Commercial databases are growing at
unprecedented rates. A recent META
Group survey of data warehouse projects
found that 19% of respondents are
beyond the 50 gigabyte level, while 59%
expect to be there by the second quarter of
1996. In some industries, such as retail,
these numbers can be much larger. The
accompanying need for improved
computational engines can now be met
in a cost-effective manner with parallel
multiprocessor computer technology.
Data mining algorithms embody
techniques that have existed for at least
10 years, but have only recently been
implemented as mature, reliable,
understandable tools that consistently
outperform older statistical methods.
Evolutionary Step | Business Question | Enabling Technologies | Product Providers | Characteristics
Data Collection (1960s) | "What was my total revenue in the last five years?" | Computers, tapes, disks | IBM, CDC | Retrospective, static data delivery
Data Access (1980s) | "What were unit sales in New England last March?" | Relational databases (RDBMS), Structured Query Language (SQL), ODBC | Oracle, Sybase, Informix, IBM, Microsoft | Retrospective, dynamic data delivery at record level
Data Warehousing & Decision Support (1990s) | "What were unit sales in New England last March? Drill down to Boston." | On-line analytic processing (OLAP), multidimensional databases, data warehouses | Pilot, Comshare, Arbor, Cognos, Microstrategy | Retrospective, dynamic data delivery at multiple levels
Data Mining (Emerging Today) | "What's likely to happen to Boston unit sales next month? Why?" | Advanced algorithms, multiprocessor computers, massive databases | Pilot, Lockheed, IBM, SGI, numerous startups (nascent industry) | Prospective, proactive information delivery

Table 1. Steps in the Evolution of Data Mining
In the evolution from business data to
business information, each new step has
built upon the previous one. For
example, dynamic data access is critical
for drill-through in data navigation
applications, and the ability to store
large databases is critical to data mining.
From the user's point of view, the four
steps listed in Table 1 were
revolutionary because they allowed new
business questions to be answered
accurately and quickly.
The Scope of Data Mining
Data mining derives its name from the
similarities between searching for
valuable business information in a large
database (for example, finding linked
products in gigabytes of store scanner
data) and mining a mountain for a vein
of valuable ore. Both processes require
either sifting through an immense
amount of material, or intelligently
probing it to find exactly where the
value resides. Given databases of
sufficient size and quality, data mining
technology can generate new business
opportunities by providing these
capabilities:
Automated prediction of trends
and behaviors. Data mining
automates the process of finding
predictive information in large
databases. Questions that
traditionally required extensive
hands-on analysis can now be
answered directly from the data
quickly.
Automated discovery of
previously unknown patterns.
Data mining tools sweep through
databases and identify previously
hidden patterns in one step. An
example of pattern discovery is
the analysis of retail sales data to
identify seemingly unrelated
products that are often purchased
together.
Data mining techniques can yield the
benefits of automation on existing
software and hardware platforms, and
can be implemented on new systems as
existing platforms are upgraded and new
products developed. When data mining
tools are implemented on high
performance parallel processing systems,
they can analyze massive databases in
minutes. Faster processing means that
users can automatically experiment with
more models to understand complex
data. High speed makes it practical for
users to analyze huge quantities of data.
Larger databases, in turn, yield improved
predictions.
Databases can be larger in both depth
and breadth:
More columns. Analysts must
often limit the number of
variables they examine when
doing hands-on analysis due to
time constraints. Yet variables
that are discarded because they
seem unimportant may carry
information about unknown
patterns. High performance data
mining allows users to explore
the full depth of a database,
without preselecting a subset of
variables.
More rows. Larger samples
yield lower estimation errors and
variance, and allow users to
make inferences about small but
important segments of a
population.
A recent Gartner Group Advanced
Technology Research Note listed data
mining and artificial intelligence at the
top of the five key technology areas that
"will clearly have a major impact across
a wide range of industries within the
next 3 to 5 years."2 Gartner also listed
parallel architectures and data mining as
two of the top 10 new technologies in
which companies will invest during the
next 5 years. According to a recent
Gartner HPC Research Note, "With the
rapid advance in data capture,
transmission and storage, large-systems
users will increasingly need to
implement new and innovative ways to
mine the after-market value of their vast
stores of detail data, employing MPP
[massively parallel processing] systems
to create new sources of business
advantage (0.9 probability)."
The most commonly used techniques in
data mining are:
Artificial neural networks:
Non-linear predictive models that
learn through training and
resemble biological neural
networks in structure.
Decision trees: Tree-shaped
structures that represent sets of
decisions. These decisions
generate rules for the
classification of a dataset.
Specific decision tree methods
include Classification and
Regression Trees (CART) and
Chi-Square Automatic
Interaction Detection (CHAID).
Genetic algorithms:
Optimization techniques that use
processes such as genetic
combination, mutation, and
natural selection in a design
based on the concepts of
evolution.
Nearest neighbor method: A
technique that classifies each
record in a dataset based on a
combination of the classes of the
k record(s) most similar to it in a
historical dataset (where k >= 1).
Sometimes called the k-nearest
neighbor technique; see the sketch after this list.
Rule induction: The extraction
of useful if-then rules from data
based on statistical significance.
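As referenced in the nearest-neighbor item above, the following is a minimal k-nearest-neighbor sketch over a made-up historical dataset; in practice the features would be normalized so that no single column dominates the distance.

```python
# Sketch: k-nearest-neighbor classification of a new record against a small
# historical dataset (the feature values and class labels below are made up).
from collections import Counter
from math import dist

history = [                       # (features = (age, income), class label)
    ((35, 52_000), "responder"),
    ((42, 61_000), "responder"),
    ((23, 28_000), "non-responder"),
    ((29, 31_000), "non-responder"),
    ((51, 75_000), "responder"),
]

def knn_classify(record, k: int = 3) -> str:
    """Vote among the k historical records nearest to the new record."""
    nearest = sorted(history, key=lambda item: dist(record, item[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Note: with raw values the income column dominates the Euclidean distance;
# a real application would rescale the features first.
print(knn_classify((40, 58_000)))   # likely "responder"
```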
Many of these technologies have been in
use for more than a decade in specialized
analysis tools that work with relatively
small volumes of data. These capabilities
are now evolving to integrate directly
with industry-standard data warehouse
and OLAP platforms. The appendix to
this white paper provides a glossary of
data mining terms.
How Data Mining Works
How exactly is data mining able to tell
you important things that you didn't
know or what is going to happen next?
The technique that is used to perform
these feats in data mining is called
modeling. Modeling is simply the act of
building a model in one situation where
you know the answer and then applying
it to another situation that you don't. For
instance, if you were looking for a
sunken Spanish galleon on the high seas
the first thing you might do is to research
the times when Spanish treasure had
been found by others in the past. You
might note that these ships often tend to
be found off the coast of Bermuda and
that there are certain characteristics to
the ocean currents, and certain routes
that were likely taken by the ships'
captains in that era. You note these
similarities and build a model that
includes the characteristics that are
common to the locations of these sunken
treasures. With these models in hand you
sail off looking for treasure where your
model indicates it most likely might be
given a similar situation in the past.
Hopefully, if you've got a good model,
you find your treasure.
This act of model building is thus
something that people have been doing
for a long time, certainly before the
advent of computers or data mining
technology. What happens on
computers, however, is not much
different than the way people build
models. Computers are loaded up with
lots of information about a variety of
situations where an answer is known and
then the data mining software on the
computer must run through that data and
distill the characteristics of the data that
should go into the model. Once the
model is built it can then be used in
similar situations where you don't know
the answer. For example, say that you
are the director of marketing for a
telecommunications company and you'd
like to acquire some new long distance
phone customers. You could just
randomly go out and mail coupons to the
general population - just as you could
randomly sail the seas looking for
sunken treasure. In neither case would
you achieve the results you desired and
of course you have the opportunity to do
much better than random - you could use
your business experience stored in your
database to build a model.
As the marketing director you have
access to a lot of information about all of
your customers: their age, sex, credit
history and long distance calling usage.
The good news is that you also have a
lot of information about your
prospective customers: their age, sex,
credit history etc. Your problem is that
you don't know the long distance calling
usage of these prospects (since they are
most likely now customers of your
competition). You'd like to concentrate
on those prospects who have large
amounts of long distance usage. You can
accomplish this by building a model.
Table 2 illustrates the data used for
building a model for new customer
prospecting in a data warehouse.

Customers / Prospects
General information
(e.g. demographic data):
Known / Known
Proprietary information
(e.g. customer transactions):
Known / Target
Table 2 - Data Mining for Prospecting

The goal in prospecting is to make some
calculated guesses about the information
in the lower right hand quadrant based
on the model that we build going from
Customer General Information to
Customer Proprietary Information. For
instance, a simple model for a
telecommunications company might be:
98% of my customers who make more
than $60,000/year spend more than
$80/month on long distance
This model could then be applied to the
prospect data to try to tell something
about the proprietary information that
this telecommunications company does
not currently have access to. With this
model in hand new customers can be
selectively targeted.
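The following minimal Python sketch shows how such a rule-of-thumb model might be applied to score prospects; the income threshold, field names and records are assumptions for illustration only.

```python
# Hypothetical prospect records: demographics known, usage unknown.
prospects = [
    {"name": "A", "income": 72000},
    {"name": "B", "income": 41000},
]

def likely_heavy_user(prospect, income_threshold=60000):
    """Apply the example rule: customers earning above the threshold
    tend to spend more than $80/month on long distance."""
    return prospect["income"] > income_threshold

targets = [p["name"] for p in prospects if likely_heavy_user(p)]
print(targets)  # -> ['A']
```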
Test marketing is an excellent source of
data for this kind of modeling. Mining
the results of a test market representing a
broad but relatively small sample of
prospects can provide a foundation for
identifying good prospects in the overall
market. Table 3 shows another common
scenario for building models: predict
what is going to happen in the future.

Yesterday / Today / Tomorrow
Static information and
current plans:
Known / Known / Known
Dynamic information:
Known / Known / Target
Table 3 - Data Mining for Predictions
If someone told you that he had a model
that could predict customer usage how
would you know if he really had a good
model? The first thing you might try
would be to ask him to apply his model
to your customer base - where you
already knew the answer. With data
mining, the best way to accomplish this
is by setting aside some of your data in a
vault to isolate it from the mining
process. Once the mining is complete,
the results can be tested against the data
held in the vault to confirm the model's
validity. If the model works, its
observations should hold for the vaulted
data.
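A minimal Python sketch of this "vault" idea: part of the data is set aside before mining and later used to check the model. The helper names and the callable `model` are placeholders introduced here for illustration.

```python
import random

def split_holdout(records, holdout_fraction=0.2, seed=42):
    """Set aside a fraction of the records in a 'vault' before mining."""
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - holdout_fraction))
    return shuffled[:cut], shuffled[cut:]      # (mining set, vaulted set)

def vault_accuracy(model, vaulted):
    """Test the mined model against vaulted records it never saw."""
    hits = sum(1 for features, label in vaulted if model(features) == label)
    return hits / len(vaulted)

# Hypothetical usage: `model` is any callable produced by the mining step.
# mining_set, vault = split_holdout(all_records)
# print(vault_accuracy(model, vault))
```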
An Architecture for Data Mining
To best apply these advanced
techniques, they must be fully integrated
with a data warehouse as well as flexible
interactive business analysis tools. Many
data mining tools currently operate
outside of the warehouse, requiring extra
steps for extracting, importing, and
analyzing the data. Furthermore, when
new insights require operational
implementation, integration with the
warehouse simplifies the application of
results from data mining. The resulting
analytic data warehouse can be applied
to improve business processes
throughout the organization, in areas
such as promotional campaign
management, fraud detection, new
product rollout, and so on. Figure 1
illustrates an architecture for advanced
analysis in a large data warehouse.


Figure 1 - Integrated Data Mining
Architecture
The ideal starting point is a data
warehouse containing a combination of
internal data tracking all customer
contact coupled with external market
data about competitor activity.
Background information on potential
customers also provides an excellent
basis for prospecting. This warehouse
can be implemented in a variety of
relational database systems: Sybase,
Oracle, Redbrick, and so on, and should
be optimized for flexible and fast data
access.
An OLAP (On-Line Analytical
Processing) server enables a more
sophisticated end-user business model to
be applied when navigating the data
warehouse. The multidimensional
structures allow users to analyze the
data as they view their business,
summarizing by product line, region,
and other key perspectives. The Data Mining Server must
be integrated with the data warehouse
and the OLAP server to embed ROI-
focused business analysis directly into
this infrastructure. An advanced,
process-centric metadata template
defines the data mining objectives for
specific business issues like campaign
management, prospecting, and
promotion optimization. Integration with
the data warehouse enables operational
decisions to be directly implemented and
tracked. As the warehouse grows with
new decisions and results, the
organization can continually mine the
best practices and apply them to future
decisions.
This design represents a fundamental
shift from conventional decision support
systems. Rather than simply delivering
data to the end user through query and
reporting software, the Advanced
Analysis Server applies users' business
models directly to the warehouse and
returns a proactive analysis of the most
relevant information. These results
enhance the metadata in the OLAP
Server by providing a dynamic metadata
layer that represents a distilled view of
the data. Reporting, visualization, and
other analysis tools can then be applied
to plan future actions and confirm the
impact of those plans.
Applications
A wide range of companies have
deployed successful applications of data
mining. While early adopters of this
technology have tended to be in
information-intensive industries such as
financial services and direct mail
marketing, the technology is applicable
to any company looking to leverage a
large data warehouse to better manage
their customer relationships. Two critical
factors for success with data mining are:
a large, well-integrated data warehouse
and a well-defined understanding of the
business process within which data
mining is to be applied (such as
customer prospecting, retention,
campaign management, and so on).
Some successful application areas
include:
A pharmaceutical company can
analyze its recent sales force
activity and their results to
improve targeting of high-value
physicians and determine which
marketing activities will have the
greatest impact in the next few
months. The data needs to
include competitor market
activity as well as information
about the local health care
systems. The results can be
distributed to the sales force via a
wide-area network that enables
the representatives to review the
recommendations from the
perspective of the key attributes
in the decision process. The
ongoing, dynamic analysis of the
data warehouse allows best
practices from throughout the
organization to be applied in
specific sales situations.
A credit card company can
leverage its vast warehouse of
customer transaction data to
identify customers most likely to
be interested in a new credit
product. Using a small test
mailing, the attributes of
customers with an affinity for the
product can be identified. Recent
projects have indicated more than
a 20-fold decrease in costs for
targeted mailing campaigns over
conventional approaches.
A diversified transportation
company with a large direct sales
force can apply data mining to
identify the best prospects for its
services. Using data mining to
analyze its own customer
experience, this company can
build a unique segmentation
identifying the attributes of high-
value prospects. Applying this
segmentation to a general
business database such as those
provided by Dun & Bradstreet
can yield a prioritized list of
prospects by region.
A large consumer package goods
company can apply data mining
to improve its sales process to
retailers. Data from consumer
panels, shipments, and
competitor activity can be
applied to understand the reasons
for brand and store switching.
Through this analysis, the
manufacturer can select
promotional strategies that best
reach their target customer
segments.
Each of these examples has clear
common ground. They leverage the
knowledge about customers implicit in a
data warehouse to reduce costs and
improve the value of customer
relationships. These organizations can
now focus their efforts on the most
important (profitable) customers and
prospects, and design targeted marketing
strategies to best reach them.
Conclusion
Comprehensive data warehouses that
integrate operational data with customer,
supplier, and market information have
resulted in an explosion of information.
Competition requires timely and
sophisticated analysis on an integrated
view of the data. However, there is a
growing gap between more powerful
storage and retrieval systems and
users' ability to effectively analyze and
act on the information they contain. Both
relational and OLAP technologies have
tremendous capabilities for navigating
massive data warehouses, but brute force
navigation of data is not enough. A new
technological leap is needed to structure
and prioritize information for specific
end-user problems. The data mining
tools can make this leap. Quantifiable
business benefits have been proven
through the integration of data mining
with current information systems, and
new products are on the horizon that will
bring this integration to an even wider
audience of users.
References
1. META Group Application Development Strategies, "Data Mining for
Data Warehouses: Uncovering Hidden Patterns," 7/13/95.

2. Gartner Group Advanced Technologies and Applications Research Note, 2/1/95.

3. Gartner Group High Performance Computing Research Note, 1/31/95.





DIGITAL IMAGE PROCESSING
Authors: D. HemaKiran (1), D. Penchala Narasimhulu (2)
Dept of Computer Science and Engineering,

Chadalawada Ramanamma Engineering College,
Tirupathi 517501. Andhra Pradesh
E-mails: 1: hema4ideas@yahoo.com 2. vivek.514@gmail.com

ABSTRACT:

Digital images are central to
present-day, state-of-the-art
applications. Data and pictures are
the two key elements of data
communication, and storing, accessing
and processing digital images requires
highly sophisticated computers.
A digital image is defined as a
two-dimensional function of two
spatial coordinates (x, y) whose
amplitude at each coordinate pair gives
the image intensity. An image may be
defined over either continuous or
discrete coordinates. In both DIP and
CG, the pixel is the basic element
which forms the image.





Applications of DIP:
X-Rays in medical diagnosis.
Nuclear medicine: A radioactive
isotope that emits gamma rays as it
decays is injected into the patient;
imaging this emission detects
infections or tumors in the body.
Magnetic resonance imaging: This
method passes short pulses of radio
waves into the patient's body and
produces a 2-D picture of a section
of the patient.
Marine acquisition: Seismic
imaging algorithms are tested.
Radar applications: Through
imaging radar, data can be
collected from any region at any
time.


Image processing is used for visual
inspection of goods, for example to
detect missing parts in manufactured
products.

The basic tool in mathematical
morphology is a set. Objects of an
image are represented by sets.
Morphology operations are used on
binary images. Morphology can be
used in many diverse areas, as follows:
Weather observation and prediction:
The image prepared by the satellite
is scanned and observed for certain
parts.
Mechanical Scanners: Mechanical
scanners will scan large still images
and DIP techniques are used to
process these images.
Computer Axial Tomography
(CAT)
Remote Sensing: LANDSAT
satellites obtain and transmit
images.

Morphology:
Morphology is the combination of
structure and form. Generally the word
morphology is used in the field of
biology of animals and plants. In the
context of mathematical morphology,
image components such as boundaries
and skeletons can be extracted by
exploiting region shape.

1. To count the number of regions: To
check the number of dark cells or thick
cells in the image.
2. For edge smoothing: with the help
of segmentation techniques, noisy
edges in the image can be smoothed.
3. To estimate sizes of regions.

Digitized images and their
properties
Metric and topological properties of
digital images
Some intuitively clear
properties of continuous images
have no straightforward analogy
in the domain of digital images
Metric properties of digital images
Distance is an important
example.
The distance between two
pixels in a digital image is a
significant quantitative measure.
The distance between points
with co-ordinates (i,j) and (h,k)
may be defined in several
different ways;

the Euclidean distance is
defined by Eq. 2.42:
D_E((i,j),(h,k)) = sqrt((i - h)^2 + (j - k)^2)   --------- (2.42)

the city block distance by Eq. 2.43:
D_4((i,j),(h,k)) = |i - h| + |j - k|   --------- (2.43)

the chessboard distance by Eq. 2.44:
D_8((i,j),(h,k)) = max{|i - h|, |j - k|}   --------- (2.44)

A hexagonal grid solves many
problems of the square grid: any
point in the hexagonal raster has the
same distance to all six of its
neighbors.
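For illustration, a minimal Python sketch of the three pixel distances defined by Eqs. 2.42-2.44 (the function names are ours):

```python
import math

def euclidean(p, q):        # Eq. 2.42 (D_E)
    (i, j), (h, k) = p, q
    return math.sqrt((i - h) ** 2 + (j - k) ** 2)

def city_block(p, q):       # Eq. 2.43 (D_4)
    (i, j), (h, k) = p, q
    return abs(i - h) + abs(j - k)

def chessboard(p, q):       # Eq. 2.44 (D_8)
    (i, j), (h, k) = p, q
    return max(abs(i - h), abs(j - k))

print(euclidean((0, 0), (3, 4)), city_block((0, 0), (3, 4)), chessboard((0, 0), (3, 4)))
# -> 5.0 7 4
```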

The border of a region R is the set of
pixels within the region that have one
or more neighbors outside R; inner
and outer borders exist.

Pixel adjacency is another important
concept in digital images.

4-neighborhood
8-neighborhood

It will become necessary to consider
important sets consisting of several
adjacent pixels -- regions.
Region is a contiguous set.

Contiguity paradoxes of the square
grid... Figures 2.7, 2.8

One possible solution to contiguity
paradoxes is to treat objects using 4-
neighborhood and background using 8-
neighborhood (or vice versa).


Edge is a local property of
a pixel and its immediate neighborhood
--it is a vector given by a magnitude
and direction.
The edge direction is perpendicular
to the gradient direction which
points in the direction of image
function growth.
Border and edge: the border is a
global concept related to a region,
while an edge expresses local
properties of an image function.
Crack edges: four crack edges are
attached to each pixel, defined by
its relation to its 4-neighbors.
The direction of the crack
edge is that of increasing brightness,
and is a multiple of 90 degrees, while
its magnitude is the absolute
difference between the brightness of
the relevant pair of pixels.

Topological properties of digital
images

Topological properties of
images are invariant to rubber-
sheet transformations.
Stretching does not change the
contiguity of object parts
and does not change the number
of holes in regions. One such
image property is the Euler-
Poincaré characteristic, defined
as the difference between the
number of regions and the
number of holes in them.
Convex hull is used to describe
topological properties of
objects.

The convex hull is the smallest
region which contains the
object, such that any two points
of the region can be connected
by a straight line, all points of
which belong to the region.
Fig. 2.10







Histograms
Brightness histogram provides
the frequency of the brightness
value z in the image.

Algorithm
1. Assign zero values to all
elements of the array h.
2. For all pixels (x,y) of the
image f, increment
h(f(x,y)) by one.
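A minimal Python sketch of this brightness-histogram algorithm, assuming the image is a 2-D list of integer grey levels:

```python
def brightness_histogram(image, levels=256):
    """Count how often each brightness value z occurs in the image."""
    h = [0] * levels                 # step 1: zero all elements of h
    for row in image:
        for z in row:                # step 2: increment h(f(x, y)) by one
            h[z] += 1
    return h

tiny = [[0, 1, 1], [2, 1, 0]]
print(brightness_histogram(tiny, levels=4))  # -> [2, 3, 1, 0]
```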




Histograms may have many
local maxima ... histogram smoothing

1-D Gaussian noise: μ is the mean
and σ is the standard deviation of
the random variable; its probability
density is
p(z) = (1 / (σ sqrt(2π))) exp(-(z - μ)² / (2σ²)).

Images are often degraded by random
noise.

Noise can occur during image
capture, transmission or
processing, and may be dependent
on or independent of image
content.
Noise is usually described by its
probabilistic characteristics.
White noise - constant power
spectrum (its intensity does not
decrease with increasing
frequency); very crude
approximation of image noise
Gaussian noise is a very good
approximation of noise that occurs
in many practical cases
probability density of the random
variable is given by the Gaussian
curve;
During image transmission, noise
which is usually independent of the
image signal occurs. Noise may be
additive, multiplicative, or impulsive
(salt-and-pepper).

Data Structures in Image Analysis

Computer program = data + algorithm
Data organization can considerably affect
the simplicity of selecting and
implementing an algorithm; the choice
of data structures is fundamental when
writing a program.

Levels of image data
representation
Iconic images - consists of
images containing original data;
integer matrices with data about
pixel brightness,
E.g., outputs of pre-processing
operations (e.g., filtration or
edge sharpening) used for
highlighting some aspects of the
image important for further
treatment.
Segmented images - parts of the
image are joined into groups
that probably belong to the
same objects,
It is useful to know something
about the application domain
while doing image
segmentation; it is then easier to
deal with noise and other
problems associated with
erroneous image data,
Geometric representations -
hold knowledge about 2D and
3D shapes, The quantification
of a shape is very difficult but
very important.
Relational models - give the
ability to treat data more
efficiently and at a higher level
of abstraction,
A priori knowledge about the
case being solved is usually
used in processing of this kind,
Example - counting planes
standing at an airport using
satellite images
A priori knowledge
o position of the airport
(e.g., from a map)
o relations to other
objects in the image
(e.g., to roads, lakes,
urban areas)
o geometric models of
planes for which we are
searching, etc.

Traditional image data
structures

matrices,
chains,
graphs,
lists of object properties,
relational databases,
etc.
Used not only for the direct
representation of image
information, but also as a basis
for more complex hierarchical
methods of image representation
Matrices
Most common data structure for
low level image representation
Elements of the matrix are
integer numbers
Image data of this kind are
usually the direct output of the
image-capturing device, e.g., a
scanner

Chains
Chains are used for description
of object borders
Symbols in a chain usually
correspond to the neighborhood
of primitives in the image
Information is then
concentrated in the relations
between semantically important
parts of the image objects that
result from segmentation
Appropriate for higher-level
image understanding (see the
sketch below)
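A minimal Python sketch of a Freeman-style chain code over the 4-neighborhood; the direction encoding (0 = right, 1 = up, 2 = left, 3 = down) is an assumption chosen for illustration.

```python
# Direction symbols for a 4-neighborhood chain code: 0=right, 1=up, 2=left, 3=down
STEP = {0: (0, 1), 1: (-1, 0), 2: (0, -1), 3: (1, 0)}

def decode_chain(start, chain):
    """Recover border pixel coordinates from a start point and a chain of symbols."""
    points = [start]
    r, c = start
    for symbol in chain:
        dr, dc = STEP[symbol]
        r, c = r + dr, c + dc
        points.append((r, c))
    return points

print(decode_chain((0, 0), [0, 0, 3, 2, 2, 1]))
# -> [(0, 0), (0, 1), (0, 2), (1, 2), (1, 1), (1, 0), (0, 0)]  (a closed border)
```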








Hierarchical data structures

Computer vision is by its nature
very computationally expensive,
if for no other reason than the
amount of data to be processed.
One of the solutions is using
parallel computers = brute force
Many computer vision problems
are difficult to divide among
processors, or decompose in any
way.
Hierarchical data structures
make it possible to use
algorithms which decide a
strategy for processing on the
basis of relatively small
quantities of data.
They work at the finest
resolution only with those parts
of the image for which it is
necessary, using knowledge
instead of brute force to ease
and speed up the processing.
Two typical structures -
pyramids and quad trees.

1 "un while 5 4D 2
2 ok)' blue 0 0 -
3 cloud gr ey 20 180 2
4 tr<;e trunk brown 95 75 6
5 tr<;e crown gr<;en 53 63
-
6 hill light en !}7 0
-
7 pond blue 100 1m 6
Pyramids
M-pyramid (matrix pyramid) is a
sequence {M_L, M_{L-1}, ..., M_0}
of images, where M_L has the
same dimensions and elements
as the original image, and
M_{l-1} is derived from M_l by
reducing the resolution by one
half.
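A minimal Python sketch of building such a matrix pyramid; reducing resolution by averaging 2x2 blocks is our assumption for illustration, since subsampling is also common.

```python
def reduce_half(image):
    """Halve resolution by averaging non-overlapping 2x2 blocks."""
    rows, cols = len(image) // 2, len(image[0]) // 2
    return [[(image[2*r][2*c] + image[2*r][2*c+1] +
              image[2*r+1][2*c] + image[2*r+1][2*c+1]) // 4
             for c in range(cols)] for r in range(rows)]

def m_pyramid(image):
    """Return [M_L, M_{L-1}, ..., M_0]; M_L is the original image."""
    levels = [image]
    while len(levels[-1]) > 1 and len(levels[-1][0]) > 1:
        levels.append(reduce_half(levels[-1]))
    return levels

print([(len(m), len(m[0])) for m in m_pyramid([[1] * 8 for _ in range(8)])])
# -> [(8, 8), (4, 4), (2, 2), (1, 1)]
```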

Conclusion:
This paper explains fundamental
features of digital image processing.
Present state-of-the-art technology is
focused largely on large databases and
diverse images. The concepts of the
image, its storage and its restoration
are the key elements of digital image
processing. This paper also explains
the different data structures used to
handle different digital image
processing applications. Almost all
recent medical, space and other
applications use morphological
techniques.




Bibliography:

1) Digital Image Processing - Gonzalez
2) Computer Vision and Image Processing - Adrian Low



DIGITAL VOICE ENHANCEMENT
Under (Digital Signal Processing)






By
P.ANANTHA NAGA LAKSHMI-3/4-ECE
panlorama@gmail.com
9849804447
P.PADMAVATHI-3/4-ECE
palla_padmavathi7@yahoo.co.in
9866139231

G.Narayanamma Institute of Technology and Science,
Shaikpet, Hyderabad.




Contents:
Abstract
What is DSP?
Analog and Digital Signals
Signal Processing
Development of DSP
Digital Signal Processors(DSPs)
Why DSP? Its Applications
Digital Voice Enhancement
Conclusions
References




















ABSTRACT

Digital signal processing (DSP) is the
study of signals in a digital
representation and the processing
methods of these signals. DSP and
analog signal processing are subfields of
signal processing.
A Digital signal processor (DSP) is a
specialized microprocessor designed
specifically for digital signal processing,
generally in real time.
Digital signal processing can be done
on general-purpose microprocessors.
However, a Digital signal processor
contains architectural optimizations to
speed up processing. These
optimizations are also important to lower
costs, heat-emission and power-
consumption.
DSP technology is nowadays
commonplace in such devices as mobile
phones, multimedia computers, video
recorders, CD players, hard disc drive
controllers and modems, and will soon
replace analog circuitry in TV sets and


telephones. An important application of
DSP is in Digital Voice Enhancement.
DVE technology will enhance
communication in passenger vehicles,
but works especially well in vans and
sport utility vehicles where noise levels
are higher and the distance between
passengers is greater. The system is also
well suited for luxury vehicles which are
typically designed to be quiet, but
achieve that level of quiet by using
sound absorbing materials, which also
absorb speech.
What is DSP?
DSP, or Digital Signal Processing, as the
term suggests, is the processing of
signals by digital means. A signal in this
context can mean a number of different
things. Historically the origins of signal
processing are in electrical engineering,
and a signal here means an electrical
signal carried by a wire or telephone
line, or perhaps by a radio wave. More
generally, however, a signal is a stream
of information representing anything
from stock prices to data from a remote-
sensing satellite. The term "digital"
comes from "digit", meaning a number
(you count with your fingers - your
digits), so "digital" literally means
numerical; the French word for digital is
numerique. A digital signal consists of a
stream of numbers, usually (but not
necessarily) in binary form. The
processing of a digital signal is done by
performing numerical calculations.
Analog and digital
signals
In many cases, the signal of interest is
initially in the form of an analog
electrical voltage or current, produced
for example by a microphone or some
other type of transducer. In some
situations, such as the output from the
readout system of a CD (compact disc)
player, the data is already in digital
form. An analog signal must be
converted into digital form before DSP
techniques can be applied. An analog
electrical voltage signal, for example,
can be digitized using an electronic
circuit called an analog-to-digital
converter or ADC. This generates a
digital output as a stream of binary
numbers whose values represent the
electrical voltage input to the device at
each sampling instant.
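As a rough illustration of what an ADC does, the following Python sketch samples a synthetic analog waveform and quantizes each sample into an n-bit code; the sampling rate, duration and bit depth are arbitrary choices for the example, not parameters of any particular converter.

```python
import math

def adc(signal, sample_rate=8000, duration=0.001, bits=8, full_scale=1.0):
    """Sample a continuous-time signal and quantize it to n-bit codes."""
    levels = 2 ** bits
    codes = []
    for n in range(int(sample_rate * duration)):
        t = n / sample_rate                               # sampling instant
        x = max(-full_scale, min(full_scale, signal(t)))  # clip to full scale
        codes.append(int((x + full_scale) / (2 * full_scale) * (levels - 1)))
    return codes                                          # stream of binary numbers

tone = lambda t: math.sin(2 * math.pi * 1000 * t)  # 1 kHz test tone
print(adc(tone)[:8])
```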
Signal processing
Signals commonly need to be processed
in a variety of ways. For example, the
output signal from a transducer may well
be contaminated with unwanted
electrical "noise". The electrodes
attached to a patient's chest when an
ECG is taken measure tiny electrical
voltage changes due to the activity of the
heart and other muscles. The signal is
often strongly affected by "mains
pickup" due to electrical interference
from the mains supply. Processing the
signal using a filter circuit can remove or
at least reduce the unwanted part of the
signal. Increasingly nowadays, the
filtering of signals to improve signal
quality or to extract important
information is done by DSP techniques
rather than by analog electronics.
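To illustrate digital filtering of this kind, here is a minimal Python sketch of a simple moving-average (FIR) low-pass filter; a real mains-pickup remover would use a properly designed notch filter, so this is only a toy example.

```python
def moving_average(samples, window=5):
    """Simple FIR low-pass filter: average each sample with its neighbors."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out

noisy = [0, 10, 0, 10, 0, 10, 0, 10]       # rapidly alternating "noise"
print([round(v, 1) for v in moving_average(noisy)])
```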
Development of DSP
The development of digital signal
processing dates from the 1960's with
the use of mainframe digital computers
for number-crunching applications such
as the Fast Fourier Transform (FFT),
which allows the frequency spectrum of
a signal to be computed rapidly. These
techniques were not widely used at that
time, because suitable computing
equipment was generally available only
in universities and other scientific
research institutions.
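For illustration, a short Python sketch of the FFT in use, computing the frequency spectrum of a sampled two-tone signal with NumPy's standard FFT routines:

```python
import numpy as np

fs = 1000                                   # sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)               # one second of samples
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.abs(np.fft.rfft(x))           # magnitude spectrum
freqs = np.fft.rfftfreq(len(x), d=1 / fs)   # frequency axis in Hz

# The two largest peaks fall at the 50 Hz and 120 Hz tones we put in.
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks.tolist()))               # -> [50.0, 120.0]
```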
Digital Signal Processors
(DSPs)
The introduction of the microprocessor
in the late 1970's and early 1980's made
it possible for DSP techniques to be used
in a much wider range of applications.
However, general-purpose
microprocessors such as the Intel x86
family are not ideally suited to the
numerically-intensive requirements of
DSP, and during the 1980's the
increasing importance of DSP led
several major electronics manufacturers
(such as Texas Instruments, Analog
Devices and Motorola) to develop
Digital Signal Processor chips -
specialized microprocessors with
architectures designed specifically for
the types of operations required in digital
signal processing. (Note that the
acronym DSP can variously mean
Digital Signal Processing, the term used
for a wide range of techniques for
processing signals digitally, or Digital
Signal Processor, a specialized type of
microprocessor chip). Like a general-
purpose microprocessor, a DSP is a
programmable device, with its own
native instruction code. DSP chips are
capable of carrying out millions of
floating point operations per second, and
like their better-known general-purpose
cousins, faster and more powerful
versions are continually being
introduced.



(A Digital Signal Processor)

DSPs can also be embedded within
complex "system-on-chip" devices, often
containing both analog and digital
circuitry.


Why DSP? Its
Applications
The world of science and engineering is
filled with signals: images from remote
space probes, voltages generated by the
heart and brain, radar and sonar echoes,
seismic vibrations, and countless other
applications. Digital Signal Processing is
the science of using computers to
understand these types of data. This
includes a wide variety of goals:
filtering, speech recognition, image
enhancement, data compression, neural
networks, and much more. DSP is one of
the most powerful technologies that will
shape science and engineering in the
twenty-first century. Suppose we attach
an analog-to-digital converter to a
computer, and then use it to acquire a
chunk of real world data. DSP answers
the question: What next?

















DIGITAL VOICE
ENHANCEMENT

Digital Voice Enhancement


(DVE

) technology provides
effortless communication in passenger
vehicles.
How DVE works?
DVE, made possible by DIGISONIX
voice-enhancing technology, allows for
comfortable and safe conversations,
even at highway speeds. The
Digisonix real-time DVE software
uses microphones in the vehicle cabin
to detect voice signals amid the
vehicle noise. The DVE software then
enhances the voice signal and removes
unwanted noise to create a natural
sounding reproduction of the voice
through the vehicle's audio system
loudspeakers. The DVE system can
be combined with voice recognition
software to become the gateway to
voice command in an automobile.


Better Hands-Free Mobile
The DVE technology also provides a
clear voice signal to the hands-free
mobile phone and allows all
passengers to participate in the hands-
free call. This is a great feature for
business meetings or family calls.
Included in the DVE technology is
the Digisonix voice message system,
which allows passengers to record a
message in the vehicle to be played
back later.

Designed for Vehicles
Digisonix DVE technology will
improve communication in passenger
vehicles, but works especially well in
vans and sport utility vehicles where
noise levels are higher and the
distance between passengers is
greater.

Enhanced passenger to
passenger communication
All passengers participate
in hands-free phone calls.
Clear voice for mobile
phone
Voice messages in the
vehicle
Gateway to voice
recognition/commands
systems
Improved passenger
comfort and safety
Designed for all vehicles:
works great in luxury cars,
vans and sport utility
vehicles.



The First Automotive
Implementation of a Digital Voice
Enhancement System
Applied Signal Processing, Inc.
(ASP) has applied its knowledge of
vehicle acoustics, adaptive modeling,
and digital signal processing to
implement a Digital Voice
Enhancement (DVE) system for
Volkswagen AG. The system, which
uses a Texas Instruments
TMS320C55x DSP-based
controller, is offered as an optional
feature to enhance voice
communication in the new 2004
Volkswagen Multivan.
Minivans and sport utility vehicles are
tremendously popular, but these
vehicles have a greater distance
between seated passengers and higher
interior noise levels. Luxury vehicles
typically incorporate significant
acoustic treatments, which absorb road
noise, but also affect voice
communication. These characteristics
often make normal conversation
among the vehicle occupants difficult.
The ASP DVE system improves the
environment for natural conversations
in vehicles by using speech-enhancing
signal processing techniques to
amplify the voice signals, while
minimizing the amplification of other
noises. Safety is a direct benefit of the
DVE system, because the driver does
not have to turn his head or take his
eyes off the road to converse
effectively with the other passengers.
Also, passengers are more comfortable
when they can speak in normal tones
and can hear others without having to
lean forward or change seats.
The DVE system uses microphones,
mounted overhead, to pick up the
occupants' voices. It also deals with
non-speech inputs, such as road, wind,
engine, and accessory-generated
noises. The DVE system applies a
discriminating function to detect voice
activity from the dynamically
changing noise floor. The vehicle's
audio system loudspeakers broadcast
speech from one zone to another
within the vehicle (Figure 1).
The DVE system maintains high
sound quality and speech intelligibility
by properly equalizing the
communication channel and
integrating vehicle-specific
compensation routines for volume and
tone. It also addresses classic feedback
problems by removing or compensating
for reverberations and feedback for
each of the talkers' microphones. If
the optional VW cell phone car kit is
installed, the DVE system becomes a
digital hands-free system in which all
passengers can participate in a phone
call.


Figure 1 - Digital Voice Enhancement
System
Other technical features of the DVE
system include smart-gating and a
variety of signal management tools
that compensate for voice levels,
reception-level requirements, and
ambient noise levels. Dynamic Gain
Control increases the total dynamic
range, effectively equalizing the sound
levels of both loud and soft talkers to
increase speech intelligibility and
listener comfort.
CONCLUSIONS
Digital signal processing (DSP)
is the study of signals in a digital
representation and the processing
methods of these signals
Digital signal processing can be
done on general-purpose
microprocessors.
By utilizing speech microphones,
standard audio loudspeakers with
amplification, and advanced
digital signal processing
techniques, the ASP DVE system
allows for conversation within
vehicles at normal speech levels.
It provides an ideal way to
acquire speech signals, giving
automotive designers a new
gateway for implementing such
features as a digital voice
notepad, voice recognition
systems, and hands-free cellular
telephony.
It can provide increased driver
safety and passenger comfort for
a very reasonable cost.

REFERENCES:
http://www.appliedsignalprocessing.com/applications.htm
http://www.appliedsignalprocessing.com/dve.htm
http://www.wikipedia.com/
http://www.dsptutor.freeuk.com/intro.htm




A BIOMETRIC APPLICATION
FINGERPRINT BASED ELECTION SYSTEM
It's time for engineers to enter politics.

Presented by
G.V.Swaroop Raju
K.Rajesh






Department of Electronics and Communication Engineering
SRI VENKATESWARA UNIVERSITY COLLEGE OF ENGINEERING
Tirupati-517502

Address for Communication:
G.V.Swaroop Raju,
K.Rajesh,
3rd B.Tech (ECE),
Room No-2210,
Viswakarma Block,
S.V.U.C.E.H,
Tirupati-517502.

E-mail Address:gvsr458@yahoo.co.in
rajesh_kolla435@yahoo.com











ABSTRACT

In today's election
system, human intervention and
impersonation change the face of
democracy in a country. One way
this can be eliminated is by developing
an advanced machine that can
guarantee a genuine voting pattern.
Biometrics provides one of the most
secure methods of authentication and
identification. Biometric identification
utilizes physiological and behavioral
characteristics, which are both unique
and measurable, to authenticate a
person's identity.
Some common physical
characteristics that may be used for
identification include fingerprints, palm
prints, hand geometry, retinal patterns
and iris patterns. Behavioral
characteristics include signature, voice
pattern and keystroke dynamics. Among
all these techniques we have chosen
fingerprint recognition because
authentication and voting can be
carried out at the same time.
In this presentation, a
simple, reliable, elegant and efficient
electronic voting machine is proposed
and designed to meet the needs of
present day modern India. Traditional
voting was totally dependent on ballot
paper. Now, we have moved a step
further in evolution with the introduction
of electronic voting machine (EVM) by
the election commission. We would like
to refine the work as we noticed some
drawbacks in the present system. We
propose the system FINGERPRINT
BASED ELECTION PROCESS.
This system will count
only the vote given by a genuine voter
according to his/her biometric and
finally helps in the fast announcement of
the results. The main motive behind this
project is to remove human intervention
and impersonation or circumvention.
Through this system we can eliminate
the discrepancies arising out of the same
person having a double id at the same
center during enrollment. So we can
have fair and fool proof elections at
some extra cost of the finger print based
election process.







INTRODUCTION
To ensure real democracy the
election process should be conducted in
an environment where in a voter can
exercise franchise freely; such a system
should prevent impersonation and
rigging. The present system of casting
votes using ballot papers and electronic
voting machines(EVMs) can not
prevent impersonation and rigging as it
is possible some body else other than the
voter can intervene in the process of
casting votes. To prevent completely
impersonation and rigging it is
absolutely essential to avoid human
intervention at the time of casting a vote
except the voter.
In this paper an attempt has been
made to develop such a system where a
voter is authorized to cast his vote by
using his fingerprint technology.
Fingerprints are fixed by birth and
remain fixed for life. Fingerprints are
permanent. Fingerprint recognition has a
very good balance of all desirable
properties.
Fingerprints are distinctive and are
permanent even if they may temporarily
change slightly due to cuts and bruises
on the skins or weather conditions. Even
the hardware needed for the fingerprint
acquisition is affordable. The software
development tool removes the need to
store the fingerprint image directly;
instead, it is converted into a string and
stored.
The main objectives of the proposed
system Fingerprint Based election
process are:
Maintain secrecy of voting
Complete prevention of
impersonation and rigging
Minimize the expenditure
Make counting reliable and
simple
Minimize the efforts, time, and
cost for conduct of election
FINGERPRINT SCANNERS:
Characteristics to be
taken into account when a fingerprint
scanner has to be chosen for specific
application are:
Interface
Frames/second
Automatic fingerprint detection
Encryption
Supported operating system

How biometrics work :
User enrollment
Image capture
Image processing
Feature extraction
Comparison
Verification
Identification
Verification
1:1 matching
To verify that the person
is who he says he is
Identification
1:n search
To find a person out of
many in a database

Finger print recognition
technique:
Fingerprint-based identification
can be placed into two categories:
minutiae-based matching (analyzing the
local structure) and global pattern
matching (analyzing the global
structure). Currently, computer-aided
fingerprint recognition uses
minutiae-based matching. Minutiae
points are local ridge characteristics that
appear as either a ridge ending or a ridge
bifurcation. The uniqueness of a
fingerprint can be determined by the
pattern of the ridges and the valleys a
fingerprint is made of.
The principle pattern of
fingerprint:


A complete
fingerprint consists of about 100
minutiae points on average. The
measured fingerprint area contains on
average about 30-60 minutiae points,
depending on the finger and on the
sensor area. These minutiae points are
represented as a cloud of dots in a
coordinate system. They are stored,
together with the angle of the tangent
at each local minutiae point, in a
fingerprint code or directly in a
reference template.


Enrollment and Matching:
As an initial step, the fingerprints of the
identified population are enrolled.
Operationally, fingerprints are captured
and compared against the database of
enrolled individuals, or a subset of it (in
the limit, one single enrolled
fingerprint). This process is known as
matching. Enrolment consists in
capturing and storing the reference
version of the fingerprint of an identified
individual.
The fingerprint obtained, or the
set of data extracted from it, is known as
the template (also called enrolled
template). The data obtained, known as a
sample (also called sampled template),is
compared against the set (or a subset) of
templates. If a match is obtained, the
individual presenting the sample is
identified. (If the sample is compared
against a single template, for example to
confirm the identify of the owner a
Smart Card, the process is known as
authentication or validation.) Both
enrolment and matching follow the same
initial series of data processing steps.

Image Acquisition :
Image acquisition consists in obtaining a
bitmap, at an adequate resolution, of all
or part of the fingerprint.

Template or Sample Extraction:
For reasons of security, and due to data
storage limitations, it is not advisable to
store images of entire fingerprints in the
fingerprint recognition system. (A
reference image may be stored in a
secure location during enrolment as a
backup and for access in exceptional
circumstances; but it is not required for
the normal functioning of the system.)
The normal procedure is to extract a
unique template from the image, using
pattern recognition or the principle of
minutiae as described before. During
enrolment, this gives the enrolled
template, and during verification it gives
the sampled template. The procedure is
identical in both cases.

Template/Sample Matching:

The final stage in the matching
procedure is to compare the sampled
template with a set of enrolled templates
(identification), or a single enrolled
template (authentication) if the identity
of a single person is to be
established. It is highly improbable that
the sample is bit-wise identical to the
template. This is due to approximations
in the scanning procedure (at 50 µm
resolution this is far from exact),
misalignment of the images, and errors
or approximations introduced in the
process of extracting the minutiae.
Accordingly,
a matching algorithm is required that
tests various orientations of the image
and the degree of correspondence of the
minutiae, and assigns a numerical score
to the match.
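A minimal, highly simplified Python sketch of such a matching score: it counts sample minutiae that land within a tolerance of some template minutia, maximized over a few trial rotations. Real matchers are far more elaborate; the tolerance, rotation set and coordinates below are illustrative assumptions.

```python
import math

def rotate(point, angle_deg):
    """Rotate an (x, y) minutia location about the origin."""
    a = math.radians(angle_deg)
    x, y = point
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

def match_score(sample, template, tol=4.0, angles=(-10, -5, 0, 5, 10)):
    """Fraction of sample minutiae that pair with a template minutia,
    maximized over a small set of trial orientations."""
    best = 0.0
    for angle in angles:
        rotated = [rotate(p, angle) for p in sample]
        paired = sum(1 for p in rotated
                     if any(math.dist(p, q) <= tol for q in template))
        best = max(best, paired / len(sample))
    return best

template = [(10, 12), (25, 40), (33, 7)]
sample = [(11, 12), (24, 41), (60, 60)]
print(match_score(sample, template))   # -> about 0.67
```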

PROPOSED SYSTEM
DESCRIPTION
Our electronic voting machine
consists mainly of two units: (1) a control
unit and (2) a balloting unit, with a cable
connecting the two. A balloting unit
caters for up to 16 candidates. Four
balloting units can be cascaded to cater
for 64 candidates with the help of only
one control unit. The EVM uses a modern
microcomputer and other large-scale
integration (LSI) chips. It operates on a
special power pack. It is tamper-proof,
error-free, easy to operate and
easily portable. The polling information
once recorded is retained in its memory
even when the power pack is removed.
The EVM with the balloting unit and
control unit is as shown in figure.

BALLOTING UNIT (BU)
The ballot unit is that unit
of the machine which the voter operates
to exercise his franchise. The balloting
unit is a rectangular box with a
fingerprint scanner for each candidate
contesting the election, alongside the
party icon. An interconnecting cable
connects the balloting unit to the control
unit. Each balloting unit holds at most
16 scanners, and at most four balloting
units are cascaded to cater for 64
candidates. Whenever a voter comes to
the polling booth to cast his vote he is
first authorized by the presiding officer
of that booth. If the voter is the right
person then the presiding officer will
switch on the green ready lamp on the
BU by pressing the ballot button present
in the control section of the control unit,
which indicates that the balloting unit is
ready to accept the vote. When the voter
places his finger (thumb) on the scanner,
the following algorithm is carried out:


PROPOSED ALGORITHM
1. Verifies whether voter is a valid user
or not
2. If the voter is valid
3. Checks a data field in the database to
ensure voting only once
4. If the field is not set, vote is
incremented for the corresponding
nominee and data field is set
5. If the field is set, the user is shown
the message "YOUR VOTE
HAS EXPIRED"
6. End if
7. If the voter is not a valid user
8. The user is shown the
message "INVALID USER"
9. End if
10. Control is transferred back to
the presiding officer (a minimal
sketch of these steps follows)
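A minimal Python sketch of the validation steps above, with the database modeled as an in-memory dictionary keyed by the encoded fingerprint string; the key format and messages are assumptions for illustration.

```python
# Hypothetical in-memory "database": fingerprint code -> {"voted": bool}
voters = {
    "FP-CODE-001": {"voted": False},
    "FP-CODE-002": {"voted": True},
}
tally = {}  # candidate -> vote count

def cast_vote(fingerprint_code, candidate):
    record = voters.get(fingerprint_code)
    if record is None:                       # steps 7-8: not a valid user
        return "INVALID USER"
    if record["voted"]:                      # step 5: data field already set
        return "YOUR VOTE HAS EXPIRED"
    tally[candidate] = tally.get(candidate, 0) + 1   # step 4: count the vote
    record["voted"] = True                   # ...and set the data field
    return "VOTE ACCEPTED"

print(cast_vote("FP-CODE-001", "Party A"))   # -> VOTE ACCEPTED
print(cast_vote("FP-CODE-001", "Party A"))   # -> YOUR VOTE HAS EXPIRED
print(cast_vote("FP-CODE-999", "Party B"))   # -> INVALID USER
```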

CONTROL UNIT

Control unit controls the polling
process. It is operated by the presiding
officer.CU has four units:

Display section
Candidate set section
Results section
Control section

The display section comprises 2
LEDs, one two-digit display and one
four-digit display. One LED
indicates the power supply and the
other indicates the busy state of the
EVM. The two-digit display panel
shows labels like "error" and "total",
and the four-digit panel displays
numbers.
Candidate set section provides
the provision for the power pack
and setting the number of
candidates contesting for the
election.
Results section provides the
provision for closing the election
and viewing the results and
clearing the total votes stored.
Control section contains two
press buttons one is total button
which is used to view the total
votes polled till now and the
ballot button to enable the
balloting unit from voter to voter.
FLOW CHART INDICATING THE
PROCESS


WORKING OF THE SYSTEM:





In the implementation of
the fingerprint-based election system, a
random fingerprint is generated for the
voter during registration. Later, at
election time, a random fingerprint is
generated and checked against the data
field in the database to determine
whether the voter has already voted.
The single-queue, single-server model
is used in the design of this system.
During the registration phase, the voters are
registered in the election database with
all the information like name, age, sex,
address etc. along with the fingerprint.
The fingerprint is converted as a 400
byte string by the software development
tool and then stored in the database.

During
election, the database is loaded in the
system. When a voter enters the election
booth and presses a finger against the
fingerprint scanner corresponding to the
icon of the candidate he wishes to vote
for, the system first checks whether the
voter has pressed more than one scanner.
If not, the candidate icon and the voter's
fingerprint are captured, and the
fingerprint is matched against the
database. If no match is found, the voter
is not an authorized voter. Otherwise, his
record is retrieved and the flag indicating
whether he has already voted is checked;
if it is not set, the vote is accepted, the
count for the corresponding candidate is
incremented, and the voter's flag is set.
The system then returns to its initial
state so the voting process can continue
as usual. Finally, the results are
announced at the end of the elections.
The fingerprint-based election system
gives the voter the right to vote without
any discrepancies: the system checks the
fingerprint, which is encoded into a
string unique to each person, so a
person's vote can be cast only by that
person, and impersonation and rigging
are eliminated.

CONCLUSION:
A secure voting system with
features such as accuracy,
invulnerability, convenience and
flexibility has been designed. The system does not
accept impersonation, nor does it accept
more than one vote from each registered
voter. The characteristic of Fingerprint
Based Election System is security.
Security is provided by choosing
fingerprint for authentication.

The advantages of proposed system
are:
Authentication of the voter using
fingerprint
Ease of use of the system by an
illiterate to vote

Though online voting protocols and
many more advanced schemes exist,
the fingerprint-based election system
has been designed because a majority
of India's population lives in villages,
often without knowledge of what a
computer is. So this system can be an
understandable and easy interface for
illiterate voters to use. The record of
the voter in the database consists of
his information and fingerprint in
encoded form, so the memory
requirements are reduced to a large
extent. The advancement of the
internet should pave the way for
voting from anywhere, with the
maintenance of a centralized database,
using the Fingerprint Based Election
System.


Finally, we conclude that technology
gets in the way of accuracy by adding
steps: each additional step means
more potential errors, simply because
no technology is perfect.










Rust In Peace
Hard disks might soon be dead. But what will replace them?
Here's a look at seven technologies that could be called hard
disk killers.





Presented by

V.V Pavan Kumar R. Vijay Krishna Nag
IIIrd year Computer Science Engg IIIrd year Computer Science Engg
Narasaraopeta Engg College Narasaraopeta Engg College
Email:pawan.you@gmail.com Email: vijay.repala@gmail.com
Contact: 9948494981 Contact: 9908706318








Abstract

Storage researchers have their work cut out all nice and neat: all that's needed is
a way to make 1s and 0s. A Yes and a No; a hole and a bump; a groove and a ridge. This
means a lot of freedom to experiment: there are a lot of materials and mechanisms
researchers can tinker around with, with the goal of finding what materials can be in two
states (or be brought into two states), and how to go about making those state changes.
Given a decent lab, you could probably think up a storage mechanism yourself, with the
caveat that what's needed is higher densities!
When you look at what's being researched and actually think about our
current storage devices (and those in the past, such as punch cards), you'll see that
they're primitive. They almost seem stupid. Holes in paper: how straightforward can
you get? The hard disk, too, is based on an extremely simple, almost simplistic
principle: different magnetic orientations on a layer of iron oxide. It turns out we can
now do much better. From a fairly lofty perspective, therefore, the hard disk is dead. This
century belongs to nanotech, the science of the extremely small.
And small is, of course, good news when it comes to storage, because it
means higher densities. In what follows, we discuss several storage techniques in various
stages of research and implementation, and with very different theoretical caps: from the
one CD per square centimetre achieved using a type of rubber stamp, to the beyond-the-
imagination one crore CDs on a square inch that might be
achieved with the help of carbon nanotubes. Some of these technologies you'll see in the
next couple of years; some you'll see sometime in your lifetime. Some will die because
of competing technologies; some will tame the world's information. But they all
demonstrate that innovation is alive, and kicking as hell!

















Holographic Storage
Holographic Storage has been talked about for a long time; indeed, the Holographic
Versatile Disc (HVD) is being eagerly awaited by people around the world. This
technology uses lasers to record data in the volume of the medium, rather than on the
surface. The idea, surprisingly, is not new, but it's only now that the technology seems to
be getting up to speed.

How Does It Work?
A laser beam is split in two, the reference beam and the signal beam (called so because it
carries the data). A device called a spatial light modulator (SLM) translates 1s and 0s into
an optical pattern of light and dark pixels, as in the figure above. These pixels are
arranged in an array (or page) of about a million bits. The signal beam and the
reference beam intersect in the storage medium, which is light-sensitive. And at the point
of intersection, a hologram is formed because of a chemical reaction in the medium, and
gets recorded there. (A hologram is the interference pattern that results when two light
waves meet; for more on interference, visit www.physicsclassroom.com). For reading the
data, only the reference beam is used: it deflects off the hologram and a detector picks up
the data pages in parallel. The 1s and 0s of the original data can be read from the data
pages. By varying the angle or wavelength of the reference beam, or by slightly changing
the placement of the media, lots of holograms can be stored in the volume of the medium.
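As a rough illustration of the "page" idea, this Python sketch packs a bit stream into a square array of light/dark pixels the way an SLM conceptually would, and reads it back; the page size and row-by-row layout are assumptions for illustration only.

```python
def bits_to_page(bits, side=8):
    """Arrange a bit stream into a side x side data page (1 = light, 0 = dark),
    padding with zeros if the stream is short."""
    padded = bits + [0] * (side * side - len(bits))
    return [padded[r * side:(r + 1) * side] for r in range(side)]

def page_to_bits(page):
    """Read the page back row by row (what the detector would recover)."""
    return [bit for row in page for bit in row]

message = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
page = bits_to_page(message, side=4)
assert page_to_bits(page)[:len(message)] == message
for row in page:
    print(row)
```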


What Work Is On?
Optware Corporation of Japan has already come out with its HVD: the HVD holds 1
TB (a terabyte), and is the same size as a regular optical disc. Enterprise versions were
planned for this year: the estimated costs were something like $20,000 (Rs 9 lakh) for
players and $100 (Rs 4,500) for discs. In the meanwhile, InPhase Technologies,
Optware's main competitor, is coming out with products of its own. In partnership with
Maxell, InPhase has already come up with a 300 GB disc, with an 800 GB disc expected
in 2008. And if the 1 TB of the HVD weren't enough, Fuji Photo Film USA has
demonstrated a type of HVD with a claimed capacity of 3.9 TB!

What Promise Does It Hold?
Besides high storage densities, holographic storage means fast access times, because
there are no actuators as in hard disks: laser beams can be focused around much more
rapidly. Holographic storage could kill off the hard disk within the next 10 years, but that's
again speculation; some technologies step out quickly and graciously, but some are
pretty stubborn!

Near-Field Optical Recording Using A Solid Immersion Lens

Think of a CD or DVD; while recording, the lens focuses the laser onto a tiny spot on the
medium. This spot is tinier for DVD than for CD, and is even tinier in Blu-ray, for
example. Near-field optical recording (NFOR) refers to the extremely sharp focusing of a
laser beam, which means an extremely small distance between the lens and the recording
medium. NFOR using a solid immersion lens (SIL) would be the child of Blu-ray and
HD-DVD, and therefore, the grandchild of the DVD.

How Does It Work?
The density of the data that can be achieved on a disc is roughly proportional to the
square of the numerical aperture (NA) of the lens, and inversely proportional to the
wavelength of the laser (refer The Battle Of The Blue, Digit December 2005). The NA of
a lens dictates how sharply it can focus the beam falling on it. The NA of a SIL is made
very high, and the achievable data densities are therefore that much higher.
In NFOR using a SIL (refer figure alongside), the laser is very sharply
focused: it converges at a point within the lens, instead of on the medium. The air gap
between the lens and the medium is just about 25 nm! The photons tunnel through the
air gap onto the surface of the medium (See Jargon Buster).


What's Being Done?
About half a year ago, Philips researchers reported significant progress in developing
NFOR. Up to 150 GB of data on a dual-layer disc would be possible, they said, although
they also said the technology was several years away from commercialization.

What Promise Does It Hold?
NFOR using SILs seems in no way to us a hard disk killer, but it's the natural progression
from blue-laser systems. NFOR is therefore a potential Blu-ray killer. The promise
NFOR holds seems to hinge on how much comes its way in terms of research dollars. Will the
HVD have already taken over by the time NFOR takes off?


MEMS-Based Storage

MEMS (Micro-Electro-Mechanical Systems) are, according to memsnet.org, the
integration of mechanical elements, sensors, actuators, and electronics on a common
silicon substrate through micro fabrication technology. The mechanical elements
referred to here, range in size from a few micrometers to a millimetre. Actuators are just
devices that convert an electronic signal to a physical actionfor example, the device in
the hard disk that is responsible for positioning the head precisely. In fact, we can take
the example of a MEMS-based storage system to better explain what MEMS are.

How Does It Work?
Different MEMS storage systems work differently, but we can describe the concept. Take
a look at the figure alongside. This isn't a working system, but just an example of a
general MEMS storage system. The data "sled" at the top can move in all three
directions; it is spring-mounted over the probe tip array, an array of mechanical tips that
do the reading and writing (we aren't getting into the details here). There's an actuator on
each side of the data sled, and it moves the sled in response to electric currents. Now
when the first bit is written, the sled and the tip array are aligned, and then the sled moves
along one axis while the tips do their work, writing a 1 or a 0. Note that the sled doesn't
rotate; it slides. Also note that everything in this arrangement is mechanical and
electronic.

Whats Being Done?
At CeBit 2005 in Hannover, Germany, IBM showed off a MEMS-based storage device
that it said could achieve densities in the range of 1 TB per square inch. The device is
called the Millipede, because of the thousands of probe tips. The tips are of silicon, and
the data substrate is a material called plexiglass. To write a bit of data, a tip is heated to
400 degrees C. When it pokes the plexiglass, it softens it and makes a dent there. To
read data, the tips are heated to 300 degrees C and pulled across the surface of the
plexiglass. When it falls into a dent, the tip cools down because more surface area comes
in contact with the (cooler) plexiglass. The temperature drop reduces its resistance, which
can be measured. Finally, to erase a bit, a hot tip is passed over the dent, making it pop
back up.

What Promise Does It Hold?
Plenty! MEMS-based storage devices such as Millipede could well be hard disk killers,
depending on the research dollars spent. Seek times are lower and more stable than those
of hard disks. In the range of 1 to 10 GB, MEMS-based storage has the lowest cost per
byte compared to non-volatile memory and hard disks. Data transfer rates can reach 1
gigabyte per second. Also, MEMS-based devices are smaller and use considerably less
power.


Jargon Buster
Spatial Light Modulator (SLM)
An SLM consists of an array of optical elements (pixels), in which each pixel acts
independently as an optical valve to adjust or modulate light intensity.
Solid Immersion Lens
A type of lens that has a high refractive index, hemispherical in shape, optimised for
precision.
Polymers
These are natural or synthetic plastic-like structures where two or more like molecules
are joined to form a more complex molecular structure.
Photolithography
This is a process used in semiconductor device fabrication to transfer a pattern from a
photomask to the surface of a wafer or substrate. Here, a chip, typically of silicon, is
coated with a chemical called a photoresist. Flashing a pattern of light and dark onto the
photoresist causes it to harden in the areas exposed to light. The parts not exposed to light
stay soft and are etched away. (A photomask is a high-precision plate containing
microscopic images of electronic circuits.)
Electron beam lithography (EBL)
As opposed to photolithography, in EBL, electron beams are used to create the patterns
directly on-chip.
Scanning Tunneling Microscope
The STM allows scientists to visualize regions of high electron density and hence infer
the position of individual atoms.
Atomic Force Microscope (AFM)
This kind of microscope works by scanning a semiconductor tip over a surface. The tip is
positioned at the end of a cantilever beam. As the tip is repelled by or attracted to the
surface, the beam deflects. The magnitude of the deflection is captured by a laser that
reflects at an oblique angle from the very end of the cantilever. A plot of the laser
deflection versus tip position on the sample surface provides the resolution of the
topography of the surface.
Tunneling Current
Tunnelling is a quantum mechanical effect. A tunnelling current occurs when electrons
move through a barrier that they shouldn't classically be able to cross. In the quantum
mechanical world, electrons have wave-like properties. These waves don't end abruptly
at a wall or barrier, but taper off quite quickly. If the barrier is thin enough, given enough
electrons, some will move through and appear on the other side. When an electron moves
through the barrier in this fashion, it is called tunnelling.
Cantilever
Simply speaking, a cantilever is a projecting structure supported only at one end, like a
diving board.


Molecular Switches

Since we're talking miniature, let's go all the way: how about storing a bit in a single
molecule? Short of using a single atom (which is more difficult), this is about as small
as it gets. (Later, we do talk, though, about single atoms!) Like we've said, it's all about
1s and 0s; so what one needs is a molecule with two stable states, and a means of
switching the molecule between these states.

How Does It Work?
A molecule currently being researched is rotaxane. Now this may sound way out: the
rotaxane molecule has a thread-like section, with a ring structure that moves around it.
The ring doesn't fall off because of the dumbbell-type bulbs at the ends of the thread.
Ways have been found to move the ring from one end of the thread to the other, both of
which are stable states for the molecule. The reading procedure is complex, and cannot be
discussed in this space.

What Is Being Done?
Scientists successfully demonstrated, in January 2003, that they could store information
using rotaxane molecules. They reportedly achieved storage densities in the range of 1 to
10 gigabits per square inch.

What Promise Does It Hold?
There's currently more research going on in terms of using molecular switches as logic
gates and such. But researchers are already envisaging fully-molecular computers, where
the storage mechanisms, too, would be molecular. This is looking way into the future, but
fully-molecular computers are an important idea: they represent the smallest you can
get!


Atomic Storage Using STMs And AFMs
Take a look at the figure alongside. Notice the "IBM"? It's not a photograph, but it
depicts what researchers have etched at the atomic scale: each of the little hills is an
individual xenon atom! The image was produced using a Scanning Tunnelling
Microscope (STM) operating a few degrees above absolute zero.

How It Works
An STM has the ability to give a view of surfaces at the atomic scale, and researchers
have envisioned the application of the technique to achieve ultra-high-density storage.
The STM has an ultra-sharp tip placed extremely close to the substrate being written
onto. A voltage applied between the tip and the substrate gives rise to a tunnelling
current. The tunnel current depends on the separation between the tip and the substrate.
As the tip is moved over the surface, the tunnel current is monitored, and the position of
the tip is changed such that the current is constant; this way, the topography of the surface
can be mapped out. The beauty of the STM is that it can be used not only to map a
surface, but also to modify it.
There are difficulties with the STM approach; one is the problem of
maintaining the distance between the tip and the surface at the angstrom level (an
angstrom is 0.1 nm). To overcome these difficulties, researchers are concentrating more
on the Atomic Force Microscope (AFM). Here, the tip rests on a cantilever spring. This
allows for two things: first, the tip can actually touch the surface, because of the
bounce enabled by the spring. Second, by monitoring and controlling the spring,
extremely small forces can be sensed as well as applied.

What's Being Done?
The letters "IBM" have been etched on a surface using an STM, as mentioned above.
Researchers are playing around with the idea of using an AFM in contact with a hard-disk-
like surface to etch data pits. It will take a long time for this to materialise, but remember
that we're talking about a hard disk that writes individual atoms! Disk storage just
cannot get any denser than that now, can it?

What Promise Does It Hold?
The use of STMs, AFMs, and similar devices is almost the ultimate application of
nanotechnology to data storage. The potential storage capacities are enormous; for
example, the etched letters in the figure represent a storage density of 1 million gigabits
per square inch! That's about 2 lakh CDs on a square inch!
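
As a rough sanity check on that figure (a back-of-the-envelope sketch that assumes a 650 MB CD and decimal units, neither of which is stated above):

bits_per_sq_inch = 1_000_000e9                 # 1 million gigabits per square inch
gb_per_sq_inch = bits_per_sq_inch / 8 / 1e9    # ~125,000 GB per square inch
cds_per_sq_inch = gb_per_sq_inch / 0.65        # ~192,000 CD equivalents, i.e. about 2 lakh
print(round(gb_per_sq_inch), round(cds_per_sq_inch))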

Rubber Stamps
In 2001, two Harvard University researchers designed a high-capacity data storage device
based on the concept of a rubber stamp transferring patterns (1s and 0s, of course) onto
a plastic film. The system could hold about 650 MB (one CD) of data on a square
centimetre.

How Does It Work?
The method works on the idea of using electric charge, rather than magnetic
orientation, to store bits. First, a pattern of 1s and 0s was etched on a piece of
semiconductor. This piece was a mould for a liquid polymer, and solidifying the polymer
resulted in the rubber stamp (see figure alongside).

1. The rubber stamp with the gold film
2. The stamp in close contact with a PMMA-coated surface
3. The PMMA holds the stamped electric charge

The 5-mm-thick stamp was then coated with an 80-nm gold film. To write the pattern, the
researchers placed the stamp on a silicon wafer covered with an 80-nm film of a
substance called polymethyl methacrylate (PMMA), a polymer that can hold electric
charge. They then applied an electric pulse to the gold plate. The current passed right
through the PMMA film to the silicon wafer, and what resulted was an electric charge in
the areas of the PMMA film that were in contact with the stamp. Where there's an
electric charge, it's a 1; where there isn't, it's a 0.

What Promise Does It Hold?
The system is low-cost, which is its main advantage. But this seems to us an exotic piece
of research, and its immediate or long-term value is not clear. What is also not clear is
how data would be transferred between two systems: the rubber stamp was created using
photolithography and electron beam lithography, so how would data patterns be created
on an everyday basis? Still, the research shows that it is possible to use electric charge
storage as an alternative to magnetic storage; and that it can lead to high data densities.
So OK, here's yet another nanotech hard disk alternative!

Multiwalled Carbon Nanotubes
Carbon nanotubes have been much researched. They are folded sheets of carbon atoms,
as in the picture alongside, where several nanotubes are shown in different colors.
Multiwalled nanotubes are concentric tubes that hold together as a structure, as in the
picture shown below. They're nanotubes, so, of course, they are measured on the
nanoscale: a nanotube can be smaller than a nanometre in diameter. It turns out that
nanotubes can be used to write data: the tube is treated like a needle to make fine changes
in a medium.



What's Being Done?
Researchers from IBM Research in Zurich have demonstrated a storage density of 250
gigabits per square inch, using the tips of multiwalled nanotubes to write bits onto a film
of a certain polymer. The nanotube tip works something like a probe, pressing 1s onto
the polymer surface; obviously, the absence of a 1 means a 0.




What Promise Does It Hold?
The Swiss and Japanese researchers said that using nanotube tips in practical devices
would not be possible until around 2008. 250 gigabits per square inch is high, but not
astronomical by any means; what is astronomical is the 50 million gigabits per square
inch being envisaged in a different scheme! That's one crore CDs on a square
inch! Here, nanotube tips are used to place hydrogen atoms on a diamond or silicon
surface. The state of the art certainly does not match up to what is possible in theory, but
you never know when a nanotech breakthrough just happens!











How Much, And When?
If you've read this far, you probably want to know exactly when you're going to be able
to upgrade! The graph above indicates storage densities (on a logarithmic scale) and
timelines. Though the hard disk seems pretty mighty in this graph, there are other
considerations: for example, hard disks have pretty much reached their theoretical limit; there's
only so much you can pack onto a single platter; and access times are shorter with, for
example, MEMS-based and holographic storage. With so many labs doing so much work,
standardization is going to be a huge issue. Also, the technologies will directly compete
with each other, so even just two years down the line, the scene is likely to be quite
different. In any case, it's almost time to look lovingly at your hard disk and pay your last
respects!



PAPER
PRESENTATION
ON
GLOBAL
POSITIONING
SYSTEMS









Abstract

Satellites already play a significant role in our daily lives, aiding communication, exploration
and research. In the future, we will undoubtedly see their influence grow. GPS is perhaps one of
satellites' most successful applications, and for consumers, receivers are becoming ever more
affordable and reliable. The recent signing of a cooperative agreement between the United States
and the European Union will expand this system, laying the foundation for a compatible and
interoperable Global Navigation Satellite System, the GNSS. With this relatively young
technology, improved accuracy, better reception and altogether new applications lie in wait for
us in the near future.


1 Introduction

"I'm astounded by people who want to know the universe when it's hard enough to find your
way around Chinatown." - Woody Allen

Using today's state-of-the-art global navigation satellite systems (GNSS), you can pinpoint your
location anywhere on earth with an accuracy of less than fifteen meters. Currently, the only
system available to the general public is the American Global Positioning System, which has
been fully functional since mid-1994. The upcoming European competitor, Galileo,
promises to improve accuracy. This would in fact make it possible to find your way through a
city, solely based on the data gathered by your receiver from space. Perhaps someday GNSS
will find extremely useful applications, such as replacing seeing-eye dogs and guiding motor
vehicles.

This article discusses the historical significance of the TRANSIT and GPS systems, including
their technical specifications and workings. Then it will briefly cover Galileo, a new satellite
positioning system. The article will conclude by explaining the advantages of this system and
why we need yet another satellite navigation system.

2 TRANSIT

In 1959, the American Navy discovered the benefits of inverting the Doppler shift concept,
thereby starting the development of the TRANSIT system, initially known as the Navy
Navigation Satellite System [3]. If you know the exact position of the satellite, you can determine
your position relative to it.

2.1 Configuration

The TRANSIT system is configured in such a way that the six satellites in orbit provide
maximum coverage. To do so, scientists put the satellites in uniform orbital precession, in six
separate polar orbits.


Six satellites
Six polar orbits
Altitude: 960 km
Period: 106 minutes
Inclination: 90°
Three ground-based Monitor Stations



2.2 Requirements

For a successful positioning measurement, the TRANSIT system specifications state that only
one satellite is required. A position can be calculated as soon as the satellite passes overhead.
TRANSIT could guarantee a successful measurement within 110 minutes at the equator, as long
as a satellite was in range of the receiver. The greater the latitude, the more satellites become
visible. For example, at 80° latitude,
the average fix time was only 30 minutes. Presently, as the system is no longer maintained and the
satellites have lost their uniform orbital precession, this guarantee no longer applies.

2.3 Concept
The theory of the relationship between the satellite and receiver is described in the following five
steps.
1. A satellite sends its exact position and time over frequency f0.
2. A receiver, expecting to receive signals from the satellite on frequency f0, searches for the
signal over a certain frequency range above f0.
3. If the signal can be found on a certain frequency f, the receiver will continue to track this
frequency as it continues to drop.
4. When f = f0, the satellite is somewhere overhead.
5. At this point, a calculation can be performed, and the receiver can stop listening.

2.4 Positioning


As mentioned, we know that the satellite is neither
approaching nor departing (i.e. the satellite's relative
velocity is zero) when f = f0. This can be proven with
the Doppler frequency expression in equation 1, where
f is the observed frequency, f0 the source frequency, vs
the source velocity, and c the speed of light.
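
A standard form of the Doppler relation consistent with the definitions above is reproduced here (a reconstruction, assuming the convention that vs is taken positive when the satellite recedes from the receiver):

f = \frac{c}{c + v_s}\, f_0 \qquad (1)

so that f < f0 while the satellite recedes, f > f0 while it approaches, and f = f0 when vs = 0.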



When vs = 0, the frequencies must evidently be equal (f = f0). With the information from the
signal carried over the frequency, the receiver knows
the exact position and time of the satellite at the moment that it passed overhead (figure 1).
Therefore, the receiver must be located somewhere on a line perpendicular to the orbit of the
satellite.
Although it may seem that we don't know the distance from the orbit, we can calculate this using
a Doppler shift expression. However, this equation is rather complex and goes beyond the scope
of this article. What we don't know is to which side of the orbit the receiver is located, because
on the line perpendicular to the satellite's orbit, we only know the distance to the intersection.
This means that two possible locations still remain, as shown in the figure.

2.5 Disadvantages
Not only do the calculations only narrow the receiver's position down to two possible locations,
they're based on it being at sea level. For anything but maritime expeditions, this would render
the system useless unless the altitude is known. Other disadvantages include bad coverage, poor
accuracy and the requirement that the receiver physically has to wait until a satellite passes
overhead. The TRANSIT system was abandoned in 1996, due to the great success of GPS.

3 GPS

Before the first TRANSIT satellite even went into orbit, the US Department of Defense already
had something much bigger in mind: the Navigation Satellite Timing and Ranging Global
Positioning System (NAVSTAR GPS), known to most as GPS. Development, however, only
started in 1973. Just like TRANSIT, GPS satellites send the time and position in a signal carried
over a given frequency. However, the similarity stops there. GPS receivers only use the
frequency as a means of obtaining the signal that's carried over it. The Doppler shift is no longer
relevant to the positioning, and the receiver
simply transposes the observed frequency to the source frequency. This is done to ensure that
the carried signal is decoded at the correct bitrate. Since 1994, GPS has been fully functional with all 24
satellites in orbit. The United States had planned on reaching this stage by the late '80s, but due
to several delays - amongst them the Challenger Space Shuttle disaster in 1986 - the deadline could not
be met.

3.1 Configuration

Scientists developed a configuration for the GPS
system that provided global coverage using at least 21
satellites in a medium earth orbit (MEO).

21 active satellites
3 spare satellites
Six orbital planes
Altitude: 20,200 km
Period: 11 hours 58 minutes
Inclination: 55°
Four satellites per plane
Five Monitor Stations


Initially, researchers contemplated a geostationary earth orbit (GEO) configuration at 36,000
km. This was discarded, however, because the satellites would require a stronger transmitter and
a more powerful launch vehicle. More importantly, a GEO would provide poor coverage of the
polar regions.

Instead, the preliminary test configuration - Block I - specified that the planes be inclined
at 63°. Being the first generation of GPS satellites, the ten satellites successfully launched
into orbit from 1978 to 1985 are designated as the GPS/Block I satellites.


The 24 current GPS satellites are in the Block II configuration, and were launched between 1989
and 1994. This configuration specifies that the six planes are inclined at about 55°. Evenly spaced
at 60° of longitude, this inclination provides the best global coverage, including the polar regions, as
shown in figure 3.
These satellites are in fact divided into four generations: II, IIA, IIR and IIF. The primary
differences have to do with accuracy and the maximum number of days without contact from
monitoring and control stations.
The Monitor Stations upload new, corrected data to each satellite every four hours. This data
includes corrections to the exact time and position of that and other GPS satellites in orbit. An
update of the satellite's position can be determined by performing a GPS measurement to a
ground antenna whose exact location is known. The Monitor Stations are located near the
equator to reduce the ionospheric effects (figure 4).

3.2 Requirements

The GPS system specifies that a measurement requires data from four different satellites. This
means that at any point in time, at least four satellites must be in range of the receiver (figure 5). If
all four of these separate signals can be received, GPS guarantees a successful measurement
within 36 seconds.
Each GPS satellite must know the exact time, with an accuracy of at least 10 nanoseconds. Each
satellite is therefore equipped with two rubidium and two cesium atomic clocks.
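
To put that accuracy requirement in perspective (a back-of-the-envelope figure, not taken from the article): light covers about three metres in 10 nanoseconds, so every 10 ns of clock error translates directly into metres of ranging error.

c = 299_792_458        # speed of light, m/s
print(c * 10e-9)       # ~3.0 m of range covered in 10 ns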



3.3 Concept

The basic relationship between the satellite and receiver is described in the following five steps,
which will be explained in greater detail in sections 3.4 through 3.7.
1. A receiver receives a signal from a GPS satellite.
2. It determines the difference between the current time and the time submitted over the
frequency.
3. It calculates the distance of the satellite from the receiver, knowing that the signal was sent at
the speed of light.
4. The receiver receives a signal from another two satellites, and again calculates the distance
from them.
5. Knowing its distance from three known locations, the receiver triangulates its position.

3.4 Positioning

3.4.1 Ideal
In figure 6a you can see that by calculating the distance d0 to satellite A, the receiver can place
itself on a sphere with radius d0 from A. We can then continue to determine the radius d1 from
satellite B. These two spheres must touch or
intersect if the measurement was successful.

If the spheres merely touch, which is highly
unlikely, we can already determine our position.
However, if they intersect, we must be
somewhere on a circle where any point of the circle
is d0 from A and d1 from B, as shown in
figure 6b. Finally, figure 6c shows us how a third
measurement will put the receiver d2 from satellite
C. This narrows our position on the circle down to
one position (if the circle and sphere touch) or two
(if they intersect). One of these positions can be
disqualified, because the location or velocity is
virtually impossible. For example, it could put us
at an altitude above the satellites.




3.4.2 Inaccuracy

Since GPS relies completely on correct timing to make a successful measurement, both
receiver and satellite must know the time very precisely. While the satellites are equipped with
four atomic clocks that are updated every four hours, the receiver only has a simple clock, no
better than a cheap digital watch.
To explain this, we'll simplify things to two dimensions. An ideal triangulation would
then only need two satellites (assuming we can still disqualify one location). However,
because we're using an inaccurate clock, we need an extra measurement. The ideal situation
with three measurements is shown in figure 7. In reality, these three circles don't align
perfectly at all. The receiver cannot keep the exact time as precisely as the satellites, so we'll
have to make the circles align by hand. We can do this, because we know that the circles
are supposed to align. If the circles are too large, we adjust our clock by moving it forward in
time until the circles are small enough to intersect in one point (figure 8). If the circles
are too small, we move our clock backwards. In essence, the receiver's clock doesn't have
to know the exact time; it only has to determine the travel time of each satellite's signal relative
to the others. Since it only takes a signal 63 to 70 milliseconds to reach the receiver, it's still
possible that the clock loses accuracy during the measurements, which means the four
spheres still don't align properly.

This means that there's not a single exact position where the receiver can be. Instead, there is
a certain area, called the pseudo range. In some cases, it may be that only one sphere misaligns.
Receivers are equipped with an algorithm that makes an educated guess within the pseudo range.
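
To make the clock-bias idea concrete, here is a minimal sketch (in Python, using NumPy; it is not the algorithm of any particular receiver) of how position and clock bias can be estimated together from pseudoranges to satellites at known positions. The satellite coordinates, receiver position and bias below are made-up illustrative values.

import numpy as np

def solve_position(sat_positions, pseudoranges, iterations=10):
    """Gauss-Newton solve for receiver position (m) and clock bias (m).

    The unknowns are (x, y, z, b); the modelled pseudorange to each satellite
    is its geometric range plus b, the receiver clock error expressed in metres.
    """
    est = np.zeros(4)                                 # start at Earth's centre, zero bias
    for _ in range(iterations):
        diffs = sat_positions - est[:3]               # receiver-to-satellite vectors
        ranges = np.linalg.norm(diffs, axis=1)        # geometric distances
        residuals = pseudoranges - (ranges + est[3])  # measured minus predicted
        jac = np.hstack([-diffs / ranges[:, None],    # d(range)/d(position)
                         np.ones((len(ranges), 1))])  # d(pseudorange)/d(bias)
        step, *_ = np.linalg.lstsq(jac, residuals, rcond=None)
        est += step
    return est[:3], est[3]

# Made-up, roughly GPS-like satellite positions (metres) and a test receiver:
sats = np.array([[26560e3, 0.0, 0.0],
                 [18000e3, 19500e3, 0.0],
                 [18000e3, 0.0, 19500e3],
                 [18000e3, 13800e3, 13800e3]])
truth = np.array([6371e3, 0.0, 0.0])                  # receiver on the equator
clock_bias_m = 85_000.0                               # assumed clock error, in metres
rho = np.linalg.norm(sats - truth, axis=1) + clock_bias_m
position, bias = solve_position(sats, rho)
print(position, bias)                                 # recovers the assumed values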

3.5 Broadcast

Now let's take a look at the actual broadcast. We already know that the actual frequency is not
important; it is only used for carrying the signal. This signal contains all the vital information for
the proper functioning of the GPS system. There are actually two frequencies that are
broadcast. Both are in the microwave range (i.e. above 1,000 MHz), and are identified by
the prefix L. The developers decided to use the rubidium atomic clock to set the frequencies. The
clock has a nominal frequency f0 of 10.23 MHz [6], which is used internally by the satellite as
the fundamental frequency, and can simply be multiplied to reach the microwave range. The first
frequency, L1, is 1575.42 MHz, which is derived from the fundamental frequency f0
(154 × f0 = 1575.42 MHz). It carries a signal for both civil and military receivers. The second
frequency, L2, is 1227.6 MHz (120 × f0 = 1227.6 MHz), and is used only by the military.
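
As a quick arithmetic check of those multipliers:

f0 = 10.23                      # fundamental frequency, MHz
print(round(154 * f0, 2))       # 1575.42 MHz -> L1
print(round(120 * f0, 2))       # 1227.60 MHz -> L2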
Within the satellite, a data signal is created that contains the Navigation Message. This message
contains all the information required by the receiver. The signal is generated at 50 bits per second
(50 Hz).
The satellite must also be able to identify itself over the frequency. To do so, a Pseudo Random
Noise code, or PRN-code, is created. This is a signal that seems to be pure noise, but in fact is a
sequence of n bits that repeats after the nth bit. Each satellite is assigned one of the 32 unique
PRN-codes. There are two different kinds of PRN-codes: one called the coarse acquisition code,
or C/A code, used for civil receivers, and another called the precise code, or P-code, for military
receivers. The C/A-code is 2^n - 1 bits long, where n is the number of digital shifting elements
the device contains. In GPS satellites, this is 10,
so the code is 1023 bits long. Subsequently, it is sent at a rate of 1.023 megabits per second,
which is a tenth of the fundamental frequency (f0 / 10 = 1.023 MHz). At this frequency, it takes
precisely 1 ms to send the 1023-bit C/A code. The P-code is sent exactly at the fundamental
frequency f0. Because the frequency is 10 times as high, the data is 10 times as accurate. The P-
codes are not publicly known, and
therefore cannot be used by the general public. For security reasons, the P-code is not as
short as the C/A code. It repeats precisely once every seven days, therefore making it
practically impossible to find, unless you know what you're looking for. Finally, the PRN-code
is combined with the Navigation Message by a modulo-2 adder, then mixed with the frequency,
which will carry it to the receiver. This process is illustrated in figure 9.
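
To illustrate how n shift elements give a code of 2^n - 1 bits, and how the modulo-2 adder works, here is a minimal Python sketch of a linear feedback shift register. The feedback taps are chosen only so that the register is maximal-length; they are not the actual tap positions or satellite-specific phase assignments of the real C/A Gold codes.

def lfsr_sequence(n=10, taps=(10, 7)):
    """Generate one period (2**n - 1 bits) of a simple linear feedback shift register.

    taps are 1-indexed register stages whose XOR feeds back into stage 1.
    (10, 7) is a maximal-length choice for n = 10, giving a 1023-bit sequence;
    the real GPS C/A codes combine two specific 10-stage registers instead.
    """
    reg = [1] * n                      # all-ones initial state
    out = []
    for _ in range(2**n - 1):
        out.append(reg[-1])            # output is taken from the last stage
        fb = 0
        for t in taps:
            fb ^= reg[t - 1]           # modulo-2 sum of the tapped stages
        reg = [fb] + reg[:-1]          # shift; feedback enters stage 1
    return out

prn = lfsr_sequence()
print(len(prn))                        # 1023 bits, sent in 1 ms at 1.023 Mbit/s

# Modulo-2 addition of the (much slower) Navigation Message with the PRN code:
nav_bit = 1
chips = [nav_bit ^ c for c in prn]     # each 50 Hz data bit spans many PRN chips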


3.6 Reception

For the receiver to revert the frequency to the Navigation Message, it replicates both the
frequency on the L-band and the 32 PRN-codes for every possible satellite. Subtracting the pure
frequency from the received signal, the PRN-code combined with the Navigation Message
remains.

Once again, exact timing is essential to receive the message correctly. As the receiver generates
the matching PRN-code, it doesn't initially align properly. The receiver tries to make the signals line up,
acquiring full PRN correlation, correcting its clock to do so (figure 10). Once aligned, it will
remove the PRN-code from the signal to recover the Navigation Message. The chosen
unique PRN-code used to decode
the Navigation Message identifies which satellite sent the broadcast. The Navigation Message,
which contains the time, ephemeris and other data, can now be analyzed.

3.7 Signal

The Navigation Message is a continuously
repeating frame of 1500 bits, split up into 5 sub-frames.
Each sub-frame, being 300 bits and sent
at 50 Hz, takes precisely 6 seconds to send (see
figure 11), and contains a 60-bit header and a 240-bit
data block.





3.7.1 Header

The header is subsequently split up into two words. The first is the Telemetry Word (TLM). Led by
an 8-bit preamble, this is the first stage of recognition for the receiver. It is followed by 16 bits of
reserved data, which must match the ending 6-bit checksum, or parity. A receiver searches for
the preamble, which is defined as 10001011. This marks the beginning of a new sub-frame. To
confirm this, the receiver gathers the reserved data, creates a parity, and checks to see if this
corresponds with the last 6 bits of the TLM. If this doesn't match, the receiver backtracks to look
for the next preamble. The second part of the header is the Handover Word (HOW). Spanning
the first 17 bits of the HOW is the time of week (TOW). This
is the first bit of significant data the receiver can use. Although we have already synchronized our
clock correctly, we have not discussed the difference between the way time is kept by satellites
and how it is kept here on earth. The GPS system does not take leap seconds into account, while
this is a requirement. Therefore, the Navigation Message tells the receiver how to update its
clock to correct Coordinated Universal Time, UTC. The next 7 bits contain general sub-frame
data. This consists of the sub-frame ID (the number of the sub-frame, 1 through 5), a reserved
alert flag and the Anti-Spoofing flag. The alert flag notifies the receiver that the satellite may
be giving an inaccurate measurement.
Before the receiver can store any data, it has to double-check that it is actually reading the
header, and not some series of bits that just happens to resemble it. It does this by making another
parity, and checking it against the last 6 bits of the HOW. If this does not match, it backtracks to
look for the next preamble, followed by the rest of the TLM, and so on. Each sub-frame has a separate
header for the primary reason that a receiver can step into the middle of a broadcast, and not
have to wait up to 30 seconds for the next cycle. This way, a full data frame can be downloaded
in less than 36 seconds, as long as the signal persists.
For example, if a receiver starts listening directly after the header of sub-frame 3 was
sent, it will have to wait until sub-frame 4's header is sent to start downloading, which could
take up to six seconds. It will take an additional thirty seconds to download the full frame,
totaling 36 seconds in the worst-case scenario.
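
A minimal sketch of the frame-synchronisation logic described above (the preamble value comes from the text; the parity check is only a placeholder, since the actual parity algorithm is not covered in this article):

PREAMBLE = [1, 0, 0, 0, 1, 0, 1, 1]    # 8-bit TLM preamble, as given above

def parity_ok(word_bits):
    """Placeholder: a real receiver recomputes the 6-bit parity of the word
    and compares it with the transmitted parity bits (algorithm not shown here)."""
    return True

def find_subframe_start(bits):
    """Return the index of the first plausible sub-frame start in a bit stream.

    Looks for the preamble, then sanity-checks the 30-bit TLM and HOW words;
    on a parity failure it backtracks and keeps searching, as described above.
    """
    for i in range(len(bits) - 60):
        if bits[i:i + 8] != PREAMBLE:
            continue
        tlm = bits[i:i + 30]
        how = bits[i + 30:i + 60]
        if parity_ok(tlm) and parity_ok(how):
            return i
    return -1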

3.7.2 Data

While the two header word types are identical in each of the five sub-frames, the enclosed data is
entirely different.
Sub-frame 1 contains the exact time gathered from the four on-board atomic clocks.
Sub-frames 2 and 3 are grouped, and contain satellite ephemeris data.
Sub-frames 4 and 5 are grouped, and split up into 25 separate pages, which can be gathered over
12.5 minutes. This data is pretty much only of interest to the Monitoring Stations, and isn't
covered in this article.
Using the data from sub-frames 1 through 3, a receiver can pinpoint its position relative to a
particular satellite, because it knows (1) the time elapsed to send the signal and (2) the position
of the satellite at the time the broadcast was sent. As explained in section 3.4.2, repeating this
process for three more satellites
will put the receiver in a pseudo range.

3.8 Disadvantages
In reality, GPS coverage is relatively poor. This is especially the case in cities, in dense
forests and other places where there are many large obstructions on the receiver's horizon. This is
because the microwave frequencies that GPS broadcasts on are very sensitive. Signals may
bounce, or be blocked entirely; both of these effects obviously work against a
successful measurement.
When a signal bounces (off a building, for example) and is read by a receiver, a miscalculation
occurs known as a multipath error. Since the signal didn't travel in a straight line from the
satellite, the receiver will conclude that it is further away from the satellite than it actually is. From
a civil point of view, another disadvantage of GPS is Selective Availability (SA), used by the
military to reduce accuracy over a
certain region. When enabled, SA increases the C/A code's inaccuracy by adding random noise.
As GPS becomes integrated into more and more systems, perhaps the most important disadvantage
is reliability. Because GPS is run by the U.S. Department of Defense and is the only open-
service satellite navigation system, there is no guarantee that the service will always be available.
The United States government could, for example, decide to block civil GPS altogether.

4 Galileo
A European global navigation satellite system
was proposed in the early 1990s, an initiative to
boost the economic advantages already proven by the
GPS system. Disadvantages of GPS had come into
play, such as the long-lasting reduced accuracy over
southern Italy resulting from Selective Availability
during the Bosnia crisis.
Until the final agreement to start the Galileo
project, the EU was not in a favorable position with
respect to the current GNSS systems, being a
consumer in the background instead of an
active partner. From a civil point of view, accuracy
was very limited and the whole system was in military
hands. Therefore, civil aviation is still restricted to
land-based navigation beacons.
In 2004, the first stage of development was completed and deployment started in 2005.
Satellite contracts have been awarded to Galileo Industries, a consortium of several
European aerospace companies, so manufacturing of the satellites should progress rapidly. The
relatively lightweight 625 kg satellites can be deployed in clusters of two to eight
satellites per launch vehicle, thereby reducing expenses and deployment time. If Galileo
continues to keep on schedule, the system should be fully functional by 2008.















4.1 Configuration

Galileo's configuration is slightly different from that of GPS, the leading difference being that it uses
only three planes instead of six [4].

27 active satellites
3 spare satellites
Three orbital planes
Altitude: 23,616 km
Period: 14 hours 4 minutes
Inclination: 56°
10 satellites per plane

This configuration describes three planes in medium earth orbit. The satellites in MEO are at
such a steep inclination to improve coverage at higher latitudes, unfortunately reducing coverage
at the equator. The next step of development would allow the installation of eight geostationary
navigation satellites. Although this doesn't have the highest priority at the moment, GEO
satellites will certainly be included later, to compensate for this reduced accuracy. (See figure 13.)
Totalling 38 new satellites, this more than doubles the current 32 GNSS satellites in orbit.

4.2 Requirements

While GPS only provides two services - one public and one private - Galileo will provide five:
1. Open Service. Free of user charge, this service is available to anybody with a GNSS receiver. It
provides simple positioning and timing. Accuracy: 5 to 10 meters.
2. Safety of Life Service. This addition to the Open Service provides authentication of the signal,
improved integrity, and timely warnings of any problems. Accuracy: 5 to 10 meters.
3. Commercial Service. Subscribed users can decode two additional signals to improve accuracy.
Accuracy: less than 1 meter.
4. Public Regulated Service. For subscribed users requiring high continuity, an additional two
signals can be decoded. Accuracy: 4 to 6 meters.
5. Search and Rescue Service. The satellite listens for beacons broadcasting a distress signal, and
globally broadcasts its position and message.

Galileo satellites broadcast over a wider spectrum of frequencies than GPS satellites, not only
to improve reception, but also to incorporate the numerous services listed above. The system will
send over the L1 and L2 frequencies discussed in section 3.5, and on another frequency, L5 (1176.45
MHz). The Open Service is free of charge, so there is no liability on the part of the system
operator when the service is disrupted. However, concession-holders will charge for other
services that offer a guarantee of continuity of service. As stated in the anticipated modus
operandi, Galileo will be liable for damages in the case of failure [8]. This legal framework would, for
example, finally allow civil aircraft to make full use of satellite navigation. This, in turn, will
lead to a new era in civil air transportation. Equally important is the economic aspect. The
estimated cost being between 3 and 3.5 billion euros, Galileo will cost no more than laying 150
kilometers of highway. Subscriptions alone will repay this in a matter of years, but
the primary reward is the growth of the Gross National Product. According to the CBA EC,
Europe can expect 62 billion euros in total economic benefits and 12 billion
euros in total social benefits. Even including approximately 2.5 billion euros in operation costs,
the 74 billion euros of benefits significantly outweigh the roughly 6 billion euros of expenses
within twenty years.
Additionally, Galileo will be interoperable and compatible with GPS and GLONASS.
Because more than twice as many satellites will be available from which to take a position, even
places that would normally obscure signals
from GPS satellites low on the horizon will receive coverage. Although Galileo can be used in
combination with the existing GNSS satellites, it will be self-sufficient and independent. This
makes it not only a back-up to GPS, but also a complete, alternative system.


4.3 Concept
The relationship between receiver and satellite is identical to that of GPS, described in section 3.3.
The European Space Agency has put a lot of effort into designing a state-of-the-art satellite for use in
the Galileo system. Their design, which will go into production this year, has several
advantages:

Increased power output with Lithium-Ion batteries
Lightweight and compact
Laser retro-reflector allows pinging from earth by laser
Upgradeability with extra payloads
Communication amongst satellites by Inter-Satellite Link (ISL)
Can be injected directly into the correct orbit by the launcher
Launchers can accommodate two to eight satellite vehicles


4.4 Applications

The scientists and politicians moving Galileo forward are
very enthusiastic about the new European civil GNSS. In
2008, we can expect our cell phones to have built-in
GNSS receivers, allowing us to use them to find our
exact position not only worldwide, but even inside a
building. When making a 112 distress
call, our telephones will automatically divulge our exact
position to help emergency services find us.
Someday GNSS might be steering our cars or guiding the
blind. Until then, we'll certainly see more reliable
receivers in aircraft, shipping and road transportation for
navigation and search and rescue.


5 Conclusion
GPS is perhaps one of satellites' most successful applications so far, and for consumers,
receivers are becoming ever more affordable and reliable. Now over 10 years old, the system
was revolutionary in its day and its popularity is still growing. However, being under military
control, there is no guarantee that the service will always be available. In 2008, the ESA's civil
Galileo system will be fully functional and will grant consumers accuracy equal to that of the
military. Anybody who owns a GNSS receiver
will be able to make free use of Galileo's open service. Commercial users will be allowed to
purchase subscriptions to two additional services, which provide extra accuracy and continuity.
Aircraft will be granted a special service that provides authentication of the signal, improved
integrity and timely warnings of problems. Finally, a search and rescue service will help
broadcast distress signals, wherever they may originate. In the 45 years of satellite
navigation, GNSS has proven its economic, technological and practical value. Certainly new
innovations will continue to show the benefits of this continuously expanding field.

REFERENCES

[1] European Space Agency. Why Europe needs Galileo.
http://www.esa.int/esaNA/GGG0H750NDC index 0.html.
[2] Stuart at Random Useless Info. GPS Stuff.
http://www.randomuseless.info/gps/.
[3] Robert J. Danchik and L. Lee Pryor. The legacy of Transit. Johns Hopkins APL Technical
Digest, 11(1,2), 1990.
http://www.globalsecurity.org/space/systems/transit.htm.
[4] European Commission Directorate-General Energy and Transport. Galileo: European
Satellite Navigation System.
http://europa.eu.int/comm/dgs/energy transport/galileo/index en.htm.
[5] William H. Guier and George C. Weiffenbach. The Early Days of Sputnik.
http://sd-www.jhuapl.edu/Transit/sputnik.html.
[6] PerkinElmer Inc. Rubidium Frequency Standard Model RFS-IIF.
http://optoelectronics.perkinelmer.com/content/Datasheets/rfs2f.pdf.






























Presented by:-


G.SREENIVASULU
Email: srnvsl_svu@yahoo.co.in

P.R.SIVA BHARATH KUMAR REDDY

Email: pr_sivabharath@yahoo.co.in


From S.V.UNIVERSITY COLLEGE
OF ENGINEERING
GRID COMPUTING

Presented by :
M.DEEPTHI
3/4 Computer Science & Engineering,
Sanketika Engineering College,
Visakhapatnam,
Andhra Pradesh.
E-mail: deepumandalapu@gmail.com

AND
D.DEEPTHI
3/4 Computer Science & Engineering,
Sanketika Engineering College,
Visakhapatnam,
Andhra Pradesh.
E-mai:deepsshara@gmail.com

GRID COMPUTING






ABSTRACT :


The last decade has seen a substantial increase in commodity computer and
network performance, mainly as a result of faster hardware and more sophisticated
software. Nevertheless, there are still problems, in the fields of science, engineering, and
business, which cannot be effectively dealt with using the current generation of
supercomputers.
To solve these problems, a new approach has emerged, known by several names, such as
meta-computing, scalable computing, global computing, Internet computing and, more
recently, peer-to-peer or Grid computing. The early efforts in Grid computing started as a
project to link supercomputing sites, but have now grown far beyond their original intent.
This paper deals with the many applications that can benefit from the Grid infrastructure,
including collaborative engineering, data exploration, high-throughput computing, and of
course distributed supercomputing.
Moreover, due to the rapid growth of the Internet and Web, there has been a rising
interest in Web-based distributed computing, and many projects have been started and
aim to exploit the Web as an infrastructure for running coarse-grained distributed and
parallel applications. In this context, the Web has the capability to be a platform for
parallel and collaborative work as well as a key technology to create a pervasive and
ubiquitous Grid-based infrastructure. This paper aims to present the state-of-the-art of
Grid computing and attempts to survey the major international efforts in developing this
emerging technology, and also deals with grid computing applications which were used
in several fields.









INDEX
1. INTRODUCTION
Computational services
Data services
Application services
Information services
Knowledge services
2. GRID CONSTRUCTION: GENERAL PRINCIPLES
Multiple administrative domains and autonomy
Heterogeneity
Scalability
Dynamicity or adaptability
Grid components
Grid fabric
Core grid middleware
User level grid middleware
Grid applications and portals
Design features
Administrative hierarchy
Communication services
Information services
Naming services
Distributed file systems and caching
Security and authorization
System status and fault tolerance
Resource management and scheduling
Computational economy and resource trading
Programming tools and paradigms
User and administrator GUI
3. GRID APPLICATIONS
4. CONCLUSION AND FUTURE TRENDS
1. INTRODUCTION
The popularity of the Internet as well as the availability of powerful computers and
high-speed network technologies as low-cost commodity components is changing the way
we use computers today. These technology opportunities have led to the possibility of using
distributed computers as a single, unified computing resource, leading to what is popularly
known as Grid computing. The term Grid is chosen as an analogy to a power Grid that
provides consistent, pervasive, dependable, transparent access to electricity irrespective of its
source. This new approach to network computing is known by several names, such as meta-computing,
scalable computing, global computing, Internet computing, and more recently
peer-to-peer (P2P) computing.

Grids enable the sharing, selection, and aggregation of a wide variety of resources
including supercomputers, storage systems, data sources, and specialized devices (see Figure
1) that are geographically distributed and owned by different organizations for solving large-
scale computational and data intensive problems in science, engineering, and commerce.
The concept of Grid computing started as a project to link geographically dispersed
supercomputers, but now it has grown far beyond its original intent. The Grid infrastructure
can benefit many applications, including collaborative engineering, data exploration, high-
throughput computing, and distributed supercomputing.
A Grid can be viewed as a seamless, integrated computational and collaborative
environment (see Figure 1) and a high-level view of activities within the Grid is shown in
Figure 2. The users interact with the Grid resource broker to solve problems, which in turn
performs resource discovery, scheduling, and the processing of application jobs on the
distributed Grid resources. From the end-user point of view, Grids can be used to provide the
following types of services:

Computational services. These are concerned with providing secure services for
executing application jobs on distributed computational resources individually or
collectively. A Grid providing computational services is often called a computational Grid.
Some examples of computational Grids are: NASA IPG, the World Wide Grid, and the NSF
TeraGrid.
Data services. These are concerned with providing secure access to distributed
datasets and their management. To provide a scalable storage and access to the data sets, they
may be replicated, catalogued, and even different datasets stored in different locations to
create an illusion of mass storage. The processing of datasets is carried out using
computational Grid services and such a combination is commonly called data Grids. Sample
applications that need such services for management, sharing, and processing of large
datasets are high-energy physics and accessing distributed chemical databases for drug
design.
Application services. These are concerned with application management and
providing access to remote software and libraries transparently. The emerging technologies
such as Web services are expected to play a leading role in defining application services.
They build on computational and data services provided by the Grid. An example system that
can be used to develop such services is NetSolve.
Information services. These are concerned with the extraction and presentation of
data with meaning by using the services of computational, data, and/or application services.
The low-level details handled by this are the way that information is represented, stored,
accessed, shared, and maintained. Given its key role in many scientific endeavors, the Web is
the obvious point of departure for this level.
Knowledge services. These are concerned with the way that knowledge is acquired,
used, retrieved, published, and maintained to assist users in achieving their particular goals
and objectives. Knowledge is understood as information applied to achieve a goal, solve a
problem, or execute a decision. An example of this is data mining for automatically building
new knowledge.
To build a Grid, the development and deployment of a number of services is required.
These include security, information, directory, resource allocation, and payment mechanisms
in an open environment, and high-level services for application development, execution
management, resource aggregation, and scheduling.
A set of general principles and design criteria that can be followed in the Grid
construction. Some of the current Grid technologies, selected as representative of those
currently available, are presented.

2. GRID CONSTRUCTION: GENERAL PRINCIPLES
This section briefly highlights some of the general principles that underlie the
construction of the Grid. In particular, the idealized design features that are required by a
Grid to provide users with a seamless computing environment are discussed. Four main
aspects characterize a Grid.
Multiple administrative domains and autonomy. Grid resources are geographically
distributed across multiple administrative domains and owned by different organizations. The
autonomy of resource owners needs to be honored along with their local resource
management and usage policies.
Heterogeneity. A Grid involves a multiplicity of resources that are heterogeneous in nature
and will encompass a vast range of technologies.
Scalability. A Grid might grow from a few integrated resources to millions. This raises the
problem of potential performance degradation as the size of Grids increases. Consequently,
applications that require a large number of geographically located resources must be
designed to be latency and bandwidth tolerant.
Dynamicity or adaptability. In a Grid, resource failure is the rule rather than the exception.
In fact, with so many resources in a Grid, the probability of some resource failing is high.
Resource managers or applications must tailor their behavior dynamically and use the
available resources and services efficiently and effectively.
The steps necessary to realize a Grid include:
The integration of individual software and hardware components into a combined
networked resource (e.g. a single system image cluster).
The deployment of low-level middleware to provide secure and transparent access to
resources; user-level middleware and tools for application development and the aggregation
of distributed resources;
The development and optimization of distributed applications to take advantage of the
available resources and infrastructure.
The components that are necessary to form a Grid (shown in Figure 3) are as follows.

Grid fabric. This consists of all the globally distributed resources that are accessible from
anywhere on the Internet. These resources could be computers (such as PCs or Symmetric
Multi-Processors) running a variety of operating systems (such as UNIX or Windows),
storage devices, databases, and special scientific instruments such as a radio telescope or
particular heat sensor.
Core Grid middleware. This offers core services such as remote process management,
co-allocation of resources, storage access, information registration and discovery, security,
and aspects of Quality of Service (QoS) such as resource reservation and trading.
User-level Grid middleware. This includes application development environments,
programming tools, and resource brokers for managing resources and scheduling application
tasks for execution on global resources.
Grid applications and portals. Grid applications are typically developed using Grid-
enabled languages and utilities such as HPC++ or MPI. An example application, such as
parameter simulation or a grand-challenge problem, would require computational power,
access to remote data sets, and may need to interact with scientific instruments. Grid portals
offer Web-enabled application services, where users can submit and collect results for their
jobs on remote resources through the Web.
In attempting to facilitate the collaboration of multiple organizations running diverse
autonomous heterogeneous resources, a number of basic principles should be followed so
that the Grid environment:
Does not interfere with the existing site administration or autonomy;
Does not compromise existing security of users or remote sites;
Does not need to replace existing operating systems, network protocols, or services;
Allows remote sites to join or leave the environment whenever they choose;
Does not mandate the programming paradigms, languages, tools, or libraries that a user
wants;
Provides a reliable and fault tolerant infrastructure with no single point of failure;
Provides support for heterogeneous components;
Uses standards, and existing technologies, and is able to interact with legacy applications;
Provides appropriate synchronization and component program linkage.
As one would expect, a Grid environment must be able to interoperate with a whole
spectrum of current and emerging hardware and software technologies. An obvious analogy
is the Web. Users of the Web do not care if the server they are accessing is on a UNIX or
Windows platform. From the client browser's point of view, they just want their requests to
Web services handled quickly and efficiently. In the same way, a user of a Grid does not
want to be bothered with details of its underlying hardware and software infrastructure. A
user is really only interested in submitting their application to the appropriate resources and
getting correct results back in a timely fashion.
An ideal Grid environment will therefore provide access to the available resources in
a seamless manner such that physical discontinuities, such as the differences between
platforms, network protocols, and administrative boundaries become completely transparent.
In essence, the Grid middleware turns a radically heterogeneous environment into a virtual
homogeneous one.
The following are the main design features required by a Grid environment.
Administrative hierarchy. An administrative hierarchy is the way that each Grid
environment divides itself up to cope with a potentially global extent. The administrative
hierarchy determines how administrative information flows through the Grid.
Communication services. The communication needs of applications using a Grid
environment are diverse, ranging from reliable point-to-point to unreliable multicast
communications. The communications infrastructure needs to support protocols that are used
for bulk-data transport, streaming data, group communications, and those used by distributed
objects. The network services used also provide the Grid with important QoS parameters
such as latency, bandwidth, reliability, fault-tolerance, and jitter control.
Information services. A Grid is a dynamic environment where the location and types of
services available are constantly changing. It is necessary to provide mechanisms to enable a
rich environment in which information is readily obtained by requesting services. The Grid
information (registration and directory) services components provide the mechanisms for
registering and obtaining information about the Grid structure, resources, services, and status.
Naming services. In a Grid, like in any distributed system, names are used to refer to a wide
variety of objects such as computers, services, or data objects. The naming service provides a
uniform name space across the complete Grid environment. Typical naming services are
provided by the international X.500 naming scheme or DNS, the Internet's scheme.
Distributed file systems and caching. Distributed applications, more often than not, require
access to files distributed among many servers. A distributed file system is therefore a key
component in a distributed system. From an application's point of view it is important that a
distributed file system can provide a uniform global namespace, support a range of file I/O
protocols, require little or no program modification, and provide means that enable
performance optimizations to be implemented, such as the usage of caches.
Security and authorization. Any distributed system involves all four aspects of security:
confidentiality, integrity, authentication and accountability. Security within a Grid
environment is a complex issue, requiring autonomously administered, diverse resources to
interact in a manner that does not impact the usability of the resources or introduce security
holes/lapses in individual systems or the environment as a whole. A security infrastructure is
the key to the success or failure of a Grid environment.
System status and fault tolerance. To provide a reliable and robust environment it is
important that a means of monitoring resources and applications is provided. To accomplish
this task, tools that monitor resources and application need to be deployed.
Resource management and scheduling. The management of processor time, memory,
network, storage, and other components in a Grid is clearly very important. The overall aim
is to efficiently and effectively schedule the applications that need to utilize the available
resources in the Grid computing environment. From a user's point of view, resource
management and scheduling should be transparent, their interaction with it being confined to
a mechanism for submitting their application. It is important in a Grid that a
resource management and scheduling service can interact with those that may be installed
locally.
Computational economy and resource trading. As a Grid is constructed by coupling
resources distributed across various organizations and administrative domains that may be
owned by different organizations, it is essential to support mechanisms and policies that help
regulate resource supply and demand. An economic approach is one means of managing
resources in a complex and decentralized manner. This approach provides incentives for
resource owners and users to be part of the Grid and to develop strategies that help
maximize their objectives.
Programming tools and paradigms. Grid applications couple resources that cannot be
replicated at a single site, or that may be globally distributed for other practical reasons. A Grid
should include interfaces, APIs, utilities, and tools to provide a rich development
environment. Common scientific languages such as C, C++, and FORTRAN should be
available, as should application-level interfaces such as MPI and PVM. A variety of
programming paradigms should be supported, such as message passing or distributed shared
memory. In addition, a suite of numerical and other commonly used libraries should be
available.
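For instance, a message-passing code written against MPI can, in principle, run unchanged on a Grid-enabled MPI implementation. The fragment below is a minimal sketch using the mpi4py package (assumed to be installed together with an MPI runtime) and launched with a command such as mpiexec -n 4 python hello.py:

```python
from mpi4py import MPI  # assumes mpi4py and an MPI runtime are available

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Every process reports its host; rank 0 gathers and prints the list.
names = comm.gather(MPI.Get_processor_name(), root=0)
if rank == 0:
    print(f"{size} processes on:", names)
```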
User and administrative GUI. The interfaces to the services and resources available should
be intuitive and easy to use. In addition, they should work on a range of different platforms
and operating systems. They also need to take advantage of Web technologies to offer a view
of portal supercomputing. The Web-centric approach to access supercomputing resources
should enable users to access any resource from anywhere over any platform at any time.
That is, users should be allowed to submit their jobs to computational resources
through a Web interface from any of the accessible platforms, such as PCs, laptops, or
Personal Digital Assistants, thus supporting ubiquitous access to the Grid. The provision
of access to scientific applications through the Web (e.g. RWCP's parallel protein information
analysis system) leads to the creation of science portals.



4. GRID APPLICATIONS

A Grid platform could be used for many different types of applications. Grid-
aware applications are commonly categorized into five main classes:
distributed supercomputing (e.g. stellar dynamics);
high-throughput (e.g. parametric studies);
on-demand (e.g. smart instruments);
data intensive (e.g. data mining);
collaborative (e.g. collaborative design).
A new emerging class of application that can benefit from the Grid is:
service-oriented computing (e.g. application service providers and users' QoS-requirements-driven
access to remote software and hardware resources).
There are several reasons for programming applications on a Grid, for example:
To exploit the inherent distributed nature of an application;
To decrease the turnaround/response time of a huge application;
To allow the execution of an application which is outside the capabilities of a single
(sequential or parallel) architecture;
To exploit the affinity between an application component and Grid resources with a
specific functionality.
The existing applications developed using the standard message-passing interface
(e.g. MPI) for clusters, can run on Grids without change, since an MPI implementation for
Grid environments is available. Many of the applications exploiting computational Grids are
embarrassingly parallel in nature. The Internet computing projects, such as SETI@Home and
Distributed.Net, build Grids by linking multiple low-end computational resources, such as
PCs, across the Internet to detect extraterrestrial intelligence and crack security algorithms,
respectively. The nodes in these Grids work simultaneously on different parts of the problem
and pass results to a central system for post processing.
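The sketch below mimics this master/worker pattern on a single machine with Python's standard multiprocessing module; Internet computing projects follow the same logic, with network transport and a central server in place of the local pool:

```python
from multiprocessing import Pool

def analyse_chunk(chunk_id: int) -> int:
    # Stand-in for an independent work unit (e.g. scanning one data block).
    return sum(i * i for i in range(chunk_id * 1000, (chunk_id + 1) * 1000))

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        partial_results = pool.map(analyse_chunk, range(8))
    # Central post-processing combines the independently computed results.
    print("combined result:", sum(partial_results))
```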
Grid resources can be used to solve grand challenge problems in areas such as
biophysics, chemistry, biology, scientific instrumentation, drug design, tomography, high
energy physics, data mining, financial analysis, nuclear simulations, material science,
chemical engineering, environmental studies, climate modeling, weather prediction,
molecular biology, neuroscience/brain activity analysis, structural analysis, mechanical
CAD/CAM, and astrophysics.

In the past, applications were developed as monolithic entities. A monolithic
application is typically the same as a single executable program that does not rely on outside
resources and cannot access or offer services to other applications in a dynamic and
cooperative manner. The majority of the scientific and engineering (S&E) as well as
business-critical applications of today are still monolithic. These applications are typically
written using just one programming language. They are generally computational intensive,
batch processed, and their elapsed times are measured in several hours or days. Good
examples of applications in the S&E area are Gaussian, PAM-Crash, and Fluent.
Today, the situation is rapidly changing and a new style of application development
based on components has become more popular. With component-based applications,
programmers do not start from scratch but build new applications by reusing existing off-the-
shelf components and applications. Furthermore, these components may be distributed across
a wide-area network. Components are defined by the public interfaces that specify the
functions as well as the protocols that they may use to communicate with other components.
An application in this model becomes a dynamic network of communicating objects. This
basic distributed object design philosophy is having a profound impact on all aspects of
information processing technology. We are already seeing a shift in the software industry
towards an investment in software components and away from handcrafted, stand-alone
applications. In addition, within the industry, a technology war is being waged over the
design of the component composition architecture.
Meanwhile, we are witnessing an impressive transformation of the ways that research
is conducted. Research is becoming increasingly interdisciplinary; there are studies that
foresee future research being conducted in virtual laboratories in which scientists and
engineers routinely perform their work without regard to their physical location. They will be
able to interact with colleagues, access instrumentation, share data and computational
resources, and access information in digital libraries. All scientific and technical journals will
be available on-line, allowing readers to download documents and other forms of
information, and manipulate it to interactively explore the published research. The
complexity of future applications will grow rapidly, and the time-to-market pressure will
mean that applications can no longer be built from scratch. Hence, mainly for cost reasons, it
is foreseeable that no single company or organization would be able to, for example, create
by itself complex and diverse software, or hire and train all the expertise necessary
to build an application. This will heighten the movement towards component frameworks,
enabling rapid construction from third-party ready-to-use components.


5. CONCLUSIONS AND FUTURE TRENDS
There are currently a large number of projects and a diverse range of new and
emerging Grid developmental approaches being pursued. These systems range from Grid
frameworks to application test beds, and from collaborative environments to batch
submission mechanisms.
It is difficult to predict the future in a field such as information technology where the
technological advances are moving very rapidly. Hence, it is not an easy task to forecast what
will become the dominant Grid approach. Windows of opportunity for ideas and products
seem to open and close in the blink of an eye. However, some trends are evident. One of
those is the growing interest in the use of Java and Web services for network computing.
The Java programming language successfully addresses several key issues that
accelerate the development of Grid environments, such as heterogeneity and security. It also
removes the need to install programs remotely; the minimum execution environment is a
Java-enabled Web browser. Java, with its related technologies and growing repository of
tools and utilities, is having a huge impact on the growth and development of Grid
environments. It is very hard to ignore the presence of the Common Object Request Broker
Architecture (CORBA) in the background. We believe that frameworks incorporating
CORBA services will be very influential on the design of future Grid environments.
The two other emerging Java technologies for Grid and P2P computing are Jini and
JXTA.
In the long term, backing only one technology can be an expensive mistake. The
framework that provides a Grid environment must be adaptable, malleable, and extensible.
As technology and fashions change it is crucial that Grid environments evolve with them.
Smarr observes that Grid computing has serious social consequences and is going
to have as revolutionary an effect as railroads did in the American Midwest in the early 19th
century. Instead of a 30-40 year lead-time to see its effects, however, its impact is going to
be much faster. Smarr concludes by noting that the effects of Grids are going to change the
world so quickly that mankind will struggle to react and change in the face of the challenges
and issues they present. Therefore, at some stage in the future, our computing needs will be
satisfied in the same pervasive and ubiquitous manner that we use the electricity power grid.
The analogies with the generation and delivery of electricity are hard to ignore, and the
implications are enormous. In fact, the Grid is analogous to the electricity (power) grid, and
the vision is to offer (almost) dependable, consistent, pervasive, and inexpensive access to
resources irrespective of where they physically reside and from where they are accessed.

JOYSTICK
A SOFTWARE DEVICE TO INSTRUCT A ROBOT

S.NARAYANAN & K.NAVEEN,
DEPARTMENT OF EIE,
DVR & Dr HS MIC COLLEGE OF TECHNOLOGY,
VIJAYAWADA.
narayen_27@yahoo.co.in

ABSTRACT:

Multiagent telerobotics is
an emerging field of research. The
military is already gravitating towards
multiagent robotic systems for scouting
missions into hostile territory and for teams
of robots operating in urban terrain. Providing
Telerobotic control capabilities to these
multiagent systems allows rapid
development and flexibility, by allowing
a human to assist the robots in
unforeseen situations. In order to
effectively communicate between the
robot(s) and the human, i.e., to give
commands to and control the robots, an
effective device is a must, one which is
efficient and convenient to use. One
such software device is the JOYSTICK.
This is the device which can control both
single and multiagent robots. With this
device we can easily control the robots
in space shuttles to do the required tasks
effectively.

INTRODUCTION:

Multiagent telerobotics is
a new and emerging field of research
because it combines the power and
robustness of multiagent robotic systems
with the flexibility of telerobotic
systems. There are numerous examples
where these telerobots are being used
which include Unmanned Ground
Vehicle (UGV) Demo Project II and



Tactical Mobile Robotics (TMR)
Project. Both Multiagent Robotics and
Telerobotics are the parents for the
introduction of Multiagent Telerobotics.
A multiagent robotic system has the
following advantages over other robotic
systems:
A large range of possible
tasks
Greater efficiency
Robustness
Lower economic costs
Ease of development
Telerobotic systems have the following
properties when compared to fully
autonomous robots:
Greater capability
Providing for
opportunism
Social acceptance
Appropriateness for one
of a kind tasks
Support for robot learning
from humans
Multiagent robotics refers to using more
than one robot to complete one task.
The robots may be working on the same
or different subtasks and they are not
required to use the same approach to a
task.
Telerobot is the robot that
determines its actions based on some
combination of Human inputs and
autonomous control. A telerobot can use
shared control or strict shared control.
A Teleoperator is a machine
that uses Direct Manual Control. This
means that the human operator has
complete control over the robot's
actions. A Multiagent Telerobotic
System (MTS) is a group of more than
one telerobot controlled by a human
operator.
A Mobile Telerobot is a
telerobot that is capable of moving itself.

Now the goal is control,
i.e., how to give signals to a robot and
instruct it to perform a particular function.
This can be achieved by different
methods depending on the type of
function to be performed. These methods
should provide a basis for building safe,
effective, and easy to use MTSs. Safe
control indicates that the robots will not
cause unintended harm to themselves,
other robots or objects in their
environment. Effective control indicates
that the human operator will be able to
accomplish the tasks that he intends to
do with the telerobots as quickly as
possible. Easy-to-use control indicates
that the human operator will be able to
accomplish these tasks with minimum of
stress and cognitive overload. While
developing such a control certain
questions are to be considered. Some of
them are listed below
How many robots should the
operator control at one time for
particular classes of work?
What form of control should
the human operator have in order to
control a group of mobile telerobots for
particular classes of tasks?
How many robots should the
operator control at one time for
particular classes of tasks?
What form should the
interaction between the human operator
and the robot group take for particular
classes of tasks?
How can multiagent
Telerobotic systems be evaluated based
on the criteria of safety, ease of use,
and effectiveness?

Multiagent Telerobotic
systems (MTS) differ in many ways.
They differ in
1. the amount of autonomy, 2. the
hardware, 3. the number of telerobots
that can be controlled at one time, 4. the
form of control, and 5. the nature of
interaction. Based on these differences,
the control of robots can be divided into
four different approaches:
1. Direct Manual Individual
2. Direct Manual Group
3. Supervisory Individual
4. Supervisory Group Controls.

THE JOYSTICK:

Any such control
system allows the user to control a group
of mobile robots in a strict teleoperative
sense. The operator can give instructions
in terms of compass direction and speed
for travel. For this purpose, an ON-
SCREEN JOYSTICK is used to input
the direction and speed. The joystick is
marked with the compass directions N,
S, E and W. The operator uses a mouse to
position this joystick. Clicking anywhere
in the white portion of the joystick with
the left mouse button sets the joystick,
drawing a line from the center to the
point where the user clicked, and sending
the corresponding movement command to
the robots based on the direction and
distance from the center to that
point. The farther the distance from the
center, the greater the magnitude of
speed command that is sent to the robots.
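A minimal sketch of that mapping is given below; the widget geometry (centre coordinates, maximum radius) and the speed scale are hypothetical, and the actual Mission Lab interface is not reproduced here:

```python
import math

def click_to_command(x, y, cx, cy, max_radius, max_speed):
    """Map a mouse click to a compass heading (degrees) and a speed command."""
    dx, dy = x - cx, cy - y                      # screen y grows downwards
    distance = math.hypot(dx, dy)
    # Heading measured clockwise from North (N = 0, E = 90, S = 180, W = 270).
    heading = (math.degrees(math.atan2(dx, dy)) + 360.0) % 360.0
    # The farther the click from the centre, the larger the speed command.
    speed = max_speed * min(distance / max_radius, 1.0)
    return heading, speed

# Example: a click above and to the right of the joystick centre.
print(click_to_command(x=160, y=70, cx=100, cy=100, max_radius=80, max_speed=1.0))
```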



Any of the above-mentioned control
systems can make use of this joystick,
with only minor differences in the
type or number of controls available.
The multiagent telerobotic interfaces
have been incorporated into a MISSION
LAB system.
Mission Lab is a system for specifying,
simulating, and executing multiagent
mobile robot missions. Mission Lab
takes high level specifications and
executes them with teams of real or
simulated robot vehicles. It provides
tools for quickly creating behavior based
robot programs, which can then be run
either in simulation or on hardware
without altering the control software.


FUTURE SCOPE:

With the development of this
software device, the robots can be
controlled productively at very low
cost. Its simplicity and low cost,
combined with effective operation, make
it an attractive choice.




A NATIONAL LEVEL TECHNICAL PAPER
PRESENTATION CONTEST
ON
EMERGING TRENDS IN ELECTRONICS, COMMUNICATION, ELECTRICAL,
INFORMATION TECHNOLOGY AND COMPUTER SCIENCE

PAPER PRESENTATION ON
NETWORK SECURITY & PROTOCOLS

Submitted At

SRI VENKATESWARA UNIVERSITY
EEE DEPARTMENT

PAPER SUBMITTED BY
A.NAGA SOWMYA, III B.Tech, IT
E-mail Id:sowmya2910@yahoo.co.in
Ph No: 08772232049

&

T.PRANITA REDDY, III B.Tech, IT
E-mail Id: pranita_csit@yahoo.co.in
Ph No: 08772245070

SREE VIDAYANIKETHAN ENGINEERING COLLEGE,
A.Rangampet,
Tirupati.


ABSTRACT:

In the last decade, the number of
computers in use has exploded. The
growth of this industry has been driven
by two separate forces which until
recently have had different goals and end
products. The first factor has been
research interests and laboratories, these
groups have always needed to share
files, email and other information across
wide areas. The research labs developed
several protocols and methods for this
data transfer, most notably TCP/IP.
Business interests are the second factor
in network growth. For quite some time,
businesses were primarily interested in
sharing data within an office or campus
environment; this led to the development
of various protocols suited specifically
to this task.
This is a very rosy picture: businesses,
governments and individuals
communicating with each other across
the world. While reality is rapidly
approaching this utopian picture, several
relatively minor issues have changed
status from low priority to extreme
importance. Security is probably the
most well known of these problems.
Businesses and private
individuals obviously desire secure
communications. Finally, connecting a
system to a network can open the system
itself up to attacks. If a system is
compromised, the risk of data loss is
high.
A Protocol is an agreement between the
communicating parties on how
communication is to proceed. To enable
networked communications, computers
must be able to transmit data among one
another. Protocols, agreed-upon methods
of communication, make this possible.
In this paper we discuss
network security, various protocols and
their features. We also discuss tunneling
and routing concepts.











1. INTRODUCTION:
Network security covers issues
such as network communication privacy,
information confidentiality and integrity
over network, controlled access to
restricted network domains and sensitive
information, and using the public
network such as Internet for private
communications. To address those
issues, various network and information
security technologies are developed by
various organizations and technology
vendors. Here is a summary of these
technologies:
2. AAA:
Authentication, Authorization and
Accounting (AAA) is a technology for
intelligently controlling access to
network resources, enforcing policies,
auditing usage, and providing the
information necessary to bill for
services. Authentication provides a way
of identifying a user, typically by having
the user enter a valid user name and
valid password before access is granted.
The authorization process determines
whether the user has the authority to
access certain information or some
network sub-domains. Accounting
measures the resources a user consumes
while using the network, which includes
the amount of system time or the amount
of data a user has sent and/or received
during a session, which could be used
for authorization control, billing, trend
analysis, resource utilization, and
capacity planning activities. A dedicated
AAA server or a program that performs
these functions often provides
authentication, authorization, and
accounting services.
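The order of these three checks can be sketched in a few lines of Python; the user table, permission table, and handler below are invented purely for illustration and stand in for what a real AAA server such as a RADIUS service would do:

```python
import time

USERS = {"alice": "s3cret"}               # hypothetical credential store
PERMISSIONS = {"alice": {"/reports"}}     # hypothetical authorization table
USAGE_LOG = []                            # accounting records

def handle_request(user, password, resource):
    if USERS.get(user) != password:                    # 1. authentication
        return "denied: bad credentials"
    if resource not in PERMISSIONS.get(user, set()):   # 2. authorization
        return "denied: not permitted"
    USAGE_LOG.append((time.time(), user, resource))    # 3. accounting
    return f"{resource} served to {user}"

print(handle_request("alice", "s3cret", "/reports"))
```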
2.1. Kerberos: Network
Authentication Protocol
Kerberos is a network
authentication protocol. Kerberos is
designed to provide strong
authentication for client/server
applications by using secret-key
cryptography. This is accomplished
without relying on authentication by the
host operating system, without basing
trust on host addresses, without requiring
physical security of all the hosts on the
network, and under the assumption that
packets traveling along the network can
be read, modified, and inserted.
Kerberos performs authentication under
these conditions as a trusted third-party
authentication service by using
conventional cryptography, i.e., shared
secret key.

2.2. Radius: Remote Authentication
Dial in User Service
RADIUS is a
protocol for carrying authentication,
authorization, and configuration
information between a Network Access
Server which desires to authenticate its
links and a shared Authentication Server.
RADIUS also carries accounting
information between a Network Access
Server and a shared Accounting Server.
Radius uses UDP as the transport
protocol.
Key features of RADIUS are:
Client/Server Model.
Network Security.
Flexible Authentication
Mechanisms.
Extensible Protocol.
2.3. SSH: Secure Shell Protocol
SSH is a protocol for secure
remote login and other secure network
services over an insecure network. SSH
consists of three major components:
The Transport Layer Protocol [SSH-
TRANS] provides server authentication,
confidentiality, and integrity. It may
optionally also provide compression.
The transport layer will typically be run
over a TCP/IP connection, but might
also be used on top of any other reliable
data stream. SSH-TRANS provides strong
encryption, cryptographic host
authentication, and integrity protection.
Authentication in this protocol level is
host-based; this protocol does not
perform user authentication. A higher
level protocol for user authentication can
be designed on top of this protocol.
The User Authentication Protocol [SSH-
USERAUTH] authenticates the client-
side user to the server. It runs over the
transport layer protocol SSH-TRANS.
When SSH-USERAUTH starts, it
receives the session identifier from the
lower-level protocol (this is the
exchange hash H from the first key
exchange). The session identifier
uniquely identifies this session and is
suitable for signing in order to prove
ownership of a private key. SSH-
USERAUTH also needs to know
whether the lower-level protocol
provides confidentiality protection.
The Connection Protocol [SSH-
CONNECT] multiplexes the encrypted
tunnel into several logical channels. It
runs over the user authentication

protocol. It provides interactive login
sessions, remote execution of
commands, forwarded TCP/IP
connections, and forwarded X11
connections.
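As an application-level illustration of these layers working together, the sketch below opens an authenticated session and runs one remote command using the third-party paramiko library; the host name and credentials are placeholders:

```python
import paramiko  # third-party SSH-2 implementation (transport, user auth, channels)

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # demo only: accept unknown host keys
client.connect("gateway.example.org", username="alice", password="s3cret")

# The connection protocol multiplexes channels; exec_command opens one of them.
stdin, stdout, stderr = client.exec_command("uname -a")
print(stdout.read().decode())
client.close()
```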
3. Tunneling:
3.1. L2F: Layer 2 Forwarding Protocol
The Layer 2 Forwarding protocol (L2F) is
used to establish a secure tunnel across a
public infrastructure (such as the
Internet) that connects an ISP POP to a
enterprise home gateway. This tunnel
creates a virtual point-to-point
connection between the user and the
enterprise customer's network.
Layer Two Forwarding protocol (L2F)
permits the tunneling of the link layer
(i.e., HDLC, async HDLC, or SLIP
frames) of higher level protocols. Using
such tunnels, it is possible to divorce the
location of the initial dial-up server from
the location at which the dial-up protocol
connection is terminated and access to
the network provided.
L2F allows encapsulation of PPP/SLIP
packets within L2F. The ISP NAS and
the Home gateway require a common
understanding of the encapsulation
protocol so that SLIP/PPP packets can
be successfully transmitted and received
across the Internet.
3.2. L2TP: Layer 2 Tunneling
Protocol
The L2TP Protocol is used for
integrating multi-protocol dial-up
services into existing Internet Service
Providers' Points of Presence (POPs). PPP defines
an encapsulation mechanism for
transporting multiprotocol packets
across layer 2 (L2) point-to-point links.
Typically, a user obtains a L2
connection to a Network Access Server
(NAS) using one of a number of
techniques (e.g., dialup POTS,
ISDN, ADSL, etc.) and then runs PPP
over that connection. In such a
configuration, the L2 termination point
and PPP session endpoint reside on the
same physical device (i.e., the NAS).
L2TP extends the PPP model by
allowing the L2 and PPP endpoints to
reside on different devices
interconnected by a packet-switched
network. With L2TP, a user has an L2
connection to an access concentrator
(e.g., modem bank, ADSL DSLAM,
etc.), and the concentrator then tunnels
individual PPP frames to the NAS. This

allows the actual processing of PPP
packets to be divorced from the
termination of the L2 circuit.
L2TP utilizes two types of messages,
control messages and data messages.
Control messages are used in the
establishment, maintenance and clearing
of tunnels and calls. Data messages are
used to encapsulate PPP frames being
carried over the tunnel. Control
messages utilize a reliable Control
Channel within L2TP to guarantee
delivery.
Data messages are not retransmitted
when packet loss occurs.
3.3. PPTP: Point to Point Tunneling
Protocol
Point-to-Point Tunneling Protocol
(PPTP) is a networking technology that
supports multiprotocol virtual private
networks (VPNs), enabling remote users
to access corporate networks securely.
Users running the Microsoft Windows NT
Workstation, Windows 95, or Windows 98
operating systems, or other Point-to-Point
Protocol (PPP)-enabled systems, can dial
into a local Internet service provider and
connect securely to their corporate network
through the Internet.
PPTP is implemented only by the PAC
and PNS. No other systems need to be
aware of PPTP. Dial networks may be
connected to a PAC without being aware
of PPTP. Standard PPP client software
should continue to operate on tunneled
PPP links.
PPTP uses an extended version of GRE
to carry user PPP packets. These
enhancements allow for low-level
congestion and flow control to be
provided on the tunnels used to carry
user data between PAC and PNS. This
mechanism allows for efficient use of
the bandwidth available for the tunnels
and avoids unnecessary retransmissions
and buffer overruns. PPTP does not
dictate the particular algorithms to be
used for this low level control but it does
define the parameters that must be
communicated in order to allow such
algorithms to work.
4. Secured Routing:
4.1. DiffServ: Differentiated Services
DiffServ defines an architecture for
implementing scalable service

differentiation in the Internet. A
"Service" defines some significant
characteristics of packet transmission in
one direction across a set of one or more
paths within a network. These
characteristics may be specified in
quantitative or statistical terms of
throughput, delay, jitter, and/or loss, or
may otherwise be specified in terms of
some relative priority of access to
network resources. Service
differentiation is desired to
accommodate heterogeneous application
requirements and user expectations, and
to permit differentiated pricing of
Internet service.
4.2. ESP: Encapsulating Security
Payload
Encapsulating Security Payload (ESP) is
a key protocol in the IPsec (Internet
Security) architecture, which is designed
to provide a mix of security services in
IPv4 and IPv6. The IP Encapsulating
Security Payload (ESP) seeks to provide
confidentiality and integrity by
encrypting data to be protected and
placing the encrypted data in the data
portion of the IP ESP. Depending on the
user's security requirements, this
mechanism may be used to encrypt
either a transport-layer segment (e.g.,
TCP, UDP, ICMP, IGMP) or an entire
IP datagram. Encapsulating the protected
data is necessary to provide
confidentiality for the entire original
datagram.
4.3. GRE: Generic Routing
Encapsulation
Generic Routing Encapsulation is a
protocol for encapsulation of an arbitrary
network layer protocol over another
arbitrary network layer protocol.
In the most general case, a system has a
packet that needs to be encapsulated and
delivered to some destination, which is
called payload. The payload is first
encapsulated in a GRE packet. The
resulting GRE packet can then be
encapsulated in some other protocol and
then forwarded. This outer protocol is
called the delivery protocol.
Security in a network using GRE should
be relatively similar to security in a
normal IPv4 network, as routing using
GRE follows the same routing that IPv4
uses natively. Route filtering will remain
unchanged. However, packet filtering
requires either that a firewall look inside

the GRE packet or that the filtering is
done on the GRE tunnel endpoints. In
those environments in which this is
considered to be a security issue it may
be desirable to terminate the tunnel at
the firewall.
4.4. IKE: Internet Key Exchange
Protocol
Internet Key Exchange (IKE) Protocol, a
key protocol in the IPsec architecture, is
a hybrid protocol using part of Oakley
and part of SKEME in conjunction with
ISAKMP to obtain authenticated keying
material for use with ISAKMP, and for
other security associations such as AH
and ESP for the IPsec DOI.
ISAKMP provides a framework for
authentication and key exchange but
does not define them. ISAKMP is
designed to be key exchange
independent, which supports many
different key exchanges. The Internet
Key Exchange (IKE) is one of a series of
key exchanges called "modes".
4.5. IPsec: Security Architecture for
IP network
IPsec provides security services at the IP
layer by enabling a system to select
required security protocols, determine
the algorithm(s) to use for the service(s),
and put in place any cryptographic keys
required to provide the requested
services. IPsec can be used to protect
one or more "paths" between a pair of
hosts, between a pair of security
gateways, or between a security gateway
and a host.
The set of security services that IPsec
can provide includes access control,
connectionless integrity, data origin
authentication, rejection of replayed
packets (a form of partial sequence
integrity), confidentiality (encryption),
and limited traffic flow confidentiality.
Because these services are provided at
the IP layer, they can be used by any
higher layer protocol, e.g., TCP, UDP,
ICMP, BGP, etc.
4.6.ISAKMP: Internet Security
Association and Key Management
Protocol
ISAKMP, a key protocol in the IPsec
(Internet Security) architecture,
combines the security concepts of
authentication, key management, and
security associations to establish the
required security for government,

commercial, and private
communications on the Internet.
The Internet Security Association and
Key Management Protocol (ISAKMP)
defines procedures and packet formats to
establish, negotiate, modify and delete
Security Associations (SA). SAs contain
all the information required for
execution of various network security
services, such as the IP layer services
(such as header authentication and
payload encapsulation), transport or
application layer services, or self-
protection of negotiation traffic.
ISAKMP defines payloads for
exchanging key generation and
authentication data. These formats
provide a consistent framework for
transferring key and authentication data
which is independent of the key
generation technique, encryption
algorithm and authentication
mechanism.
4.7. TLS: Transport Layer Security
Protocol
The Transport Layer Security (TLS) Protocol
is designed to provide privacy and data integrity
between two communicating
applications. The protocol is composed
of two layers: the TLS Record Protocol
and the TLS Handshake Protocol. At the
lowest level, layered on top of some
reliable transport protocol (TCP) is the
TLS Record Protocol. The TLS Record
Protocol provides connection security
that has two basic properties:
Private - symmetric cryptography
is used for data encryption (DES,
RC4, etc.) The keys for this
symmetric encryption are
generated uniquely for each
connection and are based on a
secret negotiated by another
protocol (such as the TLS
Handshake Protocol). The
Record Protocol can also be used
without encryption.
Reliable - message transport
includes a message integrity
check using a keyed MAC.
Secure hash functions (SHA,
MD5, etc.) are used for MAC
computations. The Record
Protocol can operate without a
MAC, but is generally only used
in this mode while another
protocol is using the Record
Protocol as a transport for
negotiating security parameters.

TLS is based on the Secure
Sockets Layer (SSL), a protocol
originally created by
Netscape. One advantage of TLS
is that it is application protocol
independent. The TLS protocol
runs above TCP/IP and below
application protocols such as
HTTP or IMAP. The HTTP
running on top of TLS or SSL is
often called HTTPS. The TLS
standard does not specify how
protocols add security with TLS;
the decisions on how to initiate
TLS handshaking and how to
interpret the authentication
certificates exchanged are left up
to the judgment of the designers
and implementers of protocols
which run on top of TLS.
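A minimal client-side sketch using Python's standard ssl module is shown below; it layers the TLS Record and Handshake Protocols over a TCP connection and reports the negotiated parameters (the server name is a placeholder):

```python
import socket
import ssl

HOST = "www.example.com"                 # placeholder server name
context = ssl.create_default_context()   # enables certificate verification

with socket.create_connection((HOST, 443)) as tcp_sock:
    with context.wrap_socket(tcp_sock, server_hostname=HOST) as tls_sock:
        print("protocol:", tls_sock.version())     # e.g. 'TLSv1.3'
        print("cipher suite:", tls_sock.cipher())  # (name, protocol, secret bits)
```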
5. Others

SOCKS: Protocol for secure session
traversal across firewalls
The SOCKS protocol provides a framework for
client-server applications in both the
TCP and UDP domains to conveniently
and securely use the services of a
network firewall. The protocol is
conceptually a "shim-layer" between the
application layer and the transport layer,
and as such does not provide network
layer gateway services, such as
forwarding of ICMP messages.
The use of network firewalls, systems
that effectively isolate an organization's
internal network structure from an
exterior network, such as the
Internet, is becoming increasingly
popular. These firewall systems typically
act as application-layer gateways
between networks, usually offering
controlled TELNET, FTP, and SMTP
access. SOCKS provides a general
framework for these protocols to
transparently and securely traverse a
firewall.
6. Conclusion

We conclude by noting that hackers and
many cracking algorithms exist to break
passwords and obtain valuable
information, which can lead to great losses.
Network security provides remedies in
many ways, and still more advanced
security measures would be even more
helpful. An eye should therefore always be
kept on network security, as it is more
important than ever.






NETWORK SECURITY
WITH
QUANTUM CRYPTOGRAPHY





BY


G.NAGA SAI SANTOSH A. PRITHVI RAJ


nagasai_gunturu@yahoo.com
prudhvi_raj43@yahoo.com




DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
NARAYANA ENGINEERING COLLEGE
(Affiliated to Jawaharlal Nehru Technological University, Hyderabad)
NELLORE






Abstract

Quantum cryptography is an effort to
allow two users of a common communication
channel to create a body of shared and secret
information. This information, which
generally takes the form of a random string
of bits, can then be used as a conventional
secret key for secure communication. It is
useful to assume that the communicating
parties initially share a small amount of
secret information, which is used up and then
renewed in the exchange process, but even
without this assumption exchanges are
possible.
The recent applications of the
principles of quantum mechanics to
cryptography have led to a remarkable new
dimension in secret communication. As a
result of these developments, it is now
possible to construct cryptographic
communication systems which detect
unauthorized eavesdropping should it occur,
and which give a guarantee of no
eavesdropping should it not occur.
The advantage of quantum cryptography over
traditional key exchange methods is that the
exchange of information can be shown to be
secure in a very strong sense, without making
assumptions about the intractability of certain
mathematical problems. Even when assuming
hypothetical eavesdroppers with unlimited
computing power, the laws of physics
guarantee (probabilistically) that the secret
key exchange will be secure, given a few
other assumptions.

Introduction

Classical physics is adequate for the
description of macroscopic objects. It applies
to systems larger than one micron (1 micron =
1 millionth of a meter). The fact that classical
physics did not always provide an adequate
description of physical phenomena became
clear. A radically new set of theories,
quantum physics, was consequently
developed by physicists such as Max Planck
and Albert Einstein, during the first thirty
years of the twentieth century. Quantum physics
describes adequately the microscopic world
(molecules, atoms, elementary particles),
while classical physics remains accurate for
macroscopic objects. The predictions of
quantum physics drastically differ from those
of classical physics. Quantum physics
features, for example, intrinsic randomness,
while classical physics is deterministic. It
also imposes limitations on the accuracy of the
measurements that can be performed on a
system (Heisenberg's uncertainty principle).
Quantum information processing is a new
and dynamic research field at the crossroads
of quantum physics and computer science. It
looks at the consequences of encoding digital
bits, the elementary units of information,
on quantum objects. Applying quantum
physics to information processing yields
revolutionary properties and possibilities,
without any equivalent in conventional
information theory. Here a digital bit is called
a quantum bit or a "qubit" in this context.
With the miniaturization of microprocessors,
which will reach the quantum limit in the
next fifteen to twenty years, this new field
will necessarily gain prominence. Its
ultimate goal is the development of a fully
quantum computer, possessing massively
parallel processing capabilities.
Although this goal is still quite distant, the
first applications of quantum information
processing are the generation of random
numbers (exploiting the fundamentally
random nature of quantum physics to
produce high-quality random numbers for
cryptographic applications) and a second
application, called quantum cryptography,
exploits Heisenberg's uncertainty principle to
allow two remote parties to exchange a
cryptographic key. It is the main focus of this
paper.
Quantum cryptography is a technology that
exploits the laws of quantum physics to
securely distribute symmetric cryptographic
keys over a fibre optic link. The keys are then
used with symmetric cryptographic
algorithms to guarantee the confidentiality
and integrity of data transmission over the
link.
While conventional key distribution
techniques rely on public key cryptography
or manual exchange, and offer therefore only
limited and conditional security, the secrecy
of keys distributed by quantum cryptography
is guaranteed in an absolute fashion by
quantum physics. Quantum cryptography
also allows fully automated key management,
with frequent key replacement, and
irrefutably reveals eavesdropping.
Implementing quantum cryptography
consequently ensures future-proof
confidentiality of data exchanged over a link,
which is extremely difficult to obtain with
conventional techniques. Its deployment
makes it possible to raise the security of mission-critical
information to an unprecedented level.

The Importance of Information Security
At a time when the reliance upon electronic
data transmission and processing is becoming
every day more prevalent, unauthorized
access to proprietary information is a real
threat. In 2004, 53% of the respondents of the
CSI/FBI Computer Crime and Security
Survey admitted having been subjected to
unauthorized use of computer systems. These
attacks caused a total loss of more than 140
million USD for the respondents of the
survey. Moreover, it is generally admitted by
experts that the vast majority of information
security incidents and attacks go unreported.
These facts clearly demonstrate that it is vital
for organizations to implement
comprehensive information security policies
and countermeasures in order to protect
reputation, ensure business continuity and
guarantee information availability, integrity
and confidentiality. Besides, legal and
compliance requirements also often demand
such measures. Last but not least, the way an
organization protects its information assets
increasingly impacts the image projected to
customers and partners.

Protecting Information
Efficiently protecting critical information
within an organization requires the definition
and the implementation of a consistent
information security policy. Such a policy
describes which processes and means must
be applied within the company to achieve this
goal. It puts into practice technologies such
as biometrics or smartcards, for instance, to
control access to the data processing and
storage infrastructures, whether electronic
or not, and guarantee the physical security
of the information. It also resorts to solutions
such as Intrusion Prevention and Detection
systems, Firewalls and Antivirus Software to
defend a secure perimeter around the internal
computer network of the organization and
prevent hackers from penetrating it. Finally,
it defines measures to protect information
transmission between remote sites. This last
aspect of information security is often
overlooked.

Cryptography

Cryptography is the art of rendering
information exchanged between two parties
unintelligible to any unauthorized person.
Although it is an old science, its scope of
applications remained mainly restricted to
military and diplomatic purposes until the
development of electronic and optical
telecommunications. In the past twenty-five
years, cryptography evolved out of its status
of "classified" science and offers now
solutions to guarantee the secrecy of the ever-
expanding civilian telecommunication
networks. Although confidentiality is the
traditional application of cryptography, it is
used nowadays to achieve broader objectives,
such as authentication, digital signatures and
non-repudiation.
The way cryptography works is illustrated in
Fig. 1. Before transmitting sensitive
information, the sender combines the plain
text with a secret key, using some encryption
algorithm, to obtain the cipher text. This
scrambled message is then sent to the
recipient who reverses the process to recover
the plain text by combining the cipher text
with the secret key using the decryption
algorithm. An eavesdropper cannot deduce
the plain message from the scrambled one
without knowing the key.
Numerous encryption algorithms exist. Their
relative strengths essentially depend on the
length of the key they use. The more bits the
key contains, the better the security. The DES
algorithm (Data Encryption Standard)
played an important role in the security of
electronic communications. It was adopted as
a standard by the US federal administration
in 1976. The length of its keys is, however,
only 56 bits. Since it can nowadays be
cracked in a few hours, it is not considered
secure any longer. It has recently been
replaced by the Advanced Encryption
Standard (AES), which has a minimum key
length of 128 bits. In addition to its length,
the amount of information encrypted with a
given key also influences the strength of the
scheme. The more often a key is changed, the
better the security. In the very special case
where the key is as long as the plain text and
used only once (this scheme is called the
one-time pad), it can be shown that
decryption is simply impossible and that the
scheme is absolutely secure.
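A one-time pad is easy to state in code: each plain-text byte is combined with one key byte (bitwise XOR here plays the role of the combination step), and applying the same operation with the same key recovers the message. The sketch below assumes the key is truly random, at least as long as the message, and never reused:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    assert len(key) >= len(data), "a one-time pad key must cover the whole message"
    return bytes(d ^ k for d, k in zip(data, key))

plain = b"attack at dawn"
key = secrets.token_bytes(len(plain))   # truly random, used once, kept secret

cipher = xor_bytes(plain, key)          # encryption
print(cipher.hex())
print(xor_bytes(cipher, key))           # decryption recovers b'attack at dawn'
```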
As one usually assumes that the encryption
algorithm is disclosed, the secrecy of such a
scheme basically depends on the fact that the
key is secret. This means first, that the key
generation process must be appropriate, in
the sense that it must not be possible for a
third party to guess or deduce it. Truly
random numbers must thus be used for the
key. Second, it must not be possible for a
third party to intercept the key during its
exchange between the sender and the
recipient. This so-called key distribution
problem is very central in cryptography.


3. Key Distribution
For years, it was believed that the only
possibility to solve the key distribution
problem was to send some physical medium
(a disk, for example) containing the key. In
the digital era, this requirement is clearly
impractical. In addition, it is not possible to
check whether this medium was intercepted
and its content copied or not. In the late
sixties and early seventies, researchers of the
British "Government Communication
Headquarters" (GCHQ) invented an
algorithm solving this problem. To use an
analogy, it is as if they replaced a locked safe
with a padlock. Before the
communication, the intended recipient sends
an open padlock to the party that will be
sending valuable information, while keeping
its key. The sender uses this open padlock to
protect the data. The recipient is then the only
one who can unlock the data with the key he
kept. Public key cryptography was born.
This invention however remained classified
and was independently rediscovered in the
mid-seventies by American researchers.
Formally, these padlocks are mathematical
expressions called one-way functions,
because they are easy to compute but difficult
to reverse. As public key cryptography
algorithms require complex calculations, they
are slow. They can thus not be used to
encrypt large amounts of data and are
exploited in practice to exchange short
session keys for secret-key algorithms such
as AES.

In spite of the fact that it is extremely
practical, the exchange of keys using public
key cryptography suffers from two major
flaws. First, it is vulnerable to technological
progress. Reversing a one-way function can
be done, provided one has sufficient
computing power or time available. The
resources necessary to crack an algorithm
depend on the length of the key, which must
thus be selected carefully. One must indeed
assess the technological progress over the
course of the time span during which the data
encrypted will be valuable. In principle, an
eavesdropper could indeed record
communications and wait until he can afford
a computer powerful enough to crack them.
This assessment is straightforward when the
lifetime of the information is one or two
years, as in the case of credit card numbers,
but quite difficult when it spans a decade. In
1977, the three inventors of RSA, the most
common public key cryptography algorithm,
issued, in an article entitled "A new kind of
cipher that would take millions of years to
break", a challenge to crack a cipher
encrypted with a 428-bit key. They
predicted at the time that this might not occur
before 40 quadrillion years. The $100 prize
was however claimed in 1994 by a group of
scientists who worked over the Internet.
Besides, Peter Shor proposed in 1994 an
algorithm which, running on a quantum
computer, would allow one-way functions
to be reversed and public key cryptography to be cracked.
The development of the first quantum
computer will consequently immediately
make the exchange of a key with public key
algorithms insecure.
The second flaw is the fact that public key
cryptography is vulnerable to progress in
mathematics. In spite of tremendous efforts,
mathematicians have not been able yet to
prove that public key cryptography is secure.
It has not been possible to rule out the
existence of algorithms that allow reversing
one-way functions. The discovery of such an
algorithm would make public key
cryptography insecure overnight. It is even
more difficult to assess the rate of theoretical
progress than that of technological advances.
There are examples in the history of
mathematics where one person was able to
solve a problem, which kept busy other
researchers for years or decades. It is even
possible that an algorithm for reversing one-
way functions has already been discovered,
but kept secret. These threats simply mean
that public key cryptography cannot
guarantee future-proof key distribution.

4. Quantum Cryptography
4.1 Principle
Quantum cryptography solves the key
distribution problem by allowing the
exchange of a cryptographic key between
two remote parties with absolute security,
guaranteed by the laws of physics. This key
can then be used with conventional
cryptographic algorithms. One may thus
claim, with some merit, that quantum key
distribution may be a better name for
quantum cryptography.
Contrary to what one could expect, the basic
principle of quantum cryptography is quite
straightforward. It exploits the fact that
according to quantum physics, the mere fact
of observing a quantum object perturbs it in
an irreparable way. When you read this
article for example, the sheet of paper must
be lighted. The impact of the light particles
will slightly heat it up and hence change it.
This effect is very small on a piece of paper,
which is a macroscopic object. However, the
situation is radically different with a
microscopic object. If one encodes the value
of a digital bit on a single quantum object, its
interception will necessarily translate into a
perturbation, because the eavesdropper is
forced to observe it. This perturbation causes
errors in the sequence of bits exchanged by
the sender and recipient. By checking for the
presence of such errors, the two parties can
verify whether their key was intercepted or
not. It is important to stress that since this
verification takes place after the exchange of
bits, one finds out a posteriori whether the
communication was eavesdropped or not.
That is why this technology is used to
exchange a key and not valuable information.
Once the key is validated, it can be used to
encrypt data. Quantum physics makes it possible to
prove that interception of the key without
perturbation is impossible.
4.2 Quantum Communications
What does it mean in practice to encode the
value of a digital bit on a quantum object? In
telecommunication networks, light is
routinely used to exchange information. For
each bit of information, a pulse is emitted and
sent down an optical fiber (a thin fiber of
glass used to carry light signals) to the
receiver, where it is registered and
transformed back into an electronic signal.
These pulses typically contain millions of
particles of light, called photons. In quantum
cryptography, one can follow the same
approach, with the only difference that the
pulses contain only a single photon. A single
photon represents a very tiny amount of light
(when reading this article your eyes register
billions of photons every second) and follows
the laws of quantum physics. In particular, it
cannot be split into halves. This means that
an eavesdropper cannot take half of a photon
to measure the value of the bit it carries,
while letting the other half continue its
course. If he wants to obtain the value of the
bit, he must observe the photon and will thus
interrupt the communication and reveal his
presence. A more clever strategy is for the
eavesdropper to detect the photon, register
the value of the bit and prepare a new photon
according to the obtained result to send it to
the receiver. In quantum cryptography, the
two legitimate parties cooperate to prevent
the eavesdropper from doing so, by forcing
him to introduce errors. Protocols have been
devised to achieve this goal.

4.3 Quantum Cryptography Protocols
Although several exist, a single quantum
cryptography protocol will be discussed here.
This is sufficient to illustrate the principle of
quantum cryptography. The BB84 protocol
was the first to be invented in 1984 by
Charles Bennett of IBM Research and Gilles
Brassard of the University of Montreal. In
spite of this, it is still widely used and has
become a de facto standard.
An emitter and a receiver can
implement it by exchanging single-photons,
whose polarization states are used to encode
bit values (refer to Box 4 for an explanation
of what polarization is) over an optical fiber.
This fiber, and the transmission equipment, is
called the quantum channel. They use four
different polarization states and agree, for
example, that a 0-bit value can be encoded
either as a horizontal state or a -45° diagonal
one (see Box 5). For a 1-bit value, they will
use either a vertical state or a +45° diagonal
one.
For each bit, the emitter sends a photon
whose polarization is randomly selected
among the four states. He records the
orientation in a list.
The photon is sent along the quantum
channel.
For each incoming photon, the receiver
randomly chooses the orientation, horizontal
or diagonal, of a filter that allows him to
distinguish between two polarization states.
He records these orientations, as well as the
outcome of the detections (photon deflected
to the right or to the left).
After the exchange of a large number of
photons, the receiver reveals, over a
conventional communication channel such
as the Internet or the phone (this channel is
also known as the classical channel), the
sequence of filter orientations he has used,
without disclosing the actual results of his
measurements. The emitter uses this
information to compare the orientation of the
photons he has sent with the corresponding
filter orientation. He announces to the
receiver in which cases the orientations
were compatible and in which they were
not. The emitter and the receiver now discard
from their lists all the bits corresponding to a
photon for which the orientations were not
compatible. This phase is called the sifting of
the key. By doing so, they obtain a sequence
of bits which, in the absence of an
eavesdropper, is identical and is half the
length of the raw sequence. They can use it
as a key.
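The sifting step can be simulated in a few lines; the sketch below models an ideal, eavesdropper-free channel with Python lists standing in for photons, and it ignores losses and noise:

```python
import random

n = 16
# Emitter: random bit values and random encoding bases ('+' rectilinear, 'x' diagonal).
bits = [random.randint(0, 1) for _ in range(n)]
send_bases = [random.choice("+x") for _ in range(n)]

# Receiver: random measurement bases; the result is reliable only when bases match.
recv_bases = [random.choice("+x") for _ in range(n)]
measured = [b if sb == rb else random.randint(0, 1)
            for b, sb, rb in zip(bits, send_bases, recv_bases)]

# Sifting: keep only the positions where the bases were compatible.
sifted_key = [m for m, sb, rb in zip(measured, send_bases, recv_bases) if sb == rb]
print("raw length:", n, "sifted length:", len(sifted_key), "key:", sifted_key)
```

On average about half of the raw bits survive the sifting, as described above.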
An eavesdropper intercepting the photons
will, in half of the cases, use the wrong filter.
By doing so, he modifies the state of the
photons (refer to Box 4) and will thus
introduce errors in the sequence shared by the
emitter and receiver. It is thus sufficient for
the emitter and the receiver to check for the
presence of errors in the sequence, by
comparing over the classical channel a
sample of the bits, to verify the integrity of
the key. Note that the bits revealed during
this comparison are discarded as they could
have been intercepted by the eavesdropper.
It is important to realize that the interception
of the communications over the classical
channel by the eavesdropper does not
constitute a vulnerability, as they take place
after the transmission of the photons.
Key Distillation
The description of the BB84 quantum
cryptography protocol assumed that the only
source of errors in the sequence exchanged
by the emitter and the receiver was the action
of the eavesdropper. All practical quantum
cryptography systems will, however, feature an
intrinsic error rate caused by component
imperfections or environmental perturbations
of the quantum channel.
In order to avoid jeopardizing the security of
the key, these errors are all attributed to the
eavesdropper. A post processing phase, also
known as key distillation, is then performed.
It takes place after the sifting of the key and
consists of two steps. The first step corrects all
the errors in the key by using a classical error
correction protocol. This step also allows the
actual error rate to be estimated precisely. With
this error rate, it is possible to accurately
calculate the amount of information the
eavesdropper may have on the key. The second
step is called privacy amplification and consists
in compressing the key by an appropriate factor
to reduce the information of the eavesdropper.
The compression factor depends on the error
rate. The higher it is, the more information an
eavesdropper might have on the key and the
more it must be compressed to be secure.
Fig. 2 schematically shows the impact of the
sifting and distillation steps on the key size.
This procedure works up to a maximum error
rate. Above this threshold, the eavesdropper
can have too much information on the
sequence to allow the legitimate parties to
produce a key. Because of this, it is essential
for a quantum cryptography system to have an
intrinsic error rate that is well below this
threshold.
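A toy version of these two ideas, sacrificing a random sample of bits to estimate the error rate and then shrinking what remains, is sketched below; the compression rule is purely illustrative, and real systems derive the final key with carefully chosen hash functions and tighter bounds:

```python
import random

def distill(emitter_key, receiver_key, sample_size=64):
    # 1. Error estimation: compare (and then discard) a random sample of bits.
    sample = set(random.sample(range(len(emitter_key)), sample_size))
    errors = sum(emitter_key[i] != receiver_key[i] for i in sample)
    qber = errors / sample_size

    # 2. Privacy amplification: the remaining bits are hashed down to a shorter
    #    key; the higher the error rate, the stronger the compression
    #    (the factor below is a toy rule, not a security proof).
    remaining = len(emitter_key) - sample_size
    final_length = int(remaining * max(0.0, 1.0 - 5.0 * qber))
    return qber, remaining, final_length

raw = [random.randint(0, 1) for _ in range(1024)]
noisy = [b ^ (random.random() < 0.03) for b in raw]   # roughly 3 % channel errors
print("QBER, sifted bits, distilled bits:", distill(raw, noisy))
```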
Key distillation is then complemented
by an authentication step in order to prevent a
man in the middle attack, where the
eavesdropper would cut the communication
channels and pretend to the emitter that he is
the receiver and vice versa. This is possible
thanks to the use of a pre-established secret
key in the emitter and the receiver, which is
used to authenticate the communications on
the classical channel. This initial secret key
serves only to authenticate the first quantum
cryptography session. After each session, part
of the key produced is used to replace the
previous authentication key.

Real World Quantum Cryptography
The first experimental demonstration
of quantum cryptography took place in 1989
and was performed by Bennett and Brassard.
A key was exchanged over 30 cm of air.
Although its practical interest was certainly
limited, this experiment proved that quantum
cryptography was possible and motivated
other research groups to enter the field. The
first demonstration over optical fiber took
place in 1993 at the University of Geneva.
The 90s saw a host of experiments, with key
distribution distance spans reaching up to
several dozens of kilometers.
The performance of a quantum
cryptography system is described by the rate
at which a key is exchanged over a certain
distance or equivalently for a given loss
budget. When a photon propagates in an
optical fiber, it has, in spite of the high
transparency of the glass used, a certain
probability to get absorbed. If the distance
between the two quantum cryptography
stations increases, the probability that a given
photon will reach the receiver decreases.
Imperfect single-photon source and detectors
further contribute to the reduction of the
number of photons detected by the receiver.
The fact that only a fraction of the photons
reaches the detectors, however, does not
constitute a vulnerability, as these do not
contribute to the final key. It only amounts to
a reduction of the key exchange rate.

When the distance between the two stations
increases, two effects reinforce each other to
reduce the effective key exchange rate. First,
the probability that a given photon reaches
the receiver decreases. This effect causes a
reduction of the raw exchange rate. Second, the signal-to-noise ratio decreases: the signal decreases with the detection probability, while the noise probability remains constant, which means that the error rate increases. A higher error rate
implies a more costly key distillation, in
terms of the number of bits consumed, and in
turn a lower effective key creation rate. Fig. 3
summarizes this phenomenon.
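The sketch below puts rough numbers on this combined effect. All parameter values (fiber loss, pulse rate, detection efficiency, noise probability) are assumptions chosen only for illustration, and the distillation cost reuses the simple 1 - 2*h(e) estimate from the previous sketch; it is not a model of any particular system.

import math

ALPHA_DB_PER_KM = 0.2   # assumed loss of standard telecom fiber
PULSE_RATE = 1.0e6      # assumed pulse rate at the emitter, in Hz
DETECTOR_EFF = 0.10     # assumed overall detection efficiency
NOISE_PROB = 1.0e-5     # assumed noise (dark-count) probability per pulse

def binary_entropy(p):
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def distilled_rate(distance_km):
    # Photon survival probability falls exponentially with fiber length.
    transmission = 10 ** (-ALPHA_DB_PER_KM * distance_km / 10)
    p_signal = transmission * DETECTOR_EFF
    p_click = p_signal + NOISE_PROB          # signal clicks plus constant noise
    raw_rate = PULSE_RATE * p_click / 2      # factor 1/2 lost to sifting
    qber = 0.5 * NOISE_PROB / p_click        # noise clicks give random bits
    return raw_rate * max(0.0, 1.0 - 2.0 * binary_entropy(qber))

for d in (0, 25, 50, 75, 100):
    print(f"{d:3d} km: about {distilled_rate(d):,.0f} bit/s after distillation")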
Typical key exchange rates for
existing quantum cryptography systems
range from hundreds of kilobits per second
for short distances to hundreds of bits per
second for distances of several dozens of
kilometers. These rates are low compared to
typical bit rates encountered in conventional
communication systems. In a sense, this low
rate is the price to pay for absolute secrecy of
the key exchange process. One must
remember though that the bits exchanged
using quantum cryptography are only used to
produce relatively short keys (128 or 256 bits). Nothing prevents transmitting data
encrypted with these keys at high bit rates.
The span of current quantum
cryptography systems is limited by the
transparency of optical fibers and typically
reaches 100 kilometers (60 miles). In
conventional telecommunications, one deals
with this problem by using optical repeaters.
They are located approximately every 80
kilometers (50 miles) to amplify and
regenerate the optical signal. In quantum
cryptography, it is not possible to do so.
Repeaters would indeed have the same effect
as an eavesdropper and corrupt the key by
introducing perturbations. Note that if it were
possible to use repeaters, an eavesdropper
could exploit them. The laws of quantum
physics forbid this. It is obviously possible to increase this span by chaining links.




Perspectives for Future Developments
Future developments in quantum
cryptography will certainly concentrate on
the increase of the key exchange rate. Several
approaches have also been proposed to
increase the range of the systems. The first
one is to get rid of the optical fiber. It is
possible to exchange a key using quantum
cryptography between a terrestrial station and
a low orbit satellite. Such a satellite moves
with respect to the earth surface. When
passing over a second station, located
thousands of kilometers away from the first
one, it can retransmit the key. The satellite is
implicitly considered as a secure
intermediary station. This technology is less
mature than that based on optical fibers.
Research groups have already performed
preliminary tests of such a system, but an
actual key exchange with a satellite remains
to be demonstrated.
There are also several theoretical
proposals for building quantum repeaters.
They would relay quantum bits without
measuring and thus perturbing them. They
could, in principle, be used to extend the key
exchange range over arbitrarily long
distances. In practice, such quantum repeaters
do not exist yet, not even in laboratories, and
much research remains to be done. It is
nevertheless interesting to note that a
quantum repeater is a rudimentary quantum
computer. At the same time as it makes public-key cryptography obsolete, the development of quantum computers will thus also make it possible to implement quantum cryptography over transcontinental distances.

5. Conclusion

For the first time in history,
the security of cryptography does not depend
any more on the computing resources of the
adversary, nor does it depend on
mathematical progress. Quantum
cryptography allows exchanging encryption
keys, whose secrecy is future-proof and
guaranteed by the laws of quantum physics.
Its combination with conventional secret-key cryptographic algorithms raises the confidentiality of data transmissions over optical networks to an unprecedented level.
Recognizing this fact, the MIT Technology
Review and Newsweek magazine identified quantum cryptography in 2003 as one of the
ten technologies that will change the world.








NETWORK SECURITY-KERBEROS
(Paper presentation)

BY

G. Sravanthi (2nd Year CSE)

M. Sruthi (2nd Year CSE)

JNTU COLLEGE OF ENGINEERING

ANANTAPUR (AUTONOMOUS)

Email IDs: msruthi1@gmail.com, sravanthi_jntucea@yahoo.com

Phone Nos: 08554 243522, 08554 225688

Address: 2-428, 2nd Road, 3rd Cross, Anantapur-515004.



ABSTRACT
Imagine the situation
that you are sending a file to your
friend, which contains sensitive data
and is to be protected from disclosure.
A third person gains unauthorized
access to monitor the transmission
and is able to modify the contents of
the file. Consider another situation
where instead of modifying the
contents the third person constructs a
new file and forwards it to your friend
as if it had come from you. So neither does your data remain protected, nor will your friend get the correct data. It is not surprising to face such problems: as the network grows day by day, many such problems related to the security of data are coming into the picture.
So, there is a need of proper
mechanism, which will ensure the
security of data during its
transmission over the network. There
must be some authentication
mechanism, which will take care that
only the authorized party gets the
access to the service. Kerberos is an
attempt in this direction.
Kerberos is a trusted third-
party authentication service based on
the conventional encryption. It is
trusted in the sense that both the
clients and servers trust Kerberos to
mediate their mutual authentication.
In a more open environment, in which network connections to other machines are supported, the user is required to prove his or her identity for each service invoked; it also requires that servers prove their identity to clients. This approach to security is supported by Kerberos.
This paper gives a brief
introduction of security services and
attacks, followed by a detailed discussion about Kerberos, the motivation behind
Kerberos and how it has been
implemented. It also presents the
issues and problems which are yet to
be solved in Kerberos.

INTRODUCTION
In a computer, the importance of protecting files and other stored information is very evident. This becomes especially important in the case of shared systems, and even more so where systems can be accessed over a network. The overall process of protecting data from manipulation by hackers is known as computer security.
But with the advent of distributed systems the whole scenario changed, as networks are now used to carry data between end users. So it is more appropriate to speak of network security, where measures are needed to deter, prevent, detect and correct security violations that involve the transmission, interception and modification of information; this makes network security both fascinating and complex. One approach is to consider the following aspects of information security:
1) Security services: A service that
enhances the security of the data
processing systems and the
information transfers of an
organization. The services are intended to counter security attacks, and they make use of one or more security mechanisms to provide the service. One useful
classification of security services is the
following:
Confidentiality: Ensures that
the information in a computer
system and transmitted
information are accessible only
for reading by authorized
parties.
Authentication: Ensures that
the origin of a message or
electronic document is
correctly identified, with an
assurance that the identity is
not false.
Integrity: Ensures that only
authorized parties are able to
modify computer systems
assets and transmitted
information.
Nonrepudiation: Requires that
neither the sender nor the
receiver of a message be able to
deny the transmission.
Access control: Requires that
access to information resources
may be controlled by or for the
target system.
Availability: Requires that
computer system assets
be available to authorized
parties when needed.

2) Security attacks: Any action that
compromises the security of
information owned by an organization.
Attacks on the security of a computer
system or network are best
characterized by viewing the function
of the computer system as providing
information. In general, there is a flow
of information from a source, such as a
file or a region of main memory, to a
destination such as another file or a
user. Security attacks can be categorized into the following four general categories:
Interruption: An asset of the system is destroyed or becomes unavailable or unusable. This is an attack on availability.
Interception: An unauthorized
party gains access to an asset.
This is an attack on
confidentiality.
Modification: An unauthorized
party not only gains access to an asset but also tampers with it. This
is an attack on integrity.
Fabrication: An unauthorized
party inserts counterfeit objects
into the system. This is an
attack on authenticity.
Fig: Information flow from source to destination under (a) normal flow, (b) interruption, (c) interception, (d) modification and (e) fabrication.

MODEL FOR NETWORK
SECURITY
A message is to be transferred from
one party to another across some sort
of internet. The two parties, who are
principals in this transaction, must
cooperate for the exchange to take
place. A logical information channel is
established by defining a route through
the internet from source to destination
and by the cooperative use of
communication protocols (e.g.,
TCP/IP) by the two principals.
Security aspects come into play when
it is necessary or desirable to protect
the information transmission from an
opponent who may present a threat to
confidentiality, authenticity, and so on.
A trusted third party may be needed to
achieve secure transmission.

Fig: Model for network security, showing two principals exchanging a message over an information channel; each applies a security-related transformation using secret information, in the presence of an opponent and, possibly, a trusted third party (e.g., an arbiter or distributor of secret information).

WHAT IS KERBEROS?
Kerberos is a third-party authentication service based on the model presented by Needham and Schroeder. Kerberos assumes a distributed client/server architecture and employs one or more Kerberos servers to provide an authentication service. Timestamps (representing the current date and time) have been added to the original model to aid in the detection of replay.

MOTIVATION OF KERBEROS

1. Secure: Someone watching the
network should not be able to obtain
the necessary information to
impersonate a user. More generally,
Kerberos should be strong enough that
a potential opponent does not find it to
be the weak link.

2. Reliable: For all services that rely
on Kerberos for access control, lack of
availability of the Kerberos service
means lack of the availability of the
supported services. So, it should be
highly reliable and should deploy a
distributed server architecture, with
one system able to back up another.

3. Transparent: Ideally, the user
should not be aware that authentication
is taking place, beyond the requirement
to enter a password.

4. Scalable: The system should be capable of supporting large numbers of clients and servers. This suggests a modular, distributed architecture. Many systems can communicate with Athena hosts; not all of them will support our mechanism, but software should not break if they do not.

HOW KERBEROS WORKS?
Kerberos keeps a database of its clients and their private keys. The private key is a large number known only to Kerberos and the client; if the client is a user, it is an encrypted password. The private keys are negotiated at registration. Because Kerberos knows these private keys, it can create messages which convince one client that another is really who it claims to be. It can also generate session keys (temporary private keys) which are given to the two clients and no one else. A session key can be used to encrypt messages between the two parties.
When a user requests
a service, her/his identity must be
established. To do this, a ticket is
presented to the server, along with
proof that the ticket was originally
issued to the user and not stolen. There
are three phases to authentication through Kerberos. In the first phase, the user obtains credentials to be used to request access to other services. In the second phase, the user requests authentication for a specific service. In the final phase, the user presents those credentials to the end server.

KERBEROS PROTOCOL

One can specify the protocol as follows in security protocol notation, where Alice (A) authenticates herself to Bob (B) using a server S. Here, K_AS is a pre-established secret key known only to A and S. Likewise, K_BS is known only to B and S. K_AB is a session key between A and B, freshly generated for each run of the protocol. T_S and T_A are timestamps generated by S and A, respectively. L is a 'lifespan' value defining the validity of a timestamp.

A asks S to initiate communication with B. S then generates a fresh K_AB and sends it to A together with a timestamp and the same data encrypted for B.
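Written out in the usual security-protocol notation (the layout below follows the standard textbook presentation of this exchange, which is an assumption here; {X}K denotes X encrypted under the key K), the four messages read:

A -> S: A, B
S -> A: {T_S, L, K_AB, B}K_AS , {T_S, L, K_AB, A}K_BS
A -> B: {T_S, L, K_AB, A}K_BS , {A, T_A}K_AB
B -> A: {T_A + 1}K_AB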
A then passes the message on to B, obtains a new T_A and passes it on encrypted under the new session key. B confirms receipt of the session key by returning a modified version of the timestamp to A. We see here that the security of the protocol relies heavily on the timestamps T and lifespans L as reliable indicators of the freshness of a communication (see the BAN logic). In relation to the following Kerberos operation, it is helpful to note that the server S here stands for both the authentication service (AS) and the ticket-granting service (TGS). In the notation above, {T_S, L, K_AB, A}K_BS is the client-to-server ticket, {A, T_A}K_AB is the authenticator, and {T_A + 1}K_AB confirms B's true identity and its recognition of A. This is required for mutual authentication.

GETTING THE INITIAL KERBEROS TICKET

When the user walks up to a workstation, only one piece of information can prove her/his identity: the user's password. The initial exchange with the authentication server is designed to minimize the chance that the password will be compromised, while at the same time not allowing a user to authenticate her/himself properly without knowledge of that password. The process of logging in appears to the user to be the same as logging in to a timesharing system. Behind the scenes, though, it is quite different.

The user is prompted for her/his username. Once it has been entered, a request is sent to the authentication server containing the user's name and the name of a special service known as the ticket-granting service.

The authentication server checks that it knows about the client. If so, it generates a random session key which will later be used between the client and the ticket-granting server. It then creates a ticket for the ticket-granting server which contains the client's name, the name of the ticket-granting server, the current time, a lifetime for the ticket, the client's IP address, and the random session key just created. This is all encrypted in a key known only to the ticket-granting server and the authentication server.

The authentication server then
sends the ticket, along with a copy of
the random session key and some
additional information, back to the
client. This response is encrypted in
the client's private key, known only to
Kerberos and the client, which is
derived from the user's password.

Once the response has been received
by the client, the user is asked for
her/his password. The password is
converted to a Data Encryption
Standard key and used to decrypt the
response from the authentication
server. The ticket and the session key,
along with some of the other
information, are stored for future use,
and the user's password and Data
Encryption Standard key are erased
from memory.

1. User logs onto workstation and requests service on host.

2. AS verifies the user's access rights in its database, then creates a ticket-granting ticket and session key. Results are encrypted using a key derived from the user's password.

3. Workstation prompts the user for the password and uses it to decrypt the incoming message, then sends the ticket and an authenticator that contains the user's name, network address, and time to the TGS.

4. TGS decrypts the ticket and authenticator, verifies the request, then creates a ticket for the requested server.

5. Workstation sends the ticket and authenticator to the server.

6. Server verifies that the ticket and authenticator match, then grants access to the service. If mutual authentication is required, the server returns an authenticator.
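The toy sketch below illustrates the ticket-plus-authenticator idea from the steps above. It is not the real Kerberos message format: Fernet (AES-based) from the third-party 'cryptography' package stands in for the DES encryption mentioned in the text, and all field names, lifetimes and the clock-skew window are illustrative assumptions.

import json, time
from cryptography.fernet import Fernet   # third-party 'cryptography' package

# Long-term secrets; in real Kerberos these live in the KDC database and the
# client's key is derived from the user's password.
k_client = Fernet.generate_key()
k_tgs = Fernet.generate_key()

def as_issue_tgt(username):
    # Toy AS exchange: a ticket sealed with the TGS key, and a reply carrying
    # the session key sealed with the client's key.
    session_key = Fernet.generate_key()
    ticket = Fernet(k_tgs).encrypt(json.dumps({
        "client": username,
        "session_key": session_key.decode(),
        "issued": time.time(),
        "lifetime": 8 * 3600,
    }).encode())
    reply = Fernet(k_client).encrypt(json.dumps({
        "session_key": session_key.decode(),
        "tgs": "krbtgt",
    }).encode())
    return ticket, reply

def tgs_verify(ticket, authenticator):
    # Toy TGS check: open the ticket with its own key, then use the session
    # key found inside to open the authenticator and compare the contents.
    t = json.loads(Fernet(k_tgs).decrypt(ticket))
    a = json.loads(Fernet(t["session_key"].encode()).decrypt(authenticator))
    assert a["client"] == t["client"], "ticket/authenticator mismatch"
    assert abs(time.time() - a["timestamp"]) < 300, "stale authenticator"
    return "request allowed"

# Client side: obtain the TGT, recover the session key, build an authenticator.
ticket, reply = as_issue_tgt("alice")
session_key = json.loads(Fernet(k_client).decrypt(reply))["session_key"].encode()
authenticator = Fernet(session_key).encrypt(json.dumps({
    "client": "alice",
    "timestamp": time.time(),
}).encode())
print(tgs_verify(ticket, authenticator))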

REQUESTING A KERBEROS
SERVICE
In order to gain
access to the server, the application
builds an authenticator containing the
client's name and IP address, and the
current time. The authenticator is then
encrypted in the session key that was
received with the ticket for the server.
The client then sends the authenticator
along with the ticket to the server in a
manner defined by the individual
application.
Once the authenticator and ticket
have been received by the server, the
server decrypts the ticket, uses the
session key included in the ticket to
decrypt the authenticator, compares the
information in the ticket with that in
the authenticator, the IP address from
which the request was received, and
the present time. If everything
matches, it allows the request to
proceed.
Finally, if the client specifies that it
wants the server to prove its identity
too, the server adds one to the
timestamp the client sent in the
authenticator, encrypts the result in the
session key, and sends the result back
to the client.

GETTING KERBEROS SERVER
TICKETS

When a program requires a
ticket that has not already been
requested, it sends a request to the
ticket-granting server. The request
contains the name of the server for
which a ticket is requested, along with
the ticket-granting ticket and an
authenticator built as described in the
previous section.
The ticket-granting server then
checks the authenticator and ticket-
granting ticket as described above. If
valid, the ticket-granting server
generates a new random session key to
be used between the client and the new
server. It then builds a ticket for the
new server.
The ticket-granting server then
sends the ticket, along with the session
key and other information, back to the
client. This time, however, the reply is
encrypted in the session key that was
part of the ticket-granting ticket. This
way, there is no need for the user to
enter her/his password again.

KERBEROS ISSUES AND OPEN
PROBLEMS

In spite of the fact that Kerberos is gaining widespread popularity, it is still lagging behind in certain areas; some of the open aspects are listed below:
1) How to decide the correct lifetime for a ticket?
2) How to allow proxies, and how to guarantee workstation integrity?
3) The ticket lifetime problem is a
matter of choosing the proper tradeoff
between security and convenience. If
the life of a ticket is long, then if a
ticket and its associated session key are
stolen or misplaced, they can be used
for a longer period of time.
4) The problem with giving a ticket a
short lifetime, however, is that when it
expires, the user will have to obtain a
new one which requires the user to
enter the password again.
5) An open problem is the proxy
problem. How can an authenticated
user allow a server to acquire other
network services on her/his behalf?
6) Another problem, and one that is
important in the Athena environment,
is how to guarantee the integrity of the
software running on a workstation.
This is not so much of a problem on
private workstations since the user that
will be using it has control over it.

CONCLUSION

In conclusion, we would like to mention a quote which is befitting in the context of security:

The art of war teaches us not to rely upon the chance of the enemy's not coming, but to prepare ourselves to counter all the threats posed by the enemy if he comes and attacks.

So in this fast-growing world, where everything is based on networks, we need strong and robust security measures which will allow us to tackle any sort of problem.























































NETWORKING OF GPS SENSORS
SOFTWARE SYSTEMS WITH
DBMS

NARAYANA ENGINEERING COLLEGE
NELLORE

A.CHIRANJEEVI, K.CHANDRAHAS,
ROLL NO: 04711A0410, ROLL NO: 04711A0409,
III/IV B-TECH, E.C.E, III/IV B-TECH, E.C.E,
NARAYANA ENGG. COLLEGE, NARAYANA ENGG COLLEGE,
MUTHURKUR ROAD, MUTHURKUR ROAD,
NELLORE. NELLORE.


Emails: chiru_acsk@yahoo.co.in
Sahchandra_409@yahoo.co.in








Abstract:

Search of knowledge is
the beginning of wisdom. Technology
varies exponentially with centuries. As
the clock ticks, technology picks. In the
world arena GPS technology is moving
to its peak position.

In this paper we reveal
how the latest developments and
applications of the fields like G.P.S.
(Global Positioning System), sensors,
fiber optical networking systems,
software and the database management
systems etc. can be combined together in
a proper manner to serve the purpose of
the safety of the Indian Railways (in
particular).
The paper will show how the technology
that was initially developed for defence
purposes and various specialized










requirements can be networked together to contribute towards the safety of one of the world's largest railway networks. The method used here can be simply stated as checking the vibrations produced by the train, in accordance with the distance from the sensors and the speed of the train, using the various components mentioned earlier, so that problems ahead can be detected in advance and mishaps avoided.

The infrastructural and research needs of this requirement already exist in India; thus it can be indigenously developed at a very low cost. It will be a much better method of safety and communication, possibly even better than those methods proposed for the future.









Introduction:
Indian Railways operates one of the world's largest railway networks. Indian Railways contributes largely to public transport, as it is a primary mode of transport. The leading nations have separate rail lines for goods and for passenger trains, something that is not practiced in India.

A large number of accidents happen in which many lives are lost, along with a huge loss of railway and national property. Most of the accidents occur due to negligence, technical faults, unreliable data, late understanding, and slow or no response towards danger signs that are considered small. Thus there is a need to develop sensitive guard rooms, cabins, stations, control rooms and many other units. There is also a need to deploy auto-alert and auto-control systems.

Principle And Working:

As we know, most of the accidents occur due to collisions, negligence of signals (i.e. human error), track-related problems and technical errors, and yet hardly any measures have been taken to address the cause. The main principle used here for the safety of the Indian Railways is that whenever a train moves over the track, vibrations are created on the track. These vibrations depend on the speed of the train and its load or weight. They travel long distances, firstly because the tracks are metal and secondly because the tracks are connected by means of fish plates.
There will be microphones on the tracks that constantly note the vibrations coming from the train. These vibrations depend on the factors mentioned above as well as on the distance of the sensor from the moving train. A database of reference vibrations will be prepared, based on ideal conditions noted from case studies of different trains and of the distance of the train from the sensors. Software will convert the measured vibrations into a form in which they can be compared with the data already existing in the database.
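As a concrete illustration of this comparison step, the sketch below reduces a vibration trace to a simple signature (RMS level and dominant frequency) and checks it against a stored reference for that train type and distance band. The reference values, the signature choice and the tolerance are all made-up assumptions for illustration; a real system would use far richer signatures.

import numpy as np

# Hypothetical reference database: (train_type, distance_band_km) -> expected signature.
REFERENCE = {
    ("express", 1): {"rms": 0.82, "peak_hz": 34.0},
    ("goods",   1): {"rms": 1.10, "peak_hz": 22.0},
}
TOLERANCE = 0.25   # assumed relative deviation that triggers an alert

def signature(samples, rate_hz):
    # Reduce a raw vibration trace to RMS amplitude plus the dominant FFT frequency.
    rms = float(np.sqrt(np.mean(samples ** 2)))
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate_hz)
    peak = float(freqs[int(np.argmax(spectrum[1:]) + 1)])   # skip the DC bin
    return {"rms": rms, "peak_hz": peak}

def check(train_type, distance_band, samples, rate_hz=1000):
    ref = REFERENCE[(train_type, distance_band)]
    sig = signature(np.asarray(samples, dtype=float), rate_hz)
    for key in ("rms", "peak_hz"):
        if abs(sig[key] - ref[key]) / ref[key] > TOLERANCE:
            return f"ALERT: {key} deviates from reference ({sig[key]:.2f} vs {ref[key]:.2f})"
    return "normal"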

Fig: Communication link architecture (train and cabin).
GPS will be used here to get details of the train such as its speed and exact position; thus GPS will work as an anti-collision device as well. GPS will provide the means of communication and of passing data from the train to all the centres and to other trains. If there is any problem on the track or a bridge, such as the removal of fish plates, the vibrations produced by the train will change (they will either get damped or get excited, which is a noticeable change from the ideal case); the same approach can be developed for bridges as well. This change can be noted by comparing the data obtained from the train with the data from the database. A fibre-optic network from the sensors will be used so that the data can be transferred immediately. All these things will work in tandem: if there is any fault, the vibrations will differ, and through satellite communication we can tell the engine driver in advance so that a preventive measure can be taken well ahead of time. The whole system depends on the proper functioning of its components, yet the proposed method will still work even if one or two components fail. Thus we can say that the proposed method can contribute largely towards railway safety by means of a network of the following components.

G.P.S (Global Positioning
System):
This is the most important component of the safety system described here.

Principle of Working:
Definition: GPS is a worldwide radionavigation system formed from a constellation of a minimum of 24 satellites and their ground stations. GPS uses these artificial satellites to compute the position of the receiver in terms of four coordinates: X, Y, Z and time. Nowadays the accuracy of GPS systems has increased to nearly the 1 cm, or even 1 mm, range for high-cost differential carrier-phase survey receivers. GPS technology makes use of the triangulation (trilateration) method for the determination of the four coordinates. The constellation contains at least 24 satellites moving in six orbital planes spaced 60 degrees apart, each plane containing four satellites and inclined at 55 degrees to the equatorial plane. This arrangement makes it possible for the receiver to always be within range of at least five satellites. Many ground stations keep a close watch on these satellites; they also maintain the satellite clocks and the clocks on the earth and, if necessary, can make suitable corrections while a satellite is in orbit. Nowadays GPS is also used to assist sailors on the open sea.
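The sketch below illustrates the triangulation (trilateration) idea: solving for the four coordinates X, Y, Z and the receiver clock bias from the pseudoranges to several satellites by iterative least squares. The satellite coordinates and ranges are made-up numbers used only to make the sketch runnable; real receivers add ephemeris, ionospheric and other corrections.

import numpy as np

def solve_position(sat_pos, pseudoranges, iters=10):
    # Iterative least squares for receiver (x, y, z) and clock bias (expressed
    # in metres), given satellite positions (N x 3, metres) and pseudoranges (N,).
    x = np.zeros(4)                             # initial guess: earth centre, zero bias
    for _ in range(iters):
        ranges = np.linalg.norm(sat_pos - x[:3], axis=1)
        residual = pseudoranges - (ranges + x[3])
        # Jacobian: unit vectors from receiver to satellites, plus a clock-bias column.
        H = np.hstack([-(sat_pos - x[:3]) / ranges[:, None],
                       np.ones((len(ranges), 1))])
        dx, *_ = np.linalg.lstsq(H, residual, rcond=None)
        x += dx
    return x[:3], x[3]

# Hypothetical satellite positions (m) and a hypothetical receiver near the surface.
sats = np.array([
    [15600e3,  7540e3, 20140e3],
    [18760e3,  2750e3, 18610e3],
    [17610e3, 14630e3, 13480e3],
    [19170e3,   610e3, 18390e3],
])
true_pos = np.array([-40e3, -17e3, 6370e3])
pr = np.linalg.norm(sats - true_pos, axis=1) + 10.0   # 10 m clock-bias equivalent
pos, bias = solve_position(sats, pr)
print(pos, bias)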


Fig: The Global Positioning System (the receiver solves for the four coordinates X, Y, Z and T).
Functions:

Navigation in three dimensions is the primary objective of GPS. GPS will have a very important role in the mission of railway safety. Its main function here is to interconnect the various points mentioned in later sections, to note the position and speed of the train, to respond to the signals picked up by the sensors, and to send signals to the train drivers and guards as well as to the ground stations containing the network of cabins, station masters, control rooms and all related points on the ground, thus giving all the details of the train in an easily understandable and visual form. The signals picked up by the satellites will then be processed and checked for any kind of faulty situation by simply comparing the signals using the software and the DBMS mentioned later in the paper. This will help drivers, station masters, cabin men and many others to manage train traffic with ease, and it will also help in the communication between all the units (stationary and moving).
Current Uses Of The GPS:
GPS is currently used in various activities such as:
1. Rescue missions on the open sea.
2. Military operations.
3. In the forests and tourist places.
4. In the mobile phones.
5. Navigation in the Open Sea.
These are to name a few but
there are many such implementations of
the GPS that are even more advanced
and futuristic.
Microphone Sensors:
This will be the most critical unit selection in the network. When a train moves, a huge amount of vibration or noise is created that can be recognized from large distances (everyone knows the childhood game of placing one's ear on the track to know whether a train is coming). Thus, with an advanced form of microphone, we can note down these vibrations on the track.


Functions:
The microphones have the purpose of noting down the vibrations produced in the track by the motion of the train. These readings will then be compared with the vibrations from the database, in accordance with the distance of the train from the sensors. The proposed sensor is the carbon microphone that is generally used in telephones.

Fibre Optic Wires:
These are the latest advances made for the transfer of data and signals at a speed that is nearly equal to the speed of light. They are required to transfer the data collected from the sensors to their respective places in the network that needs to be developed by the railways.

Functions:
The main function of the fibre-optic network is to quickly transfer the data or signals collected from the sensors to the software for analysis at the respective places in the network.


Software And DBMS:
Software consists of programs that can perform the tasks assigned to them. The DBMS (Database Management System) can store a huge amount of data. The software and the DBMS will work together for the safety of the railways.

Functions:
The main purpose of the software is to convert the vibrations noted by the sensors into a graphical or other form that can be used to store the details of the train in the DBMS. The DBMS will contain details of the vibrations or signals observed under good conditions, from the various analyses done on trains according to their weight and distance from the sensors. The main function of the software is then to compare the vibrations produced by the train with the data stored in the DBMS.




On-board Unit with GPS and
Wireless Communications:

The on-board unit consists of
communication satellite antennae,
satellite positioning antennae, the
on-board receiver system and
communication interface devices, etc. (shown in Figure 3). The on-board unit can continuously measure different parameters, tag the data with time and position information, and report irregular conditions.
Fig: On-board unit design, linking the trains through a GPS satellite and a communication satellite to a GPS receiver, communication interface, data collector equipment, man-machine interface, database and GSM-R wireless cab signal.

For Cabins, Station, Guard Rooms,
Control Rooms & Other Important
Establishments: These establishments need a tremendous upgrade of their current status for the safety of the railways. The requirements are as follows:

1. Need of the towers for the transfer of
the details and the communication

between other establishments, trains and
the satellites.
2. A visual display system to show the
actual position of the trains along with
other details like visual and audio warning
systems.
3. Receivers and transmitters tuned to the proper frequencies and in accordance with the network.
The train driver's cabin must be such that the driver comes to know about the track, weather, bridges and construction status in advance. Thus there is a need for a high-tech visual display screen to show the details required by the driver for the smooth operation of the train, as well as a good communication system. This can be achieved by using the GPS satellites for communication.

Transmitter & Receiver:
From the standard data available, the frequencies should be as follows: main allocations of 5.925 to 6.425 GHz (uplink) and 3.700 to 4.200 GHz (downlink), which may change as per the advancements made and the system requirements. The range of the train's device will be 25-75 km (as per the requirements and the system integration).

System Integration:
After the components are fabricated, they will be integrated to perform the required and proposed functions. This will help the Indian Railways for a very long time. The integration of the proposed project will be such that the train, guard rooms, control rooms, stations and the satellites will all work simultaneously for the safety of the railways. The figure shows the integration of the system in diagram form.

Expected Benefits To The Indian
Railways:

1. Railway Safety
2. Automation in the Work.
3. Reduction in stress levels because of good equipment and machinery.
4. Reduced time delays: as the position of the train will be known at all times, there will not be any time delays.
5. Train traffic control: due to continuous monitoring of the trains, there will be good train traffic control.
6. Futuristic method: with this method it is possible to build a fully automatic train network which may not even need a driver.

Implementation Problems:

There are very few problems associated with the project: the implementation time is large because of the training and system integration required, and the cost may seem high to some non-technocrats. Indigenous development, however, can reduce the principal costs to an extent and allow quick implementation of the technology.

Conclusion:

The benefits and advantages of the proposed method outnumber the problems and disadvantages, and the method has good expected results. Thus the proposal should be considered for the most important transport system in India, and for the safety of the most precious things of all: human life and national property. The project can also work over a very long time period. Thus the integration of GPS and sensors along with software systems can be effectively used to provide safety as well as a huge income to the Indian Railways.













Presented by-
C.PALGUN KUMAR C.MANOJ KUMAR
III B.Tech (E.C.E) III B.Tech (E.C.E)
S.K.I.T S.K.I.T
SRI KALAHASTI. SRI KALAHASTI.
Email-palgun13@gmail.com Email- manoj_madhu1986@yahoo.co.in














ABSTRACT:
Bluetooth Wireless Technology promises the creation of a world without wires. It is now positioned to deliver on some of these promises as products incorporating Bluetooth Wireless Technology become available. Bluetooth technology provides a low-cost, low-power and low-complexity solution for ad-hoc wireless connectivity. This wireless personal area network (WPAN) technology, based on the Bluetooth specification, is now an IEEE standard under the denomination of 802.15 WPANs.
This talk provides an overview of Bluetooth technology and its applications. The Bluetooth
system is introduced and various modes of operation are discussed. A description of
functionalities of Bluetooth layers and the protocol specifications is presented. Various usage
scenarios and the profiles which support them are explained. An insight into the performance of
Bluetooth based systems is provided. Means of evaluating/improving performance of Bluetooth
based systems are discussed. Finally, we provide an overview of a number of open research
problems related to Bluetooth specifically, and short range wireless in general.












INTRODUCTION:
Wireless networking and wireless
communication are fairly broad areas of
technological research and development.
Wireless networking and communications
both deal with transmitting data via some
form of air waves.

Wireless communication involves any
transfer of data from one point to another.
Radios, cell phones, satellite TVs, and
remote controls are just a few of the many
technologies currently being used and
developed in the area of wireless
communication. Wireless technology is
rapidly evolving, and is playing an
increasing role in the lives of people
throughout the world.

Wireless communication can be subdivided into several categories: fixed, mobile, portable, and IR or short-range communication (i.e., infrared, Bluetooth).
HISTORY:

I. Major Developments in the Formation of
Wireless Communications
1. 1968: AT&T proposes the concept of the
cellular telephone to the FCC; by 1976, 543
customers in the New York area could be
accommodated in the Bell system of the time.
2. Formation of the 802.11 Wireless Local
Area Networks Standards Working Group in
1987; this formed the foundation of modern
Wi-Fi wireless networks used widely in
personal computers.

II. Notable Players in the History of
Wireless Communications

Nikola Tesla - physicist and electrical
engineer, born in 1856, did much research in
the field of wireless communications.
1. Developed a system for wireless
telegraphy; he demonstrated it in St. Louis
in 1893
2. In addition, he did a lot of work in the
field of wireless power transmission


Heinrich Rudolf Hertz - German physicist,
born in 1857
1. The SI unit of frequency, the hertz, was
named after him
2. Like Tesla, was a major supporter and
developer of wireless telegraph
3. Through experimentation, he proved
that electric signals can travel through open
air, as had been predicted by James Clerk Maxwell and Michael Faraday; this is the basis for the invention of radio.



Guglielmo Marconi - Italian electrical
engineer, born in 1874; he is known as the
father of radio
1. He made the first wireless transmission across water on May 13th, 1897, from Lavernock Point, South Wales, to Flat Holm Island.
2. He made a wireless transmission across
the water from Ballycastle (Northern
Ireland) to Rathlin Island in 1898.
3. Worked on spark gap telegraphy, a
rudimentary form of digital broadcasting

WHAT IS BLUETOOTH?
Look around you at the moment: you have your keyboard connected to the computer, as well as a printer, mouse, monitor and so on. What (literally) joins all of these together? They are connected by cables. Cables have become the bane of many offices, homes etc. Most of us have experienced the 'joys' of trying to figure out what cable goes where, and getting tangled up in the details. Bluetooth essentially aims to fix this: it is a cable-replacement technology.
ORIGIN FOR THE NAME:

By the way, if you're wondering where the Bluetooth name originally came from, it is named after a Danish Viking and king, Harald Blåtand (translated as Bluetooth in English), who lived in the latter part of the 10th century. Harald Blåtand united and controlled Denmark and Norway (hence the inspiration for the name: uniting devices through Bluetooth). He got his name from his very dark hair, which was unusual for Vikings; Blåtand means dark complexion. However, a more popular (but less likely) explanation is that old Harald had an inclination towards eating blueberries, so much so that his teeth became stained with the colour, leaving Harald with a rather unique set of molars. And you thought your teeth were bad...
HOW?
Conceived initially by Ericsson, before being adopted by a myriad of other companies, Bluetooth is a standard for a small, cheap radio chip to be plugged into computers, printers, mobile phones, etc. A Bluetooth chip is designed to replace cables by taking the information normally carried by the cable and transmitting it at a special frequency to a receiving Bluetooth chip, which will then give the information received to the computer, phone or whatever.
HOW ABOUT?
That was the original idea, but the
originators of the original idea soon realised
that a lot more was possible. If you can
transmit information between a computer
and a printer, why not transmit data from a
mobile phone to a printer, or even a printer
to a printer






FACTORS INVOLVED IN
BLUETOOTH TECHNOLOGY
Bluetooth technology is built according to the following layers:
- Applications
- L2CAP
- LMP
- Baseband
- Radio
The L2CAP layer controls only asynchronous data transmission. L2CAP uses multiplexing, segmentation and data re-assembly in order to preserve data integrity during transmission.
The LMP layer controls the correct transmission of the packets.
The Baseband layer specifies the means of access and the procedures of the physical layer to support real-time exchange of voice and data flows, as well as management of the network between the various elements.
The Radio layer corresponds to layer 1 of the OSI model. Here the medium is the air, and data circulates by radio waves (as in 802.11).
We can establish two different connection types with Bluetooth:
- A peer-to-peer connection. Some packets are sent between the two elements in order to learn the identity of each one (ID packets) and to synchronize them together (FHS packets).
- A peer-to-multipeer connection. This link creates a mini local area network (named a piconet) gathering all the elements that are close enough to each other to be able to exchange information.
HOW BLUETOOTH
OPERATES
Bluetooth networking transmits data via
low-power radio waves. It communicates on
a frequency of 2.45 gigahertz (actually
between 2.402 GHz and 2.480 GHz, to be
exact). This frequency band has been set
aside by international agreement for the use
of industrial, scientific and medical devices
(ISM).


A number of devices that you may already
use take advantage of this same radio-
frequency band. Baby monitors, garage-door
openers and the newest generation of
cordless phones all make use of frequencies
in the ISM band. Making sure that Bluetooth
and these other devices don't interfere with
one another has been a crucial part of the
design process.
One of the ways Bluetooth devices avoid
interfering with other systems is by sending
out very weak signals of about 1 milliwatt.
By comparison, the most powerful cell
phones can transmit a signal of 3 watts.


The low power limits the range of a
Bluetooth device to about 10 meters (32
feet), cutting the chances of interference
between your computer system and your
portable telephone or television. Even with
the low power, Bluetooth doesn't require
line of sight between communicating
devices. The walls in your house won't stop
a Bluetooth signal, making the standard
useful for controlling several devices in
different rooms.
Bluetooth can connect up to eight devices
simultaneously. With all of those devices in
the same 10-meter (32-foot) radius, you
might think they'd interfere with one
another, but it's unlikely.










Bluetooth uses a technique called spread-
spectrum frequency hopping that makes it
rare for more than one device to be
transmitting on the same frequency at the
same time. In this technique, a device will
use 79 individual, randomly chosen
frequencies within a designated range,
changing from one to another on a regular
basis. In the case of Bluetooth, the
transmitters change frequencies 1,600 times
every second, meaning that more devices
can make full use of a limited slice of the radio spectrum. Since every Bluetooth transmitter uses spread-spectrum transmission automatically, it is unlikely that
two transmitters will be on the same
frequency at the same time. This same
technique minimizes the risk that portable
phones or baby monitors will disrupt
Bluetooth devices, since any interference on
a particular frequency will last only a tiny
fraction of a second.
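The toy simulation below gives a feel for why hopping keeps collisions rare: two independent piconets each hop 1,600 times per second over the 79 channels, and only a small fraction of slots coincide. Real Bluetooth derives its hop sequence from the master's address and clock; a seeded pseudo-random choice merely stands in for that here.

import random

CHANNELS = 79            # Bluetooth channels in the 2.4 GHz ISM band
HOPS_PER_SECOND = 1600

def hop_sequence(seed, n):
    # Stand-in for the real hop-selection kernel: one pseudo-random channel per slot.
    rng = random.Random(seed)
    return [rng.randrange(CHANNELS) for _ in range(n)]

a = hop_sequence("piconet-A", HOPS_PER_SECOND)
b = hop_sequence("piconet-B", HOPS_PER_SECOND)
collisions = sum(1 for x, y in zip(a, b) if x == y)
print(f"{collisions} colliding slots out of {HOPS_PER_SECOND} "
      f"(expected about {HOPS_PER_SECOND / CHANNELS:.0f})")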
A piconet can simultaneously group 8 active Bluetooth devices and about 200 devices in sleep mode. Sleeping devices are not active, but they remain synchronized with the piconet channel.
Transmission management in the piconet is done by a master, which in general is the device that initiated the connection. The other devices are called slaves.
All slaves adopt the synchronization of the master device and answer all messages which include the access code of the master (derived from its address).
Each piconet has its own transmission channel. However, time multiplexing makes it possible for a slave to take part in another piconet, in which it is then either a slave or a master. This is what we call chained networks, or a scatternet. A scatternet thus makes it possible to connect piconets to one another in order to obtain wider coverage and a group of more than eight devices. However, it is important to note that a Bluetooth device can be master of only one piconet.
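The small sketch below encodes the two rules just stated as data-structure constraints: a piconet has one master and at most seven active slaves, and a device may be master of only one piconet even while it is a slave in others (forming a scatternet). The class and field names are purely illustrative.

class Device:
    def __init__(self, name):
        self.name = name
        self.master_of = None      # a device can be master of at most one piconet
        self.slave_in = set()      # but may be a slave in several (a scatternet)

class Piconet:
    MAX_ACTIVE_SLAVES = 7          # active members limited to 1 master + 7 slaves

    def __init__(self, master):
        if master.master_of is not None:
            raise ValueError(f"{master.name} is already master of a piconet")
        self.master = master
        master.master_of = self
        self.slaves = []

    def add_slave(self, device):
        if len(self.slaves) >= self.MAX_ACTIVE_SLAVES:
            raise ValueError("piconet already has 7 active slaves; park or sniff others")
        self.slaves.append(device)
        device.slave_in.add(self)

# A device bridging two piconets as a slave forms a scatternet.
m1, m2, bridge = Device("A"), Device("B"), Device("C")
p1, p2 = Piconet(m1), Piconet(m2)
p1.add_slave(bridge)
p2.add_slave(bridge)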
Inside a piconet, slaves can, when needed, be in a more or less reduced state of activity (SNIFF or HOLD mode) or in sleep (PARK mode). These various modes allow Bluetooth devices to manage their own power consumption and allow the master to manage the communications.
Many types of packets can be exchanged between the master and the various slaves: data packets of varying length and voice packets of varying throughput.
Bluetooth technology is based on frequency hopping at 1,600 hops/s and TDM (Time Division Multiplexing).
Data exchanges between two Bluetooth devices are done in packets, which can be synchronous or asynchronous.
Bluetooth can support different cases of transmission:
- 1 asynchronous data channel with a maximum speed of 723.2 kbit/s
- 1 to 3 simultaneous synchronous voice channels with a speed of 64 kbit/s each
- 1 asynchronous data channel and 1 synchronous voice channel
With a transmit power of up to 20 dBm, Bluetooth can reach a maximum range of about 100 meters.


BLUETOOTH SECURITY

Bluetooth offers several security modes, and
device manufacturers determine which
mode to include in a Bluetooth-enabled
gadget. In almost all cases, Bluetooth users
can establish "trusted devices" that can
exchange data without asking permission.
When any other device tries to establish a
connection to the user's gadget, the user has
to decide to allow it. Service-level security
and device-level security work together to
protect Bluetooth devices from unauthorized
data transmission. Security methods include
authorization and identification procedures
that limit the use of Bluetooth services to the
registered user and require that users make a
conscious decision to open a file or accept a
data transfer. As long as these measures are
enabled on the user's phone or other device,
unauthorized access is unlikely. A user can
also simply switch his Bluetooth mode to
"non-discoverable" and avoid connecting
with other Bluetooth devices entirely. If a
user makes use of the Bluetooth network
primarily for synching devices at home, this
might be a good way to avoid any chance of
a security breach while in public.
Still, early cell-phone virus writers have taken advantage of Bluetooth's automated connection process to send out
infected files. However, since most cell
phones use a secure Bluetooth connection
that requires authorization and
authentication before accepting data from an
unknown device, the infected file typically
doesn't get very far. When the virus arrives
in the user's cell phone, the user has to agree
to open it and then agree to install it. This
has, so far, stopped most cell-phone viruses
from doing much damage.





ADVANTAGES AND
APPLICATIONS OF
BLUETOOTH

Bluetooth has a tremendous potential in
moving and synchronizing information in a
localized setting. Potential for Bluetooth
applications is huge, because we transact
business and communicate more with people
who are close by than with those who are far
away - a natural phenomenon of human
interaction. The following list represents
only a small set of potential applications - in
future many more imaginative applications
will come along:
By installing a Bluetooth network in
your office you can do away with the
complex and tedious task of
networking between the computing
devices, yet have the power of
connected devices. No longer would
you be bound to fixed locations
where you can connect to the
network. Each Bluetooth device
could be connected to 200 other
devices making the connection of
every device with every other
possible. Since it supports both point-to-point and point-to-multipoint links, the maximum number of simultaneously linked devices is virtually unlimited.
The Bluetooth technology connects
all your office peripherals wirelessly.
Connect your PC or notebook to
printers, scanners and faxes without
the ugly and troublesome cable attachments. You can increase your
freedom by connecting your mouse
or the keyboard wirelessly to your
computer.
If your digital camera is Bluetooth enabled, you can send still or video images from any location to any location without the hassle of connecting your camera to a mobile phone or a wire-line phone.
Bluetooth allows us to have three
way phones. At home, your phone
functions as a portable phone (fixed
line charge). When you're on the
move, it functions as a mobile phone
(cellular charge). And when your
phone comes within range of another
mobile phone with built-in Bluetooth
wireless technology it functions as a
walkie-talkie (no telephony charge).
In meetings and conferences you can
transfer selected documents instantly
with selected participants, and
exchange electronic business cards
automatically, without any wired
connections.
Connect your wireless headset to your
mobile phone, mobile computer or any
wired connection to keep your hands
free for more important tasks when
you're at the office or in your car.
Have automatic synchronization of
your desktop, mobile computer,
notebook (PC-PDA and PC-HPC)
and your mobile phone. For instance,
as soon as you enter your office the
address list and calendar in your
notebook will automatically be
updated to agree with the one in your
desktop, or vice versa.
Automatic Message Delivery:
Compose e-mails on your portable
PC while you're on an airplane. As
soon as you've landed and switched
on your mobile phone, all messages
are immediately sent.
Upon arriving at your home, the door
automatically unlocks for you, the
entry way lights come on, and the
heat is adjusted to your pre-set
preferences.

DRAWBACKS

The main drawback of Bluetooth is its limited connection distance and relatively low transmission speed. It supports data rates of up to about 780 kb/s for unidirectional data transfer, which is perfectly adequate for file transfer and printing applications.
Problems like "blue jacking," "blue
bugging" and "Car Whisperer" have turned
up as Bluetooth-specific security issues.
Blue jacking involves Bluetooth users
sending a business card (just a text message,
really) to other Bluetooth users within a 10-
meter (32-foot) radius. If the user doesn't
realize what the message is, he might allow
the contact to be added to his address book,
and the contact can send him messages that
might be automatically opened because
they're coming from a known contact. Blue
bugging is more of a problem, because it
allows hackers to remotely access a user's
phone and use its features, including placing
calls and sending text messages, and the
user doesn't realize it's happening. The Car
Whisperer is a piece of software that allows
hackers to send audio to and receive audio
from a Bluetooth-enabled car stereo. Like a
computer security hole, these vulnerabilities
are an inevitable result of technological
innovation, and device manufacturers are
releasing firmware upgrades that address
new problems as they arise.


CONCLUSION:

With its relatively low implementation
costs, Bluetooth technology seems destined
to dominate the electronic landscape, as
humans worldwide will be able to form
personal area networks with devices and
completely simplify the way in which they
interact with electronic tools and each other.
In the years to come, Bluetooth will become
a worldwide means of connectivity among electronic devices, leading to applications unthinkable by today's technological standards. Because
the radio frequency used is globally
available, Bluetooth can offer fast and
secure connectivity all over the world.



PALM VEIN TECHNOLOGY

PRESENTED BY,
A.S.V.MADHURI
EMAIL-madhuri6891@gmail.com
ECE-3/4,
G.NARAYANAMMA INSTITUTE OF TECHNOLOGY AND
SCIENCE,
SHAIKPET,
HYDERABAD-8.

AND
T.NINEETHA
EMAIL-nineetha_13@yahoo.com
ECE-3/4
G.NARAYANAMMA INSTITUTE OF TECHNOLOGY AND
SCIENCE,
SHAIKPET,
HYDERABAD-8.





Abstract:
Palm vein technology is one of the upcoming technologies which is highly secure. It is the world's first contactless personal identification system that uses the vein patterns in human palms to confirm a person's identity. It is highly secure because it uses information contained within the body, and it is also highly accurate because the pattern of veins in the palm is complex and unique to each individual. Moreover, its contactless feature gives it a hygienic advantage over other biometric authentication technologies. PalmSecure works by capturing a person's vein pattern image while radiating it with near-infrared rays. PalmSecure uses a small palm vein scanner and detects the structure of the pattern of veins on the palm of the human hand with the utmost precision. The sensor emits a near-infrared beam towards the palm of the hand; the blood flowing through the veins back to the heart with reduced oxygen absorbs this radiation, causing the veins to appear as a black pattern. This pattern is recorded by the sensor and is stored in encrypted form in a database, on a token or on a smart card.
Because veins are internal to the body and have a wealth of differentiating features, assuming a false identity through forgery is extremely difficult, thereby enabling an extremely high level of security. The PalmSecure technology is designed in such a way that it can only detect the vein pattern of living people. The scanning process is extremely fast and does not involve any contact, meaning that PalmSecure meets the stringent hygienic requirements that are normally necessary for use in public environments.
The opportunities to implement PalmSecure span a wide range of vertical markets, including security, financial/banking, healthcare, commercial enterprises and educational facilities. Applications for the device include physical admission into secured areas; log-in to PCs or server systems; access to POS terminals, ATMs or kiosks; positive ID control; and
other industry-specific applications.
This paper also describes some examples of financial solutions and product applications
for the general market that have been developed based on this technology.




Introduction:
In the ubiquitous network society, where individuals can easily access their
information anytime and anywhere, people are also faced with the risk that others can
easily access the same information anytime and anywhere. Because of this risk, personal
identification technology, which can distinguish between registered legitimate users and
imposters, is now generating interest.
Currently, passwords, Personal Identification Numbers (4-digit PIN
numbers) or identification cards are used for personal identification. However, cards can
be stolen, and passwords and numbers can be guessed or forgotten. To solve these
problems, biometric authentication technology, which identifies people by their unique
biological information, is attracting attention. In biometric authentication, an account holder's body characteristics or behaviors (habits) are registered in a database and then compared with those of anyone who tries to access that account, to see whether the attempt is legitimate.
Fujitsu has researched and developed biometric authentication technology
focusing on four methods: fingerprints, faces, voiceprints, and palm veins. Among these,
because of its high accuracy, contact less palm vein authentication technology is being
incorporated into various financial solution products for use in public places.
The Palm Secure sensor developed by Fujitsu is a biometric authentication
solution offering optimum levels of security. Palm Secure detects the structure of the
pattern of veins on the palm of the human hand with the utmost precision.
Background:
The ability to verify identity has become increasingly important in many
areas of modern life, such as electronic government, medical administration systems,
access control systems for secure areas, passenger ticketing, and home office and home
study environments. Technologies for personal identification include code numbers,
passwords, and smart cards, but these all carry the risk of loss, theft, forgery, or
unauthorized use. It is expected that biometric authentication technology, which
authenticates physiological data, will be deployed to supplement - or as an alternative to -
these other systems.
The Fujitsu Group has developed biometric authentication technologies based on
fingerprints, voice, facial features, and vein patterns in the palm, and has also combined
two or more of these capabilities in multi-biometric authentication systems. Although
biometric authentication is already being used to some extent by companies and
government authorities, for it to gain wider acceptance, it needs to be considered less
intrusive, and concerns about hygiene need to be addressed.
For that reason, there is a market need for voice recognition, facial recognition and other biometric authentication technologies that can read physiological data without requiring physical contact with the sensor equipment, and for the development of such systems that are both practical and more precise.
Technology:
Palm vein authentication works by comparing the pattern of veins in the palm (which appear as blue lines) of a person being authenticated with a pattern stored in a database. Vascular patterns are unique to each individual; according to Fujitsu research, even identical twins have different patterns. And since the vascular patterns exist inside the body, they cannot be captured surreptitiously by means of photography or recording in the way that faces, voices or fingerprints can, thereby making this method of biometric authentication more secure than others.
Principles of vascular pattern authentication:
Hemoglobin in the blood is oxygenated in the lungs and carries oxygen to the tissues of the body through the arteries. After it releases its oxygen to the tissues, the deoxidized hemoglobin returns to the heart through the veins. These two types of hemoglobin have different rates of absorbency. Deoxidized hemoglobin absorbs light at a wavelength of about 760 nm in the near-infrared region. When the palm is illuminated with near-infrared light, unlike the image seen by the human eye, the deoxidized hemoglobin in the palm veins absorbs this light, thereby reducing the reflection rate and causing the veins to appear as a black pattern. In vein authentication based on this principle, the region used for authentication is photographed with near-infrared light, and the vein pattern is extracted by image processing and registered. The vein pattern of the person being authenticated is then verified against the preregistered pattern.
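As an illustration of this principle only (a simplified sketch, not Fujitsu's actual algorithm; the function names and the local-darkness rule are assumptions), a near-infrared palm image could be reduced to a binary vein pattern and compared with a registered template roughly as follows:

    import numpy as np

    def extract_vein_pattern(nir_image, block=15):
        # nir_image: 2-D array of near-infrared reflectance values.
        # Veins absorb the near-infrared light, so they appear darker than the
        # surrounding tissue; mark pixels noticeably darker than their local mean.
        img = nir_image.astype(float)
        pad = block // 2
        padded = np.pad(img, pad, mode='reflect')
        local_mean = np.zeros_like(img)
        for dy in range(block):
            for dx in range(block):
                local_mean += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        local_mean /= block * block
        return (img < 0.95 * local_mean).astype(np.uint8)   # 1 = vein pixel (assumed rule)

    def match_score(pattern, template):
        # Crude similarity: fraction of the template's vein pixels also present
        # in the freshly captured pattern.
        overlap = np.logical_and(pattern, template).sum()
        return overlap / max(int(template.sum()), 1)

    # Verification: accept only if the similarity exceeds a preset threshold, e.g.
    # accepted = match_score(extract_vein_pattern(captured), stored_template) > 0.8

In a deployed system the extracted pattern would, of course, be stored and transmitted only in encrypted form, as described above.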



Fig. 1: Visible ray image

Fig. 2: Infrared ray image

Fig. 3: Extracted vein pattern
Advantages of using the palm:
In addition to the palm, vein authentication can be done using the vascular pattern on the
back of the hand or a finger. However, the palm vein pattern is the most complex and
covers the widest area. Because the palm has no hair, it is easier to photograph its
vascular pattern. The palm also has no significant variations in skin color compared with
fingers or the back of the hand, where the color can darken in certain areas.

Advantages of reflection photography:
There are two methods of photographing veins: reflection and transmission. Fujitsu employs the reflection method.
The reflection method illuminates the palm and photographs the light that is
reflected back from the palm, while the transmission method photographs light that
passes straight through the hand. Both types capture the near-infrared light given off by
the region used for identification after diffusion through the hand.
An important difference between the reflection method and the transmission method is how they respond to changes in the hand's light transmittance. When the body cools due to a lowered ambient temperature, the blood vessels (in particular the capillaries) contract, decreasing the flow of blood through the body. This increases the hand's light transmittance, so light passes through it more easily; if the transmittance is too high, the hand becomes saturated with light. In the transmission method, this results in a lighter, less-contrasted image in which it is difficult to see the vessels. However, a high light transmittance does not significantly affect the level or contrast of the reflected light. Therefore, with the reflection method, the vessels can easily be seen even when the hand or body is cool.
The system configurations of the two methods are also different. The reflection method illuminates the palm and photographs the light reflected back from it, so the illumination and photography components can be positioned in the same place.
Conversely, because the transmission method photographs light that passes through the
hand, the illumination and photography components must be placed in different locations.
This makes it difficult for the system to be embedded into smaller devices such as
notebook PCs or cellular phones. Fujitsu has conducted an in-depth study of the
necessary optical components to reduce the size of the sensor, making it more suitable for
embedded applications.
Completely contactless design minimizes hygiene concerns and psychological resistance:
Fujitsu is a pioneer in designing a completely contactless palm vein authentication
device. With this device, authentication simply involves holding a hand over the vein
sensor.
The completely contactless feature of this device makes it suitable for use where high
levels of hygiene are required, such as in public places or medical facilities. It also
eliminates any hesitation people might have about coming into contact with something
that other people have already touched.
High authentication accuracy:
Using the data of 140,000 palms from 70,000 individuals, Fujitsu has confirmed that the
system has a false acceptance rate of less than 0.00008% and a false rejection rate of
0.01%, provided the hand is held over the device three times during registration, with one
retry for comparison during authentication. In addition, the device's ability to perform
personal authentication was verified using the following: 1) data from people ranging
from 5 to 85 years old, including people in various occupations in accordance with the
demographics released by the Statistics Center of the Statistics Bureau; 2) data about
foreigners living in Japan in accordance with the world demographics released by the
United Nations; 3) data taken in various situations in daily life, including after drinking
alcohol, taking a bath, going outside, and waking up.
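For reference, the false acceptance rate (FAR) and false rejection rate (FRR) quoted above can be computed from genuine and impostor similarity scores at a chosen decision threshold. The sketch below uses synthetic scores purely for illustration, not Fujitsu's data:

    import numpy as np

    def far_frr(genuine_scores, impostor_scores, threshold):
        # False rejection rate: genuine attempts whose score falls below the threshold.
        frr = np.mean(np.asarray(genuine_scores) < threshold)
        # False acceptance rate: impostor attempts whose score reaches the threshold.
        far = np.mean(np.asarray(impostor_scores) >= threshold)
        return far, frr

    # Synthetic score distributions for illustration only:
    genuine = np.random.normal(0.90, 0.05, 100000)    # legitimate users
    impostor = np.random.normal(0.30, 0.10, 100000)   # impostors
    far, frr = far_frr(genuine, impostor, threshold=0.70)
    print("FAR = %.5f%%, FRR = %.3f%%" % (far * 100, frr * 100))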
Applications:
Product development for financial solutions :
Financial damage caused by fraudulent withdrawals of money using identity spoofing with fake bankcards has been rapidly increasing in recent years, and this has emerged as a significant social problem. As a result, there has been a rapid increase in the number of lawsuits filed by victims of identity theft against financial institutions for their failure to control information used for personal identification. The Act for the Protection of Personal Information came into effect in Japan on May 1, 2005, and in response, financial institutions have been focusing on biometric authentication together with IC (smart) cards as a way to reinforce the security of personal identification.
Vein authentication can provide two types of systems for financial solutions,
depending on where the registered vein patterns are stored. In one method, the vein patterns are stored on the server of a client-server system. The advantage of this system is that it provides an integrated capability for managing vein patterns and comparison processing. In the other type, a user's vein pattern is stored on an IC card, which is beneficial because users can control access to their own vein pattern. Suruga Bank uses the server type for its financial solutions, and The Bank of Tokyo-Mitsubishi uses the IC card system.
In July 2004, to ensure customer security, Suruga Bank launched its Bio-Security Deposit, the world's first financial service to use PalmSecure. This service
features high security for customers using vein authentication, does not require a
bankcard or passbook, and prevents withdrawals from branches other than the registered
branch and ATMs, thereby minimizing the risk of fraudulent withdrawals. To open a Bio-
Security Deposit account, customers go to a bank and have their palm veins
photographed at the counter. In order to guarantee secure data management, the palm
vein data is stored only on the vein database server at the branch office where the account
is opened.
In October 2004, The Bank of Tokyo-Mitsubishi launched its Super-IC Card Tokyo-Mitsubishi VISA. This card combines the functions of a bankcard, credit card, electronic money and palm vein authentication. From a technical and user-friendliness point of view, The Bank of Tokyo-Mitsubishi narrowed the biometric authentication methods suitable for financial transactions down to palm veins, finger veins and fingerprints. The bank then mailed a questionnaire to 1,000 customers and surveyed an additional 1,000 customers who used devices in its branches. Finally, the bank decided to employ PalmSecure because the technology was supported by the largest number of people in the questionnaire.
The Super-IC Card contains the customer's palm vein data and vein authentication algorithms, and performs vein authentication by itself. This system is advantageous because the customer's information is not stored at the bank. When a customer applies for a Super-IC card, the bank sends the card to the customer's home. To activate the palm vein authentication function, the customer brings the card and his or her passbook and seal to the bank counter, where the customer's vein information is registered on the card. After registration, the customer can make transactions at that branch's counter and any ATM using palm vein authentication and a matching PIN.
In 2006, Fujitsu reduced the PalmSecure sensor to one-quarter of its former size for its next-generation product. By using a smaller sensor on existing ATMs, there will be room on the operating panel for a sensor for FeliCa mobile phones, a 10-key pad that meets the DES (Data Encryption Standard), as well as an electronic calculator and other devices. The downsized sensor can also be mounted on ATMs in convenience stores.
Product development for the general market:
In addition to product development for financial solutions, Fujitsu has started to develop product applications for the general market. Two products are in great demand in the general market. One is a physical access control unit that uses PalmSecure to protect entrances and exits, and the other is a logical access control unit that uses PalmSecure to protect the input and output of electronic data. This section describes the features of these applications.
Access control unit using PalmSecure:
The PalmSecure access control unit can be used to control entry and exit for rooms and buildings. This unit integrates the operation and control sections. The operation section has a vein sensor over which the palm is held, and the control section performs authentication processing and issues commands to unlock the door. The system can be introduced in a simple configuration by connecting it to the controller of an electronic lock. PalmSecure units are used to control access to places containing systems or machines that manage personal or other confidential information, such as machine rooms in companies and outsourcing centers where important customer data is kept.
Due to increasing concerns about security, some condominiums and homes have started using this system to enhance security and safety in daily life. For both of these applications, the combination of the following features provides the optimum system: a hygienic and contactless unit ideal for use in public places, user-friendly operation that requires the user to simply hold a palm over the sensor, and an authentication mechanism that makes impersonation difficult.


Fig. 4: Palm vein access control unit

Login unit using PalmSecure:
The palm vein authentication login unit controls access to electronically stored information. As with the units for financial solutions, there are two types: a server type and an IC card type. Because the PalmSecure login unit can also be used for authentication with conventional IDs and passwords, existing operating systems and applications can continue to be used. It is also possible to build the unit into an existing application to enhance operability. In the early stage of introduction, the units were limited to businesses handling personal information that came under the Act for the Protection of Personal Information enforced in April 2005. However, use of the units is now expanding to leading-edge businesses that handle confidential information.



Fig. 5: PalmSecure login unit

Other product applications:
Because of the importance of personal identification, we can expect to see the development of new products for various applications, such as:
Management in healthcare
Access control to medication dispensing
Identification of doctors and nurses when accessing protected health records
Patient identification management
Operator authentication
Settlement by credit card
Obtaining various certificates using the Basic Resident Register Card
Owner authentication
Retrieval of checked luggage
Driver authentication
Attendance authentication
Checking attendance in schools
Clocking in and out of the workplace.
Conclusion:
This paper explains palm vein authentication. The Fujitsu PalmSecure is a palm-vein-based authentication system that utilizes the latest in biometric security technology. Answering a worldwide need from governments to the private sector, this contactless device offers an easy-to-use, hygienic solution for verifying identity. This technology is highly secure because it uses information contained within the body, and it is also highly accurate because the pattern of veins in the palm is complex and unique to each individual. Moreover, its contactless feature gives it a hygienic advantage over other biometric authentication technologies. This paper has also described some examples of financial solutions and product applications for the general market that have been developed based on this technology.






A paper presentation on







Presented by:

R Harisha G Harini
Reg.No.05761A0421 Reg.No.05761A0420
II/IV B.TECH (ECE) II/IV B.TECH (ECE)
E-mail:harisharayudu@yahoo.co.in E-mail:harini_gtati@yahoo.com





DEPARTMENT OF ELECTRONIC AND COMMUNICATION ENGG
LAKI REDDY BALI REDDY COLLEGE OF ENGINEERING
MYLAVARAM-521 230, KRISHNA (DIST) A.P.
(Affiliated to JNT University, Hyderabad)

An Adaptive Color Image Segmentation Model for Visual Feature Detection in an Image
Abstract:
Images are spatial data patterns, normally represented as bounded and sampled luminance values over a bounded and sampled two-dimensional spatial domain. One of the major problems in image analysis is the detection of image features such as edge and corner points, since it is well accepted that such features contain much of the information in the image and occur on a set of small measure. Many approaches have been developed for the detection and localization of features, including differential techniques, surface-fitting techniques, morphological approaches and others. Accurate feature detection is essential for many of the problems currently facing researchers in the field of active vision, especially with regard to stereo and motion coordination in robotics.
This paper presents a novel Adaptive Color Image Segmentation (ACIS) system for color image segmentation. The ACIS system uses a neural network with an architecture similar to the multilayer perceptron (MLP) network. The main difference is that the neurons here use a multisigmoid activation function. The multisigmoid function is the key to segmentation. The number of steps, i.e. thresholds, in the multisigmoid function depends on the number of clusters in the image. The threshold values for detecting the clusters and their labels are found automatically from the first-order derivative of the histograms of saturation and intensity in the HSI color space. Here, the main use of the neural network is to detect the number of objects automatically from an image. The advantage of this method is that no a priori knowledge is required to segment the color image. ACIS labels the objects with their mean colors. The algorithm is found to be reliable and works satisfactorily on different kinds of color images. Experimental results show that the performance of ACIS is robust on noisy images as well.










Introduction:
In the field of computer vision, image segmentation plays a crucial role as a preliminary step for high-level image processing. To analyze an image, one needs to isolate the objects in it and find the relations among them. This process of separating objects is referred to as image segmentation. In other words, segmentation is used to extract the meaningful objects from the image. The earlier research work in the segmentation field was mainly concentrated on monochrome images, and hence color image segmentation algorithms are frequently derived from monochrome image segmentation methods. Color is a useful property which adds information to images. The color perceived by humans is a combination of three color stimuli, red (R), green (G) and blue (B), which form a color space. Many color models are used to represent colors, such as RGB (red, green, blue), HSI (hue, saturation, intensity), and CMY (cyan, magenta, yellow). Compared to monochrome images, color images carry information about brightness, hue and saturation for each pixel. The existing color image segmentation techniques can be classified into five approaches based on edge detection, region growing, neural networks, fuzzy methods, and histogram thresholding.
Edge detection:
An edge detector finds the boundary of an object. These methods exploit the fact
that the pixel intensity values change rapidly at the boundary (edge) of two regions.
Examples of edge detectors are the Sobel, Prewitt, and Roberts operators. For color images, edge detection can be performed on the color components separately (such as R, G, and B), and these edges are merged to get a final edge image. Jie and Fei proposed an algorithm for natural color image segmentation. In their technique, edges are calculated in terms of high phase congruency in the gray-level image. A K-means clustering algorithm is used to label the long edge lines. The global color information is used to detect approximately the objects within an image, while the short edges are merged based on their positions.
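A minimal sketch of this per-channel approach, applying the Sobel operator to each of the R, G and B planes and merging the responses (NumPy and SciPy are assumed to be available; the threshold choice is illustrative):

    import numpy as np
    from scipy import ndimage

    def color_edges(rgb, threshold=0.2):
        # rgb: H x W x 3 array with values in [0, 1].
        merged = np.zeros(rgb.shape[:2])
        for c in range(3):
            # Sobel gradients of one color component along rows and columns.
            gx = ndimage.sobel(rgb[:, :, c], axis=1)
            gy = ndimage.sobel(rgb[:, :, c], axis=0)
            # Keep the strongest per-pixel response across the three channels.
            merged = np.maximum(merged, np.hypot(gx, gy))
        # Final edge map: pixels whose merged gradient magnitude is large.
        return merged > threshold * merged.max()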
Region growing:
These techniques find homogeneous regions in an image. Here, we need to assume a set of seed points initially. Homogeneous regions are formed by attaching to each seed point those neighbouring pixels that have correlated properties. This process is repeated until all the pixels within the image are classified. However, the difficulty with region-based approaches is the selection of the initial seed points.

Neural network based techniques:
Neural networks are formed by several elements that are connected by links
with variable weights. Artificial neural networks (ANN) are widely applied for pattern
recognition. Their processing potential and nonlinear characteristics are used for
clustering.
Fuzzy based techniques:
Fuzzy set theory gives a mechanism to represent ambiguity within an image. Each
pixel of an image has a degree of belongingness (membership) to a region or a boundary.
A number of fuzzy approaches for image segmentation have been reported in the literature. One such work proposed an unsupervised algorithm for color image segmentation. The algorithm uses a neural
network to extract features of the image automatically. The multiple color features are
analyzed using a self-organizing feature map (SOFM). Then the useful feature sequence
(feature vector) is determined. The encoded feature vector is used for final segmentation.
One of the advantages of this method is that a suitable feature vector for segmentation is
extracted automatically. Consequently, this technique is an adaptive approach for
segmenting different types of color images.
Histogram threshold:
Histogram thresholding is one of the popular techniques for monochrome image segmentation. This technique assumes that an image consists of different regions corresponding to different gray-level ranges. The histogram of an image can be separated using peaks (modes) corresponding to the different regions. A threshold value corresponding to the valley between two adjacent peaks can be used to separate these objects. One of the weaknesses of this method is that it ignores the spatial relationship information of the pixels. Its main advantage is that it does not require a priori knowledge about the number of objects in the image.
Adaptive Color Image Segmentation System:
The ACIS system consists of two main processing units, as shown in Figure 1: the adaptive threshold block (A) and the neural network segmentation block (B).

Figure 1. Block diagram of ACIS.
The adaptive threshold block is responsible for finding the number of segments in an image. The neural network segmentation block does the actual segmentation based on the number of objects found by the adaptive threshold block. A general flowchart of the working of the proposed method is depicted in Figure 2. As ACIS is a histogram multi-threshold technique, we need to find different thresholds to segment the objects in the image. To find the thresholds, the image histogram is smoothed and its derivative is utilized. Smoothing of the histogram is required to avoid small variations in it and to extract the general shape of the histogram curve. After detecting the thresholds, labels for the objects are decided. The neurons in the neural network use a multilevel sigmoid function as the activation function. The multisigmoid activation for each neuron is designed using the thresholds found from the derivative of the histogram. This activation function takes care of thresholding and labeling the pixels during the recursive training process.
Adaptive threshold block:
The adaptive threshold block is used to find the number of clusters and to compute the multi-level sigmoid function for the neurons. It is crucial to determine the number of clusters in an image so as to segment the objects appropriately. The main endeavor here is to find the number of clusters without a priori knowledge of the image. To achieve this, the histograms of the given color image for the saturation and intensity planes are first found. From the derivative curve of each histogram we find the clusters in the saturation and intensity planes independently. A group of saturation/intensity values over which the histogram derivative transits from negative to positive (zero crossing) and subsequently from positive to negative (zero crossing) is considered to be one cluster. Similarly, other clusters are found with the help of zero crossings. The threshold value is the first zero crossing, and the average of the two subsequent zero crossings is taken as the label (target). Using the average value as a target helps to segment the object with a color appropriate to its original color. Hence in the ACIS system objects are colored with their mean color, i.e. the system tries to maintain the color property of the object even after segmentation. This can be helpful in image post-processing. Once the threshold and target values are calculated, the neural network activation function is constructed.
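The threshold and target computation described above can be sketched as follows; this is a simplified illustration of the zero-crossing rule, not the authors' exact implementation:

    import numpy as np

    def thresholds_and_targets(smoothed_hist):
        # First-order derivative of the smoothed histogram.
        deriv = np.diff(smoothed_hist)
        sign = np.sign(deriv)
        # Indices where the derivative changes sign (zero crossings).
        change = np.where(np.diff(sign) != 0)[0]
        thresholds, targets = [], []
        for i in range(len(change) - 1):
            # A cluster starts at a negative-to-positive crossing (a valley)...
            if sign[change[i]] < 0 and sign[change[i] + 1] > 0:
                start, nxt = change[i], change[i + 1]
                thresholds.append(start)             # first zero crossing = threshold
                targets.append((start + nxt) / 2.0)  # mean of the two crossings = label (target)
        return thresholds, targets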




System Flowchart:

Figure 2. System Flowchart

where Θ(·) denotes the step function, θ_k the thresholds, y_k the target level of each sigmoid (these levels constitute the system labels), θ_0 the steepness parameter, and d the size of the neighborhood.
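Although Equation 1 itself is not reproduced here, a multilevel sigmoid of this kind is commonly built as a sum of ordinary sigmoids placed at the thresholds. The sketch below assumes that form, and assumes one more target level than thresholds is supplied; it is an illustration, not necessarily identical to the paper's Equation 1:

    import numpy as np

    def multisigmoid(x, thresholds, levels, steepness=0.05):
        # levels must contain one more entry than thresholds: the output starts at
        # levels[0] and climbs by (levels[k+1] - levels[k]) around thresholds[k],
        # so between thresholds it settles on one of the label levels.
        x = np.asarray(x, dtype=float)
        y = np.full_like(x, float(levels[0]))
        for k, t in enumerate(thresholds):
            y += (levels[k + 1] - levels[k]) / (1.0 + np.exp(-(x - t) / steepness))
        return y

    # Example: three clusters on a normalized intensity axis.
    # y = multisigmoid(np.linspace(0, 1, 256), thresholds=[0.3, 0.7], levels=[0.15, 0.5, 0.85])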
Neural network segmentation block:
The ACIS system consists of two independent neural networks, one each for the saturation and intensity planes. The MLP consists of three layers, namely an input layer, a hidden layer, and an output layer, as depicted in Figure 3. The input to a neuron in the input layer is normalized to [0, 1], and the output value of each neuron also lies in [0, 1]. Each layer has a fixed number of neurons equal to the size (M x N) of the image. Initially, all connection weights are set to 1. Each neuron in one layer is connected to the corresponding neuron in the previous layer within its dth-order neighborhood, as shown in Figure 4. There are no connections between neurons in the same layer. The neural network tuning block is used to update the connection weights as in Equation 2, taking into consideration the output error in the feedback network. At every training epoch, the error is calculated by taking the difference between the actual output and the desired output of each neuron.

Where
Ii Total input to the ith neuron
Wji Weight of link from neuron i in one layer to neuron j in the next layer
O
i
Output of the ith neuron i in one layer to neuron j in the next layer
E Error in the networks output
n Learning rate.
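In the same spirit, one training epoch for such a locally connected network might be sketched as follows; this is an illustrative delta-rule style update, and the exact form of the paper's Equation 2 may differ:

    import numpy as np

    def train_epoch(weights, neighbour_outputs, desired, activation, lr=0.1):
        # neighbour_outputs: N x K matrix holding, for each of the N pixel neurons,
        # the outputs of the K neurons in its d-th order neighbourhood.
        # weights: N x K link weights (initially all set to 1).
        total_input = (weights * neighbour_outputs).sum(axis=1)   # I_i = sum over links of W_ji * O_i
        output = activation(total_input)                          # O_i
        error = desired - output                                  # per-neuron output error E
        # Delta-rule style update of every link (illustrative form only).
        weights = weights + lr * error[:, None] * neighbour_outputs
        return weights, output

    # The activation argument can be the multisigmoid function sketched earlier.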



Figure 3. Neural network architecture
Neighborhoods of a pixel:

Figure 4. (a) First-order neighborhood; (b) second-order neighborhood; (c) sequence of neighborhoods.

As the training progresses, a pixel gets its color depending upon the colors of its surrounding pixels. From the output image shown in the figure, it can be noted that the network tries to label a cluster with an even color spread. We can see that all pixels which represent the ring are assigned to one color label similar to the original color after segmentation. The background is labeled with a color label appropriate to its original color. The segmentation using multiple thresholds is explained with an example in the next section.
Example:
The technique for finding the thresholds and targets is demonstrated in Figure 5. To illustrate the segmentation process, consider the image shown in Figure 5(a). As a first step, thresholds in the saturation (S) and intensity (V) planes are found using the histograms of the image in the relevant planes, as shown in Figure 5(c). The histograms are then smoothed by passing them through a low-pass FIR (finite impulse response) filter with a cut-off frequency of 0.001. Experimentally, a cut-off frequency of 0.001 is found to be suitable for various types of images. Figure 5(d) shows the smoothed histograms, and Figure 5(e) shows the first-order derivative with the thresholds. The thresholds are found by tracking negative-to-positive transitions (zero crossings) and the subsequent positive-to-negative transitions (zero crossings) in the derivative curve. The target values are found by taking the average of two thresholds. Using the threshold and target values, the neural network activation function is constructed as in Equation 1. Figure 5(f) shows the multisigmoid function. The figures shown are for the saturation plane; similar figures are obtained for the intensity plane.
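The smoothing step can be reproduced with a standard low-pass FIR design, for example as below (SciPy assumed; only the 0.001 normalized cut-off comes from the text, while the tap count and the zero-phase filtering choice are illustrative):

    import numpy as np
    from scipy.signal import firwin, filtfilt

    def smooth_histogram(hist, cutoff=0.001, numtaps=51):
        # Low-pass FIR filter; the very low normalized cut-off keeps only the
        # general shape of the histogram curve and removes small local variations.
        taps = firwin(numtaps, cutoff)
        # filtfilt runs the filter forward and backward so the peaks do not shift.
        return filtfilt(taps, [1.0], hist)

    # Example (saturation_plane is a placeholder for the S component of the image):
    # hist, _ = np.histogram(saturation_plane.ravel(), bins=256, range=(0.0, 1.0))
    # smoothed = smooth_histogram(hist.astype(float))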
Figure 5. (e) First-order derivative with thresholds; (f) multisigmoid function.
Results and discussion:
Here we discuss the performance of the ACIS system on different types of color images available on the World Wide Web. Experimental results on images such as Bacteria, Hand, Panda and Peppers are illustrated here. To verify the effectiveness of the proposed method, three types of experiments were conducted to test the capability of the ACIS system: (i) to isolate objects and label them with colors according to their mean color, (ii) to study the effect of the neighborhood size discussed above on segmentation, and (iii) to evaluate performance on noisy images.
Segmentation results:


(a) Original image (b) Segmented image. (a) Original image (b) Segmented image.

Effect of increased neighborhood size

(a) Original image (b) Segmented image with 5x5 neighborhood.

Conclusion:
A novel system for adaptive color image segmentation has been described. The segments in images are found automatically based on an adaptive multilevel threshold approach. One of the advantages of this system is that it does not require a priori information about the number of objects in the image. The use of the first-order derivative of the histogram is found to be a powerful method for finding clusters in the image. The neural network with the multisigmoid function tries to label the objects with their original colors even after segmentation. The ACIS system has been tested on several images and its performance was found satisfactory. The system can be used as a primary tool to segment unknown color images. Experimental results show that the system performance is robust on noisy images.
References:
[1] A.K. Jain, Fundamentals of Digital Image Processing, Upper Saddle River, NJ: Prentice Hall, 1989.
[2] Simon Haykin, Neural Networks: A Comprehensive Foundation, Pearson Education, 1999.
[3] J.S. Weszka, A survey of threshold selection techniques, Computer Graphics and Image Processing, 7: 259-265, 1978.
[4] Jacek M. Zurada, Introduction to Artificial Neural Systems, Jaico Publishing House, Mumbai, 2002.
[5] K.S. Fu, J.K. Mui, A survey on image segmentation, Pattern Recognition, 13: 3-16, 1981.
[6] K.S. Deshmukh, G.N. Shinde, A.V. Nandedkar, Y.V. Joshi, Multilevel approach for color image segmentation, Fourth Indian Conf. on Computer Vision, Graphics and Image Processing, 338-344, 2004.
[7] L.O. Hall, A. Bensaid, L. Clarke, R. Velthuizen, M. Silbiger, J. Bezdek, A comparison of neural network and fuzzy clustering techniques in segmenting magnetic resonance images of the brain, IEEE Trans. Neural Networks, 3: 672-682, 1992.
[8] L. Spirkovska, A summary of image segmentation techniques, Computer Graphics and Image Processing, 7: 259-265, 1978.


PhoNET: A Voice-Based Web Technology



Abstract
Voice-based web access is a rapidly developing technology, and PhoNET is a solution for many of the problems faced by netizens. The basic idea is to use an ordinary phone to browse the web, and the primary motivations are: to provide a widely available means for creating new interactive voice applications; to address needs for mobility; and to address issues of accessibility. The basis of the idea is the age-old IVR systems used to serve information to dialers through a pre-programmed process. PhoNET is a very long journey from the IVRs; it involves some of the most complex technologies of the century, like speech recognition (SR), text-to-speech (TTS) conversion and artificial intelligence (AI). This enables a user to be connected to the Internet as long as he or she has access to a phone. PhoNET uses traditional HTML content, so a web site need not be rewritten or redesigned. We present, in the simplest possible way, a detailed analysis of how technologies like SR, TTS and AI are integrated to develop an intelligent platform (phoNET) to achieve voice-based web access, which involves document processing and document rendering. In document processing we describe two approaches, telephone browsing and transcoding, focusing mostly on the former since that work is more mature. In document rendering we present the major problem, i.e. the relevance of cognitive thought to text rendering, along with its most suitable solution. In the end we examine the challenges and further developments involved in the practical application of the proposed technology, phoNET.


1. Introduction

Today's telecom business has seen recent growth, especially in bandwidth infrastructure for long distance (LD) and data. The industry is currently experiencing strong growth in the wireless segment as mobile devices prove to be very popular with both consumers and businesses. An evolving market segment is "Internet anywhere", and many companies are trying approaches to present viable products for this market. One approach is Internet access over wireless devices such as cell phones with a screen. However, this method has inherent limitations such as small screen size, lack of a keyboard, the need for a special device (a web-enabled phone), the need to rewrite and maintain a special website, and severe bandwidth constraints using wireless data transfer protocols.
Another approach that is becoming popular is voice-based limited Internet access, which overcomes all of the limitations of wireless data devices but one: they still limit access to the few sites that are re-engineered for voice. They typically deliver content such as news, weather, horoscopes, stock quotes, etc. over the phone. These companies are called voice portals. Voice portals were the first web applications that tried to integrate websites with voice, which gave birth to enterprise-based PBX systems.

Other solutions, such as Personal Digital Assistants,
phones with display screens and other Internet appliances,
are available, but have limitations. Users must have
special hardware with intelligence built in, and often must
view the Web through small, difficult-to-read screens.
Such devices are often expensive, as well.
Our solution, which presents a third option, gives users all of the benefits of the voice portals, yet has complete access to the entire Internet without limitation. With our Voice Internet technology, phoNET, anyone can surf, search, send and receive email, and conduct e-commerce transactions using their voice from anywhere, using any phone, with more freedom of movement than a standard Internet browser, which requires a PC and an Internet connection. PhoNET technology is faster and cheaper than existing alternatives. Today, only the largest of companies are making their Web sites telephone-accessible, because existing technology requires a manual, costly and time-consuming rewrite of each page. With the Voice Internet technology phoNET, existing Web pages are used, allowing businesses to leverage their Web investment. The software dynamically converts existing pages into audio format, significantly lowering the up-front investment a business must make to allow users to hear and interact with its Web site by phone.
2. Motivation

The primary method of access today continues to be the computer, which has certain advantages as well as some limitations. Computers offer a visual Internet experience that is usually rich in content. Some basic computer skills and knowledge are needed to access the Internet. But computer-based access is proving insufficient for the professional on the move. When in the car or away from the office or computer, accessing the Web is difficult, if not impossible. And an increasing number of people prefer an interface that allows them to hear and speak rather than see and click or type.
The computer-based Internet experience also does not meet the needs of another segment of the population: the visually impaired. Neither visual displays of information nor keyboard-based interactions naturally meet their needs, and this segment is often unable to benefit from all that the Information Age has to offer.
Some existing Internet users have also identified problems
with the visual Internet experience. Pages are increasingly
full of graphics, advertisement banners, etc., which move,
flash, and blink as they vie for attention. Some find this
information overload annoying, and lament the delays it
creates by severely taxing the available bandwidth.
The "Digital Divide"
While computers and their use are on the rise, they are not ubiquitous yet. A large segment of the population still does not have access in the United States and other parts of the world. Thus, the Internet is limited to only a small fraction of the world population; the majority is left out of the Internet. This gap between those who can effectively use new information from the Internet and those who cannot is known as "the digital divide". Bridging this digital divide is the key to ensuring that most people in the world have the capability to access the Internet. Making computers ubiquitous is not a very attractive or feasible solution, at least in the near future, because of various barriers. One key barrier is cost, although the price of a computer has come down significantly in recent years. Insufficient visual Internet infrastructure is another barrier in many countries, and it will take a while to build such infrastructure. Other consumers have a basic distaste for complex technology, which prevents them from accessing Web-based information via a computer. A more natural, less cumbersome way to interface with the net would provide them an opportunity to experience the Internet as well, thus bridging the digital divide.

The "Language Divide"
Today more than eighty percent of website content is written in English. People in China, Japan and other countries in Asia, as well as countries in Europe and Latin America, speak a language other than English as their native language. These people are left out of a significant portion of the World Wide Web. For example, a Japanese or Chinese speaker may not be able to understand the content of CNN or the New York Times. This gap of not having access to a major part of the Internet because of the language barrier is called "the language divide".
Bridging this language divide is the key to ensuring that most people in the world have the capability to access the major part of the Internet. The demand for machine translation is growing phenomenally as more people embrace the Internet each day. A service that translates the accessed information into the desired language would clearly add value to these users.
As the need for alternative access to the Internet becomes
more evident, several technology companies are pursuing
solutions. Their products include smart cell phones with
visual displays, intelligence built into the handset, and voice-
activated Web sites. These products address different
aspects of the problems outlined above.
While these alternative technologies are in the pipeline, few
are ready for market. But the very existence of a race to
market by many companies is evidence of a large potential
market.

3. The Challenge

To integrate existing technologies, or develop new
technologies, to make simple, affordable, alternative
Internet access possible.
As the need for an alternative access method to the Internet
has become evident, progress continues to be made by
technologists to provide such solutions. One key area of
focus has been voice-based technology, which would allow
a very natural interface for most people, and address the
limitations described earlier. A voice interface provides an
alternative to the visually based interface. A device such as
the telephone provides a readily accessible alternative to the
computer.
Several technologies existing today are keys to the solution,
but the problem lies in successfully integrating these
technologies into useful applications of greater value than
their individual components. These technologies include:
Voice Extensible Markup Language (VXML)
which is an extension of HTML, the normal
language in which Web pages are created. This
technology adds voice capability to a Web page.
The page can then be displayed, as usual, over a
computer, but it can also be presented in audio
format with voice navigation.
Speech Application Language Tags (SALT), a specification for supporting multimodal communication from PCs, cell phones, PDAs and other handheld devices. For example, input can be voice (such as asking for directions) and output can be data (a map pops up). SALT is a lightweight set of extensions to existing markup languages, allowing developers to embed speech enhancements in existing HTML, xHTML and XML pages. As with VoiceXML, applications will be portable, thanks to the separation from the underlying hardware and platform.
Speech recognition (SR), which allows computers,
through the use of software, to recognize spoken
language, eliminating the need for the computer
keyboard as an interface. The vocabulary
recognized in products using this technology tends
to be limited.
Text-to-speech (TTS), which allows text to be
converted automatically to synthetic speech. It
allows communications between computers and
humans through a natural interface, such as
speech.
Telephone integration is the key to interfacing with computers from a remote location. A protocol is needed to communicate with the computer from a telephone using voice. This also includes multimedia integration (e.g. with .wav files).
Intelligent software agents are needed to automate
communication between a telephone and a
computer, a computer and a Web site, to interpret
the contents of a web page, to extract key
information that makes sense in audio, to
efficiently navigate through web pages, and to
manage access to the Internet.
Language processing allows translation to other
languages, understanding and interpreting of
structured sentences. Natural language processing
allows us to understand and interpret human
languages.
The first technology listed, VoiceXML, is a very elegant solution that leverages technology specifically developed for
audio Internet access. However, it requires that Web sites be
customized, or VXML-enabled. This means rewriting the
web pages in VXML. According to analysts, today there are
more than a billion web pages. Assuming that it takes one
hour and costs about $100 to rewrite one page, the cost to
voice-enable all sites would be about $100B. Clearly, it will
take several years before the majority of popular pages are
VXML-enabled. Today, only a very small portion of the
total Web pages is voice-enabled using VXML.
The second technology listed, SALT, is another elegant technology that allows developers to embed speech
enhancements in existing HTML, DHTML and XML pages.
However, like VoiceXML, it requires that websites be
rewritten or enhanced with SALT and hence it will also take
many years before majority of popular pages are SALT-
enabled.
The paper presents a solution that successfully integrates the
other technologies listed into a useful, audio-based approach
for accessing the Internet today. It is independent of the
timeline, interest and willingness of content providers to
update their pages to be VXML or SALT-enabled.
Another approach is to provide Internet access over wireless devices such as a PalmPilot or a cell phone with a screen. However, this method has inherent limitations such as the small size of the screen and the need for a special phone. There is also a need to rewrite the website in WML. Today's wireless Internet industry is facing many challenges due to the limitations of bandwidth and small screens. The cost of cell-phone-based Internet access is very high, and users do not want to pay a high service fee. Also, our eyes and fingers are not changing, but the devices are getting smaller and smaller. Thus, existing visually based access is going to be even more difficult in the future.











4. The phoNET Solution
An audio Internet Technology that allows users to listen
to email, buy on-line or surf and hear any Web site,
using a simple and natural interface - an ordinary
telephone. No computer is needed.

Subscribers dial a toll-free number and start accessing the Internet using voice commands. Speech recognition technology in the company's system allows users to give simple commands, such as "go to Yahoo" or "read my email", to get to the Net-based information they want, when they want it, whether they are out on an appointment, stuck in traffic, sitting in an airport, or cooking dinner. They will be able to quickly locate information such as late-breaking news, traffic reports, directions, or anything else they are interested in on the World Wide Web. Our product phoNET has the capability to automatically download web content and filter out graphics, banners and images. It then renders the extracted text into concise, meaningful text well suited to audio before using TTS to convert it into speech. PhoNET also converts the rendered text into other languages in real time. It can also be easily integrated with any back-end application such as CRM/SCM, ERP, etc. Thus, phoNET completely eliminates the need to rewrite any content in VXML, SALT or WML, so we strongly believe that our automation-based approach will be very successful. Using text-to-speech technology, an "intelligent agent" reads the requested information out loud via a computerized voice and processes the user's voice commands.


5. Technology Overview
The idea of listening to the Internet may at first sound a bit like watching the radio. How does a visual medium rich in icons, text, and images translate itself into an audible format that is meaningful and pleasing to the ear? The answer lies in an innovative integration of three distinct technologies that render visual content into short, precise, easily navigable, and meaningful text that can be converted to audio.

The technologies and steps employed to accomplish this feat are:
Document Processing
1. Speech recognition
2. Text-to-speech translation
Document Rendering
3. Artificial intelligence

The phoNET platform acts as an Intelligent Agent (IA) located between the user and the Internet (Figure 1). The IA automates the process of rendering information from the Internet to the user in a meaningful, precise, easily navigable and pleasant-to-listen-to audio format. Rendering is achieved by using Page Highlights (a method to find and speak the key contents of a page), finding the right and only the relevant contents on a linked page, assembling the right contents from a linked page, and providing easy navigation. These key steps are done using the information available in the visual web page itself and appropriate algorithms that use information such as text contents, color, font size, links, paragraphs, and the amount of text. Artificial intelligence techniques are used in this automated rendering process. This is similar to how the human brain renders from a visual page, selecting the information of interest and then reading it.
The IA includes a language translation engine that dynamically translates web contents from one language into another in real time. Thus, a Chinese-speaking person can ask to surf an English website in Chinese: the Intelligent Agent would access the English website, extract the content of the website, translate it on the fly into Chinese and read it back to the user in Chinese.

The platform incorporates the highest quality speech
recognition and text to speech engines from third party
suppliers.
















The phoNET architecture is shown in Figure 2. The process starts with a telephone call placed by a user. The user is prompted for a logon pass phrase. The user's pass phrase establishes the first connection to the Web site associated with that phrase and loads the first Web page. The HTML is parsed, as described later, separating text from other media types, isolating URLs from HTML anchors and isolating the associated anchor titles (including ALT fields) for grammar generation. Grammar generation computes combinations of the words in titles to produce a wide range of alternative ways to say subsets of the title phrase. In this process simple function words (i.e., "and", "or", "the", etc.) are not allowed to occur in isolation, where they would be meaningless. Browser control commands are mixed in to control typical browser operations like "go back" and "go home" (similar to the typical browser button commands). Further automatic expansion of the language models for link titles, such as thesaurus substitution of words, is soon to be implemented. The current processing allows the user to say any keyword phrase from the title, with deletions allowed, to activate the associated link. Grammar processing continues with compilation and optimization into a finite-state network in which redundancies have been eliminated. The word vocabulary associated with the grammar is further processed by a text-to-speech (TTS) pronunciation module that generates phonetic transcriptions for each word of the grammar. Since the TTS engine uses pronunciation rules, it is not limited to dictionary words. The grammar and vocabulary are then loaded into the speech recognizer. This process typically takes about a second. At the same time, the Web document is described to the user, as discussed later. The user may then speak a navigation or browser command
phrase to control browsing. Each user navigation command takes the user to a new Web page.
ambiguous the dialog manager collects the possible
interpretations into a description list and asks the user to
choose one. Recognizing that a collection of Web pages is
inherently a finite-state network, it is easy to see that this
mechanismprovides the basis for a finite-state dialog and
Web application control system.
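A rough sketch of the anchor extraction and grammar generation steps described above is given below (illustrative only; the class, the function-word list and the subset rule are simplifications, not the actual phoNET modules):

    from html.parser import HTMLParser
    from itertools import combinations

    FUNCTION_WORDS = {"and", "or", "the", "a", "an", "of", "to"}   # assumed list

    class AnchorExtractor(HTMLParser):
        # Collects (title words, url) pairs from <a href="..."> anchors.
        def __init__(self):
            super().__init__()
            self.links, self._href, self._text = [], None, []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self._href = dict(attrs).get("href")
                self._text = []
        def handle_data(self, data):
            if self._href is not None:
                self._text.append(data)
        def handle_endtag(self, tag):
            if tag == "a" and self._href is not None:
                self.links.append((" ".join(self._text).split(), self._href))
                self._href = None

    def title_grammar(words, max_len=3):
        # Alternative ways to say subsets of a title phrase; phrases made only
        # of function words are never generated, since they would be meaningless.
        phrases = set()
        for n in range(1, min(max_len, len(words)) + 1):
            for combo in combinations(words, n):
                if set(w.lower() for w in combo) - FUNCTION_WORDS:
                    phrases.add(" ".join(combo))
        return phrases

    # parser = AnchorExtractor(); parser.feed(html_page)
    # grammar = {phrase: url for words, url in parser.links for phrase in title_grammar(words)}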


















6. Document Processing
Document analysis is performed in the HTML parser, grammar generator, and Hyper Voice processor modules. The typical HTML Web page is first parsed into a list of elements based mostly on the HTML tag structure. Some elements are aggregations (tables, for instance), but the element list is not a full parse tree, which
we found was not needed and in some cases actually
complicates processing. Images, tables, forms and most text
structure elements like paragraphs are recognized and
processed according to their recognized type. Much of the
effort in building a robust HTML processor is dealing with
malformed HTML expressions such as unclosed tag scope,
overlapping tag scopes, etc. Unfortunately space does not
allow for fully addressing this issue here. Commercial
browsers currently handle these issues in differing ways.
Briefly, the handling of HTML errors by Phone Browser
mostly follows the style of Netscape Communicator. Images
often have ALT attribute tags that are used to derive the
voice navigation commands for these items. The location of
each image is announced along with any associated caption.
This feature can be disabled on a site-by-site basis when the
user does not want to hear about images. Tables are first
classified according to purpose, either layout or content.
Most tables are actually used for page layout which can be
recognized by the variety and types of data contained in the
table cells. Data tables are processed by a parser according
to one of a set of table model formats that Phone Browser
recognizes. This provides primarily a simple way of reading
the table contents row by row, which is often not very
satisfying. Alternatively a transcoder can be used to
reconstruct the table in sentential format. An example of this
is a Phone Browser stock quote service where the transcoder
extracts data from the Website table and builds a new Web
page containing sentences describing the company name,
ticker symbol, last trade price, change, percent change and
volume rather than simply reading the numbers. The user
can also ask for related news reports. Forms with pop-up
menus and radio buttons are handled by creating voice
command grammars from the choices described in the
HTML. Each menu or button choice is spoken to the user
who can repeat that phrase or a phrase subset to activate that
choice. Open dialog forms (e.g. search engines) present a
larger challenge. Since there is nothing to define a grammar,
the implication is a full language model. While large
vocabulary dictation speech systems are available, most
require speaker training to achieve sufficiently high
accuracy for most applications. Phone Browser is intended
to be immediately usable without training so dictation is not
yet supported. This also implies that creating arbitrary text
for messaging is also not yet supported. One additional type
of form input is an extension to HTML. A GSL (Grammar
Specification Language) or JSGF (Java Speech Grammar
Format) specification can be inserted into an HTML anchor
using an attribute tag (currently LSPSGSL). Using this
method an application can specify an elaborate input
grammar allowing many possible sentences to address the
associated hyperlink and construct a GET type form
response where the QUERY_STRING element is
constructed by inserting the speech recognition text results.
Grammar specifications written this way may represent
many thousands of possible sentence inputs giving the end
user great speaking flexibility.
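As an illustration of the transcoding idea for data tables, the sketch below turns one row of a stock-quote table into a sentence that reads naturally over the phone (the column names and wording are assumed, not taken from the actual service):

    def transcode_quote_row(row):
        # row: dict parsed from one table row of a stock-quote page.
        # A sentence is far more listenable than reading the cells one by one.
        direction = "up" if float(row["change"]) >= 0 else "down"
        return ("%s, ticker symbol %s, last traded at %s dollars, %s %s points "
                "or %s percent, on a volume of %s shares." % (
                    row["company"], row["symbol"], row["last"], direction,
                    abs(float(row["change"])), row["percent"], row["volume"]))

    print(transcode_quote_row({
        "company": "Example Corp", "symbol": "EXMP", "last": "24.10",
        "change": "-0.35", "percent": "1.43", "volume": "1200000"}))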


7. Document Rendering
Rendering: Definition
In information technology, the term rendering refers to how information is presented according to the medium: for example, graphically displayed on a screen, audibly read out using a recording device, or printed on a piece of paper.
In the context of voice/audio Internet, Web content
rendering entails the translation of information originally
intended for visual presentation into a format more suitable
to audio. Conceptually this is quite a straightforward process
but tactically, it poses some daunting challenges in
executing this translation. What are those challenges and
why are they so difficult to overcome? These questions are
explored in the next section of this paper.
The Rendering Problem
Computers possess certain superhuman attributes which far outstrip those of mortal man; most notable are their computational capabilities. The common business spreadsheet is a testament to this fact. Other seemingly more mundane tasks, however, present quite a conundrum for even the most sophisticated of processors. Designing a high-speed special-purpose computer capable of defeating a grandmaster at chess took the computing industry over 50 years to perfect. Employing strategic thinking is not a computer's forte. That is because in all the logic embodied in their digitized ones and zeroes, there is no inherent cognitive thought. This one powerful achievement of the brain, along with our ability to feel and express emotion, separates the human mind from its computerized equivalent, the central processing unit (CPU).
The relevance of cognitive thought to text rendering may not be immediately obvious, but it is one of the major challenges faced when attempting to take information designed for one medium and render it to another. This is because there are no hard and fast objective rules to follow. Computers are very good at following instructions when they can be reduced to very objective decision points. They are not so good when value judgments are involved. A human being can readily distinguish a cat from a dog, or a relevant news link on a Web page from a link for an advertisement. For a computer this simple exercise is significantly more challenging than applying the Taylor expansion formula to a set of polynomials, something a computer can do quite handily.
Solving the problem
To solve the rendering problem, some intelligent techniques must be applied. The relevant data must be selected, navigated to its conclusion, and reassembled for presentation in a different medium. All of this must be done for all web pages, dynamically, in real time and in an automated fashion. We have used an Intelligent Agent (IA) that uses various intelligence techniques, including artificial intelligence.
Using Visual Clues
Understanding the process that our brains go through in making qualitative choices is key to developing an artificially intelligent solution. In the example of Web page navigation, we know that our brains do not attempt to read and interpret an entire page of data; rather, they take their cues from the visual clues implemented by the Web designer. These clues include such things as the placement of text, the use of color, the size of fonts, and the density of content. From these clues a list of potential areas of interest can be developed and presented as a list of candidates.
Upon selecting an item of interest it is common to have to navigate to another Web page to read all the data of interest (just as when reading a newspaper). To do so we click on a Web link. When following a page link, the problem of continuity of thought is encountered, because the newly linked page almost assuredly contains data in addition to the thread of information we are attempting to follow. In order to maintain continuity with the item from the previous page, a contextual correlation must be made. Once again, this cognitive process poses a formidable challenge for the computer and requires the application of Intelligent Agent (or artificial intelligence) principles to solve.
Simplifying for speech
The first step involves dynamically removing all the
programming constructs and coding tags that comprise the
instruction to a Web browser on how to visually render the
data. HTML, CHTML, XML, and other languages are
typically used for this purpose. Because the data is now
being translated or rendered to a different medium, these
tags no longer serve any purpose.
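A minimal sketch of this first step is shown below: a page is fed through Python's standard html.parser and only the displayable text is kept, with script and style blocks skipped. This is a simplified stand-in for the dynamic tag removal described above; a full system would also handle CHTML/XML dialects, entities and malformed markup.

```python
# Strip markup so only the displayable text survives (sketch only).
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    SKIP = {"script", "style"}            # tags whose content is never spoken

    def __init__(self):
        super().__init__()
        self.parts, self._skip_depth = [], 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())

def strip_markup(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.parts)

print(strip_markup("<html><body><h1>News</h1><p>Top story...</p></body></html>"))
```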
It is doubtful that every single data item on a page will be read. Just as with a newspaper, we read only items of interest and generally skip advertisements completely. Thus, we need to automatically render the important information on a page, and then, when a topic is selected, present only the relevant information from the linked page corresponding to the selected topic. Rendering is achieved by using Page Highlights (a method to find and speak the key contents on a page), finding only the relevant contents on a linked page, assembling the right contents from that linked page, and providing easy navigation.
Finding and Assembling Relevant Information
To find relevant information, the Intelligent Agent (IA) uses various deterministic and non-deterministic algorithms based on contextual and non-contextual matches, semantic analysis, and learning. This is very similar to how we use our eyes and brain to find the relevant contents. To ensure real-time performance, the algorithms are simplified as needed while still producing very satisfactory results. Once the relevant contents are determined, they are assembled in an order that makes sense when listened to in audio or viewed on a small screen. A simple contextual-match sketch follows.
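The sketch below illustrates only the simplest, deterministic part of such a match: candidate text blocks on the linked page are ranked by word overlap (cosine similarity of term counts) with the headline the user selected. It is an illustration under those assumptions, not the paper's actual combination of contextual, semantic and learning techniques.

```python
# Rank blocks of a linked page by word overlap with the selected headline.
import math
from collections import Counter

def tokens(text: str) -> Counter:
    return Counter(w.lower() for w in text.split() if len(w) > 2)

def cosine(a: Counter, b: Counter) -> float:
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_blocks(headline: str, blocks: list[str], top_n: int = 3) -> list[str]:
    h = tokens(headline)
    ranked = sorted(blocks, key=lambda blk: cosine(h, tokens(blk)), reverse=True)
    return ranked[:top_n]
```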
A content-rich page with a small number of links makes rendering and navigation easy, since there are only a few choices and one can quickly select a particular topic or section. If the site is rich in content, links and images/graphics, the problem is more difficult, but a good solution still exists through careful use of a built-in feature called Page Highlights. The most difficult case is when a page is very rich in images/graphics and links. In such cases the main information is located several levels down from the home page, so navigation becomes more difficult as one has to go through multiple levels. Using multi-level Page Highlights and customized Highlights, the content can still be rendered well, though it is not as easy to navigate as in the other two cases. Most Internet content falls under the first and second categories.
Rendering to a new medium
The two key media to render into are audio, using any phone, and visual, using a cell phone screen or PDA. There is good synergy between these two modes from a rendering standpoint. Both need a small amount of meaningful information at a time that can be heard or viewed at ease, with easy navigation. This is achieved by using the Page Highlights mentioned above and finding relevant contents one column at a time, as we do when we read a newspaper or website.
A column of text information can be converted to audio that can be heard with ease. The rate of hearing, i.e. content delivery, can be controlled to suit the user's needs. The selection of a website, a Page Highlight, the speed of hearing and so on can all be done by voice commands. This results in the Voice Internet, i.e. basically talking and listening to the Internet.
The same column of text can be displayed on a small screen that can be viewed at ease, since a small screen can easily display a column but not a whole page. The contents are then automatically scrolled at various speeds and hence can easily be viewed and absorbed. This is what results in a MicroBrowser, or true wireless Internet, that does not need any rewrite of the website and presents contents in a meaningful way.

8. Applications

This technology can find applications for service providers, businesses and governments in the following areas:

For Service Providers -
- Surf and browse the Web
- Email (send, receive, compose, copy, forward, reply, delete and more)
- Search the Web
- Voice Portal features such as News, Weather, Stock Quotes, Horoscopes and more

For Businesses -
- Airline reservations and tracking
- Package tracker
- Reservations
- eCommerce
- Customer service
- Alert service
- Order confirmation
- CRM applications

For Governments -
- All key benefits for businesses, plus:
- Easy accessibility to all Government contents
- More efficient communication between Government and citizens, Government and businesses, and between Government departments
- Meeting the Government obligation to bridge the digital and language divides (with automatic translation of Internet content from English to any other language and vice versa), and providing Internet access to elderly, visually impaired and blind people in a very simple and cost-effective way
9. Future Work
Implementation and results of this proposed technology depend on developments in the fields of speech recognition and artificial intelligence. The major challenge of this technology is its complexity, which comes with a high price tag, but this is no greater than the complexity and cost involved in voice-enabling a web site using present technologies. We believe that voice/speech based interface options will become an important part of the overall solution for accessing Internet content, and that an automated approach to voice-enable sites or create Voice Portals will be more practical and more common than rewriting web contents in different languages and maintaining multiple versions of the web sites.


10. Conclusion

We considered the possibility of accessing the Web through an ordinary phone and presented a new technology which provides a true audio Internet experience. Using an ordinary telephone and simple voice commands, users will be able to surf and hear the entire Internet for the information they desire. A computer is not needed. Any web page will be accessible, not only sites written with the Wireless Application Protocol or pages specially written in Voice Extensible Markup Language (VXML). We presented a detailed analysis of how technologies such as SR, TTS and AI are integrated to develop an intelligent platform (phoNET) to achieve voice-based web access, and we presented the major problems involved in document processing and document rendering along with their solutions.


Quantum Cryptography: The Best-Kept Secrets

P.SANKEERTAHANA
K.SUDHEER
siddu_4u19@rediffmail.com
god_sankee@yahoo.com


Abstract:
The Internet, the network of networks, has become an inseparable part of human life. Security is essential for data and communication, and illegitimate attacks must be shunned. Some information and communications are so pivotal that they employ a special technique: cryptography. Classical cryptography depends mainly on the assumption that factoring large numbers into their prime factors is computationally infeasible. If Moore's law holds as predicted, by 2030 transistors will be of atomic size. This introduces incongruities into classical processing; we can instead use quantum physics and obtain mind-blowing speeds. If quantum computers become a reality, present-day security is at stake, so we have to employ an entirely different approach: quantum cryptography. The Heisenberg uncertainty principle and quantum entanglement can be exploited to achieve secure communication, known as the best-kept secrets. Moreover, the presence of an eavesdropper can easily be identified. In this paper we discuss cryptography, the problems of classical cryptography, quantum computing and quantum cryptography. Finally we examine the various problems and trends in this field.


Introduction:
Data encryption is paramount in financial institutions, military and government applications, and commercial satellites, where secrecy is of primary importance and any kind of intrusion is intolerable. The communication lines should also be free from eavesdropping. The expanding connectivity of computers makes ways of protecting data and messages from tampering or reading important. One of the techniques for ensuring privacy of files and communications is cryptography. Cryptography is the art of devising codes and ciphers, and cryptanalysis is the art of breaking them. The purpose of cryptography is to transmit information in such a way that access to it is restricted entirely to the intended recipient. Originally the security of a crypto text depended on the secrecy of the entire encrypting and decrypting procedures; today, however, we use ciphers for which the algorithms for encrypting and decrypting can be revealed to anybody without compromising the security of a particular cryptogram. In such ciphers a set of specific parameters, called a key, is supplied together with the plaintext as an input to the encrypting algorithm, and together with the cryptogram as an input to the decrypting algorithm. The encrypting and decrypting algorithms are publicly announced; the security of the cryptogram depends entirely on the secrecy of the key, and this key must consist of a randomly chosen, sufficiently long string of bits.
Kinds of cryptosystems:
There are two kinds of cryptosystems: symmetric
and asymmetric. Symmetric cryptosystems use
the same key (the secret key) to encrypt and
decrypt a message, and Asymmetric
cryptosystems use one key (the public key) to
encrypt a message and a different key (the
private key) to decrypt it. Asymmetric
cryptosystems are also called public key
cryptosystems.
In the secret key mode there is a basic problem: how do you send the key itself? The channel must be tamper-proof and covert. If we could find such a channel for sending the key, we would not need this extra layer of security at all; we could simply send all the information through that channel. So we have to rely on traditional methods such as trusted couriers. Another, more efficient and reliable solution is a public key cryptosystem. The 1970s brought a clever mathematical discovery in the shape of public key systems. In these systems users do not need to agree on a secret key before they send the message. They work on the principle of a safe with two keys, one public key to lock it, and another private one to open it. Everyone has a key to lock the safe but only one person has a key that will open it again, so anyone can put a message in the safe but only one person can take it out. In practice the two keys are two large integer numbers. One can easily derive a public key from a private key, but not vice versa.


General procedure for cryptography:
Cryptographic systems exploit the fact that certain mathematical operations are easier to perform in one direction than the other, such as the difficulty of factoring large integers. RSA, the most popular public key cryptosystem, gets its security from the difficulty of factoring large numbers. An enemy who knows your public key can in principle calculate your private key, because the two keys are mathematically related; however, the difficulty of computing the private key from the respective public key is exactly that of factoring big integers. Consider, for example, the following factorization problem:

    ? × ? = 29083

How long would it take you to find the two whole numbers which should be written into the two boxes (the solution is unique)? Probably about one hour. Solving the reverse problem,

    127 × 229 = ?,

takes less than a minute. All because we know fast algorithms for multiplication but do not know equally fast ones for factorization. The difficulty of factoring grows rapidly with the size, i.e. the number of digits, of the number we want to factor. To see this, take a number N with L decimal digits (N ≈ 10^L) and try to factor it by dividing it by 2, 3, ..., sqrt(N) and checking the remainder. In the worst case you may need approximately sqrt(N) = 10^(L/2) divisions to solve the problem, an exponential increase as a function of L. Now imagine a computer capable of performing 10^10 divisions per second. The computer can then factor any number N, using the trial division method, in about sqrt(N)/10^10 seconds. Take a 100-digit number N, so that N ≈ 10^100. The computer will factor this number in about 10^40 seconds, much longer than 3.8 × 10^17 seconds (12 billion years), the currently estimated age of the Universe!
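The asymmetry in the example can be made concrete in a few lines of code: multiplication is immediate, while recovering the factors by trial division takes on the order of sqrt(N) steps, which grows exponentially with the number of digits. This is only the naive method discussed above, not a state-of-the-art factoring algorithm.

```python
# Fast direction: multiply. Slow direction: factor by trial division.
def trial_division(n: int):
    d = 2
    while d * d <= n:          # at most about sqrt(n) candidate divisors
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1                # n is prime

print(127 * 229)               # 29083, computed instantly
print(trial_division(29083))   # (127, 229), found only after many divisions
```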
What is wrong with Classical Cryptography?
Public-key cryptosystems avoid the key distribution problem, but unfortunately they depend on unproven mathematical assumptions. If mathematicians or computer scientists come up with fast and clever procedures for factoring large integers, the whole privacy and discretion of public-key cryptosystems could vanish overnight. A day may come when this mathematical assumption is shattered. Recent developments in quantum computation suggest that quantum computers can factorize much faster than classical computers.
Quantum Computing:
Moore's Law and the future of
computers:
In 1965 Intel co-founder Gordon Moore noted
that processing power (number of transistors and
speed) of computer chips was doubling each 18
months or so. This trend has continued for nearly
4 decades. But can it continue? The basic
processing unit in a computer chip is the
transistor which acts like a small switch. The
binary digits 0 and 1 are represented by the
transistor being turned off or on.
Currently thousands of electrons are used to
drive each transistor. As the processing power
increases, the size of each transistor reduces. If
Moore's law continues unabated, then each
transistor is predicted to be as small as a
hydrogen atom by about 2030. At that size the
quantum nature of electrons in the atoms
becomes significant. It generates errors in the
computation.
However, rather than being a hindrance, it is
possible to exploit the quantum physics as a new
way to do computation. And this new way opens
up fantastic new computational power based on
the wave nature of quantum particles.
Particle-wave duality:
We normally think of electrons, atoms and
molecules as particles. But each of these objects
can also behave as waves. This dual particle-
wave behavior was first suggested in the 1920's
by Louis de Broglie.
This dual particle-wave property is exploited in
quantum computing in the following way. A
wave is spread out in space. In particular, a wave
can spread out over two different places at once.
This means that a particle can also exist at two
places at once. This concept is called
superposition principle - the particle can be in a
superposition of two places.
Bits and Qubits:
The basic data unit in a conventional (or
classical) computer is the bit, or binary digit. A
bit stores a numerical value of either 0 or 1. We
could also represent a bit using two different
electron orbits in a single atom. In most atoms
there are many electrons in many orbits. But we
need only consider the orbits available to a single
outermost electron in each atom.





Figure 1

Figure 1 shows two atoms representing the binary number 10. The inner orbits represent the number 0 and the outer orbits represent the binary number 1. The position of the electron gives the number stored by the atom.
However, a completely new possibility opens up for atoms. Electrons have a wave property which allows a single electron to be in two orbits simultaneously. In other words, the electron can be in a superposition of both orbits.

Figure 2: Electrons in both orbits simultaneously

Figure 2 shows two atoms, each with a single electron in a superposition of two orbits. Each atom represents the binary numbers 0 and 1 simultaneously, and the two atoms together represent the 4 binary numbers 00, 01, 10 and 11 simultaneously. To distinguish this new kind of data storage from a conventional bit, it is called a quantum bit, which is shortened to qubit.
Each atom in the figure above is a qubit. The key point is that a qubit can be in a superposition of the two numbers 0 and 1. Superposition states allow many computations to be performed simultaneously, giving rise to an inherent feature known as quantum parallelism. According to physicist David Deutsch, this parallelism allows a quantum computer to work on a million computations at once, while your desktop PC works on one. A 30-qubit quantum computer would equal the processing power of a conventional computer running at 10 teraflops (trillions of floating-point operations per second), whereas today's typical desktop computers run at speeds measured in gigaflops (billions of floating-point operations per second).
Another example of a qubit is a photon (a particle of light) traveling along two possible paths. Consider what happens when a photon encounters a beam splitter. A beam splitter is just like an ordinary mirror, except that the reflective coating is made so thin that not all light is reflected; some light is transmitted through the mirror as well. When a single photon encounters a beam splitter, the photon emerges in a superposition of the reflected path and the transmitted path. One path is taken to be the binary number 0, and the other path is taken to be the number 1. The photon is in a superposition of both paths and so represents both 0 and 1 simultaneously.
Quantum parallelism:
A one-bit memory can store one of the numbers 0 and 1. Likewise a two-bit memory can store one of the binary numbers 00, 01, 10 and 11 (i.e. 0, 1, 2 and 3 in base ten). But these memories can only store a single number (e.g. the binary number 10) at a time. As described above, a quantum superposition state allows a qubit to store 0 and 1 simultaneously. Two qubits can store all 4 binary numbers 00, 01, 10 and 11 simultaneously. Three qubits store the 8 binary numbers 000, 001, 010, 011, 100, 101, 110 and 111 simultaneously. The table below shows that 300 qubits can store more than 10^90 numbers simultaneously. That is more than the number of atoms in the visible universe!
This shows the power of quantum computers: just 300 photons (or 300 ions, etc.) can store more numbers than there are atoms in the universe, and calculations can be performed simultaneously on each of these numbers.
Qubits   Stores simultaneously                        Total number
1        (0 and 1)                                    2
2        (0 and 1)(0 and 1)                           2x2 = 4
3        (0 and 1)(0 and 1)(0 and 1)                  2x2x2 = 8
...      ...                                          ...
300      (0 and 1)(0 and 1)...(0 and 1)               2x2x...x2 = 2^300
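A small numerical illustration of the table: describing an n-qubit register in an equal superposition requires 2^n amplitudes, so the storage needed to describe the state classically doubles with every added qubit. Plain Python is used; no quantum library is assumed.

```python
# An n-qubit register in equal superposition needs 2**n amplitudes.
def uniform_superposition(n_qubits: int):
    dim = 2 ** n_qubits
    amp = 1.0 / (dim ** 0.5)       # equal amplitude for every basis state
    return [amp] * dim

for n in (1, 2, 3, 10):
    print(n, "qubits ->", len(uniform_superposition(n)), "simultaneous basis states")
# 300 qubits would need 2**300 (about 10**90) amplitudes, far more than any
# classical memory could hold -- which is the point of the comparison above.
```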
So the arrival of the quantum computer may portend the eventual demise of ciphers based on factorization; factorization-based systems would no longer be safe. So what can be done? Cut the diamond with a diamond: use quantum physics itself for cryptography. This gives rise to quantum cryptography, which forms impregnable codes.
Quantum Cryptography:
The seeds for quantum cryptography were sown
in 1989 when Bennett and colleagues John A.
Smolin and Gilles Brassard undertook a
groundbreaking experiment based on the
principles of quantum mechanics.
The qubits exchanged between the communicating parties constitute a cryptographic "key" that can be used to encrypt or decipher a message. What keeps the key from prying eavesdroppers is Heisenberg's uncertainty principle, a foundation of quantum physics which dictates that the measurement of one property of a quantum state will perturb another. In a quantum cryptographic system, any interloper tapping into the stream of photons will alter them in a way that is detectable to the sender and the receiver. In principle, the technique provides the makings of an unbreakable cryptographic key. Quantum cryptography provides the means for two parties to exchange an enciphering key over a private channel with complete security of communication. Another great advantage of quantum cryptography is that the presence of an eavesdropper can always be identified, unlike in classical cryptography.
There are at least three main types of quantum cryptosystems for key distribution:
- cryptosystems with encoding based on two non-commuting observables;
- cryptosystems with encoding built upon quantum entanglement and the Bell theorem;
- cryptosystems with encoding based on two non-orthogonal state vectors.
We will explain the first method in detail. In crypto slang the sender is named Alice, the receiver Bob, and the eavesdropper Eve. Alice and Bob try to keep a quantum-cryptographic key secret by transmitting it in the form of polarized photons. The following steps are implemented for transmitting the secret key using quantum mechanics:
1. To begin creating a key, Alice sends a photon through either the 0 or 1 slot of the rectilinear or diagonal polarizing filters, while keeping a record of the various orientations.
2. For each incoming bit, Bob chooses randomly which filter slot he uses for detection and writes down both the polarization and the bit value.
3. If Eve tries to spy on the train of photons, quantum mechanics prohibits her from using both filters to detect the orientation of a photon. If she chooses the wrong filter, she may create errors by modifying their polarization.
4. After all the photons have reached Bob, he tells Alice over a public channel, perhaps by telephone or e-mail, the sequence of filters he used for the incoming photons, but not the bit values of the photons.
5. Alice tells Bob during the same conversation which filters he chose correctly. Those instances constitute the bits that Alice and Bob will use to form the key that serves as an input for an algorithm used to encrypt or decipher a message.
This forms an irrevocable code that cannot be deciphered by eavesdroppers. Even if they try to do so, the parties on both sides can identify the eavesdropper by analyzing the photon orientations. In case of any doubt, they can retransmit a different key at another time using a different sequence of bits. A small simulation sketch of this exchange is given below.
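The following is a hedged simulation sketch of the exchange in steps 1 to 5 (a BB84-style protocol): Alice prepares random bits in random bases, Bob measures in random bases, and only the positions where their bases agree are kept. A measurement in the wrong basis is modelled as a coin flip, which is also why an intercept-and-resend eavesdropper introduces detectable errors. The function names and the use of Python's random module are illustrative choices, not the authors' implementation.

```python
import random

def measure(bit, prep_basis, meas_basis):
    """Measuring in the wrong basis gives a random outcome."""
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def bb84(n_photons=64, eavesdrop=False):
    alice_bits  = [random.randint(0, 1) for _ in range(n_photons)]
    alice_bases = [random.choice("+x") for _ in range(n_photons)]  # + rectilinear, x diagonal
    photons = list(zip(alice_bits, alice_bases))

    if eavesdrop:  # intercept-resend attack: Eve measures, then re-sends in her own basis
        photons = []
        for bit, basis in zip(alice_bits, alice_bases):
            eve_basis = random.choice("+x")
            photons.append((measure(bit, basis, eve_basis), eve_basis))

    bob_bases = [random.choice("+x") for _ in range(n_photons)]
    bob_bits = [measure(bit, basis, b) for (bit, basis), b in zip(photons, bob_bases)]

    # public discussion: keep only the positions where Alice's and Bob's bases agree
    sift = [i for i in range(n_photons) if alice_bases[i] == bob_bases[i]]
    return [alice_bits[i] for i in sift], [bob_bits[i] for i in sift]

ka, kb = bb84()
print("quiet channel, keys identical:", ka == kb)
ka, kb = bb84(eavesdrop=True)
print("eavesdropper present, mismatched bits:", sum(a != b for a, b in zip(ka, kb)))
```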


Limitations:
1. The most fundamental and obvious problem is
that of quantum decoherence. Each quantum
system will have to be so finely tuned and
isolated that any disturbance to the system will
cause decoherence of the information in the
system. Even reading the information stored by
the system may cause it to alter.
2. Quantum cryptography may still prove
vulnerable to some unorthodox attacks. An
eavesdropper might sabotage a receiver's
detector, causing qubits received from a sender
to leak back into a fiber and be intercepted. And
an inside job will always prove unstoppable.
There's nothing quantum mechanics can do about
that. Still, in the emerging quantum information
age, these new ways of keeping secrets may be
better than any others in the codebooks.
Trends and future:
The practical implementations of quantum cryptography are far behind its theoretical propositions. Initial efforts were started in 1989 by Bennett and John A. Smolin; in that experiment the transfer distance was only 30 cm! Today quantum cryptography has come a long way from that jury-rigged project. Recently many corporate companies have become interested and invested in research in this field, and transfer distances of 150 km have been achieved. But this is still a far cry from deploying the method globally, and the current uses for quantum cryptography are in networks of limited geographic reach.
The main problem is that amplifying the signals that carry quantum keys is not possible: an optical amplifier would corrupt the qubits. Ironically, this is the very strength of the technique, since no one can spy on a key transmission without altering it irreversibly.
To extend the distance of these links, researchers are looking beyond optical fibers as the medium to distribute quantum keys. Scientists have trekked to mountaintops, where the altitude minimizes atmospheric turbulence, to prove the feasibility of sending quantum keys through the air. By optimizing this technology, using bigger telescopes for detection and better filters and antireflective coatings, it might be possible to build a system that could transmit and receive signals over more than 1,000 kilometers, sufficient to reach satellites in low earth orbit. A network of satellites would allow for worldwide coverage.
Conclusion:
Quantum cryptography is a conglomeration of quantum physics, computing and cryptography, and it is a relatively young discipline, so a great deal of research and development will eventually take place. Indubitably it has the potential to create the best-kept secrets on the planet, and this field will inevitably have an epoch-making influence on the domain of network security.











REMOTE SPEAKER RECOGNITION IN COMMUNICATION CHANNEL
WITH DSP APPLICATION
ST. JOSEPH COLLEGE OF ENGINEERING
S. JAGADEESH AND AGUSTUS CEASER (THIRD YEAR EEE)
Mr. C. RAMESH BABU DURAI, ASST. PROFESSOR (Dept. of EEE)
ABSTRACT

The paper focuses on a speaker identification procedure using linear predictive coding in a communication channel. Words uttered by several persons were recorded over a landline telephone, and linear predictive coding (LPC) parameters were obtained for all these words. This process is treated as training. Subsequently, the speeches of the same persons were recorded on a mobile phone and the wave files were transferred to a PC. LPC parameters were found and compared with the LPC parameters obtained from the landline telephone speech, and dissimilarity measures were obtained. Some people were considered authenticated persons and the remaining people were treated as imposters. The false acceptance rate, which indicates the number of imposters accepted, and the false rejection rate for authorized persons were computed, and the equal error rate for different combinations of accepted persons and imposters was analyzed.

The paper analyzes the difference in speaker identification performance between a landline telephone system and a cellular mobile phone system. In both wired and wireless communication systems, speech is transferred in the form of signals, and the question is why the quality of speaker identification differs on a cellular phone when compared to a landline telephone.
The aim of the paper is to study the speaker identification process on a cellular phone, keeping the landline telephone as the template, and to study effective methods for tackling the effects of channel variability in speaker recognition. The objectives are:
1. To analyze the performance of the algorithm for speaker identification through a cellular phone.
2. To record quality speech through a landline for text-dependent and text-independent speakers.
3. To understand the dissimilarity measures between the template speech and the testing speech.


1.DESIGN LAYOUT

1.1 Overview of speaker recognition
The automated recognition of human speech is immensely more difficult than speech generation. Digital computers can store and recall vast amounts of data, perform mathematical calculations at blazing speeds and do repetitive tasks without becoming bored or inefficient. Unfortunately, present-day computers perform very poorly when faced with raw sensory data. Digital signal processing generally approaches the problem of voice recognition in two steps:
1. Feature extraction
2. Feature matching
Each word in the incoming audio signal is isolated and then analyzed to identify the type of excitation and the resonant frequencies. These parameters are then compared with previous examples of spoken words to identify the closest match.
The state of the art in speaker identification techniques includes:
1. Channel vocoder
2. Linear prediction
3. Formant vocoding
4. Cepstral analysis
This paper focuses on investigating several different speech recognition techniques, and then implementing those techniques in Matlab. In this paper we recognize the words one, two, three, four and five.

1.2 Description about the data flow diagram
The flowchart explains the sequence of implementation of speaker identification. A group of three to five authenticated persons is chosen. Each person is asked to utter the words one, two and three, and the words are recorded simultaneously. Each wave file is processed to obtain the LPC coefficients.

1.2.1 Feature extraction method

LPC starts with the assumption that a speech signal
is produced by a buzzer at the end of a tube (voiced sounds),
with occasional added hissing and popping sounds (sibilants
and plosive sounds). Although apparently crude, this model
is actually a close approximation to the reality of speech
production. The glottis (the space between the vocal cords)
produces the buzz, which is characterized by its intensity
(loudness) and frequency (pitch). The vocal tract (the throat
and mouth) forms the tube, which is characterized by its
resonances, which are called formants. Hisses and pops are
generated by the action of the tongue, lips and throat during
sibilants and plosives.
LPC analyzes the speech signal by estimating the
formants, removing their effects from the speech signal, and
estimating the intensity and frequency of the remaining buzz.
The process of removing the formants is called inverse
filtering, and the remaining signal after the subtraction of the
filtered modeled signal is called the residue.
The numbers, which describe the intensity and
frequency of the buzz, the formants, and the residue signal,
can be stored or transmitted somewhere else. LPC
synthesizes the speech signal by reversing the process: the buzz parameters and the residue are used to create a source signal, the formants are used to create a filter (which represents the tube), and the source is run through the filter, resulting in speech.

1.2.2 Data flow diagram
Because speech signals vary with time, this process
is done on short chunks of the speech signal, which are
called frames; generally 30 to 50 frames per second give
intelligible speech with good compression.
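A sketch of the per-frame LPC analysis described above is given below, using the autocorrelation method: the frame is pre-emphasized and windowed, its autocorrelation is computed, and the predictor coefficients are obtained by solving the Toeplitz normal equations with scipy. The frame length of 200 samples, LPC order 8 and pre-emphasis constant 0.95 follow Table 2.1.1; this is a generic illustration, not the LPCCEP program used in the paper.

```python
# Per-frame LPC analysis via the autocorrelation method (sketch only).
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_coefficients(frame, order=8, preemphasis=0.95):
    frame = np.array(frame, dtype=float)
    frame[1:] -= preemphasis * frame[:-1]          # pre-emphasis filter
    frame *= np.hamming(len(frame))                # taper the frame edges
    # autocorrelation R[0..order]
    r = np.array([np.dot(frame[:len(frame) - k], frame[k:]) for k in range(order + 1)])
    # normal equations of linear prediction: Toeplitz(R[0..order-1]) a = R[1..order]
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
    return a                                       # predictor coefficients a_1..a_p

# Example: one 200-sample frame (as in Table 2.1.1) of a synthetic signal
frame = np.sin(2 * np.pi * 150 * np.arange(200) / 8000) + 0.01 * np.random.randn(200)
print(lpc_coefficients(frame, order=8))
```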

1.2.3 LPC coefficient representations
LPC is frequently used for transmitting spectral envelope information, and as such it has to be tolerant of transmission errors. Transmission of the filter coefficients directly is undesirable, since they are very sensitive to errors.
A very small error can distort the whole spectrum, or worse,
a small error might make the prediction filter unstable.
There are more advanced representations such as
log area ratios (LAR), line spectral pairs (LSP)
decomposition and reflection coefficients. LSP
decomposition has gained popularity, since it ensures
stability of the predictor, and spectral errors are local for
small coefficient deviations.
Phone companies, for example in the GSM
standard, use it as a form of voice compression. It is also
used for secure wireless, where voice must be digitized,
encrypted and sent over a narrow voice channel.
LPC synthesis can be used to construct vocoders
where musical instruments are used as excitation signal to
the time-varying filter estimated from a singer's speech. This
is somewhat popular in electronic music.

1.3 Speaker modeling
1.3.1 Vector quantization
Vector quantization is used in many applications such as image and voice compression, voice recognition (in general, statistical pattern recognition), and volume rendering.
A vector quantizer maps k-dimensional vectors in the vector space R^k into a finite set of vectors Y = {y_i : i = 1, 2, ..., N}. Each vector y_i is called a code vector or a codeword, and the set of all the codewords is called a codebook. Associated with each codeword y_i is a nearest-neighbor region called a Voronoi region, defined by:

    V_i = { x ∈ R^k : ||x − y_i|| ≤ ||x − y_j|| for all j ≠ i }        (1)

The set of Voronoi regions partitions the entire space R^k such that:

    ∪_{i=1}^{N} V_i = R^k        (2)

    V_i ∩ V_j = ∅ for all i ≠ j        (3)
As an example, and without loss of generality, we take vectors in the two-dimensional case. Figure 1 shows some vectors in space. Associated with each cluster of vectors is a representative codeword, and each codeword resides in its own Voronoi region. These regions are separated with imaginary lines in Figure 1 for illustration. Given an input vector, the codeword chosen to represent it is the one in the same Voronoi region.
In Figure 1, input vectors are marked with an x, codewords are marked with red circles, and the Voronoi regions are separated with boundary lines. The representative codeword is determined to be the closest to the input vector in Euclidean distance.

Figure 1: Codewords in 2-dimensional space.

The Euclidean distance is defined by:

    d(x, y_i) = sqrt( Σ_{j=1}^{k} (x_j − y_ij)^2 )        (4)

where x_j is the j-th component of the input vector, and y_ij is the j-th component of the codeword y_i.
1.3.2 VQ in compression
A vector quantizer is composed of two operations. The first is the encoder, and the second is the decoder. The encoder takes an input vector and outputs the index of the codeword that offers the lowest distortion. In this case the lowest distortion is found by evaluating the Euclidean distance between the input vector and each codeword in the codebook. Once the closest codeword is found, the index of that codeword is sent through a channel (the channel could be computer storage, a communications channel, and so on). When the decoder receives the index of the codeword, it replaces the index with the associated codeword. Figure 2 shows a block diagram of the operation of the encoder and decoder.
Figure 2: Given an input vector, the closest codeword is found and its index is sent through the channel; the decoder receives the index and outputs the corresponding codeword.
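A minimal sketch of the encoder/decoder pair of Figure 2 is shown below: the encoder transmits only the index of the nearest codeword under the Euclidean distance of equation (4), and the decoder looks that codeword up again. The numpy-based functions and the tiny example codebook are illustrative.

```python
import numpy as np

def vq_encode(vectors, codebook):
    """For each input vector, return the index of the closest codeword
    in Euclidean distance (equation (4))."""
    # distances of every vector to every codeword, shape (n_vectors, n_codewords)
    d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
    return d.argmin(axis=1)

def vq_decode(indices, codebook):
    """The decoder simply replaces each index by its codeword."""
    return codebook[indices]

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]])
data = np.array([[0.2, -0.1], [3.8, 4.2], [0.9, 1.1]])
idx = vq_encode(data, codebook)          # -> [0, 2, 1]
print(idx, vq_decode(idx, codebook))
```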
1.4 Design of codebook
Designing a codebook that best represents the set of input vectors is NP-hard. It requires an exhaustive search for the best possible codewords in space, and the search grows exponentially as the number of codewords increases.
The algorithm
1. Determine the number of codewords, N, or the size of the codebook.
2. Select N codewords at random, and let that be the initial codebook. The initial codewords can be randomly chosen from the set of input vectors.
3. Using the Euclidean distance measure, cluster the vectors around each codeword. This is done by taking each input vector and finding the Euclidean distance between it and each codeword. The input vector belongs to the cluster of the codeword that yields the minimum distance.
4. Compute the new set of codewords. This is done by obtaining the average of each cluster: add the components of each vector and divide by the number of vectors in the cluster,

    y_i = (1/m) Σ_{j=1}^{m} x_ij        (5)

where i indexes the components of each vector (x, y, z, ... directions), and m is the number of vectors in the cluster.
For each iteration, determining each cluster requires that each input vector be compared with all the codewords in the codebook. There are many other methods for designing the codebook, such as Pairwise Nearest Neighbor (PNN), Simulated Annealing, Maximum Descent (MD), and Frequency-Sensitive Competitive Learning (FSCL). A sketch of the basic iterative design above is given below.
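The sketch implements steps 1 to 4 as a simple k-means style iteration: random initial codewords drawn from the training set, nearest-codeword clustering with the Euclidean distance, and the centroid update of equation (5). It is an illustrative implementation of the generic algorithm, not the GENCB program.

```python
import numpy as np

def design_codebook(training_vectors, n_codewords, n_iterations=20, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(training_vectors, dtype=float)
    # step 2: pick N codewords at random from the training set
    codebook = x[rng.choice(len(x), size=n_codewords, replace=False)].copy()
    for _ in range(n_iterations):
        # step 3: assign every vector to its nearest codeword
        d = np.linalg.norm(x[:, None, :] - codebook[None, :, :], axis=2)
        nearest = d.argmin(axis=1)
        # step 4: replace each codeword by the average of its cluster
        for i in range(n_codewords):
            members = x[nearest == i]
            if len(members):                 # keep the old codeword if the cluster is empty
                codebook[i] = members.mean(axis=0)
    return codebook

# Example: an 8-entry codebook for 2-dimensional training vectors
train = np.random.default_rng(1).normal(size=(500, 2))
print(design_codebook(train, n_codewords=8).round(2))
```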
Measure of performance of VQ
How does one rate the performance of a compressed image or sound using VQ? There is no perfect way to measure the performance of VQ, because humans ultimately evaluate the distortion that VQ incurs, and that is a subjective measure. We can, however, always resort to the good old Mean Squared Error (MSE) and Peak Signal to Noise Ratio (PSNR). MSE is defined as follows:
    MSE = (1/M) Σ_{i=1}^{M} (x_i − x̂_i)^2        (6)

where M is the number of elements in the signal or image, and x̂_i is the reconstructed value corresponding to x_i.
For example, if we want to find the MSE between the
reconstructed and the original image, then we would take the
difference between the two images pixel-by-pixel, square the
results, and average the results.
The PSNR is defined as follows:


    PSNR = 10 log10( (2^n − 1)^2 / MSE )        (7)

where n is the number of bits per symbol.
As an example, if we want to find the PSNR
between two 256 gray level images, then we set n to 8 bits.
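The two measures can be written directly from equations (6) and (7); the snippet below compares two hypothetical 8-bit (256 grey-level) images, so n = 8.

```python
import numpy as np

def mse(original, reconstructed):
    original = np.asarray(original, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    return np.mean((original - reconstructed) ** 2)        # equation (6)

def psnr(original, reconstructed, bits_per_symbol=8):
    peak = (2 ** bits_per_symbol - 1) ** 2
    return 10.0 * np.log10(peak / mse(original, reconstructed))   # equation (7)

a = np.random.randint(0, 256, size=(64, 64))                        # "original" image
b = np.clip(a + np.random.randint(-3, 4, size=a.shape), 0, 255)     # "reconstructed"
print(mse(a, b), psnr(a, b))
```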
For experimental verification, two main steps are involved:
1. speaker identification (training)
2. speaker verification (testing)
1.5 The basic structures of speaker identification and verification systems
Speaker identification is the process of determining which registered speaker provides a given utterance. Speaker verification, on the other hand, is the process of accepting or rejecting the identity claim of a speaker. Most applications in which a voice is used as the key to confirm the identity of a speaker are classified as speaker verification.
There is also the case called open-set identification, in which a reference model for an unknown speaker may not exist. This is usually the case in forensic applications. In this situation, an additional decision alternative, "the unknown does not match any of the models", is required. In both verification and identification processes, an additional threshold test can be used to determine if the match is close enough to accept the decision or if more speech data are needed. The recognition method can also be text-independent or text-dependent.
2.1 SPEAKER IDENTIFICATION
Recording of speech was done for different persons; the LPC extraction and the dissimilarity measures are given in the following tables. Training and testing of the automatic speaker recognition system is carried out systematically.
Training: This is the process of collecting speech samples from individuals. To achieve this, each person is asked to speak words like one, two, and three at various times. The words are recorded using the sound recorder in Windows and stored respectively as:
Word one: 11.wav, 21.wav, 31.wav, 41.wav, 51.wav, 61.wav, 71.wav, 81.wav
The first digit indicates the person and the second digit indicates word one. Words two and three are recorded similarly for all 8 persons.
To train on the words, the C program LPCCEP is used; the input to the program is a wave file and the output is the LP coefficients. The outputs are stored in *.out files. The following table gives the LPC analysis parameters for word one for the first 4 persons, used to analyze text-dependent speech.

LPCCEP <input_file_name> <output_file_name>
Table 2.1.1: LPC values

Parameter                    11.wav    21.wav    31.wav    41.wav
Header Size                  0         0         0         0
Frame Size                   200       200       200       200
Frame Overlap                30        30        30        30
LPC Order                    8         8         8         8
Energy Threshold             0.01      0.01      0.01      0.01
Pre-emphasis constant        0.95      0.95      0.95      0.95
Input File                   11.wav    21.wav    31.wav    41.wav
Output File                  11.out    21.out    31.out    41.out
Total # of Frames [Output]   197       158       195       246
Table 2.1.1 gives the important parameters: the header size of the file, the frame size in samples and the frame overlap. To obtain a continuous analysis and avoid any break in the signal, frame overlapping is used. Other parameters, such as the LPC order and the energy threshold, with 0.95 as the pre-emphasis constant, are also given.
Table 2.1.2: Codebook values

GENCB <input_file_name> <CB_size> <output_file_name> [IFE] [IlE]

             11.WAV     21.WAV     31.WAV     41.WAV
Input File   11.OUT     21.OUT     31.OUT     41.OUT
Output File  CB11.OUT   CB21.OUT   CB31.OUT   CB41.OUT
CBSize=2     73.97      93.23      109.46     135.46
CBSize=4     54.08      65.65      89.67      120.47
CBSize=8     45.93      52.52      66.47      93.82
CBSize=16    32.29      44.01      56.12      78.40
CBSize=32    25.89      35.01      44.60      64.49

Cluster Population:
11.WAV: |3|6|4|1|8|3|4|11|4|4|4|4|4|2|4|5|1|4|2|4|0|4|5|5|7|1|3|3|2|2|4|4|1|0|
21.WAV: |3|5|1|6|3|2|6|8|4|4|1|4|3|5|3|10|5|8|3|14|5|6|3|15|3|6|0|7|1|8|5|1
31.WAV: |5|2|9|17|2|2|10|5|7|4|20|8|4|4|7|3|7|1|6|8|4|4|8|5|2|2|8|4|7|3|9|8
41.WAV: |6|4|7|2|6|3|12|2|4|1|20|4|26|4|7|1|7|3|13|2|14|2|12|2|13|1|23|1|30|3|10|1
Table 2.1.2 gives the output of the codebook generation program for various codebook sizes, namely 2, 4, 8, 16 and 32. As the codebook size increases, the value reported by the codebook generation program decreases. For example, for 11.wav the value obtained with a codebook of size 2 is 73.97, whereas for a codebook size of 32 it is 25.89. A similar change from high to low values as the codebook size increases can be observed in the table for the wave files 21.wav, 31.wav and 41.wav.
2.2 SPEAKER VERIFICATION
Testing is the procedure used to check whether the program developed on this concept is correct. To test the performance of the program, out of the 4 persons a few are treated as authenticated persons and the remaining persons are treated as imposters or unauthorized persons. During testing, dissimilarity measures are obtained. The dissimilarity indicates the closeness of a person's speech to his own speech (that is, to a different word of his) and to the speech of another person. As the dissimilarity value increases from zero to a higher value, it indicates a larger difference. This large difference is desirable if it exists between the words of two different people, but it is a disadvantage if it exists between words of the same person. This difference for the same person arises when doing ASR over mobile communication.
For example, the wave file 11.wav is considered as the template, and it is determined how dissimilar 21.wav is from it. This approach is followed with respect to the other wave files.

VQTEST <codebook_file_name> <test_file_name> <result_file_name>
Table 2.2.1: Dissimilarity values

                              11.wav     21.wav     31.wav     41.wav
Input File                    CB11.OUT   CB21.OUT   CB31.OUT   CB41.OUT
Test File                     11.OUT     21.OUT     31.OUT     41.OUT
Dissimilarity Score (Result)  0.22       0.22       0.23       0.26
Table 2.2.1 gives the dissimilarity values when each test file is compared against its own codebook. For example, when the input file is CB11.OUT and the test file is 11.OUT, the dissimilarity score is 0.22, whereas it is 0.26 when CB41.OUT and 41.OUT are considered. Ideally the dissimilarity score should be very close to zero.
Table 2.2.2: Dissimilarity values, word one of person 1 against word one of all persons

                              11.wav     21.wav     31.wav     41.wav
Input File                    CB11.OUT   CB11.OUT   CB11.OUT   CB11.OUT
Test File                     11.OUT     21.OUT     31.OUT     41.OUT
Dissimilarity Score (Result)  0.22       0.53       0.43       0.70

Table 2.2.2 gives the dissimilarity values for the different wave files when 11.wav is used as the reference. Large dissimilarity values are obtained when the files 21.wav, 31.wav and 41.wav are tested against CB11.OUT. Similarly, each of the other persons was taken as the base and their dissimilarity with the others was compared.

2.3 Text independent: This indicates how to identify a person irrespective of the word he utters. Three different tests were conducted to evaluate the performance, using the speech recorded for particular persons for all three words they uttered.
Table 2.3.1: Results from VQTEST.C for the text-independent experiment: (a) Test 1: wav41, (b) Test 2: wav42, (c) Test 3: wav43.

(a) Test 1: wav41
Input File   Test File   Dissimilarity Score (Result)
Cb41.out     41.out      0.26 *
Cb41.out     42.out      0.62
Cb41.out     43.out      0.46
Cb41.out     51.out      0.62
Cb41.out     52.out      0.48
Cb41.out     53.out      0.61
Cb41.out     61.out      0.58
Cb41.out     62.out      0.78
Cb41.out     63.out      0.89
Cb41.out     71.out      0.63
Cb41.out     72.out      0.62
Cb41.out     73.out      0.49
Cb41.out     81.out      0.42
Cb41.out     82.out      0.41
Cb41.out     83.out      0.54

(b) Test 2: wav42
Input File   Test File   Dissimilarity Score (Result)
Cb42.out     41.out      0.54
Cb42.out     42.out      0.17 *
Cb42.out     43.out      0.44
Cb42.out     51.out      0.64
Cb42.out     52.out      0.52
Cb42.out     53.out      0.60
Cb42.out     61.out      0.66
Cb42.out     62.out      0.62
Cb42.out     63.out      0.64
Cb42.out     71.out      0.59
Cb42.out     72.out      0.60
Cb42.out     73.out      0.52
Cb42.out     81.out      0.46
Cb42.out     82.out      0.41
Cb42.out     83.out      0.59

(c) Test 3: wav43
Input File   Test File   Dissimilarity Score (Result)
Cb43.out     41.out      0.54
Cb43.out     42.out      0.58
Cb43.out     43.out      0.22 *
Cb43.out     51.out      0.58
Cb43.out     52.out      0.48
Cb43.out     53.out      0.53
Cb43.out     61.out      0.66
Cb43.out     62.out      0.70
Cb43.out     63.out      0.65
Cb43.out     71.out      0.55
Cb43.out     72.out      0.56
Cb43.out     73.out      0.50
Cb43.out     81.out      0.40
Cb43.out     82.out      0.36
Cb43.out     83.out      0.50

The dissimilarity scores marked with an asterisk in the above tables (and highlighted in the remaining tables) are those obtained when the reference model of a particular speaker (e.g., from Table 2.3.1a, Test 1: wav41) was compared against the feature parameters produced by the same speaker for different speech samples (e.g., Table 2.3.1b, Test 2: wav42 and Table 2.3.1c, Test 3: wav43, i.e. speech samples 1, 2 and 3 respectively). Similar tests were conducted on the 5th, 6th, 7th and 8th persons and the results were analysed.

From the above results, it was observed that the dissimilarity score produced when the reference model of a particular speaker was compared and tested against the feature parameters of the same speaker was low, indicating that the speaker was who he/she claimed to be.
CONCLUSION
An analysis of speaker identification over mobile phones, with landline telephone speech as the reference, was carried out. Programming was done in C for extracting the LPC parameters, creating the vector codebook, and testing the accepted speakers and the impact of imposters. Dissimilarity measures indicate how much deviation exists, and how close the match is, when the same speaker's speech obtained from a cellular phone is compared with his speech obtained from a landline telephone. There are some variations in the recognition, and the equal error rate was obtained. Due to unknown disturbances in wireless communication, speech recorded through a cellular phone sometimes gives medium dissimilarity.




A

SOFTWARE PRESENTATION

ON

SHUFFLER GAME




K.Raghuvaran E.Srinivasu
III/IV B-Tech III/IV B-Tech
Adm: 04001A0238 Adm: 04001A0230
Email:raghu_eee38@yahoo.com Email:srenu.shekar@gmail.com





Abstract: This software presentation is a Shuffler game, designed in a simple way using the C language. It was developed by SRINIVASU & RAGHUVARAN. The Shuffler game is a game of arranging the numbers from 1 to 15 in order in a 4x4 box, starting from a zigzag arrangement. As you open the executable file, an instruction is displayed to press any key to start the game. Arrow keys are used to move the numbers: a number can be moved according to the direction of the arrow key pressed. As soon as the game is finished, the time and the number of steps taken to complete the game are displayed. The game is built with a for loop, if conditions and a switch case, along with other functions for counting the time from when the game starts. If the user is not interested in continuing, pressing the ESC key exits the game at any time.

We have included graphics; a C or C++ compiler supports building the EXE file.
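The game itself is described as a C program; purely as an illustration of the rule it implements, the sketch below shows the core move logic of such a 4x4 sliding puzzle in Python: an arrow key swaps the empty cell with a neighbouring tile, each successful swap counts as a step, and the board is solved when the tiles read 1 to 15 in order. The direction mapping and function names are assumptions for the sketch, not the authors' code.

```python
# Illustrative 15-puzzle move logic (not the authors' C implementation).
MOVES = {"up": (1, 0), "down": (-1, 0), "left": (0, 1), "right": (0, -1)}

def move(board, key):
    """board is a 4x4 list of lists with 0 marking the empty cell."""
    r, c = next((r, c) for r in range(4) for c in range(4) if board[r][c] == 0)
    dr, dc = MOVES[key]
    nr, nc = r + dr, c + dc                      # tile that slides into the gap
    if 0 <= nr < 4 and 0 <= nc < 4:
        board[r][c], board[nr][nc] = board[nr][nc], board[r][c]
        return True                              # counts as one step
    return False

def solved(board):
    flat = [v for row in board for v in row]
    return flat == list(range(1, 16)) + [0]
```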
GIET

B-Tech ECE (3rd year)


EVENT: PAPER PRESENTATION



TOPIC: SMART CARDS SECURITY



PRESENTED BY:

N.NAGA DEVI (ECE)

devinemala@gmail.com

D.PADMINI (ECE)


paddudommeti@gmail.com






ABSTRACT


Nowadays chip card technology (smart cards) is fast becoming commonplace in our culture and daily lives. A smart card is a card that is embedded with either a microprocessor and a memory chip, or only a memory chip with non-programmable logic. The microprocessor card can add, delete, and otherwise manipulate information on the card, while a memory-chip card (for example, a pre-paid phone card) can only undertake a pre-defined operation.
Smart cards, unlike magnetic stripe cards, can carry all necessary functions and information on the card. Therefore, they do not require access to remote databases at the time of the transaction.
This paper deals with what a smart card is, why smart cards are used, the different types of chip cards, multi-application card systems, and their security. It mainly concentrates on smart card security, and lastly it discusses applications and future scope.












INTRODUCTION
A smart card, a type of chip card is a plastic card embedded with a computer chip
that stores and transacts data between users. This data is associated with either value or
information or both and is stored and processed within the card's chip, either a memory or
microprocessor. The card data is transacted via a reader that is part of a computing
system. Smart card-enhanced systems are in use today throughout several key
applications, including healthcare, banking, entertainment and transportation. To various
degrees, all applications can benefit from the added features and security that smart cards
provide.
Why Smart Cards
Smart cards greatly improve the convenience and security of any transaction. They
provide tamper-proof storage of user and account identity. Smart cards also provide vital
components of system security for the exchange of data throughout virtually any type of
network. They protect against a full range of security threats, from careless storage of
user passwords to sophisticated system hacks. Multifunction cards can also serve as
network system access and store value and other data.
People worldwide are now using smart cards for a wide variety of daily tasks, these
include:
Loyalty and Stored Value
Securing Information and Physical Assets
E-Commerce
Health Care
Network Security

Loyalty and Stored Value
A primary use of smart cards is stored value, particularly loyalty programs that track and incentivize repeat customers. Stored value is more convenient and safer than cash. For multi-chain retailers that administer loyalty programs across many different businesses and point-of-sale systems, smart cards can centrally locate and track all data. The applications are numerous, from parking and laundry to gaming, as well as all retail and entertainment uses.



Securing Information and Physical Assets
In addition to information security, smart cards achieve greater physical security of services and equipment, because the card restricts access to all but the authorized user(s). E-mail and PCs are being locked down with smart cards. Information and entertainment delivered to the home or PC is encrypted and decrypted per subscriber access, and digital video broadcasts accept smart cards as electronic keys for protection. Smart cards can also act as keys to machine settings for sensitive laboratory equipment and dispensers for drugs, as well as tools, library cards, health club equipment, etc.
E-Commerce
Smart cards make it easy for consumers to securely store information and cash for
purchasing. The advantages they offer consumers are:
The card can carry personal account, credit and buying preference information
that can be accessed with a mouse click instead of filling out forms.
Cards can manage and control expenditures with automatic limits and reporting.
Internet loyalty programs can be deployed across multiple vendors with disparate
POS systems and the card acts as a secure central depository for points or
rewards.
Micro Payments - paying nominal costs without transaction fees associated with
credit cards or for amounts too small for cash, like reprint charges.
Health Care
The explosion of health care data brings up new challenges to the efficiency of patient
care and privacy safeguards. Smart cards solve both challenges with secure storage and
distribution of everything from emergency data to benefits status.
Rapid identification of patients; improved treatment
A convenient way to carry data between systems or to sites without systems
Reduction of records maintenance costs
Network Security
Business to business Intranets and Virtual Private Networks Vans are enhanced by the
use of smart cards. Users can be authenticated and authorized to have access to specific
information based on preset privileges. Additional applications range from secure email to
electronic commerce.


Types of Chip Cards
Smart cards are defined according to 1). How the card data is read and written and 2).
The type of chip implanted within the card and its capabilities. There is a wide range of
options to choose from when designing your system.


Contact Cards
The most common type of smart card. Electrical contacts located on the outside of the
card connect to a card reader when the card is inserted.

Increased levels of processing power, flexibility and memory add cost. Single-function cards are often the most cost-effective solution. Choose the right type of smart card for your application by evaluating cost versus functionality and determining your required level of security. All of these variables should be weighed against the expected lifecycle of the card. On average the cards typically comprise only 10 to 15 percent of the total system cost, with the infrastructure, issuance, training and advertising making up the other 85 percent. The following chart demonstrates some general rules of thumb:
Card Function Trade-Offs

Memory Cards
Memory cards have no sophisticated processing power and cannot manage files
dynamically. All memory cards communicate to readers through synchronous protocols.
In all memory cards you read and write to a fixed address on the card. There are three
primary types of memory cards: 1). Straight, 2). Protected, and 3). Stored Value.
1. Straight Memory Cards
These cards just store data and have no data processing capabilities. These cards are the
lowest cost per bit for user memory. They should be regarded as floppy disks of varying
sizes without the lock mechanism. These cards cannot identify themselves to the reader,
so your host system has to know what type of card is being inserted into a reader. These
cards are easily duplicated and cannot be tracked by on-card identifiers.
2. Protected / Segmented Memory Cards
These cards have built-in logic to control the access to the memory of the card.
Sometimes referred to as Intelligent Memory cards, these devices can be set to write
protect some or all of the memory array. Some of these cards can be configured to
restrict access to both reading and writing. This is usually done through a password or
system key. Segmented memory cards can be divided into logical sections for planned
multi-functionality. These cards are not easily duplicated but can possibly be
impersonated by hackers. They typically can be tracked by an on-card identifier.
3. Stored Value Memory Cards
These cards are designed for the specific purpose of storing value or tokens. The cards
are either disposable or rechargeable. Most cards of this type incorporate permanent
security measures at the point of manufacture. These measures can include password
keys and logic that are hard-coded into the chip by the manufacturer. The memory
arrays on these devices are set-up as decrements or counters. There is little or no
memory left for any other function. For simple applications such as a telephone card the
chip has 60 or 12 memory cells, one for each telephone unit. A memory cell is cleared
each time a telephone unit is used. Once all the memory units are used, the card
becomes useless and is thrown away. This process can be reversed in the case of
rechargeable cards.
Contactless Cards
These are smart cards that employ a radio frequency (RFID) link between card and reader, without physical insertion of the card. Instead the card is passed along the exterior of the reader and read. Types include proximity cards, which are implemented as a read-only technology for building access; these cards function with a limited memory and communicate at 125 kHz. True read/write contactless cards were first used in transportation for quick decrementing and re-loading of fare values, where their lower security was not an issue. They communicate at 13.56 MHz and conform to the ISO 14443 standard. These cards are often straight memory types. They are also gaining popularity in retail stored value, since they can speed up transactions without lowering transaction processing revenues (i.e. VISA and Mastercard), unlike traditional smart cards. Variations of the ISO 14443 specification include types A, B, and C, which specify chips from either specific or various manufacturers (A = Philips, B = everybody else, C = Sony chips). Contactless card drawbacks include the limits of cryptographic functions and user memory compared with microprocessor cards, and the limited distance between card and reader required for operation.
Combination Cards
These are hybrids that employ both contact and contactless technology in one card.
Combi-cards can also contain two different types of chips in contrast to a Dual-Interface
card where a single chip manages both functions.

Multi-Application Card Systems
It is highly recommended that you graphically diagram the flow of information as shown
below.

Building a smart card system that stores value i.e. gift certificates, show tickets,
redemption points or cash equivalents requires an attention to detail not necessary in
other information management systems. The key to success is not to overrun the system
with features that can confuse users and cause problems in management. We
recommend that you phase-in each feature set after the first one is working. Here is a list
of some questions that are pertinent to these systems in addition to the above questions.
Deployment
As the minimum steps in deploying a stored value or multi-application system, establish
clear, achievable program objectives:
A. Make sure the organization has a stake in the project's success and that
management buys into the project
B. Set a budget
C. Name a project manager
D. Assemble a project team and create a team vision
E. Graphically create an information - card and funds-flow diagram
F. Assess the card and reader options
G. Write a detailed specification for the system
H. Set a realistic schedule with inch-stones and mile-stones
I. Establish the security parameters for both people and the system
J. Phase-in each system element, testing as you deploy
K. Reassess for security leaks
L. Deploy the first phase of cards and test, test
M. Train the key employees responsible for each area
N. Set-up a system user manual
O. Check the reporting structures
P. Have contingency plans should problems arise
Q. Deploy and announce
R. Advertise and market your system
Smart Card Security
Smart cards provide computing and business systems the enormous benefit of portable
and secure storage of data and value. At the same time, the integration of smart cards
into your system introduces its own security management issues, as people access card
data far and wide in a variety of applications.
The following is a basic discussion of system security and smart cards, designed to
familiarize you with the terminology and concepts you need in order to start your security
planning.
What Is Security?
Security is basically the protection of something valuable to ensure that it is not stolen,
lost, or altered. The term "data security" governs an extremely wide range of applications
and touches everyone's daily life. Concerns over data security are at an all-time high, due
to the rapid advancement of technology into virtually every transaction, from parking
meters to national defense.
Data is created, updated, exchanged and stored via networks. A network is any
computing system where users are highly interactive and interdependent and by
definition, not all in the same physical place. In any network, diversity abounds, certainly
in terms of types of data, but also types of users. For that reason, a system of security is
essential to maintain computing and network functions, keep sensitive data secret, or
simply maintain worker safety. Any one company might provide an example of these
multiple security concerns: Take, for instance, a pharmaceutical manufacturer:
Type Of Data | Security Concern | Type Of Access
Drug Formula | Basis of business income; competitor spying | Highly selective list of executives
Accounting, Regulatory | Required by law | Relevant executives and departments
Personnel Files | Employee privacy | Relevant executives and departments
Employee ID | Non-employee access; inaccurate payroll, benefits assignment | Relevant executives and departments
Facilities | Access authorization | Individuals per function and clearance, such as customers, visitors, or vendors
Building safety, emergency response | All employees | Outside emergency response
What Is Information Security?
Information security is the application of measures to ensure the safety and privacy of
data by managing its storage and distribution. Information security has both technical
and social implications. The first simply deals with the 'how' and 'how much' question of
applying secure measures at a reasonable cost. The second grapples with issues of
individual freedom, public concerns, legal standards and how the need for privacy
intersects them. This discussion covers a range of options open to business managers,
system planners and programmers that will contribute to your ultimate security strategy.
The eventual choice rests with the system designer and issuer.
The Elements Of Data Security
In implementing a security system, all data networks deal with the following main
elements:
1. Hardware, including servers, redundant mass storage devices, communication
channels and lines, hardware tokens (smart cards) and remotely located devices
(e.g., thin clients or Internet appliances) serving as interfaces between users and
computers
2. Software, including operating systems, database management systems,
communication and security application programs
3. Data, including databases containing customer - related information.
4. Personnel, to act as originators and/or users of the data; professional personnel,
clerical staff, administrative personnel, and computer staff


The Mechanisms Of Data Security
Working with the above elements, an effective data security system works with the
following key mechanisms to answer:
1. Has My Data Arrived Intact? (Data Integrity) This mechanism ensures that
data was not lost or corrupted when it was sent to you
2. Is The Data Correct And Does It Come From The Right Person?
(Authentication) This proves user or system identities
3. Can I Confirm Receipt Of The Data And Sender Identity Back To The
Sender? (Non-Repudiation)
4. Can I Keep This Data Private? (Confidentiality) - Ensures only senders and
receivers access the data. This is typically done by employing one or more
encryption techniques to secure your data
5. Can I Safely Share This Data If I Choose? (Authorization and Delegation) You
can set and manage access privileges for additional users and groups
6. Can I Verify That The System Is Working? (Auditing and Logging)
Provides a constant monitor and troubleshooting of security system function
7. Can I Actively Manage The System? (Management) Allows administration of
your security system
Smart Card Security (Section 2)
Data Integrity
This is the function that verifies the characteristics of a document and a transaction.
Characteristics of both are inspected and confirmed for content and correct authorization.
Data Integrity is achieved with electronic cryptography that assigns a unique identity to
data like a fingerprint. Any attempt to change this identity signals the change and flags
any tampering.
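As a minimal sketch of this fingerprint idea (the text names no specific algorithm, so SHA-256 via Python's standard hashlib is an assumption), any change to the data produces a different digest and so flags the tampering:

import hashlib

def fingerprint(data: bytes) -> str:
    # The SHA-256 digest acts as the unique identity ("fingerprint") of the data.
    return hashlib.sha256(data).hexdigest()

original = b"pay 10.00 to merchant 42"
tampered = b"pay 99.00 to merchant 42"
assert fingerprint(original) != fingerprint(tampered)  # any change is detected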
Authentication
This inspects, then confirms, the proper identity of people involved in a transaction of
data or value. In authentication systems, authentication is measured by assessing the
mechanism's strength and how many factors are used to confirm the identity. In a PKI
system a Digital Signature verifies data at its origination by producing an identity that
can be mutually verified by all parties involved in the transaction. A cryptographic hash
algorithm produces a Digital Signature.
Non-Repudiation
This eliminates the possibility of a transaction being repudiated, or invalidated by
incorporating a Digital Signature that a third party can verify as correct. Similar in
concept to registered mail, the recipient of data re-hashes it, verifies the Digital
Signature, and compares the two to see that they match.


Authorization and Delegation
Authorization is the process of allowing access to specific data within a system. Delegation is the use of a third party (a Certificate Authority) to manage and certify each of the users of your system.
Auditing and Logging
This is the independent examination and recording of records and activities to ensure
compliance with established controls, policy, and operational procedures, and to
recommend any indicated changes in controls, policy, or procedures.
Management
This is the oversight and design of the elements and mechanisms discussed above and below.
Card management also covers card issuance, replacement and retirement, as well as the
policies that govern the system.
Cryptography/Confidentiality
Confidentiality is the use of encryption to protect information from unauthorized
disclosure. Plain text is turned into cipher text via an algorithm, then decrypted back into
plain text using the same method.
Cryptography is the method of converting data from a human readable form to a
modified form, and then back to its original readable form, to make unauthorized access
difficult. Cryptography is used in the following ways:
Ensure data privacy, by encrypting data
Ensures data integrity, by recognizing if data has been manipulated in an
unauthorized way
Ensures data uniqueness by checking that data is "original", and not a "copy" of
the "original". The sender attaches a unique identifier to the "original" data. This
unique identifier is then checked by the receiver of the data.
The original data may be in a human-readable form, such as a text file, or it may be in a
computer-readable form, such as a database, spreadsheet or graphics file. The original
data is called unencrypted data or plain text. The modified data is called encrypted data or
cipher text. The process of converting the unencrypted data is called encryption. The
process of converting encrypted data to unencrypted data is called decryption.
Data Security Mechanisms and their Respective
Algorithms
In order to convert the data, you need to have an encryption algorithm and a key. If the
same key is used for both encryption and decryption that key is called a secret key and
the algorithm is called a symmetric algorithm. The most well-known symmetric algorithm
is DES (Data Encryption Standard).

The Data Encryption Standard (DES) was invented by the IBM Corporation in the 1970's.
During the process of becoming a standard algorithm, it was modified according to
recommendations from the National Security Agency (NSA). The algorithm has been
studied by cryptographers for nearly 20 years. During this time, no methods have been
published that describe a way to break the algorithm, except for brute-force techniques.
DES has a 56-bit key, which offers 2^56, or about 7 x 10^16, possible variations. There are a very
small number of weak keys, but it is easy to test for these keys and they are easy to
avoid.
Triple-DES is a method of using DES to provide additional security. Triple-DES can be
done with two or with three keys. Since the algorithm performs an encrypt-decrypt-
encrypt sequence, this is sometimes called the EDE mode. This diagram shows Triple-
DES three-key mode used for encryption:
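The same encrypt-decrypt-encrypt sequence can also be sketched in code. The following is a hypothetical illustration using the pycryptodome package (the package choice and key handling are assumptions; the text names no library):

from Crypto.Cipher import DES
from Crypto.Random import get_random_bytes

def triple_des_ede_encrypt(block, k1, k2, k3):
    # Encrypt one 8-byte block as E_k3(D_k2(E_k1(block))) - the EDE sequence.
    step1 = DES.new(k1, DES.MODE_ECB).encrypt(block)
    step2 = DES.new(k2, DES.MODE_ECB).decrypt(step1)
    return DES.new(k3, DES.MODE_ECB).encrypt(step2)

k1, k2, k3 = (get_random_bytes(8) for _ in range(3))
ciphertext = triple_des_ede_encrypt(b"8bytemsg", k1, k2, k3)

With k1 = k2 = k3 this degenerates to single DES, which is why two-key and three-key variants can share one implementation.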

If different keys are used for encryption and decryption, the algorithm is called an
asymmetric algorithm. The most well-known asymmetric algorithm is RSA, named after
its three inventors (Rivest, Shamir, and Adleman). This algorithm uses two keys, called
the public key and the private key. These keys are mathematically linked. Here is a diagram that illustrates
an asymmetric algorithm:

Asymmetric algorithms involve extremely complex mathematics, typically based on the
difficulty of factoring large numbers that are the product of two large primes. Asymmetric
algorithms are typically stronger than a short-key-length symmetric algorithm, but because
of their computational cost they are used mainly for signing a message or a certificate;
they are not ordinarily used for bulk data transmission encryption.
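As a hedged illustration of that signing use (again assuming the pycryptodome package, which the text does not name), the sketch below generates an RSA key pair, signs a message digest with the private key, and verifies it with the mathematically linked public key:

from Crypto.PublicKey import RSA
from Crypto.Signature import pkcs1_15
from Crypto.Hash import SHA256

key = RSA.generate(2048)          # private key; key.publickey() is the linked public key
digest = SHA256.new(b"card transaction record")

signature = pkcs1_15.new(key).sign(digest)                # signed with the private key
pkcs1_15.new(key.publickey()).verify(digest, signature)   # raises ValueError if tampered with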
Smart Card Security (Section 3)
As the card issuer, you must define all of the parameters for card and data security.
There are two methods of using cards for data system security, host-based and card-
based. The safest systems employ both methodologies.

Host-Based System Security
A host-based system treats a card as a simple data carrier. Because of this, straight
memory cards can be used very cost-effectively for many systems. All protection of the
data is done from the host computer. The card data may be encrypted but the
transmission to the host can be vulnerable to attack. A common method of increasing the
security is to write in the clear (not encrypted) a key that usually contains a date and/or
time along with a secret reference to a set of keys on the host. Each time the card is re-
written the host can write a reference to the keys. This way each transmission is
different. But parts of the keys are in the clear for hackers to analyze. This security can
be increased by the use of smart memory cards that employ a password mechanism to
prevent unauthorized reading of the data. Unfortunately the passwords can be sniffed in
the clear. Access is then possible to the main memory. These methodologies are often
used when a network can batch up the data regularly and compare values and card
usage and generate a problem card list.
Card-Based System Security
These systems are typically microprocessor card-based. A card, or token-based system
treats a card as an active computing device. The interaction between the host and the
card can be a series of steps to determine if the card is authorized to be used in the
system. The process also checks if the user can be identified, authenticated and if the
card will present the appropriate credentials to conduct a transaction. The card itself can
also demand the same from the host before proceeding with a transaction. The access to
specific information in the card is controlled by A) the card's internal Operating System
and B) the preset permissions set by the card issuer regarding the files' access conditions. The
card can be in a standard CR80 form factor or be in a USB dongle or it could be a GSM
SIM Card.
Threats To Cards and Data Security
Effective security system planning takes into account the need for authorized users to
access data reasonably easily, while considering the many threats that this access
presents to the integrity and safety of the information. There are basic steps to follow to
secure all smart card systems, regardless of type or size.
Analysis: Types of data to secure; users, points of contact, transmission. Relative
risk/impact of data loss
Deployment of your proposed system
Road Test: Attempt to hack your system; learn about weak spots, etc.
Synthesis: Incorporate road test data, re-deploy
Auditing: Periodic security monitoring, checks of system, fine-tuning
When analyzing the threats to your data an organization should look closely at two
specific areas: Internal attacks and external attacks. The first and most common
compromise of data comes from disgruntled employees. Knowing this, a good system
manager separates all back-up data and back-up systems into a separately partitioned
and secured space. The introduction of viruses and the attempted formatting of network
drives are typical internal attack behaviors. By deploying employee cards that log an
employee into the system and record the time, date and machine the employee is on,
a company automatically discourages these types of attacks.
External attacks are typically aimed at the weakest link in a company's security armor.
The first place an external hacker looks at is where they can intercept the transmission of
your data. In a smart card-enhanced system this starts with the card.

The following sets of questions are relevant to your analysis:
Is the data on the card transmitted in the clear, or is it encrypted?
If the transmission is sniffed, is each session secured with a different key?
Does the data move from the reader to the PC in the clear?
Does the PC or client transmit the data in the clear? If the packet is sniffed, is each session secured with a different key?
Does the operating system have a back door?
Is there a mechanism to upload and download functioning code? How secure is this system?
Does the OS provider have a good security track record?
Does the card manufacturer have precautions in place to secure your data? Do they understand the liabilities? Can they provide other security measures that can be implemented on the card and/or module?
When the card is subjected to Differential Power attacks and Differential Thermal attacks, does the OS reveal any secrets? Will the semiconductor utilized meet this scrutiny?
Do your suppliers understand these questions?
Other types of problems that can be a threat to your assets include:
Improperly secured passwords (writing them down, sharing)
Assigned PINs and the replacement mechanisms
Delegated Authentication Services
Poor data segmentation
Physical Security (the physical removal or destruction of your computing
hardware)
Security Architectures
When designing a system, a planner should look at the total cost of ownership; this
includes:
Analysis
Installation and Deployment
Delegated Services
Training
Management
Audits and Upgrades
Infrastructure Costs (Software and Hardware)
Over 99% of all U.S.-based financial networks are secured with a Private Key
Infrastructure. This is changing over time, based on the sheer volume of transactions
managed daily and the hassles that come with private key management. Private Key-based
systems make good sense if your expected user base is fewer than 500,000 participants.
Public Key Systems are typically cost-effective only in large volumes, or where the value
of the data is so high that it is worth the higher costs associated with this type of deployment.
What most people don't realize is that Public Key systems still rely heavily on Private
Key encryption for the actual transmission of data; the Public Key encryption algorithms are
used mainly for non-repudiation and to secure data integrity. Public Key infrastructures as a rule
employ every mechanism of data security in a nested and coordinated fashion to ensure
the highest level of security available today.






The most common smart card applications are:
Credit cards
Electronic cash
Computer security systems
Wireless communication
Loyalty systems (like frequent flyer points)
Banking
Satellite TV
Government identification
Smart cards can be used with a smart-card reader attachment to a personal computer to
authenticate a user. Web browsers also can use smart card technology to supplement
Secure Sockets Layer (SSL) for improved security of Internet transactions. Visa's Smart
Card FAQ shows how online purchases work using a smart card and a PC equipped with a
smart-card reader. Smart-card readers can also be found in mobile phones and vending
machines.
Future of Smart Cards:
Given the advantages of smart cards over magnetic stripe cards, there can be no
doubt that the future of smart cards is very bright. If the current trends are
anything to go by, the smart card market is set for exponential growth in the next
few years.

The future of smart cards depends mainly on the introduction of multi-application
cards and on overcoming the simplistic mindset that smart cards are just a method of
making a payment.
Conclusion:
Smart cards can add convenience and safety to any transaction of value and data; but
the choices facing today's managers can be daunting. We hope this site has adequately
presented the options and given you enough information to make informed evaluations of
performance, cost and security that will produce a smart card system that fits today's
needs and those of tomorrow. It is our sincere belief that informed users make better
choices, which leads to better business for everybody.


IETE TIRUPATI-SUBCENTER
SRI VENKATESWARA UNIVERSITY,TIRUPATI.

PAPER PRESENTATION
ON
SPEECH PROCESSING
BY
B.HARI KISHORE
S.SAJID


ADDRESS FOR COMMUNICATION:

S.SAJID,
E-Mail:sajid_hussain0143@yahoomail.com
ADDRESS:SIDDHARTH INSTITUTE OF ENGINEERING & TECHNOLOGY
SIDDHARTH NAGAR, NARAYANAVANAM ROAD,
PUTTUR-517583.

&

B.HARI KISHORE,
E-Mail:bhari_krishna@yahoomail.com
ADDRESS: SIDDHARTH INSTITUTE OF ENGINEERING & TECHNOLOGY
SIDDHARTH NAGAR, NARAYANAVANAM ROAD,
PUTTUR-517583.











INDEX

Abstract
1. Speech processing
2. Digital signal processing (DSP)
2.1 DSP domains
2.2 Signal sampling
2.3 Frequency domain
2.4 Applications
2.5 Implementation
2.6 Techniques
3. Natural language processing
3.1 Tasks and limitations
3.2 Concrete problems
3.3 Speech segmentation
3.4 Text segmentation
3.5 Statistical NLP
3.6 The major tasks in NLP
4 Applications of speech recognition
5 Speaker recognition
6 Verification versus identification
7 Variants of speaker recognition
8 Speech encoding
9 Analysis methods
10 Conclusion













Abstract

Speech is one of the most complex
signals an engineer has to handle. It
is thus not surprising that its
automatic processing has only
recently found a wide market. In this
paper we analyze Speech
recognition, Speaker recognition,
Enhancement of speech signals,
Speech coding and Voice analysis,
and show why they were necessary
for commercial maturity.
Speech processing
Speech processing is the study of
speech signals and the processing
methods of these signals.
The signals are usually processed in
a digital representation whereby
speech processing can be seen as the
intersection of digital signal
processing and natural language
processing.
Speech processing can be divided into the following categories:
Speech recognition, which
deals with analysis of the
linguistic content of a speech
signal.
Speaker recognition, where
the aim is to recognize the
identity of the speaker.
Enhancement of speech
signals, e.g. noise reduction,
Speech coding for
compression and
transmission of speech. See
also telecommunication.
Voice analysis for medical
purposes, such as analysis of
vocal loading and dysfunction
of the vocal cords.
Speech synthesis: the
artificial synthesis of speech,
which usually means
computer generated speech.
Speech compression is
important in the
telecommunications area for
increasing the amount of info
which can be transferred,
stored, or heard, for a given
set of time and space
constraints.
Digital signal processing (DSP)
DSP is the study of signals in a
digital representation and the
processing methods of these signals.
DSP and analog signal processing
are subfields of signal processing.
DSP has at least four major
subfields: audio signal processing,
control engineering, digital image
processing and speech processing.
Since the goal of DSP is usually to
measure or filter continuous real-
world analog signals, the first step is
usually to convert the signal from an
analog to a digital form, by using an
analog to digital converter. Often,
the required output signal is another
analog output signal, which requires
a digital to analog converter.
The algorithms required for DSP are
sometimes performed using
specialized computers, which make
use of specialized microprocessors
called digital signal processors (also
abbreviated DSP). These process
signals in real time and are generally
purpose-designed application-
specific integrated circuits (ASICs).
When flexibility and rapid
development are more important
than unit costs at high volume, DSP
algorithms may also be implemented
using field-programmable gate
arrays (FPGAs).
DSP domains
In DSP, engineers usually study
digital signals in one of the following
domains: time domain (one-
dimensional signals), spatial domain
(multidimensional signals),
frequency domain, autocorrelation
domain, and wavelet domains. They
choose the domain in which to
process a signal by making an
educated guess (or by trying
different possibilities) as to which
domain best represents the essential
characteristics of the signal. A
sequence of samples from a
measuring device produces a time or
spatial domain representation,
whereas a discrete Fourier
transform produces the frequency
domain information, which is the
frequency spectrum.
Signal sampling
With the increasing use of
computers the usage and need of
digital signal processing has
increased. In order to use an analog
signal on a computer it must be
digitized with an analog to digital
converter (ADC). Sampling is
usually carried out in two stages,
discretization and quantization. In
the discretization stage, the space of
signals is partitioned into
equivalence classes and
discretization is carried out by
replacing the signal with
representative signal of the
corresponding equivalence class. In
the quantization stage the
representative signal values are
approximated by values from a finite
set.
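A small numerical sketch of these two stages (an illustration only; the sampling rate, bit depth and test signal are arbitrary choices, not from the paper):

import numpy as np

fs = 8000                               # sampling rate in Hz (discretization)
t = np.arange(0, 0.01, 1 / fs)          # discrete sample instants
x = np.sin(2 * np.pi * 440 * t)         # the analog signal evaluated at those instants

bits = 8                                # quantization to a finite set of 2**bits values
levels = 2 ** bits
xq = np.round((x + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1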
Frequency domain
Signals are converted from time or
space domain to the frequency
domain usually through the Fourier
transform. The Fourier transform
converts the signal information to a
magnitude and phase component of
each frequency. Often the Fourier
transform is converted to the power
spectrum, which is the magnitude of
each frequency component squared.
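As a hypothetical illustration (the test signal and names are ours, not the paper's), the frequency-domain quantities just described can be computed with NumPy's discrete Fourier transform:

import numpy as np

fs = 8000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

X = np.fft.rfft(x)                       # complex spectrum of the real signal
freqs = np.fft.rfftfreq(len(x), 1 / fs)  # frequency of each bin in Hz
magnitude, phase = np.abs(X), np.angle(X)
power = magnitude ** 2                   # power spectrum: squared magnitude per frequency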
Applications
The main applications of DSP are
audio signal processing, audio
compression, digital image
processing, video compression,
speech processing, speech
recognition and digital
communications. Specific examples
are speech compression and
transmission in digital mobile
phones, room matching equalisation
of sound in Hifi and sound
reinforcement applications, and
audio effects for use with electric
guitar amplifiers.
Implementation
Digital signal processing is often
implemented using specialised micro
processors such as the MC56000 and
the TMS320. These often process
data using fixed-point arithmetic.
For slow applications such as flame
scanning, a traditional slower
processor such as a microcontroller
can cope.
Techniques
Bilinear transform
Discrete Fourier transform
Discrete-time Fourier
transform
Filter design
LTI system theory
Minimum phase
Transfer function
Z-transform
Goertzel algorithm

Natural language processing
Natural language processing
(NLP) is a subfield of artificial
intelligence and linguistics. It studies
the problems of automated
generation and understanding of
natural human languages. Natural
language generation systems convert
information from computer
databases into normal-sounding
human language, and natural
language understanding systems
convert samples of human language
into more formal representations
that are easier for computer
programs to manipulate.
Tasks and limitations
In theory natural language
processing is a very attractive
method of human-computer
interaction. Early systems such as
SHRDLU, working in restricted
"blocks worlds" with restricted
vocabularies, worked extremely well,
leading researchers to excessive
optimism which was soon lost when
the systems were extended to more
realistic situations with real-world
ambiguity and complexity.
Concrete problems
Some examples of the problems
faced by natural language
understanding systems:
The sentences We gave the
monkeys the bananas because
they were hungry and We gave
the monkeys the bananas
because they were over-ripe
have the same surface
grammatical structure.
However, in one of them the
word they refers to the
monkeys, in the other it
refers to the bananas: the
sentence cannot be
understood properly without
knowledge of the properties
and behaviour of monkeys
and bananas.
Speech segmentation
In most spoken languages, the
sounds representing
successive letters blend into
each other, so the conversion
of the analog signal to
discrete characters can be a
very difficult process. Also, in
natural speech there are
hardly any pauses between
successive words; the location
of those boundaries usually
must take into account
grammatical and semantical
constraints, as well as the
context.
Text segmentation
Some written languages like
Chinese, Japanese and Thai
do not explicitly mark word
boundaries either, so any
significant text parsing
usually requires the
identification of word
boundaries, which is often a
non-trivial task.
Statistical NLP
Statistical natural language
processing uses stochastic,
probabilistic and statistical methods
to resolve some of the difficulties
discussed above, especially those
which arise because longer sentences
are highly ambiguous when
processed with realistic grammars,
yielding thousands or millions of
possible analyses. Methods for
disambiguation often involve the use
of corpora and Markov models. The
technology for statistical NLP comes
mainly from machine learning and
data mining, both of which are fields
of artificial intelligence that involve
learning from data.
The major tasks in NLP
Text to speech
Speech recognition
Natural language generation
Machine translation
Information retrieval
Speech recognition
Speech recognition (in many
contexts also known as 'automatic
speech recognition', computer speech
recognition or erroneously as Voice
Recognition) is the process of
converting a speech signal to a
sequence of words, by means of an
algorithm implemented as a
computer program. Speech
recognition applications that have
emerged over the last years include
voice dialing (e.g., Call home), call
routing (e.g., I would like to make a
collect call), simple data entry (e.g.,
entering a credit card number), and
preparation of structured documents
(e.g., a radiology report).
Voice recognition or speaker
recognition is a related process that
attempts to identify the person
speaking, as opposed to what is
being said.
Speech recognition
technology
In terms of technology, most of the
technical text books nowadays
emphasize the use of Hidden
Markov Model as the underlying
technology. The dynamic
programming approach, the neural
network-based approach and the
knowledge-based learning approach
have been studied intensively in the
1980s and 1990s.
Performance of speech
recognition systems
The performance of speech
recognition systems is usually
specified in terms of accuracy and
speed. Accuracy is measured with
the word error rate, whereas speed
is measured with the real time
factor.
Most speech recognition users would
tend to agree that dictation machines
can achieve very high performance
in controlled conditions. Part of the
confusion mainly comes from the
mixed usage of the terms speech
recognition and dictation.
Speaker-dependent dictation
systems requiring a short period of
training can capture continuous
speech with a large vocabulary at
normal pace with a very high
accuracy. Most commercial
companies claim that recognition
software can achieve between 98%
to 99% accuracy (getting one to two
words out of one hundred wrong) if
operated under optimal conditions.
These optimal conditions usually
mean that the test subjects have 1)
speaker characteristics that match
the training data, 2) proper
speaker adaptation, and 3) a clean
environment (e.g. office space). (This
explains why some users, especially
those with accented speech, might
find that the recognition rate is
perceptually much lower than the
expected 98% to 99%.)
Noisy channel formulation of
statistical speech recognition
Many modern approaches such as
HMM-based and ANN-based speech
recognition are based on noisy
channel formulation (See also
Alternative formulation of speech
recognition). In that view, the task of
a speech recognition system is to
search for the most likely word
sequence given the acoustic signal.
In other words, the system searches for the most likely word sequence W* among all possible word sequences W, given the acoustic signal A (what some call the observation sequence in Hidden Markov Model terminology):

W* = argmax_W P(W | A)

Based on Bayes' rule, the above formulation can be rewritten as

W* = argmax_W [ P(A | W) P(W) / P(A) ]

Because the acoustic signal A is the same regardless of which word sequence is chosen, the above is usually simplified to

W* = argmax_W P(A | W) P(W)

The term P(A | W) is generally called the acoustic model, and the term P(W) is generally known as the language model.
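A toy illustration of the simplified formulation (the candidate sequences and probabilities are invented for the example): the recognizer picks the word sequence whose combined acoustic and language-model score is highest.

import math

candidates = {
    "recognize speech":   {"acoustic": 0.020, "language": 0.0010},
    "wreck a nice beach": {"acoustic": 0.025, "language": 0.0001},
}

def score(p):
    # log P(A|W) + log P(W): acoustic score plus language-model score
    return math.log(p["acoustic"]) + math.log(p["language"])

best = max(candidates, key=lambda w: score(candidates[w]))
print(best)   # "recognize speech" wins once the language model is taken into account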
Approaches of statistical
speech recognition
Hidden Markov model (HMM)-
based speech recognition
Modern general-purpose speech
recognition systems are generally
based on hidden Markov models
(HMMs). This is a statistical model
which outputs a sequence of symbols
or quantities.
One possible reason why HMMs are
used in speech recognition is that a
speech signal could be viewed as a
piece-wise stationary signal or a
short-time stationary signal. That is,
over a short time in the range of
10 milliseconds, speech can be
approximated as a stationary
process. Speech can thus be thought
of as a Markov model over many
stochastic processes
(known as states).
Neural network-based speech
recognition
Another approach in acoustic
modeling is the use of neural
networks. They are capable of
solving much more complicated
recognition tasks, but do not scale as
well as HMMs when it comes to
large vocabularies. Rather than
being used in general-purpose
speech recognition applications they
can handle low quality, noisy data
and speaker independence. Such
systems can achieve greater
accuracy than HMM based systems,
as long as there is training data and
the vocabulary is limited.
Dynamic time warping
(DTW)-based speech
recognition
Dynamic time warping is an
algorithm for measuring similarity
between two sequences which may
vary in time or speed. For instance,
similarities in walking patterns
would be detected, even if in one
video the person was walking slowly
and if in another they were walking
more quickly, or even if there were
accelerations and decelerations
during the course of one
observation. DTW has been applied
to video, audio, and graphics --
indeed, any data which can be
turned into a linear representation
can be analysized with DTW.
A well known application has been
automatic speech recognition, to
cope with different speaking speeds.
In general, it is a method that allows
a computer to find an optimal match
between two given sequences (e.g.
time series) with certain restrictions,
i.e. the sequences are "warped" non-
linearly to match each other.
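A compact sketch of the classic dynamic-programming formulation (an illustration only; variable names are ours): the cumulative cost D[i, j] is the local distance plus the cheapest of the three allowed predecessor cells, so the two sequences are warped non-linearly onto each other.

import numpy as np

def dtw_distance(a, b):
    # Dynamic time warping distance between two 1-D sequences a and b.
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])            # local distance
            D[i, j] = cost + min(D[i - 1, j],          # insertion
                                 D[i, j - 1],          # deletion
                                 D[i - 1, j - 1])      # match
    return D[n, m]

# The same shape produced at different speeds still matches closely:
print(dtw_distance([0, 1, 2, 3, 2, 1], [0, 0, 1, 2, 2, 3, 3, 2, 1]))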

Knowledge-based speech
recognition
This method uses a stored database
of commands and compares spoken
words against the entries in that database.
Applications of speech
recognition
Command recognition - Voice
user interface with the
computer
Dictation
Interactive Voice Response
Automotive speech
recognition
Medical Transcription
Pronunciation Teaching in
computer-aided language
learning applications
Automatic Translation
Hands-free computing

Speaker recognition
Speaker recognition, or voice
recognition is the task of recognizing
people from their voices. Such
systems extract features from
speech, model them and use them to
recognize the person from his/her
voice.
Note that strictly speaking there is a
difference between speaker
recognition (recognizing who is
speaking) and speech recognition
(recognizing what is being said).
In practice, however, the two terms
are frequently confused, and voice
recognition is often used as a
synonym for speech recognition.
Speaker recognition has a history
dating back some four decades,
where the output of several analog
filters was averaged over time for
matching. Speaker recognition uses
the acoustic features of speech that
have been found to differ between
individuals. These acoustic patterns
reflect both anatomy (e.g., size and
shape of the throat and mouth) and
learned behavioral patterns (e.g.,
voice pitch, speaking style). This
incorporation of learned patterns
into the voice templates (the latter
called "voiceprints") has earned
speaker recognition its classification
as a "behavioral biometric."
Verification versus
identification
Generally, two applications of
speaker recognition can be
distinguished: If the speaker claims
to be of a certain identity and the
voice is used to verify this claim this
is called speaker verification or voice
authentication. Recent evidence from
linguistics has raised doubts about
the security of using speaker
identification as a means of user
authentication. On the other hand,
speaker identification is the task of
determining an unknown speaker's
identity. In a sense speaker
verification is a 1:1 match where one
speaker's voice is matched to one
template (and possibly a general
world template) whereas speaker
identification is a 1:N match where
the voice is matched to N templates.
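A hedged sketch of the 1:1 versus 1:N distinction (the feature vectors, names, cosine-similarity scoring and threshold are illustrative assumptions, not the paper's method):

import numpy as np

def similarity(a, b):
    # cosine similarity between two speaker feature vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

templates = {"alice": np.array([0.9, 0.1, 0.3]),
             "bob":   np.array([0.2, 0.8, 0.5])}
probe = np.array([0.85, 0.15, 0.35])     # features extracted from the test utterance

# Verification (1:1): compare against the claimed identity's template only.
accepted = similarity(probe, templates["alice"]) > 0.95

# Identification (1:N): compare against all N templates and take the best match.
identity = max(templates, key=lambda name: similarity(probe, templates[name]))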
Variants of speaker
recognition
Each speaker recognition system has
two phases: Enrollment and test.
During enrollment the speaker's
voice is recorded and typically a
number of features are derived to
form a voice print, template, or
model. In the test phase (also called
verification or identification phase)
the speaker's voice is matched to the
templates or models.
Speaker recognition systems employ
three styles of spoken input: text-
dependent, text-prompted and text-
independent. This relates to the
spoken text used during enrollment
versus test.
If the text must be the same for
enrollment and test this is called
text-dependent recognition. It can be
divided further into two cases: The
highest accuracies can be achieved if
the text to be spoken is fixed. This
has the advantage that the system
designer can devise a text which
emphasizes speaker differences.
However, since the text is always the
same, such systems are vulnerable to
impostors. Furthermore, it is not
very user friendly if all users have to
remember some complex text and in
addition it makes the system
language dependent.
Text-independent systems are most
often used for speaker identification
as they require very little if any
cooperation by the speaker. In this
case the text during enrollment and
test is different. In fact, the
enrollment may happen without the
user's knowledge. Some recorded
piece of speech may suffice.
Since text-independent systems have
no knowledge of the text being
spoken only general speaker-specific
properties of the speaker's voice can
be used. This does limit the accuracy
of the recognition. On the other
hand, this approach is also
completely language independent.
Technology
The various technologies used to
process and store voiceprints include
frequency estimation, Hidden
Markov models, pattern matching
algorithms, neural networks, matrix
representation and decision trees.
Some systems also use "anti-
speaker" techniques, such as cohort
models, and world models.
Speech encoding
Speech coding is the compression of
speech (into a code) for transmission
with speech codecs that use audio
signal processing and speech
processing techniques.
The two most important applications
using speech coding are mobile
phones and internet phones.
The techniques used in speech
coding are similar to that in audio
data compression and audio coding
where knowledge in psychoacoustics
is used to transmit only data that is
relevant to the human auditory
system. For example, in narrowband
speech coding, only information in
the frequency band 400 Hz to 3500
Hz is transmitted but the
reconstructed signal is still adequate
for intelligibility.
The A-law algorithm and the Mu-
law algorithm are used in nearly all
land-line long distance telephone
communications. They can be seen
as a kind of speech encoding,
requiring only 8 bits per sample but
giving effectively 12 bits of
resolution.
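A small illustrative sketch (the companding formula and mu = 255 are the standard Mu-law values, but the code itself is ours, not from the paper): compressing each sample before 8-bit quantization keeps more resolution for quiet parts of the signal, which is what gives the effective 12 bits.

import numpy as np

MU = 255.0   # Mu-law parameter used in North American and Japanese telephony

def mu_law_compress(x):
    # Compand a signal in [-1, 1] before 8-bit quantization.
    x = np.clip(x, -1.0, 1.0)
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def mu_law_expand(y):
    # Invert the companding after the 8-bit samples are received.
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU

x = np.array([-0.5, -0.01, 0.0, 0.01, 0.5])
quantized = np.round(mu_law_compress(x) * 127) / 127   # 8 bits on the wire
restored = mu_law_expand(quantized)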
The most common speech coding
scheme is Code-Excited Linear
Predictive (CELP) coding, which is
used for example in the GSM
standard. In CELP, the modelling is
divided in two stages, a linear
predictive stage that models the
spectral envelope and code-book
based model of the residual of the
linear predictive model.


Major subfields:
Wide-band speech coding
o AMR-WB for
WCDMA networks
Narrow-band speech coding
o FNBDT for military
applications
o SMV for CDMA
networks

Voice analysis is the study of speech
sounds for purposes other than
linguistic content, such as in speech
recognition. Such studies include
mostly medical analysis of the voice
i.e. phoniatrics, but also speaker
identification.
Typical voice problems
A medical study of the voice can be,
for instance, analysis of the voice of
patients who have had a polyp
removed from his or her vocal cords
through an operation. In order to
objectively evaluate the
improvement in voice quality there
has to be some measure of voice
quality. An experienced voice
therapist can quite reliably evaluate
the voice, but this requires extensive
training and is still always
subjective.
Analysis methods
Voice problems that require voice
analysis most commonly originate
from the vocal cords since it is the
sound source and is thus most
actively subject to tiring. However,
analysis of the vocal cords is
physically difficult. The location of
the vocal cords effectively prohibits
direct measurement of movement.
Imaging methods such as x-rays or
ultrasounds do not work because the
vocal cords are surrounded by
cartilage, which distorts image
quality. Movements in the vocal
cords are rapid, fundamental
frequencies are usually between 80
and 300 Hz, thus preventing usage of
ordinary video. High-speed videos
provide an option but in order to see
the vocal cords the camera has to be
positioned in the throat which makes
speaking rather difficult.


Conclusion

Speech technology has experienced a
major paradigm shift in the last
decade: from speech
science, it became speech
engineering. The
increasing availability of large
databases, the existence of
organizations responsible for
collecting and redistributing speech
and text data, the growing need for
algorithms that work in real
applications requires people to act
as engineers more than as experts.
Currently emerging products and
technologies are certainly less
human-like than what we
expected, but they tend to work in
real time, with today's machines.







STEGANOGRAPHY IN IMAGES
USING PIXEL REPLACEMENT
TECHNIQUE







M.SHRAVYA                          M.SAVYA
III/IV B.Tech                      IV/IV B.Tech
Mail id: shravya_21m@rediffmail.com
savyachowdary@rediffmail.com


NARAYANA ENGG. COLLEGE
NELLORE.






ABSTRACT
In this age of universal electronic connectivity, of viruses and hackers, of electronic
eavesdropping and electronic fraud, there is indeed no time at which security of information
does not matter. The explosive growth of computer systems and their interconnections via
networks has increased the dependency on the information stored and communication using
these systems. Thus the field of cryptography has got more attention nowadays. More and
more complex techniques for encrypting the information are proposed every now and then.
Some advanced encryption algorithms like RSA, DES, AES etc. which are extremely hard to
crack have been developed. But, as usual, when a small step is made to improve the security,
more work is done in the opposite direction by the hackers to break it. Thus they are able to
attack most of these algorithms and that too, successfully. Even complex algorithms like RSA
are no exception to this.
So, to deceive the hackers, people have started to follow a technique called
Steganography. It is not an entirely new technique and has been in the practice from ancient
times. In this method, the data is hidden behind unsuspecting objects like images, audio, video
etc. so that people cannot even recognize that there is a second message behind the object.
Images are commonly used in this technique.
In this paper, we have proposed a technique for hiding data secretly behind images. The
existing techniques for image Steganography have some serious drawbacks and we have tried
to overcome those with ours. Here, the pixels in the images are replaced with the new ones,
which are almost identical to the old ones, in a manner that can be used to retrieve back the
hidden data. We have implemented this technique practically and found the results satisfying.
STEGANOGRAPHY USING PIXEL REPLACEMENT TECHNIQUE
In today's world of increased security concern on the Internet due to hacking, strong
cryptography techniques are required. But, unfortunately, most of the cryptography
techniques have become vulnerable to attack by the snoopers. This applies to some of the
advanced encryption methods too. So, the need of the hour is to find new methods for keeping
the information secret. One such method commonly proposed nowadays is Steganography.
STEGANOGRAPHY DEFINITION:
Steganography is the art and science of communicating in a way which hides the
existence of the communication. In contrast to cryptography, where the "enemy" is allowed to
detect, intercept and modify messages without being able to violate certain security premises
guaranteed by a cryptosystem, the goal of steganography is to hide messages inside other
harmless messages in a way that does not allow any enemy to even detect that there is a
second secret message present.
STEGANOGRAPHY IN IMAGES:
In essence, image steganography is about exploiting the limited powers of the human
visual system. Within reason, any plain text, ciphertext, other images, or anything that can be
embedded in a bit stream can be hidden in an image. The common methods followed for hiding
data in images are the Least Significant Bit (LSB) Insertion technique, in which the LSBs of
the pixel values are replaced with the data to be encoded in binary form; the Masking
Technique, in which the original bits are masked with data bits; and the Filtering Technique,
in which certain transformations are done on the image to hide data. The last two techniques
hide data by marking an image in a manner similar to paper watermarks. But there are some
drawbacks with these methods which hinder their use.
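For reference, here is a minimal sketch of the baseline LSB insertion method described above (an illustration using the Pillow library and placeholder file names, neither of which the paper specifies): each message bit overwrites the least significant bit of one red channel value.

from PIL import Image

def lsb_embed(image_path, message, out_path):
    img = Image.open(image_path).convert("RGB")
    px = img.load()
    w, _ = img.size
    bits = [int(b) for byte in message.encode() for b in format(byte, "08b")]
    for i, bit in enumerate(bits):
        x, y = i % w, i // w
        r, g, b = px[x, y]
        px[x, y] = ((r & ~1) | bit, g, b)   # replace the LSB of the red channel
    img.save(out_path)

lsb_embed("cover.bmp", "secret", "stego.bmp")   # placeholder file names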
DRAWBACKS IN THE CURRENT TECHNIQUES:
Extremely liable to attacks like Image Manipulation techniques where the pixels will be
scanned for a possible relation which will be used to trace out the actual characters.
Only 24-bit images are suitable, and 8-bit images are to be used at great risk.
Extreme Care needs to be taken in the selection of the cover image, so that changes to
the data will not be visible in the stego-image.
Commonly known images, such as famous paintings must be avoided.
THE PIXEL REPLACEMENT TECHNIQUE:
Because of the drawbacks in the currently followed techniques, we propose a new
technique for hiding data in images. Here, we replace the existing pixels in the image with the
new ones in such a way that no difference is visible between the steganographed and the
original image.
THE ALGORITHM:
Encoding:
The Algorithm used for encoding in this technique can be described with the following
steps:
1. Get the Image, Message to be hidden and the Password.
2. Encrypt the message and the password.
3. Move some rows below the first row in the image and fix a reference position near the
left edge for odd characters and near the right for even characters.
4. For each character in the original data do
find a position corresponding to that character
search the surrounding pixels and find a pixel value closer to all of them
replace the current pixel and the reference pixel with this value
move to the next row
5. Set a pixel value as a threshold.
6. Repeat steps 4 and 5 for the Password from the bottom of the image.
Decoding:
The Algorithm for retrieving the original message from the steganographed image
follows this sequence:
1. Get the Image and the Password.
2. Move to the bottom row in the image where password hiding starts.
3. Find the value of the reference pixel.
4. Search the entire row for the same pixel value.
5. Find the position of that pixel and decode the character.
6. Repeat steps 3 thro 5 till the threshold is reached.
7. Concatenate all the characters found so far (Actual Password).
8. If the found password does not match the given password go to step 11.
9. Move to the top row in which the first character of original data was stored.
10. Repeat the sequence followed in steps 3 to 7 to get the original message.
11. Display the result.
FLOW DIAGRAM
The flow diagram of our proposed system may be shown as:
Encoding Decoding




IMPLEMENTATION:
With the algorithm described in brief, let us describe the pixel replacement technique in
detail. First, we shall see how the original message and the password are hidden in the image,
and then we'll discuss how the message is retrieved by an authorized person who knows the
correct password.
Encrypting the Message & Password:
First, the original message and the password are obtained from the user. Then, they are
encrypted using any of the standard encryption algorithms such as RSA or DES. This encryption
step is an additional safety feature added to the technique. The encrypted message is then fed
as the input to the next step.
Choosing a position to hide the character:
The actual process of steganography starts here. The image is scanned row-wise from the top.
The first few rows are omitted and a suitable row is reached. The message is split into
individual characters, and the following process is repeated for every character in the message. A
position for hiding the character is chosen according to some relation with that character.
The relation can be something like the ASCII value of the character, or the order of occurrence of
that character in alphabetical or reverse order if it is a letter, etc.
For example, the position of the character R can be chosen as:
ASCII value of R = 82,
so position = 82 - 50 = 32.
(This is only an example; any value instead of 50 can be used.)
Finding a Suitable Colour:
Once a position is chosen, the values of all the pixels surrounding the pixel in that
position are found. Since this position is usually not near the edge of the image, there will be 8
pixels surrounding it. A pixel value, i.e., a colour, is chosen so that it does not differ much
from those of the 8 pixels. This is the most difficult step in the whole process. The value will
differ only by a small value. Such a small change in the colour will be indiscernible to the
users.
Replacing the pixels:
Now, with the replacement colour found, it is time to replace the pixels with the
new colour. A position near the left side of the row is fixed as the reference position for the
odd-numbered characters, i.e., the 1st, 3rd, 5th character and so on. Similarly, for the even-numbered
characters, a position close to the right side of the row is selected as the reference
position. These reference positions are the same for all the characters. When the reference
position is chosen, the pixel in that position is replaced with the new colour. Then, the
pixel at the position already found for the character (in our example, position 32) is also replaced with
the new colour.
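The position choice, the neighbour-based colour choice and the two pixel replacements can be sketched as follows. This is only an illustration of the idea under simplifying assumptions (Pillow for image access, the average of the eight neighbours as the "close" colour, 50 as the offset, a single reference column, and placeholder file names; none of these are fixed by the paper).

from PIL import Image

OFFSET, REF_COL = 50, 2       # assumed values; the paper leaves both configurable

def hide_character(px, row, ch):
    # Hide one character in the given row by replacing two pixels with the same colour.
    col = ord(ch) - OFFSET    # position derived from the character's ASCII value
    neighbours = [px[col + dx, row + dy]
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    # A colour close to all eight neighbours: their per-channel average.
    new_colour = tuple(sum(c[i] for c in neighbours) // 8 for i in range(3))
    px[col, row] = new_colour       # pixel at the character's position
    px[REF_COL, row] = new_colour   # reference pixel for this row

img = Image.open("cover.bmp").convert("RGB")
px = img.load()
for i, ch in enumerate("HELLO"):
    hide_character(px, row=10 + i, ch=ch)   # one character per row, moving down
img.save("stego.bmp")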
Example:
Let us consider a sample image whose pixels are replaced in this way. This can be
explained by the following figure:
(a)Image Before (b) Image After
Replacement Replacement




For the sake of explanation, the pixels are shown clearly in this figure. But in real situations
these will not be visible at all, because the colour chosen for the replacement is so close to the
original colour that it cannot be detected by the human eye, yet it can still be found by the
computer. The above-mentioned steps of finding a suitable colour and replacing the pixels are
repeated for all the characters in the encrypted message.
Setting a Threshold:
The final step in hiding the message is to set a threshold pixel in a fixed position to
indicate the end of the encrypted message. This is essential for decoding the message from the
image. Otherwise, we cannot find the end of the message.
Hiding the Password:
The same steps of choosing the colour, replacing the pixels and setting a threshold are
repeated for hiding the password; the only difference is that the image is scanned row-wise
from the last row instead of from the first row.

Retrieving the Message:
The process of retrieving the original message from the steganographed image is
similar to the hiding process, but in reverse order. First, the password is obtained
from the user. The image is scanned from the last row until the row in which the password
hiding started is reached. The reference pixel value is found, and the position of that colour in
that row is noted. Then the relation used previously for choosing the position is inverted to
recover the original character. This can be explained as:
If the position is 32, then
32 + 50 = 82,
and the character with ASCII value 82 is R.
Similarly, the other characters are found until the threshold is reached, and all of them are
concatenated to get the original password. The given password and the recovered password are
then compared; if they match, the further steps are carried out, otherwise an error message is
displayed.
If the password matches, the image is scanned from the top until the starting row
where the data hiding started is reached. The same steps of finding the reference pixel and
the position of the other pixel are repeated until the threshold. Then all the characters are
joined and the original message is displayed.
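A matching decode sketch, under the same assumptions as the encoding sketch above (Pillow, offset 50, reference column 2, placeholder file names): find the reference colour in each row, locate the other pixel carrying that colour, and map its column back to a character.

from PIL import Image

OFFSET, REF_COL = 50, 2       # must match the assumed encoding parameters

def reveal_character(px, row, width):
    ref_colour = px[REF_COL, row]
    for col in range(width):
        if col != REF_COL and px[col, row] == ref_colour:
            return chr(col + OFFSET)    # invert the relation: position + offset -> ASCII
    return ""

img = Image.open("stego.bmp").convert("RGB")
px = img.load()
message = "".join(reveal_character(px, 10 + i, img.size[0]) for i in range(5))
print(message)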
PRACTICAL IMPLEMENTATION:
We have completed a project implementing this pixel replacement technique in Visual Basic
6.0. Various types of images such as JPEG, BMP and GIF were used as source images and the
results were noted. The results are promising. Let us see one of the results here, which uses a
BMP image:
An illustration of the Pixel Replacement technique
realized using Visual Basic

Original Image Contaminated Image
As seen from this example, there are no big changes visible between the actual image
and the one in which data is hidden. Thus, our proposed technique can be a useful one for
hiding messages in images.
ADVANTAGES OF OUR TECHNIQUE:
Cost of cracking the hidden message is extremely high.
The data cannot be easily decoded without the key using Image Manipulation
techniques.
Any type of image, 8 or 24 bits can be used.
There is no increase in the size of the image due to data in it.
There are no constraints on the choice of the image.
CONCLUSION:
To overcome the drawbacks in the existing cryptography and steganography
techniques, we have proposed a new technique for hiding data in images. Our technique is less
prone to attacks, since the data is strongly encrypted and the cost of retrieving it by
unauthorized persons is extremely high. Since the pixels are replaced with almost identical
pixels, it is difficult even to identify that a second message is hidden. So, we hope that
our technique will be used widely in the future.
TEXTURE ANALYSIS AND SYNTHESIS
Ram Prasad Tripathy
ME (Applied Electronics)
Hindustan College of Engineering,
Chennai


Abstract
We present a simple image-based method of generating novel
visual appearance in which a new image is synthesized by
stitching together small patches of existing images. We call this
process image quilting. First, we use quilting as a fast and very
simple texture synthesis algorithm which produces surprisingly
good results for a wide range of textures. Second, we extend the
algorithm to perform texture transfer: rendering an object with
a texture taken from a different object. More generally, we
demonstrate how an image can be re-rendered in the style of a
different image. The method works directly on the images and
does not require 3D information.

Keywords: Texture Synthesis, Texture Mapping, Image-based
Rendering

1 Introduction
In the past decade computer graphics experienced a wave of activity
in the area of image-based rendering as researchers explored the idea
of capturing samples of the real world as images and using them to
synthesize novel views rather than recreating the entire physical
world from scratch. This, in turn, fueled interest in image based
texture synthesis algorithms. Such an algorithm should be able to
take a sample of texture and generate an unlimited amount of image
data which, while not exactly like the original, will be perceived by
humans to be the same texture. Furthermore, it would be useful to be
able to transfer texture from one object to anther (e.g. the ability to
cut and paste material properties on arbitrary objects). In this paper
we present an extremely simple algorithm to address
the texture synthesis problem. The main idea is to synthesize
new texture by taking patches of existing texture and stitching them
together in a consistent way. We then present a simple generalization
of the method that can be used for texture transfer.

1.1 Previous Work
Texture analysis and synthesis has had a long history in psychology,
statistics and computer vision. In 1950 Gibson pointed out
the importance of texture for visual perception [8], but it was the
pioneering work of Bela Julesz on texture discrimination [12] that
paved the way for the development of the field. Julesz suggested










Figure 1: Demonstration of quilting for texture synthesis and texture
transfer. Using the rice texture image (upper left), we can synthesize
more such texture (upper right). We can also transfer the rice texture
onto another image (lower left) for a strikingly different result.

that two texture images will be perceived by human observers to
be the same if some appropriate statistics of these images match.
This suggests that the two main tasks in statistical texture synthesis
are (1) picking the right set of statistics to match, (2) finding an
algorithm that matches them. Motivated by psychophysical and
computational models of human texture discrimination [2, 14],
Heeger and Bergen [10] proposed to analyze texture in terms of
histograms of filter responses at multiple scales and orientations.
Matching these histograms iteratively was sufficient to produce
impressive synthesis results for stochastic textures (see [22] for a
theoretical justification). However, since the histograms measure
marginal, not joint, statistics, they do not capture important
relationships across scales and orientations; thus the algorithm
fails for more structured textures. By
also matching these pairwise statistics, Portilla and Simoncelli [17]
were able to substantially improve synthesis results for structured
textures at the cost of a more complicated optimization procedure.
In the above approaches, texture is synthesized by taking a random
noise image and coercing it to have the same relevant statistics
as in the input image. An opposite approach is to start with an input
image and randomize it in such a way that only the statistics
to be matched are preserved. De Bonet [3] scrambles the input in
a coarse-to-fine fashion, preserving the conditional distribution of
filter outputs over multiple scales (jets). Xu et al. [21], inspired by
the Clone Tool in PHOTOSHOP, propose a much simpler approach
yielding similar or better results. The idea is to take random square
blocks from the input texture and place them randomly onto the
synthesized texture (with alpha blending to avoid edge artifacts).


Figure 2: Quilting texture. Square blocks from the input
texture are patched together to synthesize a new texture
sample: (a) blocks are chosen randomly (similar to [21, 18]),
(b) the blocks overlap and each new block is chosen so as to
agree with its neighbors in the region of overlap, (c) to
reduce blockiness the boundary between blocks is computed
as a minimum cost path through the error surface at the
overlap.

The statistics being preserved here are simply the arrangement of
pixels within each block. While this technique will fail for highly
structured patterns (e.g. a chess board) due to boundary
inconsistencies, for many stochastic textures it works remarkably
well. A related method was successfully used by Praun et al. [18] for
semiautomatic texturing of non-developable objects.
Enforcing statistics globally is a difficult task and none of the
above algorithms provides a completely satisfactory solution. An
easier problem is to enforce statistics locally, one pixel at a time.
Efros and Leung [6] developed a simple method of growing texture
using non-parametric sampling. The conditional distribution
of each pixel given all its neighbors synthesized so far is estimated
by searching the sample image and finding all similar neighborhoods.
(We have recently learned that a nearly identical algorithm
was proposed in 1981 by Garber [7] but discarded due to its then
computational intractability.) The algorithm produces good results
for a wide range of textures, but is excruciatingly slow (a full search
of the input image is required to synthesize every pixel!). Several
researchers have proposed optimizations to the basic method
including Wei and Levoy [20] (based on earlier work by Popat and
Picard [16]), Harrison [9], and Ashikhmin [1]. However, all these
improvements still operate within the greedy single-pixel-at-a-time
paradigm and as such are susceptible to falling into the wrong part
of the search space and starting to grow garbage [6].
Methods have been developed in particular rendering domains
which capture the spirit of our goals in texture transfer. Our goal is
like that of work in non-photorealistic rendering (e.g. [4, 19, 15]).
1.2 Motivation
One curious fact about one-pixel-at-a-time synthesis algorithms
such as Efros and Leung [6] is that for most complex textures very
few pixels actually have a choice of values that can be assigned
to them. That is, during the synthesis process most pixels have
their values totally determined by what has been synthesized so far.
As a simple example, let us take a pattern of circles on a plane.
Once the algorithm has started synthesizing a particular circle, all
the remaining pixels of that circle (plus some surrounding ones) are
completely determined! In this extreme case, the circle would be
called the texture element (texel), but this same effect persists to
a lesser extent even when the texture is more stochastic and there
are no obvious texels. This means that a lot of searching work is
wasted on pixels that already know their fate. It seems then, that
the unit of synthesis should be something more than a single pixel,
a patch perhaps. Then the process of texture synthesis would
be akin to putting together a jigsaw puzzle, quilting together the
patches, making sure they all fit together. Determining precisely
what the patches are for a given texture, and how they are put
together, is still an open problem. Here we will present a very naive
version of stitching together patches of texture to form the output
image. We call this method image quilting.

2 Quilting
In this section we will develop our patch-based texture synthesis
procedure. Let us define the unit of synthesis Bi to be a square block
of user-specified size from the set SB of all such overlapping blocks
in the input texture image. To synthesize a new texture image, as a
first step let us simply tile it with blocks taken randomly from SB.
The result shown in Figure 2(a) already looks somewhat reasonable
and for some textures will perform no worse than many previous
complicated algorithms as demonstrated by [21, 18]. Still, the result
is not satisfying, for no matter how much smoothing is done across
the edges, for most structured textures it will be quite obvious that
the blocks do not match.
As the next step, let us introduce some overlap in the placement
of blocks onto the new image. Now, instead of picking a random
block, we will search SB for such a block that by some measure
agrees with its neighbors along the region of overlap. Figure 2(b)
shows a clear improvement in the structure of the resulting texture,
however the edges between the blocks are still quite noticeable.
Once again, smoothing across the edges will lessen this problem,
but we will attempt to solve it in a more principled way.
Finally, we will let the blocks have ragged edges which will allow
them to better approximate the features in the texture. Now,
before placing a chosen block into the texture we will look at the
error in the overlap region between it and the other blocks. We find
a minimum cost path through that error surface and declare that to
be the boundary of the new block. Figure 2(c) shows the results of
this simple modification.
Figure 3: Image quilting synthesis results for a wide range of
textures. The resulting texture (right side of each pair) is synthesized
at twice
the size of the original (left).

2.1 Minimum Error Boundary Cut
We want to make the cut between two overlapping blocks on the
pixels where the two textures match best (that is, where the overlap
error is low). This can easily be done with dynamic programming
(Dijkstra's algorithm can also be used [5]).
The minimal cost path through the error surface is computed in
the following manner. If B1 and B2 are two blocks that overlap
along their vertical edge (Figure 2c), with regions of overlap
B1_ov and B2_ov respectively, then the error surface is defined as
e = (B1_ov - B2_ov)^2. To find the minimal vertical cut through this
surface we traverse e (i = 2..N) and compute the cumulative minimum
error E for all paths:

    E(i,j) = e(i,j) + min( E(i-1,j-1), E(i-1,j), E(i-1,j+1) ).   (1)

In the end, the minimum value of the last row of E indicates the end
of the minimal vertical path through the surface, and one can trace
back to find the path of the best cut. A similar procedure can be
applied to horizontal overlaps. When there is both a vertical and a
horizontal overlap, the minimal paths meet in the middle and the
overall minimum is chosen for the cut.
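As an illustration of this dynamic programming step, the following Python/NumPy sketch (ours, not the paper's MATLAB code) builds the error surface for a vertical overlap and traces the minimum cost cut of Equation (1); the array names ov1 and ov2 for the two overlap strips are assumptions.

```python
# A minimal sketch of the minimum error boundary cut, assuming ov1 and ov2
# are the overlapping strips of blocks B1 and B2 as NumPy arrays.
import numpy as np

def min_error_vertical_cut(ov1, ov2):
    """Return, top to bottom, the column index of the minimal-cost cut."""
    e = (ov1.astype(float) - ov2.astype(float)) ** 2   # error surface
    if e.ndim == 3:                                     # colour images:
        e = e.sum(axis=2)                               # sum over channels
    E = e.copy()                                        # cumulative error
    for i in range(1, E.shape[0]):                      # Eq. (1): each entry
        for j in range(E.shape[1]):                     # adds the min of the
            lo, hi = max(j - 1, 0), min(j + 2, E.shape[1])
            E[i, j] = e[i, j] + E[i - 1, lo:hi].min()   # three parents above
    # Trace back from the minimum of the last row to recover the cut path.
    path = [int(np.argmin(E[-1]))]
    for i in range(E.shape[0] - 2, -1, -1):
        j = path[-1]
        lo, hi = max(j - 1, 0), min(j + 2, E.shape[1])
        path.append(lo + int(np.argmin(E[i, lo:hi])))
    return path[::-1]
```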

2.2 The Image Quilting Algorithm
The complete quilting algorithm is as follows:

- Go through the image to be synthesized in raster scan order in
  steps of one block (minus the overlap).

- For every location, search the input texture for a set of blocks
  that satisfy the overlap constraints (above and left) within
  some error tolerance. Randomly pick one such block.

- Compute the error surface between the newly chosen block
  and the old blocks at the overlap region. Find the minimum
  cost path along this surface and make that the boundary of the
  new block. Paste the block onto the texture. Repeat.

The size of the block is the only parameter controlled by the user
and it depends on the properties of a given texture; the block must
be big enough to capture the relevant structures in the texture, but
small enough so that the interaction between these structures is left
up to the algorithm.
In all of our experiments the width of the overlap edge (on one
side) was 1/6 of the size of the block. The error was computed
using the L2 norm on pixel values. The error tolerance was set to
be within 0.1 times the error of the best matching block.
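A compact sketch of this loop follows. It is our own illustration in Python/NumPy (the authors' implementation was in MATLAB), it uses the 1/6 overlap and 0.1 tolerance quoted above, and for brevity it pastes each chosen block directly instead of cutting the minimum-error seam of Section 2.1; all variable and function names are ours.

```python
# Illustrative quilting loop: raster-scan placement, SSD overlap matching
# within a 0.1 tolerance, and a random choice among the good candidates.
import numpy as np

def quilt(texture, out_size, block=40, overlap=None, tol=0.1, rng=None):
    rng = rng or np.random.default_rng()
    overlap = overlap or block // 6                 # paper uses 1/6 of block
    tex = texture.astype(float)
    H, W = tex.shape[:2]
    step = block - overlap
    out = np.zeros((out_size, out_size) + tex.shape[2:], dtype=float)

    # Pre-extract every candidate block from the input texture.
    blocks = [tex[y:y + block, x:x + block]
              for y in range(H - block + 1) for x in range(W - block + 1)]

    for by in range(0, out_size - block + 1, step):
        for bx in range(0, out_size - block + 1, step):
            if by == 0 and bx == 0:
                out[:block, :block] = blocks[rng.integers(len(blocks))]
                continue
            region = out[by:by + block, bx:bx + block]
            errs = []
            for b in blocks:                        # SSD over the overlap
                err = 0.0                           # with what is already
                if by > 0:                          # synthesized: top strip
                    err += ((b[:overlap] - region[:overlap]) ** 2).sum()
                if bx > 0:                          # and left strip
                    err += ((b[:, :overlap] - region[:, :overlap]) ** 2).sum()
                errs.append(err)
            errs = np.array(errs)
            best = errs.min()
            ok = np.flatnonzero(errs <= (1 + tol) * best)   # 0.1 tolerance
            out[by:by + block, bx:bx + block] = blocks[rng.choice(ok)]
    return out
```

This brute-force search visits every candidate block for every placement; as noted below, the fixed shape of the constraint region makes the search easy to optimize.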

2.3 Synthesis Results
The results of the synthesis process for a wide range of input textures
are shown on Figures 3 and 4. While the algorithm is particularly
effective for semi-structured textures (which were always the hardest
for statistical texture synthesis), the performance is quite
good on stochastic textures as well. The two most typical problems
are excessive repetition (e.g. the berries image), and mismatched
or distorted boundaries (e.g. the mutant olives image). Both are
mostly due to the input texture not containing enough variability.
Figure 6 shows a comparison of quilting with other texture synthesis
algorithms.
The algorithm is not only trivial to implement but is also quite
fast: the unoptimized MATLAB code used to generate these results
ran for between 15 seconds and several minutes per image depending
on the sizes of the input and output and the block size used.
Because the constraint region is always the same, it's very easy to
optimize the search process without compromising the quality of
the results (see also Liang et al. [13], who report real-time
performance using a very similar approach).


Figure 4: More image quilting synthesis results (for each pair, left is
original, right is synthesized)
Figure 5: Texture transfer: here, we take the texture from the orange
and the Picasso drawing and transfer it onto different objects. The
result has the texture of the source image and the correspondence
map values of the target image.

3 Texture Transfer
Because the image quilting algorithm selects output patches based
on local image information, it is particularly well suited for texture
transfer. We augment the synthesis algorithm by requiring that
each patch satisfy a desired correspondence map C, as well as
satisfy the texture synthesis requirements. The correspondence map
is a spatial map of some corresponding quantity over both the texture
source image and a controlling target image. That quantity could
include image intensity, blurred image intensity, local image
orientation angles, or other derived quantities.
An example of texture transfer is shown in Figure 1. Here, the
correspondence map is the (luminance) image intensity of the
man's face. That is, bright patches of face and bright patches of rice
are defined to have a low correspondence error. The synthesized
rice texture conforms to this second constraint, yielding a rendered
image where the face image appears to be rendered in rice.
For texture transfer, the image being synthesized must respect two
independent constraints: (a) the output must be a legitimate, synthesized
example of the source texture, and (b) the correspondence image
mapping must be respected. We modify the error term of the image
quilting algorithm to be the weighted sum: alpha times the block overlap
matching error plus (1 - alpha) times the squared error between
the correspondence map pixels within the source texture block and
those at the current target image position.
The parameter alpha determines the tradeoff between the texture synthesis
and the fidelity to the target image correspondence map.
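A minimal sketch of this modified error term, in Python; the function and argument names are ours, and the default alpha of 0.8 is an illustrative assumption rather than a value given in the paper.

```python
# Sketch of the texture-transfer error: alpha weights the block-overlap
# matching error against the squared correspondence-map difference between
# the candidate source block and the current target position.
import numpy as np

def transfer_error(overlap_err, corr_src_block, corr_target_patch, alpha=0.8):
    corr_err = ((corr_src_block.astype(float)
                 - corr_target_patch.astype(float)) ** 2).sum()
    return alpha * overlap_err + (1.0 - alpha) * corr_err
```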

Because of the added constraint, sometimes one synthesis pass
through the image is not enough to produce a visually pleasing result.
In such cases, we iterate over the synthesized image several
times, reducing the block size with each iteration. The only change
from the non-iterative version is that in satisfying the local texture
constraint the blocks are matched not just with their neighbor
blocks on the overlap regions, but also with whatever was
synthesized
at this block in the previous iteration. This iterative scheme
works surprisingly well: it starts out using large blocks to roughly
assign where everything will go and then uses smaller blocks to
make sure the different textures fit well together. In our tests, we
used N =3 to N =5 iterations, reducing the block size by a third
each time. Our texture transfer method can be applied to render a
photograph using the line drawing texture of a particular source
drawing; or to transfer material surface texture onto a new image (see
Figure 5). For the orange texture the correspondence maps are the
source and target image luminance values; for Picasso the
correspondence maps are the blurred luminance values.
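As a small worked example of the iterative schedule described above (several passes, shrinking the block side by a third each time), the following snippet prints one possible sequence of block sizes; the starting size of 48 pixels is an assumption for illustration.

```python
# Block-size schedule for the iterative scheme: each pass reduces the block
# side by a third (the starting value is illustrative, not the paper's).
sizes = [48]
for _ in range(4):
    sizes.append(max(4, round(sizes[-1] * 2 / 3)))
print(sizes)        # [48, 32, 21, 14, 9]
```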



4 Conclusion
In this paper we have introduced image quilting, a method of
synthesizing a new image by stitching together small patches of
existing images. Despite its simplicity, this method works
remarkably
well when applied to texture synthesis, producing results that are
equal to or better than the Efros & Leung family of algorithms, but with
improved stability (less chance of growing garbage) and at a
fraction
of the computational cost. We have also extended our method
to texture transfer in a general setting with some very promising
results.


References
[1] M. Ashikhmin. Synthesizing natural textures. In Symposium on Interactive
3D Graphics, 2001.
[2] J. Bergen and E. Adelson. Early vision and texture perception. Nature,
333:363-364, 1988.
[3] J. S. De Bonet. Multiresolution sampling procedure for analysis and
synthesis of texture images. In SIGGRAPH 97, pages 361-368, 1997.
[4] C. J. Curtis, S. E. Anderson, J. E. Seims, Kurt W. Fleischer, and D. H.
Salesin. Computer-generated watercolor. In SIGGRAPH 97, pages 421-430,
1997.
[5] J. Davis. Mosaics of scenes with moving objects. In Proc. IEEE Conf. on
Comp. Vision and Patt. Recog., 1998.
[6] A. A. Efros and T. K. Leung. Texture synthesis by non-parametric
sampling. In International Conference on Computer Vision, pages 1033-1038,
Corfu, Greece, September 1999.
[7] D. D. Garber. Computational Models for Texture Analysis and Texture
Synthesis. PhD thesis, University of Southern California, Image Processing
Institute, 1981.
[8] J. J. Gibson. The Perception of the Visual World. Houghton Mifflin,
Boston, Massachusetts, 1950.
[9] P. Harrison. A non-hierarchical procedure for re-synthesis of complex
textures. In WSCG 2001 Conference proceedings, pages 190-197, 2001. See
also http://www.csse.monash.edu.au/pfh/resynthesizer/.
[10] David J. Heeger and James R. Bergen. Pyramid-based texture
analysis/synthesis. In SIGGRAPH 95, pages 229-238, 1995.
[11] A. Hertzmann, C. E. Jacobs, N. Oliver, B. Curless, and D. H. Salesin.
Image analogies. In SIGGRAPH 01, 2001.
[12] Bela Julesz. Visual pattern discrimination. IRE Transactions on
Information Theory, 8(2):84-92, 1962.
[13] L. Liang, C. Liu, Y. Xu, B. Guo, and H.-Y. Shum. Real-time texture
synthesis by patch-based sampling. Technical Report MSR-TR-2001-40,
Microsoft Research, March 2001.
[14] J. Malik and P. Perona. Preattentive texture discrimination with early
vision mechanisms. JOSA-A, 5(5):923-932, May 1990.
[15] V. Ostromoukhov and R. D. Hersch. Multi-color and artistic dithering. In
SIGGRAPH 99, pages 425-432, 1999.
[16] Kris Popat and Rosalind W. Picard. Novel cluster-based probability model
for texture synthesis, classification, and compression. In Proc. SPIE Visual
Comm. and Image Processing, 1993.
[17] J. Portilla and E. P. Simoncelli. A parametric texture model based on joint
statistics of complex wavelet coefficients. International Journal of Computer
Vision, 40(1):49-71, December 2000.
[18] Emil Praun, Adam Finkelstein, and Hugues Hoppe. Lapped textures. In
SIGGRAPH 00, pages 465-470, 2000.
[19] M. P. Salisbury, M. T. Wong, J. F. Hughes, and D. H. Salesin. Orientable
textures for image-based pen-and-ink illustration. In SIGGRAPH 97, 1997.
[20] Li-Yi Wei and Marc Levoy. Fast texture synthesis using tree-structured
vector quantization. In SIGGRAPH 00, pages 479-488, 2000.
[21] Y. Xu, B. Guo, and H.-Y. Shum. Chaos mosaic: Fast and memory efficient
texture synthesis. Technical Report MSR-TR-2000-32, Microsoft Research,
April 2000.
[22] Song Chun Zhu, Yingnian Wu, and David Mumford. Filters, random
fields and maximum entropy (FRAME). International Journal of Computer
Vision, 27(2):1-20, March/April 1998.


UTILITY FOG: AN APPLICATION OF NANOROBOTICS
K.GANDHI D.NAMRATHA
EEE EEE
COE, GITAM COE, GITAM
gandhi_korukula@yahoo.co.in namratha_dikkala@yahoo.co.in
Contact no. - 9985935538

Abstract: This paper throws light on a
technology in the branch of nanorobotics
named UTILITY FOG. The limitations,
advantages, properties, and applications in space
exploration of this technology are presented.
Utility fog is a collection of tiny robots,
envisioned by Dr. John Storrs Hall while he
was thinking about a nanotechnological
replacement for car seatbelts. The robots would
be microscopic, with extending arms reaching
in several different directions, and could perform
lattice reconfiguration. Grabbers at the ends of
the arms would allow the robots (or foglets) to
mechanically link to one another and share
both information and energy, enabling them to
act as a continuous substance with mechanical
and optical properties that could be varied over
a wide range. Each foglet would have
substantial computing power, and would be
able to communicate with its neighbors.
Introduction: The human body is a pretty
nifty gadget. It has some maddening
limitations, most of which are due to its
essential nature as a bag of seawater. It
wouldn't be too hard, given nanotechnology, to
design a human body that was stronger, lighter,
with a faster brain and less limited senses, able
to operate comfortably in any natural
environment on earth or in outer space
(excluding the Sun and a few other obvious
places).

This Utility Fog material, composed of the individual
foglets shown below, would float loosely over the driver; in
the event of an accident the foglets would hold together via
their 12 arms to form an invisible shield protecting the
driver from injury.

In the virtual environment of the uploads, not
only can the environment be anything you like;
you can be anything you like. You can be big
or small; you can be lighter than air, and fly;
you can teleport and walk through walls. You
can be a lion or an antelope, a frog or a fly, a
tree, a pool, the coat of paint on a ceiling.
You can be these things in the real world, too,
if your body is made of Utility Fog.
What is Utility Fog?
It is an intelligent substance, able to simulate
the physical properties of most common
substances, and having enough processing
power that human-level processes could run in
a handful or so of it. Imagine a microscopic
robot. It has a body about the size of a human
cell and 12 arms sticking out in all directions.
A bucketful of such robots might form a
``robot crystal'' by linking their arms up into a
lattice structure. Now take a room, with people,
furniture, and other objects in it--it's still
mostly empty air. Fill the air completely full of
robots. The robots are called Foglets and the
substance they form is Utility Fog.
With the right programming, the robots can
exert any force in any direction on the surface
of any object. They can support the object, so
that it apparently floats in the air. They can
support a person, applying the same pressures
to the seat of the pants that a chair would. They
can exert the same resisting forces that elbows
and fingertips would receive from the arms and
back of the chair. A program running in the
Utility Fog can thus si The Utility Fog operates
in two modes: First, the ``naive'' mode where
the robots act much like cells, and each robot
occupies a particular position and does a
particular function in a given object. The
second, or ``Fog'' mode, has the robots acting
more like the pixels on a TV screen. The object
is then formed of a pattern of robots, which
vary their properties according to which part of
the object they are representing at the time. An
object can then move across a cloud of robots
without the individual robots moving, just as
the pixels on a CRT remain stationary while
pictures move around on the screen.
The Utility Fog which is simulating air needs
to be impalpable. One would like to be able to
walk through a Fog-filled room without the
feeling of having been cast into a block of solid
Lucite. Of course if one is a Fog-mode upload
this is straightforward; but the whole point of
having Fog instead of a purely virtual reality is
to mix virtual and physical objects in a
seamless way. To this end, the robots
representing empty space can run a fluid-flow
simulation of what the air would be doing if the
robots weren't there. Then each robot moves
where the air it displaces would move in its
absence.
The other major functions the air performs, that
humans notice, are transmitting sound and
light. Both of these properties are obscured by
the presence of Fog in the air, but both can be
simulated at a level sufficient to fool the senses
of humans and most animals by transmitting
the information through the Fog by means we'll
consider later, and reconstructing the physical
signals where they are perceived.
To understand why we want to fill the air with
microscopic robots only to go to so much
trouble to make it seem as if they weren't there,
consider the advantages of a TV or computer
screen over an ordinary picture. Objects on the
screen can appear and disappear at will; they
are not constrained by the laws of physics. The
whole scene can shift instantly from one
apparent locale to another. Completely
imaginary constructions, not possible to build
in physical reality, could be commonplace.
Virtually anything imaginable could be given
tangible reality in a Utility Fog environment.
Why not, instead, build a virtual reality
machine that produces a purely sensory (but
indistinguishable) version of the same apparent
world? The Fog acts as a continuous bridge
between actual physical reality and virtual
reality. The Fog is a universal effector as well as
a universal sensor. Any (real) object in the Fog
environment can be manipulated with an
extremely wide array of patterns of pressure,
force, and support; measured; analyzed;
weighed; cut; reassembled; or reduced to
bacteria-sized pieces and sorted for recycling.
General Properties and Uses
As well as forming an extension of the senses
and muscles of individual people, the Fog can
act as a generalized infrastructure for society at
large. Fog City need have no permanent
buildings of concrete, no roads of asphalt, no
cars, trucks, or buses. It will be more efficient
to build dedicated machines for long distance
energy and information propagation, and
physical transport. For local use, and interface
to the worldwide networks, the Fog is ideal for
all of these functions. It can act as shelter,
clothing, telephone, computer, and automobile.
It will be almost any common household
object, appearing from nowhere when needed
(and disappearing afterwards). It gains a certain
efficiency from this extreme of polymorphism;
consider the number of hardcopy photographs
necessary to store all the images one sees on a
television or computer screen. With Utility Fog
we can have one ``display'' and keep all our
physical possessions on disk.
Another item of infrastructure that will become
increasingly important in the future is
information processing. Nanotechnology will
allow us to build some really monster
computers. Although each Foglet will possess a
comparatively small processor--which is to say
the power of a current-day supercomputer--
there are about 16 million Foglets to a cubic
inch. When those Foglets are not doing
anything else, i.e. when they are simulating the
interior of a solid object or air that nothing is
passing through at the moment, they can be
used as a computing resource (with the caveats
below).
The Limits of Utility Fog Capability
When discussing something as far outside of
everyday experience as the Utility Fog, it is a
good idea to delineate both sides of the
boundary. The Fog is capable of so many
literally amazing things, we will point out a
few of the things it isn't capable of:
--Anything requiring hard metal (cold steel?).
For example, Fog couldn't simulate a drill bit
cutting through hardwood. It would be able to
cut the hole, but the process would be better
described as intelligent sandpaper.
--Anything requiring both high strength and
low volume. A parachute could not be made of
Fog (unless, of course, all the air were filled
with Fog, in which case one could simply fly).
--Anything requiring high heat. A Fog fire
blazing merrily away on Fog logs in a fireplace
would feel warm on the skin a few feet away; it
would feel the same to a hand inserted into the
``flame''.
--Anything requiring molecular manipulation
or chemical transformation. Foglets are simply
on the wrong scale to play with atoms. In
particular, they cannot reproduce themselves.
On the other hand, they can do things like
prepare food the same way a human cook does-
-by mixing, stirring, and using special-purpose
devices that were designed for them to use.
--Fog cannot simulate food, or anything else
that is destined to be broken down chemically.
Eating it would be like eating the same amount
of sand or sawdust.
--Fog can simulate air to the touch but not to
the eyes. The best indications are that it would
look like heavy fog. Thus the Fog would need
to support a pair of holographic goggles in
front of the eyes of an embedded user. Such
goggles are clearly within the capabilities of
the same level of nanotechnology as is needed
for the Fog, but are beyond the scope of this
paper.
ANGELNETS:

Angelnets (and, indeed, utility fog in general)
can have many different settings. These are just
some of them.
Privacy Mode - An angelnet can be set to
'ignore' beings, or areas. This could be done so
that maintenance sections might remain
invisible, or defensive installations not intrude
on baseline lives. This is one of the more
frequent abuse modes, and is often considered
a dead tip-off by angelnet controllers that some
shenanigans are going on if one or more
sections of the angelnet/utility fog go to
privacy mode without prior approval.
Constraint Mode - An angelnet can be set to
restrict mobility of one or more of the beings
inside its purview. This can be anything from
gentle guidance (slowly growing resistance to
motion except in the controller-desired
direction) through to a full body cast
equivalent. In extreme cases, angelnet
controllers have been known to insert
nanotechnic 'tendrils' into sophonts to constrain
internal processes or assist in absorbing shock.
Whisper Mode - An angelnet activated in
whisper mode greatly reduces sound
transmission as it allows the nodes of the 'net to
flex and sway slightly on the support
structures, absorbing portions of the kinetic
energy imparted via sonic waves. Of course,
the angelnet is capable of generating most
sounds at nearly any section of its extent,
allowing stereophonic sound effects for any or
all of the sophonts within it. Some makes of
angelnetting may have difficulty damping
certain high-frequency sounds, based on physical
constraints.
Grav Mode - As a short-term stopgap,
angelnets can reduce baseline requirements for
gravity by reading the baseline's body motions
and responding with pressure appropriate to the
desired motion. In effect, the sophont becomes
a puppet on the strings of the angelnet, but
guides their own manipulations by muscle
contraction and effort. Note that this is not a
long term solution in and of itself, as secondary
effects of microgravity environments on bionts
continue, including pooling of bodily fluids in
extremities, as well as some biochemical
changes. Note also that many baseline
hominids find this mode quite uncomfortable,
as their perceived physical motion may well
not agree with their somatic sensations, leading
to motion sickness.
Object mode - When tactile interaction with a
physical object is desired, angelnetting can take
the place of many (not all) physical objects.
The main limitations are temperature (many
angelnets are unable to generate body heat
without extensive modification and/or jury-
rigging, for instance) and texture, as
angelnetting at full contraction retains a
somewhat 'grainy' feel as the nodes retain some
separation. (Some more-advanced angelnets
get around these limitations with 'props',
sections of various textured and/or
temperature-responsive materials kept handy
for interactions and moved as the sophont
comes in contact with them.)
Service Mode - The UF/angelnet transports
consumables to individuals residing within it. It
can "hide" objects in transit from individuals
until such time as the "delivery" is made.
Medical Mode - If a being experiences an
extreme cutting or puncturing injury, the local
angelnet gushes into the wound and seals off
the blood flow forcibly until emergency units
can respond. If the area is too far from a
civilization center for emergency assistance to
arrive in a timely manner, the angelnet can
`pick up' and `levitate' the sophont to a medical
center. In extreme cases the angelnet acts as the
hands and eyes of a remote doctor, who
provides emergency medical care onsite.
Teleport mode - When a sophont wishes to
travel somewhere within an angelnetted
environment they specify where they want to
go and the net 'teleports' them there. A section
of angelnet at the 'destination' is bound over to
the sophont's control and takes on their form (or
whatever form they specify to represent them).
Meanwhile the net surrounding the traveler
takes on the form of the surroundings at the
destination. As the traveler moves around
inside the simulation (which uses the net as an
omni directional treadmill allowing them to
walk without actually going anywhere) their
movements are duplicated by their projection.
Whatever the traveler's projection experiences
is transmitted to them via the net. Effectively
the being has traveled instantly to a distant
location without ever leaving home.

Advantages of a Utility Fog Environment
Another major advantage for space-filling Fog
is safety. In a car (or its nanotech descendant)
Fog forms a dynamic form-fitting cushion that
protects better than any seatbelt of nylon fibers.
An appropriately built house filled with Fog
could even protect its inhabitants from the
(physical) effects of a nuclear weapon within
95% or so of its lethal blast area.
There are many more mundane ways the Fog
can protect its occupants, not the least being
physically to remove bacteria, mites, pollen,
and so forth, from the air. A Fog-filled home
would no longer be the place that most
accidents happen. First, by performing most
household tasks using Fog as an
instrumentality, the cuts and falls that
accompany the use of knives, power tools,
ladders, and so forth, can be eliminated.
Secondly, the other major class of household
accidents, young children who injure
themselves out of ignorance, can be avoided by
a number of means. A child who climbed over
a stair rail would float harmlessly to the floor.
A child could not pull a bookcase over on
itself; falling over would not be among the
bookcase's repertoire. Power tools, kitchen
implements, and cleaning chemicals would not
normally exist; they or their analogs would be
called into existence when needed and vanish
instead of having to be cleaned and put away.
Outside the home, the possibilities are, if
anything, greater. One can easily imagine
``industrial Fog'' which forms a factory. It
would consist of larger robots. Unlike domestic
Fog, which would have the density and
strength of balsa wood, industrial Fog could
have bulk properties resembling hardwood or
aluminum. A nanotechnology- age factory
would probably consist of a mass of Fog with
special-purpose reactors embedded in it, where
high-energy chemical transformations could
take place. All the physical manipulation,
transport, assembly, and so forth would be
done by the Fog.
Applications in Space Exploration
The major systems of spaceships will need to
be made with special-purpose
nanotechnological mechanisms, and indeed
with such mechanisms pushed much closer to
their true capacities than anything we have
talked about heretofore. In the spaceship's
cabin, however, will be an acceleration couch.
When not accelerating, which is most of the
time, we'd prefer something useful, like empty
space, there. The Utility Fog makes a better
acceleration couch, anyway.
Fill the cabin with Utility Fog and never worry
about floating out of reach of a handhold.
Instruments, consoles, and cabinets for
equipment and supplies are not needed. Non-
simulable items can be embedded in the fog in
what are apparently bulkheads. The Fog can
add great structural strength to the ship itself;
the rest of the structure need be not much more
than a balloon.
The same is true for spacesuits: Fog inside the
suit manages the air pressure and makes
motion easy; Fog outside gives extremely fine
manipulating ability for various tasks. Of
course, like the ship, the suit contains many
special purpose non-Fog mechanisms.
Surround the space station with Fog. It needs
radiation shielding anyway (if the occupants
are long-term); use big industrial Foglets with
lots of redundancy in the mechanism; even so
they may get recycled fairly often. All the
stock problems from SF movies go away:
humans never need go outside merely to fix
something; when EVA is desired for transfer or
recreation, outside Fog provides complete
safety and motion control. It also makes a good
tugboat for docking spaceships.
Homesteaders on the Moon could bring along a
batch of heavy duty Fog as well as the special-
purpose nanotech power generation and waste
recycling equipment. There will be a million
and one things, of the ordinary yet arduous
physical task kind, that must be done to set up
and maintain a self-sufficient household.
Telepresence
An eidolon is the common term for a sophont's
telepresence in utility fog. An eidolon is
generally seen as less personal than a visit in
person, but more so than a virchspace
interaction and far more so than text, audio, or
audiovisual communications. It is commonly
used as an alternative to a personal visit when
direct contact is impossible because of great
distance, radically different environmental
requirements, distrust between the parties
concerned, or matters of social convention and
propriety. Some individuals who avoid using
virchspace for practical or personal reasons
will still consent to send or interact with an
eidolon as an alternative.

Telepresence refers to a set of technologies
which allow a person to feel as if they were
present, to give the appearance that they were
present, or to have an effect, at a location other
than their true location.
Telepresence requires that the senses of the
user, or users, are provided with such stimuli as
to give the feeling of being in that other
location. Additionally, the user(s) may be given
the ability to affect the remote location. In this
case, the user's position, movements, actions,
voice, etc. may be sensed, transmitted and
duplicated in the remote location to bring about
this effect. Therefore information may be
travelling in both directions between the user
and the remote location.
Physical Properties of Utility Fog
Most currently proposed nanotechnological
designs are based on carbon. Carbon is a
marvelous atom for structural purposes,
forming a crystal (diamond) which is very stiff
and strong. However, a Fog built of diamond
would have a problem which nanomechanical
designs of a more conventional form do not
pose: the Fog has so much surface area
exposed to the air that if it were largely
diamond, especially on the surface, it would
amount to a ``fuel-air explosive''. Therefore the
Foglet is designed so that its structural
elements, forming the major component of its
mass, are made of aluminum oxide, a
refractory compound using common elements.
The structural elements form an exoskeleton,
which besides being a good mechanical design
allows us to have an evacuated interior in
which more sensitive nanomechanical
components can operate. Of course, any
macroscopic ignition source would vaporize
the entire Foglet; but as long as more energy is
used vaporizing the exoskeleton than is gained
burning the carbon-based components inside,
the reaction cannot spread.
Each Foglet has twelve arms, arranged as the
faces of a dodecahedron. The arms telescope
rather than having joints. The arms swivel on a
universal joint at the base, and the gripper at
the end can rotate about the arm's axis. Each
arm thus has four degrees of freedom, plus
opening and closing the gripper. The only load-
carrying motor on each axis is the
extension/retraction motor. The swivel and
rotate axes are weakly driven, able to position
the arm in free air but not drive any kind of
load; however, there are load-holding brakes
on these axes.
The gripper is a hexagonal structure with three
fingers, mounted on alternating faces of the
hexagon. Two Foglets ``grasp hands'' in an
interleaved six-finger grip. Since the fingers
are designed to match the end of the other arm,
this provides a relatively rigid connection;
forces are only transmitted axially through the
grip. When at rest, the Foglets form a regular
lattice structure. If the bodies of the Foglets are
thought of as atoms, it is a ``face-centered
cubic'' crystal formation, where each atom
touches 12 other atoms.
Consider the arms of the Foglets as the girders
of the trusswork of a bridge: they form the
configuration known as the ``octet truss''
invented by Buckminster Fuller in 1956. The
spaces bounded by the arms form alternate
tetrahedrons and octahedrons, both of which
are rigid shapes.
The Fog may be thought of as consisting of
layers of Foglets. The layers, and the shear
planes they define, lie at 4 major angles
(corresponding to the faces of the tetrahedrons
and octahedrons) and 3 minor ones
(corresponding to the face-centered cube
faces). In each of the 4 major orientations, each
Foglet uses six arms to hold its neighbors in the
layer; layers are thus a 2-dimensionally rigid
fabric of equilateral triangles. In face-centered
mode, the layers work out to be square grids,
and are thus not rigid, a slight disadvantage.
Most Fog motion is organized in layers; layers
slide by passing each other down hand-over-
hand in bucket brigade fashion. At any instant,
roughly half the arms will be linked between
layers when they are in motion.
The Fog moves an object by setting up a seed-
shaped zone around it. The Foglets in the zone
move with the object, forming a fairing which
makes the motions around it smoother. If the
object is moving fast, the Fog around its path
will compress to let it go by. The air does not
have time to move in the Fog matrix and so the
motion is fairly efficient. For slower motions,
efficiency is not so important, but if we wish to
prevent slow-moving high-pressure areas from
interfering with other airflow operations, we
can enclose the object's zone in a self-
contained convection cell which moves Foglets
from in front to behind it.
Each moving layer of robots is similarly
passing the next layer along, so each layer
adds another increment of the velocity
difference of adjacent layers. Motors for arm
extension can run at a gigahertz, and be geared
down by a factor of 100 to the main screw in
the arm. This will have a pitch of about a
micron, giving a linear extension/retraction rate
of about 10 meters per second. We can
estimate the inter-layer shear rate at this
velocity; the foglets are essentially pulling
themselves along. Thus for a 100-micron
interlayer distance Fog can sustain a 100
meter-per-second shear per millimeter of
thickness.
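The figures quoted in this paragraph can be checked with a few lines of arithmetic; the sketch below simply restates the assumed motor speed, gear ratio, screw pitch, and layer spacing from the text.

```python
# Quick check of the quoted figures (values taken from the paragraph above).
motor_hz      = 1e9          # arm-extension motor speed, 1 GHz
gear_ratio    = 100          # gear-down to the main screw
screw_pitch_m = 1e-6         # screw pitch, about a micron
arm_speed = motor_hz / gear_ratio * screw_pitch_m
print(arm_speed)             # 10.0 m/s linear extension/retraction

layer_spacing_m = 100e-6     # interlayer distance, 100 microns
layers_per_mm   = 1e-3 / layer_spacing_m
print(arm_speed * layers_per_mm)   # 100.0 m/s of shear per mm of thickness
```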
The atomically-precise crystals of the Foglets'
structural members will have a tensile strength
of at least 100,000 psi (i.e. high for steel but
low for the materials, including some fairly
refractory ceramics, used in modern ``high-
tech'' composites). At an arm's length of 100
microns, the Fog will occupy 10% of the
volume of the air but has structural efficiency
of only about 1% in any given direction. Thus
Utility Fog as a bulk material will have a
density (specific gravity) of 0.2; for
comparison, balsa wood is about 0.15 and cork
is about 0.25. Fog will have a tensile strength
of only 1000 psi; this is about the same as low-
density polyethylene (solid, not foam). The
material properties arising from the lattice
structure are more or less isotropic; the one
exception is that when Fog is flowing, tensile
strength perpendicular to the shear plane is cut
roughly in half.
Without altering the lattice connectivity, Fog
can contract by up to about 40% in any linear
dimension, reducing its overall volume (and
increasing its density) by a factor of five. (This
is of course done by retracting all arms but not
letting go.) In this state the fog has the density
of water. An even denser state can be attained
by forming two interpenetrating lattices and
retracting; at this point its density and strength
would both be similar to ivory or Corian
structural plastic, at specific gravity of 2 and
about 6000 psi. Such high-density Fog would
have the useful property of being waterproof
(which ordinary Fog is not), but it cannot flow
and takes much longer to change configuration.
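A quick arithmetic check of the contraction figures above (40% linear contraction, bulk specific gravity 0.2) shows why the contracted Fog ends up near the density of water:

```python
# Sanity check of the contraction figures quoted above.
linear_contraction = 0.40            # up to ~40% in each linear dimension
volume_factor = (1 - linear_contraction) ** 3
print(round(1 / volume_factor, 1))   # ~4.6, i.e. roughly a factor of five

bulk_specific_gravity = 0.2          # uncontracted Utility Fog
print(round(bulk_specific_gravity / volume_factor, 2))  # ~0.93, close to water
```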
Foglets in Detail
Foglets run on electricity, but they store
hydrogen as an energy buffer. We pick
hydrogen in part because it's almost certain to
be a fuel of choice in the nanotech world, and
thus we can be sure that the process of
converting hydrogen and oxygen to water and
energy, as well as the process of converting
energy and water to hydrogen and oxygen, will
be well understood. That means we'll be able to
do them efficiently, which is of prime
importance.

Utility Fog is basically a 'Fractal Robot', made up of
trillions of smaller bots, like the polymorphic liquid-
metal robot in the movie Terminator 2.
Suppose that the Fog is flowing, layers sliding
against each other, and some force is being
transmitted through the flow. This would
happen any time the Fog moved some non-Fog
object, for example. Just as human muscles
oppose each other when holding something
tightly, opposing forces along different Foglet
arms act to hold the Fog's shape and supply the
required motion.

When two layers of Fog move past each other,
the arms between may need to move as many
as 100 thousand times per second. Now if each
of those motions were dissipative, and the fog
were under full load, it would need to consume
700 kilowatts per cubic centimeter. This is
roughly the power dissipation in a .45 caliber
cartridge in the millisecond after the trigger is
pulled; i.e. it just won't do.
But nowhere near this amount of energy is
being used; the pushing arms are supplying this
much but the arms being pushed are receiving
almost the same amount, minus the work being
done on the object being moved. So if the
motors can act as generators when they're
being pushed, each Foglet's energy budget is
nearly balanced. Because these are arms
instead of wheels, the intake and outflow do
not match at any given instant, even though
they average out the same over time (measured
in tens of microseconds). Some buffering is
needed.
Almost never would one expect the Fog to
move actively at 1000 psi; the pressure in the
column of Fog beneath, say, a ``levitated''
human body is less than one thousandth of that.
The 1000 psi capability is to allow the Fog to
simulate hard objects, where forces can be
concentrated into very small areas. Even so,
current exploratory engineering designs for
electric motors have power conversion
densities up to a billion watts per cubic
centimeter, and dissipative inefficiencies in the
10 parts per million range. This means that if
the Empire State Building were being floated
around on a column of Fog, the Fog would
dissipate less than a watt per cubic centimeter.
Moving Fog will dissipate energy by air
turbulence and viscous drag. In the large, air
will be entrained in the layers of moving Fog
and forced into laminar flow. Energy
consumed in this regime may be properly
thought of as necessary for the desired motion
no matter how it was done. As for the waving
of the arms between layers, the Reynolds
number decreases linearly with the size of the
arm. Since the absolute velocity of the arms is
low, i.e. 1 m/s, the Reynolds number should be
well below the ``lower critical'' value, and the
arms should be operating in a perfectly viscous
regime with no turbulence. The remaining
effect, viscous drag (on the waving arms)
comes to a few watts per square meter of shear
plane per layer.
Communications and Control
In the macroscopic world, microcomputer-
based controllers (e.g. the widely used Intel
8051 series microcontrollers) typically run on a
clock speed of about 10 MHz. They emit
control signals, at most, on the order of 10 KHz
(usually less), and control motions in robots
that are at most 10 Hz, i.e. a complete motion
taking one tenth of a second. This million-
clocks-per-action is not strictly necessary, of
course; but it gives us some concept of the
action rate we might expect for a given
computer clock rate in a digitally controlled
nanorobot.
Each Foglet is going to have 12 arms with
three axis control each. In current technology it
isn't uncommon to have a processor per axis;
we could fit 36 processors into the Foglet but it
isn't necessary. The tradeoffs in macroscopic
robotics today are such that processors are
cheap; in the Foglet things are different. The
control of the arms is actually much simpler
than control of a macroscopic robot. They can
be managed by much simpler controllers that
take commands like ``Move to point X at speed
y.'' Using a RISC design allows a single
processor to control a 100 kHz arm; using
auxiliary controllers will let it do all 12 easily.
But there is still a problem: Each computer,
even with the power-reducing reversible logic
designs espoused by Drexler, Merkle, and this
author, is going to dissipate a few nanowatts.
At a trillion foglets per cubic meter, this is a
few kilowatts per cubic meter. Cooling for such
a dissipation must needs be somewhere
between substantial and heroic. As long as the
computers can go into a standby mode when
the Fog is standing still, however, this is quite
workable. Concentrations of heavy work,
mechanical or computing, would still require
cooling circulation to some degree, but, as we
have seen, the Fog is perfectly capable of doing
that.
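The dissipation estimate above follows from the earlier figure of about 16 million Foglets per cubic inch; the short check below assumes "a few nanowatts" means roughly 3 nW per Foglet.

```python
# ~16 million Foglets per cubic inch is about a trillion per cubic metre,
# and a few nanowatts each then gives a few kilowatts per cubic metre.
foglets_per_cubic_inch = 16e6
cubic_inches_per_m3 = (1 / 0.0254) ** 3          # ~61,024
foglets_per_m3 = foglets_per_cubic_inch * cubic_inches_per_m3
print(f"{foglets_per_m3:.2e}")                   # ~9.8e11, about a trillion

watts_per_foglet = 3e-9                          # "a few nanowatts" (assumed)
print(foglets_per_m3 * watts_per_foglet)         # ~2.9e3 W, a few kilowatts
```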
What about all the other computing overhead
for the Fog? Besides the individual control of
its robotic self, each Foglet will have to run a
portion of the overall distributed control and
communications algorithms. We can do
another clock-speed to capability analogy from
current computers regarding communications.
Megahertz-speed computers find themselves
well employed managing a handful of megabit
data lines. Again we are forced to abandon the
engineering tradeoffs of the macroscopic
world: routing of a message through any given
node need theoretically consume only a
handful of thermodynamically irreversible bit
operations; typical communications controllers
take millions. Special-purpose message routers
designed with these facts in mind must be a
part of the Foglet.
Synergistic Combination with Other
Technologies
The counterintuitive inefficiency in
communications is an example, possibly the
most extreme one, of a case where macroscopic
mechanisms outperform the Fog at some
specific task. This will be even more true when
we consider nano-engineered macroscopic
mechanisms. We could imagine a robot,
human-sized, that was formed of a collection of
nano-engineered parts held together by a mass
of Utility Fog. The parts might include
``bones'', perhaps diamond-fiber composites,
having great structural strength; motors, power
sources, and so forth. The parts would form a
sort of erector set that the surrounding Fog
would assemble to perform the task at hand.
The Fog could do directly all subtasks not
requiring the excessive strength, power, and so
forth that the special-purpose parts would
supply.
Another major component that would be
special-purpose would be power and
communications. Working on more-efficient
protocols such as suggested above, the Fog
would form an acceptable communications link
from a person to some terminal in the same
building; but it would be extremely inefficient
for long-haul, high bandwidth connections such
as that needed for telepresence.

Conclusion:
Power is also almost certainly the domain of
special-purpose nano-engineered mechanisms.
Power transmission in the Fog is likely to be
limited, although for different reasons from
data transmission. Nanotechnology will give us
an amazing array of power generation and
distribution possibilities, and the Fog can use
most of them. The critical heterogeneous
component of Fog is the Fog-producing
machine. Foglets are not self-reproducing;
there is no need for them to be, and it would
complicate their design enormously to give
them fine atom-manipulating capability. One
imagines a Fog machine the size of a breadbox
producing Fog for a house, or building-sized
machines filling cities with Fog. The Fog itself,
of course, conveys raw materials back to the
machine.
References
Engines of Creation by K. Eric Drexler,
Anchor Press, 1986.
``Nanotechnology: wherein molecular
computers control tiny circulatory submarines'',
by A. K. Dewdney, Scientific American,
January 1988, pages 100 to 103.
A
TECHNICAL PAPER ON
WAP




N.B.K.R.I.S.T




P.Sai prasanna Lakshmi M.Aparna kumari
II B-Tech, E.C.E II B-Tech, E.C.E
Roll No. 205446 Roll No.205402
N.B.K.R.I.S.T N.B.K.R.I.S.T
Vidyanagar Vidyanagar
Nellore Nellore

sai_ap22@yahoo.co.in prasanna_sep10@yahoo.co.in




CONTENTS
1. Abstract
2. Introduction
3. Benefits
4. Why Choose WAP?
5. Mobile-Originated Example of WAP Architecture
6. The Future of WAP
7. Conclusion
Abstract:

You sometimes wonder how life went on without the Internet. The net has
become such an integral part of our lifestyle that most of us today feel
the need to be connected at all times. This need has given rise to
technologies like WAP, which allow you to access the Internet through
your cell phones and PDAs. You no longer need to be physically near your
stationary desktop at home or work in order to be connected to the
Internet. Most of us are very familiar with the Internet and have
witnessed a technological revolution in terms of the way we conduct
business and manage our finances with e-commerce (online shopping,
business on the net, etc.) and e-banking through the Internet. As mobile
phones become increasingly popular, you can be sure that one of the
greatest innovations in the upcoming decade will be the introduction of
WAP. This seminar discusses this latest hot wireless Internet technology.























1. Introduction
WAP bridges the gap between the
mobile world and the Internet as
well as corporate intranets and
offers the ability to deliver an
unlimited range of mobile value-
added services to subscribers
independent of their network,
bearer, and terminal. Mobile
subscribers can access the same
wealth of information from a
pocket-sized device as they can
from the desktop.
WAP is a global standard and is
not controlled by any single
company. Ericsson, Nokia,
Motorola, and Unwired Planet
founded the WAP Forum in the
summer of 1997 with the initial
purpose of defining an industry-
wide specification for developing
applications over wireless
communications networks. The
WAP specifications define a set of
protocols in application, session,
transaction, security, and transport
layers, which enable operators,
manufacturers, and applications
providers to meet the challenges in
advanced wireless service
differentiation and fast/flexible
service creation. There are now
over one hundred members
representing terminal and
infrastructure manufacturers,
operators, carriers, service
providers, software houses, content
providers, and companies
developing services and
applications for mobile devices.
For more information, visit the
WAP Forum at
http://www.wapforum.org/.



WAP also defines a wireless
application environment (WAE)
aimed at enabling operators,
manufacturers, and content
developers to develop advanced
differentiating services and
applications including a
microbrowser, scripting facilities,
e-mail, World Wide Web
(WWW)-to-mobile-handset
messaging, and mobile-to-telefax
access.
The WAP specifications continue
to be developed by contributing
members, who, through
interoperability testing, have
brought WAP into the limelight of
the mobile data marketplace with
fully functional WAP-enabled
devices (see Figure 1).
Figure 1. WAP
Enabled Devices

Based on the Internet model, the
wireless device contains a
microbrowser, while content and
applications are hosted on Web
servers.

2. Benefits
Operators
For wireless network operators,
WAP promises to decrease churn,
cut costs, and increase the
subscriber base both by improving
existing services, such as interfaces
to voice-mail and prepaid systems,
and facilitating an unlimited range
of new value-added services and
applications, such as account
management and billing inquiries.
New applications can be
introduced quickly and easily
without the need for additional
infrastructure or modifications to
the phone. This will allow
operators to differentiate
themselves from their competitors
with new, customized information
services. WAP is an interoperable
framework, enabling the provision
of end-to-end turnkey solutions
that will create a lasting
competitive advantage, build
consumer loyalty, and increase
revenues.
Content Providers
Applications will be written in
wireless markup language (WML),
which is a subset of extensible
markup language (XML). Using
the same model as the Internet,
WAP will enable content and
application developers to grasp the
tag-based WML that will pave the
way for services to be written and
deployed within an operator's
network quickly and easily. As
WAP is a global and interoperable
open standard, content providers
have immediate access to a wealth


of potential customers who will
seek such applications to enhance
the service offerings given to their
own existing and potential
subscriber base. Mobile consumers
are becoming more hungry to
receive increased functionality and
value-add from their mobile
devices, and WAP opens the door
to this untapped market that is
expected to reach 100 million
WAP-enabled devices by the end
of the year 2000. This presents
developers with significant revenue
opportunities.
End Users
End users of WAP will benefit
from easy, secure access to
relevant Internet information and
services such as unified messaging,
banking, and entertainment
through their mobile devices.
Intranet information such as
corporate databases can also be
accessed via WAP technology.
Because a wide range of handset
manufacturers already supports the
WAP initiative, users will have
significant freedom of choice when
selecting mobile terminals and the
applications they support. Users
will be able to receive and request
information in a controlled, fast,
and low-cost environment, a fact
that renders WAP services more
attractive to consumers who
demand more value and
functionality from their mobile
terminals.
As the initial focus of WAP, the
Internet will set many of the trends
in advance of WAP
implementation. It is expected that
the Internet service providers
(ISPs) will exploit the true
potential of WAP. Web content

developers will have great
knowledge and direct access to the
people they attempt to reach. In
addition, these developers will
likely acknowledge the huge
potential of the operators' customer
bases; thus, they will be willing
and able to offer competitive prices
for their content. WAP's push
capability will enable weather and
travel information providers to use
WAP. This push mechanism
affords a distinct advantage over
the WWW and represents
tremendous potential for both
information providers and mobile
operators.
3. Why Choose
WAP?
In the past, wireless Internet access
has been limited by the capabilities
of handheld devices and wireless
networks. WAP utilizes Internet
standards such as XML, user
datagram protocol (UDP), and
Internet protocol (IP). Many of the
protocols are based on Internet
standards such as hypertext
transfer protocol (HTTP) and TLS
but have been optimized for the
unique constraints of the wireless
environment: low bandwidth, high
latency, and less connection
stability. Internet standards such as
hypertext markup language
(HTML), HTTP, TLS and
transmission control protocol
(TCP) are inefficient over mobile
networks, requiring large amounts
of mainly text-based data to be
sent. Standard HTML content
cannot be effectively displayed on
the small-size screens of pocket-
sized mobile phones and pagers.
WAP utilizes binary transmission
for greater compression of data and
is optimized for long latency and

low bandwidth. WAP sessions
cope with intermittent coverage
and can operate over a wide variety
of wireless transports.
WML and wireless markup
language script (WMLScript) are
used to produce WAP content.
They make optimum use of small
displays, and navigation may be
performed with one hand. WAP
content is scalable from a two-line
text display on a basic device to a
full graphic screen on the latest
smart phones and communicators.
The lightweight WAP
protocol stack is designed to
minimize the required bandwidth
and maximize the number of wireless network types that can deliver WAP content. Multiple networks will be targeted. These include global
system for mobile communications
(GSM) 900, 1,800, and 1,900
MHz; interim standard (IS-136);
digital European cordless
communication (DECT); time-
division multiple access (TDMA),
personal communications service
(PCS), FLEX, and code division
multiple access (CDMA). All
network technologies and bearers
will also be supported, including
short message service (SMS),
USSD, circuit-switched cellular
data (CSD), cellular digital packet
data (CDPD), and general packet
radio service (GPRS). As WAP is
based on a scalable layered
architecture, each layer can
develop independently of the
others. This makes it possible to
introduce new bearers or to use
new transport protocols without
major changes in the other layers.



4. Mobile-
Originated
Example of WAP
Architecture
WAP will provide multiple applications for business and customer markets, such as banking,
corporate database access, and a
messaging interface (see Figure 2).
Figure 2. Messaging
Interface



The request from the mobile device
is sent as a URL through the
operator's network to the WAP
gateway, which is the interface
between the operator's network and
the Internet (see Figure 3).










Figure 3. Architecture
of the WAP
Gateway

Architecture of the
WAP Gateway
WDP
The WAP datagram protocol
(WDP) is the transport layer that
sends and receives messages via
any available bearer network,
including SMS, USSD, CSD,
CDPD, IS136 packet data, and
GPRS.
WTLS
Wireless transport layer security
(WTLS), an optional security
layer, has encryption facilities that
provide the secure transport service
required by many applications,
such as e-commerce.

WTP
The WAP transaction protocol
(WTP) layer provides transaction
support, adding reliability to the
datagram service provided by
WDP.
WSP
The WAP session protocol (WSP)
layer provides a lightweight
session layer to allow efficient
exchange of data between
applications.
HTTP Interface
The HTTP interface serves to
retrieve WAP content from the
Internet requested by the mobile
device.
WAP content (WML and
WMLScript) is converted into a
compact binary form for
transmission over the air (see
Figure 4).
Figure 4. WAP
Content in Compact
Binary Form

The WAP microbrowser software
within the mobile device interprets
the byte code and displays the
interactive WAP content (see
Figure 5).

Figure 5. Mobile
Device Display
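To make the mobile-originated flow just described more concrete, here is a minimal, purely illustrative Python sketch of the round trip: the device issues a URL, the gateway fetches WML over its HTTP interface, re-encodes it into a compact binary form for the air link, and the microbrowser decodes and displays it. All function names are hypothetical, and zlib compression merely stands in for the WAP Forum's binary (WBXML) encoding and the WSP/WTP/WTLS/WDP layering a real gateway and microbrowser would use.

```python
import zlib

def fetch_over_http(url: str) -> str:
    # Hypothetical origin-server response; a real gateway would use an HTTP client here.
    return '<wml><card id="main" title="Balance"><p>Your balance is ...</p></card></wml>'

def wap_gateway(url: str) -> bytes:
    """Gateway side: fetch WML over HTTP and re-encode it compactly for the wireless link."""
    wml = fetch_over_http(url)              # HTTP interface toward the Internet
    return zlib.compress(wml.encode())      # stand-in for the compact binary (WBXML) encoding

def microbrowser_request(url: str) -> str:
    """Device side: send the URL through the operator's network and render the decoded deck."""
    binary_deck = wap_gateway(url)                   # carried over WSP/WTP/WDP in reality
    return zlib.decompress(binary_deck).decode()     # stand-in for byte-code interpretation

if __name__ == "__main__":
    print(microbrowser_request("http://operator.example/banking/balance"))
```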






5. The Future of
WAP
The tremendous surge of interest
and development in the area of
wireless data in recent times has
caused worldwide operators,
infrastructure and terminal
manufacturers, and content
developers to collaborate on an
unprecedented scale, in an area
notorious for the diversity of
standards and protocols. The
collaborative efforts of the WAP
Forum have devised and continue
to develop a set of protocols that
provide a common environment for
the development of advanced
telephony services and Internet
access for the wireless market. If
the WAP protocols were to be as
successful as transmission control
protocol (TCP)/Internet protocol
(IP), the boom in mobile
communications would be
phenomenal. Indeed, the WAP
browser should do for mobile
Internet what Netscape did for the
Internet.


As mentioned earlier, industry
players from content developers to
operators can explore the vast
opportunity that WAP presents. As
a fixed-line technology, the
Internet has proved highly
successful in reaching the homes of
millions worldwide. Until now, however, mobile users have been forced to accept relatively basic levels of functionality over and above voice communications, and they are beginning to demand that the industry move from a fixed to a mobile environment, carrying the functionality of the fixed environment with it. Initially,
services are expected to run over
the well-established SMS bearer,
which will dictate the nature and
speed of early applications. Indeed,
GSM currently does not offer the
data rates that would allow mobile
multimedia and Web browsing.
With the advent of GPRS, which aims to increase the data rate to 115 kbps, as well as other emerging high-bandwidth bearers, access speeds equivalent to or higher than those of a fixed-line scenario become ever more believable. GPRS is seen
by many as the perfect partner for
WAP, with its distinct time slots
serving to manage data packets in a
way that prevents users from being
penalized for holding standard
circuit-switched connections.
Handset
Manufacturers and
WAP Services
It is expected that mobile terminal
manufacturers will experience
significant change as a result of
WAP technology, a change that will impact the look and feel of the


hardware they produce. The main
issues faced by this arm of the
industry concern the size of mobile
phones, power supplies, display
size, usability, processing power,
and the role of personal digital
assistants (PDAs) and other mobile
terminals.
With over 75 percent of the world's
key handset manufacturers already
involved in the WAP Forum and
announcing the impending release
of WAP-compatible handsets, the
drive toward new and innovative
devices is quickly gathering pace.
The handsets themselves will
contain a microbrowser that will
serve to interpret the byte code
(generated from the WML/WMLS
content) and display interactive
content to the user.
The services available to users will
be wide-ranging in nature, as a
result of the open specifications of
WAP, their similarity to the
established and accepted Internet
model, and the simplicity of the
WML/WMLS languages with
which the applications will be
written. Information will be
available in push-and-pull
functionality, with the ability for
users to interact with services via
both voice and data interfaces.
Web browsing as experienced by
the desktop user, however, is not
expected to be the main driver
behind WAP as a result of time and
processing constraints. Real-time applications and services that demand small, key pieces of information will fuel the success of WAP in the mobile marketplace. Stock prices, news,
weather, and travel are only some


of the areas in which WAP will
provide services for mobile users.
Essentially, the WAP application
strategy involves taking existing
services that are common within a
fixed-line environment and
tailoring them to be purposeful and
user-friendly in a wireless
environment.
Empowering the user with the
ability to access a wealth of
information and services from a
mobile device will create a new
battleground. Mobile industry
players will fight to provide their
customers with sophisticated,
value-added services. As mobile
commerce becomes a more secure
and trusted channel by which
consumers may conduct their
financial affairs, the market for
WAP will become even more
lucrative.
WAP in the
Competitive
Environment
Competition for WAP protocols
could come from a number of
sources:
Subscriber identity module (SIM) toolkit: The use of SIMs or smart cards in wireless devices is already widespread and used in some of the service sectors.

Windows CE: This is a multitasking, multithreaded operating system from Microsoft designed for inclusion or embedding in mobile and other space-constrained devices.

JavaPhone: Sun Microsystems is developing PersonalJava and a JavaPhone API, which is embedded in a Java virtual machine on the handset. NEPs will be able to build cellular phones that can download extra features and functions over the Internet; thus, customers will no longer be required to buy a new phone to take advantage of improved features.
The advantages that WAP can
offer over these other methods are
the following:
open standard, vendor independent
network-standard independent
transport mechanism optimized for wireless data bearers
applications downloaded from the server, enabling fast service creation and introduction, as opposed to embedded software











6. Conclusion
WAP provides a markup language
and a transport protocol that open
the possibilities of the wireless
environment and give players from
all levels of the industry the
opportunity to access an untapped
market that is still in its infancy.
The bearer-independent nature of
WAP has proved to be a long-
awaited breath of fresh air for an
industry riddled with multiple
proprietary standards that have
suffocated the advent of a new
wave of mobile-Internet
communications. WAP is an



enabling technology that, through
gateway infrastructure deployed in mobile operators' networks, will
bridge the gap between the mobile
world and the Internet, bringing
sophisticated solutions to mobile
users, independent of the bearer
and network.
Backed by 75 percent of the
companies behind the world's
mobile telephone market and the
huge development potential of
WAP, the future for WAP looks
bright.





GOKARAJU RANGARAJU INSTITUTE OF
ENGINEERING & TECHNOLOGY


WAVELET BASED FINGERPRINT DETECTION



BY




N.AMARENDER D.DEEPAK
III B.TECH, BME III B.TECH, BME

Email id: amarender_n@yahoo.com















ABSTRACT
Nowadays, issues of security and access control have become increasingly important. To identify a person, the most popular way is to use fingerprint scanners to capture a fingerprint image and match it against the enrolled fingerprint, but most fingerprint sensors are not equipped with liveness detection. Moreover, recent research has revealed that it is not difficult to spoof an automated fingerprint authentication system (AFAS) using fake fingertips made of commonly available materials such as clay and gelatin. Such fake fingertips can spoof most practical AFAS as long as they retain some or all of the original fingertip's minutiae information. The process of deciding whether a collected fingerprint really comes from a live fingertip, by acquiring and analyzing accessorial fingertip information other than the minutiae, is called liveness detection.
To cope with this weakness of fingerprint sensors, different methods have been suggested to detect liveness in biometric samples. Static properties (e.g., temperature, conductivity) and dynamic behaviors (e.g., skin deformation, perspiration) of live fingertips have been extensively studied in fingerprint liveness detection research. All of the existing approaches are believed to have certain limitations because the properties used are either unstable or not universal enough. In this paper we propose a new fingerprint liveness detection approach based on wavelet analysis of a more robust and intrinsic property of fingertips: SURFACE COARSENESS.
INTRODUCTION

One of the biometric attributes that have been
studied intensively is the human fingerprint. The
fingerprint is the unique skin structure of
fingertips. As a phenotypic biological feature it
is unique, even for identical twins. The
characteristic formation of the fingerprint normally doesn't change over a person's life span. For automatic processing, the fingerprint is
taken using special sensor devices. In most cases
the raw data taken is a gray scale image. To
compare fingerprint images, a set of distinctive
pieces of information is extracted from the gray
scale images. These extracted features can be
positions or configurations of ridge-lines,
crossing, bifurcation and ending points, referred
to as minutiae.



But some common scanner types can
be circumvented by finger surface dummies.
This attack works by capturing a fingerprint
image and producing a rubber mask for the
fingertip from digital images. To acquire the
fingerprint image, it can be taken from polished
surfaces such as glass or from a biometric
storage that also contains raw data. To defeat
these kinds of attacks, the discipline of liveness detection has emerged in the past years and a number of approaches have been presented for this purpose. In this paper we present a new type of liveness detection method based on analyzing a more robust and intrinsic property of fingertips: SURFACE COARSENESS.

CONCEPTS
1. WAVELET ANALYSIS
Wavelet transform is capable of providing the
time and frequency information simultaneously,
hence giving a time-frequency representation of
the signal.
WAVELET DECOMPOSITION: We pass the
time-domain signal through various high-pass and low-pass filters, which filter out either the high-frequency or the low-frequency portions of the signal. This procedure is repeated, each time removing some portion of the signal corresponding to some band of frequencies.
Here is how this works: Suppose we have a
signal which has frequencies up to 1000 Hz. In
the first stage we split up the signal in to two
parts by passing the signal from a high pass and
a low pass filter (filters should satisfy some
certain conditions, so-called admissibility
condition) which results in two different
versions of the same signal: portion of the signal
corresponding to 0-500 Hz (low pass portion),
and 500-1000 Hz (high pass portion). Then, we
take either portion (usually low pass portion) or
both, and do the same thing again. This operation
is called decomposition.
Assuming that we have taken the low pass
portion, we now have 3 sets of data, each
corresponding to the same signal at frequencies
0-250 Hz, 250-500 Hz, 500-1000 Hz.
Then we take the low pass portion again and
pass it through low and high pass filters; we now
have 4 sets of signals corresponding to 0-125 Hz,
125-250 Hz, 250-500 Hz, and 500-1000 Hz. We
continue like this until we have decomposed the
signal to a pre-defined certain level. Then we
have a bunch of signals, which actually represent
the same signal, but all corresponding to
different frequency bands.
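As a rough illustration of this band-splitting idea (not code from the paper), the sketch below assumes NumPy and the PyWavelets package and decomposes a signal sampled at 2000 Hz (so containing frequencies up to 1000 Hz) to three levels with a discrete Meyer filter, mirroring the 0-125 / 125-250 / 250-500 / 500-1000 Hz split described above.

```python
import numpy as np
import pywt

fs = 2000                                        # sampling rate (Hz) -> content up to 1000 Hz
t = np.arange(0, 1.0, 1.0 / fs)
signal = (np.sin(2 * np.pi * 60 * t)             # low-frequency component
          + 0.5 * np.sin(2 * np.pi * 700 * t))   # high-frequency component

# Three levels of decomposition: detail 1 ~ 500-1000 Hz, detail 2 ~ 250-500 Hz,
# detail 3 ~ 125-250 Hz, and the remaining approximation ~ 0-125 Hz.
approx, detail3, detail2, detail1 = pywt.wavedec(signal, "dmey", level=3)

for name, c in [("approximation 0-125 Hz", approx), ("detail 125-250 Hz", detail3),
                ("detail 250-500 Hz", detail2), ("detail 500-1000 Hz", detail1)]:
    print(f"{name}: {len(c)} coefficients, energy {np.sum(c ** 2):.1f}")
```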
MEYER WAVELET
A continuous wavelet transform is represented as

$$\gamma(s,\tau) = \int f(t)\,\psi_{s,\tau}^{*}(t)\,dt, \qquad \psi_{s,\tau}(t) = \frac{1}{\sqrt{s}}\,\psi\!\left(\frac{t-\tau}{s}\right)$$

where * denotes complex conjugation. This equation shows how a function f(t) is decomposed into a set of basis functions, called the wavelets. The variables s and τ, scale and translation, are the new dimensions after the wavelet transform.

The Meyer wavelet is given in the frequency domain as

Wavelet function

$$\hat{\psi}(\omega) = \frac{1}{\sqrt{2\pi}}\,\sin\!\left(\frac{\pi}{2}\,\nu\!\left(\frac{3|\omega|}{2\pi}-1\right)\right)e^{j\omega/2}, \qquad \frac{2\pi}{3} \le |\omega| \le \frac{4\pi}{3}$$

and

$$\hat{\psi}(\omega) = \frac{1}{\sqrt{2\pi}}\,\cos\!\left(\frac{\pi}{2}\,\nu\!\left(\frac{3|\omega|}{4\pi}-1\right)\right)e^{j\omega/2}, \qquad \frac{4\pi}{3} \le |\omega| \le \frac{8\pi}{3}$$

with $\hat{\psi}(\omega)=0$ otherwise, where

$$\nu(x) = x^{4}\left(35 - 84x + 70x^{2} - 20x^{3}\right), \qquad x \in [0,1]$$

Scaling function

$$\hat{\phi}(\omega) = \begin{cases} \dfrac{1}{\sqrt{2\pi}}, & |\omega| \le \dfrac{2\pi}{3} \\ \dfrac{1}{\sqrt{2\pi}}\,\cos\!\left(\dfrac{\pi}{2}\,\nu\!\left(\dfrac{3|\omega|}{2\pi}-1\right)\right), & \dfrac{2\pi}{3} \le |\omega| \le \dfrac{4\pi}{3} \\ 0, & \text{otherwise} \end{cases}$$
Denoising and Soft Thresholding
An image is often corrupted by noise during its acquisition and transmission. Image denoising is used to remove the additive noise while retaining as much as possible of the important signal features. In recent years there has been a fair amount of research on wavelet thresholding and threshold selection for signal denoising, because the wavelet transform provides an appropriate basis for separating the noise from the image signal.
The motivation is that as the wavelet transform is
good at energy compaction, the small
coefficients are more likely due to noise and the large coefficients due to important signal features.
These small coefficients can be thresholded
without affecting the significant features of the
image.
Thresholding is a simple non-
linear technique, which operates on one wavelet
coefficient at a time. In its most basic form, each
coefficient is compared against a threshold: if the coefficient is smaller than the threshold it is set to zero; otherwise it is kept or modified. Replacing the small noisy
coefficients by zero and inverse wavelet
transform on the result may lead to
reconstruction with the essential signal
characteristics and with less noise.

Algorithm for soft thresholding
Soft thresholding sets any coefficient less than
or equal to the threshold to zero. The threshold is
subtracted from any coefficient that is greater
than the threshold. This moves the time series
toward zero.
if (coef[i] <= thresh)
    coef[i] = 0.0;
else
    coef[i] = coef[i] - thresh;
Soft thresholding not only smooths the time series but also moves it toward zero. Soft thresholding is preferred over hard thresholding because it gives more visually pleasing images; the reason is that hard thresholding is discontinuous and yields abrupt artifacts in the recovered images, especially when the noise energy is significant.
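For reference, a vectorized form of the two rules is sketched below using NumPy (our choice; the paper itself does not prescribe an implementation). Applied to signed coefficients, soft thresholding shrinks the surviving values toward zero, while hard thresholding leaves them untouched, which is what produces the abrupt artifacts mentioned above.

```python
import numpy as np

def soft_threshold(coef: np.ndarray, thresh: float) -> np.ndarray:
    # Zero out small coefficients and shrink the rest toward zero by `thresh`.
    return np.sign(coef) * np.maximum(np.abs(coef) - thresh, 0.0)

def hard_threshold(coef: np.ndarray, thresh: float) -> np.ndarray:
    # Zero out small coefficients but leave the large ones unchanged (discontinuous rule).
    return np.where(np.abs(coef) > thresh, coef, 0.0)

coef = np.array([-3.0, -0.4, 0.1, 0.9, 2.5])
print(soft_threshold(coef, 0.5))   # roughly [-2.5, 0, 0, 0.4, 2.0]
print(hard_threshold(coef, 0.5))   # roughly [-3.0, 0, 0, 0.9, 2.5]
```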

[Figure: denoising example, SNR 20 dB vs. SNR 23.16 dB]

HISTOGRAM EQUALISATION

Images are often represented using
pixel values which are integers in the range [0,
255], but often the actual values in an image are
clustered in a small range of values. For
example, Figure 1 shows an image of the interior
of Il Duomo, in Florence. This picture is quite
dark, so the intensity values are grouped towards
0, rather than spread evenly across the range of
values.
Figure 1: The interior of Il Duomo, Florence
(left), and its histogram (right).

This means that it is difficult to see details in the
images because they are too dark, too light, or
just have low contrast. One way to enhance the
appearance of the images is called histogram
equalization. This involves changing the
intensity at each pixel so that the histogram is
more equally spread.

Figure 2. The histogram equalised version of the interior of Il Duomo, Florence (left), and
its histogram (right).

The result of histogram equalisation applied to
the image from Figure 1 is shown in Figure 2
along with the resulting histogram. Note that
although the histogram of the resulting image is
not completely flat, it is more uniform than the
original.
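A compact way to perform the equalization described above is via the cumulative histogram. The sketch below uses NumPy (our choice of library; the paper does not specify one): every intensity is remapped through the normalized cumulative distribution so that the output values spread more evenly over [0, 255].

```python
import numpy as np

def histogram_equalize(image: np.ndarray) -> np.ndarray:
    """Equalize an 8-bit grayscale image using its cumulative histogram."""
    hist, _ = np.histogram(image.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf_masked = np.ma.masked_equal(cdf, 0)            # ignore empty bins
    cdf_scaled = (cdf_masked - cdf_masked.min()) * 255.0 / (cdf_masked.max() - cdf_masked.min())
    lookup = np.ma.filled(cdf_scaled, 0).astype(np.uint8)
    return lookup[image]                               # remap each pixel through the lookup table

# A dark synthetic image: values clustered near 0, as in the Il Duomo example above.
dark = np.clip(np.random.normal(loc=30, scale=10, size=(64, 64)), 0, 255).astype(np.uint8)
equalized = histogram_equalize(dark)
print(dark.mean(), equalized.mean())                   # the equalized image spans a wider range
```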

PROPOSED APPROACH

Proposed is a simple and effective approach for fingerprint liveness detection based on wavelet analysis of the fingertip surface texture. Popular fake-fingertip materials such as clay and gelatin usually consist of large organic molecules which tend to agglomerate during processing. This introduces asperities to the surface of the fake fingertips produced. So, generally speaking, the surface of a fake fingertip is much coarser than that of a live fingertip (human skin). This difference in surface coarseness is used in the current liveness detection method.
Nowadays, the resolution of popular fingerprint sensors for public security purposes is around 800 dpi. Surface textures are generally not clear enough at such a resolution. With the development of electronic technologies, we can expect that in the near future relatively low-cost fingerprint sensors, especially optical sensors, will be able to generate high-resolution (1000 dpi) images of fingertips. Here we propose a liveness detection method based on the analysis of such higher-resolution fingertip images.
Fingertip surfaces are intrinsically coarse at a certain scale because of the alternation of ridges and valleys on them. Wavelet analysis can help us minimize the effect of the ridge/valley pattern when estimating the surface coarseness, because it allows investigation of the input signal at different scales. In this paper we treat the surface coarseness as a kind of Gaussian white noise added to the images. A fingertip image is first denoised using the wavelet-based approach. The noise residue (original image minus denoised image) is then calculated. A coarser surface texture tends to result in a stronger pixel-value fluctuation in the noise residue. Thus the standard deviation of the noise residue can be used as an indicator of the texture coarseness.


The approach consists of the following steps.

STEP 1: Apply histogram equalization to the
input fingertip image.

STEP 2: Two levels of stationary wavelet decomposition are performed on the fingertip image. In the second level, only the level-one approximation (the down-sampled output from the first round of low-pass/low-pass filtering) is further decomposed. Finally, one approximation and six details are obtained. A Finite Impulse Response (FIR) based approximation of the Meyer wavelet is adopted.

STEP 3: Wavelet shrinkage is performed by applying soft thresholding to the six details. The threshold for the wavelet shrinkage is calculated according to the equation

$$T = \sigma\sqrt{2\log(N)}$$

where log() stands for the natural logarithm, N is the signal length of each detail, and σ is the estimated standard deviation of the detail coefficients of all three details in the first level of wavelet decomposition.
STEP 4: Perform wavelet reconstruction to obtain the denoised fingertip image.

STEP 5: The noise residue is obtained by calculating the difference between the two fingertip images before and after denoising.

STEP 6: If the standard deviation of the noise residue is smaller than a preset threshold, the original image is regarded as having been captured from a live fingertip; otherwise it is regarded as coming from a fake fingertip.
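A rough end-to-end sketch of these steps follows, under assumptions not stated in the paper: it relies on NumPy and PyWavelets, substitutes an ordinary decimated 2-D decomposition (pywt.wavedec2) for the stationary wavelet transform named in Step 2, omits the histogram equalization of Step 1 (see the earlier sketch), and uses a made-up, untuned decision threshold. It is an illustration of the pipeline, not the authors' implementation.

```python
import numpy as np
import pywt

LIVENESS_THRESHOLD = 4.0   # hypothetical decision threshold on the residue's standard deviation

def soft(c, t):
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def is_live_fingertip(image: np.ndarray) -> bool:
    img = image.astype(np.float64)   # Step 1 (histogram equalization) omitted in this sketch

    # Step 2 (approximated): two decomposition levels with a discrete Meyer filter
    # -> one approximation and six details.
    cA2, (cH2, cV2, cD2), (cH1, cV1, cD1) = pywt.wavedec2(img, "dmey", level=2)

    # Step 3: universal threshold T = sigma * sqrt(2 ln N), sigma estimated from level-1 details.
    level1 = np.concatenate([cH1.ravel(), cV1.ravel(), cD1.ravel()])
    sigma = np.std(level1)
    details = [cH2, cV2, cD2, cH1, cV1, cD1]
    shrunk = [soft(d, sigma * np.sqrt(2.0 * np.log(d.size))) for d in details]

    # Step 4: reconstruct the denoised fingertip image.
    coeffs = [cA2, tuple(shrunk[0:3]), tuple(shrunk[3:6])]
    denoised = pywt.waverec2(coeffs, "dmey")[: img.shape[0], : img.shape[1]]

    # Steps 5-6: noise residue and its standard deviation as the coarseness indicator.
    residue = img - denoised
    return float(np.std(residue)) < LIVENESS_THRESHOLD

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    synthetic = rng.normal(128, 20, size=(128, 128))   # stand-in for a captured fingertip image
    print("live?", is_live_fingertip(synthetic))
```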



CONCLUSION
Fingerprint recognition systems can really be spoofed. In this paper we have proposed a liveness detection algorithm to protect fingerprint systems from being spoofed. This algorithm makes decisions based on the texture of fingertips; in other words, it uses image processing techniques. We hope that this anti-spoofing fingerprint technique can help the community to develop and test liveness detection more conveniently.
This is a new, forward-looking fingerprint liveness detection method which is efficient enough that its incorporation into existing fingerprint verification systems will enhance the real-time performance of such systems in public security applications.



WEB MINING

B.santosh kumar &M.Vijaybhaskar
III year B.Tech
Department of ECE
V.R. Siddhartha Engineering College
Vijayawada-520007
bsk_santosh79@rediff.com
Ph no: 9985793376
9985348956

Abstract:
With the explosive growth of information sources available on the World Wide Web, it has become increasingly necessary for users to utilize automated tools to find the desired information resources, and to track and analyze their usage patterns. These factors give rise to the necessity of creating server-side and client-side intelligent systems that can effectively mine for knowledge, a process known as WEB MINING. Web mining is the application of data mining techniques to the World Wide Web.

Goals include:
The improvement of site design and
site structure
The generation of dynamic
recommendations and
Improving marketing

Structure:
The web mining process has been classified into three distinct categories. The first, called Web Content Mining, is the process of
discovering information automatically from
sources across World Wide Web. The second
called Web Usage Mining is the process of
mining for user access patterns from Web
servers. The third one is known as Web
Structure Mining.







WEB CONTENT MINING:
Web content mining is the
process of discovering useful information from
the content of a web page. The type of the
web content may consist of text, image, audio
or video data in the web. Web content mining
sometimes is called web text mining, because
the text content is the most widely researched
area. The technologies that are normally used
in web content mining are NLP (Natural
Language processing) and IR (Information
retrieval).

Content processing typically involves extracting the text from HTML, performing stemming, removing stop words, calculating collection-wide word frequencies, and calculating per-document term frequencies. The lack of structure that permeates the information sources on the World Wide Web makes automated discovery of Web-based information difficult. Traditional search engines such as LYCOS, Alta Vista, Web Crawler, ALIWEB, Meta Crawler, and others provide some comfort to users, but they do not generally provide structural information, nor do they categorize, filter, or interpret documents. In recent years these factors
have prompted researchers to develop more
intelligent tools for information retrieval,
such as intelligent Web agents, and to extend
data mining techniques to provide a higher
level of organization for semi-structured data
available on the web.
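As a toy illustration of the extraction pipeline mentioned above (text extraction, stop-word removal, and term-frequency counting), here is a sketch using only the Python standard library; a real system would add stemming and a proper HTML parser, and the stop-word list and sample pages are invented for the example.

```python
import re
from collections import Counter

STOP_WORDS = {"a", "an", "the", "of", "and", "to", "in", "is", "for", "its", "new"}

def extract_terms(html: str) -> list:
    text = re.sub(r"<[^>]+>", " ", html)                 # strip HTML tags (crude extraction)
    tokens = re.findall(r"[a-z]+", text.lower())         # tokenize
    return [t for t in tokens if t not in STOP_WORDS]    # remove stop words

documents = {
    "page1.html": "<html><body>The price of the product and its reviews</body></html>",
    "page2.html": "<html><body>Product reviews for the new product line</body></html>",
}

per_document_tf = {name: Counter(extract_terms(html)) for name, html in documents.items()}
collection_tf = sum(per_document_tf.values(), Counter())   # collection-wide word frequencies

print(per_document_tf["page2.html"]["product"])   # per-document term frequency
print(collection_tf["reviews"])                    # collection-wide frequency
```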

Agent-Based Approach:

(i) Intelligent Search Agents:
Several intelligent Web agents have been
developed that search for relevant
information using domain characteristics and
user profiles to organize and interpret the
discovered information. Agents such as
Harvest, FAQ FINDER, Information
Manifold, OCCAM, and Parasite rely either
on pre-specified domain information about
particular types of documents, or on hard
coded models of the information sources to
retrieve and interpret documents. Agents
such as ShopBot and ILA (Internet Learning Agent) interact with and learn the structure of unfamiliar information sources. ShopBot retrieves product information from a variety of vendor sites using only general information about the product domain. ILA learns models of various information sources and translates these into its own concept hierarchy.
(ii)Information
Filtering/Categorization: A number of Web
agents use various information retrieval
techniques and characteristics of open
hypertext web documents to automatically
retrieve, filter and categorize them. HyPursuit uses semantic information embedded in link structures and document content to create cluster hierarchies of hypertext documents, and to structure an information space. Bookmark Organizer combines
hierarchical clustering techniques and user
interaction to organize a collection of Web
documents based on conceptual information.
(iii) Personalized Web Agents: This
category of Web agents learn user
preferences and discover Web information
sources based on these preferences, and those
of other individuals with similar interests. A
few recent examples of such agents include
WebWatcher, PAINT, and Syskill & Webert.

Database Approach: Database
approaches to web mining have focused on
techniques for organizing the semi-structured
data on the web into more structured
collections of resources, and using standard
database querying mechanisms and data
mining techniques to analyze it.

(i)Multilevel Databases: The main
idea behind this approach is that the lowest
level of the database contains semi-structured
information stored in various Web
repositories, such as hypertext documents. At
the higher levels Metadata or generalizations
are extracted from lower levels and organized
in structured collections, i.e. relational or
object-oriented databases. One proposal is a multilayered database where each layer is obtained via generalization and transformation operations performed on the lower layers. Khosla et al. propose the creation and maintenance of meta-databases at each information-providing domain and the use of a global schema for the meta-database. Another approach is the incremental integration of a portion of the schema from each information source, rather than relying on a global heterogeneous database schema. The ARANEUS system
extracts relevant information from hypertext
documents and integrates these into higher-
level derived Web Hypertexts which are
generalizations of the notion of database
views.

(ii)Web Query Systems: Many Web-
based query systems and languages utilize
standard database query languages such as
SQL, structural information about Web
documents, and even natural language
processing for the queries that are used in
World Wide Web searches. W3QL combines
structure queries, based on the organization
of hypertext documents, and content queries,
based on information retrieval techniques.
WebLog is a logic-based query language for restructuring extracted information from Web information sources. Lorel and UnQL query
heterogeneous and semi-structured
information on the Web using a labeled graph
data model.

Web usage mining:

Web usage mining is the application of data mining to analyze and discover interesting patterns in users' usage data on the web, in order to better understand and serve the needs of users and Web-based applications. The usage data record the users' behavior when they browse or make transactions on the web site. Web usage mining is an activity that involves the automatic discovery of patterns from one or more Web servers.
Organizations often generate and
collect large volumes of data; most of this
information is usually generated
automatically by Web servers and collected
in server log. Analyzing such data can help
these organizations to determine the value of
particular customers, cross marketing
strategies across products and the
effectiveness of promotional campaigns, etc.
The first web analysis tools simply
provided mechanisms to report user activity
as recorded in the servers. Using such tools,
it was possible to determine such information
as the number of accesses to the server, the
times or time intervals of visits as well as the
domain names and the URLs of users of the
Web server. However, in general, these tools
provide little or no analysis of data
relationships among the accessed files and
directories within the Web space. More sophisticated techniques for the discovery and analysis of patterns are now emerging.

Web Usage Mining Process:
Problem identification
Data Collection
Data Pre-processing
Pattern discovery and analysis

Pre Processing Tasks:

The first preprocessing task is DATA CLEANING. Techniques to clean a server log by eliminating irrelevant items are of importance for any type of Web log analysis, not just data mining. Elimination of irrelevant items can be reasonably accomplished by checking the suffix of the URL name. For instance, all log entries with filename suffixes such as gif, jpeg, jpg and map can be removed. A related but much harder problem is determining whether there are important accesses that are not recorded in the access log. Mechanisms such as local caches and
proxy servers can severely distort the overall
picture of user traversals through a web site.
Current methods to try to overcome this
problem include the use of cookies, cache
busting, and explicit user registration.
Cookies can be deleted by the user, cache
busting defeats the speed advantage that
caching was created to provide and can be
disabled, and user registration is voluntary
and users often provide false information.
Methods for dealing with the caching
problem include using site topology or
referrer logs, along with temporal
information to infer missing references.
Another problem associated with
proxy servers is that of user identification.
Use of a machine name to uniquely identify
users can result in several users being
erroneously grouped together as one user. An
algorithm that has been proposed checks to see if each
incoming request is reachable from the pages
already visited. If a page is requested that is
not directly linked to the previous pages,
multiple users are assumed to exist on the
same machine. In other work, user session lengths determined automatically from navigation patterns are used to identify users. Other heuristics involve using a combination of IP address, machine name, browser agent, and temporal information to identify users.
The second major preprocessing task
is transaction identification. Before any
mining is done on Web usage data, sequences
of page references must be grouped into
logical units representing web transaction or
user sessions. A user session is all of the page
references made by a user during a single
visit to a site. Identifying user sessions is
similar to the problem of identifying
individual users. A transaction can range
from a single page reference to all of the page references in a user session, depending on the
criteria used to identify transactions. Unlike
traditional domains for data mining, such as
point of sale databases, there is no convenient
method of clustering page references into
transactions smaller than an entire user
session.
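The two preprocessing tasks can be prototyped roughly as follows. This is a simplified Python sketch with made-up log entries; real logs would be in a server log format, and the 30-minute session timeout is an assumption of this example, not a value from the text.

```python
from collections import defaultdict

IRRELEVANT_SUFFIXES = (".gif", ".jpeg", ".jpg", ".map")   # suffixes dropped during data cleaning
SESSION_TIMEOUT = 30 * 60                                  # assumed gap (seconds) between sessions

# (client identifier, unix timestamp, requested URL) -- toy stand-ins for server log entries
log = [
    ("10.0.0.1", 1000, "/company"),
    ("10.0.0.1", 1005, "/img/logo.gif"),
    ("10.0.0.1", 1010, "/company/products"),
    ("10.0.0.1", 9000, "/company/product1"),    # long gap -> treated as a new session
    ("10.0.0.2", 1020, "/company/special"),
]

def clean(entries):
    """Data cleaning: remove image and map requests by checking the URL suffix."""
    return [e for e in entries if not e[2].lower().endswith(IRRELEVANT_SUFFIXES)]

def sessionize(entries):
    """Transaction identification: group each client's page references into timeout-based sessions."""
    sessions, last_seen = defaultdict(list), {}
    for client, ts, url in sorted(entries, key=lambda e: (e[0], e[1])):
        if client not in last_seen or ts - last_seen[client] > SESSION_TIMEOUT:
            sessions[client].append([])          # start a new session for this client
        sessions[client][-1].append(url)
        last_seen[client] = ts
    return dict(sessions)

print(sessionize(clean(log)))
```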

Discovery Techniques on Web
Transactions:

Once user transactions or sessions
have been identified, there are several kinds
of access pattern mining that can be
performed depending on the needs of the
analyst, such as path analysis, discovery of
association rules and sequential patterns, and
clustering and classification.
There are many different types of
graphs that can be formed for performing
path analysis, since a graph represents some
relation defined on web pages. The most
obvious is a graph representing the physical
layout of a Web site, with web pages as
nodes and hypertext links between pages as
directed edges. Other graphs could be formed
based on the types of Web pages with edges
representing similarity between pages, or
creating edges that give the number of users
that go from one page to another. Most of the
work to date involves determining frequent
traversal patterns or large reference
sequences from the physical layout type of
graph. Path analysis could be used to
determine most frequently visited paths in a
web site. Information that can be discovered
through path analysis are:
70% of clients who
accessed /company/product2 did so by
starting at /company and proceeding
through /company/new,
/company/products, and
/company/product1.
80% of clients who accessed the site
started from /company/products; or
65% of clients left the site after four
or less page references.

The first rule determines that there is
useful information in /company/product2, but since users tend to take a circuitous
route to the page, it is not clearly marked.
The second rule simply states that the
majority of users are accessing the site
through a page other than the main page
and it might be a good idea to include
directory type information on this page if
it is not there already. The last rule
indicates an attrition rate for the site. Since many users don't browse further
than four pages into the site, it would be
prudent to ensure that important
information is contained within four
pages of the common site entry points.
Association rule discovery techniques
are generally applied to databases of
transactions where each transaction
consists of a set of items. In such a
framework the problem is to discover all
associations and correlations among data
items where the presence of one set of
items in a transaction implies the
presence of other items. In the context of
Web usage mining, this problem amounts
to discovering the correlations among
references to various files available on
the server by a given client. Using
association rule discovery techniques the
correlations will be
40% of clients who accessed the web page with URL /company/product1 also accessed /company/product2; or
30% of clients who accessed /company/special placed an online order in /company/product1.
Since such transaction databases usually contain extremely large amounts of data, current association rule discovery techniques try to prune the search space according to the support for items under consideration. Support is a measure based
on the number of occurrences of user
transactions within transaction logs.
Discovery of such rule for
organizations engaged in electronic
commerce can help in the development of
effective marketing strategies. But in
addition, association rules discovered
from WWW access logs can give an
indication of how to best organize the
organization's Web space.
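A minimal sketch of the support-based pruning described above (not the WEBMINER implementation): count how often a set of page references occurs across user transactions and keep only the itemsets whose support clears a chosen minimum. The transactions and the 0.5 support threshold are invented for the example.

```python
from itertools import combinations
from collections import Counter

transactions = [
    {"/company", "/company/products", "/company/product1"},
    {"/company/products", "/company/product1", "/company/product2"},
    {"/company/special", "/company/product1"},
    {"/company/products", "/company/product2"},
]

def frequent_itemsets(transactions, size=2, min_support=0.5):
    """Return all `size`-item page sets whose support (fraction of transactions) >= min_support."""
    counts = Counter()
    for t in transactions:
        for itemset in combinations(sorted(t), size):
            counts[itemset] += 1
    n = len(transactions)
    return {itemset: c / n for itemset, c in counts.items() if c / n >= min_support}

print(frequent_itemsets(transactions))
# e.g. ('/company/product1', '/company/products') appears in 2 of 4 transactions -> support 0.5
```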
The problem of mining sequential patterns is to find inter-transaction patterns such that the presence of a set of items is followed by another item in the time-stamp-ordered transaction set. In
web server transaction logs, the visit by a
client is recorded over a period of time.
The time stamp associated with a
transaction in this case will be time
interval which is determined and attached
to the transaction during the data cleaning
or transaction identification processes.
The discovery of sequential patterns in
Web server access logs allows Web-based organizations to predict user visit patterns and helps in targeting
advertising aimed at groups of users
based on these patterns. By analyzing this
information, the Web mining system can
determine temporal relationships among
data items such as following:
30% of clients who visited
/company/products, had done a
search in yahoo, within the past
week on keyword w; or
60% of clients, who placed an
online order
in/company/product1, also placed
an online order in
/company/product4.
Discovering classification rules allows one to
develop a profile of items belonging to a
particular group according to their common
attributes. This profile can then be used to
classify new data items that are added to the
database. In Web usage mining, classification
techniques allow one to develop a profile for
clients who access particular server files
based on demographic information available
on those clients, or based on their access
patterns.
Clustering analysis allows one to
group together clients or data items that have
similar characteristics. Clustering of client
information or data items on Web transaction
logs, can facilitate the development and
execution of future marketing strategies, both
online and off-line, such as automated return
mail to clients falling within a certain cluster,
or dynamically changing a particular site for
a client, on a return visit, based on past
classification of that client.

Pattern Discovery tools:

Tools for user pattern discovery use
sophisticated techniques from AI, data
mining, psychology, and information theory,
to mine for knowledge from collected data.
For example, the WEBMINER system introduces a general architecture for Web usage mining. WEBMINER automatically discovers association rules and sequential patterns from server access logs. These can, in turn, be used to perform various types of user traversal path analysis, such as identifying the most traversed paths through a Web locality. Web page typing and site topology information can also be used to categorize pages for easier access by users.

Pattern Analysis tools:

Once access patterns are discovered,
analysts need the appropriate tools and
techniques to understand, visualize and
interpret these patterns, e.g., the WebViz system. OLAP techniques such as data cubes can be used to simplify the analysis of usage statistics from server access logs. The WEBMINER system proposes an SQL-like query mechanism for querying the discovered knowledge (in the form of association rules and sequential patterns).

Web Usage Mining Architecture:

The WEBMINER is a system that
implements parts of this general architecture.
The architecture divides the Web usage
mining process into two main parts. The first
part includes the domain-dependent processes of transforming the Web data into a suitable transaction form. This includes
preprocessing, transaction identification and
data integration components. The second part
includes the largely domain independent
application of generic data mining and
pattern matching techniques (such as the
discovery of association rules and sequential patterns) as part of the system's data mining engine.
Data cleaning is the first step
performed in the Web usage mining process.
Some low level data integration tasks may
also be performed at this stage, such as
combining multiple logs, incorporating
referrer logs etc,. After the data cleaning, the
log entries must be partitioned into logical
clusters using one or a series of transaction
identification modules. The goal of
transaction identifications is to create
meaningful clusters of references for each
user. The task of identifying transactions is one of either dividing a large transaction into multiple smaller ones or merging small transactions into fewer larger ones. The input and output transaction formats match, so that any number of modules can be combined in any order, as the data analyst sees fit.
Once the domain-dependent data
transformation phase is completed, the
resulting transaction data must be formatted
to conform to the data model of the
appropriate data mining task. For instance,
the format of the association rule discovery
task may be different than the format
necessary for mining sequential patterns.
Finally, a query mechanism will allow the
user (analyst) to provide more control over
the discovery process by specifying various
constraints.
Application Areas of Web Mining



E-commerce, e.g., Amazon.com
Search engines, e.g., Google
Personalized Web portals, e.g., My Yahoo
Website design
Understanding user communities, e.g., AOL
Understanding auction behavior, e.g., eBay

Conclusion:
The term Web mining has been used
to refer to techniques that encompass a broad
range of issues. However, while meaningful
and attractive, this very broadness has caused
Web mining to mean different things to
different people, and there is a need to
develop a common vocabulary. Towards this
goal we proposed a definition of Web
mining, and developed a taxonomy of the various ongoing efforts related to it.

Web structure mining:
Web structure mining is the process of using graph theory to analyze the
node and connection structure of a web site.
According to the type of web structural data,
web structure mining can be divided into two
kinds.
The first kind of web structure mining is to extract patterns from hyperlinks in the
web. A hyperlink is a structural component
that connects the web page to a different
location.



The second kind of web structure mining is mining the document structure. It uses the tree-like structure to analyze and
describe the HTML (Hyper Text Markup
Language) or XML (Extensible Markup
Language) tags within the web page.










































A MINI PROJECT ON XONIX



BY


K.VENKATESH
(jntu_venky@yahoo.com)
(Ph:9866189399)


AND



T.RAVI KRISHNA
(ravikrishna_527@yahoo.com)
(Ph:9885950834)



JNTU College of Engineering (Autonomous)
Anantapur





















ABSTRACT:
Ours is a mini project in C related to the game XONIX. It exploits the power of C and reflects the real-world game.
This is a board game played by one player, level by level. We operate a device which moves in the playing field. Our goal is to fill the majority of the field. To accomplish this, it is necessary to move in a clear area and to cut off areas without balls (thus the balls should not cross our trajectory). Playing time for each level is limited.

The game also contains bonuses to gain lives, increase the score, add time, and slow down all the balls.



HIGHLIGHTS:
A fair amount of graphics is used, with good visual effects.
Its design is based on low-time-complexity algorithms.
It is highly user friendly.
It illustrates powerful concepts in C.









FUTURE ENHANCEMENTS:
We will improve the game by introducing more levels. We will implement the game in other environments. We will try to increase the complexity of the game.




Open standards vs High security



Presented by

R.Vijaykrishna Nag P.V.S.N.M. Pavan kumar
IIIrd year Computer Science Engg IIIrd year Computer Science Engg
Narasaraopeta Engg College Narasaraopeta Engg College
e-mail : vkn.teresa@gmail.com email : vsnmpkr@yahoo.co.in
Mobile no : 9908706318 Mobile no : 9948494981




ABSTRACT
"It's an interesting paradox: open standards versus high security"
For the last few years, we've witnessed a great expansion of remote control devices in our day-to-day life. To interact with all these remotely controlled devices, we'll need to put them under a single standardized control interface that can interconnect into a network, specifically a Home Area Network (HAN). One of the most promising HAN protocols is ZigBee, a software layer based on the IEEE 802.15.4 standard. This article will introduce you to ZigBee: how it works and why it may be more appropriate than simply accumulating more remotes.
Why so many remotes? Right now, the more remotely controlled
devices we install in our homes, the more remotes we accumulate. Devices such as TVs,
garage door openers, and light and fan controls predominantly support one-way, point-to-point control. Because most remotely controlled devices are proprietary and not standardized among manufacturers, the remotes are not interchangeable and do not support more than one device, even those used for the same function. In other words, you'll have as many separate remote control units as you have devices to control.
Some modern IR remotes enable you to control multiple devices by
"learning" transmitting codes. But because the range for IR control is limited by line of sight,
they're used predominantly for home entertainment control.
A HAN can solve both problems because it doesn't need line-of-sight communication and
because a single remote (or other type of control unit) can command many devices. In the last
few years, new wireless local area networks (WLANs) such as Wi-Fi and Bluetooth became
available. Table 1 shows the strengths and applications of these different systems. Wireless
cameras for remote monitoring are an example of how to employ those technologies in home
automation and control areas. But the problem is that those technologies don't satisfy the
requirements for a HAN.
If we take a look at the type of data that circulates within a network of sensors and actuators,
we may find that most of it is small packets that control devices or obtain their status. For
many applications, such as wireless smoke and CO2 detectors or wireless home security, the
device mostly stays in deep-sleep mode and only sends a short burst of information if a
trigger event occurs. The main requirements for devices in such types of networks are:
extremely low power consumption
the ability to sleep for a long time
simplicity
low cost
These requirements are largely fulfilled by ZigBee technology, and devices using this technology have the capacity to meet the requirements of society.


What is Zigbee?
ZigBee is a home-area network designed specifically to replace the proliferation of individual
remote controls. ZigBee was created to satisfy the market's need for a cost-effective,
standards-based wireless network that supports low data rates, low power consumption,
security, and reliability. The ZigBee Alliance is working closely with the IEEE to ensure an integrated,
complete, and interoperable network for the market. The ZigBee Alliance will also serve as
the official test and certification group for ZigBee devices. ZigBee is the only standards-
based technology that addresses the needs of most remote monitoring and control and sensory
network applications.
The 802.15.4 specification only covers the lower networking layers (MAC and PHY). To
achieve inter-operability over a wide range of applications such as Home, Industrial or
Building Automation, the higher layers must be standardised as well.
The Zigbee Alliance has produced such a standard, using 802.15.4 wireless (generally in the
2.4 GHz band) as the low-level transport. Through the use of 'profiles', the specification may
be customised to suit various application areas.
ZigBee Home Automation Example

It may be helpful to think of IEEE 802.15.4 as the physical radio and ZigBee as the logical
network and application software, as Figure 1 illustrates. Following the standard Open
Systems Interconnection (OSI) reference model, ZigBee's protocol stack is structured in
layers. The first two layers, physical (PHY) and media access (MAC), are defined by the
IEEE 802.15.4 standard. The layers above them are defined by the ZigBee Alliance.

Figure 1: ZigBee stack architecture
Frame structure :
The data frame provides a payload of up to 104 bytes. The frame is numbered to ensure that
all packets are tracked. A frame-check sequence ensures that packets are received without
error. This frame structure improves reliability in difficult conditions.
After receiving a data packet, the receiver performs a 16-bit cyclic redundancy check (CRC)
to verify that the packet was not corrupted in transmission. With a good CRC, the receiver
can automatically transmit an acknowledgement packet (depending on application and
network needs), allowing the transmitting station to know that the data were received in an
acceptable form. If the CRC indicates the packet was corrupt, the packet is dropped and no
acknowledgement is transmitted. When a developer configures the network to expect
acknowledgement, the transmitting station will retransmit the original packet a specified
number of times to ensure successful packet delivery. If the path between the transmitter and
receiver has become less reliable or a network failure has occurred, ZigBee provides the
network with self-healing capabilities when alternate paths (if physically available) can be
established autonomously.
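To illustrate the frame-check idea in this paragraph, here is a short Python sketch of a generic CRC-16/CCITT check paired with the acknowledge-or-drop behavior described above. It is a conceptual illustration only: the exact bit ordering and initial value of the 802.15.4 FCS may differ from this generic variant, and the function names are invented.

```python
def crc16_ccitt(data: bytes, crc: int = 0x0000) -> int:
    """Generic CRC-16/CCITT (polynomial 0x1021), MSB-first; used here only to illustrate an FCS."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def send_ack():
    print("ACK sent")

def receive(payload: bytes, fcs: int) -> bool:
    """Receiver side: recompute the checksum; acknowledge only if it matches."""
    if crc16_ccitt(payload) != fcs:
        return False            # corrupted frame: dropped silently, sender will retransmit
    send_ack()                  # good frame: short ACK packet sent in the inter-frame quiet time
    return True

payload = b"sensor reading: 21.5C"
print(receive(payload, crc16_ccitt(payload)))    # True -> ACK sent
print(receive(payload, 0x0000))                  # False -> dropped, triggering retransmission
```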
Another important structure for 802.15.4 is the acknowledgment (ACK) frame. It provides
feedback from the receiver to the sender confirming that the packet was received without
error. The device takes advantage of specified "quiet time" between frames to send a short
packet immediately after the data-packet transmission.
A MAC command frame provides the mechanism for remote control and configuration of
client nodes. A centralized network manager uses MAC to configure individual clients'
command frames no matter how large the network.

Figure 2: The four basic frame types defined in 802.15.4: Data, ACK, MAC command, and
beacon
Finally, the beacon frame wakes up client devices, which listen for their address and go back
to sleep if they don't receive it. Beacons are important for mesh and cluster-tree networks to
keep all the nodes synchronized without requiring those nodes to consume precious battery
energy by listening for long periods of time.
Channel access, addressing :
Two channel-access mechanisms are implemented in 802.15.4. For a non-beacon network, a
standard ALOHA CSMA-CA (carrier-sense medium-access with collision avoidance)
communicates with positive acknowledgement for successfully received packets. In a
beacon-enabled network, a superframe structure is used to control channel access. The
superframe is set up by the network coordinator to transmit beacons at predetermined
intervals (multiples of 15.38ms, up to 252s) and provides 16 equal-width time slots between
beacons for contention-free channel access in each time slot. The structure guarantees
dedicated bandwidth and low latency. Channel access in each time slot is contention-based.
However, the network coordinator can dedicate up to seven guaranteed time slots per beacon
interval for quality of service.
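A small arithmetic sketch of the beacon-enabled superframe just described: for a beacon interval that is a chosen multiple of 15.38 ms (up to roughly 252 s), the coordinator divides the time between beacons into 16 equal-width slots, of which up to seven can be reserved as guaranteed time slots. The helper below simply applies those figures from the text; it is not part of any 802.15.4 stack, and the numeric bounds are taken directly from the paragraph above.

```python
BASE_PERIOD_S = 0.01538       # base beacon period from the text: 15.38 ms
MAX_INTERVAL_S = 252.0        # upper bound on the beacon interval from the text
NUM_SLOTS = 16                # equal-width slots between consecutive beacons
MAX_GTS = 7                   # at most seven guaranteed time slots per beacon interval

def superframe(multiple: int, guaranteed_slots: int = 0):
    """Return (beacon interval, slot width, contention slots) for a multiple of the base period."""
    interval = BASE_PERIOD_S * multiple
    if not (BASE_PERIOD_S <= interval <= MAX_INTERVAL_S):
        raise ValueError("beacon interval outside the range described in the text")
    if not (0 <= guaranteed_slots <= MAX_GTS):
        raise ValueError("at most seven guaranteed time slots are allowed")
    slot_width = interval / NUM_SLOTS
    return interval, slot_width, NUM_SLOTS - guaranteed_slots

print(superframe(4, guaranteed_slots=2))   # ~61.5 ms interval, ~3.8 ms slots, 14 contention slots
```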
Device addresses employ 64-bit IEEE and optional 16-bit short addressing. The address field
within the MAC can contain both source and destination address information (needed for
peer-to-peer operation). This dual address information is used in mesh networks to prevent a
single point of failure within the network.


Device Types :
There are three different types of ZigBee device:
ZigBee coordinator(ZC): The most capable device, the coordinator forms the root of
the network tree and might bridge to other networks. There is exactly one ZigBee
coordinator in each network. It is able to store information about the network,
including acting as the repository for security keys.
ZigBee Router (ZR): Routers can act as an intermediate router, passing data from
other devices.
ZigBee End Device (ZED): Contains just enough functionality to talk to its parent
node (either the coordinator or a router); it cannot relay data from other devices. It
requires the least amount of memory, and therefore can be less expensive to
manufacture than a ZR or ZC.
Power saving:
Ultra-low power consumption is how ZigBee technology promotes a long lifetime for devices
with nonrechargeable batteries. ZigBee networks are designed to conserve the power of the
slave nodes. For most of the time, a slave device is in deep-sleep mode and wakes up only for
a fraction of a second to confirm its presence in the network. For example, the transition from
sleep mode to data transmission is around 15ms and new slave enumeration typically takes just
30ms.
Security:
Security and data integrity are key benefits of the ZigBee technology. ZigBee leverages the
security model of the IEEE 802.15.4 MAC sublayer which specifies four security services:
access control: the device maintains a list of trusted devices within the network
data encryption, which uses symmetric key 128-bit advanced encryption standard
frame integrity to protect data from being modified by parties without cryptographic
keys
sequential freshness to reject data frames that have been replayed: the network controller compares the freshness value with the last known value from the device and rejects the frame if the freshness value has not been updated to a new value
The actual security implementation is specified by the implementer using a standardized
toolbox of ZigBee security software.
Network layer:
The NWK layer associates or dissociates devices using the network coordinator, implements
security, and routes frames to their intended destination. In addition, the NWK layer of the
network coordinator is responsible for starting a new network and assigning an address to
newly associated devices.
The NWK layer supports multiple network topologies including star, cluster tree, and mesh,
all of which are shown in Figure 3.

Figure 3: ZigBee network model
In a star topology, one of the FFD-type devices assumes the role of network coordinator and
is responsible for initiating and maintaining the devices on the network. All other devices,
known as end devices, directly communicate with the coordinator.

In a mesh topology, the ZigBee coordinator is responsible for starting the network and for
choosing key network parameters, but the network may be extended through the use of
ZigBee routers. The routing algorithm uses a request-response protocol to eliminate sub-
optimal routing. Ultimate network size can reach 2^64 nodes. Using local addressing, you can configure simple networks of more than 65,000 (2^16) nodes, thereby reducing address overhead.
General Operation Framework :
The General Operation Framework (GOF) is a glue layer between applications and rest of the
protocol stack. The GOF currently covers various elements that are common for all devices.
It includes subaddressing and addressing modes and device descriptions, such as type of
device, power source, sleep modes, and coordinators. Using an object model, the GOF
specifies methods, events, and data formats that are used by application profiles to construct
set/get commands and their responses.
Actual application profiles are defined in the individual profiles of the IEEE's working
groups. Each ZigBee device can support up to 30 different profiles. Currently, only one
profile, Commercial and Residential Lighting, is defined. It includes switching and dimming
load controllers, corresponding remote-control devices, and occupancy and light sensors.
The ZigBee stack is small in comparison to other wireless standards. For network-edge
devices with limited capabilities, the stack requires about 4Kb of the memory. Full
implementation of the protocol stack takes less than 32Kb of memory. The network
coordinator may require extra RAM for a database of node devices and for transaction and
pairing tables. The 802.15.4 standard defines 26 primitives for the PHY and MAC layers;
probably another dozen will be added after finalizing the NWK layer specification. Those
numbers are still modest compared to 131 primitives defined for Bluetooth. Such a compact
footprint enables you to run Zigbee on a simple 8-bit microcontroller such as an HC08- or
8051-based processor core.

Figure 4: A typical ZigBee-enabled device will consist of RF IC and 8-bit microprocessor
with peripherals connected to an application sensor or actuators
A typical ZigBee-enabled device includes a radio frequency integrated circuit (RF IC) with a
partially implemented PHY layer connected to a low-power, low-voltage 8-bit
microcontroller with peripherals, connected to an application sensor or actuators. The
protocol stack and application firmware reside in on-chip flash memory. The entire ZigBee
device can be compact and cost efficient.
Network applications under the ZigBee standard focus on low power consumption (only two major modes, Tx/Rx and Sleep, are needed), a high density of nodes per network, low cost, and simple implementation.
ZigBee Using Sensors :
Many in the industry are calling for a wireless networking standard that can deliver device-
level communications for sensing, data acquisition, and control applications. Will the ZigBee
standard make this a reality?
Consider a typical security application, such as a magnetic reed switch door sensor. The sensor itself consumes almost no electricity; it's the radio that uses the bulk of the power. The sensor is configured to have a heartbeat at 1-minute intervals and to send a message immediately when an event occurs. Assuming dozens of events per day, analysis shows that the sensor can still outlast an alkaline battery. The configuration allows the network to update the sensor parameters remotely, change its reporting interval, or perform other remote functions and still have battery longevity beyond the shelf life.
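As a back-of-envelope version of that analysis, the sketch below duty-cycles an assumed transmit current against a sleep-mode floor. Every number in it (currents, on-air time, battery capacity, event count) is an assumption of typical magnitude, not a value taken from the text, and real designs would also budget for receive windows and battery self-discharge.

# Rough battery-life estimate for the door-sensor scenario above (all figures assumed).

SLEEP_CURRENT_UA = 1.0        # microamps while asleep
TX_CURRENT_MA = 30.0          # milliamps while transmitting
TX_TIME_MS = 5.0              # time on air per message
BATTERY_MAH = 2500.0          # alkaline AA-class capacity

heartbeats_per_day = 24 * 60  # one heartbeat per minute
events_per_day = 50           # "dozens of events per day"
messages_per_day = heartbeats_per_day + events_per_day

# Average current = sleep floor + duty-cycled transmit current
tx_seconds_per_day = messages_per_day * TX_TIME_MS / 1000.0
avg_tx_ma = TX_CURRENT_MA * tx_seconds_per_day / 86400.0
avg_current_ma = SLEEP_CURRENT_UA / 1000.0 + avg_tx_ma

battery_life_years = BATTERY_MAH / avg_current_ma / 24.0 / 365.0
print(f"average current ~{avg_current_ma * 1000:.1f} uA, "
      f"battery life ~{battery_life_years:.1f} years")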
Transmission Range :
ZigBee relies on the basic 802.15.4 standard to establish radio performance. As a short-range wireless standard, 802.15.4 doesn't try to compete with high-powered transmitters but instead excels in ultra-long battery life and low transmitter power. The standard specifies transmitter output power at a nominal -3 dBm (0.5 mW), with the upper limit controlled by the regulatory agencies of the region in which the sensor is used. At -3 dBm output, single-hop ranges of 10 to more than 100 m are reasonable, depending on the environment, antenna, and operating frequency band.
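To show how those numbers relate, the sketch below converts the nominal output power between dBm and milliwatts and estimates a free-space (Friis) range. The receiver sensitivity of -85 dBm and the ideal free-space model are assumptions; real indoor ranges will be shorter.

import math

def dbm_to_mw(dbm: float) -> float:
    return 10 ** (dbm / 10.0)

def friis_range_m(tx_dbm: float, rx_sensitivity_dbm: float, freq_hz: float) -> float:
    """Free-space range for a given link budget, assuming isotropic antennas."""
    link_budget_db = tx_dbm - rx_sensitivity_dbm
    wavelength = 3e8 / freq_hz
    return (wavelength / (4 * math.pi)) * 10 ** (link_budget_db / 20.0)

print(dbm_to_mw(-3))                   # ~0.5 mW nominal transmit power
print(friis_range_m(-3, -85, 2.4e9))   # ~125 m in free space with an assumed
                                       # -85 dBm receiver sensitivity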
Data Rate :
When the sensor is transmitting only a few bits or bytes, the system can be more efficient if it
transmits and receives the data quickly. For any given quantity of data, transmitting at a
higher data rate allows the system to shut down the transmitter and receiver more quickly,
saving significant power.
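The sketch below quantifies that argument: for a fixed packet size, a higher over-the-air rate means less radio-on time and therefore less energy per packet. The current, voltage, and packet size are illustrative assumptions; the 20/40/250 kbps figures are the raw 802.15.4 rates for the 868 MHz, 915 MHz, and 2.4 GHz bands.

# Energy per packet versus data rate (current, voltage, packet size assumed).

ACTIVE_CURRENT_MA = 30.0   # assumed radio current while transmitting
SUPPLY_V = 3.0
PACKET_BYTES = 32          # a short sensor report

def energy_per_packet_uj(data_rate_bps: float) -> float:
    on_air_s = PACKET_BYTES * 8 / data_rate_bps
    return ACTIVE_CURRENT_MA / 1000.0 * SUPPLY_V * on_air_s * 1e6  # microjoules

for rate in (20_000, 40_000, 250_000):
    print(f"{rate // 1000:>3} kbps: {energy_per_packet_uj(rate):6.1f} uJ per packet")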
Data Latency:
Sensor systems have a broad range of data-latency requirements. If sensor data are needed within tens of milliseconds, as opposed to tens of seconds, the requirement places different demands on the type and extent of the intervening network. For many sensor applications, data latency is less critical than battery life or data reliability.
For simple star networks (many clients, one network coordinator), ZigBee can provide
latencies as low as ~16 ms in a beacon-centric network, using guaranteed time slots to
prevent interference from other sensors. You can further reduce latencies to several milliseconds, but at the risk of potential interference from accidental data collisions with other sensors on the network.
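The ~16 ms figure corresponds to the shortest IEEE 802.15.4 superframe (beacon order 0) at the 2.4 GHz symbol rate, as the sketch below works out. The constants are the standard's values as commonly quoted; treat this as an illustration of where the number comes from rather than a normative calculation.

# Superframe (beacon interval) duration at 2.4 GHz for a given beacon order.

A_BASE_SLOT_DURATION = 60          # symbols per superframe slot
A_NUM_SUPERFRAME_SLOTS = 16
SYMBOL_RATE_2_4GHZ = 62_500        # symbols per second (250 kbps O-QPSK)

def superframe_duration_ms(beacon_order: int) -> float:
    base_symbols = A_BASE_SLOT_DURATION * A_NUM_SUPERFRAME_SLOTS  # 960 symbols
    return base_symbols * (2 ** beacon_order) / SYMBOL_RATE_2_4GHZ * 1000.0

print(superframe_duration_ms(0))   # ~15.36 ms: the shortest beacon interval
print(superframe_duration_ms(6))   # ~983 ms: relaxed latency, lower duty cycle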
If you relax data-latency requirements, you can assume that the battery life of the client nodes
will increase. This is even truer of network hubs, which are required to coordinate and
supervise the network.
Size :
As silicon processes and radio technology progress, transceiver systems shrink in physical
size. A transceiver might easily fit inside a thimble. In the case of ZigBee systems, the radio
transceiver has become a single piece of silicon, with a few passive components and a
relatively noncritical board design.
Microcontrollers with the native ability to interface with sensors have eclipsed even the radio's rapid reduction in size. Today, the 8-bit MCU that hosts the application may already
include dozens of kilobytes of flash memory, RAM, and various hardware-based timer
functions, along with the ability to interface directly to the radio transceiver IC. The MCU
requires only a few external passive components to be fully functional.

Data Security :
It's important to provide your sensor network with adequate security. IEEE 802.15.4 provides
authentication, encryption, and integrity services for wireless systems that allow systems
developers to apply security levels as required. These include no security, access control lists,
and 32-bit to 128-bit AES encryption with authentication. This security suite lets the
developer pick and choose the security necessary for the application, providing a manageable
tradeoff against data volume, battery life, and system processing power requirements. The
IEEE 802.15.4 standard doesn't provide a mechanism for moving security keys around a network; this is where ZigBee comes in.
The ZigBee security toolbox consists of key management features that let you safely manage
a network remotely. For those systems where data security is not critical (e.g., a set of sensors
monitoring microclimates in a forest), you may decide not to implement security features but
instead optimize battery life and reduce system cost. For the developer of an industrial or military perimeter security sensor system, data security and, more importantly, the ability to defend against sensor masking or spoofing may have a higher priority. In many ZigBee-approved applications, security will already be a seamless part of the overall system.
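As a hedged illustration of the kind of AES-based authenticated encryption the 802.15.4 security suites provide (AES-CCM producing a message integrity code), the sketch below uses the Python "cryptography" package as a stand-in; it is not the ZigBee key-management machinery itself, and the payload and header bytes are invented.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESCCM
from cryptography.exceptions import InvalidTag

key = AESCCM.generate_key(bit_length=128)   # 128-bit key, as in the AES suites above
aead = AESCCM(key, tag_length=8)            # 64-bit MIC; 32- and 128-bit MICs also exist
nonce = os.urandom(13)                      # 13-byte nonce, the CCM* size used by 802.15.4

payload = b"\x01\x17"                       # e.g. a short sensor reading (illustrative)
header = b"frame-header"                    # authenticated but not encrypted

ciphertext = aead.encrypt(nonce, payload, header)
assert aead.decrypt(nonce, ciphertext, header) == payload

try:
    aead.decrypt(nonce, ciphertext, b"forged-header")   # tampered header
except InvalidTag:
    print("authentication failed, frame rejected")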

APPLICATIONS
Industrial Purposes :

In the industrial sector, ZigBee technology helps improve Automated Meter Reading (AMR) for
utility and energy management, logistics and inventory tracking, and security and access
control. Other systems can be tracked for preventive maintenance & performance monitoring.
Seismic detectors, inclinometers, robotics and security systems are just a few examples.

Fingerprint Keypad Lock :
If you're like most people, you're probably frustrated when fumbling with multiple keys late at night or when you're juggling grocery bags. The Fingerprint Keypad Lock combines a conventional cylinder lockset with state-of-the-art fingerprint-reading circuitry and a digital keypad. You'll be able to keep your keys in your pocket and open the door by simply pressing the sensor with your registered fingerprint, or you can key in your PIN on the keypad. Electronic access is also great for housekeepers and guests, or anyone you'd rather not give a key. With a single latch mortise, installation of this fingerprint door lock is both simple and quick. The lock can store up to 120 different fingerprints to allow access for family members, cleaning staff, friends, and more. A highly integrated circuit chip allows a full-function program of enrolling and erasing fingerprints.

Butterfly Indoor Flyer :
The Butterfly Indoor Flyer is the smallest and lightest ready-to-fly RC airplane available. At 3.6 grams, this delicate plane is small and light enough to maneuver around your living room. The included four-channel transmitter provides precise control for tight turns and slow flying. Your airspace needs to be only 12 feet by 16 feet. This tiny remote-control plane is powered by the included lithium-polymer battery, while the transmitter runs on four AA batteries (sold separately). The Butterfly Indoor Flyer has a beautifully constructed, delicate airframe along with a carbon-fiber propeller, a Swiss-engineered gearbox, and a tiny 4 mm coreless motor. Control the RC airplane with the transmitter's two joysticks. The transmitter also includes a built-in portable charger for the plane's lithium-polymer battery.
Please note: This delicate aircraft is not a toy!
Motion- and Heat-Sensing Solar Floodlight :
Want to install a motion- and heat-sensing solar floodlight without having to hire an electrician to wire it? Install this solar-powered security light anywhere around your home without wiring hassles or expensive battery replacement! This maintenance-free security light uses solar energy to power the long-lasting, bright, energy-efficient halogen bulb, so you won't be left in the dark. Install a Solar Floodlight with Heat & Motion Sensor near each of your entryways and you can be sure that your family can enter and exit your home safely, even when there's a power outage. The Solar Floodlight is powered by a rechargeable lead-acid battery, which is included.
Don't worry about cloudy skies, because the Solar Floodlight works for approximately two weeks without ANY sunlight (at an average of eight 30-second light intervals each night)! A built-in motion and heat sensor turns the light on for people, but not for swaying tree branches. You can adjust the sensitivity of the sensor to cover a full 90 degrees horizontally, at a distance of up to 30 feet. Set the darkness level to keep the light from coming on too early in the evening or too late at night. Choose either a 30- or 60-second duration for the light to remain on after the last motion is detected. A 14-foot cable between the solar panel and the light lets you place the security light where you need it, while allowing the solar panel to be installed facing the sun.
Wireless Personal Area Networking (WPAN) :
Whereas RFID is a method of remotely storing and retrieving data using RFID tags and readers, ZigBee goes much further. It is a full-blown telemetry system in its own right, with the ability to provide wireless personal area networking (WPAN), i.e., digital radio connections between computers and related devices, such as sensors. This kind of network eliminates the use of physical data buses such as USB and Ethernet cables. As such, ZigBee is the ideal system for the copper-less warehouse or factory. When used in a tracking application, ZigBee does not require the read portals or the associated management software.
ZigBee builds on the global communication protocol standards developed by the IEEE 802.15 Working Group. The fourth in this series of protocols, the low-rate WPAN standard on which ZigBee is based, is designed primarily for telemetry applications. However, its strength is that it can be incorporated into small chips that consume little power and are relatively inexpensive. These chips can then be integrated into low-cost, low-power devices that can "sleep" for 99% of the time until awakened by a beacon signal. The technology provides highly reliable, self-healing, self-joining networks with network-protocol security encryption, and is designed to operate in electrically noisy industrial environments.
IN WIND TURBINES :
One of the first uses planned by IDC for ZigBee is in the area of offshore safety, in particular offshore manpower tracking in wind turbines. Tracking is required in these structures for a number of reasons. First, the weather often turns bad very quickly, and there is a need to get people back on shore within 60 minutes. Second, there is the safety issue of personnel working in isolation, remotely. Third, security and access restrictions apply to the turbines. Fourth, management needs to know who is working where, and how efficiently. Fifth, access to maintenance logs is restricted. Finally, details of personnel medical records can be held on the ZigBee chip, for fast retrieval in the event of an accident.
The building blocks of this system using ZigBee technology start with a simple configuration employing a single wireless sensor in each turbine to log when a person enters and leaves. Complementing this will be basic central software for real-time tracking and logging.
ZigBee for Business
Building control and automation, wireless lighting, security and access control, and asset and inventory management are ideal applications for ZigBee technology in commercial systems.
Wireless inventory management is becoming especially important with the advent of asset
tags that track everything from individual equipment and products to pallets in a warehouse.
A ZigBee wireless system can bring a new level of control to security systems that connect
components such as motion-control sensors, cameras and employee badges.
EmbedSense™ :
MicroStrain, Inc. announces the availability of the miniature EmbedSense wireless sensor. EmbedSense is a tiny wireless sensor and data-acquisition system that is small enough to be embedded in a product, enabling the creation of smart structures, smart materials, and smart machines.
EmbedSense nodes can be placed within implants, on spinning machinery, and within composite materials. Because they can tolerate extreme G levels and high temperatures, sensor measurements can be made in applications where previously no data could be obtained.
Batteries are completely eliminated, which means that the embedded sensors and EmbedSense node can be queried for the life of the structure. EmbedSense uses an inductive link to receive power from an external coil and to return digital strain, temperature, and unique ID information.
Applications range from monitoring the healing of the spine to testing strains and temperatures on jet turbine engines. EmbedSense tags can read data from multiple types of sensors, including semiconductor temperature sensors, thermocouples, strain gauges, pressure sensors, and load cells.



Smaller or niche markets for ZigBee technology :
There are many other smaller or niche markets for ZigBee technology, including consumer, health care monitoring, and automotive. The following are just a few examples.
ZigBee Markets and Technologies :
Commercial systems: Vending machines, fleet management.
Consumer products: Cellular handsets, computer peripherals, remote controls, portable devices.
Enterprise systems: Health care and patient monitoring, environmental monitoring and detection.
Industrial systems: Remote-controlled machines.
Military and government systems: Asset tracking, personnel monitoring and surveillance.
Transportation systems: Audio control and automation, security and access control.
Conclusion
ZigBee and the underlying 802.15.4 communications technology could form the basis of future wireless sensors, offering data reliability, long battery life, lower system costs, and good range through flexible networking. IEEE 802.15.4 provides the PHY and MAC layers, while the ZigBee Alliance provides the network, security, and profile specifications.
What Do ZigBee Developers Want?
Short-range, low-power wireless is an emerging market, and its growth can be difficult to forecast. The 2.4 GHz band is the frequency of choice for today's ZigBee networks. The upcoming changes to the IEEE 802.15.4 standard are expected to provide performance benefits for sub-1 GHz bands, which the ZigBee standard can accommodate. Government regulations are also opening non-2.4 GHz bands for license-free use. And there is a critical need for low-cost platforms that can enable millions of devices worldwide with wireless connectivity.
A ZigBee network should be scalable, flexible, and secure, and it should use minimal power. Ultra-long battery life is also critical for ZigBee networks: up to 20 years on a single lithium-ion battery. For greater flexibility, high-end.
Developers are free to build their own applications on top of ZigBee; it is up to the application developer to use the proper level of encryption, just as with any other communication system, so if a security solution uses only the IEEE part of the ZigBee standard, it can be used today.
Key Words : ZigBee, data latency, protocols

References :
www.zigbee.com
www.smartthome.com
Philip Kuryloski and Sameer Pai, "Our Crossbow Sensor Equipment and Zigbee"
