
International Journal of

Computational Intelligence and

Information Security
ISSN: 1837-7823

September 2010

Vol. 1 No. 7

© IJCIIS Publication

IJCIIS Editor and Publisher


P Kulkarni

Publisher’s Address:
5 Belmar Crescent, Canadian
Victoria, Australia
Phone: +61 3 5330 3647
E-mail Address: ijciiseditor@gmail.com

Publishing Date: September 30, 2010

Members of IJCIIS Editorial Board

Prof. A Govardhan, Jawaharlal Nehru Technological University, India


Dr. Awadhesh Kumar Sharma, Madan Mohan Malviya Engineering College, India
Prof. Ayyaswamy Kathirvel, BS Abdur Rehman University, India
Prof. Deepankar Sharma, D. J. College of Engineering and Technology, India
Dr. D. R. Prince Williams, Sohar College of Applied Sciences, Oman
Prof. Durgesh Kumar Mishra, Acropolis Institute of Technology and Research, India
Dr. Imen Grida Ben Yahia, Telecom SudParis, France
Dr. Himanshu Aggarwal, Punjabi University, India
Dr. Jagdish Lal Raheja, Central Electronics Engineering Research Institute, India
Prof. Natarajan Meghanathan, Jackson State University, USA
Dr. Oluwaseyitanfunmi Osunade, University of Ibadan, Nigeria
Dr. Ousmane Thiare, Gaston Berger University, Senegal
Dr. K. D. Verma, S. V. College of Postgraduate Studies and Research, India
Prof. M. Thiyagarajan, Sastra University, India
Dr. Manjaiah D. H., Mangalore University, India
Dr. N. Ch. Sriman Narayana Iyengar, VIT University, India
Prof. Nirmalendu Bikas Sinha, College of Engineering and Management, Kolaghat, India
Dr. Rajesh Kumar, National University of Singapore, Singapore
Dr. Raman Maini, University College of Engineering, Punjabi University, India
Dr. Shahram Jamali, University of Mohaghegh Ardabili, Iran
Dr. Shishir Kumar, Jaypee University of Engineering and Technology, India
Prof. Sriman Narayana Iyengar, VIT University, India
Dr. Sujisunadaram Sundaram, Anna University, India
Dr. Sukumar Senthilkumar, National Institute of Technology, India
Prof. V. Umakanta Sastry, Sreenidhi Institute of Science and Technology, India
Dr. Venkatesh Prasad, Lingaya's University, India
Journal Website: https://sites.google.com/site/ijciisresearch/


Contents

1. Modelling and Simulation of Closed Loop Speed Controlled PFC Half Bridge Converter fed PMBLDC motor (pages 4-13)
2. An Approach to Compress & Secure Image Communication (pages 14-19)
3. Cloud Computing Approaches for Educational Institutions (pages 20-28)
4. Preprocessing of Digital Mammogram for Image Analysis (pages 29-41)
5. Comparative Analysis of Performance of Series FACTS Devices Using PSO Based Optimal Power Flow Solutions (pages 42-52)
6. Secure and Unique Biometric Template Using Post Quantum Cryptosystem (pages 53-61)
7. Statcom In Eight Bus System For Power Quality Enhancement (pages 62-68)
8. Improved Modification of Single Stage AC-AC Converter for Induction Heating Application (pages 69-77)


Modelling and Simulation of Closed Loop Speed Controlled PFC Half Bridge Converter fed PMBLDC motor
C. Umayal and S. Rama Reddy
Research Scholar,
Electrical and Electronics Engg Dept,
Anna University, Chennai, India
cumayal@yahoo.com
Professor,
Electrical and Electronics Engg Dept,
Jerusalem College of Engineering, Chennai, India
srr_victory@yahoo.com
Abstract
Digital simulation of a Power Factor Correction (PFC) half bridge converter based adjustable-speed, voltage-controlled VSI fed PMBLDC motor is presented in this paper. A single-phase AC-DC converter topology based on the half bridge converter is employed for PFC, which ensures near-unity power factor over a wide speed range. The proposed speed control scheme is based on controlling the DC link voltage in proportion to the desired speed of the PMBLDC motor. The speed is regulated by a PI controller. The PFC converter based PMBLDCM drive is designed, modeled and simulated in the MATLAB-Simulink environment. The drive also ensures high accuracy and robust operation from near-zero to high speed.

Keywords: Boost rectifier, low conduction losses, power factor correction (PFC), Hall position sensors,
permanent magnet brushless DC motor, BLDC motor, closed loop speed control, PI controller.


1. Introduction
Research on PFC circuits for high power applications has increased. Three-level boost type PFC converters are an attractive solution for high power density, high efficiency three-phase pre-regulators. Single-stage PFC circuits have recently faced the serious challenge of increasing output power capability with optimized component ratings. PFC using half bridge converters is the recent trend: in these rectifiers only half of the output voltage is applied across the switches, reducing the stress on them to a great extent.
Recent developments in power electronics, microelectronics and modern control technologies have greatly influenced the widespread use of permanent magnet motors. The major classes of permanent magnet motors are the permanent magnet synchronous motor (PMSM) and the permanent magnet brushless DC motor (PMBLDCM). While the PMSM has a sinusoidal back-EMF waveform, the BLDC motor has a trapezoidal back-EMF waveform.
BLDC motors offer good performance, high efficiency, low maintenance, high power density and low inertia, and hence find widespread use in a variety of motion control applications. The classical PI controller is a widely used controller. Compared with conventional DC motors, BLDC motors do not have brushes for commutation; instead they are electronically commutated. BLDC motors have many advantages over brushed DC motors and induction motors, such as better speed-torque characteristics, high dynamic response, high efficiency, noiseless operation and wide speed ranges. Their higher torque-to-weight ratio enables use in applications where space and weight are critical factors.
A new generation of microcontrollers and advanced electronics has overcome the challenge of implementing the required control functions, making the BLDC motor practical for a wide range of uses [1], [2], [3].

2. Power Factor Correction Converters

In the field of inverter appliances, the AC/DC/AC converter is more and more popular. Harmonic currents seriously contaminate the mains side, preventing products from passing the IEC 61000-3-2 and IEC 61000-3-12 harmonics standards. To mitigate this harmonic current pollution, most household inverters are equipped with a power factor corrector as the front-end AC/DC converter, so that the input power factor approaches unity. But the conventional active PFC has to employ an uncontrolled rectifier and a costly boost inductor, and these power components result in power loss, low efficiency and high cost. The bridgeless PFC (BLPFC), by contrast, is characterized by a small number of power switches, making room for low power loss [4]. Additionally, in the conventional active PFC the power switches turn on and off throughout the whole mains period, enduring high voltage and current stresses, producing considerable switching and conduction loss and limiting the efficiency.
A voltage source inverter can run the BLDC motor by applying three-phase square wave voltages to the stator winding of the motor. A variable-frequency square wave voltage can be applied to the motor by controlling the switching frequency of the power semiconductor switches. The square wave voltage will induce low frequency harmonic torque pulsations in the machine. Also, variable voltage control with variable frequency operation is not possible with square wave inverters. Even updated pulse-width modulation (PWM) techniques used to control modern static converters such as machine drives and power factor compensators do not produce perfect waveforms, which strongly depend on the semiconductor switching frequency. Voltage or current converters, as they generate discrete output waveforms, force the use of machines with special insulation and, in some applications, large inductances connected in series with the respective load. It is also well known that distorted voltage and current waveforms produce additional power losses and high frequency noise that can affect not only the power load but also the associated controllers. All these unwanted operating characteristics associated with PWM converters can be overcome with improved bridgeless PFC boost converters.

2.1. Principle of the bridgeless topology

The basic topology of the bridgeless PFC boost rectifier [5] is shown in Fig. 1. Compared to the
conventional PFC boost rectifier, shown in Fig. 2, one diode is eliminated from the line-current path, so that the line
current simultaneously flows through only two semiconductors resulting in reduced conduction losses [6]. The
bridgeless PFC topology removes the input rectifier conduction losses and is able to achieve higher efficiency.
However, the bridgeless PFC boost rectifier in Fig. 1 has significantly larger common-mode noise than the
conventional PFC boost rectifier [7].


[Figure 1 (circuit schematic): the AC source feeds boost inductor LB and diodes D1, D2; switches SB1 and SB2 form the bridgeless leg, with output capacitor C and load RL.]

Figure 1: Bridgeless Boost Converter

Based on the analysis above, the bridgeless PFC circuit can simplify the circuit topology and improve the
efficiency as well.

3. Types of control techniques of PMBLDC motor

Various control techniques are discussed in [8]. Basically, two methods are available for controlling the PMBLDC motor: sensor control and sensorless control.
To control the machine, the present position of the rotor is required to determine the next commutation interval. Two approaches are used to control the applied voltage: one controls the DC bus rail voltage and the other uses the PWM method. Some designs utilize both, to provide high torque at high load and high efficiency at low load; such a hybrid design also allows control of the harmonic current [9].
In control methods using sensors, mechanical position sensors such as a Hall sensor, shaft encoder or resolver are used to provide rotor position information. Hall position sensors, or simply Hall sensors, are widely used and popular. The rotor position information is used to generate precise firing commands for the power converter, ensuring drive stability and fast dynamic response. The speed feedback is derived from the position sensor output signals. Because the Hall effect sensors are fixed relative to the motor, the angle variation between two commutation signals is constant, reducing speed sensing to a simple division. Usually the speed and position of a permanent magnet brushless DC motor rotor are controlled in a conventional cascade structure: the inner current control loop runs at a higher bandwidth than the outer speed loop to achieve effective cascade control [10].
Various sensorless methods for BLDC motors are analyzed in [11-18]. [11] proposes speed control of a brushless drive employing a PWM technique on a digital signal processor. A PSO based optimization of a PID controller for a linear BLDC motor is given in [12]. Direct torque control and indirect flux control of a BLDC motor with non-sinusoidal back-EMF controls the torque directly and the stator flux amplitude indirectly, using the d-axis current to achieve low-frequency torque-ripple-free control with maximum efficiency [13-14]. [15] proposes a novel architecture using an FPGA-based system. A fixed-gain PI speed controller has the limitations of being suitable only for a limited operating range around the operating point and of exhibiting overshoot; to eliminate this problem, a fuzzy based gain-scheduled PI speed controller is proposed in [16]. A new modular structure of a PLL speed controller is proposed in [17]. A fixed-structure controller (PI or PID) using time-constrained output feedback is given in [18].
The above literature does not deal with PFC in a closed loop controlled PMBLDC drive. This work proposes PFC at the input of the PMBLDC drive.

4. Mathematical Model of the PMBLDC motor

Modeling and simulation play an important role in the design of power electronics systems. The classic design approach begins with an overall performance investigation of the system, under various circumstances, through mathematical modeling.


The circuit model of the PMBLDC motor is shown in Fig. 2.

Figure 2: Motor Circuit Model

The voltage equations of the BLDC motor are as follows:

$$V_a = R_a i_a + \frac{d}{dt}\left(L_{aa} i_a + L_{ab} i_b + L_{ac} i_c\right) + \frac{d\lambda_{ar}(\theta)}{dt}$$

$$V_b = R_b i_b + \frac{d}{dt}\left(L_{ba} i_a + L_{bb} i_b + L_{bc} i_c\right) + \frac{d\lambda_{br}(\theta)}{dt}$$

$$V_c = R_c i_c + \frac{d}{dt}\left(L_{ca} i_a + L_{cb} i_b + L_{cc} i_c\right) + \frac{d\lambda_{cr}(\theta)}{dt}$$

In a balanced system the voltage equation becomes

$$\begin{bmatrix} V_a \\ V_b \\ V_c \end{bmatrix} = \begin{bmatrix} R & 0 & 0 \\ 0 & R & 0 \\ 0 & 0 & R \end{bmatrix} \begin{bmatrix} i_a \\ i_b \\ i_c \end{bmatrix} + \frac{d}{dt}\begin{bmatrix} L_a & L_{ba} & L_{ca} \\ L_{ba} & L_b & L_{cb} \\ L_{ca} & L_{cb} & L_c \end{bmatrix} \begin{bmatrix} i_a \\ i_b \\ i_c \end{bmatrix} + \begin{bmatrix} e_a \\ e_b \\ e_c \end{bmatrix} \qquad (1)$$

The mathematical model of the motor is described by Equation (1) under the assumption that the magnet has high resistivity, so rotor-induced currents can be neglected [3]. It is also assumed that the stator resistances of all the windings are equal, and therefore the rotor reluctance does not change with angle. Now

$$L_a = L_b = L_c = L, \qquad L_{ab} = L_{bc} = L_{ca} = M$$

Assuming constant self and mutual inductances, the voltage equation becomes

$$\begin{bmatrix} V_a \\ V_b \\ V_c \end{bmatrix} = \begin{bmatrix} R & 0 & 0 \\ 0 & R & 0 \\ 0 & 0 & R \end{bmatrix} \begin{bmatrix} i_a \\ i_b \\ i_c \end{bmatrix} + \frac{d}{dt}\begin{bmatrix} L-M & 0 & 0 \\ 0 & L-M & 0 \\ 0 & 0 & L-M \end{bmatrix} \begin{bmatrix} i_a \\ i_b \\ i_c \end{bmatrix} + \begin{bmatrix} e_a \\ e_b \\ e_c \end{bmatrix} \qquad (2)$$

In state-space form the equation is arranged as

$$\frac{d}{dt}\begin{bmatrix} i_a \\ i_b \\ i_c \end{bmatrix} = -\frac{R}{L-M}\begin{bmatrix} i_a \\ i_b \\ i_c \end{bmatrix} - \frac{1}{L-M}\begin{bmatrix} e_a \\ e_b \\ e_c \end{bmatrix} + \frac{1}{L-M}\begin{bmatrix} v_a \\ v_b \\ v_c \end{bmatrix}$$

The electromagnetic torque is given as $T_e = (e_a i_a + e_b i_b + e_c i_c)/\omega_r$, and the equation of motion is

$$\frac{d\omega_r}{dt} = (T_e - T_L - B\omega_r)/J \qquad (3)$$
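To make the model concrete, the following minimal Python sketch integrates equations (2)-(3) with a forward-Euler loop, using the parameter values quoted in Section 7 (R = 2.875 Ω, L − M = 8.5 mH, J = 0.8e-3 kg·m²). The trapezoidal back-EMF shape, the back-EMF constant ke, the friction coefficient B, the pole pairs P, the DC link voltage and the idealized commutation are all assumptions for illustration, not the authors' MATLAB model.

```python
# Hedged sketch: Euler integration of the PMBLDC state-space model above.
# R, L-M, J come from Section 7; ke, B, P, Vdc and the ideal commutation
# (phase voltage follows the back-EMF shape) are assumed values.
import math

R, L_M, J, B, P = 2.875, 8.5e-3, 0.8e-3, 1e-3, 4
ke, Vdc = 0.4, 300.0          # back-EMF constant [V.s/rad] and DC link [V] (assumed)

def trap(th):
    """Unit trapezoidal back-EMF shape over one electrical period."""
    th = th % (2 * math.pi)
    if th < 2 * math.pi / 3:   return 1.0
    if th < math.pi:           return 1.0 - 6 * (th - 2 * math.pi / 3) / math.pi
    if th < 5 * math.pi / 3:   return -1.0
    return -1.0 + 6 * (th - 5 * math.pi / 3) / math.pi

dt, wr, theta, TL = 1e-5, 0.0, 0.0, 0.0
i = [0.0, 0.0, 0.0]
for k in range(int(0.2 / dt)):
    if k * dt > 0.1: TL = 1.0                      # step load torque [N.m]
    shape = [trap(P * theta - m * 2 * math.pi / 3) for m in range(3)]
    e = [ke * wr * s for s in shape]               # trapezoidal back-EMFs
    v = [0.5 * Vdc * s for s in shape]             # idealized commutation (assumed)
    for ph in range(3):                            # di/dt = (v - R i - e)/(L - M)
        i[ph] += dt * (v[ph] - R * i[ph] - e[ph]) / L_M
    Te = ke * sum(s * x for s, x in zip(shape, i)) # = (ea ia + eb ib + ec ic)/wr
    wr += dt * (Te - TL - B * wr) / J              # equation (3)
    theta += dt * wr
print("speed after 0.2 s: %.0f rpm" % (wr * 60 / (2 * math.pi)))
```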

7
International Journal of Computational Intelligence and Information Security, Vol. 1 No. 7, September 2010

5. BLDC Motor Speed Control

The block diagram of the drive system is shown in Fig. 3. In servo applications, position feedback is used in the position feedback loop, and velocity feedback can be derived from the position data; this eliminates a separate velocity transducer for the speed control loop. A BLDC motor is driven by voltage strokes coupled to rotor position, which is measured using Hall sensors. By varying the voltage across the motor, we can control its speed. When PWM outputs are used to control the six switches of the three-phase bridge, the motor voltage can be varied by changing the duty cycle of the PWM signal.

Figure 3: Block Diagram of Drive System

6. PMBLDC Motor fed from a Voltage source inverter with PFC Full Bridge Converter
The schematic diagram of a three-level voltage source inverter fed PMBLDC motor with a PFC full bridge converter is shown in Fig. 4. This is a closed loop control circuit using three Hall sensors. MOSFETs are used as the switching devices. To control the speed of the motor, the output frequency of the inverter is varied. To maintain constant flux, the applied voltage is varied in linear proportion to the frequency. The MATLAB simulation is carried out and the results are presented.
For very slow, medium, fast and accurate speed response, quick recovery of the set speed is important while remaining insensitive to parameter variations. To achieve high performance, many conventional control schemes are employed; at present the conventional PI controller handles these control issues, although it is very sensitive to step changes of the command speed, parameter variations and load disturbances.
With high frequency switching, the PMBLDC motor rotates at a higher speed, but without a strong magnetic field at the stator the rotor fails to keep up with the switching frequency because the pull force is weak. The speed of a BLDC motor is indirectly determined by the magnitude of the applied voltage: increasing the voltage increases the winding current, which produces a stronger magnetic pull that aligns the rotor's magnetic field faster with the induced stator magnetic field. The rotational speed of this alignment is proportional to the voltage applied to the terminals. The torque pulsation is very high when the step size is reduced.
When PWM outputs are used to control the six switches of the three-phase bridge, the motor voltage can be varied easily by changing the duty cycle of the PWM signal. In this method the speed is controlled in a closed loop by measuring the actual speed of the motor, computing the error between the set speed and the actual speed, and using a proportional plus integral (PI) controller to amplify the speed error and dynamically adjust the PWM duty cycle.
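As a concrete illustration of this loop, the sketch below implements a discrete PI speed controller that converts the speed error into a PWM duty cycle. The gains, the sample period and the measure_speed_rpm()/set_pwm_duty() hooks are hypothetical, not taken from the paper.

```python
# Hedged sketch of the closed speed loop: PI controller -> PWM duty cycle.
KP, KI, TS = 0.002, 0.05, 1e-3     # gains and sample period (assumed values)

class PISpeedController:
    def __init__(self):
        self.integral = 0.0

    def update(self, set_rpm, actual_rpm):
        error = set_rpm - actual_rpm                        # speed error
        self.integral += KI * error * TS                    # integral action
        self.integral = min(max(self.integral, 0.0), 1.0)   # anti-windup clamp
        duty = KP * error + self.integral                   # P + I terms
        return min(max(duty, 0.0), 1.0)                     # duty cycle in [0, 1]

# usage: each sample period, read the Hall-derived speed and update the duty
# pi = PISpeedController()
# duty = pi.update(1800.0, measure_speed_rpm())   # hypothetical hook
# set_pwm_duty(duty)                              # hypothetical hook
```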

7. Simulation Results
The technical specifications of the drive system are as follows: C = 2200 µF, TON = 5.88 µs, TOFF = 5.88 µs, T = 11.76 µs; stator resistance is 2.875 Ω, stator inductance is 8.5e-3 H, and motor inertia is 0.8e-3 kg·m². With these designed circuit parameters, the MATLAB simulation is run and the results are presented here. The speed is set at 1800 rpm and a load torque disturbance is applied at time t = 0.6 s; the speed regulation obtained at this speed is shown in the simulation results. The waveforms of input voltage and current are shown in Fig. 5; from Fig. 5 it can be seen that the power factor is 0.98. The waveforms of the phase voltages and currents are shown in Figs. 6 and 7 respectively; the stator current waveforms in Fig. 7 are quasi-sinusoidal in shape and are displaced by 120°. The waveforms of the back EMF are shown in Fig. 8.

Figure 4: Closed Loop Model of PMBLDC Motor

Figure 5: Input voltage and current (PF=0.98)

Figure 6: Phase Voltage supplied to the stator windings

Figure 7: Three phase inverter stator currents


Figure 8: Back emf

Figure 9: Load Torque disturbance applied at t= 0.6 sec

Figure 10: Rotor speed in rpm

8. PMBLDC Motor fed from a Voltage source inverter with PFC Half Bridge Converter
The Simulink model of the PMBLDC motor with the PFC half bridge converter is shown in Fig. 11. A boost converter is used at the input to improve the power factor. AC input voltage and current waveforms are shown in Fig. 12. The waveforms of the back EMF are shown in Fig. 13. The step change in load torque is shown in Fig. 14.


Figure 11: Closed Loop Speed Control of PMBLDC Motor with PFC Half Bridge Converter

Figure 12: Input voltage and current

Figure 13: Back EMF

Figure 14: Load torque disturbance applied at t=0.6 sec


Figure 15: Rotor speed in rpm.


Figures 9 and 14 show the load torque disturbance applied at time t = 0.6 s for a set speed of 1800 rpm. From Figs. 10 and 15, it can be seen that the closed loop system brings the speed back to the set value. From Figures 5 and 12, it can be seen that the power factor is improved by using the half bridge PFC converter.

9. Conclusion
A closed loop controlled VSI fed PMBLDC motor with PFC full bridge and half bridge converters has been modeled and simulated. Feedback signals from the PMBLDC motor representing speed and position are utilized to derive the driving signals for the inverter switches. The simulated results are on par with the theoretical predictions. The power factor is corrected by the PFC converter, and the PFC converter fed PMBLDC drive is a viable alternative since it has improved power factor.

References
[1] Tay Siang Hui, K. P. Basu and V. Subbiah, "Permanent Magnet Brushless Motor Control Techniques", National Power and Energy Conference (PECon) 2003 Proceedings, Bangi, Malaysia.
[2] P. Thirusakthimurugan and P. Dananjayan, "A New Control Scheme for the Speed Control of PMBLDC Motor Drive", © 2006 IEEE.
[3] Nicola Bianchi, Silverio Bolognani, Ji-Hoon Jang and Seung-Ki Sul, "Comparison of PM Motor Structures and Sensorless Control Techniques for Zero-Speed Rotor Position Detection", IEEE Transactions on Power Electronics, Vol. 22, No. 6, Nov. 2006.
[4] Woo-Young Choi, Jung-Min Kwon, Eung-Ho Kim, Jong-Jae Lee and Bong-Hwan Kwon, "Bridgeless Boost Rectifier with Low Conduction Losses and Reduced Diode Reverse-Recovery Problems", IEEE Transactions on Industrial Electronics, Vol. 54, No. 2, pp. 769-780, April 2007.
[5] J. C. Salmon, "Circuit topologies for PWM boost rectifiers operated from 1-phase and 3-phase ac supplies and using either single or split dc rail voltage outputs", in Proc. IEEE Applied Power Electronics Conf., Mar. 1995, pp. 473-479.
[6] Laszlo Huber, Yungtaek Jang and Milan M. Jovanovic, "Performance Evaluation of Bridgeless PFC Boost Rectifiers", IEEE Transactions on Power Electronics, Vol. 23, No. 3, May 2008.
[7] B. Lu, R. Brown and M. Soldano, "Bridgeless PFC implementation using one cycle control technique", IEEE Applied Power Electronics (APEC) Conf. Proc., pp. 812-817, Mar. 2005.
[8] R. Krishnan, Electric Motor Drives: Modeling, Analysis, and Control, Prentice-Hall International Inc., New Jersey, 2001.
[9] Yong-Ho Yoon, Tae-Won Lee, Sang-Hun Park, Byoung-Kuk Lee and Chung-, "New Approach to Rotor Position Detection and Precision Speed Control of the BLDC Motor", © 2006 IEEE.
[10] Ling K. V., Wu Bingfang, He Minghua and Zhang Yu, "A Model Predictive Controller for Multirate Cascade Systems", Proc. of the American Control Conference, ACC 2004, USA, pp. 1575-1579, 2004.
[11] G. Madhusudhan Rao, B. V. Sanker Ram, B. Sampath Kumar and K. Vijay Kumar, "Speed Control of BLDC Motor using DSP", International Journal of Engineering Science and Technology, Vol. 2(3), 2010.
[12] Yingfa Wang, Changliang Xia, Zhiqiang Li and Peng Song, "Sensorless Control for BLDC Motor Using Support Vector Machine Based on PSO".
[13] Salih Baris Ozturk and Hamid A. Toliyat, "Sensorless Direct Torque and Indirect Flux Control of Brushless DC Motor with Non-Sinusoidal Back-EMF", © 2008 IEEE.
[14] Salih Baris Ozturk, William C. Alexander and Hamid A. Toliyat, "Direct Torque Control of Four-Switch Brushless DC Motor with Non-Sinusoidal Back-EMF", IEEE Transactions on Power Electronics, Vol. 25, No. 2, Feb. 2010.
[15] Kuang-Yao Cheng, "Novel Architecture of a Mixed-Mode Sensorless Control IC for BLDC Motors with Wide Speed Ranges", © 2009 IEEE.
[16] M. S. Srinivas and K. R. Rajagopal, "Fuzzy Logic Based Gain Scheduled PI Speed Controller for PMBLDC Motor", © 2009 IEEE.
[17] Ting-Yu Chang, Ching-Tsai Pan and Emily Fang, "A Novel High Performance Variable Speed PMBLDC Motor Drive System", © 2010 IEEE.
[18] Shinn-Ming Sue and Kun-Lin Wu, "A Voltage Controlled Brushless DC Motor over Extended Speed Range", © 2008 IEEE.

About the Authors

Umayal Chandrahasan obtained her ME degree from Anna University, Tamil Nadu, India, in 2005. She is presently doing her research in the area of BLDC motors, specifically PMBLDC motor drives. She has 9 years of teaching experience and 8 years of industrial experience.

Rama Reddy Sathi obtained his ME degree from Anna University, Tamil Nadu, India, in 1989, and pursued research in the area of resonant converters in 1995. He has 2 years of industrial experience and 18 years of teaching experience. He secured the A.M.I.E. institution gold medal for obtaining the highest marks, as well as the AIMO best project award. He is a life member of IEE, IETE, ISTE, SSI, and SPE. He has worked at Tata Consulting Engineers, Bangalore, and Anna University. He has authored textbooks on power electronics and electronic circuits, and has published 20 technical papers in national and international conference proceedings and journals in the area of power electronics and FACTS.


An Approach to Compress & Secure Image Communication


Ramveer Singh1 and Deo Brat Ojha2
1 Mr. Ramveer Singh, R. K. G. Institute of Technology, Ghaziabad, U.P. (India),
and Research Scholar, Singhania University, Jhunjhunu, Rajasthan, India.
e-mail: ramveersingh_rana@yahoo.co.in
2 Dr. Deo Brat Ojha, Professor, Department of Mathematics, R. K. G. Institute of Technology, Ghaziabad, U.P. (India),
e-mail: deobratojha@rediffmail.com

Abstract
Our competitive, insecure and busy day-to-day life requires more secure communication: the medical field (telemedicine) uses the transmission of images or videos at nearly full efficiency for saving human lives, secret agents and their governments need secrecy of communication, confidentiality must be maintained in military operations, and so on. In this approach, we introduce a new scheme to transmit an image over an infringeable communication environment. Our approach is a combination of cryptography and compression: cryptography provides secure transmission, whereas compression increases the capacity of the communication channel.

Keywords: Image, Compression, Encryption, Decryption, Secure Communication


1. Introduction
The amount of image data on the Internet has increased rapidly, and two different technological approaches have been developed to protect it. The first approach is based on content protection through encryption [1], [2]; in this approach, proper decryption of the data requires a key. The second approach bases the protection on digital watermarking or data hiding, aimed at secretly embedding a message into the data. In the current era the transmission of images over the Internet is very challenging, and encryption is the better way to transmit them. Using cryptography we secure the image, and we also make better use of the communication channel through a compression technique.
Cryptography is a branch of applied mathematics that aims to add security to ciphers for any kind of message. Cryptography algorithms use encryption keys, which are the elements that turn a general encryption algorithm into a specific method of encryption. Data integrity aims to verify the validity of the data contained in a given document [3].
In this article, we introduce a method of image transmission over a highly trafficked channel. Section 2 describes the main tools of our newly introduced approach: SEQUITUR and EHDES (Enhanced Data Encryption Standard). SEQUITUR is a dictionary based lossless compression technique and EHDES is a symmetric key encryption technique. Using SEQUITUR we obtain a compressed form of the image, after which we apply EHDES for the security of the image. The combination of both makes this approach very usable and secure for the medical domain. Section 3 shows the complete working and behaviour of the combination of SEQUITUR and EHDES.

2 Preliminaries

2.1 Enhanced Data Encryption Standard (EHDES)


In the Enhanced Data Encryption Standard (EHDES) [4, 5], we use block ciphering of data and a symmetric key. As in the traditional Data Encryption Standard (DES), we break our data into 64-bit blocks and use a symmetric key of 56 bits. EHDES has three phases: 1. key generation; 2. encryption of input data; 3. decryption of input cipher.

[Figure 1 (block diagram): on the encryption side, the data or message M, split into blocks M1, M2, M3, ..., Mn, is processed by EHDES under the 56-bit symmetric key k to produce the cipher text (encrypted message) C; the decryption side mirrors the same path under the same key.]

Figure 1: Encryption and Decryption process of EHDES.

2.1.1 Key Generation


In this phase of EHDES, we derive from the initial 56-bit key, using a Random Number Generator (RNG), a fresh key for every block of the message (M1, M2, M3, ..., Mn). The newly generated 56-bit keys (Knew1, Knew2, Knew3, ..., Knew n) obtained from the initial key K are used for encryption and decryption of each block of data. For each new key, we generate a random number and apply a function F to the generated random number (NRNG) and the initial key K.


[Figure 2 (block diagram): the random number NRNG produced by the RNG and the 56-bit initial key K are fed into the function F, which outputs the generated 56-bit keys Knew1, Knew2, Knew3, ..., Knew n.]

Figure 2: Process of new generated key (Knew i) of EHDES.
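A minimal sketch of this key schedule follows. The paper does not define F, so an SHA-256-based mix of NRNG and K, truncated to 56 bits, is assumed here; only the structure Knew,i = F(NRNG, K) comes from the text. In practice the sender and receiver must produce the same NRNG sequence (e.g. from a shared seed) so that the receiver can rebuild each Knew,i.

```python
# Hedged sketch of the Fig. 2 key schedule. F is unspecified in the paper;
# it is assumed here to be a SHA-256 mix truncated to 56 bits.
import hashlib, secrets

def f_mix(n_rng: int, k_initial: int) -> int:
    """Assumed F: hash (N_RNG, K) together and keep the low 56 bits."""
    data = n_rng.to_bytes(8, "big") + k_initial.to_bytes(7, "big")
    return int.from_bytes(hashlib.sha256(data).digest()[:7], "big")

K = secrets.randbits(56)                            # initial 56-bit symmetric key
n_rngs = [secrets.randbits(64) for _ in range(4)]   # shared RNG outputs (assumed)
block_keys = [f_mix(n, K) for n in n_rngs]          # K_new,1 .. K_new,4, one per block
```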

2.1.2. Encryption on Input Data.


As we know, the Data Encryption Standard (DES) is based on a block cipher scheme. The message is broken into n 64-bit blocks of plain text:
M = {M1, M2, M3, ..., Mn}
We then encrypt the message blocks {M1, M2, M3, ..., Mn}, each with its own newly generated key Knew1, Knew2, Knew3, ..., Knew n.

2.1.3. Decryption on Input Cipher


Decryption is the reverse process of encryption and uses the same key that was used in encryption. On the receiver side, the user generates the same new key Knew i for each block of cipher and recovers the plain text through the decryption process of the Data Encryption Standard.
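The per-block cipher loop can then be sketched as follows, assuming the standard DES primitive from PyCryptodome as the underlying block operation together with the f_mix key schedule sketched above; EHDES details beyond "one fresh 56-bit key per 64-bit block" are not reproduced here.

```python
# Hedged sketch of the EHDES encrypt/decrypt loop: one single-block DES
# operation per 64-bit message block, each under its own derived key.
from Crypto.Cipher import DES   # pip install pycryptodome

def to_des_key(k56: int) -> bytes:
    """Pack a 56-bit key into DES's 8-byte key format (parity bits unused)."""
    return (k56 << 8).to_bytes(8, "big")

def ehdes_encrypt(blocks, keys):
    return [DES.new(to_des_key(k), DES.MODE_ECB).encrypt(m)
            for m, k in zip(blocks, keys)]

def ehdes_decrypt(cipher_blocks, keys):
    return [DES.new(to_des_key(k), DES.MODE_ECB).decrypt(c)
            for c, k in zip(cipher_blocks, keys)]

# usage: blocks are 8-byte chunks of the compressed image
# cipher = ehdes_encrypt([b"8bytes!!"], block_keys[:1])
```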

2.2 Compression:
Previous well known data compression techniques include standard algorithms such as Huffman coding, an entropy encoding algorithm used for lossless data compression. The term refers to the use of a variable-length code table for encoding a source symbol (such as a character in a file), where the code table has been derived in a particular way based on the estimated probability of occurrence of each possible value of the source symbol; it was developed by David A. Huffman. There are various data compression algorithms, such as run-length encoding, the Burrows-Wheeler transform, dynamic Markov compression, and entropy encodings (Huffman coding, adaptive Huffman coding, Shannon-Fano coding, arithmetic coding, etc.), which have been used successfully for data compression internationally.
In addition, to avoid sending files of enormous size, a lossless compression scheme can be employed on the secret message to increase the amount of hidden secret data; such a scheme allows the software to exactly reconstruct the original message [6].
The transmission of numerical images often needs a large number of bits, and this number is even larger for medical images. If we want to transmit these images over a network, reducing the image size is important: the goal of compression is to decrease this initial weight. This reduction strongly depends on the compression method used, as well as on the intrinsic nature of the image. The problem is therefore the following:
1. Compress without loss, but with a low compression factor. If you want to transmit only one image, this is satisfactory; but in the medical area it is often sequences of images that the doctor awaits to make a diagnosis.
2. Compress with losses, at the risk of losing information. The question is then which relevant information to preserve and which can be neglected without altering the quality of the diagnosis or the analysis. The human visual system is one of the means of appreciation, although it is subjective and can vary from one individual to another; it nevertheless remains important for judging the possible causes of degradation and the quality of the compression [7].

2.2.1 The SEQUITUR Algorithm [8]


The SEQUITUR algorithm represents a finite sequence σ as a context free grammar whose language is the singleton set {σ}. It reads symbols one by one from the input sequence and restructures the rules of the grammar to maintain the following invariants:
(A) no pair of adjacent symbols appear more than once in the grammar, and


(B) every rule (except the rule defining the start symbol) is used more than once.
To understand the algorithm intuitively, we briefly describe how it works on the sequence 123123. As usual, we use capital letters to denote non-terminal symbols. After reading the first four symbols of the sequence 123123, the grammar consists of the single production rule S → 1, 2, 3, 1, where S is the start symbol. On reading the fifth symbol, it becomes S → 1, 2, 3, 1, 2. Since the adjacent symbols 1, 2 now appear twice in this rule (violating the first invariant), SEQUITUR introduces a non-terminal A to get

S → A, 3, A    A → 1, 2

Note that here the rule defining non-terminal A is used twice. Finally, on reading the last symbol of the sequence 123123, the above grammar becomes

S → A, 3, A, 3    A → 1, 2

This grammar needs to be restructured since the symbols A, 3 appear twice. SEQUITUR introduces another non-terminal to solve the problem. We get the rules

S → B, B
B → A, 3
A → 1, 2

However, now the rule defining non-terminal A is used only once, so this rule is eliminated to produce the final result:

S → B, B    B → 1, 2, 3

Note that the above grammar accepts only the sequence 123123.
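The sketch below expands the final grammar from the worked example to confirm that it reproduces exactly the input 123123. It shows only the decompression (expansion) step, not the incremental digram bookkeeping that full SEQUITUR performs while reading the input.

```python
# Hedged sketch: expanding the final SEQUITUR grammar of the worked example.
grammar = {"S": ["B", "B"], "B": ["1", "2", "3"]}

def expand(symbol, rules):
    """Recursively replace non-terminals by their right-hand sides."""
    if symbol not in rules:
        return [symbol]                 # terminal symbol
    out = []
    for s in rules[symbol]:
        out.extend(expand(s, rules))
    return out

assert "".join(expand("S", grammar)) == "123123"
```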

3. Our Scheme
A complete compression and encryption process includes the following phases:
Phase 1: Generating blocks: In RGB space the image is split up into red, blue and green images. Each image is then divided into blocks of pixels, and accordingly the image contains the corresponding number of blocks.

Phase 2: DCT: All values are level shifted by subtracting 128 from each value. The Forward Discrete Cosine Transform of the block is then computed. The mathematical formula for calculating the DCT is:

$$F(u,v) = \frac{1}{4}\, C(u)\, C(v) \sum_{x=0}^{7} \sum_{y=0}^{7} f(x,y)\, \cos\frac{(2x+1)u\pi}{16}\, \cos\frac{(2y+1)v\pi}{16}$$

where $C(k) = 1/\sqrt{2}$ for $k = 0$ and $C(k) = 1$ otherwise, and $f(x,y)$ is the level-shifted pixel value of the 8×8 block.

Phase 3: Quantization: Quantization is the step where most of the compression takes place. The DCT itself does not really compress the image, as it is almost lossless. Quantization exploits the fact that the high frequency components are less important than the low frequency components. The quantization output is

$$F_Q(u,v) = \mathrm{round}\!\left(\frac{F(u,v)}{Q(u,v)}\right)$$

The quantization matrix Q could be anything, but the JPEG committee suggests some matrices which work well with image compression.
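Phases 2 and 3 can be sketched for one 8×8 block as follows, assuming the standard JPEG forward DCT written directly from the formula above; the luminance quantization table used is the familiar JPEG Annex K example, chosen only because the paper says the committee "suggests some matrices".

```python
# Hedged sketch of Phases 2-3: level shift, 8x8 forward DCT, quantization.
import numpy as np

Q = np.array([[16,11,10,16, 24, 40, 51, 61],   # JPEG Annex K luminance table
              [12,12,14,19, 26, 58, 60, 55],
              [14,13,16,24, 40, 57, 69, 56],
              [14,17,22,29, 51, 87, 80, 62],
              [18,22,37,56, 68,109,103, 77],
              [24,35,55,64, 81,104,113, 92],
              [49,64,78,87,103,121,120,101],
              [72,92,95,98,112,100,103, 99]])

def dct2(block):
    """F(u,v) = (1/4) C(u)C(v) sum f(x,y) cos((2x+1)u pi/16) cos((2y+1)v pi/16)."""
    f = block.astype(float) - 128.0              # level shift (Phase 2)
    n = np.arange(8)
    C = np.where(n == 0, 1 / np.sqrt(2), 1.0)
    cos = np.cos((2 * n[:, None] + 1) * n[None, :] * np.pi / 16)
    return 0.25 * np.outer(C, C) * (cos.T @ f @ cos)

block = np.random.randint(0, 256, (8, 8))
quantized = np.round(dct2(block) / Q).astype(int)   # Phase 3: most entries -> 0
```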


Phase 4: Compression using SEQUITUR:

After quantization, the scheme uses a filter to pass only the string of non-zero coefficients. By the end of this process we have, for each block, a list of non-zero tokens preceded by their count.
DCT based image compression using blocks of size 8×8 is considered. After the quantization of the DCT coefficients of the image blocks, SEQUITUR compression is applied to the quantized DCT coefficients.
The compression achieved in this approach is evaluated by the overall compression ratio (CR), which is defined as:
C.R. = (size of the original image) / (size of the compressed image)
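As a small illustration of this metric, the following sketch computes C.R. on byte counts; the sizes are invented for the example.

```python
# Hedged sketch of the C.R. metric on byte counts (sizes are illustrative).
def compression_ratio(original_bytes: int, compressed_bytes: int) -> float:
    return original_bytes / compressed_bytes

print(compression_ratio(786432, 98304))   # e.g. 768 KB image -> 96 KB: C.R. = 8.0
```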

Phase 5: Encryption using EHDES:


In the encryption phase, EHDES takes the output of the compression phase as a message block and, with a newly generated key, applies the encryption process as in traditional DES. In this process, each new key is also expanded into 16 different round keys for the rounds of EHDES, using the shifting property of traditional DES; for every block of the message M, the new key yields a round key for every round of DES used in the encryption process.
The decryption process is the inverse of the encryption process; in decryption we use the same key that was used in encryption.

The scheme maps plain-text blocks {mi} to cipher-text blocks {Ci}, where 1 ≤ i ≤ n.

[Figure 3 (block diagram): the source image is divided into n blocks (Phase 1), passed through the DCT transform (Phase 2) and quantization (Phase 3), compressed with SEQUITUR (Phase 4) and encrypted with EHDES (Phase 5), yielding a compressed and encrypted object as output.]

Figure 3: Block Diagram of Proposed Scheme.


4. Security Analysis
We verified that the compression ratio of SEQUITUR outperforms gzip as well as compress. On the other hand, compression and decompression are very slow compared to gzip and compress, because SEQUITUR uses arithmetic coding, which is time consuming, and the program might not be fully optimized. From the viewpoint of compressed pattern matching, compression time is not a serious matter, while decompression time is critical. In the original SEQUITUR program, the decompression routine borrows data structures, such as a doubly linked list, that are unnecessary for decompression alone; we therefore simply rewrote the decompression routine using a standard array.
Cryptographic scheme strength is often described by the bit length of the encryption key: the more bits in the key, the harder it is to decrypt the data simply by trying all possible keys. DES uses 56 bits, and cracking a 56-bit algorithm with a single key search might take around a week on a very powerful computer.
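A back-of-envelope check of that claim follows; the keys-per-second rate is an assumed figure for illustration, not from the paper.

```python
# Hedged sketch of the 56-bit exhaustive key search estimate.
KEYSPACE = 2 ** 56                 # possible DES keys
RATE = 1.2e11                      # keys tried per second (assumed hardware)
days = KEYSPACE / RATE / 86400     # worst-case search time in days
print(f"{days:.1f} days")          # about 7 days at this assumed rate
```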


Now, at time t the generated key is Knew(t) = F(NRNG(t), K); at time t + 1 the generated key is Knew(t+1) = F(NRNG(t+1), K); and at time t + n the generated key is Knew(t+n) = F(NRNG(t+n), K).

These keys can be equal if and only if the random numbers NRNG generated at times t, t + 1, ..., t + n are the same.

5. Conclusion
Our scheme provides the required high level of security and increases the capacity of the communication channel to a satisfactory extent, for uses such as saving human lives in telemedicine, military communication, communication between government secret agents, and personal secure communication.

References
[1] G. Lo-Varco, W. Puech, and M. Dumas, "DCT-based watermarking method using error correction codes", in ICAPR'03, International Conference on Advances in Pattern Recognition, Calcutta, India, pp. 347-350, 2003.
[2] R. Norcen, M. Podesser, A. Pommer, H. P. Schmidt, and A. Uhl, "Confidential storage and transmission of medical image data", Computers in Biology and Medicine, 33:277-292, 2003.
[3] Diego F. de Carvalho, Rafael Chies, Andre P. Freire, Luciana A. F. Martimiano and Rudinei Goularte, "Video Steganography for Confidential Documents: Integrity, Privacy and Version Control", University of Sao Paulo - ICMC, Sao Carlos, SP, Brazil; State University of Maringa, Computing Department, Maringa, PR, Brazil.
[4] Ramveer Singh, Awakash Mishra and D. B. Ojha, "An Instinctive Approach for Secure Communication - Enhanced Data Encryption Standard (EHDES)", International Journal of Computer Science and Information Technology, Vol. 1 (4), 2010, 264-267.
[5] D. B. Ojha, Ramveer Singh, Ajay Sharma, Awakash Mishra and Swati Garg, "An Innovative Approach to Enhance the Security of Data Encryption Scheme", International Journal of Computer Theory and Engineering, Vol. 2, No. 3, June 2010, 1793-8201.
[6] Nameer N. EL-Emam, "Hiding a Large Amount of Data with High Security Using Steganography Algorithm", Applied Computer Science Department, Faculty of Information Technology, Philadelphia University, Jordan.
[7] Borie J., Puech W., and Dumas M., "Crypto-Compression System for Secure Transfer of Medical Images", 2nd International Conference on Advances in Medical Signal and Information Processing (MEDSIP 2004), September 2004.
[8] N. Walkinshaw, S. Afshan, P. McMinn, "Using Compression Algorithms to Support the Comprehension of Program Traces", Proceedings of the International Workshop on Dynamic Analysis (WODA 2010), Trento, Italy, July 2010.

Ramveer Singh received his Bachelor of Engineering degree from Dr. B. R. Ambedkar University, Agra (U.P.), India, in 2003 and his Master of Technology degree from V.M.R.F. Deemed University, Salem (T.N.), India, in 2007. He is pursuing a Ph.D. at Singhania University, Jhunjhunu, Rajasthan, India. His major field of study is cryptography and network security. He has more than eight years of experience in teaching and research as an Associate Professor, and works at Raj Kumar Goel Institute of Technology, Ghaziabad (U.P.), India. His current research area is cryptography and network security. Mr. Singh is a life-time member of the Computer Society of India and the Computer Science Teachers Association.

Dr. Deo Brat Ojha received his Ph.D. from the Department of Applied Mathematics, Institute of Technology, Banaras Hindu University, Varanasi (U.P.), India, in 2004. His research fields are optimization techniques, functional analysis and cryptography. He has more than six years of teaching and more than eight years of research experience. He works as a Professor at Raj Kumar Goel Institute of Technology, Ghaziabad (U.P.), India, and is the author/co-author of more than 50 publications in international/national journals and conferences.


Cloud Computing Approaches for Educational Institutions


N. Mallikharjuna Rao1, V. Satyendra Kumar2, Sudhakar3, P. Seetharam4
1. Associate Professor, Annamacharya P.G. College of Computer Studies, Rajampet,
e-mail: drmallik2009@gmail.com
2. Assistant Professor, Annamacharya Institute of Technology and Sciences, Rajampet,
e-mail: sati2all@gmail.com
3. Assistant Professor, Annamacharya P.G. College of Computer Studies, Rajampet,
e-mail: dsudhakar2all@gmail.com
4. Systems Engineer, Annamacharya Institute of Technology and Sciences, Rajampet,
e-mail: seetharam.p@gmail.com

Abstract
Cloud computing is a deployment model leveraged by IT to reduce infrastructure costs and scalability concerns. Cloud computing is not about the application itself; it is about how the application is deployed and how it is delivered. Cloudware is an extension of cloud computing, but it alone does not enable businesses to leverage cloud computing. Cloud computing is growing rapidly and attracting popularity: companies such as Red Hat, Microsoft, Amazon, Google, and IBM are increasingly funding cloud computing infrastructure and research, making it important for students to gain the necessary skills to work with cloud-based resources. Cloud computing refers both to applications delivered as services over the Internet and to the hardware and systems software in the datacenters that provide those services. The services themselves have long been referred to as Software as a Service (SaaS). When a cloud is made available in a pay-as-you-go manner to the general public, we call it a public cloud; the service being sold is utility computing. In this paper, we propose the use of cloud computing in educational institutions for improving the quality of education. Cloud computing is a simple idea, but it can have a huge impact on business.
Keywords: Cloud Computing, IaaS, SaaS, HaaS


I. Introduction
Cloud computing has many antecedents and equally many attempted definitions. The players in the large world of clouds are Software as a Service providers, outsourcing and hosting providers, network and IT infrastructure providers and, above all, the companies whose names are closely linked with the Internet's commercial boom. All these services in combination outline the complete package known as cloud computing, depending on the source and its focus. What long ago established itself in the private environment of the Internet is now noticeably coming to the attention of businesses too. Not only developers and startups but also large companies with international activities recognize that there is more to cloud computing than marketing hype. Cloud computing offers the opportunity to access IT resources and services with appreciable convenience and speed. Behind this, primarily, is a solution that provides users with services that can be drawn upon on demand and invoiced as and when used. Suppliers of cloud services, in turn, benefit as their IT resources are used more fully and eventually achieve additional economies of scale. Additionally, Intel is already taking advantage of external cloud computing technologies, and IBM has many opportunistic Software as a Service (SaaS) implementations. Our preliminary experiences with Infrastructure as a Service (IaaS) suggest that it may be suitable for rapid development and some batch applications.
Many applications are not suitable for hosting in external clouds at present. Good candidates may be applications that have low security exposure and are not mission-critical or competitive differentiators for the corporation. A strategy of growing the cloud from the inside out delivers many of the benefits of cloud computing and positions us to utilize external clouds over time. IBM expects to selectively migrate services to external clouds as supplier offerings mature, enterprise adoption barriers are overcome, and opportunities arise for improved flexibility and agility as well as lower costs.
In this paper, we propose the use of cloud computing in educational institutions, especially in the engineering and management fields, for those that are not able to invest large amounts of money in software for improving student ability and quality with advanced software, to meet the requirements of the business, industry and Information Technology sectors. Section II discusses the background of cloud computing, Section III presents several approaches to cloud computing techniques, and we conclude with the advantages of cloud computing technology.
II Background
Cloud computing is about how an application or service is deployed and delivered. It is really about the behavior of the entire infrastructure, i.e. how the cloud delivers an application:
Dynamism: the ability of the application delivery infrastructure to expand and contract automatically based on capacity needs. Note that this does not require virtualization technology, though many providers are using virtualization to build this capability; there are several approaches for implementing dynamism in an architecture.
Abstraction: you should not need to care about the underlying infrastructure when developing an application for deployment in the cloud. If you have to care about the operating system or any piece of the infrastructure, it is not abstracted enough to be cloud computing.
Resource sharing: the compute and network resources of the cloud infrastructure must be sharable among applications. This ties back to dynamism and the ability to expand and contract as needed. If an application's method of scaling is simply to add more servers on which it is deployed, rather than being able to consume resources on other servers as needed, the infrastructure is not capable of resource sharing.
Provides a platform: cloud computing is essentially a deployment model; it provides a platform on which you can develop and deploy an application while meeting the other criteria of cloud computing.
2.1 Compute resources
 CPU time
 Cores & clock cycles
 Floating point processing vs. integer processing
 RAM (MBytes used)
 Access speed / latency time
 Data storage (GBytes used)
 I/O throughput (MBytes per second)
 Transactions (I/O operations per second)
 Resilience (e.g. RAID level)
 Bandwidth (GBytes transferred)
2.2 Latency & resilience
According to the IEEE Computer Society, cloud computing is: "A paradigm in which information is permanently stored in servers on the Internet and cached temporarily on clients that include desktops, entertainment centers, table computers, notebooks, wall computers, handhelds, etc."
Cloud computing means an open market for computing resources: utility computing applied to multiple grids. A compute cloud is a grid spanning multiple administrative domains, with applications able to move between domains in response to cost and SLA requirements. Cloud is about scale and the computing resource market.
The open-source community plays a special role for cloud computing: modern clouds mostly consist of already existing, well-known and often open-source components, and for every aspect of a cloud environment a different set of utilities is used, e.g. Puppet for automated configuration management. This paper explores whether cloud computing services are suitable for high-performance computing (HPC) workloads. In contrast, web service workloads, which often have little intra-cluster communication, are the primary users of current cloud computing services. However, cloud nodes are typically configured to run user-provided software, so cloud computing nodes can just as easily run scientific applications. The ability to quickly create and scale up a custom compute cluster is a boon to individual scientists whose computing needs can be sporadic. Cloud computing services can also be used to extend existing clusters for larger problem sizes. Although cloud providers currently place small bounds on dynamically allocated resources, trends point toward increasing bounds on these resources over time.
Cloud computing is often associated with a couple of acronyms:
• SaaS – Software as a Service; and
• HaaS – Hardware as a Service.
Successful SaaS examples include Salesforce.com and Google's Google Reader and Google Docs, which are useful for engineering and management students for sharing information to develop useful projects. Successful HaaS examples include Amazon's Elastic Compute Cloud, which provides compute capacity in the cloud.
III Several Approaches for Cloud Computing
The value of the cloud computing industry is expected to reach $100 billion in five years, so it is easy to understand why big IT vendors like Microsoft, Google, and Amazon are rapidly ramping up cloud computing platforms and services.
3.1 Tribulations with the Predictable Solution
We need to invest in infrastructure to implement the mechanisms necessary for continuous integration. We also need to set up machines for carrying out tasks like analyzing the source code, running the tests, etc. Naturally, this leads to up-front capital expenditure as well as operational expenditure. As modern software systems grow sizable, tasks like analyzing the source code and running the large test base imply lengthy test execution cycles. Under such circumstances, it becomes difficult to ensure that feedback is rapid; this gets in the way of the required agility and also hampers time to market. The problem can be summed up in one line: given that doing this takes time, money and resources, how do we ensure that we create value instead of incurring costs? Cloud computing is an innovation we can apply to bring economy to our solution in terms of time, resources and, of course, money.
3.2 Infrastructure as a Service.
3.2 Infrastructure as a Service.
Infrastructure providers (IPs) manage a large set of computing resources, such as storage and processing capacity. Through virtualization, they are able to split, assign and dynamically resize these resources to build ad-hoc systems as demanded by customers, the service providers (SPs), who deploy the software stacks that run their services. This is the Infrastructure as a Service (IaaS) scenario.

22
International Journal of Computational Intelligence and Information Security, Vol. 1 No. 7, September 2010

3.3 Platform as a Service.


Cloud systems can offer an additional abstraction level: instead of supplying a virtualized infrastructure, they can provide the software platform on which systems run. The sizing of the hardware resources demanded by the execution of the services is handled transparently. This is denoted Platform as a Service (PaaS); a well-known example is Google App Engine.
3.4 Software as a Service.
Finally, there are services of potential interest to a wide variety of users hosted in cloud systems, as an alternative to running applications locally. An example is online alternatives to typical office applications such as word processors. This scenario is called Software as a Service (SaaS).
IV Physical Components of Cloud Computing
Cloud computing is a paradigm shift in the way scalable applications are architected and delivered. For decades, enterprises have spent time and resources building infrastructure that could provide them a competitive advantage. In most cases this approach resulted in:
 Large tracts of unused computing capacity
 Dedicated resources for server maintenance
 Risk mitigation & energy utilization
 High cost to build, acquire, own & maintain (total cost of ownership)
With cloud computing, excess computing capacity can be put to use and profitably sold to consumers. This transformation of computing and IT infrastructure into a utility available to all is the basis of cloud computing. It forces competition based on innovation rather than computing resources. The different colored clouds present in today's computing space can be classified into the following components:
4.1 Infrastructure
Cloud infrastructure is the concept of providing "hardware as a service", i.e. shared or reusable hardware for a specific time of service. Examples include virtualization, grid computing, and paravirtualization. This service helps reduce maintenance and usability costs, considering the need for infrastructure management and upgrades.
4.2 Storage
Cloud storage is the concept of separating data from processing and storing it in a remote place. Cloud storage also includes database services; examples are Google's BigTable, Amazon's SimpleDB, etc. A cloud platform, in turn, serves software needs: it could be a single service platform or a solution stack. Examples include Web application frameworks and Web hosting, as shown in Figure 1.
4.3 Application
A cloud application is an offering of software architecture that eliminates the need to install, run and maintain an application on the user's desktop/device, and thereby eliminates the cost and resources required to maintain and/or support applications.


Figure 1: Components of Cloud Computing

4.4 Services
A cloud service is an independent piece of software which can be used in conjunction with other services to achieve interoperable machine-to-machine interaction over the network. Examples include Amazon's Simple Queue Service, Google Maps, and Amazon's Flexible Payments Service.
4.5 Client
A cloud client is requester software or a hardware device which utilizes cloud computing services over the network. The client device could be a Web browser, PC, laptop, mobile device, etc.
4.6 Characteristics of a Cloud Service
A cloud service is a standardized IT-based capability, comprising compute, network or software-based capabilities. It is a standard offering defined by the service provider, with little or no flexibility for customization outside the offering.
4.6.1 Accessible via Internet protocols from any computer
Access uses modern Internet-type protocols over IP, such as HTTP, Representational State Transfer (REST), or the Simple Object Access Protocol (SOAP), which are part of any modern operating system. The service is always available and scales automatically to adjust to demand.
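As an illustration of access over standard Internet protocols, the sketch below issues a plain HTTP GET using only the Python standard library; the endpoint URL and bearer token are hypothetical placeholders, not any real provider's API.

```python
# Hedged sketch: reaching a cloud service over plain HTTP from any computer.
import json
import urllib.request

req = urllib.request.Request(
    "https://cloud.example.com/v1/status",          # hypothetical endpoint
    headers={"Authorization": "Bearer <token>"})    # hypothetical credential
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))                          # JSON reply (assumed format)
```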
4.6.2 Resilient and highly available
The service provider offers massive capacity, such that any given customer can get as much capacity as they need at a given moment, and give it back when it is no longer needed.
4.6.3 Pay-per-use
The service is free or pay-per-use, usually without long-term contracts, setup charges, or exit fees, and is paid for in one of three ways (see the sketch below):
 Advertising, usually for consumers
 Subscription, billed by availability per unit of time, such as a month
 Transaction, billed for actual usage, such as minutes of compute time, gigabytes of network bandwidth, or gigabytes of storage.
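A minimal sketch of transaction-style billing follows: cost accrues from metered usage alone, with no setup or exit fees. All unit rates are invented for illustration and do not reflect any provider's actual price list.

```python
# Hedged sketch: metered, transaction-style pay-per-use billing.
RATES = {"compute_minutes": 0.002, "bandwidth_gb": 0.10, "storage_gb": 0.05}

def monthly_bill(usage: dict) -> float:
    """Sum each metered quantity times its (invented) unit rate."""
    return sum(RATES[item] * qty for item, qty in usage.items())

print(monthly_bill({"compute_minutes": 3000, "bandwidth_gb": 40, "storage_gb": 25}))
# 0.002*3000 + 0.10*40 + 0.05*25 = 11.25
```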
4.6.4 Web or programmatic control interfaces
Cloud-oriented Web sites with human interfaces host the customer's data, provide interactions with others, and offer a rich Internet application interface, such as Facebook or Microsoft Virtual Earth 3D.


V Advantages and Benefits of Cloud Computing

With the different cloud enabler technologies such as utility computing, grid computing, RTI and web infrastructure maturing, the different services will become cloud enabled.
 Infrastructure service providers are taking advantage of the paradigm and offering cloud services. Cloud computing is considered an extension of SOA and SaaS.
 Information services, entertainment-oriented services such as video on demand, simple business services such as customer authentication or identity management, and contextual services such as location or mapping services are well positioned to become cloud-delivered.
 Other services, such as corporate processes (for example, billing, deduction management and mortgage calculation) and transactional services (for example, fiscal transactions), will take longer to reach the cloud and the mainstream.
5.1 Motivating Forces for Change
There are many forces at play creating a groundswell for the change that is enabling the growth of Cloud Computing. Some are technology advancements that have been evolving over the past decade, while others are market and economic forces that are more recent and can be more potent in specific vertical markets. The technology developments by themselves are not significant, but the culmination of all of them has profound effects across industries. The following list describes some of the reasons for the change and also provides insights on how you can adjust to and thrive in this dynamic environment.
1. Telecommunications – With the .COM boom of the late 90s, fiber optics and high-bandwidth data infrastructure were put in place for global connections, giving access to high-speed voice and data connectivity. The .COM bust at the start of the millennium forced many of the companies that established this infrastructure out of business, putting the fiber optics and equipment on the market wholesale or at a discount. This created cost-effective opportunities for remote computing in many environments and in particular for the development of Cloud Computing.
2. Open Source – The decentralized approach to software development challenged the institutionalized form of
software development. It allowed for individuals to contribute program code updates to form sophisticated operating
systems such as Linux that runs on most web servers and many computer electronic devices. Open source software
also played a significant role in web server technologies that form the core components of Cloud Computing. In
addition to the Linux operating system that was first installed on commodity hardware, apache web servers provided
web services. The server side software on these web servers along with middle ware including client side XML
based components are all open source, forming the foundation for many Cloud Computing applications. As per the AICTE norms and standards, all technical institutes are required to purchase costly software such as MATLAB, Prowess, SPSS and some SAP applications. Through the open source facilities of cloud computing, educational institutions can easily rent and use whatever software and hardware they need.
3. Social Network – What was once viewed as a social experiment that college kids did with Facebook, or a group of eccentric academics tinkering with Wikipedia, has matured into a powerful business tool. The same principles that drove social networks to connect, share and learn have become a powerful tool for collaboration, marketing and other business applications. Organizations develop new features for their products, get ratings, and connect to their customers with greater efficiency compared to traditional print and broadcast media. The success of Cloud Computing leverages the implementation of social networks. Cloud Computing shares some of the same core technologies and therefore works symbiotically with social networking.
4. Cloud Computing Services – As the technologies and processes mature and large companies such as Amazon,
Google and IBM operate large operations using Cloud Computing, they are starting to put together offerings to
external organizations and individuals. These pioneering organizations have worked out many of the challenges of
implementing Cloud Computing from the IT hardware infrastructure to the software and development tools needed.
More and more services are being offered to businesses and individuals to implement new Cloud Computing services based on the existing infrastructure, so you do not have to re-invent the wheel. This can be seen at all levels of the traditional software market, from consumer-based individual applications to large enterprise solutions.
5. Economic Downturn – The economic downturn has put a strain on IT budgets and kept capital from being committed to large systems. Companies still need to deliver services to their customers, and IT departments demand solutions for their customers. However, the credit crunch has halted many projects that require a huge setup cost. Rather than purchasing or committing to large systems, businesses can only spend small amounts on critical tasks to operate their business.
6. Outsourcing IT – In this economic environment, companies turn to outsourcing as a way of cutting costs to maintain competitiveness. Various parts of IT are outsourced. Support call centers can provide cost savings when supporting customers, and the implementation of software systems is another layer of services which can be outsourced to cut costs within an organization. Educational institutions also use IT outsourcing services to meet industry norms and standards. This frees the resources of an organization to focus on its core business. Cloud Computing will be the enabler of, and the fuel for, this type of software as a service (SaaS).
5.2 Cloud Attributes and Taxonomy
Because this is an emerging and somewhat confusing area, we have created definitions that provide us with a
common basis for discussion and developing our strategy. We define cloud computing as a computing paradigm
where services and data reside in shared resources in scalable data centers, and those services and data are accessible
by any authenticated device over the Internet.
We have also identified some key attributes that distinguish cloud computing from conventional computing. Cloud
computing offerings are:
• Abstracted and offered as a service.
• Built on a massively scalable infrastructure.
• Easily purchased and billed by consumption.
• Shared and multi-tenant.
• Based on dynamic, elastic, flexibly configurable resources.
• Accessible over the Internet by any device.
Today, we have identified three main categories of external service that fall within our broad cloud computing definition. Software as a service (SaaS): software deployed as a hosted service and accessed over the Internet. Platform as a service (PaaS): platforms that can be used to deploy applications provided by customers or partners of the PaaS provider. Infrastructure as a service (IaaS): computing infrastructure, such as servers, storage, and network, delivered as a cloud service, typically through virtualization. It is also possible to build an internal IT environment with cloud computing characteristics. We call this an internal cloud, to differentiate it from the external clouds provided by suppliers.
5.3 Software as a Service Benefits
This outsourcing approach has enabled us to take advantage of suppliers’ expertise. The amount of data regularly
transmitted between Intel and SaaS providers has been a huge challenge, causing difficulties during initial
deployments and upgrades. Testing solutions has also provided challenges, demonstrating the need for full
documentation and up-front clarification with suppliers about roles, responsibilities, and process. Overall, SaaS has
been successful in our environment and met Intel’s expectations for the intended use of the services.

• Save money by not having to purchase servers or other software to support use
• Focus Budgets on competitive advantage rather than infrastructure
• Monthly obligation rather than up front capital cost
• Reduced need to predict scale of demand and infrastructure investment up front as available capacity
matches demand
• Multi-Tenant efficiency
• Flexibility and scalability
Software available includes:
• Platforms: Windows, Linux, Solaris
• Oracle, Sybase, Postgres
• Core Java, JSP and JDBC
• VB, VB.NET
• JavaScript, VBScript, HTML, XML, XSLT
• Visual Studio .NET, Visio
• ANT, WinCVS
• Rational Rose, Enterprise Architect
5.4 Infrastructure as a Service
Intel uses IaaS for certain niche applications. For example, some of the content on Intel’s Web site is hosted by a
cloud service provider. This allows us to take advantage of the supplier’s worldwide infrastructure rather than facing
the expense and difficulty of building similar infrastructure ourselves.
We also gained experience with IaaS when we built a globally distributed Web-monitoring application. Intel needed a service that would allow us to look at visitors’ experiences accessing the Intel Web site from different regions of the globe.

Figure 2: Infrastructure as a Service Components

We were initially concerned about the possibility of highly variable and unreliable performance. We found that the cloud service was highly available and provided all types of hardware and infrastructure facilities to end users. We were also pleased to find that the time on the cloud computing platform was kept reasonably accurate.
Our experience suggests that once security and manageability concerns are addressed, current commercial IaaS
implementations may be good for rapid prototyping and compute-intensive batch jobs. After IaaS services prove
themselves for these applications, they could be considered for more demanding applications with stringent
response-time requirements.
“Cloud Computing is more an evolution than a revolution.”
Conclusion
Cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. Users need not have knowledge of, expertise in, or control over the technology infrastructure in the “cloud” that supports them. Ironically, the more complex software becomes, the simpler the user machines need to be to run it, because the goal is to simplify usage and management so that the user can access sophisticated software from any computer with Internet access. The steps needed to implement a successful solution in this cloud computing environment require changes in how software is implemented, both on the client computer and on the server. The foundation for changes on the client machine has already taken shape due to e-commerce and social technologies. The challenge remains for
complex solutions such as SaaS to adjust and adapt on the server side to function as a service within cloud computing. This paper supports the use of cloud computing SaaS and IaaS in educational institutions for improving educational quality.
Preprocessing of Digital Mammogram for Image Analysis


Prof. Samir Kumar Bandyopadhyay and Indra Kanta Maitra
Dept. of Computer Sc. & Engg, University of Calcutta
92 A.P.C. Road, Kolkata – 700009, India
Email: skb1@vsnl.com, ikm.1975@yahoo.com

Abstract
Mammography is at present one of the available methods for early detection of abnormalities related to human breast cancer. Raw digital mammograms are medical images that are difficult to interpret, thus a preparation
process is needed in order to improve the image quality and make the segmentation results more accurate for further
research. Three distinct steps have been proposed for preprocessing the digital mammogram to make it ready for automatic detection of abnormalities in the breast. The first step involves contrast enhancement. The Contrast Limited
Adaptive Histogram Equalization (CLAHE) technique is used in the process with 8X8 tiles, Clip Limit of 0.01,
histogram bins of 64 and distribution is Rayleigh. The second step identifies the region of interest (ROI) in medio-
lateral oblique (MLO) view of mammogram to determine the pectoral muscle. The third or final step is to suppress
the pectoral muscle using the proposed modified seeded region growing (SRG) algorithm. For pectoral muscle suppression, we obtained 84% near-accurate results, including accurate results, on 50 selected mammograms from the MIAS database of different sizes, shapes and types. The said techniques facilitate further image analysis without destroying the identification features required to detect abnormalities in the digital mammogram. To
summarize, the results obtained by the method show that it is a robust approach but it can be improved in terms of
accuracy.

Keywords: Mammogram, Contrast Limited Adaptive Histogram Equalization (CLAHE), Region of Interest (ROI),
seeded region growing (SRG), Noise, Edge-Shadowing Effect and Pectoral Muscle
1. Introduction
Cancer is a group of diseases that cause cells in the body to change and grow out of control. Most types of cancer cells eventually form a lump or mass called a tumor, and are named after the part of the body where the tumor originates. Breast cancer begins in breast tissue, which is made up of glands for milk production, called
lobules, and the ducts that connect lobules to the nipple. The remainder of the breast is made up of fatty, connective,
and lymphatic tissue [1].
Breast cancer is a leading cause of cancer deaths among women. For women in the US and other developed countries, it is the most frequently diagnosed cancer. About 2100 new cases of breast cancer and 800 deaths are
registered each year in Norway. In India, a death rate of one in eight women has been reported due to breast cancer
[2].
Efficient detection is the most effective way to reduce mortality, and currently a screening programme based on mammography is considered one of the best and most popular methods for detection of breast cancer. Mammography is a low-dose x-ray procedure that allows visualization of the internal structure of the breast. Mammography is highly accurate, but like most medical tests, it is not perfect. On average, mammography will detect about 80%-90% of the breast cancers in women without symptoms. Testing is somewhat more accurate in postmenopausal than in premenopausal women [3]. A small percentage of breast cancers are not identified and may be missed, even though mammography uses x-ray machines designed especially to image the breasts. An increasing number of countries have started mass screening programmes that have resulted in a large increase in the number of mammograms requiring interpretation [4]. In the interpretation process radiologists carefully search each image for any visual sign of abnormality. However, abnormalities are often embedded in and camouflaged by varying densities of breast tissue structures. Estimates indicate that radiologists miss between 10 and 30 per cent of breast cancers during routine screening [4,5]. In order to improve the accuracy of interpretation, a variety of screening techniques
have been developed.
Recent studies have shown that interpretation of mammograms by radiologists gives high rates of false positives; indeed, the images from different patients have different dynamics of intensity and present weak contrast. Moreover, the size of the significant details can be very small. Several research works have tried to develop computer-aided diagnosis tools. They could help the radiologists in the interpretation of the mammograms and could be useful for an accurate diagnosis [6,7,8].
Digital mammography is a technique for recording x-ray images in computer code instead of on x-ray film, as
with conventional mammography. The first digital mammography [9] system received U.S. Food and Drug
Administration (FDA) approval in 2000. Digital mammography may have some advantages over conventional
mammography. The images can be stored and retrieved electronically. Despite these benefits, studies have not yet
shown that digital mammography is more effective in finding cancer than conventional mammography [10].
Imaging techniques play an important role in digital mammography, especially for abnormal areas that cannot be felt but can be seen on a conventional mammogram [11]. Before any image-processing algorithm is applied to a mammogram, preprocessing steps are very important in order to limit the search for abnormalities without undue influence from the background of the mammogram. These steps are needed only on digitized screen film mammography (SFM) images, because digital mammography devices perform this step automatically during the image storing process. Breast segmentation consists of breast border contour extraction, pectoral muscle extraction, nipple identification, etc. On images obtained directly from digital mammography devices the segmentation process is much easier. Previous work by many authors, including this paper, used mammography image databases, especially
MiniMIAS [12] and DDSM [13], both comprised of scanned and digitized SFM images. In this paper we have
proposed three steps of preprocessing of raw digital mammogram for future automatic elaborate investigation of
abnormalities using some image processing technique.

2. Image Segmentation
The paper is based on the image segmentation method, which is a major step in image processing: the inputs are images, and the outputs are the attributes extracted from those images. Segmentation divides an image into its constituent regions or objects. The level to which segmentation is carried out depends on the problem being solved, i.e. segmentation should stop when the objects of interest in an application have been isolated. Image segmentation refers to the decomposition of a scene into its components. Image segmentation techniques can be broadly classified into five main classes: threshold-based, cluster-based, edge-based, region-based, and watershed-based segmentation [14].
Segmentation plays an important role in image analysis. The goal of segmentation is to isolate the regions of
interest depending on the problem and its characters. Many applications of image analysis need to obtain the regions
of interest before the analysis can start. Therefore, the need of an efficient segmentation method has always been
there. A gray level image consists of two main features, namely region and edge.
Segmentation algorithms for gray images are generally based on one of two basic properties of image intensity values: discontinuity and similarity. In the first category, the approach is to partition an image based on abrupt changes in intensity, such as edges in an image. The principal approaches in the second category are based on partitioning an image into regions that are similar according to a set of predefined criteria. Thresholding, region growing, and region splitting and merging are examples of the methods in this category.

3. Region Growing
For the segmentation of intensity images like digital mammogram, there are four main approaches [15], [16],
namely, threshold techniques, boundary-based methods, region-based methods, and hybrid techniques which
combine boundary and region criteria.
Threshold techniques [17] are based on the postulate that all pixels whose value (gray level, color value, or
other) lies within a certain range belong to one class. Such methods neglect all of the spatial information of the image
and do not cope well with noise or blurring at boundaries. Boundary-based methods [18] use the postulate that the
pixel values change rapidly at the boundary between two regions. The complement of the boundary-based approach
is to work with the regions [19]. Region-based methods rely on the postulate that neighboring pixels within one region have similar values. This leads to the class of algorithms known as region growing, of which the “split and
merge” technique [20] is probably the best known. The general procedure is to compare one pixel to its neighbor(s).
If a criterion of homogeneity is satisfied, the pixel is said to belong to the same class as one or more of its neighbors.
The fourth type is the hybrid techniques, which combine boundary and region criteria. This class includes
morphological watershed segmentation [21] and variable-order surface fitting [15]. The watershed method is
generally applied to the gradient of the image.
We mainly use a method known as “seeded region growing” (SRG) [22], which is close to the watershed approach, with some necessary changes in the proposed technique; it is based on the conventional region-growing postulate of similarity of pixels within a region. For seeded region growing, a seed or a set of seeds can be selected automatically or manually. Automated selection can be based on finding pixels of interest, e.g. the brightest pixel in an image can serve as a seed pixel; seeds can also be determined from the peaks found in an image histogram. Alternatively, seeds can be selected manually for every object present in the image. The
method is employed to segment an image into different regions using a set of seeds. Each seeded region is a
connected component comprising of one or more points and is represented by a set S. The set of immediate
neighbours bordering the pixel is calculated. The neighbours are then examined and if they intersect any region from
set S, then a measure δ (difference between a pixel and the intersected region) is computed. If the neighbours
intersect more than one region, then the pixel is assigned to the region for which the difference measure δ is minimum. The
new state of regions for the set then constitutes input to the next iteration. This process continues until the entire
image pixels have been assimilated into regions. Hence for each iteration the pixel that is most similar to a region
that it borders is appended to that region. The SRG algorithm is inherently dependent on the order of processing
image pixels. The method has the advantage that it is fairly robust, quick, and parameter free except for its
dependency on the order of pixel processing.
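As an illustration (a minimal Python sketch under our own assumptions, not the code used in this paper), the classic SRG loop described above can be written with a priority queue keyed on the difference measure δ between a candidate pixel and the mean intensity of the region it borders; seed coordinates and integer labels are assumed to be given.

import heapq
import numpy as np

def srg(img, seeds):
    """img: 2-D gray-level array; seeds: iterable of (row, col, label), label >= 1."""
    seeds = list(seeds)
    labels = np.zeros(img.shape, dtype=int)   # 0 means not yet assimilated
    sums, counts, heap = {}, {}, []

    def push_neighbours(r, c, lab):
        # queue the 4-neighbours of (r, c) with their delta to region 'lab'
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < img.shape[0] and 0 <= cc < img.shape[1] and labels[rr, cc] == 0:
                delta = abs(float(img[rr, cc]) - sums[lab] / counts[lab])
                heapq.heappush(heap, (delta, rr, cc, lab))

    for r, c, lab in seeds:                    # initialise regions from the seeds
        labels[r, c] = lab
        sums[lab] = sums.get(lab, 0.0) + float(img[r, c])
        counts[lab] = counts.get(lab, 0) + 1
    for r, c, lab in seeds:
        push_neighbours(r, c, lab)

    while heap:                                # most similar pixel is absorbed first
        _, r, c, lab = heapq.heappop(heap)
        if labels[r, c]:
            continue                           # already assigned earlier
        labels[r, c] = lab
        sums[lab] += float(img[r, c])
        counts[lab] += 1
        push_neighbours(r, c, lab)
    return labels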

4. Previous Works on SRG Algorithm


Mehnert & Jackway (1997) improved the above seeded region-growing algorithm by making it independent of
the pixel order of processing and making it more parallel. Their study presents a novel technique for Improved
Seeded Region Growing (ISRG). ISRG algorithm retains the advantages of Seeded Region Growing (SRG) such as
fast execution, robust segmentation and no parameters to tune. The algorithm is also pixel-order independent. If more than one pixel in the neighbourhood has the same minimum similarity measure value, then all of them are processed in parallel. No pixel can be labelled and no region can be updated until all other pixels with the same priority have been examined. If a pixel cannot be labelled, because it is equally likely to belong to two or more adjacent regions, then it is
labelled as ‘tied’ and takes no part in the region growing process. After all of the pixels in the image have been
labelled, the pixels labelled ‘tied’ are independently re-examined to see whether or not the ties can be resolved. To
resolve the ties an additional assignment criterion is imposed, such as assigning a tied pixel to the largest
neighbouring region or to the neighbouring region with the largest mean. ISRG algorithm produces consistent
segmentation because it is not dependent on the order of pixel processing. Parallel processing ensures that the pixels
with the same priority are processed in the same manner simultaneously [23].
Beaulieu & Goldberg (1989) proposed a hierarchical stepwise optimisation algorithm for region merging, which
is based on stepwise optimisation and produces a hierarchical decomposition of the image. The algorithm starts with
an initial image partition into a number of regions. At each iteration, two segments are merged provided they
minimise a criteria of merging a segment to another. In this stepwise optimisation, the algorithm searches the whole
image context before merging two segments and finds the optimal pair of segments. This means that the most similar
segments are merged first. The algorithm gradually merges the segments and produces a sequence of partitions. The
sequences of partitions reflect the hierarchical structure of the image [24].
Gambotto (1993) proposed an algorithm that combines the region growing and edge detection methods for
segmenting the images. The method is iterative and uses both of these approaches in parallel. The algorithm starts
with an initial set of seeds located inside the true boundary of the region. The pixels that are adjacent to the region
are iteratively merged with it if they satisfy a similarity criterion. A second criterion uses the average gradient over
the region boundary to stop the growth. The last stage refines the segmentation. The analysis is based on cooperation
between the region growing algorithm and the contour detection algorithm. Since the growing process proceeds by adding segments to a region, some pixels that belong to the next region may be misclassified into the previous region. A nearest neighbour rule is then used to locally reclassify them [25].
Hojjatoleslami & Kittler (2002) proposed a new region growing approach for image segmentation, which uses
gradient information to specify the boundary of a region. The method has the capability of finding the boundary of a
relatively bright/dark region in a textured background. The method relies on a measure of contrast of the region,
which represents the variation of the region grey-level as a function of its evolving boundary during segmentation.
This helps to identify the best external boundary of the region. The application of a reverse test using a gradient
measure then yields the highest gradient boundary for the region being grown. The unique feature of the approach is
that in each step at most one candidate pixel will exhibit the required properties to join the region. The growing
process is directional so that the pixels join the grown region according to a ranking list and the discontinuity
measurements are tested pixel by pixel. The algorithm is also insensitive to a reasonable amount of noise. The main
advantage of the algorithm is that no a priori knowledge is needed about the regions [26].

5. Proposed Method
Digital Mammograms are medical images that are difficult to interpret, thus a preparation phase is needed in
order to improve the image quality and make the segmentation results more accurate. The main objective of this
process is to improve the quality of the image to make it ready for further processing by removing the irrelevant and
unwanted parts in the background of the mammogram. Here we propose three distinct phases of preprocessing before the actual algorithm for automatic analysis of the digital mammogram can be conducted. The first phase is contrast enhancement of the digital mammogram, the second phase identifies the region of interest, and the third or final phase eliminates the pectoral tissue from the mammogram.

5.1. Enhancement of Digital Mammogram


The contrast enhancement phase is done using the Contrast Limited Adaptive Histogram Equalization (CLAHE)
technique, which is a special case of the histogram equalization technique [27] that functions adaptively on the image
to be enhanced. The pixel's intensity is thus transformed to a value within the display range proportional to the pixel
intensity's rank in the local intensity histogram. CLAHE [28] is a refinement of Adaptive Histogram Equalization
(AHE) where the enhancement calculation is modified by imposing a user-defined maximum, i.e. clip level, on the height of the local histogram and thus on the maximum contrast enhancement factor. The enhancement is thereby reduced in very uniform areas of the image, which prevents over-enhancement of noise and reduces the edge-shadowing effect of unlimited AHE [29].
The CLAHE method seeks to reduce the noise and edge-shadowing effect produced in homogeneous areas and
was originally developed for medical imaging [30]. This method has been used for enhancement to remove the noise
and reduces the edge-shadowing effect in the pre-processing of digital mammogram [31].
CLAHE operates on small regions in the image, called tiles, rather than on the entire image. Each tile’s contrast is enhanced, so that the histogram of the output region approximately matches a uniform, Rayleigh, or exponential distribution; the distribution is the desired histogram shape for the image tiles. The
neighbouring tiles are then combined using bilinear interpolation to eliminate artificially induced boundaries. The contrast, especially in homogeneous areas, can be limited to avoid amplifying any noise and to reduce the edge-shadowing effect that might be present in the image. The CLAHE technique is described below [32]:
Step 1: The mammogram was divided into a number of non-overlapping contextual regions of equal size, experimentally set to 8x8 tiles, which corresponds to 64 regions.
Step 2: The histogram of each contextual region was calculated.
Step 3: A clip limit for clipping histograms was set (t=0.01). The clip limit is a threshold parameter by which the contrast of the image can be effectively altered; a higher clip limit increases mammogram contrast.
Step 4: Each histogram was redistributed in such a way that its height did not exceed the
clip limit.
Step 5: All histograms were modified by the transformation function f(i) = Σ_{j=0}^{i} p(j), where p(j) = nj / n is the probability density function of the input mammogram image grayscale value j, n is the total number of pixels in the input mammogram image, and nj is the number of input pixels with grayscale value j.
Step 6: The neighboring tiles were combined using bilinear interpolation and the
mammogram image grayscale values were altered according to the modified
histograms.
In our experiment, we set the tile size, i.e. the rectangular contextual regions, to 8x8, chosen from the best trial result. The contrast factor that prevents over-saturation of the image, specifically in homogeneous areas, is restricted to 0.01 to obtain the optimized output. The number of bins for the histogram used in the contrast-enhancing transformation is restricted to 64, and the histogram distribution is 'Rayleigh', i.e. bell-shaped, for our experimentation. The range is not specified in the experiment, so as to get the full range of the output image.
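For illustration only, the following Python sketch reproduces comparable settings with scikit-image. The paper's parameters match MATLAB-style CLAHE; scikit-image's equalize_adapthist uses a uniform (not Rayleigh) target distribution, so this is an approximation, and the file name is a hypothetical MIAS image.

from skimage import exposure, img_as_float, io

mammogram = img_as_float(io.imread("mdb001.pgm"))   # hypothetical MIAS file name

# A 1024x1024 image divided into 8x8 tiles gives 128-pixel tiles; clip limit
# 0.01 and 64 histogram bins follow the experiment described above.
enhanced = exposure.equalize_adapthist(
    mammogram, kernel_size=128, clip_limit=0.01, nbins=64)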

5.2. Detection and Definition of ROI


Mammograms show a projection of the breast that can be made from different angles. The two most common
projections are medio-lateral oblique and cranio-caudal. The advantage of the medio-lateral oblique projection is that
almost the whole breast is visible, often including lymph nodes. The main disadvantage is that part of the pectoral muscle appears in the upper part of the image, superimposed over a portion of the breast. The cranio-caudal view is taken from above, resulting in an image that sometimes does not show the area close to the chest wall. In our research work we consider the former for its advantage, but pectoral muscle detection is one of the more difficult tasks in the breast segmentation process. The reason for detecting the pectoral muscle is to remove it; its suppression can help in some automatic detection procedures, such as finding bilateral asymmetry.
It is important to detect the pectoral muscle and define the region of interest (ROI) for further analysis. This operation is important in the medio-lateral oblique (MLO) view, where the pectoral muscle, slightly brighter than the rest of the breast tissue, can appear in the mammogram. To detect it, it is very important to determine whether the mammogram shows the left or the right breast. We consider the right breast first. First, a straight line (AB) is plotted between the left background of the mammogram and the start of the actual breast image. The second step is to determine the middle point (C) of the top margin of the mammogram and plot a straight line (CD) from the middle point (C) to the lower left corner of the mammogram. The line CD crosses the line AB at point E, resulting in an inverted right-angle triangle (ACE) that is our region of interest (ROI) for detecting the pectoral tissue in the mammogram; the process is depicted in figures 1 and 2. Except for the ROI, the rest of the mammogram is converted to the background color to make the ROI prominent. Finally, to reduce the computational complexity, the inverted right-angle triangle (ACE) is cropped from the original mammogram, as shown in figure 3, for further processing.
Figure 1: The Inverted Right Angle Triangle (ACE)

Figure 2: Detected ROI from Mammogram

Figure 3: Isolated Part of ROI from Mammogram for Further Processing

The question arises whether in all cases the said triangle covers the entire region of the pectoral muscle. It gives an absolute success ratio on our set of fifty digital mammography images, from twenty-five different pairs of mammograms of different shapes and sizes. In the case of the left breast, we consider the mirror image and run the same process.
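A minimal Python sketch of this ROI construction is given below (ours, not the authors' code). It assumes the right-breast orientation described above, with line AB taken as the vertical column where the breast region begins (skin_col), C the middle of the top margin, and D the lower-left corner.

import numpy as np

def pectoral_roi(img, skin_col):
    """Keep the inverted right-angle triangle ACE; blank everything else."""
    h, w = img.shape
    mask = np.zeros_like(img, dtype=bool)
    c = w // 2                          # point C: middle of the top margin
    for r in range(h):
        # line CD runs from (0, c) to (h-1, 0); x is its column at row r
        x = int(round(c * (1 - r / (h - 1))))
        if x <= skin_col:               # reached point E: triangle is closed
            break
        mask[r, skin_col:x] = True      # columns inside triangle ACE
    roi = np.where(mask, img, 0)        # background colour outside the ROI
    return roi, mask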

5.3. Suppression of Pectoral Muscle from ROI


We have used the Seeded Region Growing (SRG) algorithm to suppress the pectoral muscle from the predefined ROI.
‘Region growing’ is a procedure that groups pixels or sub regions into larger regions based on predefined criteria.
The basic approach is to start with sets of “seed” points and from these grow regions by appending to each seed those
neighboring pixels that have properties similar to the seed. Selection of the seed depends on the nature of the
problem.
This method takes a set of seeds as input along with the image. The seeds mark each of the objects to be
segmented. The pixel with the smallest difference measured this way is allocated to the respective region. This
process continues until all pixels are allocated to a region. Another problem in seeded region growing is the
formulation of a stopping rule. Basically, growing a region should stop when no more pixels satisfy the criteria for
inclusion in that region.
In our proposed method we have used the basic rules of the SRG algorithm but introduced some new ideas to make it more efficient, problem specific and less time consuming. Instead of a single seed, a set of seeds has been considered. The left-top to right-bottom diagonal has been used to select the seeds, specifically up to the crossing point with the right-top to left-bottom diagonal in the cropped image. The double-arrow red line in figure 4 indicates the line of consideration.

Figure 4: The Double Arrow Red Line of Consideration to Collect the Set of Seeds

The Cartesian slope-intercept equation of the line may be used for traversing the line of consideration with end points (x1, y1) and (x2, y2) in the mammogram:

y = m·x + b

where m represents the slope of the line and b the y intercept. If we specify the two end points (x1, y1) and (x2, y2) in the mammogram image, we can determine the slope and y intercept as follows:

m = (y2 − y1) / (x2 − x1),  b = y1 − m·x1

For any given x interval δx along the line, we can calculate the corresponding y interval δy from

δy = m·δx

Similarly, we can obtain the x interval δx corresponding to a specified δy as

δx = δy / m

All pixels along the line of consideration are read one after another from left top to right bottom, and the minimum, maximum and average intensity values of the pixels are calculated.

Figure 5: Histogram of Figure 3 Showing the Area of the Pectoral Muscle Region by the Double Arrow Black Line
A selection rule for region growing is now introduced. The inverted right-angle triangle area is read from the left top corner. For each pixel, the average intensity is subtracted from the pixel intensity and the result is divided by the difference between the maximum intensity and the average intensity:

R(x, y) = (I(x, y) − IAvg) / (IMax − IAvg)

If R(x, y) is greater than 0 and less than or equal to 1, the pixel is merged into the growing region and its intensity value is set to 0; otherwise it remains unchanged. Here I(x, y) is the intensity of a pixel, and IAvg and IMax are the average and maximum intensities calculated earlier from the set of seeds. The region growing is restricted within the boundary of the inverted right-angle triangle area of the cropped image. Using this method, the pectoral muscle part of the mammogram is eliminated. Superimposing the ROI on the original mammogram shows the breast image without the pectoral muscle part, which helps further investigation of the mammogram.
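A Python sketch of this modified SRG criterion (ours, not the authors' code) is shown below; the crossing point of the two diagonals is approximated as half of the smaller image dimension, which is an assumption.

import numpy as np

def suppress_pectoral(roi):
    """roi: cropped triangular region as a 2-D gray-level array."""
    h, w = roi.shape
    n = min(h, w) // 2                        # approximate diagonal crossing point
    seeds = roi[np.arange(n), np.arange(n)]   # pixels on the line of consideration
    i_avg, i_max = seeds.mean(), seeds.max()
    if i_max == i_avg:                        # flat seed line: nothing to grow
        return roi
    r = (roi.astype(float) - i_avg) / (i_max - i_avg)
    out = roi.copy()
    out[(r > 0) & (r <= 1)] = 0               # merge into region: suppress muscle
    return out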

6. Experimental Results Analysis


The mammogram images used in this experiment are taken from the mini mammography database of MIAS
[33]. The original MIAS Database (digitized at 50 micron pixel edge) has been reduced to 200-micron pixel edge and
clipped/padded so that every image is 1024 pixels x 1024 pixels. All images are held as 8-bit gray level scale images
with 256 different gray levels (0-255) and physically in portable gray map (.pgm) format. The list is arranged in pairs
of mammograms, where each pair represents the left and right breast of a single patient. In our experiment we have considered all types of breast tissue, i.e. fatty, fatty-glandular, and dense-glandular, and abnormalities like calcification, well-defined or circumscribed masses, spiculated masses and other ill-defined masses. Fifty medio-lateral oblique (MLO) views from bilateral pairs of mammogram images are used as test cases.
The main objective of the proposed method is to prepare the raw mammogram for further analysis to identify abnormalities. In the proposed method, uniform contrast enhancement prevents over-enhancement of noise and reduces the edge-shadowing effect; in addition, the region of interest identification, along with the modified region growing algorithm, detects the pectoral muscle and suppresses it. The presence of noise, the edge-shadowing effect and the pectoral muscle can affect the results of lesion detection algorithms, so it is recommended to remove them from the image beforehand.
First we used the contrast enhancement algorithm, which produces excellent results by reducing the noise and the edge-shadowing effect. The raw mammogram and the output of the contrast enhancement phase are shown in figures 6 and 8 respectively, along with their histograms in figures 7 and 9, justifying the result of the proposed technique.

Figure 6: The Original Mammogram Image

Figure 7: The Histogram of the Original Mammogram Image
Figure 8: The Mammogram Image after Contrast Enhancement

Figure 9: The Histogram of the Contrast-Enhanced Mammogram Image

The second phase establishes the region of interest (ROI) from the contrast-enhanced mammogram to extract the pectoral muscle. We have tested over 50 mammogram images of different shapes and types and obtained 90% near-accurate results, which include the accurate results. Figures 10, 11 and 12 show the results on different shapes and types of breast.

Figure 10: Mammogram Showing the Identified Region of Interest (ROI)

Figure 11: Mammogram Showing the Identified Region of Interest (ROI)
Figure 12: Mammogram Showing the Identified Region of Interest (ROI)

The inverted right-angle triangle (ACE) of the region of interest (ROI) is cropped from the mammograms of figures 10, 11 and 12, as shown in figure 13.

Figure 13: Cropped Inverted Right-Angle Triangle (ROI) of Figures 10, 11 and 12, from Left to Right

In the final phase, the modified seeded region growing algorithm is executed on the cropped mammogram images to suppress the pectoral muscle. Figures 14 and 15 show the mammogram after suppression of the pectoral muscle and the cropped image superimposed on the original mammogram.

Figure 14: The Mammogram after Suppression of Pectoral Muscle
Figure 15: The Final Output, the Superimposed Original Mammogram after Suppression of Pectoral Muscle

For pectoral muscle suppression, we obtained 84% near-accurate results, including accurate results. The results of pectoral muscle suppression are divided into three groups, shown in Table 1. The total percentages for "Good", "Acceptable" and "Unacceptable" results are shown at the end of each column.
Table 1: Pectoral Muscle Suppression Results
Result        | Good | Acceptable | Unacceptable
Total No (50) |  23  |     19     |      8
Percentage    | 46%  |    38%     |     16%
Combined      |       84%         |     16%

Figure 16: Results of Pectoral Muscle Suppression in Cropped Images

We note that some of the results are slightly over- or under-segmented. The method shows over-segmentation of the breast in cases with dense tissue, where the contrast between the muscle and the tissue is unclear. In those cases, our method rejects the muscle detection and provides the region obtained without suppressing the muscle as the final result. A possible solution could be to impose shape restrictions on the growing process. To summarize, the results obtained by the method show that it is a robust approach, but it can be improved in terms of accuracy. Even so, we accept this method because it provides encouraging results.

7. Conclusions
A set of preprocessing steps has been presented, comprising contrast enhancement and pectoral muscle detection and suppression. The results obtained over the MiniMIAS database have shown generally good behavior. This preprocessing reduces noise and the edge-shadowing effect, accurately detects the region of interest (ROI) for the pectoral muscle, and suppresses the pectoral muscle successfully. Thus the processed mammogram can be used for the automated detection of human breast abnormalities such as calcification, circumscribed masses, spiculated masses, other ill-defined masses and lesions, and for asymmetry analysis. Further work may be conducted on smoothing the pectoral muscle segmentation and imposing shape restrictions on the region growing process, which should give better results. This method has the potential for further development because of its simplicity and encouraging results, which will further motivate online or real-time breast cancer diagnosis systems.

References
[1] Breast Cancer Facts & Figures, 2009-2010, American Cancer Society, Inc.
[2] Norum J. “Breast cancer screening by mammography in Norway. Is it cost-effective?” Ann Oncol 1999, 10:
197-203.
[3] Michaelson J, Satija S, Moore R, et al. “The pattern of breast cancer screening utilization and its consequences”
Cancer. Jan 1 2002; 94(1): 37-43.
[4] Baines CJ, McFarlane DV, Miller AB. “The role of the reference radiologist: Estimates of interobserver
agreement and potential delay in cancer detection in the national screening study”, Invest Radiol 1990, 25: 971-976.
[5] Wallis MG, Walsh MT, Lee JR. “A review of false negative mammography in a symptomatic population”, Clin
Radiol 1991, 44: 13-5.
[6] Sterns EE, “Relation between clinical and mammographic diagnosis of breast problems and the cancer/ biopsy
rate,” Can. J. Surg., vol. 39, n°. 2, p 128-132, 1996.
[7] Highnam R and Brady M, “Mammographic Image Analysis”, Kluwer Academic Publishers, 1999. ISBN: 0-7923-5620-9.
[8] Kekre HB, Sarode Tanuja K and Gharge Saylee M, “Tumor Detection in Mammography Images using Vector
Quantization Technique”, International Journal of Intelligent Information Technology Application, 2009,
2(5):237-242
[9] FDA Web site, http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfMQSA/mqsa.cfm
[10] RadiologyInfo.org developed jointly by Radiological Society of North America and American College of
Radiology
[11] National Cancer Institute (NCI) Web site, http://www.cancernet.gov
[12] Suckling J., Parker J., Dance D.R., Astley S., Hutt I., Boggis C.R.M., Ricketts I., Stamatakis E., Cernaez N.,
Kok S.L., Taylor P., Betal D., Savage J., “The Mammographic Image Analysis Society Digital Mammogram
Database”, Proceedings of the 2nd International Workshop on Digital Mammography, York, England, 10–12
July 1994, Elsevier Science, Amsterdam, The Netherlands, pp. 375-378.
[13] Heath M., Bowyer K., Kopans D., Moore R., Kegelmeyer P. Jr., “The Digital Database for Screening
Mammography”, Proceedings of the 5th International Workshop on Digital Mammography, Toronto, Canada,
11-14 June 2000, Medical Physics Publishing, 2001, pp. 212-218.
[14] Sanmeet Bawa, A thesis on “Edge Based Region Growing”, Department of Electronics and communication
Engineering, Thapar Institute of Engineering & Technology (Deemed University), India, June 2006.
[15] P. J. Besl and R. C. Jain, “Segmentation through variable-order surface fitting,” IEEE Trans. Pattern Anal.
Machine Intell., vol. PAMI-IO, pp.167-192, 1988
[16] R. M. Haralick and L. G. Shapiro, “Image segmentation techniques,” Comput. Vis. Graph. Image Process, vol.
29, pp. 100-132, 1985.
[17] P. K. Sahoo, S. Soltani, and A. K. C. Wong, “A survey of thresholding techniques,” Comput. Vis.. Graph.
Image Process, vol. 41, pp. 233-260, 1988.
[18] L. S. Davis, “A survey of edge detection techniques,” Compur. Graph. Image Process, vol. 4, pp. 248-270,
1975.
[19] S. W. Zucker, “Region growing: Childhood and adolescence,” Comput. Graph. Image Process, vol. 5, pp. 382-
399, 1976.
[20] S. L. Horowitz and T. Pavlidis, “Picture segmentation by a directed split-and-merge procedure,” Proc. 2nd Int.
Joint Conf Pattern Recognit, 1974, pp, 424-433.
[21] F. Meyer and S. Beucher, “Morphological segmentation,” J. Vis. Commun. Image Represent., vol. 1, pp. 21-46, 1990.
[22] Rolf Adams, et al, "Seeded Region Growing", IEEE Transactions on Pattern Analysis and Machine
Intelligence, VOL. 16, NO. 6, JUNE 1994, pp. 641-647.
[23] Mehnert A and Jackway P, “An improved seeded region growing algorithm”, Pattern Recognition Letters, 18, 1997,
pp.1065-1071
[24] Beaulieu JM and Goldberg M, A Hierarchy Research article in "Picture segmentation: a stepwise optimisation
approach", IEEE Trans. Pattern Analysis & Machine Intellig. 11 (2), 1989, pp.150-163.
[25] Gambotto JP, "A new approach to combining region growing and edge detection", Pattern Recog. Letters. 14,
1993, pp. 869-875.
[26] Hojjatoleslami SA and Kittler J, "Region growing: a new approach", Dept. Electric.Electronics Engg., Univ.
Surrey, Guildford, UK, GU25XH, 2002.
[27] Gonzalez, R.C., Woods, R.E. (1992), Digital Image Processing.
[28] Pizer SM, “Psychovisual issues in the display of medical images, in Hoehne KH (ed), Pictoral Information
systems in Medicine”, Berlin, Germany, Springer-Verlag, 1985, PP 211-234.
[29] Pisano ED, et al, “Contrast Limited Adaptive Histogram Equalization Image Processing to Improve the
Detection of Simulated Spiculations in Dense Mammograms”, Journal of Digital Imaging, Publisher Springer
New York, Issue Volume 11, Number 4, 1998, pp 193-200.
[30] Wanga X, Wong BS, Guan TC, “Image enhancement for radiography inspection”, International Conference on
Experimental Mechanics, 2004, pp 462-468.
[31] Ball JE, “Digital mammogram spiculated mass detection and spicule segmentation using level sets”,
Proceedings of the 29th Annual International Conference of the IEEE EMBS, 2007, pp. 4979-4984.
[32] Antonis Daskalakis, et al, “An Efficient CLAHE-Based, Spot-Adaptive, Image Segmentation Technique for
Improving Microarray Genes' Quantification”, 2nd International Conference on Experiments/Process/System
Modelling/Simulation & Optimization, Athens, 4-7 July, 2007.
[33] J Suckling et al (1994) “The Mammographic Image Analysis Society Digital Mammogram Database Exerpta
Medica”. International Congress Series 1069 pp375-378.
Comparative Analysis of Performance of Series FACTS Devices Using PSO


Based Optimal Power Flow Solutions
K.Padma1 and K.Vaisakh2
Department of Electrical Engineering, AU College of Engineering
Andhra University, Visakhapatnam-530003,AP, India
E-mail: vaisakh_k@yahoo.co.in

Abstract
This paper incorporates three series FACTS devices, namely the PST, TCSC and SSSC, in optimal power flow solutions to enhance the performance of power systems. Particle swarm optimization is used for solving the optimal power flow problem for steady-state studies. The effectiveness of the proposed approach was tested on the IEEE 14-bus system with series FACTS devices. Results show that the proposed algorithm gives better solutions for enhancing system performance with the SSSC compared to the other devices.

Keywords: Power system operation, series FACTS devices, particle swarm optimization, optimal power flow
solution
1. Introduction

The complexity of operating modern power systems is continually increasing because of larger power transfers over longer distances, greater interdependence among interconnected systems, more complicated coordination and interaction among various system controllers, and smaller power reserves. These demands have forced systems to be operated closer to their security limits, and instability has become a major threat to system operation, as evidenced by the recent spate of blackouts [1]. Voltage stability is becoming an increasing source of concern in the secure operation of present-day power systems. Hence it is necessary to consider voltage stability aspects in solving optimal reactive power control problems.
To meet the increasing power demand with existing transmission lines, the introduction of FACTS devices becomes an alternative. FACTS can improve the stability of the network and reduce the flows in heavily loaded lines by controlling their parameters, including series impedance, current, voltage and phase angle. In particular, FACTS devices can enable a line to carry its flow close to its rated capacity and consequently can improve power system security under contingency [2-4].
In a power system, FACTS devices may be used to achieve several goals. In steady state, for a meshed network, they permit operating transmission lines close to their thermal limits and reducing loop flows. In this respect, they act by supplying or absorbing reactive power, increasing or reducing voltage, and controlling series impedance or phase angle [5-6]. Different types of devices have been developed, such as series controllers, shunt controllers, and combined series-shunt controllers. Within a category, several FACTS devices exist and each one has its own properties and may be used in specific contexts. The choice of the appropriate device is important since it depends on the goals to be reached.
Recently, the success achieved by evolutionary algorithms in the solution of complex problems, and the improvements made in computation such as parallel computation, have stimulated the development of new algorithms like Particle Swarm Optimization (PSO), which presents great convergence characteristics and the capability of determining global optima [7-13].
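For illustration, a generic particle swarm optimization loop is sketched below in Python (a sketch under common PSO conventions, not the implementation used in this paper); the cost function standing in for the OPF objective with constraint penalties, and all parameter values, are assumptions.

import numpy as np

def pso(cost, lb, ub, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize cost(x) subject to lb <= x <= ub (lb, ub: 1-D arrays)."""
    rng = np.random.default_rng(0)
    dim = len(lb)
    x = rng.uniform(lb, ub, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest = x.copy()                                   # personal bests
    pbest_f = np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()           # global best position
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lb, ub)                     # respect variable limits
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()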
This paper examines the effect of series FACTS devices on the power system performance using PSO based
OPF solutions. The effectiveness of the proposed method was tested on IEEE 14-bus system and comparison was
made on the performance of the three FACTS devices.

2. Voltage Stability Index (L-index) Computation

The voltage stability L-index is a good voltage stability indicator, with its value changing between zero (no load) and one (voltage collapse) [14]. Moreover, it can be used as a quantitative measure to estimate the voltage stability margin at the operating point. For a given system operating condition, using the load flow (state estimation) results, the voltage stability L-index is computed as [14]:
L_j = | 1 − Σ_{i=1}^{g} F_ji (V_i / V_j) |,   j = g+1, ..., n        (1)

All the terms within the sigma on the RHS of equation (1) are complex quantities. The values of F_ji are obtained from the network Y-bus matrix.
For stability, the index L_j must not exceed its maximum limit of 1 for any of the nodes j. Hence, a global indicator L describing the stability of the complete subsystem is given by the maximum of L_j over all load buses j. An L_j value away from 1 and close to 0 indicates improved system security. The advantage of this L-index lies in the simplicity of the numerical calculation and the expressiveness of the results.
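A compact Python sketch of this computation is given below (an illustration, assuming buses are ordered with the g generator buses first, followed by the load buses; Y is the complex bus admittance matrix and V the complex bus voltage vector from the load flow).

import numpy as np

def l_index(Y, V, g):
    """Return the global L indicator, max_j L_j, over the load buses."""
    Y_LL = Y[g:, g:]                       # load-load partition of the Y-bus
    Y_LG = Y[g:, :g]                       # load-generator partition
    F = -np.linalg.solve(Y_LL, Y_LG)       # F_ji factors of equation (1)
    L = np.abs(1.0 - (F @ V[:g]) / V[g:])  # one L_j value per load bus
    return L.max()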

3. Series FACTS controllers

FACTS controllers are able to change, in a fast and effective way, the network parameters in order to achieve better system performance. FACTS controllers, such as phase shifters, shunt or series compensation, and the most recently developed converter-based power electronic controllers, make it possible to control circuit impedance, voltage angle, and power flow for optimal operation of power systems, facilitate the development of competitive electric energy markets, stimulate the unbundling of power generation from transmission, mandate open access to transmission services, etc. The benefits brought about by FACTS include improvement of system behavior and enhancement of system reliability. However, their main function is to control power flows.
There are various types of series FACTS devices available for this purpose, namely Phase Shift Transformer
(PST), Thyristor-Controlled Series Capacitor (TCSC), and Static Synchronous Series Compensator (SSSC). Each of
these FACTS devices, however, has its own characteristics and limitations.

3.1 Phase Shift Transformer (PST)

The phase shift transformer circuit diagram is shown in Figure 1. Due to the installation of a phase shifter, the system gains several benefits such as overload relief, system loss reduction and reduced generation adjustment. All of these benefits may be selected as objective functions for OPF with a PST. However, the primary purpose of installing a phase shifter is to remove line overload.

Figure 1 Circuit diagram of phase shifter

3.2 Thyristor Controlled Series Compensator (TCSC)

One important FACTS controller is the TCSC, which allows rapid and continuous changes of the transmission line impedance. The TCSC controls the active power transmitted by varying the effective line reactance, connecting a variable reactance in series with the line, as shown in Figure 2. The TCSC is mainly used for improving the active power flow across the transmission line.

Figure 2 Circuit diagram of TCSC

3.3 Static Synchronous Series Compensator (SSSC)

An SSSC usually consists of a coupling transformer, an inverter, and a capacitor. The SSSC is connected in series with a transmission line through the coupling transformer. It is assumed here that the transmission line is connected in series with the SSSC at bus j. The active and reactive power flows of the SSSC branch i-j entering bus j are equal to the sending-end active and reactive power flows of the transmission line, respectively. In principle, the SSSC can generate and insert a series voltage, which can be regulated to change the impedance (more precisely, the reactance) of the transmission line. In this way, the power flow of the transmission line, or the voltage of the bus to which the SSSC is connected, can be controlled.


Figure 3 Equivalent Circuit of SSSC


The equivalent circuit of the SSSC is shown in Figure 3. From the equivalent circuit, the power flow constraints of the SSSC can be given as

$$P_{ij} = V_i^2 g_{ii} - V_i V_j\,(g_{ij}\cos\theta_{ij} + b_{ij}\sin\theta_{ij}) - V_i V_{se}\,(g_{ij}\cos(\theta_i-\theta_{se}) + b_{ij}\sin(\theta_i-\theta_{se})) \tag{2}$$

$$Q_{ij} = -V_i^2 b_{ii} - V_i V_j\,(g_{ij}\sin\theta_{ij} - b_{ij}\cos\theta_{ij}) - V_i V_{se}\,(g_{ij}\sin(\theta_i-\theta_{se}) - b_{ij}\cos(\theta_i-\theta_{se})) \tag{3}$$

$$P_{ji} = V_j^2 g_{jj} - V_i V_j\,(g_{ij}\cos\theta_{ji} + b_{ij}\sin\theta_{ji}) + V_j V_{se}\,(g_{ij}\cos(\theta_j-\theta_{se}) + b_{ij}\sin(\theta_j-\theta_{se})) \tag{4}$$

$$Q_{ji} = -V_j^2 b_{jj} - V_i V_j\,(g_{ij}\sin\theta_{ji} - b_{ij}\cos\theta_{ji}) + V_j V_{se}\,(g_{ij}\sin(\theta_j-\theta_{se}) - b_{ij}\cos(\theta_j-\theta_{se})) \tag{5}$$

where $g_{ij} + jb_{ij} = 1/Z_{se}$, $g_{ii} = g_{ij}$, $b_{ii} = b_{ij}$, $g_{jj} = g_{ij}$, $b_{jj} = b_{ij}$.

The operating constraint of the SSSC (zero active power exchange via the DC link) is

$$P_E = \operatorname{Re}(V_{se} I_{ji}^{*}) = 0,$$

or

$$-V_i V_{se}\,(g_{ij}\cos(\theta_i-\theta_{se}) - b_{ij}\sin(\theta_i-\theta_{se})) + V_j V_{se}\,(g_{ij}\cos(\theta_j-\theta_{se}) - b_{ij}\sin(\theta_j-\theta_{se})) = 0 \tag{6}$$

The active and reactive power flow constraints are

$$P_{ji} - P_{ji}^{\mathrm{specified}} = 0 \tag{7}$$

$$Q_{ji} - Q_{ji}^{\mathrm{specified}} = 0 \tag{8}$$

where $P_{ji}^{\mathrm{specified}}$ and $Q_{ji}^{\mathrm{specified}}$ are the specified active and reactive power flows.

The bound constraints on the equivalent injected voltage $V_{se}\angle\theta_{se}$ are

$$V_{se}^{\min} \le V_{se} \le V_{se}^{\max} \tag{9}$$

$$\theta_{se}^{\min} \le \theta_{se} \le \theta_{se}^{\max} \tag{10}$$
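As a numerical companion to equations (2)-(6), the branch flows can be evaluated directly; the following sketch uses illustrative names and assumes angles in radians:

```python
import numpy as np

def sssc_branch_flows(Vi, ti, Vj, tj, Vse, tse, Zse):
    """Active/reactive flows of an SSSC branch i-j, equations (2)-(5).

    Vi, Vj, Vse : voltage magnitudes (p.u.); ti, tj, tse : angles (rad)
    Zse : complex series impedance of the SSSC coupling branch
    """
    y = 1.0 / Zse
    g, b = y.real, y.imag            # g_ij + j b_ij = 1/Z_se; g_ii = g_jj = g_ij, etc.
    tij, tji = ti - tj, tj - ti
    Pij = (Vi**2)*g - Vi*Vj*(g*np.cos(tij) + b*np.sin(tij)) \
          - Vi*Vse*(g*np.cos(ti - tse) + b*np.sin(ti - tse))
    Qij = -(Vi**2)*b - Vi*Vj*(g*np.sin(tij) - b*np.cos(tij)) \
          - Vi*Vse*(g*np.sin(ti - tse) - b*np.cos(ti - tse))
    Pji = (Vj**2)*g - Vi*Vj*(g*np.cos(tji) + b*np.sin(tji)) \
          + Vj*Vse*(g*np.cos(tj - tse) + b*np.sin(tj - tse))
    Qji = -(Vj**2)*b - Vi*Vj*(g*np.sin(tji) - b*np.cos(tji)) \
          + Vj*Vse*(g*np.sin(tj - tse) - b*np.cos(tj - tse))
    # DC-link constraint (6): the SSSC exchanges no average active power.
    PE = -Vi*Vse*(g*np.cos(ti - tse) - b*np.sin(ti - tse)) \
         + Vj*Vse*(g*np.cos(tj - tse) - b*np.sin(tj - tse))
    return Pij, Qij, Pji, Qji, PE
```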

4. Mathematical formulation of OPF problem

The FACTS devices are located to improve the system performance while minimizing certain objective functions,
maintaining thermal and voltage constraints. Mathematically, the OPF problem after incorporating the FACTS


controllers can be formulated as follows:

$$\text{Minimize } F = \sum_{i=1}^{NG} \left(a_i P_{Gi}^2 + b_i P_{Gi} + c_i\right) \tag{11}$$
The minimization problem is subject to the following equality and inequality constraints.
4.1 Constraints
The OPF problem has two categories of constraints:
Equality constraints: These are the sets of nonlinear power flow equations that govern the power system, i.e.,

$$P_{Gi} - P_{Di} - \sum_{j=1}^{n} |V_i||V_j||Y_{ij}| \cos(\theta_{ij} - \delta_i + \delta_j) = 0 \tag{12}$$

$$Q_{Gi} - Q_{Di} + \sum_{j=1}^{n} |V_i||V_j||Y_{ij}| \sin(\theta_{ij} - \delta_i + \delta_j) = 0 \tag{13}$$

where $P_{Gi}$ and $Q_{Gi}$ are the real and reactive powers injected at bus $i$, respectively; the load demand at the same bus is represented by $P_{Di}$ and $Q_{Di}$; and the elements of the bus admittance matrix are represented by $|Y_{ij}|$ and $\theta_{ij}$.
Inequality constraints: These are the constraints that represent the system operational and security limits, i.e., bounds on the following:
1) generator real and reactive power outputs
$$P_{Gi}^{\min} \le P_{Gi} \le P_{Gi}^{\max}, \quad i = 1, \ldots, N \tag{14}$$
$$Q_{Gi}^{\min} \le Q_{Gi} \le Q_{Gi}^{\max}, \quad i = 1, \ldots, N \tag{15}$$
2) voltage magnitudes at each bus in the network
$$V_i^{\min} \le V_i \le V_i^{\max}, \quad i = 1, \ldots, NL \tag{16}$$
3) transformer tap settings
$$T_i^{\min} \le T_i \le T_i^{\max}, \quad i = 1, \ldots, NT \tag{17}$$
4) reactive power injections due to capacitor banks
$$Q_{Ci}^{\min} \le Q_{Ci} \le Q_{Ci}^{\max}, \quad i = 1, \ldots, CS \tag{18}$$
5) transmission line loadings
$$S_i \le S_i^{\max}, \quad i = 1, \ldots, nl \tag{19}$$
6) voltage stability indices
$$L_{ji} \le L_{ji}^{\max}, \quad i = 1, \ldots, NL \tag{20}$$

FACTS device constraints:

i) PST constraint: phase shift angle limits of the PST
$$\alpha_{Pi}^{\min} \le \alpha_{Pi} \le \alpha_{Pi}^{\max} \tag{21}$$
where $\alpha_{Pi}$ is the phase shift angle of the PST at line $i$, and $\alpha_{Pi}^{\min}$, $\alpha_{Pi}^{\max}$ are the lower and upper phase shift angle limits of the PST at line $i$.

ii) TCSC constraint: reactance limits of the TCSC
$$X_{TCSC_i}^{\min} \le X_{TCSC_i} \le X_{TCSC_i}^{\max}, \quad i = 1, 2, \ldots, n_{TCSC} \tag{22}$$
where $X_{TCSC_i}$ is the reactance of the TCSC at line $i$, $X_{TCSC_i}^{\min}$ and $X_{TCSC_i}^{\max}$ are the minimum and maximum reactances of the TCSC at line $i$, and $n_{TCSC}$ is the number of TCSCs.


iii) SSSC constraints: series voltage source magnitude and angle limits
$$V_{se}^{\min} \le V_{se} \le V_{se}^{\max} \tag{23}$$
$$\theta_{se}^{\min} \le \theta_{se} \le \theta_{se}^{\max} \tag{24}$$

The equality constraints are satisfied by running the power flow program. The generator bus terminal voltages ($V_{gi}$), transformer tap settings ($t_k$), and the reactive power generation of the capacitor banks ($Q_{Ci}$) are the control variables, and they are self-restricted by the representation itself. The active power generation at the slack bus ($P_{gs}$), the load bus voltages ($V_{Li}$), the reactive power generations ($Q_{gi}$), and the voltage stability ($L_j$) index are state variables, which are restricted through a penalty function approach.

The installation of shunt reactive power sources involves an investment cost, as do the location and sizing of the FACTS devices. These issues are beyond the scope of this paper and are not considered in the solution of the optimal power flow problem during minimization of the different objective functions.
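The penalty treatment of the state variables mentioned above can be sketched as follows; the quadratic penalty form and the weights are illustrative assumptions, not values from the paper:

```python
def penalized_cost(fuel_cost, states, weights):
    """Augment the fuel cost (11) with quadratic penalties for state
    variables that leave their limits (slack P, load voltages, Qg, L-index).

    states  : list of (value, lower, upper) tuples for the state variables
    weights : penalty weight per state variable (illustrative values)
    """
    fitness = fuel_cost
    for (x, lo, hi), w in zip(states, weights):
        if x > hi:
            fitness += w * (x - hi) ** 2   # penalize upper-limit violation
        elif x < lo:
            fitness += w * (lo - x) ** 2   # penalize lower-limit violation
    return fitness
```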

5. Overview of Particle Swarm Optimization


Basically, PSO was developed through simulation of bird flocking in two-dimensional space [15]. The position of each bird (called an agent) is represented by a point in the X-Y coordinates, and the velocity is similarly defined. Bird flocking is assumed to optimize a certain objective function. Each agent knows its best value so far (pbest) and its current position; this information is analogous to the personal experience of the agent. Moreover, each agent knows the best value so far in the group (gbest) among the pbests of all agents; this information is analogous to the agent knowing how the other agents around it have performed. Each agent tries to modify its position using the concept of velocity.
The velocity of each agent is updated by the following equation:

$$v_i^{k+1} = w\,v_i^k + c_1\,\mathrm{rand}_1 \times (pbest_i - s_i^k) + c_2\,\mathrm{rand}_2 \times (gbest - s_i^k) \tag{25}$$

where $v_i^k$ is the velocity of agent $i$ at iteration $k$, $w$ is the weighting function, $c_1$ and $c_2$ are weighting factors, $\mathrm{rand}_1$ and $\mathrm{rand}_2$ are random numbers between 0 and 1, $s_i^k$ is the current position of agent $i$ at iteration $k$, $pbest_i$ is the pbest of agent $i$, and $gbest$ is the best value so far in the group among the pbests of all agents.

The following weighting function is usually used in (25):

$$w = w_{\max} - \frac{w_{\max} - w_{\min}}{iter_{\max}} \times iter \tag{26}$$

where $w_{\max}$ is the initial weight, $w_{\min}$ is the final weight, $iter_{\max}$ is the maximum iteration number, and $iter$ is the current iteration number. Using the previous equations, a velocity that gradually brings the agents close to pbest and gbest can be calculated. The current position (search point in the solution space) is then modified by the following equation:

$$s_i^{k+1} = s_i^k + v_i^{k+1} \tag{27}$$
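A minimal sketch of equations (25)-(27), using as defaults the parameter values later summarized in Table 1 (all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(s, v, pbest, gbest, it, it_max,
             c1=2.0, c2=2.0, w_max=0.95, w_min=0.3):
    """One PSO iteration implementing equations (25)-(27).

    s, v  : (k, d) arrays of particle positions and velocities
    pbest : (k, d) personal bests; gbest : (d,) global best
    """
    w = w_max - (w_max - w_min) / it_max * it          # inertia weight, eq. (26)
    r1 = rng.random(s.shape)
    r2 = rng.random(s.shape)
    v = w * v + c1 * r1 * (pbest - s) + c2 * r2 * (gbest - s)   # eq. (25)
    return s + v, v                                             # eq. (27)
```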

6. Overall Computational Procedure for solving the problem


The implementation steps of the proposed PSO-based algorithm are as follows:

Step 1: Input the system data for load flow analysis.
Step 2: Select a FACTS device and its location in the system.
Step 3: At generation Gen = 0, set the PSO simulation parameters, randomly initialize k individuals within their respective limits, and save them in the archive.


Step 4: For each individual in the archive, run the power flow under the selected network contingency to determine load bus voltages, angles, load bus voltage stability indices, and generator reactive power outputs, and calculate the line power flows.
Step 5: Evaluate the penalty functions.
Step 6: Evaluate the objective function values and the corresponding fitness values for each individual.
Step 7: Find the generation local best xlocal and global best xglobal and store them.
Step 8: Increase the generation counter Gen = Gen + 1.
Step 9: Apply the PSO operators to generate k new individuals.
Step 10: For each new individual in the archive, run the power flow to determine load bus voltages, angles, load bus voltage stability indices, and generator reactive power outputs, and calculate the line power flows.
Step 11: Evaluate the penalty functions.
Step 12: Evaluate the objective function values and the corresponding fitness values for each new individual.
Step 13: Apply the selection operator of PSO and update the individuals.
Step 14: Update the generation local best xlocal and global best xglobal and store them.
Step 15: If the stopping criterion has not been met, repeat Steps 4-14; else go to Step 16.
Step 16: Print the results.
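The procedure can be condensed into the following skeleton, reusing `pso_step` from the sketch above; `init_swarm`, `run_power_flow`, and `evaluate_fitness` are placeholders for the load-flow and penalized-objective evaluations of Steps 1-6:

```python
import numpy as np

def pso_opf(init_swarm, run_power_flow, evaluate_fitness, it_max=150):
    """Skeleton of the PSO-based OPF procedure (Steps 1-16)."""
    s, v = init_swarm()                                  # Steps 1-3
    fit = [evaluate_fitness(run_power_flow(x)) for x in s]  # Steps 4-6
    pbest, pbest_fit = s.copy(), list(fit)
    gbest = pbest[int(np.argmin(pbest_fit))].copy()      # Step 7
    for it in range(it_max):                             # Steps 8-15
        s, v = pso_step(s, v, pbest, gbest, it, it_max)  # Step 9
        for i, x in enumerate(s):                        # Steps 10-13
            f = evaluate_fitness(run_power_flow(x))
            if f < pbest_fit[i]:
                pbest[i], pbest_fit[i] = x.copy(), f
        gbest = pbest[int(np.argmin(pbest_fit))].copy()  # Step 14
    return gbest                                         # Step 16
```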

7. Simulation results
The proposed PSO algorithm for solving optimal power flow problems incorporating series FACTS devices for enhancement of system performance is tested on the standard IEEE 14-bus test system. The proposed algorithms are implemented using MATLAB 7.4 running on a Pentium IV, 2.66 GHz, 512 MB RAM personal computer. The PSO parameters used for the simulation are summarized in Table 1.

Table 1
Optimal parameter settings for PSO

Parameter                 PSO
Population size           20
Number of iterations      150
Cognitive constant, c1    2
Social constant, c2       2
Inertia weight, w         0.3-0.95

The network and load data for this system are taken from [16]. To test the ability of the proposed PSO algorithm to solve the optimal power flow problem with the three series FACTS devices, one objective function is considered for minimization. In order to show the effect of the power flow control capability of the FACTS devices in the proposed PSO OPF algorithm, four sub-cases are studied on the IEEE 14-bus system.
Case (a): normal power system operation (without FACTS devices installed);
Case (b): one PST installed on the line connecting buses 12 and 13, with line real and reactive power settings Pmk = 0.025125, Qmk = 0.00 and −π/4 ≤ αP ≤ π/4, for two contingency cases;
Case (c): one TCSC installed on the line connecting buses 12 and 13, with an XTCSC value of −0.015 and 0 ≤ XTCSC ≤ 60% of the line reactance;
Case (d): one SSSC installed on the line connecting buses 12 and 13, with line real and reactive power settings Pmk = 0.025125 and Qmk = 0.0145.
The first case is the normal operation of the network without any FACTS device. In the second, third, and fourth cases, the installation of just one device is considered. Each device is placed in the optimal location obtained from the literature and by trial and error.
The evolution of the objective function during optimization by the proposed method is shown in Figure 4 for each selected series FACTS device. The optimal settings of the control variables and FACTS device parameters during minimization of the objective function are given in Table 2 for each selected series FACTS device. From Table 2, it is noted that the PSO algorithm is able to enhance the system performance while maintaining all control variables and reactive power outputs within their limits. It is also evident from Table 2 that the SSSC exhibits the best performance of the three devices.
The line loadings, bus voltage profiles, bus voltage angles, and voltage stability indices with and without FACTS controllers are shown in Figures 5-8 for each FACTS device. Figures 5-8 reveal that the proposed PSO methodology incorporating FACTS devices is capable of maintaining better line loadings, load bus voltage profiles, bus voltage angles, and voltage stability indices.

Figure 4 Convergence of the cost of generation (cost in $/h versus iteration number) without FACTS and with PST, TCSC, and SSSC for the IEEE 14-bus system

Table 2
Optimal settings of control variables for the IEEE 14-bus system

Control       Limits (p.u.)     Without     With FACTS
Variables     Min      Max      FACTS       PST        TCSC       SSSC
PG1           0.0      3.324    1.9447      1.9659     2.0426     1.9743
PG2           0.0      1.400    0.3647      0.3651     0.3532     0.3689
PG3           0.0      1.000    0.2919      0.2840     0.1467     0.2820
PG4           0.0      1.000    0.0000      0.0000     0.0000     0.0000
PG5           0.0      1.000    0.0830      0.0693     0.1430     0.0549
VG1           0.95     1.10     1.0557      1.0620     1.0650     1.0913
VG2           0.95     1.10     1.0292      1.0403     1.0389     1.0658
VG3           0.95     1.10     1.0046      1.0142     1.0126     1.0422
VG4           0.95     1.10     0.9961      1.0471     1.0191     1.0418
VG5           0.95     1.10     0.9974      1.0602     1.0018     1.0403
Tap-1         0.9      1.1      1.0152      0.9469     1.0031     1.0169
Tap-2         0.9      1.1      0.9488      1.0543     0.9825     0.9640
Tap-3         0.9      1.1      1.0539      0.9442     1.0097     0.9792
QC6           0.0      0.10     0.0639      0.0092     0.0718     0.0014
QC8           0.0      0.10     0.0357      0.0225     0.0591     0.1000
QC14          0.0      0.10     0.0556      0.0412     0.0447     0.0579
Cost ($/h)    -        -        8087.200    8080.000   8061.600   8059.700
Ploss (p.u.)  -        -        0.0942      0.0943     0.0955     0.0901
Ljmax         -        -        0.0872      0.0774     0.0818     0.0749
CPU time (s)  -        -        20.0470     20.2030    22.3910    24.3800


Figure 5 Line loadings (% MVA loading versus line number) of the IEEE 14-bus system without and with FACTS devices

Figure 6 Bus voltage profiles (voltage in p.u. versus bus number) of the IEEE 14-bus system without and with FACTS devices

Figure 7 Bus voltage angles (degrees versus bus number) of the IEEE 14-bus system without and with FACTS devices


Figure 8 Voltage stability L-indices (versus load bus number, buses 6-14) of the IEEE 14-bus system without and with FACTS devices

8. Conclusions
This paper has presented an OPF model incorporating series FACTS controllers, namely PST, TCSC, and SSSC, using the PSO algorithm for enhancement of system performance. The model is able to solve power networks of any size, converges in a minimal number of iterations, and is independent of initial conditions. The IEEE 14-bus system has been used to demonstrate the proposed method over a wide range of power flow variations in the transmission system. The results showed that the proposed integrated OPF scheme with the Static Synchronous Series Compensator is the most effective of the tested devices in improving the security of the power system.

References
[1] M.Noroozian, L.Angquist, M.Ghandhari, G.Anderson, "Improving Power System Dynamics by Series-
connected FACTS Devices", IEEE Trans. on Power Delivery, Vol.12, No.4, October 1997.
[2] M.Noroozian, L.Angquist, M.Ghandhari, G.Anderson, "Use of UPFC for Optimal Power Flow Control",
IEEE Trans. on Power Delivery, Vol.12, No.4, October 1997.
[3] Roy Billinton, Mahmud Fotuhi-Firuzabad, Sherif Omar Faried, Saleh Aboreshaid, "Impact of Unified Power Flow Controllers on Power System Reliability", IEEE Trans. on Power Systems, Vol. 15, No. 1, February 2000.
[4] James A. Momoh, Jizhong Z. Zhu, Garfield D. Boswell, Stephen Hoffman, "Power System Security Enhancement by OPF with Phase Shifter", IEEE Trans. on Power Systems, Vol. 16, No. 2, May 2001.
[5] N. G. Hingorani, L. Gyugyi, Understanding FACTS: Concepts and Technology of Flexible AC Transmission Systems, IEEE Press, New York, 2000.
[6] D. Povh et al., Load Flow Control in High Voltage Power Systems Using FACTS Controllers, CIGRÉ Task Force 38.01.06, Jan. 1996.
[7] Gnanadas R., Venkatesh P. & Narayana Prasad Padhy, “Evolutionary Programming Based Optimal Power
Flow For Units With Non-Smooth Fuel Cost Functions”, Electric Power Components and Systems, Vol.33,
2005, pp. 1245-1250.
[8] Gnanadass R., Venkatesh P., Palanivelu T. G. & Manivannan K., “Evolutionary Programming Solution Of
Economic Load Dispatch With Combined Cycle Co-Generation Effect”, Institute Of Engineers Journal-EL ,
Vol. 85, September 2004, pp. 124-128.
[9] Hong-TzerYang, Pai-chuan Yang & Ching-Lein Huang, “Evolutionary programming based economic
dispatch for units with non-smooth fuel cost functions”, IEEE Transactions on Power Systems, vol. 11, No.
1, February 1996, pp. 112-118.
[10] Jayabarathi T., Jayaprakash K., Jeyakumar D. N. & Raghunathan T., “Evolutionary Programming
Techniques for Different Kinds of Economic Dispatch Problems”, Electric Power Systems Research, Vol.
73, 2005, pp. 169-176.
[11] Sinha N., Chakravarthi R. & Chattopadhyay P. K., “Improved Fast Evolutionary Program for Economic
Load Dispatch with Non-Smooth Cost Curves”, Institute Of Engineers Journal-EL, Vol. 84, September
2004, pp. 110-114.


[12] Somasundaram P., Kuppuswamy K. & Kumudini Devi R.P., "Evolutionary Programming Based Security Constrained Power Flow", Electric Power Systems Research, Vol. 72, July 2004, pp. 137-145.
[13] Somasundaram P., Kuppuswamy K. & Kumudini Devi R.P., “Economic Dispatch With Prohibited
Operating Zones Using Fast Computation Evolutionary Programming Algorithm”, Electric Power Systems
Research, vol. 70, 2004, pp. 245-252.
[14] P. Kessel, H. Glavitch, “Estimating the voltage stability of a power system,” IEEE Trans. Power Delivery,
1986, PWRD-1(3), pp. 346-354.
[15] Abido MA. “Optimal power flow using particle swarm optimization,” Electric Power Energy Syst 2002;
24(7): 563-71.
[16] IEEE 14-bus system (1996), (Online) Available at //www.ee.washington.edu

Biographies
K. Padma received the B.Tech degree in electrical and electronics engineering from SV
University, Tirupathi, India in 2005, M.E degree from Andhra University, Visakhapatnam,
India in 2010.
She is currently working as an Assistant Professor in the Department of Electrical Engineering, AU College of Engineering, Visakhapatnam, A.P., India. Her research interests include power system operation and control, power system analysis, power system optimization, soft computing applications, and FACTS.

Dr.K.Vaisakh received the B.E degree in electrical engineering from Osmania University,
Hyderabad, India in 1994, M.Tech degree from JNT University, Hyderabad, India in 1999,
and Ph.D. degree in electrical engineering from the Indian Institute of Science, Bangalore,
India in the year 2005.
Currently, he is working as professor in the department of electrical engineering, AU
College of engineering, Andhra University, Visakhapatnam, AP, India. His research
interests include optimal operation of power system, voltage stability, FACTS, power
electronic drives and power system dynamics.


Secure and Unique Biometric Template Using Post Quantum Cryptosystem


Ajay Sharma1 and Deo Brat Ojha2
1
Research Scholar, Singhania University, Jhunjhunu, Rajasthan, India
e-mail: ajaypulast@rediffmail.com
2
Professor, Department of Mathematics,
Rajkumar Goel Institute of Technology, Ghaziabad, U.P., India
e-mail: ojhdb@yahoo.co.in

Abstract
In this paper we enhance the accuracy and security of biometric templates using a fuzzy commitment scheme with a post-quantum cryptosystem as its cryptographic function. It is possible to generate many different secure biometric templates for the same system, and also unique biometric templates for multiple systems, from the same biometric trait; it is just a matter of using a different error vector. It is also easy to cancel a secure template by simply deleting the compromised template and generating a new one using a different error vector.

Keywords: Cryptography, Fuzzy Commitment Scheme, Biometric System, Template, Algorithmic Noise, Enrollment Phase
phase


1. Introduction
Cryptography is considered one of the fundamental building blocks for protecting biometric data, given the growing use of biometric recognition systems. Biometrics provides a person with a distinct characteristic that is always present. It is a technique for authenticating a person's identity from one or more behavioral or physiological features [3]. The use of biometrics (e.g., fingerprints, irises, faces) for recognizing individuals is becoming increasingly popular, and many applications are already available. Although these applications can be fundamentally different, they can still be grouped into one of two categories: verification and identification [4][5][6].
A well-known difficulty has been how to cope with the 10 to 20% of error bits within biometric data and derive an error-free template. It is fundamentally impossible to avoid noise during biometric data acquisition, because "life means change": faces age, and iris patterns are not perfectly invariant to the contraction of the pupil. More noise is introduced by changes in environmental conditions, which are again unavoidable. Finally, noise often finds its way in at the sensor, during transmission, or in data processing ("algorithmic noise"); these latter noise sources can be reduced or even removed by improved engineering. To handle this problem, fuzzy commitment schemes play an important role: a fuzzy commitment scheme is a tool for handling the noise in the template of a biometric recognition system. Juels and Wattenberg's fuzzy commitment scheme [2] was introduced to handle the differences occurring between two captures of biometric data, using error-correcting codes.
Various approaches have been proposed to protect the stored template: some are hardware based, using stand-alone biometric system-on-device designs, while others are software based, relying on feature transformation and biometric cryptosystems. In biometric cryptosystems, common encryption techniques such as AES (Advanced Encryption Standard) or RSA cannot be used because of the intra-class variation in the biometric template [4,5].
This paper describes an application of a fuzzy commitment scheme with McEliece's cipher [8,9]. The main idea is that the biometric matching problem is transformed into an error correction problem. We carefully studied the error patterns within biometric data and devised a two-layer error correction technique that combines a Hamming code and a Goppa code; the error-correcting methods remove noise in the template [7]. Along with accuracy, some enhancements address the privacy of the biometric cryptosystem: since common encryption techniques such as AES or RSA cannot be used, the auxiliary data can be masked using homomorphic encryption, which allows certain arithmetic operations in the encrypted domain [24].

2. Preliminaries
2.1 Biometric System
A generic biometric system consists of five components: sensor, feature extractor, template database, matcher, and decision module.
In general, a biometric-based recognition system operates in two phases. In the enrollment phase, the biometric template b is processed from a user U and stored (registered) in the database. The second phase is the verification phase: the system captures a new biometric sample b′ from U and compares it to the registered (reference) data via a matching function. Let µ be the biometric measure of U and τ a recognition threshold; b′ is accepted if µ(b, b′) ≤ τ, and rejected otherwise. Two kinds of errors are mainly associated with this scheme: False Reject (FR), when a matching user, i.e. a legitimate user, is rejected; and False Acceptance (FA), when a non-matching one, e.g. an impostor, is accepted. Note that, when the threshold increases, the FR rate (FRR) decreases while the FA rate (FAR) grows, and conversely [11].

2.2 Definition
A metric space is a set $C$ with a distance function $\mathrm{dist}: C \times C \to \mathbb{R}^{+} = [0, \infty)$, which obeys the usual properties (symmetry, triangle inequality, zero distance between equal points) [12].

2.3 Definition
Let $C \subseteq \{0,1\}^n$ be a code set consisting of a set of code words $c_i$ of length $n$. The distance metric between any two code words $c_i, c_j \in C$ is defined by
$$\mathrm{dist}(c_i, c_j) = \sum_{r=1}^{n} |c_{ir} - c_{jr}|, \qquad c_i, c_j \in C$$
This is known as the Hamming distance [13].

54
International Journal of Computational Intelligence and Information Security, Vol. 1 No. 7, September 2010

2.4 Definition
An error correction function $f$ for a code $C$ is defined as
$$f(c_i) = \{c_j \mid \mathrm{dist}(c_i, c_j) \text{ is minimum over } C - \{c_i\}\}$$
Here, $c_j = f(c_i)$ is called the nearest neighbor of $c_i$ [14].

2.5 Definition
The measure of nearness between two code words $c$ and $c'$ is defined by $\mathrm{nearness}(c, c') = \mathrm{dist}(c, c')/n$; it is obvious that $0 \le \mathrm{nearness}(c, c') \le 1$ [13].

2.6 Definition
The fuzzy membership function for a code word $c'$ to be equal to a given $c$ is defined as [13]
$$\mathrm{FUZZ}(c') = \begin{cases} 0 & \text{if } \mathrm{nearness}(c, c') = z \le z_0 < 1 \\ z & \text{otherwise} \end{cases}$$
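Definitions 2.3-2.6 translate directly into code; a minimal sketch, using the setup value $z_0 = 0.20$ that appears later in Section 2.7 (function names are illustrative):

```python
def hamming_dist(c1, c2):
    """Hamming distance between two equal-length bit strings (Definition 2.3)."""
    return sum(b1 != b2 for b1, b2 in zip(c1, c2))

def nearness(c1, c2):
    """Normalized distance in [0, 1] (Definition 2.5)."""
    return hamming_dist(c1, c2) / len(c1)

def fuzz(c, c_prime, z0=0.20):
    """Fuzzy membership of Definition 2.6: 0 when c' is near enough to c."""
    z = nearness(c, c_prime)
    return 0.0 if z <= z0 else z

# Codewords differing in 1 of 8 bits have nearness 0.125 <= z0 = 0.20,
# so fuzz(...) returns 0 and c' is treated as equal to c.
assert fuzz([0,1,1,0,1,0,0,1], [0,1,1,0,1,0,0,0]) == 0.0
```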

2.7 Fuzzy Commitment Scheme with the McEliece scheme [9]

Protocols are essentially a set of rules associated with a process, or a scheme defining the process. Commitment protocols were first introduced by Blum [1]. In conventional commitment schemes, an opening key is required to enable the sender to prove the commitment. However, there are many instances where the transmission involves noise or minor errors arising purely from factors over which neither the sender nor the receiver has any control, which creates uncertainties. The fuzzy commitment scheme was first introduced by Juels and Wattenberg [2]. The new property, "fuzziness", allows the open phase to accept a corrupted opening key that is close to the original one in an appropriate metric or distance. Fuzzy commitment schemes based on hash functions [2] share two shortcomings:
1. The hash functions used should be strongly collision-free. However, this property can only be checked empirically, and it turns out that some schemes are inadvertently based on only weakly collision-free hash functions.
2. Hash functions alone cannot offer non-repudiability.
Here we use the speed and randomness of McEliece to enhance the fuzzy commitment scheme, employing a code-based cryptosystem built on Goppa codes.
The scheme consists of three phases: a setup phase, a commitment phase, and an opening/verifying phase.
Setup phase: At time $t_0$, it is agreed between all parties that:
CK ≅ XOR;
f ≅ nearest neighbour in {h(m)};
$Z_0$ = 0.20;
$Id_A$ = identifier.
It is assumed that the McEliece public key ($P_A$) is duly certified and public. It can be described by its $k \times n$ generator matrix G. With the aid of a regular $k \times k$ matrix S and an $n \times n$ permutation matrix P, a new generator matrix G′ is constructed that hides the structure of G:
G′ = S · G · P
The public key consists of G′, and the matrices S and P together with g(x) are the private key ($S_A$).
The root cause for using $Id_A$, as stated in the introduction, is that this cryptosystem cannot by itself be used for authentication, because the encryption is not one-to-one and the overall algorithm is truly asymmetric.
Commitment phase: At time $t_1$:
1. Alice chooses a message m, in the form of a bit string, to which she wishes to commit.
2. Alice generates a secret pseudo-random q-bit vector r.
3. Alice has an identifier $Id_A$, a p-bit random vector.
4. Alice concatenates her identifier $Id_A$ with the secret pseudo-random q-bit vector r, which gives the vector R = $Id_A$‖r.


Here $h(m) = m P_A$, where $h(m) \subseteq GF(2^n)$.
Encryption: $C = m P_A \oplus e$, where $e = g(R)$; here g is an invertible function which maps R into an n-bit error vector of weight α.
Following the algorithm, Alice commits to the string c, i.e. her commitment is c = commitalg(XOR, h(m), C). Alice then sends c to Bob, who receives it as $t_f(c)$, where $t_f$ is the transmission function, which includes noise.
Open phase: At time $t_2$, Alice sends the procedure for revealing the hidden commitment; that is, she discloses h(m) and C to Bob to open the commitment.
Bob constructs c′ using commitalg and the opening key, i.e. c′ = commitalg(XOR, $t_f$(h(m)), $t_f$(C)), and checks whether the result is the same as the received commitment $t_f(c)$.
Fuzzy decision making:
If nearness($t_f(c)$, f(c′)) ≤ $Z_0$, then Alice is bound to act as in m; else she is free not to act as in m.
After acceptance, Bob decrypts the message: m is first recovered using the decryption algorithm of the original scheme. In the meantime, the value g(R) is also obtained, and the receiver computes $R = g^{-1}(g(R))$, where $g^{-1}$ is the inverse of g. Finally, Bob calculates $f(c')(SGP)^{-1}$ and obtains the message. Bob gets $Id_A$ from R, from which he verifies the authenticity of the sender.
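The commit/open mechanics can be illustrated with the hash-based fuzzy commitment of [2], which the paper's construction replaces with McEliece encryption; the `decode` argument below stands in for the Goppa-code error-correction function f, and all names are illustrative:

```python
import hashlib

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def commit(codeword, witness):
    """Bind a random ECC codeword to a noisy witness: store only the hash
    of the codeword and the XOR offset (neither reveals the witness alone)."""
    return hashlib.sha256(codeword).digest(), xor_bytes(witness, codeword)

def open_commitment(h, delta, witness_noisy, decode):
    """Open succeeds iff the corrupted witness still decodes to the
    committed codeword; `decode` plays the role of the error-correction
    function f (a Goppa-code decoder in the paper's McEliece variant)."""
    candidate = decode(xor_bytes(witness_noisy, delta))
    return hashlib.sha256(candidate).digest() == h

# Usage sketch: h, delta = commit(codeword, biometric_bits)
#               ok = open_commitment(h, delta, noisy_bits, goppa_decode)
```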

3. Related work
Our work is inspired by a number of authors who combine well-known techniques from the areas of error-correcting codes and cryptography to achieve an improved type of cryptographic primitive [1,2,8,13,14]. Furthermore, numerous works suggest combinations of biometrics and cryptography. A more detailed account of related research in this field can be found in [15, 16, 17].

4. Proposed System Architecture

In general, the identity theft problem is drastically exacerbated for biometric systems. The proposed biometric system architecture enhances security and accuracy with respect to traditional systems through the combined use of a code-based cryptosystem and error-correcting codes.
In the enrollment stage (Figure 1(a)) of a typical biometric recognition system, after the biometric acquisition module, some processing is applied in order to obtain the biometric template b, which is then stored in a database. Here H is the Hamming space of length N, i.e. $H = \{0,1\}^N = F_2^N$, where $F_2 = \{0,1\}$, and g′ is an invertible function which maps R into an n-bit error vector of weight α. However, the biometric data itself is never stored in the database, to prevent it from being stolen. Instead, after the biometric has been acquired and the biometric template has been generated, a cryptographic function is applied to it [9]. The result of this operation is then stored in the database; this will be referred to in the rest of the paper as the secure biometric template. It should be pointed out that it is impossible to recover any biometric data from this secure template, as the cryptographic function is not invertible.


Figure 1(a) Enrollment phase: the user biometric (U) passes through acquisition, pre-processing, and feature extraction to produce the template b ∈ H; using the public key (P), the template is encoded as g(b) = b × P, the error vector e = g′(R) with R = Id‖r (identifier Id and random number r) is added, and the secure template EP(z) = b × P ⊕ e is stored in the database.


During the verification stage (Figure 1(b)), the probe biometric is acquired and the corresponding template b′ is generated. The problem here is that b itself is not stored in the database, only an encrypted version of it, so the original biometric template b can be recovered from the database only if the user is who he claims to be. Therefore, the output of the feature extractor, b′, also needs to be encrypted; only then is the result compared to the encrypted template stored in the database. If EP(z′) and EP(z) are equal, the user is validated to be who he claims to be. With this system, the three requirements considered in Section 5 are satisfied. In particular, it is possible to generate many different secure biometric templates from the same biometric trait; it is just a matter of using a different error vector (e). It is also easy to cancel a secure template by simply deleting the compromised template and generating a new one using a different error vector (e). Finally, since the biometric data is never stored in a database, this information is guaranteed to remain private.


Figure 1(b) Verification phase: the probe biometric (U) produces the template b′ ∈ H, which is encoded with the public key (P) as g(b′) = b′ × P; the same error vector e = g′(R), R = Id‖r, is added to form EP(z′), and fuzzy decision making on f(EP(z′))(SGP)⁻¹ against the stored EP(z) accepts or rejects the user.
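A minimal sketch of the template protection of Figure 1, assuming NumPy, 0/1 integer vectors, and illustrative names (`protect`, `verify`); the error-correction step applied to legitimate probes before comparison is omitted here:

```python
import numpy as np

def protect(template_bits, P, e):
    """Enrollment: secure template EP(z) = b x P XOR e, over GF(2).

    template_bits : length-k 0/1 integer vector b
    P : k x n binary generator-type public matrix
    e : n-bit error vector e = g'(Id || r)
    """
    return (template_bits @ P % 2) ^ e

def verify(stored, probe_bits, P, e):
    """Verification: re-encode the probe b' with the same P and e and
    compare against the stored secure template EP(z)."""
    return np.array_equal(protect(probe_bits, P, e), stored)
```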

4.1 Acquisition
The acquisition module, although absolutely necessary in a real biometric verification system, has not been implemented here; instead, it is replaced by a large database of iris images, the one developed by the Chinese Academy of Sciences' Institute of Automation (CASIA) [20], together with code from [18]. This database consists of 22,051 iris images from more than 700 subjects. All iris images are 8-bit gray-level JPEG files, collected under near-infrared illumination.

4.2 Pre-processing
The step after acquisition is to extract the iris from the input eye images. The iris area is considered a circular crown limited by two circles. The iris inner (pupillary) and outer (scleric) circles are detected by applying the circular Hough transform [21], relying on edge detection information previously computed using a modified Canny edge detection algorithm [22]. The eyelids, which often occlude part of the iris, are removed using a linear Hough transform [23], and the presence of eyelashes is identified using a simple thresholding technique.


4.3 Feature Extraction

Once the iris texture is available, features are extracted from it to generate a more compact representation, also called the biometric template; the reader is referred to [19] for a detailed account of how iris recognition works. To extract this representation, the two-dimensional normalized iris pattern is convolved with a Log-Gabor wavelet [3]. The resulting phase information is quantized, using two bits per pixel. The resulting iris template is composed of 9600 bits, stored as a 20×480 binary matrix.
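A minimal sketch of this phase-quantization step, assuming a 1-D log-Gabor filter applied row-wise to the normalized pattern; the center frequency `f0` and bandwidth ratio are illustrative, not the paper's values:

```python
import numpy as np

def log_gabor_1d(n, f0=0.05, sigma_ratio=0.5):
    """Frequency response of a 1-D log-Gabor filter (positive frequencies only)."""
    g = np.zeros(n)
    f = np.arange(1, n // 2 + 1) / n            # skip DC to avoid log(0)
    g[1:n // 2 + 1] = np.exp(-(np.log(f / f0) ** 2)
                             / (2 * np.log(sigma_ratio) ** 2))
    return g

def iris_code(pattern):
    """Quantize the complex filter response: two bits per pixel,
    the signs of the real and imaginary parts."""
    bits = []
    for row in pattern:                          # each angular row of the pattern
        response = np.fft.ifft(np.fft.fft(row) * log_gabor_1d(len(row)))
        bits.append(np.real(response) > 0)       # first phase bit
        bits.append(np.imag(response) > 0)       # second phase bit
    return np.array(bits, dtype=np.uint8)
```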

4.4 Privacy Protection and Error Correction

This is the main module of the scheme. We use the McEliece cryptosystem, which adds random errors at encryption time; this randomness makes the original template more secure than poorly chosen passwords and other cryptosystems.
In the enrollment phase, the inputs are the biometric template (b), the error vector (e), which is chosen randomly, and the public key (P), whose generator matrix defines an error-correcting code. Here g′ is an invertible function which maps R into an n-bit error vector of weight α. The output of this phase is the encrypted template, which is stored on the system or on a data card (e.g., a smart card). It is not feasible to recover the template from this data without knowledge of the key and the error vector.
In the verification phase, a similar procedure is applied to a newly acquired template b′ with the same error vector and key, and error correction coding is used to correct the biometric templates. In this stage, the probe template of a legitimate user is (error) corrected in order to recover the original template obtained during enrollment; this should be possible because both templates are fairly similar. However, for an illegitimate user, whose probe template is fairly different from the one originally enrolled by the legitimate user, it should not be possible to recover the original from the probe template. We then calculate f(c′)(SGP)⁻¹ and finally obtain the template. Here we get Id from R to identify the machine; since Id is unique for each machine, the same biometric information cannot be used to link templates corresponding to the same individual on different machines.
Therefore, the selected error correcting code should be strong enough to correct the templates of legitimate users, but not so strong as to also correct the templates of illegitimate users. With µ the biometric measure of U and τ a recognition threshold, b′ is accepted if µ(b, b′) ≤ τ, and rejected otherwise.

5. Security Analysis
The accuracy of any biometric system depends on its ability to separate genuine users from impostors. Here we describe a possible attack on the scheme and identify ways of preventing it. It is possible for an attacker to imitate a signer by obtaining a copy of their biometric data; see, for example, [10] for methods of duplicating fingerprints. After obtaining a copy of the signer's biometric data, the attacker can sign a forged message that will appear genuine on verification by the signer. To prevent this attack, genuine messages can be signed in the presence of a trusted witness.
The following security issues for stored templates are considered here:
(1) The stored template should not reveal any data, and no close replica should be obtainable from the stored data.
(2) Multiple systems using the same biometric information should not be able to link templates corresponding to the same individual.
(3) If the stored data is compromised, it should be possible to revoke it and reissue a new one.
These issues are addressed as follows.
Explanation of issue 1:
We use a Goppa code in McEliece: a user's biometric template is encrypted, and at encryption time an error vector of fixed weight α is added. To reveal any template, an attacker would have to solve the decoding problem for an error vector of unknown weight α, which is very hard. Coding-theory-based cryptosystems are secure because decoding is hard without knowledge of the secret.
Explanation of issue 2:
Consider the error vector e = g′(R), where g′ is an invertible function which maps R into an n-bit error vector of weight α, R = Id‖r, Id is the machine identification, and r is a secret pseudo-random vector. Since each system has a unique Id, the same biometric information cannot be used to link templates corresponding to the same individual.


Explanation of issue 3:
It is possible to generate many different secure biometric templates from the same biometric trait; it is just a matter of using a different error vector (e). It is also easy to cancel a secure template by simply deleting the compromised template and generating a new one using a different error vector (e). The randomness of the error vector is also required to prevent cross-matching of subjects across databases.
In this scheme, we use the McEliece cryptosystem, which adds random errors at encryption time; this makes the original template more secure than poorly chosen passwords and other cryptosystems. In addition to this randomness, the McEliece cryptosystem is probabilistic, which gives the template greater resistance to brute force attacks.
The scheme also provides non-repudiation: a legitimate user who accesses the facilities offered by an application cannot later claim that an intruder circumvented the system. A bank clerk, for example, who modifies the financial records of a customer cannot deny responsibility by claiming that an intruder could have stolen her biometric data. Our proposed scheme thus enhances biometric security and accuracy over the previously available literature.

6. Conclusion
Using a public key cryptosystem to construct a commitment is a way of achieving non-repudiability and authentication, properties which cannot be offered by hash functions alone. By using McEliece in the fuzzy commitment scheme, the error vector e is used to enhance the security of the function hiding, particularly against matrix factorization attacks. The main enhancement in this approach is the randomness of the error vector: no information can be obtained about the positions in which the errors occur. Thus the information rate increases and the information leakage rate decreases.
We focus specifically on attacks designed to elicit information about the original biometric data of an individual from the stored template. As soon as identical templates are stored in multiple databases or datasets, it becomes possible to perform cross-matching between them. We discuss the importance of deriving the error vector from the machine identification and a random number to ensure the uniqueness of biometric templates across different machines for the same individual: since each system has a unique Id, the same biometric information cannot be used to link templates corresponding to the same individual on different machines. The randomness of the error vector is also required to prevent cross-matching of subjects across databases.

References

[1] M. Blum, “Coin flipping by telephone: a protocol for solving impossible problems”, Proc. IEEE Computer
Conference, pp. 133-137, 1982.
[2]. A.Juels and M.Wattenberg, “ A fuzzy commitment scheme”, In Proceedings of the 6th ACM Conference on
Computer and Communication Security, pp.28-36, November 1999.
[3] Sunil V. K. Gaddam, Manohar Lal, "Efficient Cancellable Biometric Key Generation Scheme for Cryptography", International Journal of Network Security, Vol. 11, No. 2, pp. 61-69, Sept. 2010.
[4] A. K. Jain, S. Pankanti, S. Prabhakar, L. Hong, A. Ross, “Biometrics: A Grand Challenge”, Proc. of the
International Conference on Pattern Recognition, Vol. 2, pp. 935–942, August 2004.
[5] J. Wayman, A. Jain, D. Maltoni, D. Maio, Biometric Systems:Technology, Design and Performance Evaluation,
Springer-Verlag, 2005.
[6] D. Maltoni, D. Maio, A. K. Jain, S. Prabhakar, Handbook of Fingerprint Recognition, Springer, 2003.
[7] Daugman, J., "How Iris Recognition Works", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 14, No. 1, pp. 21-30, 2004.
[8] Deo Brat Ojha, Ajay Sharma, "A fuzzy commitment scheme with McEliece's cipher", Surveys in Mathematics and its Applications, Vol. 5 (2010), pp. 73-83.
[9] Ajay Sharma, Deo Brat Ojha,“Application of Coding Theory in Fuzzy Commitment Scheme”, Middle-East
Journal of Scientific Research 5 (6): 445-448, 2010.
[10] T. van der Putte and J. Keuning. Biometrical Fingerprint Recognition: Don’t Get Your Fingers Burned.
Proceedings of the Fourth Working Conference on Smart Card Research and Advanced Applications, 2000.
[11] Andrew Burnett, Adam Duffy, Tom Dowling, "A Biometric Identity Based Signature Scheme", eprint.iacr.org/2004/176.pdf
[12] V. Pless, Introduction to the Theory of Error-Correcting Codes, Wiley, New York, 1982.


[13] A. A. Al-saggaf, H. S. Acharya, "A Fuzzy Commitment Scheme", IEEE International Conference on Advances in Computer Vision and Information Technology, 28-30 November 2007, India.
[14] F. J. MacWilliams and N. J. A. Sloane, Theory of Error-Correcting Codes. North Holland, 1991.
[15] F. Hao, R. Anderson, and J. Daugman, “Combining crypto with biometrics effectively,” IEEE Transactions on
Computers, vol. 55, no. 9, pp. 1081–1088, 2006.
[16] A. Cavoukian and A. Stoianov, “Biometric encryption: A positive-sum technology that achieves strong
authentication, security and privacy,” Information and privacy commissioner of Ontario, White Paper, March
2007.
[17] E. Krichen, B. Dorizzi, Z. Sun, S. Garcia-Salicetti, and T. Tan, Guide to Biometric Reference Systems and
Performance Evaluation. Springer-Verlag, 2008, ch. Iris Recognition, pp. 25–50.
[18] L. Masek, P. Kovesi, MATLAB Source Code for a Biometric Identification System Based on Iris Patterns,
School of Computer Science and Software Engineering, University of Western Australia, Australia, 2003.
[19] J. G. Daugman, “How Iris Recognition Works”, IEEE Transactions on Circuits and Systems for Video
Technology, Vol. 14, No. 1, pp. 21–30, January 2004.
[20] CASIA website, http://www.cbsr.ia.ac.cn/IrisDatabase.htm
[21] T. Kawaguchi, D. Hidaka, M. Rizon, “Detection of eyes from human faces by Hough transform and separability
filter”, Proc.of the IEEE International Conference on Image Processing, Vol. 1, pp. 49-52, Vancouver, Canada,
2000.
[22] J. Canny, “A Computational Approach to Edge Detection”, IEEE Transactions on Pattern Analysis and
Machine Intelligence, Vol. 8, pp. 679-714, 1986.
[23] R. Duda, P. Hart, “Use of Hough Transformation to Detect Lines and Curves in Pictures: Graphics and Image
Processing”, Communications of the ACM, Vol. 15, pp. 11-15, 1972.
[24] J. Bringer and H. Chabanne, "An Authentication Protocol with Encrypted Biometric Data", Proc. Int. Conf. on Cryptology (Africacrypt), pp. 109-124, 2008.


Statcom In Eight Bus System For Power Quality Enhancement


1. G.Sundar 2. S.RamaReddy
Research Scholar, Bharath University, Chennai
Professor, Jerusalem College of Engg., Chennai
sund_geee@yahoo.co.in , srr_victory@yahoo.com

Abstract
This paper presents a detailed analysis of a STATCOM in an eight-bus system, assessing the harmonic content and enhancing voltage regulation when a heavy load is connected. Thanks to its fast response, the STATCOM is intended to replace the widely used static VAr compensator (SVC). The STATCOM displays a low harmonic rate and injects reactive power into the load. Simulation results are presented.

Key words: STATCOM, voltage regulation, reactive power compensation


1. Introduction

The rapid growth in electrical energy use, combined with the demand for low-cost energy, has gradually led to the development of generation sites remote from load centers. The generation of bulk power at remote locations necessitates the use of transmission lines to connect generation sites to load centers. With long-distance AC power transmission and load growth, active control of reactive power is indispensable to stabilize the power system and to maintain the supply voltage. The static synchronous compensator (STATCOM), using voltage source inverters, has been accepted as a competitive alternative to the conventional static VAr compensator (SVC) using thyristor-controlled reactors. A STATCOM functions as a synchronous voltage source. It can provide reactive power compensation without dependence on the AC system voltage. By controlling the reactive power, a STATCOM can stabilize the power system, increase the maximum active power flow, and regulate the line voltages. Its faster response makes the STATCOM suitable for continuous power flow control and power system stability improvement. The interaction between the AC system voltage and the inverter-composed voltage provides the control of the STATCOM output. When these two voltages are synchronized and have the same amplitude, the active and reactive power outputs are zero.
As with all static FACTS devices, the STATCOM has the potential to be exceptionally reliable, with the added capabilities to: sustain reactive current at low voltage (constant current rather than constant impedance); reduce land use and increase relocatability (a footprint about 40% of an SVC's); and be developed for voltage and frequency support (by replacing capacitors with batteries as energy storage). Although currently applied to regulate transmission voltage to allow greater power flow in a voltage-limited transmission network, in the same manner as an SVC, the STATCOM has further potential. By giving an inherently faster response and greater output to a system with a depressed voltage, the STATCOM offers improved quality of supply. The major applications are voltage stability enhancement, damping of torsional oscillations, power system voltage control, and power system stability improvement. These applications can be implemented with a suitable control (voltage magnitude and phase angle control).
This paper is aimed at the development of a multilevel STATCOM for power quality enhancement. The results reveal that the three-level STATCOM offers higher efficiency and reduced voltage and current harmonic levels. The concept of the proposed multilevel STATCOM is supported by MATLAB/SIMULINK results.

2. Basic Principle of STATCOM

The Static Synchronous Compensator (STATCOM) is shunt-connected reactive compensation equipment, capable of generating and/or absorbing reactive power, whose output can be varied so as to maintain control of specific parameters of the electric power system. A single-line diagram of the STATCOM is shown in Fig. 1. The STATCOM basically consists of a step-down transformer with a leakage reactance, a three-phase GTO/IGBT voltage source inverter (VSI), and a DC capacitor. The AC voltage difference across the leakage reactance produces the reactive power exchange between the STATCOM and the power system, such that the AC voltage at the bus bar can be regulated to improve the voltage profile of the power system, which is the primary duty of the STATCOM.
The principle of STATCOM operation is as follows. The VSI generates a controllable AC voltage source behind the leakage reactance. This voltage is compared with the AC bus voltage: when the AC bus voltage magnitude is above that of the VSI voltage, the AC system sees the STATCOM as an inductance connected to its terminals; conversely, if the VSI voltage magnitude is above that of the AC bus voltage, the AC system sees the STATCOM as a capacitance connected to its terminals. If the voltage magnitudes are equal, the reactive power exchange is zero. If the STATCOM has a DC source or energy storage device on its DC side, it can supply real power to the power system. This can be achieved by adjusting the phase angle of the STATCOM terminals relative to the phase angle of the AC power system: when the phase angle of the AC power system leads the VSI phase angle, the STATCOM absorbs real power from the AC system; when the phase angle of the AC power system lags the VSI phase angle, the STATCOM supplies real power to the AC system.
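The behavior described above follows the standard power-transfer relations for two voltage sources coupled by a reactance X; the sketch below (illustrative per-unit values, lossless per-phase model) shows the reactive exchange changing sign with the relative voltage magnitudes:

```python
import math

def statcom_exchange(V_ac, V_inv, delta_deg, X):
    """P and Q seen at the AC bus for an inverter voltage V_inv behind a
    coupling reactance X (lossless model, assumed sign convention).

    P > 0: the STATCOM absorbs real power.
    Q > 0: inductive behavior (V_ac above V_inv);
    Q < 0: capacitive behavior (V_inv above V_ac).
    """
    d = math.radians(delta_deg)
    P = V_ac * V_inv * math.sin(d) / X
    Q = (V_ac**2 - V_ac * V_inv * math.cos(d)) / X
    return P, Q

# Equal, synchronized voltages: zero active and reactive exchange.
print(statcom_exchange(1.0, 1.0, 0.0, 0.1))   # (0.0, 0.0)
# Inverter voltage above the bus voltage: capacitive exchange (Q < 0).
print(statcom_exchange(1.0, 1.05, 0.0, 0.1))  # (0.0, -0.5)
```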


Fig.1. Single line diagram

3. Multilevel STATCOM

The multilevel STATCOM configuration consists of a voltage source inverter, DC-side capacitors (C) with voltage Vdc, and a coupling reactor or transformer. The AC voltage difference across the coupling reactor produces the reactive power exchange between the STATCOM and the power system at the point of common coupling (PCC). If the output voltage of the STATCOM (Vc) is greater than the system voltage (Vl), reactive power is supplied to the power system, while reactive power flows into the STATCOM if Vc is less than Vl. To realize this bidirectional flow of reactive power, the STATCOM output voltage must be varied according to the reactive power compensation required, and this can be accomplished in two ways: i) by changing the switching angles while maintaining the DC capacitor voltage at a constant level (inverter type I control), or ii) by keeping the switching angles fixed and varying the DC capacitor voltages (inverter type II control). The variation of the DC capacitor voltages is achieved simply by varying the active power transfer between the STATCOM and the power system, by adjusting the phase angle between Vc and Vl. Each of these control schemes has its own merits and demerits. In general, inverter type II control is preferred where very fast voltage control is not required, such as in power system applications, because the injected THD can be minimized in this case. In this work, the inverter type II control scheme has been applied.

4. Harmonic in voltage source inverter (VSI)


The DC capacitor and series inductor can cause resonance at low-order harmonics, which can be present under system contingency conditions or due to harmonic sources in the vicinity (nonlinear loads). An unbalance due to faults results in a negative sequence (fundamental frequency) voltage, which in turn results in second harmonic components on the DC side. A second harmonic voltage on the DC capacitor results in both a negative sequence (fundamental) component and a positive sequence third harmonic component. Depending on the value of the DC capacitor, it is possible that the negative sequence (fundamental) current and the positive sequence third harmonic current on the AC side are magnified. The use of a multilevel inverter eliminates the need for harmonic filters in the case of a STATCOM, and the voltage and power ratings are expected to increase. The use of STATCOMs in distribution systems has become attractive, not only for voltage regulation, but also for eliminating harmonics and improving power quality.

5. Simulation Results

A complete simulation model of the eight-bus system with a STATCOM is presented in this paper. The eight-bus system model without the STATCOM is shown in Fig. 5a. Each line is represented by a series impedance model. An additional load is added in parallel with load 1 by closing a breaker; at 0.2 s, this additional load is connected. The voltage, P, and Q across load 1 are shown in Fig. 5b.


Fig. 5a Model of 8-bus system without STATCOM



Fig.5b Voltage, Real &Reactive Power across load-1


Fig.5c Model of 8-bus system with STATCOM

Fig.5d Model of STATCOM


Fig.5e Voltage, Real &Reactive Power across load-1


At t = 0.2 s, the STATCOM is switched on and connected to the eight-bus system between buses 3 and 6 by closing the circuit breaker. The eight-bus system with the STATCOM is shown in Fig. 5c, and the STATCOM model is shown in Fig. 5d. At this time, t = 0.2 s, the additional load is connected to the buses, so more reactive power compensation is required. The voltage phase displacement of the STATCOM increases, and therefore the DC capacitor voltage increases. The STATCOM injects Q into load 1. The regulated voltage, P, and Q across load 1 are shown in Fig. 5e.

6. Conclusion

A STATCOM in an eight-bus system has been simulated with MATLAB SIMULINK, and the simulation results of the eight-bus system with and without the STATCOM are presented. This work has assessed the harmonic content and voltage regulation in the eight-bus system when an additional load is connected. The simulation is based on an assumed load, and the results are in line with the predictions.

7. References
[1] Ashwin and Thyagarajan (2006), "Modeling and simulation of VSC based STATCOM", IICPE06, pp. 303-307.
[2] Jianye Cuen, Shan Song, Zanji Wang, (2006) "Analysis and implementation of thyristor based STATCOM", International Conference on Power System Technology.
[3] J. Kumar, B. Das and P. Agarwal, (2008) "Selective harmonic elimination technique for a multilevel inverter", Fifteenth National Power Systems Conference (NPSC), IIT Bombay, pp. 608-13.
[4] J.Kumar, B.Das and P.Agarwal, (2010) “Optimized Switching scheme of a Cascade Multilevel Inverter”,
Electric Power Components and systems, Vol.38, No.4, pp.445-64.
[5] KEPRI Electric Power System Technology Group (2003) “Development of FACTS Operation Technology
(phase II: Pilot plant Development and construction), KEPRI final report.
[6] MATLAB Version 7.3
[7] N.G. Hingorani and L. Gyugyi, (2000) Understanding FACTS, concepts and technology of Flexible AC
Transmission systems, piscatway, NJ:IEEE press.
[8] R. Mienski, R. Pawelek and I. Wasiak, (2004) "Shunt Compensation for Power Quality Improvement Using a STATCOM Controller: Modeling and Simulation", IEEE Proc., Vol. 51, No. 2.


G.Sundar obtained his B.E degree from Madras University, Chennai, in 2001 and his M.E degree from SCSVMV University, Kanchipuram, in 2005. He is presently a research scholar at Bharath University, Chennai, working in the area of power quality improvement using STATCOMs.

S. Ramareddy is a Professor in the Electrical Department, Jerusalem College of Engineering, Chennai. He obtained his D.E.E from S.M.V.M Polytechnic, Tanuku, A.P., his A.M.I.E in Electrical Engineering from the Institution of Engineers (India), and his M.E in Power Systems from Anna University. He received his Ph.D degree in the area of resonant converters from the College of Engineering, Anna University, Chennai. He has published over 20 technical papers in national and international conference proceedings and journals. He secured the A.M.I.E Institution Gold Medal for obtaining higher marks and the AIMO best project award. He has worked at Tata Consulting Engineers, Bangalore, and Anna University, Chennai. His research interest is in the area of resonant converters and solid-state drives. He is a life member of the Institution of Engineers (India), the Indian Society for India and the Society of Power Engineers, and a fellow of the Institution of Electronics and Telecommunication Engineers (India). He has published books on power electronics and solid-state circuits.


Improved Modification of Single Stage AC-AC Converter for Induction Heating Application
S.Arumugam1, S.Ramareddy2
1Research Scholar,
Bharath University, Chennai, India
s_arumugam@rediffmail.com
2Professor,
Jerusalem College of Engg, Chennai, India.
srr_victory@yahoo.com

Abstract
This paper presents the simulation of a single stage induction heating system with a series load resonant circuit. Low frequency AC is converted into high frequency AC using a newly developed ZVS-PWM high frequency inverter, and this high frequency output is used for induction heating. The single stage AC-AC converter system is modeled and simulated using MATLAB Simulink, and the simulation results of the ZVS-PWM high frequency system are presented. The effectiveness of this UFAC-to-HFAC direct power frequency converter using IGBTs for consumer high-frequency IH appliances is evaluated and demonstrated on the basis of the simulation results.

Keywords: High frequency series load resonant inverter, Lossless capacitive snubbers, Asymmetrical PWM, ZVS, UFAC-HFAC direct inverter, Consumer IH cooking appliances.


1. Introduction
In recent years, new application fields of high-frequency induction heating (IH) power technology for consumer and industrial use have been developed across electric power utilization systems as an energy-saving measure. Examples of such IH appliances are the IH cooking heater, IH rice cooker, IH hot water producer, IH steamer, and IH superheated steamer for cleaning, disinfecting, drying and cooking. These new IH applications, alongside the microwave oven for food processing, have been expanding dramatically with the development of state-of-the-art high-frequency power electronics. However, the application-specific high frequency resonant inverters used in these appliances require an additional front-end power stage, whose power losses are significant, as are its size and cost. To solve these practical problems, new high frequency resonant inverter circuits and system topologies are needed that employ soft-switching commutation such as ZVS, ZCS and ZVZCS. In a two-power-stage series load resonant high frequency inverter with a new passive PFC rectifier, the switching and conduction losses of the power semiconductor devices can be reduced, achieving high efficiency under high frequency switching operation. Such topologies offer high performance and miniaturization and allow the power semiconductor devices to be used close to their rated limits: reducing switching surges, switching losses and conduction losses avoids larger cooling devices and heat-release systems and avoids derating of the devices. Furthermore, conventional high frequency inverters operated under hard-switching PWM suffer increased EMI/RFI noise levels because of high frequency leakage currents.
Generally, high frequency IH power appliances based on high frequency power electronics comprise a PFC rectifier stage, i.e. a diode bridge rectifier acting as a passive PFC rectifier with discontinuous (DCM) inductor current, and a high frequency resonant PWM inverter stage supplying HFAC power to the various HF-IH load structures. These two-power-stage IH products using series load resonant HF inverters achieve high power factor and low utility AC current harmonics on the UFAC side. However, such IH inverter products need many power semiconductor switching devices, passive resonant circuit components and a bulky aluminium electrolytic smoothing capacitor stack. Against this background, this paper deals with a one-stage soft-switching PWM high frequency series load resonant direct inverter for IH applications. The unique feature of this high frequency resonant inverter topology is a passive PFC bridge rectifier in which only one diode conducts at a time, operating at ZCS.

Fig. 1. Block diagram: input AC source, rectifier, DC link, ZVS-PWM inverter and IH load, with a control circuit driving the inverter.
In this paper, the characteristics of the high frequency direct inverter are evaluated on the basis of computer simulation results. In addition, its direct power regulation characteristics and UFAC-side power quality characteristics in periodic steady state are illustrated, including high frequency AC power regulation under soft-switching and hard-switching conditions.
This high-frequency inverter is composed of a passive PFC converter operating as a one-diode-conducting bridge circuit and an asymmetrical ZVS-PWM high frequency resonant inverter, without the bulky electrolytic capacitor stage for smoothing the boosted DC voltage. The proposed high-frequency resonant inverter has only one conducting diode at a time in the diode bridge rectifier with boost inductor. The operating principle of the proposed series load resonant high frequency inverter is described using switching-mode equivalent circuits together with the simulated operating voltage and current waveforms. The circuit parameters of the proposed high frequency inverter are designed, through simulation analysis, to achieve single-diode conduction in the diode bridge.


2. Soft Switching PWM High Frequency Direct Converter


2.1. Circuit description

Figure 1 shows the general block diagram of the single stage AC-AC converter for IH applications. Figure 2 shows the single stage ZVS-PWM high frequency power converter with passive PFC function. The proposed single stage converter has two circuit parts: the passive PFC converter part (Fig. 2(b)) and the high frequency inverter part (Fig. 2(c)). The unique feature of the proposed high frequency direct inverter in Fig. 2(a) is the direct power frequency conversion from UFAC to HFAC, with only one of the diagonal diodes conducting alternately during each switching period, providing both passive PFC and voltage boost functions.

The high frequency direct inverter, which requires no symmetric bidirectional switches, consists of a low pass filter formed by La, Lb and the capacitors Ca1 and Ca2 with their midpoint, a diode bridge rectifier, a boost capacitor Cb, a boost inductor between the neutral point of Ca1/Ca2 and the midpoint of the switching bridge leg, power semiconductor switches Q1 and Q2, a series resonant tuning capacitor Cr and the IH load (Ro, Lo).
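Since the paper does not list component values, the following is a hedged Python sketch with assumed values for Ro, Lo and Cr, showing how the series resonant frequency and loaded quality factor of the IH tank would be computed:

    import math

    # Assumed component values for illustration only (the paper gives none):
    Ro = 3.0      # equivalent series resistance of the IH load, ohms
    Lo = 60e-6    # equivalent series inductance of work coil plus pan, henries
    Cr = 0.30e-6  # series resonant tuning capacitor, farads

    fr = 1.0 / (2 * math.pi * math.sqrt(Lo * Cr))  # series resonant frequency
    Zo = math.sqrt(Lo / Cr)                        # characteristic impedance
    Q = Zo / Ro                                    # loaded quality factor

    print(f"fr ~ {fr/1e3:.1f} kHz, Zo ~ {Zo:.1f} ohm, Q ~ {Q:.1f}")
    # Operating the half bridge slightly above fr keeps the tank inductive,
    # which is what permits the ZVS commutation described in Section 2.2.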

2.2. Principle of Operation

Figure 3 shows the asymmetrical PWM gating signal trains for the positive half wave of the UFAC source voltage vAC. The operating voltage and current waveforms in steady state around a peak of vAC(t) > 0 are illustrated in Fig. 4. The duty factor D is defined by the following equation:

D = (Ton2 + Td) / T     (1)

where Ton2 is the gate-on time of switch S2 of Q2, Td is the dead time between the switches Q1 and Q2, and T is one switching period of the HF inverter.
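As a minimal sketch of equation (1) and of the gating pattern in Fig. 3 (all timing values below are assumed for illustration, not taken from the paper), the following Python fragment builds the two asymmetrical gate signals with dead time and evaluates the duty factor:

    import numpy as np

    # Assumed switching timings for illustration (not the paper's values):
    fsw = 30e3                # switching frequency, so T = 1/fsw
    T = 1.0 / fsw
    Ton2 = 18e-6              # gate-on time of S2
    Td = 1.5e-6               # dead time inserted at each transition
    Ton1 = T - Ton2 - 2 * Td  # remaining time drives S1 (asymmetrical PWM)

    D = (Ton2 + Td) / T       # duty factor per equation (1)
    print(f"T = {T*1e6:.1f} us, D = {D:.3f}")

    # Gate waveforms over one period: S1 high, dead time, S2 high, dead time.
    t = np.arange(0.0, T, 10e-9)
    s1 = (t < Ton1).astype(int)
    s2 = ((t >= Ton1 + Td) & (t < Ton1 + Td + Ton2)).astype(int)
    assert not np.any(s1 & s2)  # the dead time guarantees no shoot-through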

The circuit operating modes and the switching-mode equivalent circuits of the high frequency series resonant direct inverter with the new passive PFC rectifier are described below for the positive half wave of the UFAC voltage vAC. The circuit operation in periodic steady state is as follows.

Fig. 2(a). Single stage high frequency inverter

Fig. 2(b). PFC converter part


Fig. 2(c). High frequency inverter part

Fig. 3. Asymmetrical PWM gate pulses

[Mode 1; t0 < t ≤ t1] This operating mode starts when switch S1 of Q1 is turned on. In this mode, only diode Ds1 in the passive-PFC bridge rectifier conducts. The mode ends when switch S1 is turned off with ZVS on release of its gate pulse.
[Mode 2; t1 < t ≤ t2] When switch S1 is turned off with ZVS, operation shifts to Mode 2. Switch S1 achieves ZVS turn-off commutation, and diode Ds1 alone continues to conduct. In this mode, the lossless snubbing capacitor Cs1 in the high-side bridge arm is charged and Cs2 in the low-side arm is discharged simultaneously from the boosted voltage vCb. The mode ends when the snubbing capacitors complete their charging and discharging.
[Mode 3; t2 < t ≤ t3] This mode starts after the charging and discharging of the lossless snubbing capacitors Cs1 and Cs2. Only diode Ds1 conducts, and D2 of Q2 turns on. The mode ends when switch S2 is turned on.
Fig. 4. Operating waveforms

[Mode 4; t3 < t ≤ t4] When the gate driving pulse is delivered to switch S2 of Q2 during Mode 3, the circuit operation of Mode 4 starts. In this mode, only the bridge diode Ds1 conducts, and switch S2 is turned on with ZVZCS as a complete soft commutation. The mode ends when the single conducting bridge diode Ds1 is turned off with ZCS, consistent with the one-diode-conducting, boost-mode operation of the passive PFC rectifier.
[Mode 5; t4 < t ≤ t5] In this mode, switch S2 conducts, and bridge diode Ds4 in the diagonal bridge arm is naturally commutated with ZCS from Ds1. The mode ends when switch S2 is turned off with ZVS.
[Mode 6; t5 < t ≤ t6] This mode starts when switch S2 is turned off with ZVS. Only diode Ds4 continues to conduct. During the dead-time interval, the lossless snubbing capacitor Cs1 in the high-side arm is discharged with the aid of the series resonant IH load while Cs2 in the low-side arm is charged. The mode ends when the charging and discharging of Cs1 and Cs2 are complete.
[Mode 7; t6 < t ≤ t7] When the charging and discharging of the two lossless snubbing capacitors are complete, diode D1 of Q1 naturally commutates from Cs1. The low-side bridge diode Ds4 of the PFC rectifier continues to conduct. The mode ends when Ds4 is turned off with ZCS and the high-side bridge diode Ds1 is turned on with ZCS.
[Mode 8; t7 < t ≤ t8] In this mode, bridge diode Ds1 is turned on with ZCS and diode D1 conducts. The mode ends when switch S1 of Q1 is turned on with ZVZCS.
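For quick reference, the Python sketch below restates the eight operating modes as a small lookup table; it encodes only what the text above states (devices named as in Fig. 2) and introduces nothing new:

    # Compact recap of the eight operating modes described above
    # (positive half wave of vAC; device names as in the text).
    modes = {
        1: ("S1 on, Ds1 conducts", "ends: S1 turns off (ZVS)"),
        2: ("Ds1 conducts; Cs1 charges, Cs2 discharges", "ends: snubber swing complete"),
        3: ("Ds1 and D2 of Q2 conduct", "ends: S2 turned on"),
        4: ("S2 on (ZVZCS), Ds1 conducts", "ends: Ds1 turns off (ZCS)"),
        5: ("S2 on, Ds4 conducts (ZCS commutation from Ds1)", "ends: S2 turns off (ZVS)"),
        6: ("Ds4 conducts; Cs1 discharges, Cs2 charges", "ends: snubber swing complete"),
        7: ("D1 of Q1 and Ds4 conduct", "ends: Ds4 off / Ds1 on (both ZCS)"),
        8: ("D1 conducts, Ds1 on (ZCS)", "ends: S1 turned on (ZVZCS)"),
    }
    for m, (state, boundary) in modes.items():
        print(f"Mode {m}: {state:50s} {boundary}")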

3. Simulated Performance Evaluations


The ZVS-PWM inverter system is simulated using Simulink, and the results are given here. Fig. 5a shows the simulation model of the proposed inverter. Driving pulses are shown in Fig. 5b, and the input current and voltage waveforms in Fig. 5c. The output AC voltage waveform is shown in Fig. 5d, with an enlarged view in Fig. 5e; the output voltage is almost a sine wave. The low frequency AC input voltage is converted to DC using an uncontrolled rectifier, and the rectifier output is converted into high frequency AC using the ZVS-PWM inverter.
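To make the operation concrete, the hedged Python sketch below reuses the assumed tank values from the Section 2.1 sketch and estimates the load current by fundamental-frequency phasor analysis, approximating the half-bridge leg voltage by a 50% duty square wave; Vdc and fsw are likewise assumed, not the paper's values:

    import cmath
    import math

    # Same assumed tank values as in the Section 2.1 sketch; Vdc and fsw assumed too.
    Ro, Lo, Cr = 3.0, 60e-6, 0.30e-6
    Vdc = 280.0  # boosted DC-link voltage, assumed
    fsw = 40e3   # switching just above the ~37.5 kHz resonance

    w = 2 * math.pi * fsw
    Z = complex(Ro, w * Lo - 1.0 / (w * Cr))  # series RLC impedance at fsw
    V1 = (2 * Vdc / math.pi) / math.sqrt(2)   # RMS fundamental of a +/-Vdc/2 square wave

    I1 = V1 / abs(Z)                          # RMS fundamental load current
    phase = math.degrees(cmath.phase(Z))      # > 0 means inductive: current lags, ZVS-friendly
    P = I1 ** 2 * Ro                          # power delivered to the pan

    print(f"|Z| = {abs(Z):.2f} ohm at {phase:+.1f} deg")
    print(f"I1 = {I1:.1f} A rms, P ~ {P/1e3:.2f} kW")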

Fig. 5a. Simulation model of the proposed inverter

Fig. 5b. Driving pulses

Fig. 5c. Input current and voltage waveforms


Fig. 5d. Output voltage waveform

Fig. 5e. Enlarged waveform

Fig. 5f. Output and input voltage waveforms


4. Main Features
Compared with the two-stage power frequency conversion scheme, the advantageous features are summarized as follows.
(a) The diodes of the diode bridge circuit are turned on and off with ZCS, so the diode recovery currents and their power losses are minimized.
(b) Only one diode conducts at a time in the bridge rectifier stage of the passive PFC converter connected to the utility AC grid, so the diode conduction losses in the utility-side bridge rectifier are reduced in principle. Ds1 and Ds4 achieve ZCS for vAC > 0, and Ds2 and Ds3 for vAC < 0.
(c) The capacitance of the DC boost link can be reduced, and a film capacitor can be used in place of the DC electrolytic capacitor. The ESR of the boosted DC capacitor is thereby lowered, so its power loss and temperature rise are minimized. The total power factor on the UFAC side approaches unity, and the line harmonic current components on the UFAC side are reduced without any complex control procedure, using a sensorless scheme.
(d) The high frequency AC power for the IH load can be regulated by simple asymmetrical PWM under soft commutation at constant frequency.
(e) The total system efficiency can be higher, and high power density can be achieved with a simple cooling scheme and energy saving.
(f) The DC component of the working coil current in the IH load is zero, owing to the series resonant load with its tuned series capacitor.
(g) The envelope of the output high frequency current has the same sinusoidal shape as the UFAC-side voltage, as the sketch below illustrates.
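The following minimal Python sketch illustrates feature (g) with assumed frequencies: with no bulk electrolytic smoothing, the DC link follows the rectified mains, so the high frequency load current is amplitude-modulated by the UFAC sine.

    import numpy as np

    # Illustrative waveform only (all values assumed): with no bulk electrolytic
    # capacitor, the DC link follows |vAC|, so the HF output is amplitude-modulated.
    f_line, f_sw = 50.0, 40e3
    t = np.arange(0.0, 0.04, 1e-6)  # two line cycles

    envelope = np.abs(np.sin(2 * np.pi * f_line * t))  # rectified-mains DC link (pu)
    i_out = envelope * np.sin(2 * np.pi * f_sw * t)    # HF load current, modulated

    # The peaks of i_out trace a rectified sine, as stated in feature (g).
    print(f"peak envelope at t = {t[np.argmax(np.abs(i_out))]*1e3:.2f} ms")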

5. Conclusions
This paper has presented the analysis, modeling and simulation of a single stage AC-AC converter for a variety of consumer induction heating (IH) appliances such as the IH cooker, IH hot water producer, IH steamer and IH superheated steamer. The high frequency series load resonant direct inverter with lossless snubbing capacitors has a high efficiency passive PFC rectifier in which only one diode of the full bridge conducts at a time while operating in boost mode. Furthermore, the proposed direct high frequency inverter topology lowers the line current harmonic content and THD on the UFAC side and improves the utility power factor. The system also has the advantages of low switching loss and reduced device stress. The models developed here were used successfully for the simulation studies, and the simulation results are in line with predictions.
In future work, the conventional two-stage high frequency resonant inverter and the proposed single stage high frequency resonant direct inverter should be compared on the basis of experimental results.


About Authors
S.Arumugam obtained his B.E degree from Bangalore University, Bangalore, in 1999 and his M.E degree from Sathyabama University, Chennai, in 2005. He is presently a research scholar at Bharath University, Chennai, working in the area of resonant inverter fed induction heating.


S.Ramareddy is a Professor in the Electrical Department, Jerusalem Engineering College, Chennai. He obtained his D.E.E from S.M.V.M Polytechnic, Tanuku, A.P., his A.M.I.E in Electrical Engineering from the Institution of Engineers (India), and his M.E in Power Systems from Anna University. He received his Ph.D degree in the area of resonant converters from the College of Engineering, Anna University, Chennai. He has published over 20 technical papers in national and international conference proceedings and journals. He secured the A.M.I.E Institution Gold Medal for obtaining higher marks and the AIMO best project award. He has worked at Tata Consulting Engineers, Bangalore, and Anna University, Chennai. His research interest is in the area of resonant converters, VLSI and solid state drives. He is a life member of the Institution of Engineers (India), the Indian Society for India and the Society of Power Engineers, and a fellow of the Institution of Electronics and Telecommunication Engineers.


IJCIIS Reviewers
A. Govardhan, Jawaharlal Nehru Technological University, India
Ajay Goel, Haryana Institute of Engineering and Technology, India
Ajay Sharma, Raj Kumar Goel Institute of Technology, India
Akshi Kumar, Delhi Technological University, India
Alok Singh Chauhan, Ewing Christian Institute of Management and Technology, India
Amandeep Dhir, Helsinki University of Technology Finland, Denmark Technical University, Denmark
Amol Potgantwar, Sandip Institute of Technology and Research Centre, India
Anand Sharma, MITS, India
Aos Alaa Zaidan Ansaef, Multimedia University, Malaysia
Arul Lawrence Selvakumar, Kuppam Engineering College, India
Ayyappan Kalyanasundaram, Rajiv Gandhi College of Engineering and Technology, India
Azadeh Zamanifar, Iran University of Science and Technology University and Niroo Research Institute, Iran
Bilal Bahaa Zaidan, University of Malaya, Malaysia
B. L. Malleswari, GNITS, India
B. Nagraj, Tamilnadu News Prints and Papers, India
C. Suresh Gnana Dhas, Vel Tech Multitech Dr.Rengarajan Dr.Sagunthla Engg. College, India
C. Sureshkumar, J. K. K. M. College of Technology, India
Deepankar Sharma, D. J. College of Engineering and Technology, India
Durgesh Kumar Mishra, Acropolis Institute of Technology and Research, India
D. S. R. Murthy, SreeNidhi Institute of Science and Technology, India
Hafeez Ullah Amin, KUST Kohat, NWFP, Pakistan
Hanumanthappa Jayappa, University of Mysore, India
Himanshu Aggarwal, Punjabi University, India
Jagdish Lal Raheja, Central Electronics Engineering Research Institute, India
Jatinder Singh, UIET Lalru, India
Iman Grida Ben Yahia, Telecom SudParis, France
Leszek Sliwko, CITCO Fund Services, Ireland
M. Azath, Anna University, India
Md. Mobarak Hossain, Asian University of Bangladesh, Bangladesh
Mohammed Salem Binwahlan, Hadhramout University of Science and Technology, Yemen
Mohamed Elshaikh, Universiti Malaysia Perlis, Malaysia
M. Surendra Prasad Babu, Andhra University, India
M. Thiyagarajan, Sastra University, India
Manjaiah D. H., Mangalore University, India
Nahib Zaki Rashed, Menoufia University, Egypt
Nagaraju Aitha, Vaagdevi College of Engineering, India
Natarajan Meghanathan, Jackson State University, USA
N. Jaisankar, VIT University, India
Ojesanmi Olusegun Ayodeji, Ajayi Crowther University, Nigeria
Oluwaseyitanfunmi Osunade, University of Ibadan, Nigeria
Perumal Dananjayan, Pondicherry Engineering College, India
Piyush Kumar Shukla, University Institute of Technology, Bhopal, India
Poonam Garg, Institute of Management Technology, India
Praveen Ranjan Srivastava, BITS, India
Rajesh Kumar, National University of Singapore, Singapore
Rajeshwari Hegde, BMS College of Engineering, India
Rakesh Chandra Gangwar, Beant College of Engineering and Technology, India
Raman Kumar, D A V Institute of Engineering and Technology, India
Raman Maini, University College of Engineering, Punjabi University, India
Ramveer Singh, Raj Kumar Goel Institute of Technology, India
Sateesh Kumar Peddoju, Vaagdevi College of Engineering, India
Shahram Jamali, University of Mohaghegh Ardabili, Iran
Sriman Narayana Iyengar, India
Suhas Manangi, Microsoft, India
Sujisunadaram Sundaram, Anna University, India
Sukumar Senthilkumar, National Institute of Technology, India
S. S. Mehta, J. N. V. University, India
S. Smys, Karunya University, India
S. V. Rajashekararadhya, Adichunchanagiri Institute of Technology, India


Thipendra P Singh, Sharda University, India
T. Ramanujam, Krishna Engineering College, Ghaziabad, India
T. Venkat Narayana Rao, Hyderabad Institute of Technology and Management, India
Vasavi Bande, Hyderabad Institute of Technology and Management, India
Vishal Bharti, Dronacharya College of Engineering, India
V. Umakanta Sastry, Sreenidhi Institute of Science and Technology, India
