ICCCI Proceedings
Proceedings of the International Conference on Computers, Communication & Intelligence
22nd & 23rd July 2010
CONFERENCE ORGANIZATION
Chief Patron : Shri. M.V. Muthuramalingam, Chairman
Organising Chair : Dr. N. Suresh Kumar, Principal
Organising Secretaries : Dr. P. Alli and Dr. G. Manikandan
PAPER ID   SESSION ID   PAGE NO. IN PROCEEDINGS   PAPER TITLE

SESSION 1
AI002      S1-01        87 - 94
AI005      S1-02        95 - 100     Hybrid PSO based neural network classifier and decision tree for brain MRI mining
AI006      S1-03        101 - 107    GAP: genetic algorithm based power estimation technique for behavioral circuits
AI007      S1-04        108 - 115    Human action classification using 3D star skeletonization and RVM classifier
COMP002    S1-05        154 - 157    Enhanced knowledge base representation technique for intelligent storage and efficient retrieval using knowledge based markup
AI009      S1-06        303 - 306    Face detection using wavelet transform and RBF neural network
COMP016    S1-07        178 - 187
COMP017    S1-08        188 - 192
COMP115    S1-09        24 - 30
COMP102    S1-10        77 - 81
COMP135    S1-11        453 - 456
COMP111    S1-12        14 - 20
COMP114    S1-13        237 - 239

SESSION 2
AI008      S2-01        116 - 122
COMN013    S2-02        316 - 319
COMN018    S2-03        320 - 325
AI013      S2-04        123 - 126
COMN022    S2-05        131 - 134
COMP007    S2-06        168 - 173
COMP026    S2-07        357 - 362
COMP027    S2-08        436 - 441
COMP103    S2-09        417 - 420
COMN028    S2-10        326 - 329
COMP118    S2-11        369 - 376
COMN034    S2-12        149 - 153
COMP146    S2-13        212 - 215

SESSION 3
AI014      S3-01        307 - 315
COMP006    S3-02        158 - 167
COMP142    S3-03        464 - 467
COMP008    S3-04        174 - 177
COMP013    S3-05        334 - 336
COMP032    S3-06        442 - 452
COMP038    S3-07        66 - 70
COMP133    S3-08        207 - 211
COMP119    S3-09        377 - 384
COMP124    S3-10        291 - 296
COMP128    S3-11        260 - 269
COMP129    S3-12        270 - 277
COMP137    S3-13        389 - 392

SESSION 4
COMP022    S4-01        351 - 356
COMP138    S4-02        46 - 49
COMN005    S4-03        127 - 130
COMN020    S4-04        6 - 13
COMN023    S4-05        135 - 138
COMN024    S4-06        139 - 148
COMN026    S4-07        421 - 425
COMN027    S4-08        203 - 206
COMP035    S4-09        55 - 59
COMP036    S4-10        60 - 65
COMP037    S4-11        363 - 368
COMP012    S4-12        330 - 333
COMP019    S4-13        216 - 223

SESSION 5
COMP116    S5-01        240 - 245
COMP020    S5-02        343 - 350
COMP023    S5-03        21 - 23
COMP024    S5-04        397 - 401
COMP109    S5-05        284 - 290
COMP110    S5-06        412 - 416
COMP030    S5-07        402 - 405
COMP033    S5-08        297 - 302
COMP147    S5-09        426 - 431
COMN033    S5-10        193 - 198
COMP018    S5-11        337 - 342
COMP126    S5-12        256 - 259
COMP127    S5-13        31 - 40
COMP150    S5-14        457 - 463

SESSION 6
COMP117    S6-01        246 - 255
COMP112    S6-02        224 - 229
COMP123    S6-03        41 - 45
COMP125    S6-04        1 - 5
COMP130    S6-05        278 - 283
COMP021    S6-06        50 - 54
COMP149    S6-07        199 - 202
COMP107    S6-08        82 - 86
COMP113    S6-09        230 - 236
COMP121    S6-10        432 - 435
COMP101    S6-11        71 - 76
COMP120    S6-12        385 - 388
COMP148    S6-13        406 - 411
COMP149    S6-14        393 - 396
Paper Index

1. MRI Mammogram Image Segmentation using NCut Method and Genetic Algorithm with Partial Filters
   A. Pitchumani Angayarkanni (pp. 1-5)
2. Performance Improvement in Ad Hoc Networks Using Dynamic Addressing
   S. Jeyanthi & N. Uma Maheswari (pp. 6-13)
3. Framework for Comparison of Association Rule Mining using Genetic Algorithm
   K. Indira & S. Kanmani (pp. 14-20)
4. Content Management through Electronic Document Management System
   T. Vengattaraman, A. Ramalingam & P. Dhavachelvan (pp. 21-23)
5. Intelligent Agent based Data Cleaning to Improve the Accuracy of WiFi Positioning System Using Geographical Information System (GIS)
   T. Joshva Devadas (pp. 24-30)
6. A Framework for Multiple Classifier Systems Comparison (MCSCF)
   P. Shanmugapriya & S. Kanmani (pp. 31-40)
7. Efficient Apriori Hybrid Algorithm for Pattern Extraction Process
   J. Kavitha, D. Magdalene Delighta Angeline & P. Ramasubramanian (pp. 41-45)
8. CLD for Improving Overall Throughput in Wireless Networks
   Dr. P. Seethalakshmi & Ms. A. Subasri (pp. 46-49)
9. Particle Swarm Optimization Algorithm in Grid Computing
   Mrs. R. Aghila, M. Harine & G. Priyadharshini (pp. 50-54)
10. NTRU - Public Key Cryptosystem for Constrained Memory Devices
    V. Pushparani & Kannan Balasubramaniam (pp. 55-59)
11. A Novel Randomized Key Multimedia Encryption Algorithm Security Against Several Attacks
    S. Arul Jothi (pp. 60-65)
12. Secure Multiparty Computation Based Privacy Preserving Collaborative Data Mining
    J. Bhuvana & Dr. T. Devi (pp. 66-70)
13. Towards Customer Churning Prevention through Class Imbalance
    M. Rajeswari & Dr. T. Devi (pp. 71-76)
14. Designing Health Care Forum Using Semantic Search Engine & Diagnostic Ontology
    Prof. V. Shunmughavel & Dr. P. Jaganathan (pp. 77-81)
15. Enhancing the Life Time of Wireless Sensor Networks Using Mean Measure Mechanism
    P. Ponnu Rajan & D. Bommudurai (pp. 82-86)
16. Study of Similarity Metrics for Genomic Data Using GO-Ontology
    V. Annalakshmi, R. Priyadarshini & V. Bhuvaneshwari (pp. 87-94)
17. Hybrid PSO based Neural Network Classifier and Decision Tree for Brain MRI Mining
    Dr. V. Saravanan & T. R. Sivapriya (pp. 95-100)
18. GAP: Genetic Algorithm based Power Estimation Technique for Behavioral Circuits
    Johnpaul C. I., Elson Paul & Dr. K. Najeeb (pp. 101-107)
19. Human Action Classification Using 3D Star Skeletonization and RVM Classifier
    Mrs. B. Yogameena, M. Archana & Dr. (Mrs) S. Raju Abhaikumar (pp. 108-115)
20. Relevance Vector Machine Based Gender Classification using Gait Appearance Features
    Mrs. B. Yogameena, M. Archana & Dr. (Mrs) S. Raju Abhaikumar (pp. 116-122)

43. A New Semantic Similarity Metric for Handling all Relations in WordNet Ontology
    K. Saruladha, Dr. G. Aghila & Sajina Raj (pp. 246-255)
44. Fault Prediction Using Conceptual Cohesion in Object Oriented System
    V. Lakshmi, P. V. Eswaripriya, C. Kiruthika & M. Shanmugapriya (pp. 256-259)
45. On the Investigations of Design, Implementation, Performance and Evaluation Issues of a Novel BD-SIIT Stateless IPv4/IPv6 Translator
    J. Hanumanthappa, D. H. Manjaiah & C. V. Aravinda (pp. 260-269)
46. The Role of IPv6 over Fiber (FIPv6): Issues, Challenges and its Impact on Hardware and Software
    J. Hanumanthappa, D. H. Manjaiah & C. V. Aravinda (pp. 270-277)
47. Localized CBIR for Indexing Image Databases
    D. Vijayalakshmi & P. Vijayalakshmi (pp. 278-283)
48. Architecture Evaluation for Web Service Security Policy
    B. Joshi Vinayak, Dr. D. H. Manjaiah, J. Hanumathappa & Nayak Ramesh Sunder (pp. 284-290)
49. Rule Analysis Based on Rough Set Data Mining Technique
    P. Ramasubramanian, V. Sureshkumar & P. Alli (pp. 291-296)
50. A Robust Security Metrics for the e-Healthcare Information Systems
    Said Jafari, Fredrick Mtenzi, Ronan Fitzpatrick & Brendan O'Shea (pp. 297-302)
51. Face Detection Using Wavelet Transform and RBF Neural Network
    M. Madhu, M. Moorthi, S. Sathish Kumar & Dr. R. Amutha (pp. 303-306)
52. A Clustering Approach Based on Functionality of Genes for Microarray Data to Find Meaningful Associations
    M. Selvanayaki & V. Bhuvaneshwari (pp. 307-315)
53. An Energy Efficient Advanced Data Compression and Decompression Scheme for WSN
    G. Mohanbabu & Dr. P. Renuga (pp. 316-319)
54. Active Noise Control: A Simulation Study
    Sivadasan Kottayi & N. K. Narayanan (pp. 320-325)
55. Texture Segmentation Method Based on Combinatorial of Morphological and Statistical Operations Using Wavelets
    V. Vijayapriya & Prof. K. R. Krishnamoorthy (pp. 326-329)
56. FPGA Design of Routing Algorithms for Network on Chip
    R. Anitha & Dr. P. Renuga (pp. 330-333)
57. Creating Actionable Knowledge within the Organization using Rough Set Computing
    Mr. R. Rameshkumar, Dr. A. Arunagiri, Dr. V. Khanaa & Mr. C. Poornachandran (pp. 334-336)
58. COMPVAL: A System to Mitigate SQLIA
    S. Fouzul Hidhaya & Dr. Angelina Geetha (pp. 337-342)
59. Integrating the Static and Dynamic Processes in Software Development
    V. Hepsiba Mabel, K. Alagarsamy & S. Justus (pp. 343-350)
60. Exploiting Parallelism in Bidirectional Dijkstra for Shortest-Path Computation
    R. Kalpana, Dr. P. Thambidurai, R. Arvind Kumar, S. Parthasarathi & Praful Ravi (pp. 351-356)
61. Hiding Sensitive Frequent Item Set by Database Extension
    B. Mullaikodi & Dr. S. Sujatha (pp. 357-362)
62. Denial of Service: New Metrics and Their Measurement
    Dr. Kannan Balasubramanian & P. Kavithapandian (pp. 363-368)
63. High Performance Evaluation of 600-1200V, 1-40A Silicon Carbide Schottky Barrier Diodes and Their Applications Using MATLAB
    K. Manickavasagan (pp. 369-376)
64. A Cascade Data Mining Approach for Network Anomaly Detection System
    C. Seelammal (pp. 377-384)
Proceedings of International Conference on Computers, Communication & Intelligence, July 22nd & 23rd 2010
ABSTRACT:
Cancer is one of the most common deadly diseases affecting men and women around the world. Among cancers, breast cancer is a particular concern in women. It has become a major health problem in developed and developing countries over the past 50 years, and its incidence has increased in recent years. A recent trend in digital image processing is CAD systems: computerized tools designed to assist radiologists, used mostly for automatic detection of abnormalities. However, recent studies have shown that their sensitivity decreases significantly as breast density increases. In this paper, the proposed algorithm uses partial filters to enhance the images; the NCut method is applied to segment the malignant and benign regions; a genetic algorithm is then applied to identify the nipple position, followed by bilateral subtraction of the left and right breast images to cluster the cancerous and non-cancerous regions. The system is trained using the Back Propagation Neural Network (BPN) algorithm.
Computational efficiency and accuracy of the proposed system are evaluated using the Free-response Receiver Operating Characteristic (FROC) curve. The algorithm was tested on 161 pairs of digitized mammograms from the MIAS database. The Receiver Operating Characteristic curve leads to 99.987% accuracy in the detection of cancerous masses.
Keywords: Filters, Normalized Cut, Segmentation, BPN, Genetic Algorithm, FROC.
INTRODUCTION:
Breast cancer is one of the major causes of increased mortality among women, especially in developed countries; it is the second most common cancer in women. The World Health Organization estimates that more than 150,000 women worldwide die of breast cancer each year. In India, breast cancer accounts for 23% of all female cancer deaths, followed by cervical cancer, which accounts for 17.5%. Early detection of cancer leads to significant improvements in conservation treatment. However, recent studies have shown that the sensitivity of these systems decreases significantly as breast density increases.
Figure (proposed system pipeline): Partial Filter → Feature Extraction using NCut Segmentation → Genetic Algorithm → Multilayered BPN.
Using this method, the microcalcifications are clustered.
Figure 4: After Normalized Cut Segmentation
Figure 5: GA
Figure 6: Asymmetric images
Fig 8(a) shows the extracted areas for the abnormal lesions (image sequence 54 - 87 are stellate lesions and 74 to 100 are regular masses). We first establish whether these represent two different populations by applying a Mann-Whitney (Wilcoxon rank sum) non-parametric test, since it is unrealistic to presume any specific underlying distribution. Median values are 450 and 1450 pixels respectively, which produce a confidence level of 85% that the two data sequences emanate from distinct populations. Since this is not significant at normally acceptable levels, we can compare the abnormals as a single distribution against the non-suspicious set, Fig 8(b). Using the same test, median values of 5500 and 10 pixels for the two distributions are established, giving a confidence level of greater than 97.5%.
[6] A. Papadopoulos, D. I. Fotiadis, and A. Likas, "An Automatic Microcalcification Detection System Based on a Hybrid Neural Network Classifier," Artificial Intelligence in Medicine, vol. 25, pp. 149-167, 2002.
[7] A. Papadopoulos, D. I. Fotiadis, and A. Likas, "Characterization of Clustered Microcalcifications in Digitized Mammograms Using Neural Networks and Support Vector Machines," Artificial Intelligence in Medicine, vol. 34, pp. 141-150, 2005.
[8] R. Mousa, Q. Munib, and A. Moussa, "Breast Cancer Diagnosis System Based on Wavelet Analysis and Fuzzy-Neural," Expert Systems with Applications, vol. 28, pp. 713-723, 2005.
S. Jeyanthi (sk.jeya@gmail.com), Assistant Professor, PSNA College of Engg & Tech, Dindigul, Tamilnadu, India
N. Uma Maheswari (numamahi@gmail.com)
Abstract
Dynamic addressing refers to the automatic assignment of IP addresses. In this paper we propose scalable routing for ad hoc networks. It is well known that current ad hoc protocols do not scale to work efficiently in networks of more than a few hundred nodes. Most current ad hoc routing architectures use flat static addressing and thus need to keep track of each node individually, creating a massive overhead problem as the network grows. We propose that the use of dynamic addressing can enable scalable routing in ad hoc networks. We provide an initial design of a routing layer based on dynamic addressing and evaluate its performance. Each node has a unique permanent identifier and a transient routing address, which indicates its location in the network at any given time. The main challenge is dynamic address allocation in the face of node mobility. We propose mechanisms to implement dynamic addressing efficiently. Our initial evaluation suggests that dynamic addressing is a promising approach for achieving scalable routing in large ad hoc and mesh networks.
Ad hoc networking technology has advanced tremendously, but it has yet to become a widely deployed technology. Ad hoc networks research seems to have downplayed the importance of scalability: current ad hoc architectures do not scale well beyond a few hundred nodes. Existing ad hoc routing layers do not support several hundred nodes and lack scalability. They use flat static addressing, which creates massive routing overhead and increases searching time (not an optimal solution). The easy-to-use, self-organizing nature of ad hoc networks makes them attractive to a diverse set of applications. Today, these are usually limited to smaller deployments, but if we can solve
Figure 1: Overall system design (cluster creation, address allocation, mapping, distributed lookup table, routing).
each sub tree of the address tree are enclosed with dotted lines.
Note that the set of nodes from any sub tree in figure 2 induces
a connected sub graph in the network topology in figure 3.
The nodes that are close to each other in the address space
should be relatively close in the network topology. More
formally, we can state the following constraint.
Prefix Sub graph Constraint: The set of nodes that share a
given address prefix form a connected sub graph in the
network topology. This constraint is fundamental to the
scalability of our approach. Intuitively, this constraint helps us
map the virtual hierarchy of the address space onto the
network topology. The longer the shared address prefix
between two nodes, the shorter the expected distance in the
network topology. Finally, let us define two new terms that
will facilitate the discussion in the following sections. A
Level-k sub tree of the address tree is defined by an address prefix of (l-k) bits, as shown in figure 2. For example, a Level-0 sub tree is a single address, or one leaf node in the address tree. A Level-1 sub tree has an (l-1)-bit prefix and can contain up to two leaf nodes. In figure 2, [0xx] is a Level-2 sub tree containing addresses [000] through [011]. Note that every
Level-k sub tree consists of exactly two Level-(k - 1) sub
trees. We define the term Level-k sibling of a given address to
be the sibling of the Level-k sub tree to which a given address
belongs. By drawing entire sibling sub trees as triangles, we
can create abstracted views of the address tree, as shown in
figure 4.
Here, we show the siblings of all levels for the address [100]
as triangles: the Level-0 sibling is [101], Level-1 is [11x], and
the Level-2 sibling is [0xx]. Note that each address has exactly
one Level-k sibling, and thus at most l siblings in total.
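The Level-k sibling construction above can be sketched directly. This is an illustrative reading of the definition for an l-bit address space, not code from the paper: the sibling of an address at level k is obtained by flipping bit k and wildcarding the k lower bits.

```python
# Sketch of Level-k siblings for an l-bit address space (l = 3 matches the
# example in the text).

def level_k_sibling(address, k, l=3):
    """Return the Level-k sibling of `address` as a prefix string like '11x'."""
    bits = format(address, f"0{l}b")
    flipped = bits[: l - k - 1] + ("1" if bits[l - k - 1] == "0" else "0")
    return flipped + "x" * k

# Siblings of address [100], as in the text:
print(level_k_sibling(0b100, 0))  # '101'
print(level_k_sibling(0b100, 1))  # '11x'
print(level_k_sibling(0b100, 2))  # '0xx'
```

Each address has exactly one Level-k sibling for each k, so at most l siblings in total, as the text notes.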
information may linger even after a node has left the network.
Therefore, we set all lookup table entries to expire
automatically after a period twice as long as the periodic
refresh interval.
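The expiry rule above might be sketched as follows; the class, the names, and the 10-second refresh interval are illustrative assumptions, not details from the paper.

```python
# Sketch of a self-expiring lookup table: every entry records when it was
# last refreshed and is ignored once it is older than twice the periodic
# refresh interval.

REFRESH_INTERVAL = 10.0            # seconds between periodic refreshes
EXPIRY = 2 * REFRESH_INTERVAL      # entries expire after two missed refreshes

class LookupTable:
    def __init__(self):
        self.entries = {}          # identifier -> (routing_address, last_seen)

    def refresh(self, identifier, address, now):
        self.entries[identifier] = (address, now)

    def lookup(self, identifier, now):
        record = self.entries.get(identifier)
        if record is None or now - record[1] > EXPIRY:
            return None            # stale entries behave as if absent
        return record[0]

table = LookupTable()
table.refresh("node-42", 0b100, now=0.0)
print(table.lookup("node-42", now=5.0))    # 4 (address still fresh)
print(table.lookup("node-42", now=25.0))   # None (entry has expired)
```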
VI. Dynamic Address Allocation
To assess the feasibility of dynamic addressing, we develop a
suite of protocols that implement such an approach. Our work
effectively solves the main algorithmic problems, and forms a
stable framework for further dynamic addressing research.
When a node joins an existing network, it uses the periodic routing updates of its neighbors to identify and select an unoccupied and legitimate address. It starts out by selecting which neighbor to get an address from: the neighbor with the highest-level insertion point is selected as the best neighbor. The insertion point is defined as the highest level for which no routing entry exists in a given neighbor's routing table. However, the fact that a routing entry happens to be unoccupied in one neighbor's routing table does not guarantee that it represents a valid address choice. We discuss how the validity of an address is verified in the next subsection. The new node picks an address out of a possibly large set of available addresses. In our current implementation, we make nodes pick an address in the largest unoccupied address block. For example, in figure 4, a joining node connecting to the node with address [100] will pick an address in the [11x] sub tree. Figure 5 illustrates the address allocation procedure for a 3-bit address space.
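The joining procedure can be sketched under the simplifying assumption that a neighbor's routing table is summarized by the set of levels that already have entries; all names and the 3-bit space are illustrative, not the paper's implementation.

```python
# Sketch of address selection at join time: find the neighbor's insertion
# point (highest level with no routing entry), then take an address from
# that free sibling sub tree, i.e. the largest unoccupied address block.

def insertion_point(occupied_levels, l=3):
    """Highest level with no routing entry in the neighbor's table."""
    free = [k for k in range(l) if k not in occupied_levels]
    return max(free) if free else None

def pick_address(neighbor_address, occupied_levels, l=3):
    """Pick an address inside the largest free sibling sub tree."""
    k = insertion_point(occupied_levels, l)
    if k is None:
        return None                      # no free block at this neighbor
    return neighbor_address ^ (1 << k)   # flip bit k: lands in the sibling

# Neighbor [100] already has routing entries for its Level-0 and Level-2
# siblings, so the largest free block is the Level-1 sibling [11x]:
print(format(pick_address(0b100, {0, 2}), "03b"))  # '110', inside [11x]
```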
X. Conclusion
We proposed dynamic address routing, an initial design toward scalable ad hoc routing. We outlined the novel challenges involved in a dynamic addressing scheme and proceeded to describe efficient algorithmic solutions. We showed how dynamic addressing can support scalable routing, and demonstrated through simulation and analysis that our approach has promising scalability properties and is a viable alternative to current ad hoc routing protocols. First, we qualitatively compared proactive and reactive overhead and determined the regime in which proactive routing exhibits less overhead than its reactive counterpart. Large-scale simulations show that the average routing table size with DART grows logarithmically with the size of the network. Second, using the ns-2 simulator, we compare our routing scheme to AODV,
K. Indira (induharini@gmail.com) & S. Kanmani (kanmani@pec.edu)
Professor & Head, Department of IT
Pondicherry Engineering College, Pondicherry, India

I. INTRODUCTION
ASSOCIATION ANALYSIS
A. [Start] Generate a random population of n chromosomes (suitable solutions for the problem)
B. [Fitness] Evaluate the fitness f(x) of each chromosome x in the population
C. [New population] Create a new population by repeating the following steps until the new population is complete:
   i. [Selection] Select two parent chromosomes from the population according to their fitness (the better the fitness, the bigger the chance of being selected)
   ii. [Crossover] With a crossover probability, cross over the parents to form a new offspring (children). If no crossover was performed, the offspring is an exact copy of the parents.
   iii. [Mutation] With a mutation probability, mutate the new offspring at each locus (position in the chromosome)
   iv. [Accepting] Place the new offspring in the new population
D. [Replace] Use the newly generated population for a further run of the algorithm
E. [Test] If the end condition is satisfied, stop, and return the best solution in the current population
F. [Loop] Go to step B
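The lettered steps above can be sketched as a compact GA loop. The onemax fitness (count of 1-bits in an 8-bit chromosome), the population size, and the probabilities below are illustrative choices, not values from the surveyed works.

```python
# Minimal GA implementing steps A-F: random initial population, fitness-
# proportional selection, single-point crossover, per-locus mutation.
import random

random.seed(1)
N, LENGTH, P_CROSS, P_MUT = 20, 8, 0.7, 0.01

def fitness(chrom):                       # B. fitness of a chromosome
    return sum(chrom)

def select(pop):                          # C.i fitness-proportional selection
    total = sum(fitness(c) for c in pop)
    r = random.uniform(0, total)
    for c in pop:
        r -= fitness(c)
        if r <= 0:
            return c
    return pop[-1]

def evolve(generations=40):
    pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(N)]  # A
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < N:           # C. build the new population
            p1, p2 = select(pop), select(pop)
            if random.random() < P_CROSS:                     # C.ii crossover
                cut = random.randint(1, LENGTH - 1)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]             # no crossover: copy of a parent
            child = [b ^ (random.random() < P_MUT) for b in child]  # C.iii
            new_pop.append(child)         # C.iv accept offspring
        pop = new_pop                     # D. replace, then F. loop
    return max(pop, key=fitness)          # E. best solution found

print(fitness(evolve()))  # typically at or near the maximum of 8
```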
4. Application Areas
INFERENCES
Table: Methodology (steps in the genetic algorithm) and test results of the surveyed approaches [1] to [18].
For each work the table records the representation (varying-length chromosomes, binary coding, natural numbers, array representation, gene string structure, or as in the standard GA); the selection scheme (roulette wheel, elitist, or as in the standard GA); the crossover operator (single point of crossover, a symbiotic combination operator that generates single rule sets, recombination methods, dynamic and adaptive operators, or as in the standard GA); the mutation operator (macro mutations, two mutation operators, adaptive mutation, mutation done on the same attributes if present or at random, changes in the weight of the membership function, or as in the standard GA); and the fitness or evaluation criterion (support and confidence, TP/TN/FP/FN, individual evaluation using strength of implication, a sustaining/creditable/inclusive index, predictive accuracy, completeness, a measure of overall performance, or a check of whether a chromosome is right or not).
The datasets used include grilled mushrooms of the Agaricus and Lepiota family (8128 records of 23 species, 22 attributes); a synthetic database for the selection of electives for a course; the vehicle silhouette dataset (846 records, 18 attributes, 4 classes); six datasets from the Irvine repository; IRIS (3 classes, 4 features, size 150); Vote (2 classes, 16 features, size 435); Wine (3 classes, 13 features, size 178); Adult (48842 instances, 15 attributes); Nursery (12960 instances, 9 attributes); KDD CUP 99 (494021 records, 41 features, with feature selection applied); CM1, KC1, KC2 and PC1 from the UCI repository; the Lymphography dataset from UCI ML (148 records, 18 attributes, 4 classes); randomly produced transactions; finance service data of a certain city; a database of student achievement in schools in recent years; real case data; and daily records of API. Control parameters (population size, crossover probability, mutation probability, support, confidence, generations) vary widely across the works, several of which use user-defined or optimum values.
Reported results include: accuracy on the training dataset between 95 and 100 and on test data between 62 and 71; rules with negation of attributes generated as well as general rules; GA faster than Apriori (running 2 to 5 times faster, finding rules in ten seconds where Apriori takes more than 3000 seconds); SEA giving better or similar results and running much faster than GA; interesting rules for whatever the threshold during generations 1 to 200; predictive accuracy better than CN2 and Ant-Miner; the number of rules generated between 60% and 80% smaller; low classification error rates, outperforming C4.5; better classification performance; partial association rules produced after 252 generations where the traditional GA needs 850; an algorithm based on 0.1 support and 0.7 confidence close to the actual situation; rules useful in detecting intrusion; rules that provide better estimation and explanation of defective modules; GRA outperforming conventional methods; performance and effectiveness close to real-world analysis; and, where immune recognition, immune memory and immune regulation are applied to the GA, discovery of new critical rules even though their support is not high.
T. Vengattaraman (vengat.mailbox@gmail.com), A. Ramalingam (a.ramalingam1972@gmail.com) & P. Dhavachelvan (dhavachelvan@gmail.com)
Puducherry, India
I. INTRODUCTION
The objective is to develop a web based interface for creating new content for the site and managing existing content. The system should help in managing the different personnel capable of working in different areas of content creation, so as to make the best possible content available as an end result, and should divide the complex task of content creation among a number of specialists. It should allow content to float within the system before being hosted on the site, with different levels of approval to make the content precise as per the requirements, and should speed up the processing within the
Figure: System modules, showing login (user name, password, account), the processing area and database, and the editing, approving, authorizing and deploy stages.
B. Utilities:
The utilities section of the application is used to shut down the site so that normal users cannot browse it, and to bring the site back up for use.
C. Authoring:
An administrator or a person with author privileges can access this part of the application. It covers creating new content in the form of stories, which is normally done by developers or content writers. The newly created content may include a number of notes that will guide the editor at editing time. The newly created content can then be posted to an editor for editing.
D. Editor:
An editor receives the content posted by the author. The editor can view the content and later post it to a new revision or to an existing revision. If the content is found unsuitable, it is returned to the author. This part of the application can be explored only by an administrator or by users who possess editor privileges. The editor can also withdraw content from being hosted if it is found unfit for hosting.
E. Approver:
An approver is the person who approves content to be hosted on the site. An approver can approve the content to the deploy section, discontinue the content's usage, or return the content to the editor for revision. Returned content should be accompanied by a message to the editor regarding the required revision. This part of the application can be accessed by the administrator or a person who possesses approver privileges.
F. Deploy:
This area of the application covers the deployment of content. A deploy person can view the content before deploying it, and can return content found unfit to be hosted on the site; returned content is sent back to the approver. Deployment places the content in a specific area of the hosting environment, which is divided into three categories: the deploy content, the manager content and the protected content. These categories are subdivided into a number of sections.
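The author/editor/approver/deploy workflow of sections C through F can be sketched as a small state machine; the states, transition names and class are illustrative, not from the described system.

```python
# Sketch of the content workflow: content moves authoring -> editing ->
# approval -> deploy -> hosted, and each later stage can return it one
# step for revision, carrying a note for the previous stage.

FORWARD = {"authoring": "editing", "editing": "approval",
           "approval": "deploy", "deploy": "hosted"}
RETURN_TO = {"editing": "authoring", "approval": "editing",
             "deploy": "approval"}

class Content:
    def __init__(self, title):
        self.title, self.state, self.notes = title, "authoring", []

    def post(self):                      # pass content to the next stage
        self.state = FORWARD[self.state]

    def send_back(self, message):        # return for revision with a message
        self.notes.append(message)
        self.state = RETURN_TO[self.state]

story = Content("New article")
story.post(); story.post()               # author -> editor -> approver
story.send_back("Revise the title")      # approver returns it to the editor
print(story.state)   # 'editing'
story.post(); story.post(); story.post() # editing -> approval -> deploy -> hosted
print(story.state)   # 'hosted'
```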
G. Administrator:
An administrator has all the privileges of a guest as well as of a normal registered user. In addition, the administrator has administrative features such as creating new users and granting roles to those newly created users. Roles granted by the administrator cannot be changed by the user. An administrator can create a new user as a guest or as a user
IV. CONCLUSION
This system has been appreciated by its users and is easy to use, since it provides a GUI in the user dialog with user-friendly screens. The software increases efficiency and decreases effort, and has been efficiently employed as a content management mechanism. It has been thoroughly tested and implemented. The application is currently capable of managing only the current site. It can be converted into a general Electronic Document Management system by tightly integrating the existing database with the one the site holds; the current database would have to be redesigned so that it can adapt to changes with respect to the site. The current Electronic Document Management system manages only articles, and there is as yet no provision for new categories to be added and placed into the content section; this can be brought into the current system on user request. The application can also be converted into a mobile-based application using ASP.NET, with which it would be deployed only on the enterprise's server and be accessible to all other departments of the organization. The current application is confined to a single enterprise.
REFERENCES
[1] Adam, Fundamentals of EDRMS: Implementing Electronic Document and Record Management Systems, CRC Press, Madison Ave., New York, 2007.
[2] J. Feldman and E. Freuder, "Integrating Business Rules and Constraint Programming Technologies for EDM," Business Rules Forum, 2008.
[3] Georgia Archives, "Electronic Document Management System Technologies," November 2008.
[4] P. Immanuel, "Basic Components of Electronic Document Management System: Essential Characteristics of an Effective Document Management System," November 2008.
[5] Laserfiche, "A Guide to the Benefits, Technology, and Implementation of Electronic Document Management Solutions," Archival and scanning services, November 2008.
[6] D. P. Quiambao, "Document Management System at the Property Management Division of Subic Bay Metropolitan Authority," 2004.
[7] T. J. Revano, "Development of Automated Document Analyzer using Text Mining and Aho-Corasick Algorithm," TIP Manila: Unpublished Thesis, 2008.
[8] Sire Technologies, "A Guide to Understanding Electronic Document Management Systems: The Future File Cabinets," 2006.
1. Introduction
3. Position Computation
Conceptual Overview
Our experiment considers the first-floor plan shown in Fig. 1. The access points are located by their x-y coordinates, and the distance error is calculated with respect to the exact location on the floor plan in hand. Signal strength collected from the NAL-Q51 and DSP-Q52 is interpreted with one of two models, the free-space model and the wall attenuation (WAF) model.
Learning Process
Position determination based on the free-space model does not require a learning process, since a universal relationship is assumed for every environment considered. The WAF model is used to achieve more accurate results and to represent a more realistic indoor environment [9]. Before the positioning service starts, WAF and n are computed from experiment: once the signal strength at the marking points has been measured, linear regression is applied to those data sets, yielding the WAF and n parameters.
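The learning step above can be sketched as a least-squares fit. The sketch assumes the common log-distance form P(d) = P(d0) - 10*n*log10(d/d0) - nW*WAF, with nW the number of intervening walls; the marking-point measurements below are synthetic and purely illustrative.

```python
import numpy as np

def fit_waf_model(d, n_walls, rss, p_d0, d0=1.0):
    """Least-squares estimate of the path-loss exponent n and the
    wall attenuation factor WAF from marking-point measurements."""
    d = np.asarray(d, dtype=float)
    y = p_d0 - np.asarray(rss, dtype=float)            # total loss beyond d0
    A = np.column_stack([10.0 * np.log10(d / d0),      # coefficient of n
                         np.asarray(n_walls, float)])  # coefficient of WAF
    (n_hat, waf_hat), *_ = np.linalg.lstsq(A, y, rcond=None)
    return n_hat, waf_hat

# Synthetic marking points generated with n = 2.5, WAF = 3.1 dB:
rng = np.random.default_rng(0)
d = np.array([2, 4, 6, 8, 10, 12, 15], dtype=float)
walls = np.array([0, 0, 1, 1, 2, 2, 3], dtype=float)
p_d0 = -40.0
rss = p_d0 - 10 * 2.5 * np.log10(d) - walls * 3.1 + rng.normal(0, 0.2, d.size)

n_hat, waf_hat = fit_waf_model(d, walls, rss, p_d0)
print(round(n_hat, 2), round(waf_hat, 2))   # close to 2.5 and 3.1
```

With clean measurements the fit recovers the generating parameters; in practice the residual reflects multipath effects the model does not capture.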
Fig.3. Position determination overview
Radio Propagation models
Basically the radio channel associates with reflection,
refraction, diffraction and scattering of radio waves that
influence propagating paths. Transmitted signal from direct
and indirect propagation path are combined either
constructively or destructively causing variation of
received signal strength at the receiving station. The
situation is even more severe for indoors communication.
The building may have different architectures, construction
material which results in receiving challenging and
unpredictable signal.
Free-space Model
This model is used for the worst-case consideration. It can be implemented with GIS, but it only unveils a trend for further improvement. The model is not suitable to implement because it may disturb the other signals, and it is not appropriate for a positioning system in an indoor environment. The free-space model, or Friis
(Figure: Interaction, Action, Information fusion, Information processing.)
(Figure: Excel sheet, data cleaning strategies, cleaned data.)
The process is repeated for all the areas, and the agent stores only the erroneous coordinates in the knowledge base. Using the agent saves time while experimenting with this process. The accuracy of the positioning improves when agents are incorporated into the system with the aid of the geographical information system. In the existing system, performance is measured by means of error distance, which requires the help of GIS; in the agent-based system, the error distance is calculated and stored in the knowledge base.
Fig. 7 displays a floor plan of our test site along with the specific x and y dimensions of the corridor section: x ranges from 0 to 19.125 meters and y from 9.25 to 11.125 meters. Use of an agent in the positioning system improves the accuracy and reduces the working time of the process. It also scopes down and eliminates unlikely intersection areas from the positioning algorithm in the optimization phase.
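A minimal sketch of that scoping-down step: candidate position fixes outside the corridor bounds quoted above for Fig. 7 are treated as erroneous and set aside (in the scheme described, into the agent's knowledge base). The candidate points here are invented for illustration.

```python
X_MIN, X_MAX = 0.0, 19.125   # corridor x-range in meters (from Fig. 7)
Y_MIN, Y_MAX = 9.25, 11.125  # corridor y-range in meters (from Fig. 7)

def in_corridor(p):
    """True if a candidate (x, y) fix lies inside the corridor bounds."""
    x, y = p
    return X_MIN <= x <= X_MAX and Y_MIN <= y <= Y_MAX

candidates = [(3.0, 10.0), (25.0, 10.5), (10.0, 5.0), (18.9, 11.0)]
kept = [p for p in candidates if in_corridor(p)]
erroneous = [p for p in candidates if not in_corridor(p)]  # stored by the agent
print(kept)       # [(3.0, 10.0), (18.9, 11.0)]
print(erroneous)  # [(25.0, 10.5), (10.0, 5.0)]
```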
6. Conclusion
In this paper, intelligent agents are introduced to clean the erroneous access point coordinates for a Wi-Fi positioning system that uses the knowledge of a geographical information system. The performance variation of the Wi-Fi positioning system is described for agent-based and normal positioning. Agents are introduced between the Excel data sheet and the MATLAB simulation data
7. References
S. Kanmani
Research Scholar,
Department of CSE,
Pondicherry Engineering College,
Pondicherry, India
pshunmugapriya@gmail.com
Abstract - In this paper, we propose a new framework for comparing Multiple Classifier Systems in the literature. A classifier ensemble, or Multiple Classifier System, combines a finite number of classifiers of the same kind or of different types, trained simultaneously for a common classification task. The ensemble can efficiently improve the generalization ability of the classifier compared with a single classifier. The objective of this paper is to introduce a framework, MCSCF, that analyses the existing research work on classifier ensembles. Our framework compares classifier ensembles on the basis of the constituent classifiers of an ensemble and the combination methods, the basis of classifier selection, standard datasets, evaluation criteria, and the behavior of classifier ensemble outcomes. It is observed that different types of classifiers are combined using a number of different fusion methods, and that classification accuracy is highly improved in the ensembles irrespective of the application domain.
Keywords - Data Mining, Pattern Classification, Classifier Ensemble, Multiple Classifier Systems
I. INTRODUCTION
Combining classifiers to obtain higher classification accuracy is a rapidly growing area enjoying a lot of attention from the pattern recognition and machine learning communities. For ensembling, the classification capability of a single classifier is not required to be very strong; what is important is to use suitable combinative strategies to improve the generalization of the classifier ensemble. In order to speed up convergence and simplify structures, the combinative components are often weak or simple [1].
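That point can be seen in a toy example: three classifiers that each err on a different pair of samples are individually weak, yet plain majority voting over them classifies every sample correctly. The predictions below are hand-made to exhibit this diversity, not produced by real learners.

```python
from collections import Counter

# Each row: (true_label, c1_pred, c2_pred, c3_pred); each classifier is
# wrong on a different pair of samples, mimicking diverse weak learners.
samples = [
    (1, 1, 1, 1), (0, 0, 0, 0),
    (1, 0, 1, 1), (0, 1, 0, 0),   # c1 wrong here
    (1, 1, 0, 1), (0, 0, 1, 0),   # c2 wrong here
    (1, 1, 1, 0), (0, 0, 0, 1),   # c3 wrong here
    (1, 1, 1, 1),
]

def majority(preds):
    return Counter(preds).most_common(1)[0][0]

def accuracy(pred_of_row):
    return sum(pred_of_row(row) == row[0] for row in samples) / len(samples)

single = [accuracy(lambda r, i=i: r[i]) for i in (1, 2, 3)]
ensemble = accuracy(lambda r: majority(r[1:]))
print(single, ensemble)   # each single classifier: 7/9; ensemble: 1.0
```

The vote corrects each constituent's errors because the other two constituents are right on exactly those samples; when errors are correlated the gain disappears, which is why diversity matters.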
Classifiers such as the region-based classifier, contour-based classifier, Enhanced Loci classifier, histogram-based classifier, crossing-based classifier, neural network classifier, k-nearest neighbor classifier, SVM, Anorm, KDE, KMP (Kernel Matching Pursuit), minimum distance classifier, maximum likelihood classifier, Mahalanobis classifier, naïve Bayesian, decision tree, Fisher classifier, and nearest means classifier are often combined to form a classifier fusion or classifier ensemble, as shown in Table I.
kanmani@pec.edu
It is seen from [2], [3], [5], [6], [7], [8], [10], [12], [14], [16], [17], [19] and [24] that the newly designed methods are highly robust and show better classification accuracy than the existing ensemble methods and the individual constituent classifiers. Most of the new methods also perform better than the individual classifiers.
TABLE I
COMPARISON OF CLASSIFIER ENSEMBLES

[7]  Basis: A-priori knowledge.
     Classifiers used: 1. Region-based classifier; 2. Contour-based classifier; 3. Enhanced Loci; 4. Histogram-based; 5. Crossing-based.
     Method of ensembling: 1. Majority Voting (MV); 2. Dempster-Shafer (DS); 3. Behavioral Knowledge Space (BKS).
     Dataset: Hand-written numerals.
     Evaluated by: Comparing the classification results with those of the constituent classifiers.
     Results: 1. Classification accuracy 90%; 2. Better accuracy than the constituent classifiers.

[12] Basis: Diversity of classifiers.
     Classifiers used: 1. kNN (k=1); 2. kNN (k=3); 3. SVM (r=1); 4. Anorm; 5. KDE.
     Method of ensembling: AdaBoosting.
     Dataset: 4 UCI datasets: 1. Pima (764 patterns, 8 features, 2 classes); 2. Spam (4016 patterns, 54 features, 2 classes); 3. Haberman (2 classes); 4. Horse-colic.
     Evaluated by: Comparing the classification results with those of the constituent classifiers.
     Results: Better performance of the new method than the individual classifiers of the ensemble.

[8]  Basis: Diversity of classifiers.
     Classifiers used: 3 different classifiers diversified by boosting.
     Method of ensembling: 1. Boosting; 2. Stacked generalization.
     Dataset: SATIMAGE dataset from the ELENA database (6435 pixels, 36 attributes, 6 classes).
     Evaluated by: Comparing the classification results with those of the constituent classifiers.
     Results: 1. Boosting generates more diverse classifiers than cross-validation; 2. Highly robust compared to original boosting and stacking.

[11] Basis: Diversity of classifiers.
     Classifiers used: 1. Minimum distance classifier; 2. Maximum likelihood classifier; 3. Mahalanobis classifier; 4. k-nearest neighbor classifier.
     Method of ensembling: Five Bayesian decision rules: 1. product rule; 2. sum rule; 3. max rule; 4. min rule; 5. median rule.
     Dataset: Remote sensing SPOT IV satellite image (6 land cover classes).
     Evaluated by: 1. Overall accuracy; 2. Kappa statistics; 3. McNemar's test; 4. Cochran's Q test; 5. F-test.
     Results: 1. Diversity is not always beneficial; 2. Increasing the number of base classifiers in the ensemble will not increase the classification accuracy.

[14] Basis: Diversity of classifiers.
     Classifiers used: SVM and KMP (Kernel Matching Pursuit).
     Method of ensembling: OAO approach.
     Dataset: I. UCI datasets: 1. Waveform; 2. Shuttle; 3. Sat. II. Image recognition: 6 plane-class images (614 sheets, 128 x 128 pixels).
     Evaluated by: Comparing the classification results with those of the constituent classifiers.
     Results: 1. A new ensemble of SVM and KMP is designed; 2. High classification accuracy of SVM; 3. Quick running time of KMP.

[19] Basis: Diversity of classifiers.
     Classifiers used: 1. Maximum entropy model; 2. Heterogeneous base learners; 3. Naïve Bayesian.
     Method of ensembling: 1. Bagging and majority voting; 2. L1-regularized maximum entropy model; 3. MODL-criterion selective naïve Bayesian boosting; 4. Best ensemble proposal of KDD Cup 2009; 5. Post-processing the results of the 3 methods with SVM.
     Dataset: Customer Relationship Management data of Orange (a French telecom), KDD Cup 2009: 1. larger version (15,000 feature variables, 50,000 examples); 2. smaller version (230 features, 50,000 examples); 3 tasks (churn, appetency, up-selling).
     Evaluated by: 1. Cross-validation; 2. Overall accuracy.
     Results: 1. Good classification accuracy; 2. Won 3rd place in the KDD Cup 2009 ensemble proposal.

[5]  Basis: 1. Distribution characteristics; 2. Diversity.
     Classifiers used: Classifiers with diversity.
     Dataset: Phenome dataset from the ELENA database (2 classes, 5000 samples, 5 features).
     Evaluated by: Comparison with bagging applied to the same dataset.
     Results: Better performance than bagging and any other constituent ensemble classifier.

[18] Basis: 1. Feature subspaces; 2. Diversity.
     Classifiers used: Cost-sensitive SVCs (n SVMs).
     Method of ensembling: 1. Kernel clustering; 2. New KPCM algorithm.
     Dataset: Hidden signal detection dataset (training set: 7931 positive, 7869 negative; test set: 9426 positive, 179,528 negative).
     Evaluated by: Comparison with conventional SVCs in terms of detection rate and cost expectations.
     Results: 1. SVC parameter optimizations reduced by 89%; 2. Overall training time reduced by 82% without performance degradation.

[10] Basis: Feature selection.
     Classifiers used: 1. Fisher classifier; 2. Binary decision tree; 3. Nearest mean classifier; 4. SVM; 5. Nearest neighbor (1-nn).
     Method of ensembling: 1. Random subspace method; 2. Bagging; 3. Assigning weights.
     Dataset: 1. Colon cancer data (training set 40, test set 22); 2. Hepatocellular carcinoma data (training set 33, test set 27); 3. High-grade glioma dataset (training set 21, test set 29).
     Results: Better prediction accuracy.

[16] Basis: Feature selection.
     Classifiers used: n SVMs.
     Method of ensembling: Ensemble feature selection based on GA; product rule.
     Dataset: 12 UCI benchmark datasets.
     Evaluated by: 4-fold cross-validation.
     Results: 1. Simplified datasets; 2. Reduced time complexity; 3. A new FSCE algorithm is proposed; 4. Higher classification accuracy.

[9]  Basis: 1. Evolutionary fusion; 2. Context awareness.
     Classifiers used: Best classifiers in each ensemble, obtained by GA.
     Method of ensembling: 1. K-means algorithm; 2. GA.
     Dataset: 4 face recognition systems: 1. E-FERET; 2. E-YALE; 3. E-INHA; 4. IT (all 4 datasets further divided into 3 separate datasets I, II, III); 6 contexts.
     Evaluated by: 1. ROC; 2. FAR; 3. FRR.
     Results: 1. Reduced error rates; 2. Good recognition rate.

[2]  Basis: 1. Evolutionary fusion; 2. Context awareness.
     Classifiers used: n different classifiers trained under different context conditions.
     Method of ensembling: Embedding the classifier ensemble into a context-aware framework.
     Dataset: Face recognition, FERET dataset.
     Evaluated by: Creating a similar offline system (without the result evaluator) trained and tested on the same dataset.
     Results: 1. Highest recognition rate compared with the individual classifiers; 2. Most stable performance.

[17] Classifiers used: 1. Naïve Bayesian; 2. K-nearest neighbor.
     Method of ensembling: 1. Static majority voting (SMV); 2. Weighted majority voting (WMV); 3. Dynamic WMV (DWMV).
     Dataset: UCI datasets: 1. TicTacToe EndGame (958 instances, 9 features, 2 classes); 2. Chess EndGame (3196 instances, 36 features, 2 classes).
     Evaluated by: Cross-validation.
     Results: 1. Better classification accuracy than the individual classifiers; 2. DWMV has the highest classification accuracy.

[6]  Classifiers used: 1. Decision tree; 2. SVM with 4 kernels (linear, polynomial, radial basis, sigmoid).
     Method of ensembling: 1. Bagging; 2. Double bagging.
     Dataset: 1. Condition diagnosis of electric power apparatus (GIS dataset); 2. 15 UCI benchmark datasets (each with a different number of objects, classes and features).
     Evaluated by: Comparison with other ensembles' performance on the same data.
     Results: Better performance than popular ensemble methods like bagging, boosting, random forest and rotation forest.

[3]  Classifiers used: n SVMs.
     Method of ensembling: An additional SVM is used to fuse the outputs of all SVMs.
     Dataset: Hyperspectral AVIRIS data (220 data channels, 145 x 145 pixels, 6 land cover classes).
     Evaluated by: 1. Overall classification results compared to an individual classifier; 2. Simple voting.
     Results: Better accuracy than a single constituent classifier in the ensemble.

[24] Classifiers used: Any classifier capable of classifying CSP.
     Method of ensembling: 1. Majority voting; 2. Rejecting the outliers.
     Dataset: BCI EEG signals.
     Evaluated by: 20 cross-validations.
     Results: 1. A new ensemble CSPE is designed; 2. Better performance than LDA, RLDA and SVM; 3. Average accuracy of 83.02% in BCI (comparatively a good accuracy).

[25] Classifiers used: 1. Rough set theory; 2. Decision tree; 3. SVM.
     Method of ensembling: Integrating the advantages of RST, DT and SVM.
     Dataset: UCI Teaching Assistant Evaluation (TAE) dataset (151 instances, 6 features, 2 classes).
     Evaluated by: 1. 102 (68%) training data, 49 (32%) test data; 2. 6-fold cross-validation.
     Results: 1. Improved class prediction with acceptable accuracy; 2. Enhanced rule generation.
[1]. Ludmila I. Kuncheva, Combining Pattern Classifiers: Methods and Algorithms, John Wiley & Sons, Inc., 2004.
[2]. Zhan Yu, Mi Young Nam and Phill Kyu Rhee, "Online Evolutionary Context-Aware Classifier Ensemble Framework for Object Recognition", Proceedings of the 2009 IEEE International Conference on Systems, Man, and Cybernetics.
[22]. L. I. Kuncheva and C. J. Whitaker, "Ten measures of diversity in classifier ensembles: limits for two classifiers", DERA/IEE Workshop on Intelligent Sensor Processing, Birmingham, U.K., pp. 10/1-10/6, February 2001.
[23]. Xu Lei, Ping Yang, Peng Xu, Tie-Jun Liu and De-Zhong Yao, "Common Spatial Pattern Ensemble Classifier and Its Application in Brain-Computer Interface", Journal of Electronic Science and Technology of China, Vol. 7, No. 1, March 2009.
[24]. Li-Fei Chen, "Improve class prediction performance using a hybrid data mining approach", International Conference on Machine Learning and Cybernetics, Vol. 1, pp. 210-214, 2009.
delighta22@yahoo.co.in, jc_kavitha@yahoo.co.in, prams_2k2@yahoo.co.in
I. INTRODUCTION
The teaching organization is responsible for the placement of students in industry for the internship program. It experiences difficulty in matching an organization's requirements with a student's profile for several reasons. This situation can lead to a mismatch between the organization's requirements and the student's background; students then face problems in giving good service to the company, and companies in turn may face difficulties in training the students and assigning them a project.
The placement must be based on certain criteria in order to best serve the organization and the student. For example, a student who lives in Chennai should not be sent to an organization located in Bangalore, to avoid accommodation, financial and social problems. It has been decided that practicum students should match the organization's requirements.
However, due to the large number of students registered every semester, matching organizations with students is a very tedious process. The current matching procedure involves several steps. First, the registered city1 (the student's first choice) and city2 (the student's second choice) are examined, and a match between the organization's location and the student's hometown is determined. The next criterion is the
II. LITERATURE REVIEW
Data mining has been applied in various research works. One of the popular techniques used for mining data in KDD for pattern discovery is the association rule [1]. According to [2], an association rule implies certain association relationships among a set of objects. It has attracted a lot of attention in current data mining research due to its capability of discovering useful patterns for decision support, selective marketing, financial forecasting, medical diagnosis and many other applications. The association rule technique works by finding all rules in a database that satisfy the determined minimum support and minimum confidence [3].
One algorithm for association rule induction is the Apriori algorithm, proven to be one of the most popular data mining algorithms.
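The minimum-support and minimum-confidence definitions can be illustrated directly. The tiny transaction database below is made up for illustration; the item names loosely echo the attributes used later in this study.

```python
# Each transaction is the set of attribute values recorded for one student.
transactions = [
    {"cse", "guindy", "govt"},
    {"cse", "guindy", "private"},
    {"ece", "guindy", "govt"},
    {"cse", "guindy", "govt"},
]

def support(itemset):
    """Fraction of transactions containing every item of the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """Confidence of the rule antecedent -> consequent."""
    return support(antecedent | consequent) / support(antecedent)

print(support({"cse", "guindy"}))        # 0.75
print(confidence({"cse"}, {"guindy"}))   # 1.0
```

A rule is kept only when its support and confidence both clear the chosen thresholds, which is exactly the filtering Apriori performs.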
F. Algorithm AprioriTid
The AprioriTid algorithm, shown in Fig. 2, also uses the apriori-gen function to determine the candidate itemsets before the pass begins. The interesting feature of this algorithm is that the database D is not used for counting support after the first pass; rather, the set C̄k is used for this purpose. Each member of the set C̄k is of the form <TID, {Xk}>, where each Xk is a potentially large k-itemset present in the transaction with identifier TID. For k = 1, C̄1 corresponds to the database D, although conceptually each item i is replaced by the itemset {i}. For k > 1, C̄k is generated by the algorithm (step 10). The member of C̄k corresponding to transaction t is <t.TID, {c in Ck | c contained in t}>. If a transaction does not contain any candidate k-itemset, then C̄k will not have an entry for that transaction. Thus, the number of entries in C̄k may be smaller than the number of transactions in the database, especially for large values of k. In addition, for large values of k, each entry may be smaller than the corresponding transaction because very few candidates may be contained in the transaction. However, for small values of k, each entry may be larger than the corresponding transaction because an entry in C̄k includes all candidate k-itemsets contained in the transaction.
Fig. 2. The AprioriTid algorithm:

1)  L1 = {large 1-itemsets};
2)  C̄1 = database D;
3)  for ( k = 2; Lk-1 != 0; k++ ) do begin
4)      Ck = apriori-gen(Lk-1);   // new candidates
5)      C̄k = 0;
6)      forall entries t in C̄k-1 do begin
7)          // determine candidate itemsets in Ck contained
            // in the transaction with identifier t.TID
            Ct = {c in Ck | (c - c[k]) in t.set-of-itemsets and
                            (c - c[k-1]) in t.set-of-itemsets};
8)      forall candidates c in Ct do
9)          c.count++;
10)     if (Ct != 0) then C̄k += <t.TID, Ct>;
11)     end
12)     Lk = {c in Ck | c.count >= minsup};
13) end
14) Answer = Union over k of Lk;

TABLE 1
EXTRACTED PATTERN BASED ON ORGANIZATION CATEGORY

Organization: Government
  Region N_Region1: Major = Computer Science and Engineering; Percentage = 75-80; Gender = Male; Race = Guindy
  Region N_Region1: Major = Electronics and Communication Engineering; Percentage = 75-80; Gender = Male; Race = Guindy
  Region W_Region2: Major = Electrical and Electronics Engineering; Percentage = 75-80; Gender = Male; Race = Guindy

Organization: Private
  Region N_Region1: Major = Computer Science and Engineering; Percentage = 70-74 or 75-80; Gender = Female or Male; Race = Guindy or Kodambakam
  Region W_Region2: Major = Electronics and Communication Engineering; Percentage = 70-74; Gender = Male; Race = Guindy
  Region W_Region2: Major = Electrical and Electronics Engineering; Percentage = 70-74; Gender = Male; Race = Guindy

G. Interpretation/Evaluation
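The AprioriTid pass described above can be sketched in runnable form. Itemsets are frozensets, C̄k is a per-TID dict, and the containment test relies on the (k-1)-subsets kept from the previous pass; the tiny database and the minsup value are invented for illustration.

```python
from itertools import combinations

def apriori_gen(prev_large, k):
    """Join + prune: size-k candidates whose (k-1)-subsets are all large."""
    cands = set()
    for a in prev_large:
        for b in prev_large:
            u = a | b
            if len(u) == k and all(frozenset(s) in prev_large
                                   for s in combinations(u, k - 1)):
                cands.add(frozenset(u))
    return cands

def apriori_tid(db, minsup):
    """db: {TID: set of items}; minsup: absolute support count."""
    counts = {}
    for t in db.values():                  # first (and only) pass over D
        for i in t:
            s = frozenset([i])
            counts[s] = counts.get(s, 0) + 1
    large = {s for s, c in counts.items() if c >= minsup}
    answer = set(large)
    cbar = {tid: {frozenset([i]) for i in t} for tid, t in db.items()}  # C̄1
    k = 2
    while large:
        ck = apriori_gen(large, k)
        counts = {c: 0 for c in ck}
        new_cbar = {}
        for tid, sets in cbar.items():     # count support against C̄k-1
            ct = {c for c in ck
                  if all(frozenset(s) in sets for s in combinations(c, k - 1))}
            for c in ct:
                counts[c] += 1
            if ct:
                new_cbar[tid] = ct         # entry <TID, Ct> only if non-empty
        cbar = new_cbar
        large = {c for c, n in counts.items() if n >= minsup}
        answer |= large
        k += 1
    return answer

db = {1: {"a", "b", "c"}, 2: {"a", "b"}, 3: {"a", "c"}, 4: {"b", "c"}}
result = apriori_tid(db, 2)
print(sorted(tuple(sorted(s)) for s in result))
```

Note how C̄k shrinks as k grows: transactions containing no candidate simply drop out, which is the efficiency argument made in the text.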
V. CONCLUSIONS
This study has been implemented and conducted on existing data from the teaching organization. Data mining techniques, namely association rules, were used to achieve the goal and extract patterns from the large set of data. Using the organization category as the target, the extracted patterns provide information on practicum placement and on how the matching of the organization's requirements and the student's criteria was done previously. Further analysis can be done by changing the target attributes.
ACKNOWLEDGMENT
REFERENCES
[1] Hipp, J., Guntzer, U., Gholamreza, N. (2000). "Algorithms for Association Rule Mining: A General Survey and Comparison", ACM SIGKDD Explorations, Volume 2 (Issue 1), p. 58.
[2] Fayyad, U. M., Shapiro, G. P., Smyth, P., and Uthurusamy, R. (1996). Advances in Knowledge Discovery and Data Mining, Cambridge, AAAI/MIT Press.
[3] Liu, B., Hsu, W., Ma, Y. (1998). "Integrating Classification and Association Rule Mining", American Association for Artificial Intelligence.
srisan3175@gmail.com
I. INTRODUCTION
In the past couple of decades, wireless communications have undergone dramatic development and have recently been considered as an alternative to wireline networks in providing last-mile broadband services. Such development further stimulates the emergence of multimedia applications, which require wireless networks to support broader bandwidth, higher transmission rates, and lower end-to-end delay. For wireless communications, the challenge in providing multimedia services stems from the hostile wireless channel conditions. Besides channel noise, the time-variant channel fluctuation (i.e., channel fading) severely affects transmission accuracy and the quality of service (QoS). In order to combat interference and channel fading, various diversity, modulation and coding techniques are used.
In [1] the lossy nature of wireless links is studied and a leaky-pipe flow model is designed in which the flow rate changes per hop, which naturally points to hop-by-hop rate control. The effective network utility is determined under two kinds of constraints: link outage constraints and path outage constraints. Jointly optimized hop-by-hop rate control algorithms are used, together with channel state information (CSI). In that work, interference considerations are not analysed; for throughput maximization, only estimations are done and analysed, and a Rayleigh fading model is used.
II. RELATED WORKS
This scheme [8] is based on belief propagation, which is capable of fully exploiting the statistics of interference. Consider the detection of a sequence of symbols of the desired user with one strong interferer of the same signalling format, where the fading processes of both the desired user and the interferer are Gauss-Markov in nature. Belief propagation is an iterative message-passing algorithm for performing statistical inference on graphical models by propagating locally computed beliefs. The belief propagation algorithm shows significant performance gain over traditional interference suppression schemes.
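On a chain-structured model, belief propagation reduces to the forward-backward recursion. The sketch below infers posterior marginals of BPSK symbols that follow a two-state Markov chain and are observed in Gaussian noise, a toy stand-in for the Gauss-Markov setting above; the transition probability, noise level and observations are all invented.

```python
import numpy as np

STATES = np.array([+1.0, -1.0])
P_STAY = 0.9                                  # P(x_t = x_{t-1})
TRANS = np.array([[P_STAY, 1 - P_STAY],
                  [1 - P_STAY, P_STAY]])
SIGMA = 0.8                                   # observation noise std

def likelihood(y):
    """Gaussian likelihood of observation y under each symbol value."""
    return np.exp(-(y - STATES) ** 2 / (2 * SIGMA ** 2))

def posterior_marginals(ys):
    """Forward (alpha) and backward (beta) message passing on the chain."""
    n = len(ys)
    alpha = np.zeros((n, 2)); beta = np.ones((n, 2))
    alpha[0] = 0.5 * likelihood(ys[0])
    for t in range(1, n):
        alpha[t] = likelihood(ys[t]) * (alpha[t - 1] @ TRANS)
        alpha[t] /= alpha[t].sum()            # normalize for stability
    for t in range(n - 2, -1, -1):
        beta[t] = TRANS @ (likelihood(ys[t + 1]) * beta[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

ys = np.array([1.1, 0.9, -0.2, -1.2, -0.8])   # noisy received samples
post = posterior_marginals(ys)
symbols = STATES[np.argmax(post, axis=1)]
print(symbols)
```

The ambiguous middle sample is resolved by its neighbors' beliefs, which is the qualitative advantage the text claims over symbol-by-symbol suppression.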
A. Load Balancing and Interference-Aware Channel Allocation:
This algorithm is used for reducing the overall interference in the network. Several approaches have been proposed for minimizing adjacent-channel effects [5], ranging from coordinating the multiple radios in a wireless node, adjusting antenna parameters and filter characteristics, to using channel overlaps for routing data across devices operating on non-overlapping channels. One of the popular approaches for mitigating interference effects [4] is to choose the transmission channels carefully, making sure that nearby links are on channels that do not interfere significantly. However, due to the dynamic nature of the links in a wireless network, the interference characteristics may vary, and therefore the channel allocation should adapt to these variations.
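The careful-channel-choice idea can be sketched as a greedy pass that gives each link the channel least used by its interfering neighbors. This is only a static illustration of the principle, not the algorithms of [4] or [5]; the conflict graph and channel list are invented.

```python
CHANNELS = [1, 6, 11]                     # non-overlapping 2.4 GHz channels
links = ["L1", "L2", "L3", "L4"]
# neighbors[i] = links close enough to interfere with link i
neighbors = {
    "L1": {"L2", "L3"},
    "L2": {"L1", "L3"},
    "L3": {"L1", "L2", "L4"},
    "L4": {"L3"},
}

def allocate(links, neighbors):
    assign = {}
    for link in links:
        # count how many interfering neighbors already use each channel
        cost = {ch: sum(assign.get(nb) == ch for nb in neighbors[link])
                for ch in CHANNELS}
        assign[link] = min(CHANNELS, key=lambda ch: cost[ch])
    return assign

assign = allocate(links, neighbors)
conflicts = sum(assign[a] == assign[b]
                for a in links for b in neighbors[a]) // 2
print(assign, conflicts)
```

Re-running the pass when link quality changes is one simple way to make the allocation adaptive, as the paragraph above argues it must be.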
Consider a single source node S1 transmitting to a single destination node D1. In this scenario there is no interference, as there is only one source and one destination node.
IV. RESULTS
As the distance between the source and destination increases, the delay increases due to interference, and as delay increases, throughput decreases. As the packet delivery ratio increases, the delay (end-to-end latency) decreases. Fig. 3 shows the end-to-end latency with respect to time.
[8] Yan Zhu, Dongning Guo and Michael L. Honig, "Joint Channel Estimation and Co-Channel Interference Mitigation in Wireless Networks Using Belief Propagation", Department of Electrical Engineering and Computer Science, Northwestern University, Evanston, Illinois 60208.
[9] http://www.wirelesscommunication.nl/reference/chaptr04/outage/compouta.htm
[10] http://www.wirelesscommunication.nl/reference/chaptr05/spreadsp/ber.htm
[11] http://www.isi.edu/nsnam/ns/tutorial/index.html
CONCLUSION
This project aims at reducing the different types of interference at each layer and improving the overall throughput by introducing a throughput maximization algorithm and an interference-aware algorithm along with rate adaptation. An NS2 simulation environment is used to test the results. By introducing the interference-aware algorithm, throughput improved by 3% compared to the standard existing system.
REFERENCES
[1] Qinghai Gao, Junshan Zhang and Stephen V. Hanly, "Cross-Layer Rate Control in Wireless Networks with Lossy Links: Leaky-Pipe Flow, Effective Network Utility Maximization and Hop-by-Hop Algorithms", IEEE Transactions on Wireless Communications, Vol. 8, No. 6, June 2009.
[2] Kaveh Pahlavan, Prashant Krishnamurthy, Principles of Wireless Networks, Prentice Hall of India Private Limited, 2006.
[3] Jochen Schiller, Mobile Communications, 2nd Edition, Pearson Education, 2003.
[4] Yaling Yang, Jun Wang and Robin Kravets, "Interference-aware Load Balancing for Multi-hop Wireless Networks", University of Illinois at Urbana-Champaign.
[5] Nitin H. Vaidya, Vijay Raman, "Adjacent Channel Interference Reduction in Multichannel Wireless Networks Using Intelligent Channel Allocation", Technical Report (August 2009), University of Illinois at Urbana-Champaign.
[6] Sachin Katti, Shyamnath Gollakota, Dina Katabi, "Embracing Wireless Interference: Analog Network Coding", MIT.
[7] David Tse, Pramod Viswanath, Fundamentals of Wireless Communications, University of California, Berkeley, August 13, 2004.
Mr. R. Arun Kumar, Lecturer, r.arunkumar.me@gmail.com
M. Harine, G. Priyadharshini, Final year students, amutha.victory@gmail.com
I. INTRODUCTION
A. GRID COMPUTING
B. TASK SCHEDULING
II.
III.
A. PARTICLE SWARM OPTIMIZATION ALGORITHM
Figure 1. Flowchart of the Simulated Annealing algorithm
B.
The velocity of each agent is updated by

vik+1 = w*vik + c1*rand*(pbesti - sik) + c2*rand*(gbest - sik)    (1)

where
w : weighting function,
cj : weighting factor,
rand : uniformly distributed random number between 0 and 1,
vik : velocity of agent i at iteration k,
sik : current position of agent i at iteration k,
pbesti : pbest of agent i,
gbest : gbest of the group.

The following weighting function is usually utilized in (1):

w = wMax - [(wMax - wMin) x iter] / maxIter    (2)

where wMax = initial weight, wMin = final weight, maxIter = maximum iteration number, and iter = current iteration number. Each agent's position is then updated by

sik+1 = sik + vik+1    (3)
Figure 4. PSO-Flowchart
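The PSO update loop sketched in the flowchart, with the linearly decreasing inertia weight of equation (2) and the position update of equation (3), can be written compactly. The sphere objective and all parameter values below are illustrative choices, not the paper's settings.

```python
import random

def pso(objective, dim=2, n_agents=20, max_iter=200,
        w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, bound=5.0):
    rnd = random.Random(42)
    s = [[rnd.uniform(-bound, bound) for _ in range(dim)] for _ in range(n_agents)]
    v = [[0.0] * dim for _ in range(n_agents)]
    pbest = [list(p) for p in s]
    pbest_val = [objective(p) for p in s]
    g = min(range(n_agents), key=lambda i: pbest_val[i])
    gbest, gbest_val = list(pbest[g]), pbest_val[g]
    for it in range(max_iter):
        w = w_max - (w_max - w_min) * it / max_iter        # eq. (2)
        for i in range(n_agents):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rnd.random() * (pbest[i][d] - s[i][d])
                           + c2 * rnd.random() * (gbest[d] - s[i][d]))
                s[i][d] += v[i][d]                         # eq. (3)
            val = objective(s[i])
            if val < pbest_val[i]:                         # update pbest/gbest
                pbest[i], pbest_val[i] = list(s[i]), val
                if val < gbest_val:
                    gbest, gbest_val = list(s[i]), val
    return gbest, gbest_val

sphere = lambda p: sum(x * x for x in p)
best, best_val = pso(sphere)
print(best_val)   # near 0
```

For grid scheduling the continuous position would be mapped to a discrete task-to-resource assignment before evaluating the objective (e.g., makespan).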
IMPLEMENTATION:

Task(T)/Resources(R)   T1  T2  T3  T4  T5  T6  T7  T8  T9  T10
R1                      0   1   1   0   0   1   0   1   0   0
R2                      1   0   0   0   1   0   0   0   1   0
R3                      1   0   0   1   0   0   1   0   0   1
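One simple way to use a capability matrix like the one above for scheduling is a least-loaded rule: assign each task to a capable resource (matrix entry 1) with the smallest load so far, taking unit task lengths. This is only an illustration of reading the matrix, not the paper's PSO scheduler.

```python
capability = {                      # rows of the table above
    "R1": [0, 1, 1, 0, 0, 1, 0, 1, 0, 0],
    "R2": [1, 0, 0, 0, 1, 0, 0, 0, 1, 0],
    "R3": [1, 0, 0, 1, 0, 0, 1, 0, 0, 1],
}

def schedule(capability, n_tasks=10):
    load = {r: 0 for r in capability}
    plan = {}
    for t in range(n_tasks):
        capable = [r for r, row in capability.items() if row[t] == 1]
        chosen = min(capable, key=lambda r: load[r])   # least-loaded rule
        plan[f"T{t + 1}"] = chosen
        load[chosen] += 1
    return plan, load

plan, load = schedule(capability)
print(plan)
print(max(load.values()))   # makespan under unit task lengths
```

A metaheuristic such as PSO would search over such assignments to minimize the makespan instead of building one greedily.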
[9] I. Foster and C. Kesselman (editors), The Grid: Blueprint for a Future Computing Infrastructure, Morgan Kaufmann Publishers, USA, 1999.
[10] Y. Gao, H.Q. Rong and J.Z. Huang, "Adaptive grid job scheduling with genetic algorithms", Future Generation Computer Systems, 21 (2005), pp. 151-161, Elsevier.
[11] M. Aggarwal, R.D. Kent and A. Ngom, "Genetic Algorithm Based Scheduler for Computational Grids", in Proc. of the 19th Annual International Symposium on High Performance Computing Systems and Applications (HPCS05), pp. 209-215, Guelph, Ontario, Canada, May 2005.
[12] S. Song, Y. Kwok, and K. Hwang, "Security-Driven Heuristics and a Fast Genetic Algorithm for Trusted Grid Job Scheduling", in Proc. of the 19th IEEE International Parallel and Distributed Processing Symposium (IPDPS05), pp. 65-74, Denver, Colorado, USA, April 2005.
[13] Jia Yu and Rajkumar Buyya, "Workflow Scheduling Algorithms for Grid Computing", Grid Computing and Distributed Systems (GRIDS) Laboratory, Department of Computer Science and Software Engineering, The University of Melbourne, Australia.
[14] A. Abraham, R. Buyya and B. Nath, "Nature's Heuristics for Scheduling Jobs on Computational Grids", The 8th IEEE International Conference on Advanced Computing and Communications (ADCOM 2000), pp. 45-52, Cochin, India, December 2000.
[15] J. Kennedy and R. Eberhart, Swarm Intelligence, Morgan Kaufmann, 2001.
[16] J. Kennedy and R. C. Eberhart, "Particle swarm optimization", Proc. of IEEE Intl. Conf. on Neural Networks, pp. 1942-1948, Piscataway, NJ, USA, 1995.
[17] J.F. Schutte and A.A. Groenwold, "A study of global optimization using particle swarms", Journal of Global Optimization, 31 (2005), pp. 93-108, Kluwer Academic Publishers.
[18] M. Fatih Tasgetiren, Yun-Chia Liang, Mehmet Sevkli, and Gunes Gencyilmaz, "Particle Swarm Optimization and Differential Evolution for Single Machine Total Weighted Tardiness Problem", International Journal of Production Research, Vol. 44, No. 22, pp. 4737-4754, 2006.
[19] R. Braun, H. Siegel, N. Beck, L. Boloni, M. Maheswaran, A. Reuther, J. Robertson, M. Theys, B. Yao, D. Hensgen and R. Freund, "A Comparison of Eleven Static Heuristics for Mapping a Class of Independent Tasks onto Heterogeneous Distributed Computing Systems", J. of Parallel and Distributed Computing, Vol. 61, No. 6, pp. 810-837, 2001.
CONCLUSION:
pushpavpr@yahoo.co.in, kannanb6@gmail.com
I. INTRODUCTION
A. NTRU
NTRU (Number Theory Research Unit) is relatively new and
was conceived by Jeffrey Hoff stein, Jill Pipher and Joseph.
H. Silverman. NTRU uses polynomial algebra combined with
clustering principle based on elementary mathematical
theory. The security of NTRU comes from the interaction of
polynomial mixing system with the independence of
reduction modulo two relatively prime numbers. The basic
collection of objects used by the NTRU Public Key
Cryptosystem is the ring R that consists of all truncated
polynomials of degree N-1 having integer coefficients:

a = a_0 + a_1 X + a_2 X^2 + a_3 X^3 + ... + a_{N-2} X^{N-2} + a_{N-1} X^{N-1}

Polynomials are added in the usual way. They are also
multiplied using the cyclic convolution product: for a = Σ_{i=0..N-1} a_i X^i and b in R,
the product c = a * b has coefficients

c_k = Σ_{i=0..k} a_i b_{k-i} + Σ_{i=k+1..N-1} a_i b_{N+k-i} = Σ_{i+j ≡ k (mod N)} a_i b_j
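The convolution multiplication in R can be sketched directly in Python; this is a straightforward O(N^2) version for illustration, with an optional reduction of the coefficients modulo q:

```python
def convolve(a, b, N, q=None):
    """Cyclic convolution c = a * b in the ring Z[X]/(X^N - 1).

    c_k = sum over i + j = k (mod N) of a_i * b_j,
    optionally reduced modulo q."""
    c = [0] * N
    for i in range(N):
        for j in range(N):
            c[(i + j) % N] += a[i] * b[j]
    if q is not None:
        c = [x % q for x in c]
    return c

# (1 + X) * (1 + X) = 1 + 2X + X^2 in Z[X]/(X^3 - 1)
example = convolve([1, 1, 0], [1, 1, 0], 3)
```

Note how the index (i + j) % N implements the truncation X^N = 1: for instance X^2 * X wraps around to the constant term.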
Multiplication by X cyclically rotates the coefficient vector: (a_0, ..., a_{N-1}) → (a_{N-1}, a_0, ..., a_{N-2})
The binary polynomials are chosen at random, and f is computed as f = 1 + pF. If the congruence f · f^(-1) ≡ 1 (mod q) has a solution,
C. Encryption
Encryption is the simplest part in the NTRU PKC.
Encryption only requires generating a random polynomial r
from the ring R that obscures the message. Then the
polynomial r is multiplied by the public key h. And finally
the product of r and h is added to the message to be
encrypted. This means encryption just needs to receive a
message in polynomial form m and the public key h.
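Encryption can thus be sketched as e = r*h + m (mod q). The ternary choice of r below is a simplification: real NTRU parameter sets fix the exact numbers of +1 and -1 coefficients in r (the NumOnes r and NumNegOnes r values of Table I):

```python
import random

def convolve(a, b, N, q):
    # cyclic convolution c = a * b in Z[X]/(X^N - 1), coefficients mod q
    c = [0] * N
    for i in range(N):
        for j in range(N):
            c[(i + j) % N] += a[i] * b[j]
    return [x % q for x in c]

def encrypt(m, h, N, q):
    # fresh random blinding polynomial r for every message; a real
    # parameter set prescribes its +1/-1 coefficient counts exactly
    r = [random.choice([-1, 0, 1]) for _ in range(N)]
    rh = convolve(r, h, N, q)
    # e = r*h + m (mod q)
    return [(rh[k] + m[k]) % q for k in range(N)]
```

Because r is drawn fresh for each message, encrypting the same message twice yields different ciphertexts.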
3) Verification:
To verify a signature s on the message m, the verifier first
checks that s ≠ 0 and then verifies the following two conditions:
1. Compare s to f0*m by checking that their deviation satisfies

   Dmin ≤ Dev(s, f0*m) ≤ Dmax

2. Use the public verification key h to compute the polynomial
t ≡ h*s (mod q), putting the coefficients of t into the range
[-q/2, q/2] as usual, then check that the deviation of t from
g0*m satisfies

   Dmin ≤ Dev(t, g0*m) ≤ Dmax

If the signature passes both tests (A) and (B), it is accepted as valid.
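The two checks can be sketched as follows. The deviation measure Dev and the bounds Dmin, Dmax are only partially specified in the text, so the coefficient-difference count used below is an illustrative assumption, not the scheme's actual definition:

```python
def convolve(a, b, N, q):
    # cyclic convolution in Z[X]/(X^N - 1), coefficients centered in [-q/2, q/2)
    c = [0] * N
    for i in range(N):
        for j in range(N):
            c[(i + j) % N] += a[i] * b[j]
    return [((x + q // 2) % q) - q // 2 for x in c]

def deviation(a, b):
    # illustrative Dev: number of coefficient positions where a and b differ
    return sum(1 for x, y in zip(a, b) if x != y)

def verify(s, m, f0, g0, h, N, q, dmin, dmax):
    if all(x == 0 for x in s):            # reject s = 0
        return False
    # test (A): deviation of s from f0*m within [Dmin, Dmax]
    if not (dmin <= deviation(s, convolve(f0, m, N, q)) <= dmax):
        return False
    # test (B): t = h*s (mod q); deviation of t from g0*m within [Dmin, Dmax]
    t = convolve(h, s, N, q)
    return dmin <= deviation(t, convolve(g0, m, N, q)) <= dmax
```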
IV. PERFORMANCE ANALYSIS
In order to grasp how well NTRU performs for different
applications, a timing analysis was conducted for the Key
generation, Encryption, and Decryption functions. The test
values for the parameters of NTRU used for this performance
analysis are listed in Table I.

TABLE I
TEST VALUES USED FOR PERFORMANCE ANALYSIS

Parameters      107 NTRU   503 NTRU
N               107        503
q               64         256
p               3          3
NumOnes f       15         216
NumNegOnes f    14         215
NumOnes g       12         72
NumNegOnes g    12         72
NumOnes r       5          55
NumNegOnes r    5          55
NumOnes m       25         165
NumNegOnes m    25         165
Functions in NTRU     107 NTRU   503 NTRU
Key Generation (ms)   16.2       699.5
Encryption (ms)       0.6        15.0
Decryption (ms)       1.4        29.4
I. INTRODUCTION
Information security deals with several different "trust"
aspects of information. Another common term is
information assurance. Information security is not confined
to computer systems, nor to information in an electronic or
machine-readable form. It applies to all aspects of
safeguarding or protecting information or data, in whatever
form.
Cryptography [3][10] can also be defined as the science and
art of manipulating messages to make them secure. Here the
original message to be transformed is called the plaintext
and the resulting message after transformation is called the
ciphertext. There are several ways of classifying
cryptographic algorithms. They will be categorized based on
the number of keys that are employed for encryption and
decryption, and further defined by their application and use.
The three types of algorithms that will be discussed are
the NSA. Very little work was again made public until the mid
'70s, when everything changed.
The mid-1970s saw two major public (i.e., non-secret) advances [6].
First was the DES (Data Encryption Standard), submitted
by IBM at the invitation of the
National Bureau of Standards (now NIST), in an effort to
develop secure electronic communication facilities for
businesses such as banks and other large financial
organizations. After 'advice' and modification by the NSA, it
was adopted and published as a FIPS Publication (Federal
Information Processing Standard) in 1977. It has been made
effectively obsolete by the adoption in 2001 of the Advanced
Encryption Standard, also a NIST competition, as FIPS 197.
DES was the first publicly accessible cipher algorithm to be
'blessed' by a national crypto agency such as NSA. The release
of its design details by NBS stimulated an explosion of public
and academic interest in cryptography. DES [19], and more
secure variants of it, are still used today, although DES was
officially supplanted by AES (Advanced Encryption
Standard)[18] in 2001 when NIST announced the selection of
Rijndael, by two Belgian cryptographers.
DES remains in wide use nonetheless, having been
incorporated into many national and organizational standards.
However, its 56-bit key-size has been shown to be insufficient
to guard against brute-force attacks (one such attack,
undertaken by cyber civil-rights group The Electronic Frontier
Foundation, succeeded in 56 hours; the story is told in Cracking
DES, published by O'Reilly and Associates). As a result, use
of straight DES encryption is now without doubt insecure for
use in new crypto system designs, and messages protected by
older crypto systems using DES[19] should also be regarded
as insecure. The DES key size (56-bits) was thought to be too
small by some even in 1976, perhaps most publicly Whitfield
Diffie. There was suspicion that government organizations
even then had sufficient computing power to break DES
messages and that there may be a back door due to the lack of
randomness in the 'S' boxes.
Second was the publication of the paper New Directions in
Cryptography by Whitfield Diffie and Martin Hellman. This
paper introduced a radically new method of distributing
cryptographic keys, which went far toward solving one of the
fundamental problems of cryptography [8], key distribution. It
has become known as Diffie-Hellman key exchange. The
article also stimulated the almost immediate public
development of a new class of enciphering algorithms, the
asymmetric key algorithms.
In contrast, with asymmetric key encryption, there is a pair of
mathematically related keys for the algorithm, one of which is
used for encryption and the other for decryption. Some, but
not all, of these algorithms have the additional property that
one of the keys may be made public since the other cannot be
(by any currently known method) deduced from the 'public'
where the c_il and c_mj values are defined based on the positions of (i, l) and (m, j)
in the key. Since c_il and c_mj can each take either 0 or 1, Eqn. (1)
involves 2^(2n-2) possibilities. Hence, it is infeasible to solve,
and therefore the key k cannot be found.
J. Chosen-plaintext attack
A chosen-plaintext attack (CPA) [11] is an attack model for
cryptanalysis which presumes that the attacker has the
capability to choose arbitrary plaintexts to be encrypted and
obtain the corresponding cipher texts. The goal of the attack is
to gain some further information which reduces the security of
the encryption scheme. In the worst case, a chosen-plaintext
attack could reveal the scheme's secret key.
Suppose we take a plaintext p as an n × n block of zeros; then
the encryption process yields the ciphertext c as an n × n block
of zeros. If we take a plaintext block p with every entry 0 except
one, say (i, j), equal to 1, then the encryption process
spreads the value randomly (as per the key order) in the cipher
block c. From the pair (p, c) it is impossible to obtain the key k.
K. Chosen-cipher text attack
A chosen-cipher text attack (CCA)[2] is an attack model for
cryptanalysis in which the cryptanalyst gathers information, at
least in part, by choosing a cipher text and obtaining its
decryption under an unknown key.
Suppose a ciphertext c is chosen. The decryption process
uses the key in the reverse order and yields the plaintext. The
key spreads zeros and ones randomly in the plaintext according
to the position order. In fact, there would be an unbalanced
number of zeros and ones compared to the ciphertext.
It is therefore infeasible to find the key.
IV ALGORITHM
Security is a major concern in an increasingly multimedia-defined universe where the internet serves as an indispensable
resource for information and entertainment. This algorithm
protects and provides access to critical and time-sensitive
copyrighted material or personal information.
L. Step 1: Randomized key generation
Generate pairs of values (i, j) at random, 1 <= i <= m, 1 <= j <= m,
such that no pair (i, j) is repeated.
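Assuming the key is a random ordering of all m*m cell positions, Step 1 can be sketched as:

```python
import random

def generate_key(m):
    # enumerate all m*m cell positions (i, j), 1 <= i, j <= m, and
    # shuffle them; shuffling guarantees that no pair is repeated
    pairs = [(i, j) for i in range(1, m + 1) for j in range(1, m + 1)]
    random.shuffle(pairs)
    return pairs
```

Shuffling a complete enumeration is preferable to drawing pairs independently, which would require rejection of duplicates.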
S.No.  File type, size   Time for encryption  Time for decryption
1      Txt, 1 KB         78 ms                63 ms
2      Doc, 551 KB       1 s 360 ms           1 s 47 ms
3      Bmp, 14.3 KB      141 ms               143 ms
4      Jpeg, 1.69 MB     3 s 469 ms           3 s 266 ms
5      Pdf, 245 KB       562 ms               531 ms
6      Xls, 29 KB        141 ms               125 ms
7      Mp3, 50.3 KB      171 ms               128 ms
8      Wav, 45.3 MB      36 s                 35 s
9      Vob, 62.8 MB      47 s                 46 s
Key size (bits)  Time for encryption  Time for decryption
-                47 secs              46 secs
16               1 min 10 secs        1 min 9 secs
32               1 min 31 secs        1 min 30 secs
64               2 min 35 secs        2 min 34 secs
V. CONCLUSION
Since the development of cryptology in the industrial and
academic worlds in the seventies, public knowledge and
expertise have grown in a tremendous way, notably because
of the increasing presence of electronic communication
means in our lives. Block ciphers are inevitable building
blocks of the security of various electronic systems.
Recently, many advances have been published in the field of
public-key cryptography, both in the understanding of the
security models involved and in the mathematical security
proofs applied to specific cryptosystems. Unfortunately, this
is still not the case in the world of symmetric-key
cryptography and the current state of knowledge is far from
reaching such a goal. In this paper we developed a novel
dynamic symmetric key generation scheme for multimedia
data encryption.
VI. REFERENCES
[1] M. Bellare and P. Rogaway, Robust computational secret sharing and a unified account of classical secret-sharing goals, Proceedings of the 14th ACM Conference on Computer and Communications Security (CCS), ACM, 2007.
[2] J. Zhou, Z. Liang, Y. Chen, and O. C. Au, Security analysis of multimedia encryption schemes based on multiple Huffman table, IEEE Signal Processing Letters, vol. 14, no. 3, pp. 201-204, 2007.
[3] M. Bellare, A. Boldyreva and A. O'Neill, Deterministic and efficiently searchable encryption, Advances in Cryptology - Crypto 2007 Proceedings, Lecture Notes in Computer Science Vol. 4622, A. Menezes ed., Springer-Verlag, 2007.
[4] Elaine Barker and John Kelsey, Recommendation for Random Number Generation Using Deterministic Random Bit Generators, NIST Special Publication 800-90, revised March 2007.
[5] M. Grangetto, E. Magli and G. Olmo, Multimedia selective encryption by means of randomized arithmetic
[15] J. Kelsey, Bruce Schneier, David Wagner, and C. Hall, Cryptanalytic Attacks on Pseudorandom Number Generators, Fast Software Encryption, Fifth International Workshop Proceedings (March 1998), Springer-Verlag, 1998, pp. 168-188.
[16] Mitsuru Matsui, The First Experimental Cryptanalysis of the Data Encryption Standard, in Advances in Cryptology - Proceedings of CRYPTO '94, Lecture Notes in Computer Science 839, Springer-Verlag, 1994.
[17] Alfred J. Menezes, Paul C. van Oorschot, and Scott A. Vanstone, Handbook of Applied Cryptography, CRC Press, 1997.
[18] Christof Paar and Jan Pelzl, "The Advanced Encryption Standard", Chapter 4 of "Understanding Cryptography, A Textbook for Students and Practitioners", Springer, 2009.
[19] Whitfield Diffie and Martin Hellman, "Exhaustive Cryptanalysis of the NBS Data Encryption Standard", IEEE Computer 10(6), June 1977, pp. 74-84.
[20] John Kelsey, Stefan Lucks, Bruce Schneier, Mike Stay, David Wagner, and Doug Whiting, Improved Cryptanalysis of Rijndael, Fast Software Encryption, 2000, pp. 213-230.
bhuvi.jayabalan@gmail.com
2
tdevi5@gmail.com
I. INTRODUCTION
Recent advances in data collection, data dissemination and
related technologies have initiated a new era of research
where existing data mining algorithms should be
reconsidered from the point of view of privacy preservation.
Privacy refers to the right of users to conceal their personal
information and have some degree of control over the use of
any personal information disclosed to others. To conduct
data mining, one needs to collect data from various parties.
Privacy concerns may prevent the parties from directly
sharing the data and some types of information about the
data. The way in which multiple parties conduct data mining
collaboratively without breaching data privacy is a
challenge.
In recent years, the research community has developed
numerous technical solutions for privacy-preserving data
mining. However, Clifton has pointed out that a notion of
privacy satisfying both technical and societal concerns is
still unknown [3]. Security is a necessary tool to build privacy,
but a communication or transaction environment can be
Support_A = ( Σ_{i=1..Tot. Nodes} A_i ) / Tot. Nodes

Support_AB = ( Σ_{i=1..Tot. Nodes} AB_i ) / Tot. Nodes

Confidence = Support_AB / Support_A
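For a single site, the support and confidence measures can be computed as below; in the paper's distributed setting, the corresponding counts would be aggregated over all participating nodes (Tot. Nodes). The transaction data are illustrative:

```python
def support(transactions, itemset):
    # fraction of transactions that contain every item in itemset
    itemset = set(itemset)
    return sum(1 for t in transactions if itemset <= set(t)) / len(transactions)

def confidence(transactions, a, b):
    # confidence(A -> B) = support(A u B) / support(A)
    return support(transactions, set(a) | set(b)) / support(transactions, a)

# toy market-basket data
tx = [{"milk", "bread"}, {"milk"}, {"milk", "bread", "eggs"}, {"bread"}]
```

With these four transactions, support(milk) is 3/4, support(milk, bread) is 2/4, so confidence(milk -> bread) is 2/3.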
R. Privacy Formalization
A privacy-oriented scheme S preserves data privacy if for any
private data T; the following holds [11]:
|Pr(T | PPDMS) - Pr(T)| ≤ ε
Where,
PPDMS: Privacy-Preserving data mining scheme.
Pr(T | PPDMS): The probability that the private data T is
disclosed after a Privacy-Preserving data mining scheme has
been applied.
Pr(T): The probability that the private data T is disclosed
without any Privacy-Preserving data mining scheme being
applied.
Pr(T | PPDMS) - Pr(T): The difference between the probabilities that
private data T is disclosed with and without Privacy-Preserving
data mining schemes being applied.
To achieve privacy-preserving data mining, the whole
algorithm is reduced to a set of component privacy-oriented protocols.
The privacy preserving data mining algorithm preserves
privacy if each component protocol preserves privacy and the
combination of the component protocols does not disclose
private data. In the secure multiparty computation literature, a
composition theorem describes a similar idea. A privacy-oriented component protocol CP preserves data privacy if for
any private data T, the following holds:

|Pr(T | CP) - Pr(T)| ≤ ε
K-Nearest Neighbor
mraji2231@gmail.com
2
tdevi5@gmail.com
I. INTRODUCTION
World-wide businesses are competing to capture and retain
customers, and towards this Customer Relationship
Management (CRM) is practiced. Organisations turn to CRM
to enable them to be more effective in acquiring, growing and
retaining their profitable customers. Customer relationship
management has emerged as one of the demanding approaches for
firms, and it provides an effective means for customer satisfaction
and retention [12]. In order to carry out CRM, the historical
data about customers and their behaviour need to be analysed.
Towards such analysis, data mining techniques are used
nowadays. Due to the increase in competition, companies have
realised that customers are their valuable assets
and that retaining existing customers is the best way to survive;
indeed, it is more profitable to satisfy existing
customers than to look for new ones [3]. Customer attrition,
generates rules for each class from the rarest class to the
most common class. Given this architecture, it is quite
straightforward to learn rules only for the minority class, a
capability that Ripper provides.
VI. CONCLUSION
Churn is often considered a rare event, and in order to handle
such rare events, class imbalance (i.e. rare class) techniques have been
utilised by many researchers. This paper gives an overview of
various data mining techniques that are applicable to
customer relationship management, particularly to the
prevention of customer churn by using class imbalance. The
authors are currently designing a methodology for preventing
customer churn through class imbalance.
ACKNOWLEDGMENT
I record my sincere thanks to Bharathiar University for
providing necessary financial assistance to carry out my
research.
REFERENCES
[1] Armstrong, G., and P. Kotler, Principles of Marketing, Prentice Hall, New Jersey, 2001.
[2] Au, W. H., Chan, K. C. C., & Yao, X, A novel evolutionary data mining algorithm with applications to churn prediction, IEEE Transactions on Evolutionary Computation, vol. 7, pp. 532-545, 2003.
[3] Berry, M. J. A., & Linoff, G. S, Data mining techniques, second edition - for marketing, sales, and customer relationship management, Wiley, 2004.
[4] Berson, A., Smith, S., & Thearling, K, Building data mining applications for CRM, McGraw-Hill, 2000.
[5] Burez, J., Van den Poel, D, Handling class imbalance in customer churn prediction, Expert Systems with Applications, vol. 36, pp. 4626-4636, 2009.
[6] Bush, R., The Interactive and Direct Marketing Guide, The Institute of Direct Marketing, Middlesex, 2002, Chapter 3.6.
[7] Carrier, C. G., & Povel, O, Characterising data mining software, Intelligent Data Analysis, vol. 7, pp. 181-192, 2003.
[8] Chris Rygielski, Jyan-Cheng Wang, David C. Yen, Data Mining and Customer Relationship Management, Technology in Society, vol. 24, pp. 483-502, 2004.
[9] Fayyad, U.M., Editorial, SIGKDD Explorations, 5(2), 2003.
[10] Garver, M. S, Using Data Mining for Customer Satisfaction Research, Marketing Research, vol. 14, no. 1, pp. 8-12, 2002.
[11] Gupta, S., Hanssens, D., Hardie, B., Kahn, W., Kumar, V., Lin, N., et al, Modeling customer lifetime value, Journal of Service Research, 2006, 9(2), 139-155.
[12] Jayanthi Ranjan, Vishal Bhatnagar, Critical Success Factors For Implementing CRM Using Data Mining, Journal of Knowledge Management Practice, vol. 9, 2008.
[13] Japkowicz, N, Concept learning in the presence of between-class and within-class imbalances, In Proceedings of the Fourteenth Conference of the Canadian Society for Computational Studies of Intelligence, 2001, pages 67-77, Springer-Verlag.
[14] Jorg-Uwe Kietz, Data Mining for CRM and Risk Management, Knowledge discovery services and knowledge discovery applications, 2003.
[15] Kracklauer, A. H., Mills, D. Q., & Seifert, D, Customer management as the origin of collaborative customer relationship management, Collaborative Customer Relationship Management - taking CRM to the next level, 2004, 3-6.
[16] Neslin, S., Gupta, S., Kamakura, W., Lu, J., & Mason, C, Defection detection: Measuring and understanding the predictive accuracy of customer churn models, Journal of Marketing Research, vol. 43(2), pp. 204-211, 2006.
[17] Nitesh V. Chawla, Data Mining For Imbalanced Datasets: An Overview, pp. 853-867, 2005.
[18] Parvatiyar, A., & Sheth, J. N, Customer relationship management: Emerging practice, process, and discipline, Journal of Economic & Social Research, vol. 3, pp. 1-34, 2001.
[19] Rygielski, C., Wang, J.C. & Yen, D.C, Data Mining Techniques for Customer Relationship Management, Technology in Society, vol. 24, pp. 483-502, 2002.
[20] Weiss, G. M, Learning with rare cases and small disjuncts, In Proceedings of the Twelfth International Conference on Machine Learning, pages 558-565, Morgan Kaufmann, 1995.
[21] Weiss, G. M, Timeweaver: a genetic algorithm for identifying predictive patterns in sequences of events, In Proceedings of the Genetic and Evolutionary Computation Conference, pages 718-725, Morgan Kaufmann, 1999.
[22] Weiss, G. M, Mining with rarity: A unifying framework, SIGKDD Explorations, vol. 6(1), pp. 7-19, 2004.
[23] Wirth, R. and Hipp, J, CRISP-DM: Towards a standard process model for data mining, In Proceedings of the 4th International Conference on the Practical Applications of Knowledge Discovery and Data Mining, 2000, pages 29-39, Manchester, UK.
#2
INTRODUCTION
RELATED WORK
3.
Fig. 1. Specification
The scope limits the ontology, specifying what must be
included and what must not. It is an important step (Fig. 1) for
minimizing the amount of data and concepts to be analyzed,
especially given the extent and complexity of the diagnostic
semantics. In successive iterations of the verification process, it
will be adjusted if necessary.
3.2.
Page 78
Proceedings of International Conference on Computers, Communication & Intelligence, July 22nd & 23rd 2010
3.3. Specification: Motivating Scenarios

Fig. 2. Conceptualization
Fig. 3. Implementation
In this step (Fig. 2), a list of the most important terms was
elaborated. The core of basic terms is identified first, and
then they are specified and generalized if necessary. Then
with these concepts as reference, the key term list was
defined. To properly understand the conceptual aspects in
the context, a Unified Modeling Language (UML) [5]
diagram was elaborated with the main relations among
4.2.
Data Module
4.4.
REFERENCES
[1] Soe-Tsyr Yuan and Yen-Chuan Chen (2008) Semantic Ideation Learning for Agent-Based E-Brainstorming, IEEE Transactions on Knowledge and Data Engineering, Vol. 20, No. 2.
[2] A.R. Dennis, A. Pinsonneault, K.M. Hilmer, H. Barki, R.B. Gallupe, M. Huber, and F. Bellavance (2005) Patterns in Electronic Brainstorming: The Effects of Synergy and Social Loafing on Group Idea Generation, Intl J. e-Collaboration, vol. 1, no. 4, pp. 38-57.
[3] W.-L. Chang and S.-T. Yuan (2004) iCare Home Portal: A Quest for Quality Aging e-Service Delivery, Proc. First Workshop Ubiquitous; Seaborne A., RDQL - A Query Language for RDF, W3C Member Submission. http://www.w3.org/Submission/2004/SUBM-RDQL-20040109/
[4] Smith, M., Welty, C., McGuinness, D. (2004) OWL Web Ontology Language Guide, W3C Recommendation, http://www.w3.org/TR/owl-guide/
[5] UML (2006) Unified Modeling Language, http://www.uml.org/; Brickley, D., Guha, R.V. (2004) RDF Vocabulary Description Language 1.0: RDF Schema, W3C Recommendation, http://www.w3.org/TR/rdf-schema/
[6] Caliusco M. L. (2005), A Semantic Definition Support of Electronic Business Documents in eColaboration, PhD thesis, UTN - F.R.S.F. Santa Fe, Argentina.
[7] Corcho O, Fernández-López M, Gómez-Pérez A, López-Cima A. (2005), Building legal ontologies with METHONTOLOGY and WebODE, Law and the Semantic Web: Legal Ontologies, Methodologies, Legal Information Retrieval, and Applications; Australian Computer Society, Inc., Australasian Ontology Workshop (AOW 2006), Hobart, Australia, Conferences in Research and Practice in Information Technology, Vol. 72.
[8] Graciela Brusa, A Process for Building a Domain Ontology: an Experience in Developing a Government Budgetary Ontology, Dirección Provincial de Informática, San Martín 2466, Santa Fe (Santa Fe), Argentina, gracielabrusa@santafe.gov.ar
[9] www.isakanyakumari.com
[10] www.ontotext.com/kim/semanticannotation.html
[11] Goutam Kumar Saha, ACM Ubiquity, v.8, 2007.
[12] Matthew Horridge, Holger Knublauch, Alan Rector, Robert Stevens, Chris Wroe, A Practical Guide To Building OWL Ontologies Using The Protege-OWL Plugin and CO-ODE Tools, Edition 1.0, August 27, 2004.
[13] Ian Horrocks, Ontologies and the Semantic Web, Communications of the ACM, December 2008, Vol. 51, No. 12.
[14] http://www.w3.org/TR/owl-guide/
[15] Pablo Castells, Miriam Fernandez, and David Vallet, An Adaptation of the Vector-Space Model for Ontology-Based Information Retrieval, IEEE Transactions on Knowledge and Data Engineering, Vol. 19, No. 2, February 2007.
ponnu_svks@yahoo.co.in
d_bommudurai@hotmail.com
I. INTRODUCTION
In wireless sensor networks [1, 2] the sensors are deployed
randomly or densely in deterministic or non-deterministic
environments. The sensor nodes are small in nature with limited
power. Energy is vital for a wireless sensor network
because it is not easy to recharge the nodes once they have
been deployed in a hostile environment. So conserving
energy efficiently is the primary motive in wireless
sensor networks. A sensor node consumes energy for
sensing activity, processing and transmitting. The energy
required for transmitting data is high compared to sensing
and processing it: the energy [3,4] spent on
transmitting a single bit of data over 100 meters equals that of
processing 3000 instructions. WSNs have been widely
employed in several applications such as habitat monitoring,
disaster supervision, defence applications, and commercial
applications.
An important challenge in designing a wireless sensor
network is that bandwidth and energy are far more limited than
in a wired network environment. Innovative techniques are needed
to eliminate the energy inefficiencies that shorten the lifetime of
the network and to use the limited bandwidth efficiently. So it is
highly desirable to find new methods for energy-efficient route
(Figure: sensor field with nodes N1-N15 and the base station.)

TABLE I
ROUTING TABLE

Node-ID   Distance    Energy
N1        25 Meters   1 Joule
N2        37 Meters   0.97 Joules
...       ...         ...
Nn        N Meters    N Joules
BS        -           High Energy
Th = ( Σ_{i=1..n} E_Ni ) / No. of nodes in the range

Where,
If the energy level of the node is greater than the threshold level,
then the data is transmitted to that node. If the
energy level of the node is less than the threshold level, then the
previous least-distance node from the routing table is
taken and compared with the threshold level. If the condition is
met, the transmission is made to that node. During
successive transmissions, the energy levels of the nodes
change. Suppose the node N2 acts as the source node after sensing
an event. It takes a transmission range of 50 meters. The
nodes N4, N5 and N7 are located within the range, and the mean
threshold is calculated. Node N2 then decides to transmit
to N7, which is at the least distance to the base station and whose
energy level is greater than the mean threshold level, so N7
acts as the target node. Next, N7 takes a transmission range of
50 meters. The nodes N6, N8, N11 and N10 are located within the
range, and the mean threshold is calculated. Node N7 would
transmit the data to node N10, which is at the least distance to the
base station, but the energy level of that node is less than the
mean threshold. So it takes the previous least-distance node,
N11, and compares its energy level with the mean threshold.
Since that energy level is more than the mean threshold, N11 is
taken as the target node. The same procedure is repeated by
N11 to transmit the data to node N15 and finally to the base
station.
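The forwarding rule described above (choose the in-range node nearest to the base station whose energy exceeds the mean threshold, otherwise fall back to the next-nearest node) can be sketched as follows; the node distances and energies are illustrative assumptions, not values from the paper:

```python
def mean_threshold(nodes_in_range):
    # MTh: mean energy of the nodes inside the transmission range
    return sum(n["energy"] for n in nodes_in_range) / len(nodes_in_range)

def select_target(nodes_in_range):
    """Pick the in-range node nearest to the base station whose energy
    exceeds the mean threshold; otherwise fall back to the next-nearest."""
    th = mean_threshold(nodes_in_range)
    for node in sorted(nodes_in_range, key=lambda n: n["dist_to_bs"]):
        if node["energy"] > th:
            return node
    return None  # no in-range node qualifies

# N2's neighbours from the walkthrough (distances and energies illustrative)
neighbours = [
    {"id": "N7", "dist_to_bs": 60, "energy": 0.95},
    {"id": "N4", "dist_to_bs": 80, "energy": 0.70},
    {"id": "N5", "dist_to_bs": 90, "energy": 0.60},
]
```

With these values the mean threshold is 0.75, so N7, the nearest node to the base station, qualifies and is selected, matching the walkthrough.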
(Figure: sensor field with nodes N1-N15 and the base station; the forwarded packet carries the sensed data and the node ID.)
      //Route to base station
      End If
   End If
End

//Finding the minimum-distance node: sort the in-range nodes by distance
Minimum_dist ()
Begin
   For each (Node n in transmission range)
      For each (Node m in transmission range)
         If (N[n] < M[n])
            t = N[n]
            N[n] = M[n]
            M[n] = t
         End If
End

//Fall back to the next minimum-distance node
Get_next_minimum_dist ()
Begin
   For each (Node m in transmission range)
      Nn = M[n--]
      Get_target_node (Nn)
End
Parameters              Values
Number of nodes         50
Network dimension       50 * 50 m2
Initial energy          1 Joule
Dead node               < 0.002 Joule
Sense radius            20 Meters
Deployment              Random
Base station location   Middle of left side
Data packet size        53 bytes (variable length)
Transmission range      50 Meters
Topology                Static topology
TABLE IV
COMPARISON OF HPSD AND DER

S.No.  HPSD                              DER
1)     Source node selects the target    Selects the target node based on the criteria:
       node from the nearest neighbor    a) in the range, the node that is nearest to the
       nodes.                            base station; b) the node holds an energy level
                                         greater than the threshold level.
2)     -                                 Threshold energy level is calculated based on a
                                         mean measure.
4)     -                                 Fewer nodes are required for transmission, due
                                         to the assumed transmission range.
5)     -                                 Time delay is less for transmitting a packet.

Notation used in the algorithm:

Name         Usage
Tr           Transmission range
Nn           Minimum distance node
EN1 ... ENn  Energy level of nodes in the range
EN           Total energy of the nodes in the range
BTh          Base Threshold
MTh          Mean Threshold
BS           Base Station
C. Simulation Result
Fig.4 shows the simulation results for total energy
consumption for three compared schemes. The energy
consumption of DER protocol is the least among three
protocols (DER, LEACH and HPSD).
As the LEACH protocol adopts one-hop communication, the
death of nodes leads to an increase in communication
distance, which causes the energy consumption of the LEACH
protocol to increase rapidly after 300 s. The additional cost
of transferring control packets to the neighbourhood nodes
makes the energy cost of HPSD higher than that of the DER
protocol.
Fig. 5 shows the simulation results of the time
consumption for the two schemes. In the HPSD protocol, the time
consumed in transmitting a packet is high compared to
DER, because the number of nodes required for transmitting a
V. PERFORMANCE EVALUATION
This section describes the simulation environment, its
parameters and the simulation results.
REFERENCES
[20] Jamal N. Al-Karaki and Ahmed E. Kamal, Routing Techniques in Wireless Sensor Networks: A Survey, 2004.
[21] Giuseppe Anastasi, Marco Conti, Mario Di Francesco and Andrea Passarella, Energy Conservation in Wireless Sensor Networks: A Survey, Volume 7, Issue 3 (May 2009), pages 537-568.
annalakshmivmca@gmail.com
darshini.rajendran@gmail.com
3
bhuvanesh_v@yahoo.com
I. INTRODUCTION
Data mining, the extraction of hidden predictive information
from large databases, is a powerful new technology with great
potential to help companies focus on the most important
information in their data warehouses [1]. Data mining
techniques are the result of a long process of research and
product development. Data mining is a component of a wider
process called Knowledge discovery from databases.
Bioinformatics is the science of organizing and analyzing
biological data that involves collecting, manipulating,
analyzing, and transmitting huge quantities of data.
Bioinformatics and data mining provide exciting and
challenging researches in several application areas especially
in computer science. Bioinformatics is the science of
managing, mining and interpreting information from
biological sequences and structures [2]. Data are collected
from genome analysis, protein analysis, microarray data and
probes of gene function by genetic methods.
The Gene Ontology (GO) is a major bioinformatics initiative
to unify the representation of gene and gene product attributes
across all species. The ontology covers three domains: cellular component, molecular function and biological process. GO is used to explore and analyze the data by using
the GO term object. A gene is the basic unit of heredity in a
living organism. All living things depend on genes. Genes
hold the information to build and maintain an organism's cells
and pass genetic traits to offspring. GO represents terms within a Directed Acyclic Graph (DAG): terms are represented as nodes within the graph, connected by relationships represented as edges.
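The DAG structure described here can be represented directly as a parent map. A minimal sketch with illustrative term names (not real GO identifiers):

```python
# Toy GO-like DAG: each term maps to its parent terms (edges point toward the root).
dag = {
    "root": [],
    "metabolic_process": ["root"],
    "biosynthetic_process": ["metabolic_process"],
    "catabolic_process": ["metabolic_process"],
}

def ancestors(term, dag):
    """All terms reachable from `term` by following parent edges (term included)."""
    seen = {term}
    stack = [term]
    while stack:
        for parent in dag[stack.pop()]:
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen
```

Ancestor sets like these are the basic ingredient of the similarity measures discussed below.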
IC(t_i) = -log p(t_i)   (1)
Where p(t_i) is the probability of a term occurring in the corpus:
p(t_i) = freq(t_i) / freq(root)   (2)
Where the corpus is the set of annotations for all genes under consideration. "Root" represents one of the three root ontology terms and freq(root) is the number of times a gene is annotated with any term within that ontology. freq(t_i) is given by:
freq(t_i) = |{g : g is annotated with t_i or any descendant of t_i}|   (3)
The Lord measure calculates the information content of term t_i.
2) Resnik's Measure: Resnik's measure calculates the similarity between two terms by using only the IC of the lowest common ancestor (LCA) shared between two terms t1 and t2:
sim_Resnik(t_1, t_2) = IC(LCA(t_1, t_2))   (4)
3) Lin's Measure: Lin's measure of similarity takes into consideration the IC values of each of the terms t1 and t2 in addition to the LCA shared between the two terms, and is defined as follows:
sim_Lin(t_1, t_2) = 2 * IC(LCA(t_1, t_2)) / (IC(t_1) + IC(t_2))   (5)
This has the advantage that it maps onto values in the interval [0, 1], unlike Resnik's measure, which maps onto the interval [0, inf). Lin's measure also accounts for both the commonality and the difference between terms. Resnik's measure does, however, have the desirable property that terms close to the root of the ontology have a low similarity; this is not the case for Lin's measure.
4) Jiang & Conrath Measure: Jiang and Conrath proposed an IC-based semantic distance, which can be transformed into a similarity measure:
dist_JC(t_1, t_2) = IC(t_1) + IC(t_2) - 2 * IC(LCA(t_1, t_2))   (6)
This measure considers the probabilities of terms t1 and t2 and the LCA of the two terms. The highest score for Lin and Jiang is 1, and Resnik's measure has no upper bound.
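The IC-based measures of Eqs. (1)-(6) can be sketched as follows, assuming term frequencies and precomputed ancestor sets are available; the data structures are illustrative assumptions, not this paper's implementation:

```python
import math

def information_content(term, freq):
    # IC(t) = -log p(t), with p(t) = freq(t) / freq(root)  (Eqs. 1-2)
    return -math.log(freq[term] / freq["root"])

def lca_ic(t1, t2, freq, anc):
    # IC of the lowest common ancestor: the most informative shared ancestor
    shared = anc[t1] & anc[t2]
    return max(information_content(t, freq) for t in shared)

def resnik(t1, t2, freq, anc):         # Eq. (4)
    return lca_ic(t1, t2, freq, anc)

def lin(t1, t2, freq, anc):            # Eq. (5)
    return (2 * lca_ic(t1, t2, freq, anc)
            / (information_content(t1, freq) + information_content(t2, freq)))

def jiang_conrath(t1, t2, freq, anc):  # Eq. (6), a distance rather than a similarity
    return (information_content(t1, freq) + information_content(t2, freq)
            - 2 * lca_ic(t1, t2, freq, anc))
```

Here `freq` counts annotations to a term or any of its descendants, and `anc` maps each term to the set of its ancestors (itself included).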
5) Term Overlap: The term overlap score for two genes is calculated as the number of terms that occur in the intersection of the two gene products' annotation sets:
TO(g_1, g_2) = |T(g_1) ∩ T(g_2)|   (7)
In normalized term overlap (NTO), the term overlap score is divided by the annotation set size for the gene with the lower number of GO annotations:
NTO(g_1, g_2) = |T(g_1) ∩ T(g_2)| / min(|T(g_1)|, |T(g_2)|)   (8)
Traditional cardinality-based similarity measures such as
Jaccard and Dice [21] are computed similarly to NTO, but use
the union or sum, respectively, of the two gene annotation set
sizes as the normalizing factor.
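These overlap-based scores are straightforward to compute over annotation sets. A sketch using Python sets, covering Eqs. (7)-(8) plus the Jaccard and Dice normalizations just described:

```python
def term_overlap(g1, g2):                    # Eq. (7)
    return len(g1 & g2)

def normalized_term_overlap(g1, g2):         # Eq. (8): normalize by the smaller set
    return len(g1 & g2) / min(len(g1), len(g2))

def jaccard(g1, g2):                         # union as the normalizing factor
    return len(g1 & g2) / len(g1 | g2)

def dice(g1, g2):                            # sum of set sizes as the normalizing factor
    return 2 * len(g1 & g2) / (len(g1) + len(g2))
```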
BB. Vector-based Approach
sim(G_1, G_2) = (v_1 · v_2) / (|v_1| |v_2|)   (9)
Where v_i represents a vector of terms constructed from an annotation (group of terms) G_i, |·| corresponds to the size of the vector and · corresponds to the dot product between two vectors. The source of descriptiveness, commonality and difference is the same as for the set-based approaches.
CC. Edge-based Approach
Eq., (10)
Eq., (11)
sim(t_1, t_2) = -log( len(t_1, t_2) / (2 * d_max) )   (12)
Where d_max is the maximum depth of the taxonomy, and the len function is the simple calculation of the shortest path length (i.e. weight = 1 for each edge).
DD. Set-based Approach
Set-based methods for measuring the similarity of annotations are based on the Tversky ratio model of similarity, which is a general model of distance between sets of terms. It is represented by the formula in Eq. (13).
EE. Graph-based Approach
An ontology is a directed acyclic graph (DAG) whose edges correspond to relationships between terms. Thus it is natural to compare terms using methods for graph matching and graph similarity. We may consider the similarity between annotations in terms of the sub-graph that connects terms within each annotation; annotation similarity is then measured in terms of the similarity between the two graphs. Graph matching has only a weak correlation with similarity between terms. It is also computationally expensive, graph matching being an NP-complete problem on general graphs.
1) Improving Similarity Measures by Weighting Terms: Set, vector and graph-based methods for measuring similarity between annotations can be improved by introducing a weighting function into the similarity measure. For example, the weighted Jaccard distance can be formulated as:
sim(G_1, G_2) = Σ_{T_x ∈ G_1 ∩ G_2} m(T_x) / Σ_{T_x ∈ G_1 ∪ G_2} m(T_x)   (16)
Where, as before, G1 and G2 are annotations or sets of terms describing data (e.g. a gene product), Tx is the xth term from a set of terms and m(Tx) denotes the weight of Tx.
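A minimal sketch of this weighted Jaccard measure, assuming `m` is a precomputed term-to-weight mapping:

```python
def weighted_jaccard(g1, g2, m):
    """Weighted Jaccard of Eq. (16): each term contributes its weight m(T)
    instead of a count of 1. `m` maps terms to non-negative weights."""
    inter = sum(m[t] for t in g1 & g2)
    union = sum(m[t] for t in g1 | g2)
    return inter / union
```

With all weights equal to 1 this reduces to the ordinary Jaccard measure.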
sim(G_1, G_2) = f(G_1 ∩ G_2) / ( f(G_1 ∩ G_2) + α·f(G_1 − G_2) + β·f(G_2 − G_1) )   (13)
Where G1 and G2 are sets of terms or annotations from the same ontology and f is an additive function on sets (usually set cardinality). For α = β = 1 we get the Jaccard distance between sets:
sim(G_1, G_2) = |G_1 ∩ G_2| / |G_1 ∪ G_2|   (14)
And for α = β = 1/2 we get the Dice distance between sets.
m(T_i) = -log p(T_i)   (17)
Where p(T_i) corresponds to the probability of a term T_i or its taxonomic descendants occurring in a corpus.
Other Weighting Approaches: Other measures of information can be used, not necessarily relying on corpus data. One measure [14] relies on the assumption that the way the ontology is constructed is semantically meaningful:
m(T_i) = 1 − log(desc(T_i) + 1) / log(N)   (15)
where desc(T_i) is the number of taxonomic descendants of T_i and N is the total number of terms in the ontology.
In the set-based approaches the source of descriptiveness of an annotation is its set of terms. Each term and its set of associated instances are considered independent of other terms. The commonality and difference between annotations are modeled as set intersection and set difference of the sets of terms, respectively. Set-based approaches return a similarity of zero if the annotations do not share common terms, ignoring the fact that terms may be closely related. Because of the atomic nature of terms in the set-based approach, the monotonicity property does not apply.
Eq., (18)
FF. Term-based Approaches
In term-based approaches, similarities between pairs of terms from each annotation are computed. These are then combined in order to characterize the similarity between the annotations as a whole. There are several ways to combine the similarities of pairs of terms, such as the min, max or average operations. Term-based approaches depend on a function s(Ti, Tj), where Ti and Tj are terms from two annotations G1 and G2 respectively; s(Ti, Tj) provides a measure of distance/similarity between these two terms. Once distances have been measured between all possible pairs of terms, they are aggregated using an operation such as the max or the average of all distances. For example:
Eq., (19)
sim(T_1, T_2) = 2·dist(T_lcta, T_root) / ( dist(T_1, T_lcta) + dist(T_2, T_lcta) + 2·dist(T_lcta, T_root) )   (20)
Where T1 and T2 are the two terms being compared, Tlcta is the term that corresponds to the lowest common taxonomic ancestor between T1 and T2, Troot denotes the root node of the ontology (assuming that the ontology has only one root), and dist(Ti, Tj) denotes the graph distance between terms Ti and Tj. The 2·dist(Tlcta, Troot) component of the denominator serves to normalize the measure. More sophisticated term-based approaches combine multiple measures of term similarity and aggregate similarity values using more complex functions.
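A term-based measure amounts to a cross-pairing plus an aggregation step; the pair scorer `s` and the aggregator are interchangeable, as described above. A minimal sketch:

```python
def annotation_similarity(g1, g2, s, aggregate=max):
    """Term-based similarity: score every cross pair of terms with s(Ti, Tj),
    then aggregate the scores (max here; average is another common choice)."""
    return aggregate(s(t1, t2) for t1 in g1 for t2 in g2)
```

Any of the pairwise measures above (Resnik, Lin, Eq. (20), etc.) can be plugged in as `s`.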
Approach / Measures and Author(s) / Refs:
- Node-based: the similarity value is defined as the information content value of specific gene pairs. Resnik; Lin; Jiang and Conrath; Meeta Mistry and Paul Pavlidis. Refs [3], [4], [6], [13], [19], [21], [22].
- Edge-based: Rada R, Mili H, Bicknell E, Blettner M; Wu and Palmer; Lee JH, Kim MH, Lee YJ. Refs [15], [16], [17].
- Set-based: Huang da W, Sherman BT, Tan Q, Collins JR, Alvord WG, Lempicki RA; Jeffrey Hau, William Lee; Brendan Sheehan. Refs [12], [18], [19].
- Graph-based: Brendan Sheehan, Aaron Quigley, Benoit Gaudin and Simon Dobson; Lee JH, Kim MH, Lee YJ. Refs [13], [16], [19].
- Term-based: Resnik P; Lord PW, Stevens RD; Meeta Mistry and Paul Pavlidis; Brendan Sheehan. Refs [4], [6], [13], [19].
- Vector-based: Refs [7], [8], [10].
IV.
Each descendant can have its own genes. The genes can be found using the goid values; descendants contain the corresponding goid for the genes. The respective genes are shown in Fig. 4.
Similarly we can get the genes from the yeast dataset and arrange them in the structure. For example, we consider six genes, namely YKL050C, YGL114W, YBR241C, YIL166C, YBL095W and YOR390W. The corresponding goid and descendants are shown in Fig. 6.

V.

A. Experimental Results
The traditional methods for semantic similarity measurement are calculated based on content comparison. The basic types of traditional methods are the Euclidean, Jaccard and Cosine measures:

Genes     | YKL050C                    | YGL114W                    | YBR241C
          | Euclid.  Jaccard  Cosine   | Euclid.  Jaccard  Cosine   | Euclid.  Jaccard  Cosine
YKL050C   | 0        0        0        | 21.1896  0.7143   0.0061   | 11.0905  0.7143   0.0020
YGL114W   | 21.1896  0.7143   0.0061   | 0        0        0        | 21.9089  0.8571   0.0065
YBR241C   | 11.0905  0.7143   0.0020   | 21.9089  0.8571   0.0065   | 0        0        0

Genes     | YKL050C          | YGL114W          | YBR241C
          | Lin      Jiang   | Lin      Jiang   | Lin      Jiang
YKL050C   | 0        0       | 1.5728   0.9039  | 1.5749   0.9037
YGL114W   | 1.5728   0.9039  | 0        0       | 1.6721   0.8248
YBR241C   | 1.5749   0.9037  | 3.6721   0.8248  | 0        0
CONCLUSION
REFERENCES
[1] Arun K. Pujari, Data Mining Techniques, Universities Press (India) Limited, 2001, ISBN 81-7371-380-4.
[2] Daxin Jiang, Chun Tang, and Aidong Zhang, "Cluster Analysis for Gene Expression Data: A Survey," IEEE Transactions on Knowledge and Data Engineering, vol. 16, no. 11, November 2004.
[3] Resnik P: "Using Information Content to Evaluate Semantic Similarity in a Taxonomy," Proc. 14th Int'l Joint Conf. on Artificial Intelligence, 1995, vol. 1, pp. 448-453.
[4] Resnik P: "Semantic Similarity in a Taxonomy: An Information-Based Measure and its Application to Problems of Ambiguity in Natural Language," J. Artif. Intell. Res. 1999, 11:95-130.
[5] Schlicker A, Domingues FS, Rahnenfuhrer J, Lengauer T: "A new measure for functional similarity of gene products based on Gene Ontology," BMC Bioinformatics 2006, 7:302.
[6] Lord PW, Stevens RD, Brass A, Goble CA: "Semantic similarity measures as tools for exploring the gene ontology," Pac. Symp. Biocomputing 2003:601-612.
[7] Jiang JJ, Conrath DW: "Semantic Similarity Based on Corpus Statistics and Lexical Taxonomy," ROCLING X, Taiwan, 1997.
[8] Tatusova TA, Madden TL: "BLAST 2 Sequences, a new tool for comparing protein and nucleotide sequences," FEMS Microbiol. Lett. 1999, 174(2):247-250.
[9] Guangchuang Yu: "GO-terms Semantic Similarity Measures," October 28, 2009.
[10] Francisco M. Couto, Mário J. Silva, Pedro Coutinho: "Implementation of a Functional Semantic Similarity Measure between Gene-Products," DI/FCUL TR 03-29, 2003.
[11] Chabalier J, Mosser J, Burgun A: "A transversal approach to predict gene product networks from ontology-based similarity," BMC Bioinformatics 2007, 8:235.
[12] Huang da W, Sherman BT, Tan Q, Collins JR, Alvord WG, Roayaei J, Stephens R, Baseler MW, Lane HC, Lempicki RA: "The DAVID Gene Functional Classification Tool: a novel biological module-centric algorithm to functionally analyze large gene lists," Genome Biol. 2007, 8(9):R183.
[13] Meeta Mistry and Paul Pavlidis: "Gene Ontology term overlap as a measure of gene functional similarity," BMC Bioinformatics 2008, 9:327.
[14] Seco N, Veale T, Hayes J: "An Intrinsic Information Content Metric for Semantic Similarity in WordNet," ECAI 2004:1089-1090.
[15] Rada R, Mili H, Bicknell E, Blettner M: "Development and Application of a Metric on Semantic Nets," IEEE Transactions on Systems, Man, and Cybernetics 1989, 19:17-30.
[16] Lee JH, Kim MH, Lee YJ: "Information Retrieval Based on Conceptual Distance in IS-A Hierarchies," Journal of Documentation 1993, 49:188-207.
[17] Wu Z, Palmer M: "Verb semantics and lexical selection," 32nd Annual Meeting of the Association for Computational Linguistics, New Mexico State University, Las Cruces, New Mexico, 1994:133-138.
[18] Jeffrey Hau, William Lee, John Darlington: "A Semantic Similarity Measure for Semantic Web Services," Imperial College London, 180 Queen's Gate, London, UK.
[19] Brendan Sheehan, Aaron Quigley, Benoit Gaudin and Simon Dobson: "A relation based measure of semantic similarity for Gene Ontology annotations," BMC Bioinformatics 2008, 9:468.
[20] Chabalier J, Mosser J, Burgun A: "A transversal approach to predict gene product networks from ontology-based similarity," BMC Bioinformatics 2007, 8:235.
[21] Popescu M, Keller JM, Mitchell JA: "Fuzzy measures on the Gene Ontology for gene product similarity," IEEE/ACM Trans. Comput. Biol. Bioinform. 2006, 3(3):263-274.
[22] Lin D: "An information-theoretic definition of similarity," 15th International Conf. on Machine Learning, Morgan Kaufmann, San Francisco, CA, 1998:296-304.
Dr. V. Saravanan, T. R. Sivapriya
Dr. N.G.P. Institute of Technology, Coimbatore, India
tvsaran@hotmail.com
Abstract
Artificial neural networks have been applied in a variety of real-world scenarios with remarkable success. In this paper, a hybrid PSO based back propagation neural network for classifying brain MRI is proposed. The results show that there is a marked difference while training the BPN with PSO. A customized PSO is also used to train the BPN, which again results in better performance compared to the conventional PSO used in training. The decision tree extracts rules from the trained network that aid in medical diagnosis.
The neural model based on Particle Swarm Optimisation was trained with 150 samples (including all patients with mild and severe dementia). An additional hundred samples were used for validation. The proposed algorithm outperforms conventional training algorithms and is found to have 95% sensitivity and 96% accuracy. The samples were also assessed by a radiologist and a psychiatrist by means of a blinded study. When compared with the experts, the algorithm achieved good accuracy with a higher rate of reliability for the assessment of mild and severe dementia.
Keywords --- Image mining, Neural networks, PSO, Decision tree
I. INTRODUCTION
There has been a growing body of research applying ANNs for classification in a variety of real-world applications. In such applications, it is desirable to have a set of rules that explains the classification process of a trained network. The classification concept represented as rules is certainly more comprehensible to a human user than a collection of ANN weights. This paper proposes a fast hybrid PSO based BPN classifier, whose results are mined by a decision tree for the diagnosis of dementia.
A neural network, by definition, is a massively parallel distributed processor made up of simple processing units, each of which has a natural propensity for storing experiential knowledge and making the knowledge available for use [11]. Neural networks are fault tolerant and are good at pattern recognition and trend prediction. In the case of limited knowledge, artificial neural network algorithms are frequently used to construct a model of the data.
g represents the index of the particle with the best fitness, and d is the dth dimension.
IV. DECISION TREE
Decision tree induction is one of the simplest, and yet most successful, forms of learning algorithm. It serves as a good introduction to the area of inductive learning, and is easy to implement. A decision tree takes as input an object or situation described by a set of attributes and returns a decision, the predicted output value for the input. The input attributes can be discrete or continuous; for now, we assume discrete inputs. The output value can also be discrete or continuous: learning a discrete-valued function is called classification learning; learning a continuous function is called regression.
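As an illustration of how such a tree maps attributes to a decision, here is a tiny hand-built tree over discrete attributes. The attributes are the classic play-tennis example, used purely for illustration and not data from this paper; an induction algorithm such as ID3 would learn this structure from data:

```python
# Internal nodes test one attribute; leaves carry the predicted class.
tree = ("outlook", {
    "sunny": ("humidity", {"high": "no", "normal": "yes"}),
    "overcast": "yes",
    "rain": ("wind", {"strong": "no", "weak": "yes"}),
})

def predict(node, example):
    """Descend the tree, following the branch for each tested attribute,
    until a leaf (a plain class label) is reached."""
    while not isinstance(node, str):
        attribute, branches = node
        node = branches[example[attribute]]
    return node
```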
A. Evaluation of Rule set
The constructed rule set is evaluated against the instance test
set. Each instance is sequentially tested with rules in the rule
set, until it matches one rule antecedent. The instance class is
then compared to the matching rule predicted class. When all
[Figure: system block diagram — preprocessing, feature extraction, training in ANN, testing, classification of CC and VT into Class I (normal) and Class II (highly/less demented), and rule extraction]
((2/3)*(N1))+N2
Rules:
IF CC = +++ AND HIP = +++ AND BG = +++ AND VT = ++++ THEN DEMENTIA = HIGH
IF CC = +++ AND HIP = + AND BG = + AND VT = +++ THEN DEMENTIA = MODERATE
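The two rules above can be applied mechanically. A sketch that encodes them as antecedent/consequent pairs; the dictionary encoding is an illustrative assumption:

```python
# Feature codes follow the paper's legend: +++ high increase, + less increase.
rules = [
    ({"CC": "+++", "HIP": "+++", "BG": "+++", "VT": "++++"}, "HIGH"),
    ({"CC": "+++", "HIP": "+", "BG": "+", "VT": "+++"}, "MODERATE"),
]

def classify(instance, rules, default="UNKNOWN"):
    """Return the consequent of the first rule whose antecedent fully matches."""
    for antecedent, consequent in rules:
        if all(instance.get(k) == v for k, v in antecedent.items()):
            return consequent
    return default
```

This first-match loop is also how a rule set is evaluated against a test instance, as described in the next subsection.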
B. Validation

TABLE I: COMPARISON OF CONVERGENCE
Iterations to converge on the MRI set (200 samples): PSO-BPN 175-300 iterations; BPN 575, 800 and >2000 iterations for the normal and highly demented classes.

TABLE 2: COMPARISON OF EFFICIENCY
MEASURE         BPN   HYBRID PSO-BPN   CUSTOMISED PSO-BPN
Sensitivity %   85    92               95
Accuracy %      83    90               96
Specificity %   87    93               97

CC = cerebral cortex; HIP = hippocampus; VT = ventricle; BG = basal ganglia;
+++ high increase; + less increase; -- no increase

GBest and MSE over iterations (identical values were recorded for BPN-PSO with 25 particles and BPN-PSO with 10 particles):
ITERATIONS   GBest                  MSE
175          0.2478632998245176     0.00316458
200          0.24786329072140684    0.00312228
225          0.24786329049380179    0.00303384
250          0.24786329048466602    0.0029695
275          0.24786329048466071    0.00287065
300          0.24786329048466071    0.00276327
REFERENCES
VII. CONCLUSION
The hybrid PSO based BPN effectively classifies the brain MRI images of dementia patients, and appears to be highly reliable for use by practitioners in the medical field. Mining large databases of images poses severe challenges; however, the hybrid classifier is offered to overcome difficulties in diagnosis and treatment. The rules mined from
ci.johnpaul@yahoo.com, elson_april88@yahoo.com, k.najeeb@gmail.com
I. INTRODUCTION
The high transistor density, together with the growing importance of reliability as a design issue, has made early estimation of worst-case power dissipation (peak power estimation) [1] in the design cycle of logic circuits an important problem. High power dissipation may result in decreased performance or, in extreme cases, cause burn-out and damage to the circuit. The increased transistor density in processors is well described by Moore's law, a long-term trend in computing hardware in which the number of transistors that can be placed inexpensively on an integrated circuit doubles approximately every two years. Whenever technology advances with new high-performance processors, transistor density obeys Moore's law. As more and more transistors are put on a chip, the cost to make each transistor decreases, but the chance that the chip will not work due to a defect increases. Moreover, due to the complexity of their operations, the interrelationships between the transistors increase, further adding to the complexity of the processor design. In this context it is essential to formulate the
P_R = (1/2) · V_DD² · f · Σ_g N_g C_g   (1)
Mutation (offspring);
Updation;
until the stopping criterion is reached.

C. EXPLANATION

Create Initial Population: A population of size P is generated randomly to create the initial population. Here the population elements are strings of binary numbers (zeros and ones) whose length equals the number of input lines in the circuit [9]. The population should be large enough that variance among the strings can easily form. From this initial population the optimization process is performed to produce the toggle vectors.

Selection of Parents: Each individual, or string, has a fitness value, which is a measure of the quality of the solution represented by the individual [18] [19]. The formula used to calculate the fitness value of each individual is:
F i =T i
T w T b
3
(2)
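The GA loop described above (random bit-string population, fitness-proportional selection, crossover, mutation) can be sketched as follows. Since the exact fitness formula of Eq. (2) is not fully recoverable here, the sketch takes an arbitrary fitness function as a parameter; all names and defaults are illustrative assumptions:

```python
import random

def run_ga(n_inputs, fitness, pop_size=20, generations=50, p_mut=0.05, seed=1):
    """GA skeleton: bit strings of length n_inputs, roulette-wheel selection,
    one-point crossover, per-bit mutation with probability p_mut."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_inputs)] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(ind) for ind in pop]
        total = sum(scores) or 1

        def pick():  # fitness-proportional (roulette-wheel) selection
            r = rng.uniform(0, total)
            for ind, s in zip(pop, scores):
                r -= s
                if r <= 0:
                    return ind
            return pop[-1]

        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_inputs)            # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < p_mut) for b in child]  # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = run_ga(8, fitness=sum)  # toy fitness: maximize the number of ones
```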
TABLE I: TOGGLE COUNT OF GATE-LEVEL CIRCUITS
Circuit    No. of gates   --    --    Toggle count   In lines
c17.v      10             10    100   --             --
c432.v     160            10    100   76             36
c7552.v    3513           10    100   1603           207
b01.v      70             10    100   37             --
b02.v      41             10    100   10             --
b03.v      312            10    100   96             --
b04.v      998            10    100   219            13
b05.v      1007           10    100   484            --
b06.v      73             10    100   37             --
b07.v      682            10    100   142            --
b08.v      261            10    100   109            11
b09.v      273            10    100   91             --
b10.v      288            10    100   103            13
b11.v      793            10    100   234            --
b11n.v     1145           10    100   284            --
b12.v      1084           10    100   537            --
b13.v      577            10    100   230            12
b14.v      6437           10    100   2398           34
b141.v     6934           10    100   2766           34
b15.v      12292          10    100   3649           38
b20.v      13980          10    100   5251           34
b211.v     14882          10    100   5646           34
b221.v     21671          10    100   7749           34
boothG.v   1145           10    100   658            10
IdctG.v    7550           10    100   3086           41

TABLE II: TOGGLE COUNT OF BEHAVIOR-LEVEL CIRCUITS
Circuit      No. of gates   --    --    Total bit toggle
c432b.v      135            10    100   57
c74L85b.v    13             10    10    11
c74L81b.v    52             10    100   20
c6288.v      32             10    100   22
REFERENCES
[1] E. M. Rudnick, M. S. Hsiao and J. H. Patel, "Peak power estimation of VLSI circuits: New peak power measures," IEEE Transactions on VLSI Systems, 2000, pp. 435-439.
[2] E. M. Rudnick, M. S. Hsiao and J. H. Patel, "Peak power estimation of VLSI circuits: New peak power measures," IEEE Transactions on Very Large Scale Integration Systems, 2000, pp. 435-439.
[3] C. S. Ding, C. T. Hsieh, Q. Wu and M. Pedram, "Statistical estimation of the cumulative distribution function for power dissipation in VLSI circuits," Proceedings of the Design Automation Conference, 1997, pp. 371-376.
[4] Liao and K. M. Lepak, "Temperature and supply voltage aware performance and power modeling at microarchitecture level," IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 2005, pp. 1042-1053.
[5] M. Hunger and S. Hellebrand, "Verification and analysis of self-checking properties through ATPG," Proc. 14th IEEE Int. On-Line Testing Symp. (IOLTS'08), 2008, pp. 25-30.
[6] A. P. Chandrakasan, S. Sheng and R. W. Broderson, "Low-power CMOS digital design," Journal of Solid-State Circuits, vol. 27, no. 4, pp. 473-483, April 1992.
[7] K. Zhang, H. Takase, T. Shinogi and T. Hayashi, "A method for evaluating upper bound of simultaneous switching gates using circuit partition," Asia and South Pacific Design Automation Conf. (ASP-DAC), 1999, p. 291.
[8] C-Y. Tsui, M. Pedram and A. M. Despain, "Efficient estimation of dynamic power dissipation under a real delay model," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 1993, pp. 224-228.
[9] Theodore W. Manikas and James T. Cain, "Genetic algorithms vs. simulated annealing: A comparison of approaches for solving the circuit partitioning problem," IEEE Trans. on Computer-Aided Design, 1996, pp. 67-72.
[10] P. Schneider and U. Schlichtmann, "Decomposition of Boolean functions for low power based on a new power estimation technique," Proceedings of the 1994 International Workshop on Low Power Design, 1994, pp. 123-128.
[11] F. Fallah, S. Devdas and K. Keutzer, "Functional vector generation for HDL models using linear programming and Boolean satisfiability," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, August 2001, pp. 994-1002.
[12] L. Lingappan, S. Ravi and N. K. Jha, "Satisfiability-based test generation for nonseparable RTL controller-datapath circuits," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2006, pp. 544-557.
[13] Quartus, Simulation Based Power Estimation, Altera Corporation, 2004.
[14] A. K. Chandra and V. S. Iyengar, "Constraint solving for test case generation: a technique for high-level design verification," Proceedings of the International Conference on Computer Design: VLSI in Computers and Processors, 1992, pp. 245-248.
[15] I. Ghosh and M. Fujita, "Automatic test pattern generation for functional register transfer level circuits using assignment decision diagrams," IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 2001, pp. 402-415.
[16] Chandramouli Kashyap, Chirayu Amin, Noel Menezes and Eli Chiprout, "A nonlinear cell macromodel for digital applications," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2007, pp. 678-685.
[17] C. Y. Wang and K. Roy, "COSMOS: A continuous optimization approach for maximum power estimation of CMOS circuits," Proceedings of the ACM/IEEE International Conference on Computer-Aided Design, 1997, pp. 45-50.
[18] Thomas Weise, Global Optimization Algorithms: Theory and Application, www.it-weise.de/projects/book.pdf, Germany, 2006.
[19] Alex Fraser and Donald Burnell, Computer Models in Genetics, McGraw-Hill, New York, 2002.
[20] R. Vemuri and R. Kalyanaraman, "Generation of design verification tests from behavioral VHDL programs using path enumeration and constraint programming," IEEE Transactions on VLSI Systems, 1995, pp. 201-214.
ymece@tce.edu
rajuabhai@tce.edu
archu_elen@yahoo.co.in
I. INTRODUCTION
Security of citizens in public places such as hotels, markets, airports and train stations is increasingly becoming a crucial issue. The fundamental problem in a visual surveillance system is detecting human presence, tracking human motion, analysing the activity and assessing abnormal situations automatically. Based on this motivation, crowd modelling technology has been under development to analyse video input that is constantly crowded with humans, and to be ready to act when abnormal activities emerge. The aim of this paper is to classify normal and abnormal human actions in crowds using 3D star skeletonization for a surveillance system based on the RVM.
Human action and posture recognition is a significant part of human-centred interfaces, a prominent research issue nowadays. In posture recognition applications, the skeletal representation [1] captures the essential topology of the underlying object in a compact form which is easy to understand. There are three existing methods for skeleton construction: distance transformation [2]-[5], Voronoi diagrams [6]-[8] and thinning [9]-[12]. These methods can generate an accurate skeleton but are computationally
Video sequence → Background subtraction → 3D star skeletonization → Classified poses
Fig 1: The block diagram of the Video Surveillance System
II. METHODOLOGY
The overview of the system is shown in Fig. 1. In the foreground detection stage, the blob detection system detects the foreground pixels using a statistical background model. Subsequently, foreground pixels are grouped into blobs. The 3D star skeleton features are obtained for the foreground image sequences. Using these features, the Relevance Vector Machine classification scheme classifies the given image sequence as a normal or an abnormal action sequence.
III.
x_c = (1/N) Σ_{i=1}^{N} x_i   (1)
y_c = (1/N) Σ_{i=1}^{N} y_i   (2)
z_c = (1/N_b) Σ_{i=1}^{N_b} z_i   (3)
d_i = sqrt( (x_i − x_c)² + (y_i − y_c)² + (z_i − z_c)² )   (4)
M_translation   (5)
M_reflection   (6)
M_rotation = [ cos θ  0  sin θ ; 0  1  0 ; −sin θ  0  cos θ ]   (7)
Where θ is the variable which is changed according to the projection map.
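Equations (1)-(4) amount to a centroid computation followed by Euclidean distances. A minimal sketch (pure Python, point tuples as an illustrative representation):

```python
import math

def centroid(points):
    # Eqs. (1)-(3): coordinate-wise mean of the candidate points
    n = len(points)
    return tuple(sum(p[k] for p in points) / n for k in range(3))

def distances_to_centroid(points):
    # Eq. (4): Euclidean distance of each candidate from the centroid
    cx, cy, cz = centroid(points)
    return [math.sqrt((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2)
            for x, y, z in points]
```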
D. Clustering Candidates and 3D Star Skeleton Construction
To determine the extremities used as features of the 3D star skeleton, the transformed candidates must be classified. After the transformation process, the candidates generated from the same part of the posture are scattered near one another rather than at one specific position, due to the thickness of the human body. Hence, all the candidates are to be classified into several groups. The mean of each cluster becomes an extremity of the star skeleton. The K-means clustering algorithm is used to classify the candidates, dividing them into k groups with K defined as 5. The centroid is the average of the centroids on all the projection maps. Finally, the 3D star skeleton is constructed by connecting the extremities to the centroid. The features of the 3D star skeleton are generated by calculating the distance from each extremity to the centroid of the 3D star skeleton.
In some cases the candidates might not be located at landmark points such as the feet or hands. For example, a knee point can distort the foot center if it is included in that cluster, leading to error in the clustering process. Hence the K-means clustering algorithm is modified to fit the proposed method. To reduce the error candidates, noise is filtered from each cluster using the standard deviation: after the clustering process, if a candidate is far from the mean of its cluster, it is removed and the mean of that cluster is recalculated. Thus the 3D star skeleton is constructed.
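The modified K-means step (cluster, then drop candidates beyond a standard-deviation cutoff and recompute the mean) can be sketched in one dimension; the cutoff `z_cut` and the 1-D simplification are assumptions for illustration, not the paper's implementation:

```python
import math, random

def kmeans_filtered(points, k=5, iters=20, z_cut=2.0, seed=0):
    """K-means with the outlier filter described above: after clustering,
    points farther than z_cut standard deviations from their cluster mean
    are dropped and the mean is recomputed."""
    rng = random.Random(seed)
    means = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assign each point to its nearest mean
            clusters[min(range(k), key=lambda i: abs(p - means[i]))].append(p)
        means = [sum(c) / len(c) if c else means[i] for i, c in enumerate(clusters)]
    filtered_means = []
    for c, m in zip(clusters, means):
        if not c:
            continue
        std = math.sqrt(sum((p - m) ** 2 for p in c) / len(c))
        kept = [p for p in c if std == 0 or abs(p - m) <= z_cut * std]
        filtered_means.append(sum(kept) / len(kept) if kept else m)
    return filtered_means
```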
IV. RELEVANCE VECTOR MACHINE
Action classification is a key step in analyzing human actions. Computer vision techniques are helpful in automating this process, but cluttered environments and consequent
Page 110
Proceedings of International Conference on Computers, Communication & Intelligence, July 22nd & 23rd 2010
X = w_k φ(z) + ξ_k .......... (8)

L_k = Σ_n C_k^(n) (Y_k^(n))^T S_k Y_k^(n) .......... (9)

Y_k^(n) = x^(n) - w_k φ(Z^(n)) .......... (10)

Here, Y_k^(n) is the output with n sample points belonging to the mapping function k.
Benchmark Dataset | Indoor/Outdoor | Sequence Length (frames) | Frame Size
IBM | Indoor/outdoor | 1628 | 240x320
Weizmann | Indoor/outdoor | 864 | 240x320
CMU | Outdoor | 1320 | 240x320
CAVIAR | Outdoor | 920 | 240x320
- | Indoor | 781 | 240x320
Eli-Walk | Outdoor | 645 | 240x320
Eli-Run | Outdoor | 712 | 240x320
Moshe-Bend | Outdoor | 786 | 240x320
Eli-Jump | Outdoor | 855 | 240x320
- | Indoor | 1234 | 240x320
- | Indoor | 1065 | 240x320
- | Indoor | 1187 | 240x320
[Table: number of relevance vectors obtained with the multi-class RVM using skeleton points, per benchmark dataset (IBM, Eli-Walk, Eli-Run, Moshe-Bend, Eli-Jump, CMU1, CMU2, CAVIAR) and per normal/abnormal action (running, carrying bar, bending, waving hand); the counts range from 11 to 23.]
VI. CONCLUSION
In this paper, a novel, real-time video surveillance system for classifying normal and abnormal human actions is described. Initially, the foreground blobs are detected using adaptive mixtures of Gaussians, which is robust to illumination changes and shadows. A projection map system is then generated to obtain 3D information about the object in the foreground image, and the 3D star skeleton features are extracted. These features reduce the training time and also improve the classification accuracy. The features are then learnt through a relevance vector machine to classify individuals' actions into normal and abnormal behaviour. The number of relevance vectors obtained is small, and no regularization parameter needs to be tuned during the training phase. The error rate is also reduced by selecting an appropriate Gaussian kernel, which reduces the computational complexity as well. The distinct contribution of this proposed work is the classification of the actions of individuals. The proposed system is able to detect, with high accuracy, abnormal actions of individuals such as running, bending down, or waving a hand while others walk, and people fighting with each other.
[Figure: original frame, background-subtracted image, skeleton image and classified image for each dataset: IBM, Weizmann (Eli-Walking, Eli-Running, Moshe-Bending, Eli-Jumping), CMU (Fighting, Pulling) and CAVIAR.]
ACKNOWLEDGMENT
We sincerely thank our management and our respectful
principal for providing us with all the facilities we required for
carrying on our research. We also thank them for their support
and encouragement in all aspects for our research.
REFERENCES
[1] Nicu D. Cornea, Deborah Silver, Patrick Min, Curve-Skeleton
Properties, Application, and Algorithms, IEEE Trans. Visualization
and Computer Graphics, vol.13, 2007, pp. 530-548.
[2] Gunilla Borgefors, Distance transformations in digital images,
Computer Vision, Graphics, and Image Processing, vol. 34, 1986.
pp. 344-371.
[3] Gunilla Borgefors, Distance transformations in arbitrary
Dimensions, Computer Vision, Graphics, and Image Processing,
vol. 27, 1984. pp. 321-345.
[4] Gunilla Borgefors, On digital distance transforms in three
dimensions, Computer Vision and Image Understanding, vol. 64,
1996, pp. 368-376.
[5] Frank Y.Shih and Christopher C.Pu, A skeletonization algorithm by
maxima tracking on Euclidean distance transform, J.Pattern
Recognition, vol.28, 1995, pp. 331-341.
[6] Franz Aurenhammer, Voronoi diagrams: a survey of a fundamental geometric data structure, ACM Computing Surveys, vol. 23, 1991, pp. 345-405.
[7] Jonathan W. Brandt and V. Ralph Algazi, Continuous skeleton computation by Voronoi diagram, CVGIP: Image Understanding, vol. 55, 1991, pp. 329-338.
[8] Kenneth E. Hoff III, Tim Culver, John Keyser, Ming Lin and Dinesh Manocha, Fast computation of generalized Voronoi diagrams using graphics hardware, in Proc. 26th Annual Conf. Computer Graphics and Interactive Techniques, 1999, pp. 277-286.
[9] Kalman Palagyi, Erich Sorantin, Emese Balogh, Attila Kuba,
Csongor Halmail, Balazs Erdohelyi, and Klaus Hausegger, A
Sequential 3D Thinning Algorithm and Its Medical Applications, in
Proc. 17th international Conf.IPMI, vol. 2082, 2001, pp. 409-415.
[10] Kalman Palagyi and Attila Kuba, A 3D 6-subiteration thinning
algorithm for extracting medial lines, Pattern Recognition Letters,
vol. 19, 1998, pp. 613-627.
[11] Kalman Palagyi and Attila Kuba, Directional 3D thinning using 8
subiterations, in proc. 8th international Conf. DGCI, vol. 1568,
1999, pp. 325-336.
[12] Ta-Chin Lee, Rangasami L. Kashyap and Chong-Nam Chu,
Building skeleton models via 3-D medical Surface/axis thinning
algorithms, CVGIP : Graphical Models and Image Processing, vol.
56, 1994, pp. 462-478.
[13] H. Fujiyoshi and A.J Lipton, Real-time human motion analysis by
image skeletonization, 4th IEEE Workshop on Application of
Computer Vision, 1998, pp. 15-21.
[14] Hsuan-Sheng Chen, Hua-Tsung Chen, Yi-Wen Chen and Suh-Yin
Lee, Human Action Recognition Using Star Skeleton, in Proc.
4th ACM international workshop on video surveillance and sensor
networks, 2006, pp.171-178.
[15] B. Yogameena, S. Veeralakshmi,E. Komagal, S. Raju, and V.
Abhaikumar, RVM-Based Human Action Classification in
Crowd through Projection and Star Skeletonization, in EURASIP
Journal on Image and Video Processing, vol. 2009, 2009.
[16] E. Ardizzone, A. Chella, R. Pirrone, Pose Classification Using Support Vector Machines, International Joint Conference on Neural Networks, vol. 6, 2000.
[17] D.F. Llorca, F. Vilarino, Z. Zhou and G. Lacey, A multi-class SVM classifier ensemble for automatic hand washing quality assessment, Computer Science Dept., Trinity College Dublin, Rep. of Ireland.
[18] Cipolla and Philip et al., Pose estimation and tracking using multivariate regression, Pattern Recognition, 2008.
[19] Hui Cheng Lian and Bao Liang Lu, Multi-View Gender Classification using Local Binary Patterns and Support Vector Machines, Springer-Verlag, 2006, pp. 202-209.
[20] Sungkuk Chun, Kwangjin Hong, and Keechul Jung, 3D Star Skeleton for Fast Human Posture Representation, Proceedings of World Academy of Science, Engineering and Technology, vol. 34, 2008.
[21] S. Atev, O. Masoud, N.P. Papanikolopoulos, Practical mixtures of Gaussians with brightness monitoring, Proc. IEEE Int. Conf. Intell. Transport. Syst., pp. 423-428, October 2004.
[22] C. Stauffer and W.E.L. Grimson, Adaptive background mixture models for real-time tracking, In proceedings of the IEEE Intl. Conf. Computer Vision and Pattern Recognition, pp. 246-252, 1999.
[23] G. Tzikas, A. Likas, N.P. Galatsanos, A.S. Lukic and M.N. Wernick, Relevance vector machine analysis of functional neuroimages, IEEE Intl. Symposium on Biomedical Imaging, vol. 1, pp. 1004-1007, 2004.
[24] H.C. Lian, B.L. Lu, Multi-view Gender Classification Using Local Binary Patterns and Support Vector Machines, Springer-Verlag, pp. 202-209, 2006.
[25] R. Chellappa, A.K. Roy-Chowdhury and S.K. Zhou, Recognition of Humans and Their Activities Using Video, First edition, Morgan and Claypool Publishers, 2005.
[26] H. Ren and G. Xu, Human action recognition in smart classroom, IEEE Proc. Int. Conf. on Automatic Face and Gesture Recognition, pp. 399-404, 2002.
[27] A. Mittal, L. Zhao, L.S. Davis, Human Body Pose Estimation Using Silhouette Shape Analysis, proceedings of the IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS'03), 2003.
[28] Rita Cucchiara, A. Prati, R. Vezzani, University of Modena and Reggio Emilia, Italy, Posture Classification in a Multi-camera Indoor Environment, International Conference on Image Processing, vol. 1, 2005.
[29] G. Tzikas, A. Likas, N.P. Galatsanos, A.S. Lukic and M.N. Wernick, Relevance vector machine analysis of functional neuroimages, IEEE Intl. Symposium on Biomedical Imaging, vol. 1, pp. 1004-1007, 2004.
ymece@tce.edu
rajuabhai@tce.edu
archu_elen@yahoo.co.in
I. INTRODUCTION
Automatic visual surveillance systems could play an important role in supporting and eventually replacing human observers. To become practical, such a system needs to distinguish people from other objects and to recognize individual persons with a sufficient degree of reliability, depending on the specific application and security level. These applications require estimating the gait and automatically recognising the gender for high-level analysis. The first step in an automatic gender classification system is background subtraction. Piccardi et al. have reviewed a number of background subtraction approaches [1]. Wren et al. [2] proposed a statistical method in which a single Gaussian function was used to model the distribution of the background. Later, Mittal et al. proposed a novel kernel-based multivariate density estimation technique that adapts the bandwidth according to the uncertainties [3]. Yet issues such as robustness to illumination changes, effectiveness in suppressing shadows, and smoothness of the foreground boundary still need to be addressed in indoor and outdoor environments [4].
III. BACKGROUND SUBTRACTION
The first stage of a video surveillance system models the background in order to automatically identify people, objects, or events of interest in various changing environments. The difficult part of background subtraction is not the differencing itself, but the maintenance of a background model and its associated statistics. In this work, background subtraction is accomplished in real-time using the adaptive mixture of Gaussians method proposed by Atev et al. [22]. There are practical issues concerning the use of the existing algorithm based on mixtures of Gaussians for background segmentation in outdoor scenes, including the choice of parameters [23]. The proposed system analyses different parameter values and their performance impact in order to obtain a robust background model. In addition, this method was adopted because of its simplicity and because it copes efficiently with sudden global illumination changes based on the contrast changes over time.
It describes K Gaussian distributions to model the surface reflectance value and is represented by eqn (1):

P(X_t) = Σ_{i=1..K} ω_{i,t} · η(X_t, μ_{i,t}, Σ_{i,t}) .......... (1)

where K is the number of distributions and ω_{i,t} is the weight of the i-th Gaussian at time t. η is the Gaussian probability density

η(X_t, μ_t, Σ) = (1 / ((2π)^{n/2} |Σ|^{1/2})) exp( -(1/2) (X_t - μ_t)^T Σ^{-1} (X_t - μ_t) ) .......... (2)

with a covariance of the form

Σ_{k,t} = σ_k^2 I .......... (3)

The weights are updated as

ω_{k,t} = (1 - α) ω_{k,t-1} + α (M_{k,t}) .......... (4)

where α is the learning rate and M_{k,t} is 1 for the matched model and 0 otherwise.

[Fig. 1. System flow: background subtraction, silhouette segmentation using centroid, ellipse fitting, feature extraction, classified gender.]
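As an illustration of the update equations above, the following is a simplified per-pixel mixture-of-Gaussians sketch for grayscale frames. It is not the system's implementation; the parameter values (K = 3, α = 0.05, T = 0.7, initial variance) and the re-initialisation of the weakest Gaussian on a non-match are illustrative assumptions of the sketch.

```python
import numpy as np

class MixtureBackground:
    """Simplified per-pixel mixture of K Gaussians following eqns (1)-(5)."""
    def __init__(self, shape, K=3, alpha=0.05, T=0.7, var0=36.0):
        self.shape, self.K, self.alpha, self.T, self.var0 = shape, K, alpha, T, var0
        n = shape[0] * shape[1]
        self.mu = np.zeros((n, K))
        self.var = np.full((n, K), var0)
        self.w = np.full((n, K), 1.0 / K)

    def apply(self, frame):
        x = frame.reshape(-1).astype(float)
        r = np.arange(len(x))
        d2 = (x[:, None] - self.mu) ** 2
        match = d2 < 2.5 ** 2 * self.var            # within 2.5 sigma
        best = np.where(match.any(1),
                        np.where(match, d2, np.inf).argmin(1),
                        self.w.argmin(1))           # Gaussian to update or replace
        hit = match[r, best]
        M = np.zeros_like(self.w)
        M[r, best] = 1.0
        self.w = (1 - self.alpha) * self.w + self.alpha * M      # eqn (4)
        # matched Gaussian adapts; an unmatched pixel re-initialises the weakest one
        self.mu[r, best] = np.where(hit, (1 - self.alpha) * self.mu[r, best]
                                    + self.alpha * x, x)
        self.var[r, best] = np.where(hit, (1 - self.alpha) * self.var[r, best]
                                     + self.alpha * (x - self.mu[r, best]) ** 2,
                                     self.var0)
        self.w /= self.w.sum(1, keepdims=True)
        # eqn (5): highest-weight Gaussians whose weights sum past T are background
        order = np.argsort(-self.w, axis=1)
        cum = np.take_along_axis(self.w, order, 1).cumsum(1)
        B = np.minimum((cum < self.T).sum(1), self.K - 1)
        rank = order.argsort(1)[r, best]
        fg = ~hit | (rank > B)
        return fg.reshape(self.shape)
```

A pixel is reported as foreground when it matches none of the Gaussians ranked inside the background model selected by eqn (5).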
B = arg min_b ( Σ_{k=1..b} ω_k > T ) .......... (5)

where T is the minimum fraction of the data that should be accounted for by the background. The background subtracted image obtained using adaptive GMM is shown in Fig. 2.
IV. CENTROID
For each silhouette of a gait, the centroid (x_c, y_c) is determined by using eqns (6)-(7) and is shown in Fig. 3:

x_c = (1/N) Σ_{j=1..N} x_j .......... (6)

y_c = (1/N) Σ_{j=1..N} y_j .......... (7)

The region-wise moments of the binary silhouette I(x, y) are

x̄ = (1/N) Σ_{x,y} I(x, y) · x .......... (8)

ȳ = (1/N) Σ_{x,y} I(x, y) · y .......... (9)

N = Σ_{x,y} I(x, y) .......... (10)

| a c |    1                | (x - x̄)^2       (x - x̄)(y - ȳ) |
| c b | = --- Σ_{x,y} I(x,y) | (x - x̄)(y - ȳ)  (y - ȳ)^2      | .......... (11)
           N

The covariance matrix can be decomposed into eigenvalues λ1, λ2 and eigenvectors v1, v2, which indicate the length and orientation of the major and minor axes of the ellipse, obtained by eqn (12):

| a c |                     | λ1 0  |
| c b | [v1 v2] = [v1 v2] · | 0  λ2 | .......... (12)

The elongation and orientation are

l = λ1 / λ2 .......... (13)

α = angle(v1) = arccos( v1 · x / |v1| ) .......... (14)

[Fig. 3. Centroid-segmented silhouettes for frames 113, 124, 125, 129, 132 and 145.]
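The moment-based ellipse fit of eqns (8)-(14) can be sketched directly in NumPy. This is an illustrative reconstruction, not the paper's code; abs() is added in the orientation step only to remove the eigenvector sign ambiguity.

```python
import numpy as np

def fit_ellipse(silhouette):
    """Fit an ellipse to a binary silhouette via its second moments:
    centroid, covariance, eigen-decomposition, elongation and orientation."""
    ys, xs = np.nonzero(silhouette)
    N = len(xs)                                 # eqn (10): N = sum I(x, y)
    xbar, ybar = xs.sum() / N, ys.sum() / N     # eqns (8)-(9)
    dx, dy = xs - xbar, ys - ybar
    cov = np.array([[(dx * dx).sum(), (dx * dy).sum()],
                    [(dx * dy).sum(), (dy * dy).sum()]]) / N   # eqn (11)
    lam, vec = np.linalg.eigh(cov)              # eqn (12), ascending eigenvalues
    v1 = vec[:, 1]                              # eigenvector of the major axis
    elongation = lam[1] / lam[0]                # eqn (13): l = lambda1 / lambda2
    # eqn (14); abs() removes the eigenvector sign ambiguity
    angle = np.arccos(abs(v1[0]) / np.linalg.norm(v1))
    return (xbar, ybar), elongation, angle

# usage on a synthetic wide horizontal blob
sil = np.zeros((10, 30))
sil[3:7, 5:25] = 1
(cx, cy), l, ang = fit_ellipse(sil)
```

For this blob the major axis is horizontal, so the orientation is near zero and the elongation is large.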
V. FEATURE EXTRACTION
After calculating the orientation and elongation of the human walking figure, an ellipse is fitted to each of 7 regions of the silhouette. The feature set extracted from each frame of a walking sequence consists of the four ellipse parameters of each region (the x and y coordinates of the centroid, the orientation, and the elongation), together with the relative height of the centroid of the whole body normalized by the body length [12], giving 29 features in all. From these, six primary features covering the head, back, and calf regions are selected; they capture the appearance of the gait and are shown in Table I. Instead of the twenty-nine features, these six primary features are used for gender classification.
TABLE I
No. | REGION | FEATURE
1 | front calf | mean of orientation
2 | back | mean of orientation
3 | head | mean of x coordinate of centroid
4 | head | mean of orientation
5 | back calf | std of x of centroid
6 | back calf | mean of x of centroid
x = W_k φ(z) + ξ_k .......... (15)

Where,
x is the input for the system,
W_k is the weight of the basis function,
φ(z) is the vector of the basis functions, and
ξ_k is the Gaussian noise vector.
The cost function of the RVM regression is minimized using eqn (16):

L_K = Σ_n C_K^(n) (y_k^(n))^T S_k y_k^(n) .......... (16)

y_k^(n) = x^(n) - W_K φ(Z^(n)) .......... (17)

Where,
y_k^(n) is the output with n sample points belonging to the mapping function k,
φ(Z^(n)) is the design matrix of the vector of the basis functions,
S_k is the diagonal covariance matrix of the basis functions, and
C_k^(n) is the probability that the sample point n belongs to the mapping function k.
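For illustration, a minimal sparse Bayesian regression in the RVM sense of Tipping [18] can be sketched as below. This is a sketch, not the system described here; the RBF kernel width, the fixed noise precision beta, and the pruning threshold are all assumptions of the example.

```python
import numpy as np

def rbf(X1, X2, gamma=10.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def rvm_fit(X, t, beta=100.0, n_iter=200):
    """RVM-style regression: iterate the posterior over weights and the
    hyperparameter update alpha_i <- gamma_i / mu_i^2; basis functions whose
    alpha grows large are pruned (non-relevant)."""
    Phi = np.hstack([np.ones((len(X), 1)), rbf(X, X)])   # bias + one RBF per sample
    alpha = np.ones(Phi.shape[1])
    mu = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)
        mu = beta * Sigma @ Phi.T @ t
        gamma_i = np.maximum(1.0 - alpha * np.diag(Sigma), 1e-12)
        new_alpha = np.minimum(gamma_i / (mu ** 2 + 1e-12), 1e12)
        done = np.max(np.abs(np.log(new_alpha) - np.log(alpha))) < 1e-3
        alpha = new_alpha
        if done:
            break
    relevant = alpha < 1e6      # surviving (relevance) basis functions
    return mu, relevant, Phi

# usage: noisy sine, most basis functions should be pruned away
rng = np.random.RandomState(0)
X = np.linspace(0, 1, 40)[:, None]
t = np.sin(2 * np.pi * X[:, 0]) + 0.05 * rng.randn(40)
mu, relevant, Phi = rvm_fit(X, t)
pred = Phi @ mu
```

The sparsity of `relevant` is what keeps the number of relevance vectors small compared with the support vectors of an SVM.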
The selected features are given as input to the RVM for gender classification. The classification accuracy is measured with two types of kernels in the proposed method. The two types of feature vectors of each subject are used for testing, training and cross validation (CV) for both SVM and RVM. The experimental results are summarised in Table II, which reports the number of SVs and RVs and the classification rate. The test result for the 6 selected features is slightly higher than that for the 29 original features. The number of relevance vectors is also smaller than the number of support vectors. The average accuracy of the linear kernel was the best, at around 100.0% for training, 94.0% for CV and 96.0% for testing with the 6 features. The error rate of the linear kernel was lower than that of the other kernels in terms of both classification rate and computational cost.
Fig. 8 shows the initial stage of the training, and Fig. 9 shows the result after the fifth iteration. Three colors (indigo, red and yellow) represent the gender as male and the remaining three colors (blue, pink and green) indicate the gender as female, by setting the threshold value. The colored dots in the figure denote the relevance vectors of the corresponding dataset, obtained after the fifth iteration. The maximum number of iterations used here is ten. Fig. 10 shows a comparison of the SVM and RVM vectors for gender classification; fewer vectors are used by RVM than by SVM. The results for the benchmark datasets are shown with the corresponding step-by-step procedure of the proposed method. Finally, in the classified image, male sequences are identified by a green rectangular bounding box and female sequences by a red bounding box.
XI. CONCLUSION
This paper has introduced the framework of the relevance vector machine to classify gender for the application of video surveillance.
ACKNOWLEDGMENT
We sincerely thank our management and our respectful
principal for providing us with all the facilities we required
for carrying on our research. We also thank them for their
support and encouragement in all aspects for our research.
REFERENCES
[1] M. Piccardi, Background subtraction techniques: a review, IEEE International Conference on Systems, Man and Cybernetics, vol. 4, pp. 3099-3104, Oct. 2004.
[2] C. Wren, A. Azarbayejani, T. Darrell, and A. Pentland, Pfinder: Real-Time Tracking of the Human Body, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 780-785, July 1997.
[3] A. Mittal, N. Paragios, Motion-Based Background Subtraction using Adaptive Kernel Density Estimation, In Proceedings of the Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), pp. 302-309, 2004.
[4] J. Hu and T. Su, Robust background subtraction with shadow and highlight removal for indoor surveillance, EURASIP Journal on Advances in Signal Processing, 14 pages, 2007.
[5] R. Cutler and L. Davis, Robust real-time periodic motion detection, analysis, and applications, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 781-796, 2000.
[6] D.F. Llorca, F. Vilarino, Z. Zhou and G. Lacey, A multi-class SVM classifier ensemble for automatic hand washing quality assessment, British Machine Vision Conference proceedings, 2007.
[7] Cipolla and Philip et al., Pose estimation and tracking using multivariate regression, Pattern Recognition Letters, 2008.
[8] Hui-Cheng Lian and Bao-Liang Lu, Multi-view Gender Classification Using Local Binary Patterns and Support Vector Machines, Springer-Verlag, pp. 202-209, 2006.
[9] Yoo, D. Hwang and M. S. Nixon, Gender Classification in Human Gait Using Support Vector Machine, ACIVS 2005, LNCS 3708, Springer-Verlag Berlin Heidelberg, pp. 138-145, 2005.
[10] M. Nixon, J. Carter, J. Nash, P. Huang, D. Cunado, and S. Stevenage, Automatic gait recognition, IEE Colloquium on Motion Analysis and Tracking, pp. 3/1-3/6, 1999.
[11] J. Shutler, M. Nixon, and C. Harris, Statistical gait recognition via velocity moments, Visual Biometrics (Ref. No. 2000/018), IEE Colloquium, pp. 11/1-11/5, 2000.
[12] A. Johnson and A. Bobick, Gait recognition using static, activity-specific parameters, CVPR, 2001.
[13] C. BenAbdelkader, R. Cutler, and L. Davis, Motion-based recognition of people in eigengait space, Automatic Face and Gesture Recognition Proceedings, Fifth IEEE International Conference, pp. 267-272, May 2002.
[14] Rong Zhang, Christian Vogler, Dimitris Metaxas, Human gait recognition at sagittal plane, Image and Vision Computing, vol. 25, issue 3, pp. 321-330, March 2007.
[15] Carl Edward Rasmussen, The Infinite Gaussian Mixture Model, Advances in Neural Information Processing Systems, pp. 554-560, 2004.
[16] Prahlad Kilambi, Evan Ribnick, Ajay J. Joshi, Osama Masoud, Nikolaos Papanikolopoulos, Estimating pedestrian counts in groups, Computer Vision and Image Understanding, pp. 43-59, 2008.
[17] Lily Lee, Gait analysis for classification, AI Technical Report, Massachusetts Institute of Technology, Cambridge, USA, June 2003.
[18] Tipping, M.E., Sparse Bayesian learning and the relevance vector machine, J. Mach. Learn. Res., pp. 211-244, 2001.
[19] Tipping, M.E., Faul, A., Fast marginal likelihood maximization for sparse Bayesian models, In: Proc. 9th Internat. Workshop on Artificial Intelligence and Statistics, 2003.
[20] Agarwal, A., Triggs, B., 3D human pose from silhouettes by relevance vector regression, In: Proc. Conf. on Computer Vision and Pattern Recognition, vol. II, Washington, DC, pp. 882-888, July 2004.
[21] Weizmann dataset, downloaded from http://www.wisdom.weizmann.ac.il/~vision/SpaceTime Actions.html
[22] S. Atev, O. Masoud, N.P. Papanikolopoulos, Practical mixtures of Gaussians with brightness monitoring, Proc. IEEE Int. Conf. Intell. Transport. Syst., pp. 423-428, October 2004.
[Table: number of support and relevance vectors and classification rates (%) of SVM and RVM with Gaussian and linear kernels, for the 29 original and the 6 selected features, under training, cross-validation and testing; the linear kernel with 6 features performs best (about 100% training, 94% CV and 96% testing), and RVM consistently needs fewer vectors than SVM.]
[Figure: original image, background-subtracted image, segmented image, elliptical view and classified image for the sequences Denis, Eli walking, Ido, Daria, Lena_walk1 and Lena_walk2.]
INTRODUCTION
image features, one being the width of the outer contour of the
binarized silhouette, and the other being the binary silhouette
itself. A set of exemplars that occur during a gait cycle is
derived for each individual.
To obtain the observation vector from the image features we
employ two different methods. In the indirect approach the
high-dimensional image feature is transformed to a lower
dimensional space by generating the frame to exemplar (FED)
distance. The FED vector captures both structural and
dynamic traits of each individual. For compact and effective
gait representation and recognition, the gait information in the
FED vector sequences is captured using an HMM for
each individual. In the direct approach, we work with the
feature vector directly and train an HMM for gait
representation.
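The indirect (FED) representation described above reduces each high-dimensional frame feature to its distances from the gait-cycle exemplars. A minimal sketch, using toy feature vectors rather than real silhouette features:

```python
import numpy as np

def fed_vector(frame_feature, exemplars):
    """Frame-to-Exemplar Distance (FED): one frame's high-dimensional
    image feature becomes the vector of its distances to the gait-cycle
    exemplars, the low-dimensional observation used in the indirect
    approach."""
    return np.array([np.linalg.norm(frame_feature - e) for e in exemplars])

def fed_sequence(frames, exemplars):
    """FED observation sequence for a whole walking sequence."""
    return np.stack([fed_vector(f, exemplars) for f in frames])

# usage with two toy exemplars
exemplars = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
frames = [np.array([1.0, 0.0]), np.array([0.0, 0.9])]
seq = fed_sequence(frames, exemplars)
```

Each row of `seq` is one frame's FED vector; the dimensionality is the number of exemplars, not the size of the image feature.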
The difference between the direct and indirect methods is that
in the former the feature vector is directly used as the
observation vector for the HMM whereas in the latter, the
FED is used as the observation vector. In the direct method,
we estimate the observation probability by an alternative
approach based on the distance between the exemplars and the
image features. In this way, we avoid learning highdimensional probability density functions. The performance of
the methods is tested on different databases.
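The direct approach's idea of deriving the observation probability from frame-to-exemplar distances can be illustrated with a standard HMM forward pass. The Gaussian form of the observation score and the parameters below are assumptions of this sketch, not the paper's exact model.

```python
import numpy as np

def forward_loglik(frames, exemplars, A, pi, sigma=1.0):
    """Scaled forward algorithm for an HMM whose hidden states correspond
    to exemplars; the observation probability for state j is taken as
    exp(-d^2 / (2 sigma^2)) of the frame-to-exemplar-j distance, so no
    high-dimensional density has to be learned."""
    ll = 0.0
    alpha = None
    for f in frames:
        d = np.array([np.linalg.norm(f - e) for e in exemplars])
        b = np.exp(-d ** 2 / (2 * sigma ** 2)) + 1e-12
        alpha = pi * b if alpha is None else (alpha @ A) * b
        s = alpha.sum()
        ll += np.log(s)          # accumulate log-likelihood, then rescale
        alpha = alpha / s
    return ll

# usage: a cyclic 2-state gait model scores an alternating sequence higher
e0, e1 = np.array([0.0]), np.array([3.0])
A = np.array([[0.0, 1.0], [1.0, 0.0]])
pi = np.array([1.0, 0.0])
ll_alt = forward_loglik([e0, e1, e0, e1], [e0, e1], A, pi)
ll_same = forward_loglik([e0, e0, e0, e0], [e0, e1], A, pi)
```

A sequence that follows the cyclic exemplar order gets a higher log-likelihood than one that violates it, which is the basis for recognition.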
II. HIDDEN MARKOV MODEL (HMM)
Gait recognition approaches are basically of three types: 1) temporal alignment-based, 2) static parameter-based, and 3) silhouette shape-based approaches. Here we discuss the silhouette shape-based approach. The first stage is the extraction of features such as whole silhouettes, principal components of silhouette boundary vector variations, silhouette width vectors, silhouette parts, or Fourier descriptors. The gait research group at the University of Southampton (Nixon et al.) has probably experimented with the largest number of possible feature types for recognition. This step also involves some normalization of size to impart some invariance with respect to distance from the camera. The second step involves the alignment of the sequences of these features corresponding to the two sequences to be matched. The alignment process can be based on simple temporal correlation, dynamic time warping, hidden Markov models, phase-locked loops, or Fourier analysis.
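Of the alignment methods listed, dynamic time warping is easy to sketch. Here each frame is represented by a feature vector such as a silhouette width vector; the inputs below are toy data, not real gait features.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping between two sequences of per-frame feature
    vectors: the minimum cumulative Euclidean cost over all monotone
    alignments of the two sequences."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# usage: a time-stretched copy of a sequence aligns at near-zero cost
a = np.sin(np.linspace(0, 2 * np.pi, 20))[:, None]
b = np.repeat(a, 2, axis=0)          # same gait, half speed
```

Because the warping path may repeat frames, speed differences between two walks of the same person do not inflate the distance.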
REFERENCES
[1] Zongyi Liu and Sudeep Sarkar, Improved Gait Recognition by Gait Dynamics Normalization, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 6, June 2006.
[2] Murat Ekinci, Human Identification Using Gait, Turk J Elec Engin, vol. 14, no. 2, 2006, TUBITAK.
[3] Naresh Cuntoor, Amit Kale and Rama Chellappa, Combining Multiple Evidences for Gait Recognition.
[4] Rong Zhang, Christian Vogler, Dimitris Metaxas, Human Gait Recognition.
[5] S. Sarkar, P.J. Phillips, Z. Liu, I. Robledo-Vega, P. Grother, and K.W. Bowyer, The Human ID Gait Challenge Problem: Data Sets, Performance, and Analysis, IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 27, no. 2, pp. 162-177, Feb. 2005.
[6] Z. Liu and S. Sarkar, Simplest Representation Yet for Gait Recognition: Averaged Silhouette, Proc. Intl. Conf. Pattern Recognition, vol. 4, pp. 211-214, 2004.
[7] D. Tolliver and R. Collins, Gait Shape Estimation for Identification, Proc. Intl. Conf. Audio- and Video-Based Biometric Person Authentication, pp. 734-742, 2003.
[8] L. Wang, T. Tan, H. Ning, and W. Hu, Silhouette Analysis-Based Gait Recognition for Human Identification, IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 25, no. 12, pp. 1505-1518, Dec. 2003.
[9] R. Collins, R. Gross, and J. Shi, Silhouette-Based Human Identification from Body Shape and Gait, Proc. Intl. Conf. Automatic Face and Gesture Recognition, pp. 366-371, 2002.
V. FUTURE WORKS
HMM can be compared with other models, namely Principal Component Analysis (PCA) and the baseline algorithm, and its efficiency can be observed with respect to the other two models discussed here. The efficiency of the algorithm can also be analyzed under various factors, such as time, walking surface, carrying luggage, and different climate conditions, using the same model (HMM).
1. INTRODUCTION
Mobile ad-hoc networks (MANETs) are autonomous dynamic networks which provide high flexibility. These networks use multi-hop radio relays and operate without the support of any fixed infrastructure [1]. Streaming multimedia or CBR applications over a MANET require minimal delay and packet loss. To meet these critical requirements, a MANET inherently depends on the routing scheme employed. Routing protocols for ad hoc networks can be broadly classified as proactive and reactive. Proactive (or table-driven) routing algorithms employ distance-vector-based or link-state-based routing strategies. However, the main drawback of these algorithms is that the need for frequent table updates consumes a significant amount of memory, bandwidth and battery power [2]. Examples of such protocols are Optimized Link State Routing (OLSR) [3] and Destination Sequenced Distance Vector routing (DSDV) [4]. In reactive routing
2. RELATED WORK
Research on congestion control for MANETs is still at an early stage, and there is a need for new techniques. In this section, the
research work related to congestion control and multipath
routing is presented. Round trip time measurements are used
to distribute load between paths in Multi-path Source Routing
(MSR) [9]. A distributed multi-path DSR protocol (MP-DSR)
was developed to improve QoS with respect to end-to-end
reliability [10]. The protocol forwards outgoing packets along
multiple paths that are subjected to an end-to-end reliability
model. Split Multi-path Routing (SMR) utilized multiple
routes of maximally disjoint paths which minimize route
recovery process and control message overhead [11-12]. The
protocol uses a per-packet allocation scheme to distribute data
packets into multiple paths of active sessions which prevents
nodes from being congested in heavily loaded traffic
situations. Kui Wu and Janelle Harms [13] proposed path selection criteria and an on-demand multi-path calculation
method for DSR protocol. Peter Pham and Sylvie Perreau [14]
proposed a multi-path DSR protocol with a load balancing
policy which spreads the traffic equally into multiple paths
that are available for each source-destination pair. A dynamic
load-aware based load-balanced routing (DLBL) algorithm
was developed which considers intermediate node routing
load as the primary route selection metric [15]. This helps the
protocol to discover a route with less network congestion and
bottlenecks. When a link breaks because of the node mobility
or power off, DLBL provides efficient path maintenance to
patch up broken links to help to get a robust route from the
source to the destination. A simple Loop-Free Multi-Path
Routing (LFMPR) with QoS requirement was developed from
AODV and DSR protocol [16]. In the route request phase,
intermediate nodes record multi-reverse links which is applied
to construct multiple-paths during the route reply phase. Each
path is assigned unique flow identification in order to prevent
routing loop problems. Rashida Hashim et al [17] proposed an
adaptive multi-path QoS aware DSR Protocol. The protocol
collects information about QoS metrics during the route
discovery phase and uses them to choose a set of disjoint paths
for data transmission. De Rango et al [18] proposed an energy
aware multi-path routing protocol by considering minimum
drain rate as a metric. An update mechanism and a simple data
packet scheduling among the energy efficient paths have also
been implemented to update the source route cache and for
improving the traffic and energy load balancing. Raghavandra
et al [19] proposed Congestion Adaptive Routing in Mobile
Ad Hoc Networks (CRP). In CRP every node appearing on a
route warns its previous node when prone to be congested.
The previous node then uses a bypass route bypassing the
potential congestion to the first non-congested node on the
route. Traffic will be split probabilistically over these two
routes, primary and bypass, thus effectively lessening the
chance of congestion occurrence. Zhang XiangBo and Ki-Il
Kim [20] proposed a multi-path routing protocol based on
DSR which uses Multi-Probing and Round-Robin
mechanisms (MRDSR) for updating alternate multiple paths.
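The per-packet allocation idea used by SMR, combined with CRP-style avoidance of congested paths, can be sketched as follows. This is only an illustration of the scheduling policy; the path labels and the congested set are illustrative, and a real implementation would enqueue packets on the corresponding next hops.

```python
from itertools import cycle

def allocate_packets(packets, paths, congested=frozenset()):
    """Round-robin per-packet allocation over the discovered multiple
    paths of a session; paths flagged as congested (as a CRP node might
    warn) are skipped. Falls back to all paths if every path is flagged."""
    usable = [p for p in paths if p not in congested] or list(paths)
    rr = cycle(usable)
    return {pkt: next(rr) for pkt in packets}

# usage: six packets spread over two paths, then with one path congested
pkts = list(range(6))
asg = allocate_packets(pkts, ["path_A", "path_B"])
asg2 = allocate_packets(pkts, ["path_A", "path_B"], congested={"path_A"})
```

Spreading packets this way is what prevents any single path's nodes from being overloaded in heavy traffic.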
[Figure: average jitter.]
[1] C. S. R. Murthy, B. Manoj, Ad hoc wireless networks: architectures and protocols, special edition, Prentice Hall, 2004.
[2] J. Macker, M. Corson and V. Park, Mobile and wireless internet services: putting the pieces together, IEEE Communications Magazine, 36, 2001, pp. 146-155.
[3] T. Clausen and P. Jacquet, Optimized link state routing protocol, IETF RFC 3626, Network Working Group, October 2003.
[4] C. E. Perkins and P. Bhagwat, Highly dynamic Destination-Sequenced Distance-Vector routing (DSDV) for mobile computers, Computer Communications Review, 24 (1994), 234-244.
[5] C. Perkins, E. Belding-Royer and S. Das, Ad hoc On-Demand Distance Vector routing, RFC 3561, July 2003.
[6] D. B. Johnson, D. A. Maltz and J. Broch, DSR: the dynamic source routing protocol for multi-hop wireless ad hoc networks, in: C. E. Perkins (Ed.), Ad hoc Networking, Chapter 5, Addison-Wesley, 2001.
M.P. Jancy#, B. Yogameena$
#Research Scholar, $Assistant Professor
Dept. of Electronics and Communication Engineering, Thiagarajar College of Engineering, Madurai, India
flytojancy@gmail.com, ymece@tce.edu
I. INTRODUCTION
A multicamera surveillance system aims at tracking people or objects of interest. The fundamental problems in video surveillance are the automatic detection of human motion, tracking humans, and classifying abnormal situations. Based on this motivation, this paper proposes an efficient approach for tracking people through a multicamera system. It creates a model for normal human behaviour, and any deviation from this basic model is analysed. Our methodology applies to short-term behaviours, meaning those that can be localized in a spatio-temporal sense, i.e. brief and within a restricted space. Examples of such behaviours are walking, standing still, running, moving abruptly, and waving a hand. The algorithm provides clear discrimination of such anomalies.
The paper is organized into five sections. Section II discusses the algorithms available at present. Section III describes the methodology. Section IV explains background subtraction, feature extraction, and the RVM learning system for classification. Section V depicts the experimental results and the classification of humans using the relevance vector machine. Finally, the conclusion is presented.
Page 131
Proceedings of International Conference on Computers, Communication & Intelligence, July 22nd & 23rd 2010
Fig. 1 System overview: video frames are segmented, foreground blobs are extracted, low-level and high-level feature vectors are computed, and a Relevance Vector Machine produces the final decision (normal/abnormal).

A. FEATURE CALCULATION
Feature vectors are computed by taking into account both the background subtraction result and the ground-plane information. The object centroid is determined using the following equations.
Motion-based techniques are mostly used for short-term activity classification; examples are walking, running and fighting. These techniques compute features of the motion itself and perform recognition based on those features.
IV. BACKGROUND SUBTRACTION
The first stage of a video surveillance system seeks the background, in order to automatically identify people, objects or events of interest in various changing environments. Background subtraction is accomplished in real time using the adaptive mixture of Gaussians method proposed by Stauffer [9], which models the surface reflectance value with K Gaussian distributions.
The probability of observing the current pixel value X_t is

    P(X_t) = Σ_{i=1}^{K} ω_{i,t} · η(X_t, μ_{i,t}, Σ_{i,t})    (1)

where ω_{i,t} is the weight estimate of the i-th Gaussian in the mixture at time t, and μ_{i,t} and Σ_{i,t} are its mean and covariance. The first B distributions, ordered by weight, whose cumulative weight exceeds a threshold T are taken as the background model:

    B = argmin_b ( Σ_{k=1}^{b} ω_k > T )    (2)

The centroid (x_c, y_c) of a foreground blob with N pixels is

    x_c = (1/N) Σ_{j=1}^{N} x_j    (3)

    y_c = (1/N) Σ_{j=1}^{N} y_j    (4)

The blob height is obtained from the extremal vertical coordinates Y_max and Y_min (Eq. 5), and the skeleton feature is obtained by calculating the distances d_i from the centroid to the boundary extrema (Eq. 6).
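The per-pixel update behind Eqs. (1)-(2) can be sketched as follows. This is a minimal, grayscale, single-pixel illustration of the Stauffer-Grimson mixture model; the learning rate, matching threshold and initial variance are arbitrary example values, not parameters taken from the paper.

```python
import math

class PixelMoG:
    """Per-pixel adaptive mixture of K Gaussians (after Stauffer & Grimson [9]).
    Grayscale sketch; a real system runs one such model per pixel/channel."""
    def __init__(self, K=3, alpha=0.05, T=0.7, init_var=400.0):
        self.K, self.alpha, self.T = K, alpha, T
        self.w = [1.0 / K] * K      # mixture weights  w_{i,t}
        self.mu = [0.0] * K         # means            mu_{i,t}
        self.var = [init_var] * K   # variances        sigma^2_{i,t}

    def update(self, x):
        """Update the model with pixel value x; return True if x is background."""
        # first component matched within 2.5 standard deviations, if any
        match = next((i for i in range(self.K)
                      if abs(x - self.mu[i]) < 2.5 * math.sqrt(self.var[i])), None)
        if match is None:
            # no match: replace the least probable component with a new one
            i = min(range(self.K), key=lambda j: self.w[j])
            self.mu[i], self.var[i], self.w[i] = x, 400.0, 0.05
        else:
            i = match
            self.mu[i] += self.alpha * (x - self.mu[i])
            self.var[i] += self.alpha * ((x - self.mu[i]) ** 2 - self.var[i])
        # move weight toward the current component, then renormalize
        for j in range(self.K):
            self.w[j] = (1 - self.alpha) * self.w[j] + self.alpha * (j == i)
        s = sum(self.w)
        self.w = [wj / s for wj in self.w]
        # background = first B components by weight with cumulative weight > T (Eq. 2)
        order = sorted(range(self.K), key=lambda j: -self.w[j])
        cum, bg = 0.0, set()
        for j in order:
            bg.add(j)
            cum += self.w[j]
            if cum > self.T:
                break
        return match is not None and match in bg

model = PixelMoG()
for _ in range(200):                     # train on a stable background value
    model.update(100.0)
assert model.update(100.0) is True       # familiar value: background
assert model.update(220.0) is False      # sudden new value: foreground
```

A full system applies this per pixel and takes the connected foreground pixels as the blobs whose centroids feed the feature calculation.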
Page 132
Proceedings of International Conference on Computers, Communication & Intelligence, July 22nd & 23rd 2010
    y_k(n) = x(n) W_k Φ(z(n))    (17)

where y_k is the output for sample point n under mapping function k, Φ(z(n)) is the design-matrix vector of the basis functions, S_k is the diagonal covariance matrix of the basis functions, and C_k(n) is the probability that sample point n belongs to mapping function k.
Page 133
Proceedings of International Conference on Computers, Communication & Intelligence, July 22nd & 23rd 2010
Fig. 7 shows the initial stage of the training, and Fig. 8 shows the result after iteration; the circles in the figure denote the relevance vectors obtained. The maximum number of iterations used here is 25. The resulting vectors are close to the original template, so the probability of misclassification is very low; this is the main advantage of the proposed method. Normal behaviour classification is shown in Fig. 9; the datasets were taken indoors and from the CAVIAR benchmark dataset. The algorithm has been tested on 2100 samples. Abnormal behaviour classification, of a person bending down and a person waving a hand, is shown in Fig. 10.
VIII. CONCLUSION
This paper has introduced a relevance vector machine framework for the classification of different kinds of human activities in video surveillance. To this end we have classified the given image sequences as standing, running and so on. Results demonstrate the proposed method to be highly accurate, robust and efficient in the classification of normal and abnormal human behaviours.
REFERENCES
[1] M. A. Ali, S. Indupalli and B. Boufama, Tracking multiple people for video surveillance, School of Computer Science, University of Windsor, Windsor, ON N9B 3P4, 2004.
[2] M. Andriluka, S. Roth and B. Schiele, People-tracking-by-detection and people-detection-by-tracking, Computer Science Department, Technische Universitat Darmstadt, 2007.
[3] T. Zhao and R. Nevatia, Tracking multiple humans in crowded environment, in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), 2004.
[4] X. Wu, Y. Ou, H. Qian and Y. Xu, A detection system for human abnormal behavior, 2004.
[5] B. Scholkopf, J. C. Platt, J. Shawe-Taylor, A. J. Smola and R. C. Williamson, Estimating the support of a high-dimensional distribution, Neural Computation, 13(7):1443-1471, 2001.
[6] R. Zhang, C. Vogler and D. Metaxas, Human gait recognition at sagittal plane, Image and Vision Computing, Vol. 25, Issue 3, pp. 321-330, March 2007.
[7] C. E. Rasmussen, The infinite Gaussian mixture model, Advances in Neural Information Processing Systems, pp. 554-560, 2004.
[8] P. Kilambi, E. Ribnick, A. J. Joshi, O. Masoud and N. Papanikolopoulos, Estimating pedestrian counts in groups, Computer Vision and Image Understanding, pp. 43-59, 2008.
[9] C. Stauffer and W. E. L. Grimson, Learning patterns of activity using real-time tracking, IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 22, Issue 8, pp. 747-757, 2000.
[10] C. BenAbdelkader, R. Cutler and L. Davis, Motion-based recognition of people in eigengait space, in Proc. Fifth IEEE International Conference on Automatic Face and Gesture Recognition, pp. 267-272, May 2002.
[11] C. Chang and C. Lin, LIBSVM: a library for support vector machines, software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm, 2001.
[12] D. Kosmopoulos, P. Antonakaki, K. Valasoulis and D. Katsoulas, Monitoring human behavior in an assistive environment using multiple views, in Proc. 1st International Conference on Pervasive Technologies Related to Assistive Environments (PETRA 08), Athens, Greece, 2008.
Jyothi.bitcse@gmail.com
Chandru_91@yahoo.com
ABSTRACT
Intrusion Detection Systems (IDSs) for Mobile Ad hoc Networks (MANETs) are indispensable, since traditional intrusion-prevention techniques are not strong enough to protect MANETs. However, the dynamic environment of MANETs makes the design and implementation of IDSs a very challenging task. In this paper, we present a hierarchical Zone-Based Intrusion Detection System (HZBIDS) model that fits the requirements of MANETs. The model utilizes intelligent, lightweight mobile agents which collect audit data from the different mobile nodes, preprocess the data, and exchange alert and alarm messages among HZBIDS local mobile agents and gateway nodes. With alert information from a wider area, the gateway-node IDS can effectively suppress many falsified alerts and provide more diagnostic information about occurring attacks. The model can adjust itself dynamically to adapt to changes in the external environment. The model is also robust and scalable.
Keywords: MANET, ad hoc network, mobile agents
INTRODUCTION
The unique characteristics of Mobile Ad hoc Networks (MANETs), such as arbitrary node movement and lack of centralized control, make them vulnerable to a wide variety of outside and inside attacks [1]. How to provide effective security protection for MANETs has become one of the main challenges in deploying MANETs in practice. Intrusion-prevention techniques, such as encryption and authentication, can deter attackers from malicious behavior, but prevention-based techniques alone cannot totally eliminate intrusions. Security research on the Internet demonstrates that, sooner or later, a smart and determined attacker can exploit some security hole to break into a system, no matter how many intrusion-prevention measures are deployed. Therefore, intrusion detection systems (IDSs), serving as the second line of defense, are indispensable for a reliable system. IDSs for MANETs can complement and integrate with existing MANET intrusion-prevention methods to provide highly survivable networks [1]. Nevertheless, it is very difficult to design a once-for-all detection model. Intrusion detection techniques used in wired networks cannot be directly applied to mobile ad hoc networks, due to the special characteristics of these networks.
In this paper, we are concerned with the design of an intrusion detection system for MANETs. Our goal is to design a new hierarchical ZBIDS model, derived from the zone-based intrusion detection architecture.
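The alert-correlation idea at the heart of the gateway nodes can be illustrated with a short sketch. This shows only the general principle (suppress alerts not corroborated by several zone nodes), not the authors' exact HZBIDS protocol; the two-report threshold is an assumption made for the example.

```python
def gateway_filter(alerts, min_reports=2):
    """Gateway-level alert correlation (illustrative sketch): an alert about a
    suspect node is kept only if at least `min_reports` distinct zone nodes
    reported it, which suppresses falsified alerts raised by a single
    (possibly compromised) monitor.

    `alerts` is a list of (reporting_node, suspect_node) pairs."""
    reporters = {}
    for reporter, suspect in alerts:
        reporters.setdefault(suspect, set()).add(reporter)
    return {s for s, r in reporters.items() if len(r) >= min_reports}

alerts = [("n1", "x"), ("n2", "x"), ("n3", "x"),   # corroborated attack on x
          ("n4", "y")]                             # single, possibly falsified
assert gateway_filter(alerts) == {"x"}
```

With `min_reports=1` the falsified alert about "y" would pass through as well, which is exactly the behavior a single stand-alone IDS exhibits.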
2. RELATED WORK
An IDS for MANETs. The intrusion detection architecture for a wireless ad hoc network may depend on the network infrastructure itself. Wireless ad hoc networks may be configured with either a flat or a multi-layered network infrastructure. In a flat network infrastructure all nodes are equal, so it may be suitable for civilian applications such as virtual classrooms or conferences [5]. In a multi-layered network, by contrast, some nodes are treated differently: nodes can be divided into clusters, with one cluster head in each cluster.
In traditional wired networks, many intrusion detection systems have been implemented in switches, routers or gateways, which play a key role and make IDS deployment in these devices easy [3]. These devices, however, do not exist in a MANET. For that reason, several intrusion detection architectures have been proposed for ad hoc networks, including the stand-alone IDS architecture, the distributed and cooperative IDS architecture, and the hierarchical IDS architecture [4]. In a stand-alone IDS architecture, each node runs an IDS independently to detect malicious attacks and determine intrusions. Since stand-alone IDSs do not cooperate or share information with detection systems running on other nodes, all intrusion detection decisions are based on the information available to the individual node; furthermore, nodes do not know anything about the situation on other nodes in the same network [5].
Zhang and Lee [1] proposed the distributed and cooperative intrusion detection architecture shown in Fig. 1. In this architecture, each node runs an IDS agent and makes local detection decisions independently, by monitoring user, system and communication activities within its radio range. At the same time, all the nodes cooperate in a global detection process.
Fig. 2 HZBIDS agent architecture: each local IDS (LIDS) runs data collection, a detection agent, a data-association agent and a response agent over the audit data; gateway IDS (GIDS) nodes run a global association agent, a global detection agent and a global response agent, and exchange alerts with other GIDS.
I. INTRODUCTION
Over the last decade, high-performance computing has ridden the wave of commodity technology, building clusters that leverage the tremendous growth in processor performance fuelled by the commercial world. As this pace slows, processor designers face complex problems in increasing gate density, reducing power consumption and designing efficient memory hierarchies. Traditionally, performance gains in commodity processors have come from higher clock frequencies, an exponential increase in the number of devices integrated on a chip, and other architectural improvements. Power consumption is increasingly becoming the driving constraint in processor design: processors are much more power-limited than area-limited. Current general-purpose processors are optimized for single-threaded
for (i = 1; i < spe_count; i++)
{
    // invoke the other SPEs to search their partitions using DFS
    start := spe[i]->thread(no_of_ele, key, spe_id);
}
// read the search status from the SPEs' outbound mailboxes
found := spu_read_out_mbox();
if (found)
    print "Key found";
else
    print "Key not found";
terminate(spe[i]->thread);
exit(0);
end
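Since the Cell-specific SPE calls above only run on that platform, the same control flow can be sketched with ordinary threads: one worker explores the whole tree breadth-first while the remaining workers split the root's subtrees and search depth-first, each posting its result to a shared queue (a stand-in for the SPE outbound mailbox). The example tree, worker count and round-robin partitioning are illustrative assumptions.

```python
from collections import deque
import threading
import queue

def bfs_contains(adj, root, key):
    """Breadth-first search for key starting at root."""
    q, seen = deque([root]), {root}
    while q:
        node = q.popleft()
        if node == key:
            return True
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                q.append(nxt)
    return False

def dfs_contains(adj, root, key):
    """Iterative depth-first search for key starting at root."""
    stack, seen = [root], {root}
    while stack:
        node = stack.pop()
        if node == key:
            return True
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return False

def hybrid_search(adj, root, key, workers=4):
    """Hybrid search: one BFS worker over the whole tree (good for shallow
    keys), the others run DFS over disjoint subtree partitions (good for
    deep keys). Results are posted to a queue, like the SPE mailbox."""
    results = queue.Queue()
    subtrees = adj.get(root, [])
    threads = [threading.Thread(
        target=lambda: results.put(bfs_contains(adj, root, key)))]
    for i in range(workers - 1):
        part = subtrees[i::workers - 1]          # round-robin partition
        threads.append(threading.Thread(
            target=lambda p=part: results.put(
                any(dfs_contains(adj, r, key) for r in p))))
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return any(results.get() for _ in threads)

# tiny example tree as an adjacency map
adj = {0: [1, 2, 3], 1: [4, 5], 2: [6], 3: [7, 8], 6: [9]}
assert hybrid_search(adj, 0, 9) is True
assert hybrid_search(adj, 0, 42) is False
```

On real hardware each worker would of course run on its own core; the sketch only mirrors the coordination pattern, not the SPE memory model.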
3.1 Simulator
Performance of a parallel algorithm is measured in terms of the accuracy of the results and the speed of execution. In this case the key element is searched for in parallel by all SPEs, using both graph-searching algorithms. Since each SPE works on its own set of nodes, the result is obtained within a few seconds of the process starting. The performance graph shown in Fig. 4 is for the key value 500, with 7000 elements to be searched. Here SPE7 performs the BFS while all the other SPEs perform DFS, as shown in the figure.
X. CONCLUSION
Together with an unprecedented level of performance, multicore processors bring an unprecedented level of complexity to software development. There is a clear shift of paradigm from classical parallel computing, where parallelism is typically expressed in a single dimension, to the complex multidimensional parallelization space of multicore processors, where several levels of control and data parallelism must be exploited to obtain the expected performance. With this work we have shown that, for the specific case of hybrid-search graph exploration, it is possible to tame the algorithmic and software development process and achieve, at the same time, an impressive level of performance. Thus the performance issues encountered with the existing searching algorithms are overcome by the hybrid search algorithm. The hybrid algorithm works well
with all the cores of the Cell Broadband Engine, and the accuracy of the results is also greatly improved. In future work, an application can be selected for which this algorithm finds an optimal solution and enhances performance by reducing running time; paths from the root to the solution can also be reported, and a weight can be added to each edge.
conference on Supercomputing, 2008.
[40] K. Subramani and K. Madduri, A randomized queueless algorithm for breadth-first search, Int'l Journal of Computers and their Applications, 15(3):177-186, 2008.
[41] String searching with multicore processors, Oreste Villa, Politecnico di Milano / Pacific Northwest National Laboratory, Daniele
II. INTRODUCTION
Shape matching is a challenging problem, made particularly so by the huge proliferation of images on the Internet and in computer databases containing thousands or millions of images. Applications of shape recognition can be found in Computer Aided Design/Computer Aided Manufacturing (CAD/CAM), virtual reality (VR), medicine, molecular biology, security and entertainment [1].
Existing approaches can be divided into [1]: statistical descriptors, for example the geometric 3D moments employed by [2] and the shape distribution [3]; extension-based descriptors, which are calculated from features sampled along certain directions from a position within the object [4]; volume-based descriptors, which use the volumetric representation of a 3D object to extract features (examples are shape histograms [5], model voxelization [6] and point-set methods [7]); descriptors using the surface geometry, which compute curvature measures or the distribution of surface normal vectors [8]; image-based descriptors, which reduce the problem of 3D shape matching to an image-similarity problem by comparing 2D projections of the 3D objects [9]; and methods matching the topology of the two objects (for example Reeb graphs, where the topology of the 3D object is described by a graph structure [10]). Skeletons are intuitive object descriptions and can be obtained from a 3D object by applying a thinning algorithm to the voxelization of a solid object, as in [11].
Despite such a long history, interest in the problem has recently been reignited [12] by the fantastic proliferation of
Let

    z(t) = x(t) + i y(t)    (1)

represent the shape outline in the two-dimensional, complex domain, where t is a discrete sample index along the boundary. Using the FFT, z(n) is easily converted to the Fourier domain:

    f(k) = F{z(n)}    (2)

where the complex coefficients f(k) are known as the Fourier shape descriptors. The entire challenge of shape matching, then, is the interpretation and comparison of the Fourier descriptors fm(k) from shape m with those of some other shape n.
Given the two point sets U = {u_i}_{i=1}^{n} and V = {v_j}_{j=1}^{m}, the chamfer distance function is the mean of the distances between each point u_i in U and its closest point in V:

    d_cham(U, V) = (1/n) Σ_{u_i ∈ U} min_{v_j ∈ V} ||u_i - v_j||    (4)

The symmetric chamfer distance is obtained by adding d_cham(V, U).
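The chamfer distance defined above reduces to a few lines; the point sets below are made up for the example.

```python
import math

def chamfer(U, V):
    """Directed chamfer distance: mean over u in U of the
    distance from u to its nearest point in V."""
    return sum(min(math.dist(u, v) for v in V) for u in U) / len(U)

def symmetric_chamfer(U, V):
    """Symmetric version: d_cham(U, V) + d_cham(V, U)."""
    return chamfer(U, V) + chamfer(V, U)

U = [(0, 0), (1, 0), (2, 0)]
V = [(0, 1), (2, 1)]
# nearest-V distances from U are 1, sqrt(2), 1 -> mean = (2 + sqrt(2)) / 3
assert abs(chamfer(U, V) - (2 + math.sqrt(2)) / 3) < 1e-12
```

Note that the directed version is not symmetric in general, which is why the symmetric variant sums both directions.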
A transformed copy of a shape can be written as

    z2(n) = α + e^{iφ} r z1((n - τ) mod N)    (5)

where r is a scaling factor, φ an orientation (rotation), α a translation, τ a shift of the boundary starting point, and N is the length of the boundary in pixels. Applying the Fourier transform, we find that

    f2(k) = e^{-i 2π k τ / N} { α δ(k = 0) + e^{iφ} r f1(k) }    (6)

Since z1(n) and z2(n) are equivalent shapes, the goal is to determine how to find descriptors invariant to r, φ, α and τ, and thus variant only to inherent changes in shape.
Fig 2.1. Sensitivity of shape to the phase information
Fig. 3.1 Flow chart of the proposed work: input trademark → Fourier spectrum using the FFT → magnitude normalization → phase discrimination (τ-transform, selection of a common reference) → phase comparison.
To remove the dependence on translation, scale and rotation, the descriptors are normalized by f(1). For the transformed shape,

    f̂2(k) = f2(k) / f2(1) = e^{-i 2π (k-1) τ / N} f̂1(k)    (6)

so that the magnitude

    |f̂(k)| = |f(k)| / |f(1)|    (7)

satisfies |f̂2(k)| = |f̂1(k)| and is invariant to translation, scale, rotation and starting point. The phase, however, still depends on τ; a τ-invariant phase sequence f̄(k) is therefore constructed by combining the phases of f̂(k) with those of a reference coefficient f(j) (Eq. 9), where the reference j is chosen so that |f(j)| is not small.
Thus we have constructed a τ-invariant phase sequence. Given two shapes f1(k) and f2(k), the phase comparison involves selecting a good common reference j, typically by maximizing |f1(j) · f2(j)|, computing the phase difference between f̄1 and f̄2, and discriminating on the basis of that difference.
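The invariance of the normalized magnitudes |f(k)|/|f(1)| can be checked numerically. The ellipse boundary and the transform parameters below are arbitrary test values, and a naive O(N^2) DFT stands in for the FFT to keep the sketch dependency-free.

```python
import cmath
import math

def dft(z):
    """Naive DFT of a complex boundary sequence (O(N^2); fine for a sketch)."""
    N = len(z)
    return [sum(z[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def normalized_magnitudes(boundary):
    """|f(k)| / |f(1)| for k >= 2: invariant to translation, scale,
    rotation and starting point (Eqs. 5-7)."""
    f = dft(boundary)
    return [abs(fk) / abs(f[1]) for fk in f[2:]]

N = 128
# an ellipse boundary as the test shape
z1 = [2.0 * math.cos(2 * math.pi * n / N) + 1j * math.sin(2 * math.pi * n / N)
      for n in range(N)]
# transformed copy: scaled by r, rotated by phi, translated by alpha,
# starting point shifted by tau samples (Eq. 5)
r, phi, alpha, tau = 3.0, 0.7, 5 - 2j, 17
z2 = [alpha + r * cmath.exp(1j * phi) * z1[(n + tau) % N] for n in range(N)]

m1 = normalized_magnitudes(z1)
m2 = normalized_magnitudes(z2)
err = max(abs(a - b) for a, b in zip(m1, m2))
assert err < 1e-9   # descriptors agree despite the transform
```

Only the magnitudes are compared here; discriminating on phase, as the proposed method does, additionally requires the τ-invariant phase construction described above.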
Algorithm
Fig. 4.4 Fourier transform: magnitude normalization and phase comparison.
Fig. 4.6 (a) Precision and (b) recall of the phase-based FFT method versus the Radon-transform method, for input trademarks logo1-logo9 and 6-30 retrievals.
As can be seen in Fig. 4.6 (a) and (b), the precision and recall values of phase-based shape matching, for different numbers of retrievals over all the database trademarks, are greater than those of the existing method. This improves the retrieval efficiency and reduces the error rate.
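Precision and recall at a given number of retrievals are computed as follows; the query results below are hypothetical, not drawn from the paper's logo database.

```python
def precision_recall(retrieved, relevant):
    """Precision and recall for one query: `retrieved` is the ranked list
    returned by the matcher, `relevant` the set of true matches."""
    hits = sum(1 for r in retrieved if r in relevant)
    return hits / len(retrieved), hits / len(relevant)

# hypothetical example: 6 retrievals for a trademark with 4 true matches
retrieved = ["logo1", "logo3", "logo7", "logo2", "logo9", "logo5"]
relevant = {"logo1", "logo2", "logo3", "logo4"}
p, r = precision_recall(retrieved, relevant)
assert (p, r) == (0.5, 0.75)   # 3 hits out of 6 retrieved, 4 relevant
```

Sweeping the retrieval cutoff from 6 to 30 and plotting (precision, recall) per cutoff reproduces the kind of curves shown in Fig. 4.6.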
V. CONCLUSION
A novel phase-based shape descriptor for the trademark-matching problem has been proposed in this paper. The phase-based discrimination has been implemented and shown to be effective in recognizing trademarks from large databases. The performance has improved greatly, particularly in the elimination of the highly irrelevant matches which appeared with earlier methods.
ACKNOWLEDGMENT
The authors would like to thank the Management and
Department of Electronics and Communication Engineering
of Thiagarajar College of Engineering for their support and
assistance to carry out this work.
REFERENCES
[1] B. Bustos, D. A. Keim, D. Saupe, T. Schreck and D. V. Vranic, Feature-based similarity search in 3D object databases, ACM Computing Surveys, 37(4):345-387, 2005.
[2] M. Elad, A. Tal and S. Ar, Content based retrieval of VRML objects: an iterative and interactive approach, in Eurographics Workshop on Multimedia, Springer, pages 107-118, New York, 2001.
[3] R. Osada, T. Funkhouser, B. Chazelle and D. Dobkin, Shape distributions, ACM Trans. Graph., 21(4):807-832, 2002.
[4] D. Vranic and D. Saupe, Description of 3D-shape using a complex function on the sphere, in International Conference on Multimedia and Expo, pages 177-180, IEEE, 2002.
[5] M. Ankerst, G. Kastenmuller, H.-P. Kriegel and T. Seidl, 3D shape histograms for similarity search and classification in spatial databases, in 6th Int. Symp. on Advances in Spatial Databases, Springer, pages 201-226, London, UK, 1999.
[6] D. Vranic and D. Saupe, 3D shape descriptor based on 3D Fourier transforms, in EURASIP Conference on Digital Signal Processing for Multimedia Communications and Services, pages 271-274, Comenius University, 2001.
[7] J. W. H. Tangelder and R. C. Veltkamp, Polyhedral model retrieval using weighted point sets, in Shape Modeling International, pages 119-129, IEEE, Seoul, Korea, 2003.
[8] E. Paquet and M. Rioux, Nefertiti: a tool for 3D shape databases management, SAE Transactions, 108:387-393, 1999.
[9] C. M. Cyr and B. B. Kimia, A similarity-based aspect graph approach to 3D object recognition, International Journal of Computer Vision, 57(1):5-22, 2004.
[10] Y. Shinagawa, T. Kunii and Y. Kergosien, Surface coding based on Morse theory, IEEE Computer Graphics and Applications, 11(5):66-78, September 1991.
[11] H. Sundar, D. Silver, N. Gagvani and S. Dickinson, Skeleton based shape matching and retrieval, Shape Modeling International, pages 130-139, 12-15 May 2003.
[12] T. Bui and G. Chen, Invariant Fourier-wavelet descriptor for pattern recognition, Pattern Recognition, 1999.
[13] P. Fieguth, P. Bloore and A. Domsa, Phase-based methods for Fourier shape matching, ICASSP, 2004.
[14] S. P. Smith and A. K. Jain, Chord distribution for shape matching, Computer Graphics and Image Processing, vol. 20, pp. 259-271, 1982.
[15] Thayananthan, B. Stenger, P. H. S. Torr and R. Cipolla, Shape context and chamfer matching in cluttered scenes, in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2003.
[16] S. Berretti, A. D. Bimbo and P. Pala, Retrieval by shape similarity with perceptual distance and effective indexing, IEEE Trans. on Multimedia, vol. 2(4), pp. 225-239, 2000.
[17] D. Guru and H. Nagendraswamy, Symbolic representation of two-dimensional shapes, Pattern Recognition Letters, vol. 28, pp. 144-155, 2007.
[18] N. Alajlan, M. S. Kamel and G. Freeman, Multi-object image retrieval based on shape and topology, Signal Processing: Image Communication, vol. 21, pp. 904-918, 2006.
[19] N. Alajlan, I. E. Rube, M. S. Kamel and G. Freeman, Shape retrieval using triangle-area representation and dynamic space warping, Pattern Recognition, vol. 40(7), pp. 1911-1920, 2007.
[20] S. Han and S. Yang, An invariant feature representation for shape retrieval, in Proc. Sixth International Conference on Parallel and Distributed Computing, Applications and Technologies, 2005.
I. INTRODUCTION
Living in a fast-moving world, it is natural to expect things faster. Similarly, in our quest for data search we need fast and efficient retrieval methodologies. With the evolution of new technology and new products in various domains, researchers are focusing on exploring new techniques for the storage, management and retrieval of data and knowledge from a repository acquired from various sources. Merely having a repository of data, or efficiently organizing the data, cannot guide decision makers or management to make accurate decisions as humans do; the best approach is to integrate and manage the data in the form of knowledge. Online retrieval of exact knowledge is increasing, and it requires a large amount of time to retrieve from different data sources and to create knowledge from the available information. Knowledge searching through mobile phones via SMS does not yet exist. To address these shortcomings, we have built a Knowledge Base using the Knowledge Base Markup Language (KBML), which is derived from the XML architecture. The system also provides facilities to search and add content to and from the Knowledge Base through mobile phones and Windows Mobile phones without using GPRS. The aim of our project is to build a secure, intelligent storage mechanism which can store information in the form of knowledge using the knowledge
made, the title is first matched and the corresponding IDs are used to navigate to and retrieve the description. If the search keyword is not found in the KBML file, control passes to a search of the database.
End-users may wish to create knowledge when a particular search result is not present. In such cases, users should create an account and enter the knowledge along with a title and description. At this stage, a KBML file is created for the newly created knowledge, with all its constraints.
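The title-first lookup described above can be sketched with the standard XML tools. The paper does not publish the exact KBML schema, so the element names below (knowledge, entry, title, description) and the sample entries are illustrative assumptions, not the actual format.

```python
import xml.etree.ElementTree as ET

# Hypothetical KBML fragment -- the element names are assumptions for the
# example, since the actual KBML schema is not reproduced in the paper.
KBML = """<knowledge>
  <entry id="K001">
    <title>Neem</title>
    <description>Medicinal plant; taxonomy: Azadirachta indica.</description>
  </entry>
  <entry id="K002">
    <title>Tulsi</title>
    <description>Holy basil; taxonomy: Ocimum tenuiflorum.</description>
  </entry>
</knowledge>"""

def search_kbml(xml_text, keyword):
    """Title-first lookup: pick the entry whose title matches the keyword,
    then use its id to retrieve the description."""
    root = ET.fromstring(xml_text)
    for entry in root.findall("entry"):
        if keyword.lower() in entry.findtext("title", "").lower():
            return entry.get("id"), entry.findtext("description")
    return None   # caller falls back to the database search

assert search_kbml(KBML, "neem") == (
    "K001", "Medicinal plant; taxonomy: Azadirachta indica.")
```

A miss (the `None` return) corresponds to the fallback path in the text, where control passes to the database search.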
4. EXPERIMENTAL RESULTS
Users have the option of adding data to the already existing database. When adding data about a plant, fields such as the name of the plant, its taxonomy and its geology should be filled in the appropriate text boxes in the form.
4.1 SEARCHING THE KNOWLEDGE
This is the main part of the project, by which the data a user needs is retrieved for them. It is easy to search the relevant information available in the selected knowledge base through the Windows and mobile applications. When users search for particular knowledge, they are allowed to select the knowledge base and the data sources to search from a variety of options. All the relevant results are displayed as a list, and the knowledge is obtained by navigating to the specified data source.
6. CONCLUSION
REFERENCES
[1] Knowledge Engineering, from http://pages.cpsc.ucalgary.ca/~kremer/courses/CG/.
[2] J. Whittaker, M. Burns and J. Van Beveren, Understanding and measuring the effect of social capital on knowledge transfer within clusters of small-medium enterprises, in Proceedings of the 16th Annual Conference of the Small Enterprise Association of Australia and New Zealand, 28 September to 1 October 2003.
Fig. 4 Retrieval
I. INTRODUCTION
E-learning, as a package of technology-enhanced education, tends to replace the standard practices of blackboard teaching and boring lectures. The concept of studying anywhere, anytime and at one's own pace has made it a generally accepted learning system. During the last decade, the introduction of information technology into the educational domain has resulted in a tremendous change, reshaping the delivery of academic knowledge. The World Wide Web has increased the role of technology in the development of higher education, especially through e-learning, by providing very fast access to relevant educational resources at any time and place. Web-enabled e-learning systems enable socially excluded communities to access higher education and connect different societies, communities, resources and learners.
The most important resource of e-learning systems is the learning material, and so the development of knowledge-based educational resources is a major area of research in the present educational scenario. E-learning courseware is always expected to satisfy the knowledge needs of different types of learners and to provide personalized delivery of academic knowledge. Recent research in the field of e-learning shows that the main focus is on developing intelligent e-learning systems, i.e. web-based systems that are more understandable by machines. Intelligent web-based services are now rendered through the Semantic Web [1][2][5], which is currently among the hottest topics of research.
TABLE I
A PARTIAL LOOK OF THE PRELIMINARY STACK ONTOLOGY AND OWL REPRESENTATION

Ontology specification:
  Stack ->has: Definition ->is: Data structure in which items are inserted and deleted at the same end (Definition type)
  ->Characteristic: Last-in-First-out
  Homogeneous elements ->characteristics: Elements ->has same: Datatype (string, records)
  ->of: Deletion
  CheckOverflow: toppointer >= MAXSIZE; ->false: PushStep2
  ->array location of: Toppointer
  Stack: Push ->implemented by: Reference to C program code

OWL (RDF) representation:
  <owl:versionInfo>$Id: datastructures.owl $</owl:versionInfo>
  <owl:Ontology rdf:about="">
  <owl:Class rdf:ID="Stack">
    <rdfs:label>Stack</rdfs:label>
    <rdfs:comment> ... </rdfs:comment>
  </owl:Class>
  <owl:ObjectProperty rdf:ID="hasMAXSIZE">
    <rdfs:domain rdf:resource="#Stack" />
  </owl:ObjectProperty>
  <owl:DatatypeProperty rdf:ID="MAXSIZE"> ... </owl:DatatypeProperty>
  <owl:DatatypeProperty rdf:ID="toppointer"> ... </owl:DatatypeProperty>
  <owl:FunctionalProperty rdf:ID="Operations"> ... </owl:FunctionalProperty>
  <owl:Class rdf:about="#Operations">
    <rdfs:comment> ... </rdfs:comment>
    <rdfs:subClassOf> ... </rdfs:subClassOf>
  </owl:Class>
  <owl:DatatypeProperty rdf:ID="push argument"> ... </owl:DatatypeProperty>
  <owl:Class rdf:about="#Operations">
    <rdfs:comment> ... </rdfs:comment>
    <rdfs:subClassOf> ... </rdfs:subClassOf>
  </owl:Class>
Fig. 3 shows a partial concept map for the Push operation on the stack. Learners are presented with the concepts and their relationships to other concepts by means of concept maps, and are shown the parts of the concept map in which they are interested. They are encouraged to extend their understanding through indirectly related concepts shown on demand. For example, a concept map which deals with the basic sorting algorithms may also reference the current applications of sorting along with their efficiency considerations, and may include a URL or a textbook reference related to sorting. If learners are interested in moving to the advanced level, they can move through the advanced links of the concept maps; those links may be programmed to display only on demand, i.e. based on the learner's interest. Similarly, students are given the option of specifying keywords of interest to extract specific parts of the concept maps. They are also encouraged to build their own concept maps for any topic of their interest and to explore the relationships between unrelated concepts. The developed concept maps are verified and added into the content knowledge base, thus enriching it.
TABLE II
THREE LEVEL CLUSTERING FOR A LEARNER PERFORMANCE
Level           Clusters                                      Topic numbers
Knowledge       Very High / High / Average / Low / Very Low
Understanding   Very High / High / Average / Low / Very Low
Application     Very High / High / Average / Low / Very Low
At the end of each topic, the learner is assessed with a questionnaire comprising a minimum of 20 multiple-choice objective questions covering all the levels of the given knowledge evaluation pattern. For each question in a topic, one mark is given for a correct answer and zero otherwise. The learner's proportion of correct answers is recorded as a fraction of 1 in each of these three levels. Based on the fraction of correct answers scored in each of the three levels, the performance of the learner in all the topics is clustered into five major clusters: Very high, High, Average, Low, Very low. The clustering is applied to all three levels of the knowledge evaluation pattern. Table II shows the initial clustering of the learner performance. Every learner falls into exactly one of these categories for each topic, and so the k-means clustering algorithm is used to determine the performance rate of a learner, where k = 5 indicates the five clusters.
Given an initial set of k means m1(1), ..., m5(1), which may be specified randomly or by some heuristic; in this case the means chosen initially are 0.9, 0.7, 0.5, 0.3 and 0.1, a heuristic choice based on the nature of the respective clusters.
The algorithm proceeds by alternating between two steps:
1. Assignment: assign each score to the cluster with the nearest mean.
2. Update: recompute each cluster mean as the average of the scores assigned to it.
The algorithm is deemed to have converged when the
assignments no longer change.
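The clustering described above can be sketched in Python; the score fractions below are hypothetical, and only the initial means 0.9, 0.7, 0.5, 0.3, 0.1 come from the text:

```python
# One-dimensional k-means (k = 5) over per-topic score fractions in [0, 1].
def kmeans_1d(scores, means, max_iter=100):
    """Cluster score fractions around an initial list of k means."""
    means = list(means)
    assign = [None] * len(scores)
    for _ in range(max_iter):
        # Assignment step: attach each score to the nearest mean.
        new_assign = [min(range(len(means)), key=lambda c: abs(s - means[c]))
                      for s in scores]
        if new_assign == assign:          # converged: assignments unchanged
            break
        assign = new_assign
        # Update step: recompute each mean from its current members.
        for c in range(len(means)):
            members = [s for s, a in zip(scores, assign) if a == c]
            if members:
                means[c] = sum(members) / len(members)
    return assign, means

labels = ["Very high", "High", "Average", "Low", "Very low"]
scores = [0.95, 0.88, 0.72, 0.51, 0.33, 0.12, 0.08]   # hypothetical learners
assign, means = kmeans_1d(scores, [0.9, 0.7, 0.5, 0.3, 0.1])
for s, a in zip(scores, assign):
    print(f"score {s:.2f} -> {labels[a]}")
```

With the fixed initial means, each score snaps to the nearest cluster on the first pass and the means settle after one update.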
The clusters thus formed are then able to represent a single learner's behavior in each topic. The three-level clusters make the personalization of e-learning more powerful by automatically classifying the learning capabilities of the learner in each topic. A set of rules is framed with this cluster information, and rule-based inference enables accurate assessment of learner capabilities. Table III shows some of the rules and the inferences made. The entries in the table denote the following: K = Knowledge, U = Understanding, A = Application; VH = Very High, H = High, M = Average, L = Low, VL = Very Low.
TABLE III
FEW RULE-BASED INFERENCES ON CLUSTERS AND THE ASSESSMENT OF LEARNER BEHAVIOR

Rule | Inference | Assessment
Ti in {K x VH, U x L} | Good in knowledge but low in understanding level | Needs more real-time examples and problem-solving exercises
Ti in {U x VH, A x L} | Very high understanding; application low |
Ti in {K x VL, U x VL} | Knowledge very low; understanding very low | A very dull learner; needs more attention, more visualized presentations in Ti and more worked examples
Ti in {K x M, U x M} and i => j and Tj in {K x M, U x L} | Ti affects the performance in Tj | Learner's understanding of Ti must be improved to improve his performance in Tj
Ti in {K x M, U x L} and i => j and Tj in {K x H, U x H} | In spite of the average performance in Ti, good performance in Tj | Swapping of the two related topics is suggested for the learner
T1..n in {K x H} | High performance in all the topics at the knowledge level | A consistent learner
[Figure: cluster distribution showing the number of topics per cluster (VH to VL) for the Knowledge, Understanding and Application levels]
Fig. 5 Students' performance organized in clusters (number of students of the same cluster, High/Medium/Low/Very Low, per topic number)
I.
INTRODUCTION
Conventional visible image watermarking
schemes [1],[2] impose strict irreversibility. However
if W'_n = 1, the watermark is embedded in the DCT blocks Y_n(i,j), 1 ≤ i,j ≤ 8, n ∈ S    (1)

and the block scaling factor is

α_n = exp(−[Y_n(1,1) − μ]² / (2σ²)) + v_n,  1 ≤ n ≤ N    (2)

where μ = (1/N) Σ_{n=1..N} Y_n(1,1) is the mean of the dc coefficients and σ² = (1/N) Σ_{n=1..N} [Y_n(1,1) − μ]² is the variance. The term v_n is normalized as

v_n = (v_n − min_n(v_n)) / (max_n(v_n) − min_n(v_n)),  v_n = ln(v_n),  1 ≤ n ≤ N    (3)

with

v_n = (1/63) Σ_{(i,j)≠(1,1)} [Y_n(i,j) − μ_n]²,  μ_n = (1/63) Σ_{(i,j)≠(1,1)} Y_n(i,j),  1 ≤ i,j ≤ 8, 1 ≤ n ≤ N.
Y^a_n(i,j) = P^w_n(i,j) for W_n(i,j) ∈ {0, 1},  1 ≤ i,j ≤ 8 and n ∈ S    (4)

D = (Y − Y^a) · ROI    (5)

The reverse transform is

Y^a_n(i,j) = P^w_n(i,j),  1 ≤ i,j ≤ 8 and n ∈ {1, 2, ..., N} \ S    (6)

After encoding, the packet is compressed using simple run-length encoding to obtain D_e    (7). For security, the encoded and compressed D_e is exclusive-ORed (XOR) with a pseudo-random sequence generated from a secret seed before being inserted into the non-ROI region.

x' = 2x − y;  y' = 2y − x    (8)

with inverse

x = (2/3)x' + (1/3)y',  y = (1/3)x' + (2/3)y'    (9)

where (x, y) and (x', y') are the chosen pixel values in the range [0, 255], constrained to prevent overflow and underflow conditions.
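The forward mapping in (8) and its inverse in (9) can be checked numerically; this sketch uses illustrative integer pixel values and omits the rounding applied to real 8-bit data:

```python
def forward(x, y):
    """Forward contrast mapping, eq. (8): x' = 2x - y, y' = 2y - x."""
    return 2 * x - y, 2 * y - x

def inverse(xp, yp):
    """Inverse mapping, eq. (9): x = (2x' + y')/3, y = (x' + 2y')/3."""
    return (2 * xp + yp) / 3, (xp + 2 * yp) / 3

# Round-trip check on a pixel pair inside [0, 255]:
# 2x' + y' = 4x - 2y + 2y - x = 3x, so the inverse recovers x exactly.
x, y = 120, 100
xp, yp = forward(x, y)
print(xp, yp, inverse(xp, yp))
```

The algebra shows why the pair is exactly reversible: substituting (8) into (9) collapses to 3x/3 and 3y/3.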
III.
Apply the pixel prediction method to the watermarked image after the payload is removed, as done previously, to estimate a scaling factor, and use it to obtain an approximated version of the original image Y. Now to recover the original pixel values in
V.
Fig.1a Lena
Fig.1b Peppers
Fig.1c Splash
Fig.1d Airplane
Watermarks used:
Fig.2a mark-1
Fig.2b mark-2
Fig.2c mark-3
Fig.2d mark-4
Watermarked Images:
Fig.3a
Fig.3b
Fig.3c
Fig.3d
Fig.5b Retrieved Lena
Fig.5d Extracted Lena
TABLE I: PERFORMANCE EVALUATION: WATERMARKING VARIOUS IMAGES WITH MARK-3 OF SIZE 128 x 128 AT VARIOUS ROI.

Image   | Position  | Payload (bits) | PSNR-1  | PSNR-2  | PSNR-3  | WPSNR-1 | WPSNR-2 | WPSNR-3 | SSIM-1 | SSIM-2 | SSIM-3
Lena    | (0,0)     | 25372          | 55.3490 | 37.8900 | 59.8601 | 30.2225 | 49.4495 | 33.2196 | 0.9631 | 0.9995 | 0.9642
Lena    | (160,160) | 25701          | 42.8205 | 39.0298 | 65.0116 | 28.7887 | 49.2789 | 49.0391 | 0.9735 | 0.9998 | 0.9873
Lena    | (352,272) | 26143          | 42.5805 | 38.0277 | 65.0359 | 28.7301 | 52.6813 | 32.6273 | 0.9634 | 0.9996 | 0.9788
F-16    | (0,0)     | 25188          | 66.8455 | 39.1492 | 65.2507 | 25.0714 | 62.4066 | 29.2657 | 0.9681 | 0.9996 | 0.9686
F-16    | (160,160) | 24515          | 45.8896 | 39.2714 | 65.1645 | 24.9415 | 58.8382 | 46.0943 | 0.9738 | 0.9996 | 0.9829
F-16    | (352,272) | 25207          | 45.7781 | 39.5457 | 64.9491 | 25.1664 | 57.0555 | 47.1568 | 0.9799 | 0.9998 | 0.9895
Peppers | (0,0)     | 23473          | 85.7012 | 44.1700 | 62.8522 | 37.6478 | Inf     | 53.9220 | 0.9857 | 0.9992 | 0.9858
Peppers | (160,160) | 25099          | 50.6652 | 40.1900 | 65.0808 | 46.8588 | 54.9316 | 47.7934 | 0.9807 | 0.9997 | 0.9850
Peppers | (352,272) | 24832          | 53.2093 | 44.3139 | 65.0948 | 50.4606 | Inf     | 52.3948 | 0.9852 | 0.9995 | 0.9892
Splash  | (0,0)     | 24828          | 65.2598 | 42.2465 | 65.2507 | 34.2956 | Inf     | 54.0074 | 0.9770 | 0.9996 | 0.9777
Splash  | (160,160) | 25246          | 44.0934 | 40.3683 | 64.5645 | 33.6492 | Inf     | 50.4374 | 0.9705 | 0.9995 | 0.9835
Splash  | (352,272) | 24322          | 44.0734 | 38.6686 | 64.6513 | 33.4906 | 47.0625 | 47.0983 | 0.9643 | 0.9995 | 0.9773
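The PSNR columns in the tables follow the standard 8-bit definition 10·log10(255²/MSE); a minimal sketch (the tiny arrays are made up; "Inf" in the tables corresponds to zero MSE, i.e. identical images):

```python
import math

def psnr(original, distorted, peak=255):
    """Peak signal-to-noise ratio (dB) between two equal-sized 8-bit images."""
    flat_o = [p for row in original for p in row]
    flat_d = [p for row in distorted for p in row]
    mse = sum((a - b) ** 2 for a, b in zip(flat_o, flat_d)) / len(flat_o)
    if mse == 0:
        return math.inf            # identical images report "Inf"
    return 10 * math.log10(peak ** 2 / mse)

a = [[100, 102], [98, 101]]        # made-up 2x2 "image"
b = [[101, 102], [97, 100]]        # slightly distorted copy
print(round(psnr(a, b), 2))
```

WPSNR and SSIM used in the tables weight the error perceptually; this sketch only covers the plain PSNR definition.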
TABLE II: PERFORMANCE EVALUATION: WATERMARKING LENA IMAGE WITH MARK-3 OF VARIOUS SIZES AT (0,0)

Watermark size | Payload (bits) | PSNR-1  | PSNR-2  | PSNR-3  | WPSNR-1 | WPSNR-2 | WPSNR-3 | SSIM-1 | SSIM-2 | SSIM-3
32 x 32        | 2831           | 81.0282 | 49.2997 | 66.1034 | 58.0810 | Inf     | 58.9150 | 0.9973 | 0.9998 | 0.9973
64 x 64        | 9319           | 81.0093 | 43.5129 | 61.2387 | 43.7084 | 55.6605 | 49.4605 | 0.9877 | 0.9994 | 0.9877
128 x 128      | 25372          | 55.3490 | 37.8900 | 59.8601 | 30.2225 | 49.4495 | 33.2196 | 0.9631 | 0.9995 | 0.9642
256 x 256      | 52700          | 40.2160 | 33.2593 | 61.1828 | 30.2871 | Inf     | 31.0454 | 0.9277 | 0.9992 | 0.9366
TABLE III : PERFORMANCE EVALUATION : BUYER AUTHENTICATION OF RETRIEVED IMAGE AT ROI STARTING FROM (0,0).
Image   | Authenticable | Digest Length | PSNR of retrieved image | PSNR of authentic image
Lena    | yes           | 110           | 59.8601                 | 59.9861
F-16    | yes           | 107           | 65.2507                 | 65.2663
Peppers | yes           | 117           | 62.8522                 | 63.4003
Splash  | yes           | 120           | 65.2507                 | 66.0321

VI.
CONCLUSION
A reversible visible watermarking technique is presented in this paper which can be applied to any visual media. The proposed method considers the HVS of the host image while watermarking, to achieve the desired features of visible watermarking, and at the same time the image quality is retained, as shown by our results. Further, being blind, the scheme is suitable for extraction of the original at any place and time. As a key-dependent method, it allows only authentic users with the correct key to retrieve the original. Buyer authentication, though a simple technique, when embraced in the process helps not only to prevent piracy but also to identify the buyer involved in piracy.
REFERENCES
[1] G. Braudaway, K. A. Magerlein, and F. Mintzer, "Protecting publicly available images with a visible image watermark," Proc. SPIE, International Conference on Electronic Imaging, vol. 2659, pp. 126-133, Feb. 12, 1996.
[2] M. S. Kankanhalli, Rajmohan, and K. R. Ramakrishnan, "Adaptive visible watermarking of images," in Proc. IEEE Int. Conf. Multimedia Comput. Syst., vol. 1, Florence, SC, Jul. 1999, pp. 568-573.
[3] A. M. Alattar, "A novel difference expansion transform for reversible data embedding," IEEE Trans. Information Forensics and Security, vol. 3, no. 3, pp. 456-465, Sep. 2008.
[4] C. De Vleeschouwer, J.-F. Delaigle, and B. Macq, "Circular interpretation of bijective transformations in lossless watermarking for media asset management," IEEE Trans. Multimedia, vol. 5, no. 1, pp. 97-105, Mar. 2003.
[5] L. Kamstra and H. J. A. M. Heijmans, "Reversible data embedding into images using wavelet techniques and sorting," IEEE Trans. Image Process., vol. 14, no. 12, pp. 2082-2090, Dec. 2005.
[6] S. C. Pei and Y. C. Zeng, "A novel image recovery algorithm for visible watermarked images," IEEE Trans. Inf. Forens. Security, vol. 1, no. 4, pp. 543-550, Dec. 2006.
[7] Y. Yang, X. Sun, H. Yang, and C.-T. Li, "Removable visible image watermarking algorithm in the discrete cosine transform domain," J. Electron. Imaging, vol. 17, no. 3, pp. 033008-1 to 033008-11, Jul.-Sep. 2008.
[8] Y. J. Hu and B. Jeon, "Reversible visible watermarking and lossless recovery of original images," IEEE Trans. Circuits Syst. Video Technol., vol. 16, no. 11, pp. 1423-1429, Nov. 2006.
[9] Y. Yang, X. Sun, H. Yang, C.-T. Li, and R. Xiao, "A contrast-sensitive reversible visible image watermarking technique," IEEE Trans. Circuits Syst. Video Technol., vol. 19, no. 5, pp. 656-677, May 2009.
[10] D. Coltuc and J. M. Chassery, "Very fast watermarking by reversible contrast mapping," IEEE Signal Process. Lett., vol. 14, no. 4, pp. 255-258, Apr. 2007.
[11] R. C. Reininger and J. D. Gibson, "Distribution of two-dimensional DCT coefficients for images," IEEE Trans. Communications, vol. COM-31, no. 6, Jun. 1983.
gkarthikme@gmail.com, 2 s_svk@teacher.com, 3 rupasharan@gmail.com
Abstract - Customer Relationship Management uses data from within and outside an organization to allow an understanding of its customers on an individual basis or on a group basis, such as by forming customer profiles. These profiles can be discovered using web usage mining techniques and can later be personalized. A web personalization system captures and models the behavior and profiles of users interacting with a web site. Web personalization is the process of customizing a web site to the needs of specific users, taking advantage of the knowledge acquired from the analysis of users' navigational behavior in correlation with the information collected, namely
I.
INTRODUCTION
V. WEB PERSONALIZATION
Web personalization is defined as any action that adapts the information or services provided by a web site to the needs of a particular user or set of users, taking advantage of the knowledge gained from the users' navigational behavior and individual interests, in combination with the content and structure of the web site.
The overall process of usage-based web personalization consists of five modules, which correspond to the steps of the process:
1) User Profiling: the process of gathering information specific to each visitor, either explicitly or implicitly.
2) Log analysis and Web usage mining: information stored in Web server logs is processed by applying data mining techniques in order to (a) extract statistical information and discover interesting usage patterns, (b) cluster the users into groups according to their navigational behavior, and (c) discover potential correlations between web pages and user groups.
3) Content Management: the process of classifying the content of a web site into semantic categories in order to make information retrieval and presentation easier for the users.
4) Web site publishing: used to present the content stored locally in a Web server, and/or information retrieved from other Web resources, in a uniform way to the end user.
5) Information acquisition and searching: since users are interested in information from various Web sources, searching and relevance-ranking techniques must be employed both in acquiring relevant information and in publishing the appropriate data to each group of users.
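The log-analysis step begins with this kind of segmentation of server logs into per-visitor sessions, as also done by client IP in the Results section; a sketch with invented log records, assuming a 30-minute inactivity gap closes a session (the timeout is a common heuristic, not taken from the text):

```python
from collections import defaultdict

def sessionize(records, gap=1800):
    """Group (ip, timestamp, url) hits into per-IP sessions, starting a
    new session after `gap` seconds of inactivity."""
    by_ip = defaultdict(list)
    for ip, ts, url in sorted(records, key=lambda r: (r[0], r[1])):
        by_ip[ip].append((ts, url))
    sessions = []
    for ip, hits in by_ip.items():
        current = [hits[0][1]]
        for (prev_ts, _), (ts, url) in zip(hits, hits[1:]):
            if ts - prev_ts > gap:       # long silence: close the session
                sessions.append((ip, current))
                current = []
            current.append(url)
        sessions.append((ip, current))
    return sessions

log = [("10.0.0.1", 0, "/home"), ("10.0.0.1", 60, "/products"),
       ("10.0.0.1", 5000, "/home"), ("10.0.0.2", 10, "/home")]
print(sessionize(log))
```

The resulting per-session URL lists are what clustering algorithms such as H-UNC take as input.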
VI. THE PERSONALIZATION SOLUTION
Web usage mining techniques, when combined with a multi-agent architecture, give a personalization solution for web sites. A multi-agent architecture consists of a set of autonomous agents interacting to fulfill the main goal of the system. Agents tap into the communication stream between a user's web browser and the web itself, observe the data flowing along the stream, and alter the data as it flows past. These agents can learn about the user, influence what the user sees by making up pages before passing them on, and provide entirely new functions to the user through the web browser. Agents are divided into modules that have well-defined tasks and that are further grouped into two working groups: a data mining module and a personalization module. The personalization agent uses the user-model knowledge along with the previously discovered sequential patterns and applies a set of personalization rules in order to deliver
VIII. RESULTS
H-UNC [10] was applied to a set of Web sessions preprocessed from several months of Web log data. The data was segmented into sessions based on the client IP address, and after filtering out irrelevant URLs, unique sessions were obtained. H-UNC partitioned the web user sessions of each period into several clusters, and each cluster was characterized by one of the profile vectors.
The Web Usage Mining multi-agent system for web personalization enhances the quality of the discovered models and hence optimizes the personalization process. The software agents of PWUM have been implemented using the multi-agent platform JADE [11]. The results obtained by combining multi-agent systems with WUM techniques were very encouraging.
IX. CONCLUSIONS
We presented our system and described the mechanisms necessary for Web usage mining and personalization tasks. The combination of more than one WUM technique enhances the quality of the discovered models and so optimizes the personalization process. The use of the multi-agent paradigm reduces the time complexity. We look forward to testing our approach on tourism web sites as part of national research projects.
REFERENCES
[1] H. Lieberman, "Letizia: An agent that assists web browsing," in Proc. Fourteenth International Joint Conference on Artificial Intelligence, pp. 924-929, 1995.
[2] T. W. Yan, M. Jacobsen, H. Garcia-Molina, and U. Dayal, "From user access patterns to dynamic hypertext linking," in Proc. Fifth International World Wide Web Conference, Paris, 1996.
[3] T. Joachims, D. Freitag, and T. Mitchell, "WebWatcher: A tour guide for the World Wide Web," in Proc. Fourteenth International Joint Conference on Artificial Intelligence, pp. 924-929, 1995.
[4] D. Mladenic, "Machine learning used by Personal WebWatcher," in Proc. Workshop on Machine Learning and Intelligent Agents (ACAI-99), Chania, Greece, July 1999.
[5] R. Cooley, B. Mobasher, and J. Srivastava, "Web mining: Information and pattern discovery on the World Wide Web," in Proc. Ninth IEEE Int'l Conf. Tools with AI (ICTAI '97), pp. 558-567, 1997.
[6] O. Nasraoui, R. Krishnapuram, H. Frigui, and A. Joshi, "Extracting Web user profiles using relational competitive fuzzy clustering," Int'l J. Artificial Intelligence Tools, vol. 9, no. 4, pp. 509-526, 2000.
[7] J. Srivastava, R. Cooley, M. Deshpande, and P.-N. Tan, "Web usage mining: Discovery and applications of usage patterns from Web data," SIGKDD Explorations, vol. 1, no. 2, pp. 1-12, Jan. 2000.
[8] D. Billsus and M. J. Pazzani, "A hybrid user model for news classification," in Proc. Seventh Int'l Conf. User Modeling (UM '99), J. Kay, ed., pp. 99-108, 1999.
[9] I. Crabtree and S. Soltysiak, "Identifying and tracking changing interests," Int'l J. Digital Libraries, vol. 2, pp. 38-53.
[10] O. Nasraoui and R. Krishnapuram, "A new evolutionary approach to Web usage and context sensitive associations mining," Int'l J. Computational Intelligence and Applications, special issue on Internet intelligent systems, vol. 20, no. 3, pp. 339-348, Sept. 2002.
[11] F. Bellifemine et al., JADE Basic Documentation: Programmer's Guide, 2004.
Assistant Professor, Department of Computer Science and Engineering, Thiagarajar College of Engineering (affiliated to Anna University, Tirunelveli), Madurai, Tamil Nadu, India.
1 nishanazer@yahoo.com, thangam@tce.edu
I. INTRODUCTION
Test automation of a GUI means mechanizing the testing process: testers use software to control execution of tests on new products and to compare the expected and the actual outcomes of the product application. Scheduling testing tasks on a daily basis and repeating the process without human supervision is one advantage of automated testing. With the mass production of gadgets and electronic GUI devices, the testing period is quite demanding, and electronics companies must ensure quality in order to deliver excellent products and maintain customer preference for their products.
In running automatic tests for a GUI application, the tester saves much time, especially in a huge production house where multi-tasking is needed. There are four strategies to test a GUI. 1. Window mapping assigns certain names to each element, so the test is more manageable and
for extending the basic tester classes for new components, and has a well-defined method for naming those new actions so they are automatically available at the scripting level. It also provides extensibility for converting strings from scripts into arbitrary classes, and for introducing new individual steps into scripts. Scripts can call directly into Java code (the script is actually just a thin veneer over method calls). Abbot provides both a script environment and a JUnit [13] fixture, both of which handle setup and teardown of the complete GUI environment.
[Figure: framework architecture showing test case generation from JAR files]
1) Code Coverage
Code coverage analysis is sometimes called test coverage
analysis. The two terms are synonymous. The academic world
more often uses the term "test coverage" while practitioners
more often use "code coverage". Likewise, a coverage
analyser is sometimes called a coverage monitor. Code coverage is not a panacea. Coverage generally follows an 80-20 rule: increasing coverage values becomes progressively harder, with new tests delivering less and less incrementally. If you follow defensive programming principles, where failure conditions are often checked at many levels in your software, some code can be very difficult to reach with practical levels of testing.
Coverage measurement is not a replacement for good code
review and good programming practices. In general you
should adopt a sensible coverage target and aim for even
coverage across all of the modules that make up your code.
Relying on a single overall coverage figure can hide large
gaps in coverage.
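The warning about a single overall figure can be made concrete: two modules can average to a respectable total while one of them is barely covered (the module sizes here are invented):

```python
def statement_coverage(executed, total):
    """Fraction of statements executed at least once."""
    return executed / total

# (executed statements, total statements) per module; made-up numbers.
modules = {"core": (950, 1000), "ui": (50, 500)}

overall = sum(e for e, _ in modules.values()) / sum(t for _, t in modules.values())
print(f"overall: {overall:.0%}")                      # looks acceptable
for name, (e, t) in modules.items():
    print(f"{name}: {statement_coverage(e, t):.0%}")  # exposes the gap in 'ui'
```

Here the overall figure is about 67%, yet the `ui` module sits at 10%, which is exactly the kind of gap a per-module breakdown reveals.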
2) Code Coverage with Clover
Clover [24] uses source code instrumentation, because
although it requires developers to perform an instrumented
build; source code instrumentation produces the most accurate
coverage measurement for the least runtime performance
overhead. As the code under test executes, code coverage
systems collect information about which statements have been
executed. This information is then used as the basis of reports.
In addition to these basic mechanisms, coverage approaches
vary on what forms of coverage information they collect.
There are many forms of coverage beyond basic statement
coverage including conditional coverage, method entry and
path coverage. Clover is designed to measure code coverage
in a way that fits seamlessly with your current development
environment and practices, whatever they may be. Clover's
IDE plug-in provides developers with a way to quickly
measure code coverage without having to leave the IDE.
Clover's Ant and Maven integrations allow coverage
measurement to be performed in Automated Build and
Continuous Integration systems, and reports generated to be
shared by the team.
The Clover Coverage Explorer:
The Coverage Explorer allows you to view and
control Clover's instrumentation of your Java projects, and
shows you the coverage statistics for each project based on
recent test runs or application runs. The main tree shows
coverage and metrics information for packages, files, class
and methods of any Clover-enabled project in your
workspace. Clover will auto-detect which classes are your
tests and which are your application classes - by using the
drop-down box above the tree you can then restrict the
coverage tree shown so that you only see coverage for
application classes, test classes or both. Summary metrics are
displayed alongside the tree for the selected project, package,
file, class or method in the tree.
The Clover Coverage Measurement:
Clover uses these measurements to produce a Total
Coverage Percentage for each class, file, and package and for
[Figure: automated test framework: JAR files from a repository feed a collection of test cases; tests execute automatically and the resulting state is verified against the expected state]
B. Performance Analysis
1) Code Coverage
Code coverage and event-based performance analysis are done in this module. The Clover coverage tool is used for collecting the metrics for frames, methods and packages; the Code Coverage tool is used for statements, branches and loops.
B. Coverage View
Manual analysis is done for events and event interactions, and the relevant data is collected. Coverage metrics are collected for the different units of the application. The different explorer views of the coverage report for all completed test cases are shown in Figs. 7, 8 and 9.
Fig 10: Coverage view of Code Cover tool for Calculator program
2) Event Coverage
Event coverage is determined by the total number of click events generated by the user. For example, Table 3 shows that a single event is generated for the Basic button, while 11 events are generated for calculating the area of a circle (pi*r*r).
3) Event Interaction Testing
It represents all possible sequences of events that can be executed on the GUI. The Calculator application contains collections of buttons such as Standard, Control and Scientific. In Figure 11, e1 represents clicking the File menu, e2 represents the menu-selection event, and e4 represents clicking the Basic button after the e2 event; e5 is similar.
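Counts like those in Tables 4(a) and 4(b), which tally unique events (length = 0) and adjacent event pairs (length = 1), can be derived from the raw event sequences of a suite; the sequences below are illustrative, not the paper's actual suite:

```python
from collections import Counter

def event_frequencies(suite, length=0):
    """Count adjacent event n-tuples across all test-case sequences:
    length=0 counts single events, length=1 counts adjacent pairs."""
    counts = Counter()
    for seq in suite:
        for i in range(len(seq) - length):
            counts[tuple(seq[i:i + length + 1])] += 1
    return counts

suite = [["e2", "e4", "e6"],
         ["e2", "e5", "e11", "e9"],
         ["e2", "e4", "e9", "e10", "e9"]]
print(event_frequencies(suite, 0))   # single-event frequencies, cf. Table 4(a)
print(event_frequencies(suite, 1))   # adjacent-pair frequencies, cf. Table 4(b)
```

Raising `length` to 2 would produce the triple counts used for Table 4(c).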
TABLE: 1 VIEW OF ALL TEST-CASE VALUES USING THE CLOVER COVERAGE TOOL

Test Case     | Stmt | Branches | Methods | Classes | LOC | NCLOC | Cmp | Cmp Density | Avg Method Cmp | Stmt/Methods | Methods/Classes | Total Coverage (%)
Area
Circle        | 280 | 132 | 21 | 1 | 625 | 483 | 138 | 0.49 | 6.57 | 13.33 | 21   | 54.3
Rectangle     | 293 | 132 | 24 | 2 | 667 | 513 | 141 | 0.48 | 5.88 | 12.21 | 12   | 53.6
Parallelogram | 291 | 132 | 24 | 2 | 666 | 511 | 141 | 0.48 | 5.88 | 12.12 | 12   | 53.6
Triangle      | 298 | 132 | 24 | 2 | 672 | 518 | 141 | 0.47 | 5.88 | 12.42 | 12   | 54.1
Trapezoid     | 297 | 132 | 24 | 2 | 672 | 517 | 141 | 0.47 | 5.88 | 12.38 | 12   | 54.1
Total         | 339 | 132 | 28 | 2 | 738 | 570 | 145 | 0.43 | 5.18 | 12.11 | 14   | 55
Surface
Rectangle     | 312 | 132 | 25 | 2 | 690 | 535 | 142 | 0.46 | 5.68 | 12.48 | 12.5 | 41.6
Prism         | 304 | 132 | 25 | 2 | 681 | 527 | 142 | 0.47 | 5.68 | 12.16 | 12.5 | 42.7
Cylinder      | 303 | 132 | 25 | 2 | 676 | 525 | 142 | 0.47 | 5.68 | 12.12 | 12.5 | 53.3
Cone          | 300 | 132 | 25 | 2 | 671 | 522 | 142 | 0.47 | 5.68 | 12    | 12.5 | 53.3
Sphere        | 294 | 132 | 24 | 2 | 668 | 513 | 141 | 0.48 | 5.88 | 12.25 | 12   | 51.7
Total         | 373 | 132 | 28 | 2 | 775 | 602 | 145 | 0.39 | 5.18 | 13.32 | 14   | 53.6
Volume
Rectangle     | 291 | 132 | 24 | 2 | 666 | 511 | 141 | 0.48 | 5.88 | 12.12 | 12   | 55
Prism         | 293 | 132 | 24 | 2 | 669 | 513 | 141 | 0.48 | 5.88 | 12.21 | 12   | 55
Cylinder      | 296 | 132 | 24 | 2 | 671 | 516 | 141 | 0.48 | 5.88 | 12.33 | 12   | 55
Cone          | 297 | 132 | 24 | 2 | 672 | 517 | 141 | 0.47 | 5.88 | 12.38 | 12   | 55
Sphere        | 299 | 132 | 24 | 2 | 677 | 519 | 141 | 0.47 | 5.88 | 12.46 | 12   | 55.5
Pyramid       | 295 | 132 | 24 | 2 | 671 | 515 | 141 | 0.48 | 5.88 | 12.29 | 12   | 55
Total         | 351 | 132 | 29 |   | 762 | 586 | 146 | 0.42 | 5.03 | 12.1  | 14.5 | 55
TABLE: 2 VIEW OF ALL TEST-CASE VALUES USING THE CODE COVERAGE TOOL

Test Case     | Stmt | Branch | Loop | Strict Con
Basic         | 59.4 |  2.3   | 10.1 |  6.0
Minus         | 63.9 | 10.5   |  8.7 |  7.7
Add           | 63.9 |  9.3   |  8.7 |  7.7
Mul           | 63.9 | 11.6   |  8.7 |  7.7
Div           | 63.9 | 12.8   |  8.7 |  7.7
Mod           | 67.1 | 15.1   | 10.1 |  9.4
Hex           | 61.6 | 11.6   | 14.5 | 19.7
Dec           | 60.7 |  9.3   | 13.0 |  7.7
Oct           | 61.6 | 11.6   | 14.5 | 15.4
Bin           | 61.6 | 14.0   | 13.0 |  9.4
Area
Circle        | 68.9 | 36.0   | 10.1 | 14.5
Rectangle     | 63.9 | 12.8   |  8.7 | 11.1
Parallelogram | 63.9 | 12.8   |  8.7 | 10.3
Triangle      | 65.3 | 17.4   |  8.7 | 13.7
Trapezoid     | 65.8 | 18.6   |  8.7 | 15.4
Surface
Rectangle     | 64.4 | 14.0   |  8.7 | 13.7
Prism         | 65.3 | 16.3   |  8.7 | 13.7
Cylinder      | 68.9 | 34.9   | 10.1 | 16.2
Cone          | 68.9 | 34.9   | 10.1 | 15.4
Sphere        | 67.6 | 30.2   | 10.1 |  9.4
Volume
Rectangle     | 63.9 | 11.6   |  8.7 |  8.5
Prism         | 64.8 | 15.1   |  8.7 | 10.3
Cylinder      | 68.5 | 34.9   | 10.1 | 14.5
Cone          | 68.5 | 34.9   | 10.1 | 12.8
Sphere        | 68.5 | 33.7   | 10.1 | 12.8
Pyramid       | 64.4 | 15.1   |  8.7 | 11.1
Total         | 97.7 | 93.0   | 34.8 | 94.9
TABLE: 3 EVENTS AND EXECUTION TIMES FOR EACH TEST PLAN

Test Plan     | Events | Execution (sec) | Execution Delay (1000 sec)
View
Basic         | 1  | 1.684 | 2.699
Scientific    | 1  | 1.763 | 2.714
Hex           | 1  | 1.342 | 2.371
Dec           | 1  | 1.295 | 2.309
Octal         | 1  | 2.262 | 2.356
Binary        | 1  | 1.342 | 2.324
Area
Circle        | 11 | 3.588 | 4.555
Rectangle     | 7  | 2.434 | 3.463
Parallelogram | 6  | 2.262 | 3.26
Triangle      | 12 | 3.447 | 4.446
Trapezoid     | 12 | 3.401 | 4.415
Volume
Rectangle     | 6  | 2.278 | 3.291
Prism         | 8  | 2.036 | 3.65
Cylinder      | 11 | 3.525 | 4.633
Pyramid       | 10 | 2.995 | 4.009
Cones         | 11 | 3.588 | 4.556
Sphere        | 14 | 4.119 | 5.085
Surface
Rectangle     | 26 | 6.006 | 6.989
Prism         | 18 | 4.524 | 5.506
Cylinder      | 17 | 4.68  | 5.647
Cones         | 13 | 3.916 | 4.898
Sphere        | 9  | 3.183 | 4.165
Event sequences occurring in the example suites (EVENT PAIR, EVENT, STATEMENT, METHOD and BRANCH):
e2,e5; e2,e4,e6; e2,e4,e8; e2,e4,e7; e2,e4,e9,e10,e9; e2,e4,e9; e2,e5,e11,e9,e10,e9;
e2,e5,e11,e9; e9,e10,e9; e6,e9,e10,e9; e8,e9; e7,e9,e10,e9; e11,e9,e10,e9; e11,e9,e10;
e6,e12; e2,e5,e11,e10; e9,e10,e2,e5,e11,e9; e6,e9; e7,e9; e8,e9; e2,e5,e11,e9,e10,e9;
e2,e4,e9,e10,e9; e9,e10,e2,e5,e11,e9; e2,e4,e6; e2,e4,e8; e2,e4,e7; e2,e5,e11,e10;
e9,e12; e8,e9; e9,e10,e2,e5,e11,e9; e2,e5; e6,e9; e7,e9; e8,e9; e2,e5,e11,e9,e10,e9;
e2,e5,e11,e9; e9,e10,e9; e6,e9; e7,e9; e8,e9; e2,e5,e11,e9; e2,e4,e9,e10,e9

Illustrative Tests: e2, e4, e5, e6, e7, e8, e9, e10, e11, e12

TABLE: 4(A) FREQUENCY OF UNIQUE EVENTS OCCURRING IN THE TEST SUITE (LENGTH=0)
Test Suite | E2 | E4 | E5 | E6 | E7 | E8 | E9 | E10 | E11 | E12
ORIGINAL   | 10 | 18 |    |    |    |    |    |     |     |

TABLE: 4(B) FREQUENCY OF ALL EVENT PAIRS OCCURRING IN THE TEST SUITE (LENGTH=1)
Test Suite | E2,E4 | E2,E5 | E4,E6 | E4,E7 | E4,E8 | E4,E9 | E5,E11 | E6,E9 | E6,E10 | E6,E12 | E7,E9 | E8,E9
ORIGINAL   |

TABLE: 4(C) FREQUENCY OF ALL EVENT PAIRS OCCURRING IN THE TEST SUITE (LENGTH=2)
Test Suite | E2,E4 | E2,E5 | E4,E6 | E4,E7 | E4,E8 | E4,E9 | E5,E11 | E6,E9 | E6,E10 | E6,E12 | E7,E9 | E8,E9
ORIGINAL   |
TABLE: 5 CONTESSI (N) VALUES FOR SUITE COMPARED TO ORIGINAL FOR ALL BUTTONS IN CALCULATOR APPLICATION IN GUI EXAMPLE SUITES
n | EVENT PAIR | EVENT   | STMT     | METHOD  | BRANCH   | ILLUS. SUITE
0 | 0.97308    | 0.78513 | 0.95434  | 0.94405 | 0.94571  | 0.7816
1 | 0.9274     | 0.60669 | 0.788116 | 0.82365 | 0.685188 | 0.0000
2 | 0.9106     | 0.39509 | 0.79697  | 0.79018 | 0.79018  | 0.0000
3 | 0.9428     | 0.73786 | 0.82495  | 0.7071  | 0.68041  | 0.0000
4 | 0.9999     | 0.0000  | 0.8660   | 0.0000  | 0.5000   | 0.0000
5 | 0.9999     | 0.0000  | 0.9999   | 0.0000  | 0.0000   | 0.0000
REFERENCES
[1] A. M. Memon, "GUI testing: Pitfalls and process," IEEE Computer, vol. 35, no. 8, pp. 87-88, 2002.
[2] L. White and H. Almezen, "Generating test cases for GUI responsibilities using complete interaction sequences," in Proc. 11th International Symposium on Software Reliability Engineering, 2000, pp. 110-121.
[3] L. White, H. Almezen, and N. Alzeidi, "User-based testing of GUI sequences and their interactions," in Proc. 12th International Symposium on Software Reliability Engineering, 2001.
chitra@tsm.ac.in
sherin@tsm.ac.in
I. INTRODUCTION
A relational database management system (RDBMS) is a
database management system (DBMS) that is based on the
relational model as introduced by E. F. Codd. It supports a
tabular structure for the data, with enforced relationships
between the tables. Most popular commercial and open source
databases currently in use are based on the relational model.
The problem with RDBMSs is not that they do not scale; it's that they are incredibly hard to scale. The most popular RDBMSs are Microsoft SQL Server, DB2, Oracle, MySQL, etc.
Many Web applications simply do not need to represent data as a set of related tables; that is, not every application needs a traditional relational database management system (RDBMS) that uses SQL to perform operations on data. Rather, data can be stored in the form of objects, graphs, or documents and retrieved using a key. For example, a user profile can be represented as an object graph (such as a POJO) with a single key being the user ID. As another example, documents or media files can be stored under a single key, with indexing of metadata handled by a separate search engine.
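The key-value style of storage just described can be illustrated with a minimal sketch. This is plain Python, not any particular product's API, and the key name and profile fields are invented for illustration:

```python
# Minimal in-memory key-value store sketch: profiles are whole
# objects fetched by a single key, with no tables, joins or SQL.
store = {}

def put(key, value):
    store[key] = value

def get(key):
    return store.get(key)

# A user profile stored as one object graph under the user id.
put("user:1001", {
    "name": "Alice",                       # illustrative data
    "friends": ["user:1002", "user:1003"],
    "settings": {"theme": "dark"},
})

profile = get("user:1001")
print(profile["settings"]["theme"])  # dark
```

Because the whole profile lives under one key, a lookup is a single hash access; there is no SQL to parse and no relationship constraints to maintain, which is precisely the trade-off the text describes.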
These forms of data storage are not relational and lack SQL,
but they may be faster than RDBMS because they do not have
to maintain indexes, relationships, constraints and parse SQL.
Such technology has existed since the 1960s (consider, for example, IBM's VSAM file system).
Relational databases are able to handle millions of products
and service very large sites. However, it is difficult to create
redundancy and parallelism with relational databases, so they
become a single point of failure. In particular, replication is
not trivial. To understand why, consider the problem of
having two database servers that need to have identical data.
Having both servers for reading and writing data makes it
difficult to synchronize changes. Having one master server
and another slave is bad too, because the master has to take all
the heat when users are writing information. So as a relational
database grows, it becomes a bottleneck and the point of
failure for the entire system. As mega e-commerce sites grew
over the past decade they became aware of this issue - adding
more web servers does not help because it is the database that
ends up being a problem.
VII.
- open-source
- schema-free
- replication support
- easy API
- eventual consistency
TABLE II: NONFUNCTIONAL REQUIREMENTS ANALYSIS OF NOSQL DATABASES
Columns: Bigtable | Cassandra | CouchDB | Dynamo | MongoDB

Data model: Key-Value store DB | Column Oriented DB | JSON Document Oriented DB | Key-Value store DB | BSON Document Oriented DB
Scalability: Highly Scalable | Easily scalable and readily extensible | Incremental | Scalable | Highly Scalable
Availability: Highly Available | High availability is achieved using replication | Highly Available | Highly Available | High write-availability
Performance: High performance | Low | Loading speeds are better than retrieval speeds | Performance at massive scale is one of the biggest challenges | Excellent solution for short reads
Reliability: Provides reliability at a massive scale | At massive scale is a very big challenge | Reliable and efficient system | Reliability at massive scale is one of the biggest challenges | Low

X. CONCLUSION
Next Generation Databases mostly address some of the following points: being non-relational, distributed, open-source and horizontally scalable.
[46] F. Chang, J. Dean, S. Ghemawat, W. C. Hsieh, D. A. Wallach, M. Burrows, T. Chandra, A. Fikes, and R. E. Gruber, "Bigtable: A Distributed Storage System for Structured Data", Google, 13 November 2009.
[47] J. Kellerman, "HBase: structured storage of sparse data for Hadoop", 13 November 2009.
[48] I. Eure, "Looking to the future with Cassandra", Digg Technology Blog, September 2009.
[49] Z. Wei, G. Pierre and C.-H. Chi, "CloudTPS: Scalable Transactions for Web Applications in the Cloud", Technical report IR-CS-53, VU University Amsterdam, February 2010.
[50] http://www.allthingsdistributed.com/2007/12/eventually_consistent
[51] http://s3.amzonaws.com/AllThings/Distributed/sosp/amazondynamo
[52] http://labs.google.com/papers/bigtable.html
[53] http://www.cs.brown.edu/~ugur/osfa.pdf
[54] http://www.readwriteweb.com/archives/amazon_dynamo.php
[55] http://nosql-database.org/
[56] http://couchdb.apache.org/
1. Introduction
Error control systems concern themselves with practical ways
of achieving very low bit error rates after a transmission over
a noisy band-limited channel. Several error-correcting techniques are deployed, and their performance can be measured by comparing them with each other and with the theoretical best performance given by Shannon's channel capacity theory.
Following the invention of Turbo Codes in 1993, a number of
turbo coded systems exploiting the powerful error correction
capability have been developed. Many factors affect the
performance of Turbo codes. They include the interleaver size
and structure, the generator polynomial and constraint length,
the decoding algorithm, the number of iterations, and so on.
The presence of multipath with different time delays causes
inter-symbol interference (ISI) and degrades the performance
of the transmission system. To improve performance of the
system, complex adaptive equalization techniques are
required. Recently, OFDM systems have been attracting much attention because of their robustness against frequency-selective fading. Some examples of existing systems where OFDM is used are digital audio and video broadcasting, asymmetric digital subscriber line (ADSL) modems, and wireless local-area network systems such as IEEE 802.11 and HiperLAN/2. In a mobile environment, since the radio channel is frequency-selective and time-varying, channel estimation is needed at the receiver before the demodulation process.
Orthogonal Frequency Division Multiplexing (OFDM) can be traced back to the 1950s, but it has become very popular these days, allowing high speeds in wireless communications. OFDM is becoming the modulation technique of choice for wireless communications.
In this paper, we propose a scheme that can greatly improve the performance of turbo-coded OFDM systems without
2. OFDM System

2.1 OFDM Fundamentals
where T is the symbol length, i.e. the time between two consecutive OFDM symbols. After the FFT block, assuming there is no ISI, the demodulated signal is given by the following equation:

Y(k) = H(k) X(k) + I(k) + W(k)

where H(k) is FFT[h(n)], I(k) is the ICI and W(k) is FFT[w(n)].
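As a sanity check of the FFT demodulation relation above, a small numeric sketch (illustrative Python, not part of the paper) shows that with an ideal channel, i.e. H(k) = 1, no ICI and no noise, the FFT at the receiver recovers the transmitted subcarrier symbols exactly:

```python
import cmath

def ifft(X):
    """Inverse DFT (direct O(N^2) form, fine for a small sketch)."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def fft(x):
    """Forward DFT (direct O(N^2) form)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

# QPSK symbols on 8 subcarriers -> IFFT to time domain (transmit),
# FFT back at the receiver. With an ideal channel, Y(k) = X(k).
X = [1+1j, 1-1j, -1+1j, -1-1j, 1+1j, -1-1j, 1-1j, -1+1j]
Y = fft(ifft(X))
print(all(abs(a - b) < 1e-9 for a, b in zip(X, Y)))  # True
```

With a real channel the received samples are first convolved with h(n), which is exactly why the H(k), I(k) and W(k) terms appear in the equation above.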
2.2 OFDM Modeling
Mathematically, the OFDM signal is expressed as a sum of the prototype pulses shifted in time and frequency and multiplied by the data symbols. In continuous-time notation, the kth OFDM symbol is written as

s_k(t) = sum_{n=0}^{N-1} a_{n,k} g_n(t - kT)    (6)

with the frequency-shifted prototype pulse

g_n(t) = g(t) exp(j 2 pi n t / T)    (7)
Fig. 3.

3.2 MAP
A MAP decoder chooses the most likely path so as to maximize the a posteriori probabilities; mathematically, it selects the estimate that maximizes P(u | y), conditioned on the received sequence y.

Design Criteria
SIMULATION RESULTS

Eb/N0 (dB): 0       0.2     0.4     0.6     0.8     1       1.2       1.4
BER:        0.1316  0.1519  0.1098  0.1081  0.0926  0.0719  0.0663    0.0503

Eb/N0 (dB): 0       0.2     0.4     0.6     0.8     1       1.2       1.4
BER:        0.1616  0.1298  0.0602  0.0321  0.0106  0.0011  1.527e-4  2.6551e-4

Eb/N0 (dB): 0       0.2     0.4     0.6     0.8     1       1.2       1.4
BER:        0.1225  0.0942  0.0729  0.0208  0.0034  0.0014  1.713e-4  2.7042e-4
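The trend in these results, BER falling as Eb/N0 rises and rising with modulation order, matches the standard approximation for Gray-coded M-PSK over AWGN. The following sketch is purely illustrative and is not the paper's turbo-coded simulation:

```python
import math

def psk_ber(M: int, ebn0_db: float) -> float:
    """Approximate bit error rate of Gray-coded M-PSK over AWGN:
    Pb ~ erfc(sqrt(k * Eb/N0) * sin(pi / M)) / k, with k = log2(M).
    (For M = 4 this reduces to the exact QPSK expression.)"""
    k = math.log2(M)
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return math.erfc(math.sqrt(k * ebn0) * math.sin(math.pi / M)) / k

# Lower-order PSK gives a lower BER at the same Eb/N0.
for ebn0_db in (0.0, 4.0, 8.0):
    print(ebn0_db, [round(psk_ber(M, ebn0_db), 5) for M in (4, 8, 16)])
```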
[Simulation settings: SNR = 10 dB / 15 dB / 25 dB; 2048; 800; 512 samples; half zero signal, half a cyclic extension of the symbol.]

5. CONCLUSION
The results show that lower-order modulation schemes have a lower bit error rate than higher-order modulation schemes at a given SNR. [Figure legend: red = QPSK, green = 8-PSK, blue = 16-PSK.]

6. References
J. Manibharathi
UG Student, Dept. of ECE
Kalasalingam University, Krishnan Koil
mani.malarathi@gmail.com
Abstract:
Bada is a complete smartphone platform. Bada runs high-performance native apps and services. With the use of bada in a mobile phone, the compatibility, accessibility and interface of the phone can be considerably improved. The use of bada allows us to implement no-control technology and speech and gesture recognition facilities in mobile devices. Further, the development of bada allows us to install hand-tracking detection in our mobile phones. In this paper, the feasibility of using bada in mobile devices is discussed in detail.
1.0 Introduction
There were nearly 2000 million cell phones sold each year, compared with fewer than 200 million PCs, and the gap was widening. Increasingly, phones were the way people wanted to connect with each other and with everything else. Phones were going to replace PCs as the main gateway to the Internet, and they were going to do it soon. Because cell phones ran on different software, had less memory, and operated under the constraints of pay-per-byte wireless networks, the mobile Web was a stripped-down, mimeographed version of the real thing. To avoid this, bada had the solution. Bada is a free, open source mobile platform that any coder can write for and any handset maker can install. It would be a global, open operating system for the wireless future.
Bada (the Korean word for "ocean") is a new smartphone platform that allows developers to create feature-rich applications that elevate the user experience in mobile spaces. The bada platform is kernel-configurable: it can run either on the Linux kernel or on real-time OS kernels. It was first developed by Samsung.
Dr. S. Durai Raj
Professor & HOD, Dept. of ECE
Kalasalingam University, Krishnan Koil
rajsdr@rediffmail.com
2. Bada Runtime
Bada can run on top of multiple runtimes, including the Java runtime, the Dalvik virtual machine, and other computing platforms.
2.1 Dalvik Virtual Machine
Every bada application runs in its own process, with its own instance of the Dalvik virtual machine. Dalvik has been written so that a device can run multiple VMs efficiently. The Dalvik VM executes files in the Dalvik Executable (.dex) format, which is optimized for a minimal memory footprint. The VM is register-based, and runs classes compiled by a Java language compiler that have been transformed into the .dex format by the included "dx" tool.
4.0 Bada Architecture
5.2 Osp::Base
Namespace for basic types: Object, String, DateTime, ByteBuffer, UID, and other base types. It has wrapper classes for C++ primitive types, such as Integer, Short, Double, and Long, and runtime facilities in terms of timers, threads, and synchronization with Mutex, Semaphore, and Monitor. The other basic data types are collections and their utilities; the SDK features ArrayList, HashMap, Stack, and other collection types, both object-based and template-based. Math, StringUtil, StringTokenizer, and Uri are provided with standard library support, along with the C++ STL (Standard Template Library).
5.3 Osp::Io
Namespace for input/output data handling. It contains File and Directory. The Database class supports transactions such as begin, commit and rollback, and supports both SQL queries and enumeration with DB statements. It also has Registry data I/O, a system-provided data store in which applications store and retrieve settings.
5.5 Osp::System
Namespace for controlling devices and getting system information. It has separate device control, including:
1. Registering an alarm.
2. Screen power management: keeping the LCD on, with or without dimming.
3. Vibrator activation with level, on/off period, and count.
4. System information: getting uptime and the current time in UTC, Standard, and Wall time modes.
The SDK features in bada relate to all mobile phone applications, such as networking in the form of Osp::Net and multimedia in Osp::Media. It has separate security systems, different from those of all other mobile phones.
5.6 Osp::Security
5.1 Bada SDK features
The SDK features are represented in the Osp base model. There is a separate Osp namespace for each and every application area: basic types, the execution environment, and utilities.
7.0 InBuilt Applications
a. Wii Remote
b. Vector Quantization
on a two-dimensional circle, it is possible to put them on a three-dimensional sphere by intersecting two circles orthogonal to each other. Consequently this leads to k = 8 + 6 = 14 centres. The radius of each circle/sphere dynamically adapts itself to the incoming signal data.
9.1 Sensing
The Wii Remote has the ability to sense acceleration along three axes through the use of an ADXL330 accelerometer. The Wii Remote also features PixArt optical sensors, allowing it to determine where it is pointing. Unlike a light gun that senses light from a screen, the Wii Remote senses light from the sensor, which is built in as a chip in the bada phones, allowing consistent usage regardless of screen size. The infrared LEDs in the sensor give an accurate pointer. The sensor allows the Wii Remote to be used as an accurate pointing device up to 30 meters away from the mobile. The Wii Remote's image sensor is used to locate the sensor's points of light in the Wii Remote's field of view. The inbuilt Wii software in the mobile device calculates the distance between the Wii Remote and the sensor using triangulation. Games can be programmed to sense whether the image sensor is covered, which is demonstrated in a microgame of Smooth Moves: if the player does not uncover the sensor, the champagne bottle that the remote represents will not open.
10.0 Summary
The implementation of bada in mobile phones leads to a lot of facilities in each and every application. The introduction of the Wii in bada ensures gesture recognition, and its sensing is used as the basic technique for no-control technology. Bada has the special feature of Bluetooth in the standardized Java API, which no other mobile phone platform has. This analysis leads to the highest cost reduction in Wii games if it is implemented.
11.0 References
1. "Wii: The Total Story", IGN. http://wii.ign.com/launchguide/hardware1.html. Retrieved 2006-11-20.
2. "Consolidated Financial Highlights" (PDF), Nintendo, 2009-07-30, p. 9. http://www.nintendo.co.jp/ir/pdf/2009/090730e.pdf#page=9. Retrieved 2009-10-29.
3. Electronic Arts (2008-01-31), "Supplemental Segment Information" (PDF), Thomson Financial, p. 4. http://media.corporateir.net/media_files/IROL/88/88189/Q3FY08SupSeg.pdf#page=4. Retrieved 2008-02-09.
4. "Wii Remote Colors", news.com. http://news.cnet.com/i/ne/p/2005/0916nintendopic4_500x375.jpg. Retrieved 2006-07-15.
5. "What is bada", http://badawave.com. Retrieved 7 April 2010.
Abstract
The technique of orthogonal frequency division multiplexing
(OFDM) is famous for its robustness against frequency-selective
fading channel. In general, the fast Fourier transform (FFT)
and inverse FFT (IFFT) operations are used as the
modulation/demodulation kernel in the OFDM systems, and the
sizes of FFT/IFFT operations are varied in different
applications of OFDM systems. The modified mixed-radix 4-2 butterfly FFT, with bit reversal for the output sequence derived by the index decomposition technique, is our suggested VLSI system architecture for the prototype FFT/IFFT processor for OFDM systems. In this paper, several FFT algorithms, such as radix-2, radix-4 and split-radix, were analyzed and designed using VHDL. The results show that the proposed processor architecture can greatly reduce the area cost while maintaining a high processing speed, which may be attractive for many real-time systems.
Key words: OFDM, FFT/IFFT, VLSI, VHDL, Mixed Radix with
bit reversal.
I. Introduction
Fig 2: The basic butterfly for mixed-radix 4/2 DIF FFT algorithm.
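The mixed-radix 4/2 decomposition behind this butterfly can be sketched in software. The following Python model is an illustration of the algorithmic structure only, not the paper's VHDL design: it applies radix-4 splits while the length is divisible by 4 and one radix-2 split for a leftover factor of 2:

```python
import cmath

def fft_mixed_radix_4_2(x):
    """Recursive mixed-radix 4/2 FFT: radix-4 splits while the length
    is divisible by 4, radix-2 for a leftover factor of 2.
    The input length must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    if n % 4 == 0:
        # Radix-4 decimation in time: four interleaved sub-sequences.
        subs = [fft_mixed_radix_4_2(x[r::4]) for r in range(4)]
        out = [0j] * n
        q = n // 4
        for k in range(q):
            # Twiddle each sub-transform, then apply the radix-4 butterfly.
            t = [cmath.exp(-2j * cmath.pi * r * k / n) * subs[r][k] for r in range(4)]
            out[k]         = t[0] + t[1] + t[2] + t[3]
            out[k + q]     = t[0] - 1j * t[1] - t[2] + 1j * t[3]
            out[k + 2 * q] = t[0] - t[1] + t[2] - t[3]
            out[k + 3 * q] = t[0] + 1j * t[1] - t[2] - 1j * t[3]
        return out
    # Radix-2 decimation in time for the remaining factor of 2.
    even = fft_mixed_radix_4_2(x[0::2])
    odd = fft_mixed_radix_4_2(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out
```

For N = 2^m with odd m, the recursion performs radix-4 stages plus exactly one radix-2 stage, which is the case the mixed 4/2 butterfly exists to handle.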
III. RESULT

IV. Conclusion
kuttygec@gmail.com
Introduction
The major sources of energy wastage in a typical sensor network are consumptions occurring in three domains, viz. sensing, data processing and communication [1]. Of these, communication energy wastage is a significant contributor; it occurs due to idle listening (the dominant factor in most applications), collision of packets, overhearing, and control packet overhead. The design of energy-efficient MAC protocols mainly targets a reduction or elimination of this communication energy waste, in particular idle listening. The central approach to reducing idle listening is duty cycling.
Suitable wake-up mechanisms can save a significant amount of energy in Wireless Body Area Networks (WBANs) and increase the network lifetime. In these schemes, the device wakes up only when necessary; otherwise it sleeps, thereby saving energy. Coordinated and controlled data transmission can therefore reduce energy consumption.
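The effect of duty cycling on the energy budget can be sketched with a back-of-the-envelope calculation. All power figures below are assumptions chosen for illustration, not measurements from this paper:

```python
# Illustrative duty-cycling energy estimate (numbers are assumptions):
# a radio that idles at 15 mW versus one that sleeps at 0.05 mW and
# is awake (20 mW) for only 2% of each cycle.
IDLE_MW, SLEEP_MW, ACTIVE_MW = 15.0, 0.05, 20.0

def avg_power_mw(duty_cycle: float) -> float:
    """Average radio power for a node awake `duty_cycle` of the time."""
    return duty_cycle * ACTIVE_MW + (1 - duty_cycle) * SLEEP_MW

always_on = IDLE_MW               # no duty cycling: idle listening dominates
duty_cycled = avg_power_mw(0.02)  # 2% duty cycle, ~0.449 mW
print(duty_cycled)
print(always_on / duty_cycled)    # lifetime improvement factor, roughly 33x
```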
Data traffic in a WBAN is classified into [2]:
[Figure: superframe structure: Beacon, CAP, DTS, ETS (CAP), Batch Ack (optional), and an Inactive Period or CAP Extension (optional).]
[Figure: frame and wake-up structure: a beacon period, request period, topology management period and data portion (CAP, CFP) followed by an inactive period; a preamble from the packet source precedes DATA and the TCI; nodes #1 through #N alternate between receive mode and transmit mode.]
[Figure: rate (%) versus number of nodes per class, comparing prioritized backoff with slotted Aloha against plain slotted Aloha.]
PERFORMANCE EVALUATION
A. Beacon-free MAC mode:
The simulation is carried out for three setups, viz. Mode 0, where only one burst per device is emitted; Mode 1, where a burst is re-emitted by a device, under the LDC limit, until a neighbor relays; and Mode 2, which is the same as Mode 1 except that former relays with emission credits left can relay again.
References
[1] J. S. Yoon, Gahng S. Ahn, Myung J. Lee, Seong-Soon Joo, "Versatile MAC for BAN", proposal responding to the TG6 Call for Proposals (15-08-0811-03-0006-tg6-call-for-proposals).
[2] A. El-Hoiydi and J. D. Decotignie, "WiseMAC: An Ultra Low-Power MAC Protocol for Multi-hop Wireless Sensor Networks", Proc. First International Workshop on Algorithmic Aspects of Wireless Sensor Networks (ALGOSENSORS), July 2004.
[3] E. A. Lin, J. M. Rabaey, and A. Wolisz, "Power-Efficient Rendezvous Schemes for Dense Wireless Sensor Networks" (TICER/RICER), Proc. ICC, June 2004.
[4] The French ANR "BANET" project.
[5] M. J. Miller and N. H. Vaidya, "A MAC Protocol to Reduce Sensor Network Energy Consumption Using a Wakeup Radio", IEEE Transactions on Mobile Computing, 4(3), May/June 2005.
[6] 15-08-0644-09-0006-tg6-technical-requirements-document.doc
[7] N. F. Timmons and W. G. Scanlon, "Analysis of the performance of IEEE 802.15.4 for medical sensor body area networking"
1. INTRODUCTION
(Guided by: Hemalatha, Lecturer, Department of Computer Science, Dr N.G.P Arts and Science College, Coimbatore)

In the clinical context, medical image processing is generally equated with radiology or "clinical imaging", and the medical practitioner responsible for interpreting (and sometimes acquiring) the images is a radiologist. Diagnostic radiography designates the technical aspects of medical imaging, and in particular the acquisition of medical images; the radiographer is usually responsible for acquiring medical images of diagnostic quality.
Medical imaging is often perceived to designate the set of techniques that non-invasively produce images of the internal aspect of the body. In this restricted sense, medical imaging can be seen as the solution of mathematical inverse problems: the cause (the properties of living tissue) is inferred from the effect (the observed signal). In the case of ultrasonography, the probe consists of ultrasonic pressure waves, and echoes inside the tissue show the internal structure. In the case of projection radiography, the probe is X-ray radiation, which is absorbed at different rates in different tissue types such as bone, muscle and fat.
Medical image diagnosis is considered one of the fields taking advantage of high technology and modern instrumentation. Ultrasound, CT, MRI, PET, etc., have played an important role in diagnosis, research and treatment. Image registration is the process of overlaying two or more images of the same scene taken at different times, from different viewpoints, and/or by different sensors; the differences between the images are introduced by the different imaging conditions.

2. METHOD
Assume that we have two images of the same object: a structural image and a functional image. The process of registration is composed of the following steps:
- Transformation model.
- Image nature.
- Other classifications.

3. Conclusions and future work
pmmshastry@yahoo.com
vkoppa@yahoo.com
I. INTRODUCTION
Since recent trends in HPC, and even stand-alone systems, employ a very large number of processors to execute large application programs in a distributed system environment, it is necessary to provide fault tolerance to such applications.
3. R - restart time required to resume the execution of an application from the most recent checkpoint.
4. F - the number of failures during the execution of the application.
5. TS - time required to save each checkpoint on to a local disk.
6. Ni - the number of checkpoints taken in the ith cycle.
7. tc - checkpoint interval without restart cost.
8. TC - optimum checkpoint interval size, used as the fixed checkpoint interval.
9. TCi - the ith checkpoint interval, which is incremental.
10. CCi - the cost of checkpoints in the ith cycle.
11. CC - total cost of checkpoints.
12. P - the number of processes / processors used for parallelism.
13. lambda - the number of failures per hour.
14. TF - time to failure.
15. Ti - the time at which the ith failure occurs.
IV. IMPLEMENTATION OF COORDINATED CHECKPOINTING PROTOCOL
A. Protocol

TF = 1 / (P lambda)    (1)
tC = sqrt(2 TS TF)    (2)

But the above equation does not consider the restart time of the application after a failure. Hence, considering the restart time (R) as well, the above equation can be modified by adding the restart time (R) multiplied by the number of failures (F) to obtain the optimal checkpoint interval TC:

TC = sqrt(2 TS TF + R F)    (3)

The ith checkpoint Ci is

Ci = i TC + (i - 1) TS    (4)

For incremental intervals,

Ci = sum_{k=1}^{i} TCk + (i - 1) TS    (5)

TCi = i TC    (6)

In general, TCN = TC.

[Figure: fixed and incremental checkpoint timelines, with checkpoints C1, C2, ..., CNi separated by checkpoint intervals (TC or TC1, TC2, ...) and checkpoint save times TS.]

Ni = Ti / (TC + TS)    (7)

CCi = Ni TS    (8)
TLi = CCi + Rbi + R    (10)

TL = sum_{i=1}^{F} TLi    (11)

[Figure: application failure and restart timeline, with checkpoints C1, C2, ..., CNi at intervals TC and save times TS.]

Ni = n when

( sum_{k=1}^{n} TCk + n TS ) < Ti < ( sum_{k=1}^{n+1} TCk + (n + 1) TS ),    (13)

for n = 1, 2, 3, ... and i = 1, 2, 3, ..., F, with N1 = 0.

Ni = n    (14)

CCi = Ni TS    (15)

Rbi = Ti - ( Ni TS + sum_{k=1}^{Ni} TCk )    (16)
The number of failures N(t) in an interval of length t follows the Poisson distribution:

P { N(t) = n } = ( (lambda t)^n e^(-lambda t) ) / n!

The inter-arrival times of the failures are independent and obey the exponential distribution:

f(x) = lambda e^(-lambda x) for x >= 0, and f(x) = 0 otherwise.
[Fig 6. Comparison of Number of Checkpoints of Fixed and Incremental Checkpoint Intervals: number of checkpoints versus failure time (min), for checkpoint intervals of 2, 3, 4, 5 and 10 minutes.]

[Fig 7. Comparison of Checkpoint Cost of Fixed and Incremental Checkpoint Intervals: checkpoint cost versus failure time (min).]
[Figures 5a and 5b: total cost of overheads versus failure time (min) for fixed and incremental checkpoint intervals.]
VII. CONCLUSIONS
Figures 5a and 5b show the comparison of the total cost of overheads for different checkpoint interval sizes. From these figures, it is clear that the total cost of overheads is at its minimum when the checkpoint interval size is 4 minutes. We have also validated the model developed by Young [5] to determine the optimum checkpoint interval; however, that model does not consider the restart cost required to restart the application when it fails. We have added the restart cost, multiplied by the number of times the application undergoes failures, to Young's model [5] to obtain the optimum checkpoint interval, which yields the optimum overhead cost irrespective of the number of failures for any application. An approximate estimate of the checkpoint interval can be calculated from equation (3).
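Equations (1) and (2), Young's base interval without restart cost, can be sketched numerically. The processor count, failure rate and checkpoint save time below are illustrative assumptions, not the paper's experimental values:

```python
import math

def time_to_failure(p: int, failures_per_hour: float) -> float:
    """TF = 1 / (P * lambda): mean time to failure in hours when P
    processors each fail at `failures_per_hour` (equation (1))."""
    return 1.0 / (p * failures_per_hour)

def young_interval(ts_hours: float, tf_hours: float) -> float:
    """Young's optimum checkpoint interval tc = sqrt(2 * TS * TF)
    (equation (2)); the restart cost R is ignored here."""
    return math.sqrt(2.0 * ts_hours * tf_hours)

# Assumed numbers: 100 processors, 0.001 failures/hour each,
# 30 seconds to save a checkpoint.
tf = time_to_failure(100, 0.001)    # ~10 hours
tc = young_interval(30 / 3600, tf)  # ~0.408 hours, i.e. about 24.5 min
print(tf, tc)
```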
From figure 6, we see that the fixed checkpoint interval method causes more checkpoints than
[12] R. Geist, R. Reynolds, and J. Westall, "Selection of a checkpoint interval in a critical-task environment", IEEE Trans. Reliability, 37(4), 395-400 (1988).
[13] P. H. Hargrove and J. C. Duell, "Berkeley lab checkpoint/restart (BLCR) for Linux clusters", Journal of Physics: Conference Series 46 (2006), 494-499, SciDAC 2006.
[14] J. S. Plank and M. G. Thomason, "The Average Availability of Parallel Checkpointing Systems and Its Importance in Selecting Runtime Parameters", 29th International Symposium on Fault-Tolerant Computing, Madison, WI, June 1999, pp. 250-259.
K. V. Gayathiri Devi #2, M. Arunachalam *3, M. K. Hema *4
I. INTRODUCTION
This project is the implementation of a simple, fast and reliable method for automatically diagnosing diseases through digitized images of blood. Hematopathology reveals that there is an intrusion of disease cells into the blood, with identical characteristics for each disease. The principal effectors in the blood, namely the erythrocytes, leukocytes and platelets, play crucial roles in supplying blood to the various parts of the body, resisting foreign particles, and clotting of blood, respectively. These effectors are identified and skipped in our observation. In microscopic images of diseased blood cells, the diagnosis is based on the evaluation of some general features of the blood cells, such as color, shape and border, and on the presence and aspect of characteristic structures. Perception of these structures depends both on magnification and on image resolution. The introduction of
[Fig 2.1.1: input slide size standardization. The pipeline runs: image capturing (reduced/enlarged) -> color/gray image -> segmentation -> binarized image -> analysis (BP algorithm) -> disease detection.]

Color/Gray image
The image is then passed to the above component, in which the color image is converted to gray scale using the necessary formula: the weights of the red, green and blue channels are mixed in proper proportions, and then a single value is calculated, as required in gray scale.

Segmentation
The image is then segmented according to the required values of height and width. When some part of a block exceeds the image, it is filled with appropriate padding values.
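The channel-mixing step of the gray-scale conversion can be sketched as follows. The 0.299/0.587/0.114 weights are the common ITU-R BT.601 luma coefficients, an assumption on our part, since the text only says the channels are mixed "in proper proportions":

```python
def rgb_to_gray(pixel):
    """Mix R, G and B in fixed proportions to get one gray value.

    The 0.299/0.587/0.114 weights are the common ITU-R BT.601 luma
    coefficients -- an assumption, not the paper's stated formula."""
    r, g, b = pixel
    return round(0.299 * r + 0.587 * g + 0.114 * b)

def to_grayscale(image):
    """Apply the conversion to a row-major list of RGB pixels."""
    return [[rgb_to_gray(px) for px in row] for row in image]

print(rgb_to_gray((255, 255, 255)))  # 255
print(rgb_to_gray((255, 0, 0)))      # 76
```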
Chronic Myeloid
Leukemia
Gaucher's Disease
Hypersplenism
Erythrocytes
Leukocytes
Platelets
as shown in the below figure.
organs of the human body. The size of red blood cell is 7.5
microns in diameter. It is spherical in shape. In a human
body weighing 75 kg containing 5 litres of blood can have
about 1000 red blood cells in it. Red blood cells are almost
same in diameter.
Leukocytes
Leukocytes are also termed as White blood cells. The
function of Leukocytes is to fight against the foreign
organisms or the cells getting into the blood due to the
various diseases. When compared to Erythrocytes,
Leukocytes are large in number. The main way in
differencing Erythrocytes and Leukocytes is the nucleus
that is only possessed by the Leukocytes. The size of
Leukocytes is also bigger in diameter compared to that of
Erythrocytes.
Platelets
Platelets play a vital role in performing the clotting of
blood. Platelets do not have a definite structure.
Digital Image Processing
Digital image processing is the processing of image data for storage, transmission and representation for autonomous machine perception. A digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements and pixels; pixel is the term most widely used to denote an element of a digital image. Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. Digital image processing thus encompasses a wide and varied field of applications. There are fields such as computer vision whose ultimate goal is to use computers to emulate human vision, including learning and being able to make inferences and take actions based on visual inputs; this area is itself a branch of artificial intelligence, whose objective is to emulate human intelligence. The area of image analysis, also called image understanding, lies between image processing and computer vision. Let us now discuss medical imaging, which is a part of digital image processing.
C. Medical Imaging

Fig 2.2.1: Components of Blood
Erythrocytes
Erythrocytes are also termed as Red blood cells. The
function of Erythrocytes is to carry oxygen to various
units.
The error value Ep above is set so that the difference between the obtained image and the actual image is fixed to a certain quantity.
Result analysis of error differences by BPN

Actual image | Obtained image | Result
53 | 67 | 12
22 | 54 | 22
44 | 55 | 11
33 | 76 | 44
13 | 53 | 67
88 | 14 | 44
55 | 55 |
Fig 2.5.3: Block Extraction

To further reduce the running time of the algorithm, we can select some points, in some chosen fashion, and find an appropriate match. This is represented in the following diagrams.
Considering slide dimensions as below, the total number of comparisons required is 100X100 = 10000.
TABLE 2.5 IMPROVED ALGORITHM ANALYSIS

Block dimensions | No. of blocks | Total no. of comparisons required (minimum) | Running time saved (%)
20X20 | - | 3200 | 68
10X10 | 16 | 1600 | 84
5X5 | 32 | 800 | 92
Fig 2.5: Graphical representation of algorithm analysis (running time saved, in %, versus block dimensions; the 5X5 dimension saves the most time but is inappropriate due to lower accuracy)
Thus the improved running time for block dimensions 10X10 is 16% of O(m1*m2). The total time saved by the improved technique is therefore
O(m2*n2*m1*n1) - 16%[O(m1*m2)],
which is considerably more than
O(m2*n2*m1*n1) - O(m1*m2).
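As a quick sanity check, the "running time saved" column of Table 2.5 follows directly from the comparison counts, assuming the saving is measured against the 10000 exhaustive comparisons mentioned above (an assumption; the baseline is not stated explicitly):

```python
# Reproduce the "running time saved" column of Table 2.5.
# Baseline: the exhaustive 100X100 = 10000 comparisons (assumed).

EXHAUSTIVE = 100 * 100  # comparisons without block sampling

def percent_saved(comparisons):
    """Percent of running time saved relative to exhaustive matching."""
    return round(100 * (1 - comparisons / EXHAUSTIVE))

for dims, comps in [("20X20", 3200), ("10X10", 1600), ("5X5", 800)]:
    print(dims, comps, percent_saved(comps))  # -> 68, 84 and 92 percent
```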
III. RESULTS AND DISCUSSION
Blood cell recognition, disease diagnosis and some of the morphological methods are briefly reviewed in this section. This includes experimental investigations and image processing algorithms that have been developed to diagnose diseases. To overcome the problems associated with blood cells, disease constituents and effectors during the diagnosis process, the image processing techniques have been combined with a vision system to provide closed-loop control. Several experiments were carried out to characterize the disease through the main modeling steps, i.e., establishing corresponding cells from a set of learning shapes and deriving a statistical model.
We applied the modeling procedure to distinguish the incoming image from the normal one. We tested our methods on 15 images of healthy subjects and 30 images with disease. In our approach, we established the corresponding cells prior to pose and scaling estimation during statistical modeling. This assumes that the position of the cells does not change significantly with different pose and scaling, which is justified by the high accuracy of the model.
(Amazon, Rackspace Cloud, SalesForce, Sun, Yahoo, Google) to the Cloud Computing
2. Virtualization
Whenever an application or a service is developed by a company, delivering it reliably outside its native environment requires all its dependent services and resources to be bundled together, because the application and the infrastructure are interdependent. Technologies therefore advanced toward a solution: bundling the application with all the services it requires (the database, its immediate supporting services, and even the operating system) into one unit, thereby making the application independent of the infrastructure into which it is deployed.
The architecture that frees the application to work independently of the infrastructure is termed virtualization. It refers to the abstraction of computer resources. With the emergence of virtualization, the internet cloud gathered multiple resources to equip itself for virtualization. Most of the time, servers don't run at full capacity, which means there is unused processing power going to waste. It is possible to fool a physical server into thinking it is actually multiple servers, each running its own independent operating system; the technique is called server virtualization. By maximizing the output of individual servers, server virtualization reduces the need for more physical machines.
Maneuvering the virtualization technique according to the demands of the vendors is the simplest definition of cloud computing.
Architecture
Conclusion
[5] http://www.watblog.com/2008/03/25/yahoocomputational-research-laboratories-team-up-for-cloudcomputing-research/
[6] http://computersoftware.suite101.com/article.cfm/cloud_computing_and_virtualization_impact
[7] http://www.microsoft.com/virtualization/en/us/privatecloud.aspx
asmafaritha@yahoo.in
I. INTRODUCTION
Fuzzy set theory has been applied to many fields of operations research. The concept of fuzzy linear programming (FLP) was first formulated by Zimmermann. Various types of FLP problems have since been considered by many authors, and several approaches for solving these problems have been proposed. In this paper we consider an LPP in which the coefficients of the objective function, the constraint coefficients and the right-hand sides of the constraints are fuzzy. We explain the comparison of fuzzy numbers by introducing a linear ranking function. Our main contribution is the establishment of a new method for solving fuzzy linear programming problems with TORA. Moreover, we illustrate our method with an example.
II. PRELIMINARY
henry23@refiffmail.com
A. FUZZY SETS
Definition
Let X be a classical set of objects, called the universe, whose generic elements are denoted by x. Membership in a crisp subset A of X is often viewed as a characteristic function μA(x) from X to {0, 1} such that
μA(x) = 1, if x ∈ A
      = 0, otherwise,
where {0, 1} is called the valuation set.
If the valuation set is allowed to be the real interval [0, 1], A is called a fuzzy set. μA(x) is the degree of membership of x in A. The closer the value of μA(x) is to 1, the more x belongs to A. Therefore, A is completely characterized by the set of ordered pairs:
A = { (x, μA(x)) : x ∈ X }
Definition
The support of a fuzzy set A is the crisp subset of X presented as:
Supp A = { x ∈ X : μA(x) > 0 }
Definition
The α-level (α-cut) set of a fuzzy set A is a crisp subset of X denoted by
Aα = { x ∈ X : μA(x) ≥ α }
Definition
A fuzzy set A in X is convex if
μA(λx + (1-λ)y) ≥ min{ μA(x), μA(y) } for all x, y ∈ X and λ ∈ [0, 1].
Alternatively, a fuzzy set is convex if all its α-level sets are convex.
Note that in this paper we suppose that X = R.
B. Fuzzy numbers
Definition
A fuzzy number ã is a convex normalized fuzzy set on the real line R such that
1) there exists at least one x0 ∈ R with μã(x0) = 1;
2) μã(x) is piecewise continuous.
Page 237
Proceedings of International Conference on Computers, Communication & Intelligence, July 22nd & 23rd 2010
Image of ã: -ã = (-aU, -aL, β, α)
Addition: ã + b̃ = (aL + bL, aU + bU, α + γ, β + θ)
Scalar multiplication:
x > 0: x ã = (x aL, x aU, x α, x β)
x < 0: x ã = (x aU, x aL, -x β, -x α)

[Figure: trapezoidal membership function of ã = (aL, aU, α, β): 0 up to aL - α, rising to 1 on [aL, aU], falling back to 0 at aU + β]

IV. RANKING FUNCTIONS
A convenient method for comparing fuzzy numbers is the use of ranking functions. A ranking function is a map from F(R) into the real line. Now we define orders on F(R) as follows:
Σj ãij xj ≤ b̃i (i ∈ Nm)
xj ≥ 0 (j ∈ Nn)
Sub to Σj ãij xj ≤ (ti, ui, vi)  (i ∈ Nm)
xj ≥ 0  (j ∈ Nn)

Example: Max z = (5, 8, 2, 5) x1 + (6, 10, 2, 6) x2, with R(z) = 86.515.

Defuzzifying the objective function using the ranking function, the FLPP becomes the following classical LPP:
Max z = 14.5 x1 + 18 x2
Sub to ti - ui ≤ Σj aij xj ≤ ti + vi  (i ∈ Nm)
xj ≥ 0  (j ∈ Nn)
Now the fuzzy linear programming problem becomes
Max_R z = c̃ x
Sub to A x ≤ b
x ≥ 0
where c̃ᵀ ∈ (F(R))ⁿ, b ∈ Rᵐ, x ∈ Rⁿ, and R is a linear ranking function. Then, according to [1], the problem can be solved by using the simplex method. By using the ranking function, the objective function of (1) can be defuzzified, and then (1) is equivalent to a classical LPP which can be solved by using TORA.
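As a minimal sketch, the defuzzification step can be reproduced with a linear ranking function of the form R(ã) = aL + aU + (β - α)/2 for a trapezoidal fuzzy number ã = (aL, aU, α, β). This particular form is an assumption, chosen because it reproduces the defuzzified coefficients 14.5 and 18 used in the example:

```python
# Sketch of a linear ranking function for a trapezoidal fuzzy number
# a~ = (aL, aU, alpha, beta). The form R(a~) = aL + aU + (beta - alpha)/2
# is an assumption; it matches the defuzzified objective coefficients
# in the worked example (14.5 and 18).

def rank(aL, aU, alpha, beta):
    """Map a trapezoidal fuzzy number to a crisp value."""
    return aL + aU + (beta - alpha) / 2

# Defuzzify the objective Max z = (5,8,2,5) x1 + (6,10,2,6) x2
c1 = rank(5, 8, 2, 5)   # coefficient of x1 -> 14.5
c2 = rank(6, 10, 2, 6)  # coefficient of x2 -> 18.0
print(c1, c2)
```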
VII. CONCLUSION
In this paper we proposed a new method for solving FLP problems with TORA. The significance of this paper is in providing a new method for solving fuzzy linear programming problems in which the coefficients of the objective function are trapezoidal fuzzy numbers, and the coefficients of the constraints and the right-hand sides of the constraints are triangular fuzzy numbers. We compared the solution of the FLPP with the defuzzified LPP solution using TORA.
Max z = (5, 8, 2, 5) x1 + (6, 10, 2, 6) x2
Sub to (4, 2, 1) x1 + (5, 3, 1) x2 ≤ (24, 5, 8)
(4, 1, 2) x1 + (1, 0.5, 1) x2 ≤ (12, 6, 3)
x1, x2 ≥ 0
we can rewrite it as

REFERENCES
[1]
[2]
[3]
[4]
[5]
I INTRODUCTION
ERP evolved from Manufacturing Resource Planning (MRP II), which in turn originated from Material Requirements Planning (MRP I). It has gained much prominence and utility with the intervention of open source, web-enabled and wireless technologies.
ERP has undoubtedly become an important business application for all industries. It has almost become a must for all organizations, irrespective of the type of business, manufacturing or service.
In this context it becomes important to analyze the direction in which ERP is geared to progress, whether ERP will diminish in the future, emerging technologies, and so on.
II LATEST TRENDS IN ERP
ERP calls for constant modifications and upgrades. ERP developers face tremendous pressure from both vendors and companies. In this context it becomes important to analyze ERP's trends and modalities. The following are relevant issues in ERP.
A. Need-based applications
Organizations earlier had to implement ERP throughout their systems, irrespective of whether it helped in all functions or only in one particular function. This was proving to be a big hurdle for firms, and it remained the main disadvantage or setback of ERP: they had to purchase whole applications even if it meant that most of them would sit idle except for the core function.
The latest ERP software programs have overcome this drawback by offering need-based applications. Firms are given the liberty to purchase and install software programs pertaining to that particular function. This advantage has
D. Cloud Computing
Cloud computing represents the next evolutionary step toward elastic IT. It will transform the way your IT infrastructure is constituted and managed, through consumable services for infrastructure, platform and applications, converting your IT infrastructure from a factory into a supply chain.
Cloud computing is a general term for anything that
involves delivering hosted services over the Internet. These
services are broadly divided into three categories:
Infrastructure-as-a-Service (IaaS), Platform-as-a-Service
(PaaS) and Software-as-a-Service (SaaS). The name cloud
computing was inspired by the cloud symbol that's often
used to represent the Internet in flowcharts and diagrams.
Cloud computing SaaS is the next-generation ERP, optimized for B2B/SMBs, with built-in CRM and SFA to cut costs and improve productivity.
3) Message persistency: The frequency of message transfer is referred to as message persistency. This rate is important in deciding the success of the communication systems and ERP functions. The organization should ensure that there is no lag in the systems, processes or any other procedures that is likely to affect this rate. Companies should constantly monitor and suggest areas for
VI CONCLUSION
The "soft" benefits of ERP provide good bottom-line savings and sustainable top-line growth: lower inventory and operating cost, more efficient production, improved quality with less waste, higher customer acquisition and retention, and better cash flow.
The future of ERP holds undisputed demand, not only at the national level but also at the global level, provided the technology can be improved to the desired extent.
ERP trends reflect positive signals for ERP vendors and the companies availing their services. It is important to remember that both the vendor and the company will be able to make use of any advantage (including the modern facilities) only through proper coordination, teamwork and the nurturing of a cordial atmosphere. Mere IT ERP trends will not help in this respect.
Industry consultant Reed sums it up this way: "'Empower me. Give me the tools to create differentiating processes that allow me to distinguish myself from my competitors. And make sure that it's easier for me to do, so I don't have to hire 100 programmers. Give me the building blocks to put that together quickly, so that it's just humming in the background, and leave me free to focus on what makes us better than other companies.' That's what customers are expecting now and really want."
REFERENCES
[1] www.erpwire.com
[2] www.whitepaper.techrepublic.com
[3] www.olcsoft.com
[4] www.eioupdate.com
[5] www.sdn.sap.com
[6] www.bscaler.com
[7] www.plex.com
[8] www.itmanagement.earthweb.com
charusanthaprasad@yahoo.com
sajinaraj@pec.edu
I . Introduction
The goal of the information retrieval process is to retrieve information relevant to a given query request: to retrieve all the relevant information while eliminating the non-relevant information. An information retrieval system comprises a document representation, a semantic similarity matching function and a query. The document representation comprises the abstract descriptions of the documents in the system. The semantic similarity matching function defines how query requests are compared to the stored descriptions in the representation.
The percentage of relevant information we get depends mainly on the semantic similarity matching function used. So far, several semantic similarity methods have been used, each with certain limitations despite its advantages. No single method replaces all the other semantic similarity methods, so when a new information retrieval system is to be built, several questions arise about which semantic similarity matching function to use. In the last few decades, a large number of semantic similarity methods have been developed.
This paper discusses an overall view of the different existing similarity measuring methods used for ontology concept comparison. We also discuss the pros and cons of existing similarity metrics, and we present a new approach, independent of the corpora, for finding the semantic similarity between two concepts.
Word1 | Word2 | Replica | R&G | M&C
Car | Automobile | 3.82 | 3.92 | 3.92
Gem | Jewel | 3.86 | 3.84 | 3.94
Journey | Voyage | 3.58 | 3.54 | 3.58
Boy | Lad | 3.10 | 3.76 | 3.84
Coast | Shore | 3.38 | 3.70 | 3.60
Asylum | Madhouse | 2.14 | 3.61 | 3.04
Magician | Wizard | 3.68 | 3.50 | 3.21
Midday | Noon | 3.45 | 3.42 | 3.94
Furnace | Stove | 2.60 | 3.11 | 3.11
Food | Fruit | 2.87 | 3.08 | 2.69
Bird | Cock | 2.62 | 3.05 | 2.63
Bird | Crane | 2.08 | 2.97 | 2.63
Tool | Implement | 1.70 | 2.95 | 3.66
Brother | Monk | 2.38 | 2.82 | 2.74
Lad | Brother | 1.39 | 1.66 | 2.41
Crane | Implement | 1.26 | 1.68 | 2.37
Journey | Car | 1.05 | 1.16 | 1.55
Monk | Oracle | 0.90 | 1.10 | 0.91
Cemetery | Woodland | 0.32 | 0.95 | 1.18
Food | Rooster | 1.18 | 0.89 | 1.09
Coast | Hill | 1.24 | 0.87 | 1.26
Forest | Graveyard | 0.41 | 0.84 | 1.00
Shore | Woodland | 0.81 | 0.63 | 0.90
Monk | Slave | 0.36 | 0.55 | 0.57
Coast | Forest | 0.70 | 0.42 | 0.85
Lad | Wizard | 0.61 | 0.42 | 0.99
Chord | Smile | 0.15 | 0.13 | 0.02
Glass | Magician | 0.52 | 0.11 | 0.44
Rooster | Voyage | 0.02 | 0.08 | 0.04
Noon | String | 0.02 | 0.08 | 0.04
Approach | Correlation
Resnik | 0.744
- | 0.850
Lin | 0.829
- | 0.744
- | 0.816
TABLE IV
Press: Reportage (News, Financial, Cultural)
Press: Editorial (Letters to the Editor)
Press: Reviews (Theatre, Books, Music, Dance)
Religion (Books, Periodicals, Tracts)
Skill and Hobbies (Books, Periodicals)
Popular Lore (Books, Periodicals)
Belles-Lettres (Books, Periodicals)
Replica: 0.97, 0.93, 0.95
2. To investigate the existing IC metric on corpora other than the Brown corpus.
TABLE V
WORDNET MERONYMY/HOLONYMY RELATION

Relation | Example
Meronym (Part-of) / Holonym (Has-part) | Car is a Holonym of Engine
Meronym (Has-Member) / Holonym (Member-of) | Player is a member of Team
V. PROPOSED WORK
The semantic similarity measures are mostly based on information content.
Most of the corpus-based similarity methods, such as Lin [8], Jiang & Conrath [7] and Resnik [5], are IC based, and the IC calculation is done using the Brown corpus. Not all concepts of WordNet are present in the Brown corpus; nouns such as autograph, serf and slave do not appear in it.
Similarity measures that rely on information content can produce a zero value even for the most intuitive pairs, because the majority of WordNet concepts occur with a frequency of zero. This makes the Lin method and the Jiang & Conrath method return zero or infinity in the continuous domain, and hence the similarity measure is not reliable.
The information content should therefore be computed in a different way, so that the similarity measure becomes reliable.
[Figure: architecture of the semantic similarity system — WordNet taxonomy; compute frequency of WordNet concepts; semantic similarity computation (Resnik, Jiang & Conrath, Lin, Pirro & Seco); human judgments for the extended R&G data set (Rubenstein & Goodenough extended data set); compute correlation coefficient; performance analysis]
IC(LCS(c1,c2))_hyper / log(maxcon)   ---(16)
where,
hypo(c) is the function returning the number of hyponyms of concept c;
mero(c) is the function returning the number of meronyms of concept c;
maxcon is a constant that indicates the total number of concepts in the considered taxonomy.
Note: assume hypo(c), mero(c) >= 0 and maxcon > 0.
The functions hypo and mero return the number of hyponyms and meronyms of a given concept c. Note that concepts representing leaves in the taxonomy will have an IC of one, since they have no hyponyms. A value of one states that a concept is maximally expressed and cannot be further differentiated.
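A minimal sketch of the extended information-content computation, assuming hypo(c) and mero(c) are precomputed hyponym and meronym counts and maxcon is the total number of concepts in the taxonomy:

```python
# Sketch of the proposed IC formula:
#   IC(c) = 1 - log(hypo(c) + mero(c) + 1) / log(maxcon)
# hypo, mero and maxcon are assumed to be precomputed taxonomy counts.
import math

def ic(hypo, mero, maxcon):
    """Information content of a concept from its hyponym/meronym counts."""
    return 1 - math.log(hypo + mero + 1) / math.log(maxcon)

# A leaf concept (no hyponyms, no meronyms) is maximally expressed:
print(ic(0, 0, 100000))  # -> 1.0
```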
(c1 ∩ c2)   ----(17)
VI. Conclusion
This paper has discussed the various approaches that could be used for finding similar concepts in an ontology and between ontologies. We have done a survey to exploit the similarity methods for ontology-based query expansion, to aid better retrieval effectiveness of information retrieval models. The experiments conducted by earlier researchers provide good correlation values, which gives a promising direction for using them in ontology-based retrieval models.
A new semantic similarity metric has been introduced which overcomes the shortcomings of existing semantic similarity methods, mainly the sparse data problem. We are working on a new similarity function which will combine the advantages of the similarity methods discussed in this paper, and we will test it with ontologies of a particular domain. Since we consider all the relationships present in the WordNet taxonomy, the new metric gives accurate results.
IC(c) = 1 - log(hypo(c)+1) / log(maxcon)
go to step 3
Else if c1 and c2 are meronyms
Calculate IC(c1) for meronyms, i.e.
IC(c) = 1 - log(mero(c)+1) / log(maxcon)
go to step 3
Step 2: Compute the IC value for both hyponyms and meronyms using the proposed IC formula,
IC(c) = 1 - log(hypo(c)+mero(c)+1) / log(maxcon)
go to step 3
Step 3: Call the existing semantic similarity functions SimRes(c1,c2), SimLin(c1,c2) and SimJ&C(c1,c2), and then go to step 4.
Step 4: Call the proposed semantic similarity function for the given concepts c1 and c2:
Simext(c1,c2) = IC(LCS(c1,c2))hyper(c1 ∩ c2)
Step 5: Collect human judgments and save them as a separate table for the R&G and M&C data sets.
Step 6: Calculate the correlation coefficients between the results of the similarity measures and the human judgments.
Step 7: Compare the similarity measures for the R&G data set using the proposed IC and the proposed similarity against the existing similarity measures.

VII. REFERENCES
[1] Roy Rada, H. Mili, Ellen Bicknell, and M. Blettner, "Development and application of a metric on semantic nets," IEEE Transactions on Systems, Man, and Cybernetics, 19(1), 17-30, January 1989.
[2] Michael Sussna, "Word sense disambiguation for free-text indexing using a massive semantic network," in Bharat Bhargava
[3] Linguistics,
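Step 6 above amounts to a plain Pearson correlation between a measure's scores and the human judgments over the same word pairs. A minimal sketch (the sample values below are illustrative placeholders, not the paper's data):

```python
# Pearson correlation coefficient between similarity scores and
# human judgments, computed from first principles.
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

human = [3.92, 3.84, 0.42, 0.13]    # hypothetical judgment scores
measure = [0.95, 0.90, 0.20, 0.05]  # hypothetical similarity scores
print(round(pearson(human, measure), 3))
```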
[8] Dekang Lin, "An information-theoretic definition of similarity."
Graeme Hirst and
vlak29@gmail.com
pveswaripriya@gmail.com
fault
I. INTRODUCTION
Software modularization and object-oriented decomposition are approaches for improving the organization and comprehension of source code. In order to understand OO software, software engineers need to create a well-connected representation of the classes that make up the system. Each class must be understood individually, and then the relationships among classes as well. One of the goals of OO analysis and design is to create a system where classes have high cohesion and there is low coupling among them. These class properties facilitate comprehension, testing, reusability, maintainability, etc.
A.Software Cohesion
Software cohesion can be defined as a measure of
the degree to which elements of a module belong together.
Cohesion is also regarded from a conceptual point of view.
In this view, a cohesive module is a crisp abstraction of a
concept or feature from the problem domain, usually
described in the requirements or specifications. Such
II. RELATED WORKS

A. Structural Metrics
We study the cohesion metric LCOM of B. Henderson-Sellers [2] for object-oriented programs. The LCOM metric is based on the assumption that if the methods and instance variables of a class are interconnected, then the class is cohesive. If a method x uses instance variable y, then x and y are interconnected. Henderson-Sellers defined the metric LCOM*, which is similar to LCOM but has a fixed scale. Briand et al. observed that the scale of LCOM* is from 0 to 2, and gave a refined version of this metric with a scale from 0 to 1. Other similar cohesion metrics are LCC, TCC and CBMC. Good surveys of cohesion metrics have been made by Briand et al. and Chae et al.
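For concreteness, the Henderson-Sellers LCOM* metric described above can be sketched as follows, assuming `access` maps each attribute to the set of methods that use it (all names here are hypothetical):

```python
# Sketch of Henderson-Sellers' LCOM*:
#   LCOM* = ((1/a) * sum_j mu(A_j) - m) / (1 - m)
# where m = number of methods, a = number of attributes, and mu(A_j) is
# the number of methods accessing attribute A_j. 0 means fully cohesive;
# values can exceed 1 (Briand et al. observed a raw scale of 0 to 2).
def lcom_star(methods, access):
    m = len(methods)
    a = len(access)
    if m <= 1 or a == 0:
        return 0.0  # degenerate class: treat as fully cohesive
    mean_mu = sum(len(ms) for ms in access.values()) / a
    return (mean_mu - m) / (1 - m)

# Every method touches every attribute -> perfectly cohesive (0.0)
print(lcom_star({"f", "g"}, {"x": {"f", "g"}, "y": {"f", "g"}}))
# Each attribute touched by only one method -> non-cohesive (1.0)
print(lcom_star({"f", "g"}, {"x": {"f"}, "y": {"g"}}))
```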
III. PROPOSED C3

A. Advantages
1. We can predict the cohesion.
2. We can predict whether a particular system is cohesive or not.

B. Formula for C3
For every class C, the conceptual similarity between methods mk and mj (represented as vectors) is
CSM(mk, mj) = (mk · mj) / (‖mk‖2 ‖mj‖2)
k = 1, 2, 3, ..., n; j = 2, 3, ..., n (n: up to n comments)
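The conceptual similarity above is a cosine between the vector representations of two methods (for example, LSI vectors built from their comments and identifiers). A minimal sketch, with placeholder vectors rather than real method representations:

```python
# Cosine similarity between two method vectors mk and mj:
#   CSM(mk, mj) = (mk . mj) / (||mk|| * ||mj||)
import math

def csm(mk, mj):
    """Conceptual similarity of two methods given their vectors."""
    dot = sum(a * b for a, b in zip(mk, mj))
    norm = math.sqrt(sum(a * a for a in mk)) * math.sqrt(sum(b * b for b in mj))
    return dot / norm

print(csm([1.0, 0.0], [1.0, 0.0]))  # identical directions -> 1.0
print(csm([1.0, 0.0], [0.0, 1.0]))  # orthogonal vectors   -> 0.0
```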
IV. COMPARISON
Structural metrics are calculated from the source code, using evidence such as references and data sharing between methods of a class to decide what belongs together for cohesion. They define and measure relationships among the methods of a class based on the number of pairs of methods that share instance or class variables in one way or another. These metrics give only an approximate value of cohesion. Figure 4.1 shows the measurement of cohesion using LCOM5.
VI. REFERENCES
[1] Andrian Marcus, Denys Poshyvanyk, and Rudolf Ferenc, "Using the Conceptual Cohesion of Classes for Fault Prediction in Object-Oriented Systems," IEEE Transactions on Software Engineering, vol. 34, no. 2, March/April 2008.
[2] B. Henderson-Sellers, Software Metrics. Prentice Hall, 1996.
[3] E.B. Allen, T.M. Khoshgoftaar, and Y. Chen, "Measuring Coupling and Cohesion of Software Modules: An Information-Theory Approach," Proc. Seventh IEEE Int'l Software Metrics Symp., pp. 124-134, Apr. 2001.
[4] A. Marcus, A. De Lucia, J. Huffman Hayes, and D. Poshyvanyk, "Working Session: Information-Retrieval-Based Approaches in Software Evolution," Proc. 22nd IEEE Int'l Conf. Software Maintenance, pp. 197-199, Sept. 2006.
[5] P.W. Foltz, W. Kintsch, and T.K. Landauer, "The Measurement of Textual Coherence with Latent Semantic Analysis," Discourse Processes, vol. 25, no. 2, pp. 285-307, 1998.
Hanumanthappa J.(1), Manjaiah D.H.(2), Aravinda C.V.(3)
(1) Teacher Fellow, DoS in CS, University of Mysore, Manasagangothri, Mysore, INDIA.
(2) Reader & Chairman, Mangalore University, Mangalagangothri, Mangalore, INDIA.
(3) M.Tech., KSOU, Manasagangotri, Mysore, INDIA.
hanums_j@yahoo.com, ylm321@yahoo.co.in, aravinda.cv@gmail.com
Abstract.
Today, the Internet consists of native IPv4 (IPv4-only), native IPv6 and dual IPv4/IPv6 networks. IPv4 and IPv6 are incompatible protocols, so when both versions are available and Internet users want to connect without any restrictions, a transition mechanism is required. Since a huge amount of resources has been invested in the current IPv4-based Internet, how to smoothly transition the Internet from IPv4 to IPv6 is a tremendously interesting research topic. For the period of migration from IPv4 to IPv6 networks, a number of transition mechanisms have been proposed by the IETF to ensure a smooth, stepwise and independent changeover. The development of Internet Protocol version 6 (IPv6), in addition to being a fundamental step to support the growth of the Internet, is at the base of increased IP functionality and performance. It will enable the deployment of new applications over the Internet, opening a broad scope of technological development. BD-SIIT is one of the transition techniques mentioned in the IETF drafts to perform the transition between IPv4 and IPv6, and we implement the BD-SIIT transition algorithm. Stateless IP/ICMP Translation (SIIT) [RFC2765] is an IPv6 transition mechanism that allows IPv6-only hosts to talk to IPv4-only hosts. The mechanism involves a stateless mapping, or bidirectional translation algorithm, between IPv4 and IPv6 packet headers, as well as between Internet Control Messaging Protocol version 4 (ICMPv4) and ICMPv6 messages. SIIT is a stateless IP/ICMP translation, which means that the translator is able to process each conversion individually, without any reference to previously translated packets. Most IP header field translations are relatively simple; there is, however, one issue, namely how to translate the IP addresses between IPv4 and IPv6 packets. The NS-2 simulator is used to implement the BD-SIIT mechanism.
I. INTRODUCTION
requires the assignment of an IPv4 address to the IPv6-only host, and this IPv4 address is used by the host to form a special IPv6 address that includes it. The mechanism is intended to conserve IPv4 addresses rather than permanently assigning them to IPv6-only hosts. The method of
Network-based device.

Bits 0-79: 80 zero bits | Bits 80-95: FFFF (16 bits) | Bits 96-127: IPv4 address (32 bits)
Fig.5: IPv4-mapped IPv6 address.
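The layout in Fig. 5 can be reproduced with Python's standard `ipaddress` module. This is an illustrative sketch of the address format only, not part of the BD-SIIT implementation:

```python
# Build an IPv4-mapped IPv6 address per Fig. 5: 80 zero bits,
# 16 one bits (FFFF), then the 32-bit IPv4 address.
import ipaddress

def ipv4_mapped(v4: str) -> ipaddress.IPv6Address:
    """Return the IPv4-mapped IPv6 address for a dotted-quad IPv4 string."""
    v4int = int(ipaddress.IPv4Address(v4))
    return ipaddress.IPv6Address((0xFFFF << 32) | v4int)

addr = ipv4_mapped("195.18.231.17")
print(addr, addr.ipv4_mapped)  # the embedded IPv4 address is recoverable
```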
Table.1: Address mapping IPv6/IPv4.

Sl.No | IPv6 Address | IPv4 Address | Address mapping value
1 | ABC2::4321 | 195.18.231.17 | 1
2 | ABC2::4321 | 210.154.76.91 | 2
3 | ABC2::4321 | 223.15.1.3 | 37
stack view of BD-SIIT
------------ (2)

Table: Simulation parameters.
Parameter | Value
Buffer Size | 500 Packets
Propagation delay | 10 ms
Payload Size | 200 Bytes
Varying traffic loads | 6~240 Nodes
Queue Management Scheme | Drop tail
Fig.9: The comparison between EED of v4-to-v4 and v6-to-v6 communication sessions.

4.2. DNS Response Time:
The DNS Response Time (DNSRT) metric gives the time needed to establish the communication session between two end systems located in two heterogeneous networks. The DNS response time can be calculated using equation (1).

VI. References.
Transition
14. Gilligan, R. & Nordmark, E. (1996) Transition Mechanisms for IPv6 Hosts and Routers. OMNeT++ Discrete Event Simulation.
15. L. Toutain
16.
17.
18.
19.
20.
21. J. Bound, 1.txt, April 2004, http://www.dstm.info.
Aravinda C.V. is currently pursuing M.Tech (I.T) at K.S.O.U., Manasagangotri, Mysore-06. He received M.Sc. and M.Phil. degrees in Computer Science. He has worked as a Lecturer at the following institutions:
1. CIST, Manasagangotri, Mysore;
2. Vidya Vikas Institute of Engg and Technology, Mysore;
3. Govt. First Grade College, Srirangapatna and Kollegal.
He has published two papers in a National Conference hosted by NITK, Surathkal, Mangalore.
Abstract
The Internet Protocol was started in the early 1980s. In the early 1990s, as the Internet was growing, some temporary solutions were offered in order to cover the fast growth in the number of Internet users, such as Network Address Translation (NAT) and Classless Inter-domain Routing (CIDR). At the same time, the IETF began working to develop a new Internet Protocol, namely IPv6, which was designed to be a successor to the IPv4 protocol. This paper proposes the new concept of error handling at the network layer (layer 3) instead of the data link layer (layer 2) of the ISO/OSI reference model, by adopting new capabilities and by using IPv6 over Fiber. This paper also shows how to reduce the overhead in terms of header processing at the data link layer by eliminating the cyclic redundancy check (CRC) field when using IPv6 over Fiber. Therefore we can also argue that the ISO/OSI model can contain 6 layers instead of 7 layers.
Key words: IPv4, IPv6, CRC, Ethernet, FCS.
I. Introduction to IPv6
Fig.1:An Exchange using the OSI model.
Fig.7: The IPv6 Packet format.
-----(1)

whereas

Thr_i = (P_accepted / P_created) * 100% ---- (2)

where Thr_i is the throughput value when packet i is accepted at an intermediate device such as a router, n is the total number of packets received at the router, P_rec is the number of received packets at the router, and P_created is the number of packets created.

Table 1 below shows the simulation results that are mainly used to calculate the performance measurements.
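Equation (2) is simple to compute for a packet stream; a small sketch (the function name is ours):

```python
def throughput_pct(p_accepted, p_created):
    """Thr_i = P_accepted / P_created * 100% (equation (2))."""
    if p_created <= 0:
        raise ValueError("P_created must be positive")
    return p_accepted / p_created * 100.0

# e.g. 450 of 500 created packets accepted at the router
print(throughput_pct(450, 500))  # 90.0
```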
[3] Manjaih.D.H., Hanumanthappa.J., 2009, IPv6 over Bluetooth: Security Aspects, Issues and its Challenges, In Proceedings of NCWNT-09, 2009, Nitte-574 110, Karnataka, INDIA, 18-22.
[4] Manjaih.D.H., Hanumanthappa.J., 2009, Economical and Technical costs for the Transition of IPv4-to-IPv6 Mechanisms [ETCTIPv4 to ETCTIPv6], In Proceedings of NCWNT-09, 2009, Nitte-574 110, Karnataka, INDIA, 12-17.
[5] Manjaih.D.H., Hanumanthappa.J., 2009, Smooth porting process scenario during the IPv6 transition, In Proceedings of IACC-09, March 6-7, Patiala, Punjab, INDIA.
[6] Manjaiah.D.H. and Hanumanthappa.J., IPv6 over IPv4 QoS metrics in 4G Networks: Delay, Jitter, Packet Loss Performance, Throughput and Tunnel Discovery mechanisms, In Proceedings of NCOWN-2009, RLJIT, Doddaballapur, Bangalore, Karnataka, INDIA, August 21-22, pp. 122-137.
[7] Hanumanthappa.J. and Manjaiah.D.H., A Study on IPv6 in IPv4 Static Tunneling threat issues in 4G Networks using OOAD Class and Instance Diagrams, In Proceedings of the International Conference on Emerging Trends in Computer Science and Information Technology (CSCIT-2010), organized by Dept. of CS and Information Technology, Yeshwant Mahavidyalaya, Nanded, Maharashtra, India, (2010).
[8] Hanumanthappa.J. and Manjaiah.D.H., An IPv4-to-IPv6 Threat review with Dual Stack Transition mechanism Considerations: a Transitional threat model in 4G Wireless, In Proceedings of the International Conference on Emerging Trends in Computer Science and Information Technology (CSCIT-2010), organized by Dept. of CS and Information Technology, Yeshwant Mahavidyalaya, Nanded, Maharashtra, India, (2010).
[9] Manjaiah.D.H. and Hanumanthappa.J., IPv6 and IPv4 Threat review with Automatic Tunneling and Configuration Tunneling Considerations Transitional Model: A Case Study for University of Mysore Network, International Journal of Computer Science and Information Security (IJCSIS), Vol. 3, (2009).
[10] S. Deering and R. Hinden, Internet Protocol, Version 6 (IPv6) Specification, RFC 2460, December 1998.
[11] J. Postel, INTERNET PROTOCOL, RFC 791, September 1981.
Mr. Hanumanthappa. J. is a Lecturer at the DoS in CS, University of Mysore, Manasagangothri, Mysore-06, and is currently pursuing a Ph.D in Computer Science and Engineering from Mangalore University under the supervision of Dr. Manjaiah. D.H, on the topic entitled Design and Implementation of IPv6 Transition Technologies for University of Mysore Network (6TTUoM). His teaching and research interests include Computer Networks, Wireless and Sensor Networks, Mobile Ad-Hoc Networks, Intrusion Detection Systems, Network Security and Cryptography, Internet Protocols, Mobile and Client-Server Computing, Traffic Management, Quality of Service, RFID, Bluetooth, Unix Internals, Linux Internals, Kernel Programming, Object Oriented Analysis and Design, etc. His most recent research focus is in the areas of Internet Protocols and their applications. He received his Bachelor of Engineering degree in Computer Science and Engineering from University B.D.T College of Engineering, Davanagere, Karnataka (S), India (C) (Kuvempu University, Shimoga) in the year 1998, and his Master of Technology in CS & Engineering from NITK Surathkal, Karnataka (S), India (C) in the year 2003. He has been associated as a faculty member of the Department of Studies in Computer Science since 2004. He has worked as a lecturer at SIR.M.V.I.T, Y.D.I.T and S.V.I.T of Bangalore. He has guided about 250 project theses for BE, B.Tech, M.Tech, MCA and MSc/MS students. He has published about 15 technical articles in International and National peer-reviewed conferences. He is a Life Member of CSI, ISTE, AMIE, IAENG, the Embedded Networking group of TIFAC-CORE in Network Engineering, ACM, the Computer Science Teachers Association (CSTA), ISOC, IANA, IETF, IAB, IRTG, etc. He is also a BOE member of all the universities of Karnataka, INDIA. He visited the Republic of China in the year 2008 as a Visiting Faculty of HUANG HUAI University of Zhumadian, Central China, to teach Computer Science subjects such as OS and System Software, Software Engineering, Object Oriented Programming with C++, and Multimedia Computing for B.Tech students. He has also visited Thailand and Hong Kong as a tourist.
Dr. Manjaiah. D.H is currently Reader and Chairman of BoS, in both UG and PG, in Computer Science at the Dept. of Computer Science, Mangalore University, Mangalore. He is also a BoE member of all universities of Karnataka and other reputed universities in India. He received his Ph.D degree from Mangalore University, M.Tech. from NITK, Surathkal, and B.E. from Mysore University. Dr. Manjaiah. D.H has extensive academic, industry and research experience. He has worked with many technical bodies such as IAENG, WASET, ISOC, CSI, ISTE and ACS. He has authored more than 25 research papers in international conferences and reputed journals. He has been invited to give several talks in his areas of interest on many public occasions. He is an expert committee member of the AICTE and various technical bodies. He has written a Kannada text book, entitled COMPUTER PARICHAYA, for the benefit of the teaching and student community of Karnataka. Dr. Manjaiah D.H's areas of interest are Computer Networking & Sensor Networks, Mobile Communication, Operations Research, E-commerce, Internet Technology and Web Programming.
Aravinda. C. V. is currently pursuing M.Tech (I.T.) at K.S.O.U., Manasagangotri, Mysore-06. He received M.Sc. and M.Phil. degrees in Computer Science. He has worked as a Lecturer at various institutions: 1) CIST, University of Mysore, Manasagangothri, Mysore; 2) Vidya Vikas Institute of Engineering and Technology, Mysore; 3) Govt. First Grade College, Srirangapatna, Mandya District; 4) Govt. First Grade College, Kollegal, Chamarajanagar District; and 5) as Technical Co-ordinator at NIIT, Gandhi Bazar, Bangalore. He has published two papers in a National Conference hosted by NITK, Surathkal, Mangalore, DK.
P.Jeyapriya 3
I. INTRODUCTION
Many picture libraries use keywords as their main form of retrieval, often using indexing schemes developed in-house, which reflect the special nature of their collections. A good example of this is the system developed by Images to index their collection of stock photographs. Users should be
able to explore images in a database or video clips by visual
similarities. Structuring and visualizing digital images are
based on their content similarities. Currently, many content
based image retrieval techniques have been developed to
incorporate higher-level feature extraction capabilities, but a
lot of work remains to be done. Ultimately, feature-extraction
techniques, combined with other techniques are expected to
narrow down the gap between relatively primitive features
extracted from images and high-level semantically-rich
perceptions by humans. Index terms are assigned to the whole
image, and the main objects. When discussing the indexing
of images and videos, one needs to distinguish between
systems which are geared to the formal description of the
image and those concerned with subject indexing and
(W_a f)(b) = ∫ f(x) ψ_{a,b}(x) dx ---------- (1)

where the wavelet ψ_{a,b} is computed from the mother wavelet ψ by translation and dilation:

ψ_{a,b}(x) = (1 / √|a|) ψ((x − b) / a) ---------- (2)

The mother wavelet satisfies the constraint of having zero mean. Eq. (1) can be discretized by restraining a and b to a discrete lattice. Typically it is imposed that the transform should be non-redundant and complete, and constitute a multi-resolution representation of the original signal. In practice, the transform is computed by applying a separable filter bank to the image, where * denotes the convolution operator, ↓(2,1) (↓(1,2)) denotes the down-sampling along the rows (columns), and I = A_0 is the original image; H and G are low-pass and high-pass filters, respectively. A_n is obtained by low-pass filtering and is referred to as the low-resolution (approximation) image at scale n. The D_n^1, D_n^2, D_n^3 are obtained by band-pass filtering in a specific direction (horizontal, vertical and diagonal, respectively) and thus contain directional detail information; they are referred to as the high-resolution (detail) images at scale n. The original image I is thus represented by a set of sub-images at several scales. This decomposition is called pyramidal wavelet transform decomposition or discrete wavelet transform (DWT) decomposition. Every detail sub-image contains information of a specific scale and orientation; the spatial information is retained within the sub-image. In the present paper, the features are obtained using the Haar wavelet (Fig. 1), which is given by

ψ(t) = 1 for 0 <= t < 1/2, −1 for 1/2 <= t < 1, 0 otherwise --------------- (3)
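As an illustration of the one-level decomposition above, here is a minimal sketch using the unnormalised average/difference form of the Haar filters (it assumes even image dimensions; this is an illustrative sketch, not the paper's implementation):

```python
def haar_level1(img):
    """One level of a 2D Haar decomposition (average/difference form).
    Returns the approximation A and the horizontal, vertical and
    diagonal detail sub-images (H, V, D). Assumes even dimensions."""
    rows, cols = len(img), len(img[0])
    # low-pass (average) and high-pass (difference) along rows, downsampled by 2
    lo = [[(img[2 * i][j] + img[2 * i + 1][j]) / 2 for j in range(cols)]
          for i in range(rows // 2)]
    hi = [[(img[2 * i][j] - img[2 * i + 1][j]) / 2 for j in range(cols)]
          for i in range(rows // 2)]

    def cols_pass(y):
        # same filtering along columns
        n = len(y[0]) // 2
        return ([[(r[2 * j] + r[2 * j + 1]) / 2 for j in range(n)] for r in y],
                [[(r[2 * j] - r[2 * j + 1]) / 2 for j in range(n)] for r in y])

    A, V = cols_pass(lo)   # approximation and vertical detail
    H, D = cols_pass(hi)   # horizontal and diagonal detail
    return A, H, V, D
```

A multi-level (pyramidal) decomposition repeats the same step on A.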
2. RELATED WORK
Visualization technique of images with more continuous
scenes
described in this article have a wide range of
potential applications, for example, data mining in remote
sensing images and image retrieval from film and video
archives. This methodology is suitable on a sample of images
with more continuous scenes, for example, video segments,
so that we will be able to keep track of the impact of various
feature-extraction techniques more closely [1]. A proposal is
presented for constructing an efficient image sharing system,
in which a user is able to interactively search interesting
images by content-based methods in a peer-to-peer network.
A distributed variant of the Tree-Structured Self- Organizing
Maps (TS-SOM) by incorporating the essence of DHT is
presented. Compared with the existing approaches, this
method has the following advantages: many operations can
be performed on the 2-D lattice instead of the high-dimensional feature space, which significantly saves the
computing resource; the TS-SOM-like hierarchical search
speeds up the information lookup and facilitates the use of
large Self-Organizing Maps; a query can involve more than
one
feature simultaneously. The current design is
considered for the tree-structured index with fixed levels and
fixed size at each level. It is not feasible for some huge other
network communities during their evolving stages. Many
details, for instance the image normalization and the system
strategy have not been considered due to the limit of time and
space [2]. In another paper, an extraction of signatures, to
determine a comparison rule, including a querying scheme
and definition of a similarity measure between images is
proposed. Several querying schemes are available: region-based searching, where the retrieval is based on a particular region in the image, or searching by specifying the color histogram or object shape of the images; significant semantic information is lost. Principal component analysis is
used to represent and retrieve images on the basis of content
which reduces the dimensionality of the search to a basis set
of prototype images that best describes the images. Each
image is described by its projection on the basis set; a match
to a query image is determined by comparing its projection
vector on the basis set with that of the images in the database,
the detection performance is transform variant [3]. Here,
images are normalized to line up the eyes, mouths and other
features. The eigenvectors of the covariance matrix of the
face image vectors are then extracted. These eigenvectors are
called eigenfaces. Raw features are the pixel intensity values A_i. Each image has n × m features, where n, m are the dimensions of the image. Each image is encoded as a vector G of these features. The mean face in the training set is computed, and faces are then detected [4]. A technique for automatic face recognition
based on 2D Discrete Cosine Transform (2D-DCT) together
with Principal Component Analysis (PCA) is suggested.
Applying the DCT to an input sequence decomposes it into a
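The eigenface encoding described in [4] (mean face, projection onto the eigenface basis, nearest-projection matching) can be sketched as follows; the function names are ours, and computing the eigenvectors themselves is omitted:

```python
def mean_face(faces):
    """Mean of the training face vectors (each face flattened to n*m values)."""
    k, n = len(faces), len(faces[0])
    return [sum(f[i] for f in faces) / k for i in range(n)]

def project(face, mean, eigenfaces):
    """Projection vector of a mean-centred face onto the eigenface basis."""
    centred = [x - m for x, m in zip(face, mean)]
    return [sum(c * e for c, e in zip(centred, ef)) for ef in eigenfaces]

def match(query_proj, db_projs):
    """Index of the database image whose projection vector is nearest."""
    dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    return min(range(len(db_projs)), key=lambda i: dist(query_proj, db_projs[i]))
```

A query is matched by projecting it the same way and taking the nearest database projection.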
4. FEATURE EXTRACTION
The co-occurrence histograms are constructed across different wavelet coefficients of an image and its complement, decomposed up to 1 level. All combinations of the image quadrants of the wavelet coefficients of the image and its complement are considered to compute the correlation between the various coefficients. From the normalized cumulative histogram, color and texture features are extracted employing equations (5), (6) and (7):

Mean: nch = (1/256) Σ_{i=1}^{256} nch_i ----------- (5)

Mean absolute deviation: (1/256) Σ_{i=1}^{256} |nch_i − nch| ----------- (6)

----------- (7)
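The two legible features reduce to the mean and mean absolute deviation of the normalised-cumulative-histogram bins; a small sketch (the function name is ours, and the third feature of eq. (7) is not recoverable, so it is omitted):

```python
def histogram_features(nch):
    """Mean (eq. 5) and mean absolute deviation (eq. 6) of the
    normalised cumulative histogram bins (256 bins in the paper)."""
    n = len(nch)
    mean = sum(nch) / n
    mad = sum(abs(x - mean) for x in nch) / n
    return mean, mad
```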
[5] Application of DCT Blocks with Principal Component Analysis for Face Recognition, Maha Sharkas, Proceedings of the 5th WSEAS Int. Conf. on Signal, Speech and Image Processing, Corfu, Greece, August 17-19, 2005 (pp. 107-111).
[6] Face Recognition Using the Discrete Cosine Transform, Ziad M. Hafed and Martin D. Levine, International Journal of Computer Vision 43(3), 167-188, 2001, Kluwer Academic Publishers, Netherlands.
[7] Image Retrieval using Shape Feature, S. Arivazhagan, L. Ganesan, S. Selvanidhyananthan, International Journal of Imaging Science and Engineering (IJISE), ISSN: 1934-9955, Vol. 1, No. 3, July 2007.
[8] Appearance Based 3D Object Recognition Using IPCA-ICA, V.K. Ananthashayana and Asha V., International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXVII, Part B1, Beijing 2008.
[9] Statistical Appearance Models for Automatic Pose Invariant Face Recognition, M. Saquib Sarfraz and Olaf Hellwich, 978-1-4244-2154-1/08, ©2008 IEEE.
[10] Localized Content Based Image Retrieval, Rouhollah Rahmani, Sally A. Goldman, Hui Zhang, Sharath R. Cholleti, and Jason E. Fritts, IEEE Transactions on Pattern Analysis and Machine Intelligence, Special Issue, November 2008.
OUTPUT FOR EYE IMAGES
6. CONCLUSION
A novel method for indexing image databases employing the wavelet features obtained from the coefficients of an image has been presented. The experimental results obtained are good in the YCbCr color space and provide better retrieval results. The Haar wavelets are chosen since they are more effective in texture representation compared to other wavelets. Tables 5.1 and 5.2 demonstrate the robustness of the proposed feature set for indexing an image database. The experimental results demonstrate the worth of the proposed method for indexing image databases, and the results are encouraging. It can be enhanced in future into an intelligent system by employing genetic algorithms or neural networks.
7. REFERENCES
[1] Similarity-Based Image Browsing, Chaomei Chen, George Gagaudakis, Paul Rosin, Brunel University, Cardiff University, U.K., funded by the British research council EPSRC (GR/L61088 and GR/L94628).
[2]
Table 5.1: Precision and recall rates.

Database | Precision rate % | Recall rate %
CBIR     | 90.2             | 92
FGNET    | 85.3             | 89
IRDS     | 85               | 90

Table 5.2: Success and failure rates.

Database | Success rate in % | Failure rate in %
CBIR     | 90.2              | 9.8
FGNET    | 85.3              | 14.7
IRDS     | 85                | 15
[4] Content-Based Image Retrieval, http://www.cs.cmu.edu/afs/cs/academic/class/15385-06/lectures/.
Hanumathappa.J [3]
Research scholar
Mangalore University
Mangalore University
University of Mysore
Mangalore, India
vbjj@rediffmail.com
Mangalore, India
ylm321@yahoo.co.in
Mysore, India
hanums_j@yahoo.com
Abstract
Web Services can be published, discovered and invoked over the web. Web Services can be implemented in any available technology, but they are accessible through a standard protocol. With web services being accepted and deployed in both research and industrial areas, the security-related issues become important. In this paper, an architecture for web services that negotiates a mutually acceptable security policy based on WSDL (Web Service Description Language) is evaluated. It allows a service consumer to discover and retrieve a service provider's security policy for service requests, and allows a service consumer to send its own security policy for service responses to the service provider. The service consumer combines its own policy for service requests with that of the service provider to obtain the applied security policy for requests, which specifies the set of security operations that the consumer must perform on the request. The combining takes place in such a way that the applied security policy is consistent with both the consumer's and provider's security policies. The service provider also combines its own policy for responses with that of the consumer, to obtain the applied security policy for responses [1].
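As an illustration only: if each policy is reduced to a set of required security operations, the applied policy consistent with both sides is their intersection (the paper's WSDL-based negotiation is richer than this sketch, and the operation names below are hypothetical):

```python
def applied_policy(consumer_ops, provider_ops):
    """Applied security policy: operations acceptable to BOTH
    the consumer and the provider (illustrative assumption)."""
    return set(consumer_ops) & set(provider_ops)

# hypothetical operation names
print(applied_policy({"sign", "encrypt", "timestamp"}, {"sign", "encrypt"}))
```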
Nayak.Ramesh.Sunder [4]
A.P., ShreeDevi Institute of Technology, Kenjar, Mangalore, India
ramesh.nayak.spi@gmail.com
1. Introduction

2. System Overview
3. Process Model
The model works with the exception that the containers
hosting the consumer and provider classes emit a SOAP
message, which is intercepted by the security service. The
consumer and provider classes could provide the <Security
Mechanisms> and <Security Services> elements to their
security services, in a WSS header, with the security service
module identified as the target role. Alternatively, the security
service could obtain the <Security Mechanisms> and
<Security Services> elements directly on its own. A WSDL binding supports the publication of the security policy in the case that a provider offers a secured interface. Specifically,
elements called <Security Mechanisms> and <Security
Services> are associated with message definitions in the
service's WSDL instance. In addition, we specify a WSS header for conveying the consumer's policy for service
responses using the same element definitions. The <Security
Mechanisms> element describes a set of security mechanisms,
which may be applied to one or more nodes of the SOAP
document. Additionally, parameters of a security mechanism
may be specified in the element [1].
Table: Quality attribute drivers.

Quality Attribute Driver | Rationale
Security | It is a major concern for this area of the architecture because it should support authentication, encryption, integrity and non-repudiation over different communication channels and platform models.
Interoperability | The registry must be able to interact with the provider servers to be able to operate.
Availability | The service needs to run at any time, even when a system failure occurs over the registry or service provider.
Performance | Continuous user requests will affect the system response; we will establish the user connection based on the token request.
Integrity, Authentication, Non-repudiation

Table: Architectural decision.

Decision | Rationale | Trade-offs
Layering | It organizes the system in a hierarchical structure that allows for easy system modification. | Security: potentially reduced risk
Scenario#: 1 (Attribute(s), Environment, Stimulus, Arch decision, Sensitivity, Tradeoff, Risk)

Quality Attribute | Attribute Refinement | Scenarios | (I, D)
Performance | Response Time | Overheads of trapping user events must be imperceptible to users. | (M, L)
Performance | Throughput | At peak load the service should be able to perform transactions within a limited time. | (H, M)
Security | Confidentiality | Users' information shall only be visible to administrative users of the system, and it is encrypted before being transmitted to the server. | (H, M)
Security | Confidentiality | The system resists unauthorized intrusion and modification of data. | (H, M)
Security | Confidentiality | This enables the user to access the service with the required token. | (M, M)
Security | Confidentiality | It verifies the signed information from a valid user. |

Non risk
Scenario#: 2 (H, L)
Arch decision: It supports all data format conversion between different platforms.
Sensitivity: A different data format is needed for each platform.
Risk: New data formats cannot be supported.
Non risk: Does not apply here.

Scenario#: 3 (Confidentiality)
Attribute(s): Security
Environment: Normal operations
Stimulus: The certificate authority has to provide a security token to authenticate.
Response: An intermediary has no way to read the message while establishing the connection with the service provider.
Arch decision: The encryption algorithm.
Tradeoff: More computation time and resources are used; performance is the tradeoff with security.
Risk: Does not apply to the architecture, but the encryption algorithm itself, if it is not complex enough, could be hacked by brute force.
Non risk: Does not apply here.

Scenario#: 4 (Integrity)
Attribute(s): Security
Environment: Normal operations
Stimulus: An unauthorized user without a security token is not able to access the services available in the registry.
Response: Identity certificates are required to verify the user's authentication.
Arch decision: Identity certificate.
Sensitivity: Resources are needed to map data.
Tradeoff: Performance, but not too much.
Risk: Certificates must be provided to users in a more secret manner.
Non risk: Does not apply here.

Scenario#: 5 (Authorization)
Attribute(s): Security
Environment: Normal operations
Stimulus: The service ticket has a way to establish a trust relationship with more than one security domain.
Response: Utility certificates are required to

Scenario#: 6
Attribute(s)
Environment
Stimulus
Response
Arch decision
Sensitivity
Tradeoff
Risk
Non risk
Risk themes:
A heavy load on the service provider's web server can make the return of results to the consumer unavailable. This leads to server unavailability.
The certificate verifier should verify the security token before the service available in the registry is accessed. During modification of the web service by the service provider, an add-on failure may occur, and the required service may be unavailable to the consumer for a particular time.
Inconsistencies:
Security has no way to identify the web security policies in an efficient manner.
The service provider has to provide a secure interface for adding new security features to the existing security policy used for the applied security policy; otherwise the consumer has no way to add their own security requirements.
It is risky to transfer the service ticket to the consumer for accessing the service available in the registry; it may become available to others without proper criteria.
5. Conclusions
Universal Description, Discovery and Integration (UDDI) has no way to identify secure web services when multiple service providers provide similar functional services. With an increasing number of web services providing similar functionalities, security has become one of the most critical issues. An evaluated architecture called agent-based web service discovery automates secure web service discovery by negotiating a mutually acceptable security policy, based on WSDL, for both consumer and provider in a dynamic manner.
6. References
[13] Zahid Ahmed, Martijn de Boer, Monica Martin, Prateek
V.Sureshkumar
Professor,
Dept. of CSE
Velammal Engg. College,
Chennai.
Tamilnadu India.
lakshmikumarvs@gmail.com
P.Alli
Professor & Head
Dept. of CSE
Velammal College of Engg
& Technology, Madurai.
Tamilnadu India.
alli_rajus@yahoo.com
ABSTRACT
Rough Set theory is a new mathematical tool to deal with representation, learning, vagueness, uncertainty and generalization of knowledge. It has been used in machine learning, knowledge discovery, decision support systems and pattern recognition. It is essential for professional colleges to improve their competitive advantage and increase placement performance. Students are a key factor in an institution's success. In this paper, we use the ROSE system to implement rule generation for students' placements.
Keywords: Data Mining, Decision Support, Human
resource development, Knowledge Discovery, Rough set
theory.
1. INTRODUCTION
Many researchers provide important methods for human resource management. Since the 1980s, people have been product-oriented; most employees just need skills to produce, which is the idea of job-based human resource management. In 1973, the evaluation idea and technology took "work as the center". Today students do not need to pursue only their studies but also to undergo soft skills training. The objective of this research is to find a compromising solution that satisfies both the institution and the students, and to find out what kinds of features and behaviors characterize students who can build good relationships with an institution. The results of this research will be used to guide institutions, as they are useful in providing a good strategy for human resource development and customer relationship management.
This paper is organized as follows: Section 2 reviews basic concepts of rough sets; the mathematical model employed here is briefly illustrated. Section 3 explains the problem tackled in this paper. Its results are discussed in Section 4. Finally, Section 5 gives several concluding remarks.
2. INTRODUCTION TO ROUGH SET
Example:
Table 1: Sample Data Set

Objects | a1 English Medium | a2 Financial status | a3 Soft Skills | Decision D1 Placement
1       | Yes               | Low                 | Medium         | High
2       | No                | Low                 | Low            | Low
3       | Yes               | Medium              | High           | Medium
4       | Yes               | High                | Low            | Low
5       | No                | Medium              | Low            | Low
6       | Yes               | Medium              | Medium         | Medium
7       | Yes               | Low                 | High           | High
8       | Yes               | Medium              | High           | High
9       | Yes               | Medium              | Low            | Medium
10      | No                | High                | Medium         | Low
P̲X = { x_i ∈ U | [x_i]_ind(P) ⊆ X }        (lower approximation)
P̄X = { x_i ∈ U | [x_i]_ind(P) ∩ X ≠ ∅ }    (upper approximation)

Equivalently, B̲X = { x ∈ U | K_B(x) ⊆ X } and B̄X = { x ∈ U | K_B(x) ∩ X ≠ ∅ }, where [x_i]_ind(P) is the elementary set (equivalence class) containing x_i, i = 1, 2, ..., n. The ordered pair (P̲X, P̄X) is called a rough set.
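The lower and upper approximations follow directly from the equivalence classes of the indiscernibility relation; a minimal sketch (the function names are ours):

```python
def equivalence_classes(universe, key):
    """Partition the universe by the indiscernibility relation induced by key."""
    classes = {}
    for x in universe:
        classes.setdefault(key(x), set()).add(x)
    return list(classes.values())

def approximations(universe, key, target):
    """Rough-set lower and upper approximations of a target set."""
    lower, upper = set(), set()
    for cls in equivalence_classes(universe, key):
        if cls <= target:        # elementary set wholly inside X
            lower |= cls
        if cls & target:         # elementary set meets X
            upper |= cls
    return lower, upper
```

If the lower and upper approximations coincide, the set is exact; otherwise it is rough.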
RED(P) ⊆ A
CORE(P) = ∩ RED(P)
Decision Rules:
The information system in RST is generally expressed in terms of a decision table. Data in the table are obtained through observation and measurement. Its rows represent objects, and its columns represent attributes divided into two sets, i.e., a condition attribute set and a decision attribute set. RST reduces the decision table and then produces minimum decision rule sets. The general form of RST rules can be expressed by

RULE: IF C THEN D

where C represents conditional attributes and their values, and D represents decision attributes and their values. Through the decision rules we can minimize the set of attributes, reduce the superfluous attributes and group elements into different groups. In this way we can have many decision rules, each rule having meaningful features. A stronger rule will cover more objects, and the strength of each decision rule can be calculated in order to decide the appropriate rules.
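One common way to score a rule's strength is the fraction of objects matched by both its condition and its decision; the exact formula is not given in the text, so the definition below is an assumption, and the toy rows are illustrative:

```python
def rule_strength(table, condition, decision):
    """Strength of IF C THEN D: matching objects / total objects.
    (Definition assumed; the paper does not give the formula.)"""
    matches = sum(1 for row in table if condition(row) and decision(row))
    return matches / len(table)

# toy rows modelled on Table 1: soft skills vs. placement
rows = [{"soft": "High", "placed": "High"},
        {"soft": "Low",  "placed": "Low"},
        {"soft": "High", "placed": "High"},
        {"soft": "High", "placed": "Low"}]
print(rule_strength(rows, lambda r: r["soft"] == "High",
                    lambda r: r["placed"] == "High"))  # 0.5
```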
Class number | Number of objects | Lower approx. | Upper approx. | Accuracy
1            | 3                 | 3             | 3             | 1.0000
2            | 7                 | 7             | 7             | 1.0000
Said.kavula@student.dit.ie, 2 fredrick.mtenzi@dit.ie, 3 ronan.fitzpatrick@comp.dit.ie, 4 brendan.oshea@comp.dit.ie
Abstract: The application of computer-based and communication technologies in the healthcare domain has resulted in a proliferation of systems throughout healthcare. These systems, whether in isolation or integrated, are generally referred to as e-healthcare information systems.
Protection of healthcare information has become an increasingly complex issue with the widespread use of advanced technologies, the complexity of the healthcare domain and the need to improve healthcare services. The huge amount of information generated in the process of care must be securely processed, stored and disseminated to ensure a balanced preservation of personal privacy versus its utility for the overall improvement of healthcare.
General IS security protection approaches are available to address a wide range of security concerns. However, the healthcare domain has unique security concerns that have to be uniquely addressed. Understanding these specific security and privacy challenges shall enable the selection, development, deployment and assessment of IS security approaches relevant to the healthcare domain. One of the primary tasks should be identifying the unique security concerns in the domain.
This paper discusses, among others, the unique security and privacy concerns in the healthcare domain and proposes metrics that address the challenges presented. The main contribution of the paper is the demonstration of the development of security metrics for the healthcare domain.
Keywords: security metrics, security measurement, e-health security, security assessment, e-healthcare security.
I. INTRODUCTION
D. Requirements Engineering
The fourth category is concerned with requirements elicitation. Blobel [21] presented a framework to facilitate security analysis and design for health information systems. Tools such as those described in the unified modelling language (UML), object orientation and component architecture are defined to suit the healthcare environment. The framework also utilised a generic security model and a layered security scheme to address the challenges posed by security-enhanced health information systems. Cysneiros [22] presents requirements elicitation techniques that are useful for complying with the constraints imposed by several peculiarities intrinsic to this domain. The techniques discussed aim to assist in selecting a proper requirements elicitation approach. Matousek [23] introduces the basic security and reliability considerations for developing distributed healthcare systems.
Based on the above review, there is a large body of research specialising in the healthcare domain that attempts to address various concerns. This indicates that the domain has unique challenges. However, what those unique requirements are, beyond the sensitivity of the information, is not clearly articulated in much of the literature reviewed. This paper includes literature [14]-[26] that in one way or another pointed out and elaborated the nature of that sensitivity and how to address the challenges. In the following section, a summary and discussion of these challenges are presented.
A. Information Needs
In the first category, information needs are considered. It is well known that the primary function of healthcare services is the provision of care to patients and support of public health. Patient care requires capturing detailed information about the patient for identification, and longitudinal and cross-sectional information for supporting evidence-based care delivery [24]. Addressing this demand becomes a significant challenge in the course of balancing protection versus accessibility of information. This can lead to information aggregation risks, a situation where more people have access to many records [14], [15].
B. Nature of Healthcare Service Delivery
The second category is the nature of healthcare service delivery. The following are the challenges and associated risks for this category.
The nature of the work requires collaboration among multi-occupational communities (e.g., physicians, nurses, technicians and administrative staff) [25]. In some cases, the duty of care is carried out in teams (for example, in operating theatres). The work is often non-routine, so it is difficult to pre-schedule events and activities [25].
[7] A. C. Michael, "Who is liable for bugs and security flaws in software?" Commun. ACM, vol. 47, pp. 25-27, 2004.
[8] D. Rice, Geekonomics: The Real Cost of Insecure Software. Addison-Wesley Professional, 2008.
[9] M. N. Wybourne, M. F. Austin, and C. C. Palmer, "National cybersecurity research and development challenges related to economics, physical infrastructure and human behavior," Institute for Information Infrastructure Protection (I3P), 2009.
[10] B. Schneier, Secrets & Lies, C. Long, Ed. Wiley Publishing, Inc., 2004, pp. 255-269.
[11] B. von Solms, "Information security - a multidimensional discipline," Computers & Security, vol. 20, pp. 504-508, 2001.
[12] PwC, "Information security breaches survey 2006," PricewaterhouseCoopers, 2006.
[13] BERR, "2008 information security breaches survey," Department for Business, Enterprise & Regulatory Reform, 2008.
[14] R. J. Anderson, "A security policy model for clinical information systems," presented at IEEE Symposium on Security and Privacy, Oakland, CA, 1996.
[15] R. J. Anderson, "Patient confidentiality - at risk from NHS-wide networking," Current Perspectives in Healthcare Computing, pp. 687-692, 1996.
[16] R. Agrawal, A. Kini, K. LeFevre, A. Wang, Y. Xu, and D. Zhou, "Managing healthcare data hippocratically," presented at Intl. Conf. on Management of Data, 2004.
[17] R. Agrawal and C. Johnson, "Securing electronic health records without impeding the flow of information," International Journal of Medical Informatics, vol. 76, pp. 471-479, 2007.
[18] B. Blobel, "Authorisation and access control for electronic health record systems," International Journal of Medical Informatics, vol. 73, pp. 251-257, 2004.
[19] D. Gritzalis and C. Lambrinoudakis, "A security architecture for interconnecting health information systems," International Journal of Medical Informatics, vol. 73, pp. 305-309, 2004.
[20] L. Røstad, "Access Control in Healthcare Information Systems," PhD thesis, Computer and Information Science, Norwegian University of Science and Technology, Trondheim, 2008, p. 161.
[21] B. Blobel, "Security requirements and solutions in distributed electronic health records," Computers and Security, vol. 16, pp. 208-209, 1997.
[22] L. M. Cysneiros, "Requirements engineering in the health care domain," presented at Requirements Eng. (RE 02), 2002.
[23] K. Matousek, "Security and reliability considerations for distributed healthcare systems," presented at 42nd Annual 2008 IEEE International Carnahan Conference on Security
Abstract
Face recognition is an emerging field of research with many challenges, such as physical appearance (faces do not share the same physical features), face acquisition geometry (how the images are obtained and in what environment the face is imaged), illumination, pose, occlusion and so on. The main objective of this paper is to recognize a given face image from the database using wavelets and neural networks. The DWT is employed to extract the input features used to build the face recognition system, and a neural network is used to identify the faces.
In this paper, discrete wavelet transforms are used to reduce image information redundancy, because only a subset of the transform coefficients is necessary to preserve the most important facial features, such as the hair outline, eyes and mouth. We demonstrate experimentally that when DWT coefficients are fed into a feed-forward neural network for classification, a high recognition rate can be achieved using a very small proportion of the transform coefficients. This makes DWT-based face recognition much faster than other approaches.
Key words: face recognition, neural networks, feature extraction, discrete wavelet transform
Introduction
Face recognition approaches for still images can be broadly grouped into geometric and template-matching techniques. In the first case, geometric characteristics of the faces to be matched, such as distances between different facial features, are compared. This technique provides limited results, although it has been used extensively in the past. In the second case, face images represented as two-dimensional arrays of pixel intensity values are compared with a single template or several templates representing the whole face. More successful template-matching approaches use Principal Components Analysis (PCA) or Linear Discriminant Analysis (LDA) to perform dimensionality reduction, achieving good performance at reasonable computational complexity. Other template-matching methods use neural network classification and deformable templates, such as Elastic Graph Matching (EGM). Recently, a set of approaches that use different techniques to correct perspective distortion has been proposed.
Proposed method
Wavelet Transform
In this paper, we propose to modify the DCT approach using wavelets. The DCT has several drawbacks: its computation takes a long time and grows rapidly with signal size. In this work we use Haar wavelets, the simplest wavelet basis, and employ the non-standard decomposition, which alternates between row and column processing and allows more efficient computation of the coefficients. The proposed algorithm uses these coefficients as inputs for the neural network to be trained.
The DWT has a distinct advantage: it can, in essence, be computed by applying a set of digital filters, which can be done quickly. This allows us to apply the DWT to entire signals without taking a significant performance hit. By analyzing the entire signal, the DWT captures more information than the DCT and can produce better results.
The image is separated into four sub-images. The bottom-left, bottom-right and top-right quadrants show the high-frequency detail of the image. The top-left quadrant contains the low-frequency, lower-detail portion of the image; most of the information is in this portion.
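One level of this Haar decomposition can be sketched as follows; this is a minimal illustration using averaging/differencing on rows and then columns, not the paper's implementation:

```python
import numpy as np

# One level of the (non-standard) Haar decomposition sketched for
# illustration: averaging/differencing on rows, then on columns.
def haar_level(img):
    img = img.astype(float)
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0   # row low-pass (averages)
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0   # row high-pass (differences)
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0     # approximation (top-left quadrant)
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0     # horizontal detail
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0     # vertical detail
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0     # diagonal detail
    return ll, lh, hl, hh

img = np.arange(16).reshape(4, 4)
ll, lh, hl, hh = haar_level(img)
print(ll.shape)  # (2, 2): each sub-image is a quarter of the original
```

The `ll` sub-image corresponds to the low-frequency top-left quadrant that carries most of the information.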
Neural Networks
A neural network is used to significantly reduce the computation time and the false recognition rate of the system. The architecture of the RBF network consists of three layers: input, hidden and output, as shown in Fig. 2.2.
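A forward pass through such a three-layer RBF architecture can be sketched as below; the centers, widths and weights are untrained, illustrative placeholders:

```python
import numpy as np

# Forward pass of a three-layer RBF network (input -> Gaussian hidden ->
# linear output); centers, widths and weights are untrained placeholders.
def rbf_forward(x, centers, widths, weights):
    d2 = ((x[None, :] - centers) ** 2).sum(axis=1)   # squared distances to centers
    h = np.exp(-d2 / (2.0 * widths ** 2))            # Gaussian hidden activations
    return h @ weights                               # linear output layer

centers = np.array([[0.0, 0.0], [1.0, 1.0]])
widths = np.array([1.0, 1.0])
weights = np.array([[1.0], [1.0]])
y = rbf_forward(np.array([0.0, 0.0]), centers, widths, weights)
print(y)  # equals 1 + exp(-1) for this toy configuration
```

In a face recognition setting, the inputs would be the DWT coefficients and the output layer would score the identity classes.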
[Fig. 2.2: RBF network architecture - input layer, hidden layer (45 units) and output layer.]
[System flow: load database, load query image, wavelet decomposition, feature extraction, train the network, face detected.]
Results
The following shows the input image loaded for detection.
Conclusion
slv_nayaki@yahoo.com
bhuvanesh_v@yahoo.com
Abstract: Bioinformatics is the science of storing, extracting, organizing, analyzing, interpreting, and utilizing information from biological sequences and molecules. Microarrays consist of large numbers of molecules (often, but not always, DNA) distributed in rows in a very small space. Clustering methods provide a useful technique for exploratory analysis of microarray data since they group genes with similar functionalities based on GO Ontology. In this paper, Gene Ontology is used to provide external validation for the clusters to determine if the genes in a cluster belong to a specific Biological Process, Cellular Component and Molecular Function. A functionally meaningful cluster contains many genes that are annotated to specific GO terms. The paper aims to cluster microarray data of yeast based on the functionalities of genes according to the GO ontology, to find meaningful associations of genes from the dataset.
Keywords: Bioinformatics, Data Mining, Microarray, GO Ontology, Clustering.
I. INTRODUCTION
Data mining is the non-trivial process of identifying valid, novel, potentially useful, and ultimately understandable patterns in data [3]. Data mining is part of a larger process known as knowledge discovery in databases (KDD). It is the process of discovering meaningful new correlations, patterns and trends by sifting through large amounts of data stored in repositories, using pattern recognition techniques as well as statistical and mathematical techniques. Bioinformatics is the application of computer technology to the management of biological information. Computers are used to gather, store, analyze and integrate biological and genetic information, which can then be applied to gene-based drug discovery and development. The need for bioinformatics capabilities has been precipitated by the explosion of publicly available genomic information resulting from the Human Genome Project [4].
Fig 1.
Framework to infer association of genes
Dataset sizes (number of genes): 37794; 6400; 6413; 3677.
Genes annotated per GO category - Biological Process: 1079; Cellular Component: 1445; Molecular Function: 1153.
V. CONCLUSION
Microarray data are grouped by similarity of gene functionality based on GO ontology, implemented using the K-means data mining clustering algorithm. In the proposed work, the microarray data were extracted by functionality for Biological Process and Molecular Function, and the resulting clusters were implemented and tested for meaningful associations. The proposed work helped to find associations of genes using the clustering algorithm based on GO Ontology, and from the study we conclude that clustering based on the functionality of genes yields more meaningful associations than clustering the dataset directly.
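The clustering step described here can be sketched with a basic K-means over binary GO-annotation vectors; the genes, GO terms and annotation matrix below are purely illustrative:

```python
import numpy as np

# Minimal K-means sketch over binary GO-annotation vectors; the genes,
# GO terms and annotation matrix below are purely illustrative.
def kmeans(X, k, iters=20):
    X = X.astype(float)
    # Deterministic farthest-point initialization of the k centers.
    centers = [X[0]]
    while len(centers) < k:
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.array(centers)
    for _ in range(iters):
        # Assign each gene to its nearest center, then recompute centers.
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Rows: genes; columns: GO terms (1 = gene annotated with that term).
X = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]])
labels = kmeans(X, 2)
print(labels)  # genes sharing annotations fall into the same cluster
```

Genes annotated with the same GO terms end up in the same cluster, which is the sense in which the clusters carry functional meaning.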
REFERENCES
[1] A. Ben-Dor, R. Shamir, and Z. Yakhini, "Clustering gene expression patterns," J. Comput. Biol., 6(3-4):281-297.
[2] A. Gruzdz, A. Ihnatowicz, J. Siddiqi, and B. Akhgar, "Mining genes relations in microarray data combined with ontology in colon cancer automated diagnosis system," vol. 16, Nov. 2006, ISSN 1307-6884.
[3] Arun K. Pujari, Data Mining Techniques, Universities Press (India) Limited, 2001, ISBN 81-7371-380-4.
[4] Bradley Coe and Christine Antler, "Spot your genes - an overview of the microarray," August 2004.
[5] E. S. Cheremushkin, "The modified fuzzy C-means method for clustering of microarray data."
[6] Daxin Jiang, Chun Tang, and Aidong Zhang, "Cluster analysis for gene expression data: a survey," IEEE Transactions on Knowledge and Data Engineering, vol. 16, no. 11, November 2004.
[7] D. Jiang, C. Tang, and A. Zhang, "Cluster analysis for gene expression data: a survey," IEEE, vol. 16, no. 11, November 2004.
Kgbabu73@yahoo.com, preee@tce.edu
I. INTRODUCTION
A Wireless Sensor Network (WSN) comprises several autonomous sensor nodes communicating with each other to perform a common task. A wireless sensor node consists of a processor, a sensor and a communication module, powered by a battery. Power efficiency is a major issue in WSNs, because efficient use of energy extends the network lifetime. Energy is consumed by the sensor node during sensing, processing and transmission, but almost 80% of the energy is spent in the communication module for data transmission [1]. Sensor networks have a wide range of applications in temperature monitoring, surveillance, biomedical sensing and precision agriculture. Failure of a sensor node causes a partition of the WSN, resulting in critical information loss. Hence there is great interest among researchers in extending the lifetime of sensor nodes by reducing the energy required for transmission, and several algorithms for energy-efficient wireless sensor networks have been proposed in the literature. The spatio-temporal correlations among sensor observations are a significant and unique characteristic of the WSN that can be exploited to drastically increase overall network performance. The existence of this correlation in sensor data is exploited in the development of energy-efficient communication protocols well suited to WSNs. Recently there is a major interest in
[Figs. 3-4: number of bits transmitted and compression ratio for the original data and the static Huffman, adaptive Huffman, NYT and modified adaptive Huffman schemes.]
which contains this data is inserted. When the next data, 0.1, arrives, the tree is traversed in search of node 1, which contains the data 0.1 in the binary tree. Since the node is not available, the code corresponding to the NYT node, i.e., 0, is transmitted, followed by 1 corresponding to the data 0.1, so the code transmitted is 01. When the next data, 0.3, arrives, the tree is traversed in search of node 2 containing the data 0.3. Since node 2 is already available in the tree, the prefix corresponding to the node traversal, 1, is transmitted, and the binary representation of the array index containing the data is transmitted as the suffix, i.e., 11 corresponding to 3, the array index containing the data 0.3. So the code transmitted is 111. Initially the algorithm starts with subsets, each having a different probability, but it does not require transmitting the tree prior to decompression.
In the proposed algorithm, once we reach a leaf while decoding, we know how many bits are still to be read, which allows us to use a single operation to get a bit string that is readily converted to an integer.
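As a rough sketch of this encoding idea (not the paper's exact tree-based scheme: here new symbols are indexed in order of first appearance and indices use a fixed width, so the emitted codes differ from the worked example above):

```python
# Simplified sketch of the NYT-style coding idea described above; here new
# symbols are indexed in order of first appearance and indices use a fixed
# width, so the emitted codes differ from the paper's worked example.
def encode(stream, index_bits=2):
    seen = []   # array of symbols, in order of first appearance
    out = []
    for s in stream:
        if s in seen:
            # Prefix '1' (symbol already in the tree) + binary array index.
            out.append("1" + format(seen.index(s), f"0{index_bits}b"))
        else:
            # Prefix '0' (NYT: not yet transmitted) + index of the new entry.
            seen.append(s)
            out.append("0" + format(len(seen) - 1, f"0{index_bits}b"))
    return "".join(out)

print(encode([0.1, 0.3, 0.1]))  # 000, 001, 100 concatenated
```

Because the decoder builds the same symbol array as it reads, no code tree needs to be transmitted ahead of the data.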
Compression ratio
The compression ratio is calculated as defined in [5]. The formula used for compression ratio analysis is:
Compression ratio = 100 * (1 - compressed size / original size)
Compressed size is the number of bits obtained after compression, and original size is the total number of bits required without using a compression algorithm.
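For instance, the ratio can be computed directly from the two bit counts:

```python
def compression_ratio(compressed_bits, original_bits):
    """Percentage of bits saved: 100 * (1 - compressed/original)."""
    return 100.0 * (1.0 - compressed_bits / original_bits)

print(compression_ratio(400, 1000))  # 60.0: 60% of the bits were saved
```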
The performance of the modified adaptive Huffman, static Huffman and adaptive Huffman compression algorithms was analyzed in terms of the number of bits required for transmission (Fig. 3) and the compression ratio of each algorithm (Fig. 4).
IV. CONCLUSION
[2] J. Chou and K. Ramachandran, "A distributed and adaptive signal processing approach to reducing energy consumption in sensor networks," Proc. INFOCOM, 22nd Annual Joint Conference of the IEEE Computer and Communications Societies, Mar. 30-Apr. 3, 2003, IEEE Xplore Press, USA, pp. 1054-1062. http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=1208942
[3] C. Tang, C.S. Raghavendra and K.V. Prasanna, "An energy efficient adaptive distributed source coding scheme in wireless sensor networks," Proc. IEEE International Conference on Communications, May 11-15, 2003, IEEE Xplore Press, USA, pp. 732-737. DOI: 10.1109/ICC.2003.1204270
[4] Q. Ye, Y. Liu and L. Zhang, "An extended DISCUS scheme for distributed source coding in wireless sensor networks," Proc. International Conference on Wireless Communications, Networking and Mobile Computing, Sept. 22-24, 2006, IEEE Xplore Press, Wuhan, pp. 1-4. DOI: 10.1109/WiCOM.2006.286
[5] Francesco Marcelloni, "A simple algorithm for data compression in wireless sensor networks," IEEE Commun. Lett., 12: 411-413, 2008. DOI: 10.1109/LCOMM.2008.080300
[6] M.B. Lin, J.F. Lee and G.E. Jan, "A lossless data compression and decompression algorithm and its hardware architecture," IEEE Trans. Very Large Scale Integrat. Syst., 14: 925-936, 2006. DOI: 10.1109/TVLSI.2006.884045
[7] S.S. Pradhan, Julius Kusuma and K. Ramachandran, "Distributed compression for dense microsensor networks," IEEE Signal Proc. Mag., 19: 15-60, 2002. DOI: 10.1109/79.985684
[8] A. Cesare, Romolo Camplani and C. Galperti, "Lossless compression techniques in wireless sensor networks: monitoring microacoustic emissions," Proc. IEEE International Workshop on Robotic and Sensor Environments, Oct. 12-13, 2007, IEEE Xplore Press, Ottawa, Canada, pp. 1-5. DOI: 10.1109/ROSE.2007.4373963
[9] http://www.mass.gov/dep/water/resources/
sivadasankottayi@yahoo.com
csirc@rediffmail.com
[Fig. 1: duct ANC system - primary source, secondary-source loudspeaker driven by an electronic controller, and a monitor microphone.]
[Fig. 2: equivalent signal-flow diagram with the disturbance d summed at S.]
[Fig. 3: a feedforward adaptive ANC system - reference and error microphones, anti-aliasing filters and A/D converters, digital filter, and D/A converter with reconstruction filter driving the secondary source.]
[Fig. 4: adaptive system identification - unknown plant P(z), adaptive filter W(z), secondary path S(z), signals x(n), d(n), y(n), e(n), with the LMS algorithm updating W(z).]
Fig. 5 Block diagram of ANC system using the FXLMS
algorithm.
One way to solve this problem is to place an identical filter in the reference-signal path to the weight update of the LMS algorithm, as shown in Fig. 5; this realizes the filtered-X LMS (FXLMS) algorithm [7]. The FXLMS algorithm for active noise control was derived by Burgess [8] as
w(n+1) = w(n) + mu e(n) x'(n)
where x'(n) is the reference signal filtered by the secondary path. In ANC applications, S(z) is unknown and must be estimated by an additional filter S^(z). Therefore, the filtered reference signal is generated by passing the reference signal through this estimate of the secondary path:
x'(n) = s^(n) * x(n)
where s^(n) is the impulse response of S^(z) and * denotes convolution.
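A toy FXLMS simulation along these lines might look as follows; the primary and secondary path impulse responses are synthetic, and a perfect estimate of S(z) is assumed:

```python
import numpy as np

# Toy FXLMS simulation; the path impulse responses are synthetic and a
# perfect estimate of the secondary path S(z) is assumed.
rng = np.random.default_rng(1)
N, L = 4000, 16
x = rng.standard_normal(N)            # reference noise signal x(n)
p = np.array([0.0, 0.9, 0.5, 0.2])    # primary path P(z) (illustrative)
s = np.array([0.8, 0.3])              # secondary path S(z) (illustrative)

d = np.convolve(x, p)[:N]             # disturbance d(n) at the error mic
xf = np.convolve(x, s)[:N]            # filtered reference x'(n) = s^(n)*x(n)

w = np.zeros(L)                       # adaptive filter W(z)
mu = 0.01
xbuf, xfbuf = np.zeros(L), np.zeros(L)
ybuf = np.zeros(len(s))
e = np.zeros(N)
for n in range(N):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
    xfbuf = np.roll(xfbuf, 1); xfbuf[0] = xf[n]
    y = w @ xbuf                      # anti-noise y(n)
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    e[n] = d[n] - s @ ybuf            # residual error e(n)
    w += mu * e[n] * xfbuf            # FXLMS update: w(n+1) = w(n) + mu e(n) x'(n)

print(np.mean(e[:500]**2), np.mean(e[-500:]**2))  # residual power drops
```

After adaptation, the residual power at the error microphone is a small fraction of its initial value, which is the attenuation the algorithm is designed to deliver.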
[Table: noise attenuation (in dB) achieved by the LMS, NLMS, SDLMS, SELMS and SSLMS algorithms over ranges of step sizes; values lie approximately between -14.7 dB and -27.9 dB.]
[Figures: error-signal convergence plots (signal value versus signal samples, 100-1000) for the five algorithms.]
REFERENCES
[1] S.M. Kuo and D.R. Morgan, Active Noise Control Systems: Algorithms and DSP Implementations. New York: Wiley, 1996.
[2] S.J. Elliott and P.A. Nelson, "Active noise control," IEEE Signal Processing Magazine, October 1993.
[3] Colin H. Hansen, Understanding Active Noise Cancellation, Spon Press, London, 2001.
[4] L. L. Beranek and I. L. Ver, Noise and Vibration Control Engineering: Principles and Applications. New York: Wiley, 1992.
[5] H.F. Olson and E.G. May, "Electronic sound absorber," Journal of the Acoustical Society of America, 25, pp. 1130-1136, 1953.
[6] D.R. Morgan, "An analysis of multiple correlation cancellation loops with a filter in the auxiliary path," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-28, pp. 454-467, Aug. 1980.
[7] B. Widrow and S.D. Stearns, Adaptive Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1985.
[8] J.C. Burgess, "Active adaptive sound control in a duct: a computer simulation," Journal of the Acoustical Society of America, vol. 70, pp. 715-726, Sept. 1981.
[9] Stephen J. Elliott, "Down with noise," IEEE Spectrum, June 1999.
[10] Monson H. Hayes, Statistical Digital Signal Processing and Modeling. John Wiley & Sons, 1996.
ABSTRACT
Segmentation for texture analysis is widely used in pattern recognition, industrial automation, biomedical image processing and remote sensing. Segmentation based on combinations of morphological and statistical operations on wavelet-transformed images is preferred in this project. Features like shape, size, contrast or connectivity make mathematical morphology attractive and can be considered for segmentation-oriented features. In order to get better performance with the wavelet transform for haar, db1, db6, coif6 and sym8, the system proposes derived equations on dilation, erosion, mean and median, which result in segmentation. The process divides the wavelet combinatorial segmentation algorithm into 3 groups based on the type and number of operations. The method, using the wavelet transform, is applied to Brodatz textures and yields good segmentation. 500 Brodatz texture images are considered for the performance analysis, which compares the derived equations and filters in order to obtain segmented results. The segmented results are used in a comparative study with the Peak Signal to Noise Ratio (PSNR) technique.
Keywords: Discrete Wavelet transform, morphological and statistical operation, filter analysis.
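As a hedged sketch of the kind of pipeline the abstract describes (one-level Haar approximation, a morphological gradient from dilation and erosion, and a mean threshold; the paper's derived equations and parameters are not reproduced here):

```python
import numpy as np

# Hedged sketch of the abstract's pipeline: one-level Haar approximation,
# a morphological gradient (dilation minus erosion), and a mean threshold.
def dilate3(a):
    p = np.pad(a, 1, mode="edge")   # 3x3 grey dilation via sliding max
    return np.max([p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)], axis=0)

def erode3(a):
    p = np.pad(a, 1, mode="edge")   # 3x3 grey erosion via sliding min
    return np.min([p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)], axis=0)

def segment(img):
    img = img.astype(float)
    # One-level Haar approximation sub-band (2x2 block averages).
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4.0
    grad = dilate3(a) - erode3(a)   # morphological gradient marks region edges
    return grad > grad.mean()       # binary segmentation mask

img = np.zeros((16, 16)); img[4:12, 4:12] = 255.0   # toy two-region image
mask = segment(img)
print(mask.shape)  # (8, 8): mask lives on the approximation sub-band
```

The mask highlights boundaries between the two regions of the toy image; the paper's equations combine several such morphological and statistical operators.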
I.
INTRODUCTION
II. METHODOLOGY
[Fig.: system flow - input image, DWT (filter selection), morphological segmentation system. Derived equations (3.1)-(3.8).]
V. EXPERIMENTAL RESULTS AND DISCUSSION
[Figs.: PSNR value charts for Eq1-Eq8 on the approximation and detailed sub-bands.]
Fig. 5: Segmentation 2 result - PSNR value comparison of Haar with Db1, Db6, Sym8 and Coif6 using alpha=4, beta=3, gamma=1.
anitha_rvs@yahoo.co.in, preee@tce.edu
1. INTRODUCTION
Network-on-chip (NoC) has emerged as a dominant paradigm for the synthesis of multicore SoCs. NoCs are generally viewed as the ultimate solution for the design of modular and scalable communication architectures, and they provide inherent support for the integration of heterogeneous cores through the standardization of the network boundary. The network topology and the routing algorithm used in the underlying on-chip communication network are two important aspects. A highly adaptive routing algorithm has the potential to provide high performance, fault tolerance and uniform utilization of network resources. Generally, routing algorithms providing high adaptiveness guarantee freedom from deadlock by means of virtual channels (VCs). For arbitrary network topologies, a relatively large number of VCs may be required to provide deadlock freedom, high adaptivity and shortest paths. However, the use of VCs introduces overhead in terms of both additional resources and mechanisms for their management. To limit such overhead, some proposals try to minimize the number of VCs required to obtain a high degree of adaptivity. For these reasons, we believe that reducing as much as possible
[Fig. 1: APSRA design flow - application task graph (tasks T1, T2, ...), mapping onto the topology graph (IPs, e.g. P6, P7), APSRA routing-table generation, table compression, and router configuration/reconfiguration.]
[Fig. 2: routing-table compression logic - each region register stores a configuration cfg = (TL, BR, Color); the destination address dst is tested by InRegion logic against the region's top-left (TL) and bottom-right (BR) corners, and a hit drives an encoder that outputs the admissible outputs (ao).]
[Table: destinations and their admissible outputs, with entries such as North/East, East, South/East and South.]
[Fig. 3. (a): 3x3 node array (A-I) clustered into regions R1 and R2.]
Fig. 3. (b). Color based Clustering
[Table: destination regions and admissible outputs after clustering - entries include North/East and East; region R1: South, East; region R2: South.]
Simulation Results for East Port
4. CONCLUSION
In this paper, we have proposed a topology-agnostic methodology to develop efficient APSRA for NoCs. Our methodology not only uses information about the topology of communicating cores but also exploits information about the concurrency of communication transactions. Routing algorithms generated by the APSRA method have an even higher performance and adaptivity advantage over other deadlock-free routing algorithms. The APSRA methodology uses a heuristic to remove the minimum number of edges in the channel dependency graph to ensure deadlock-free routing. A straightforward table implementation may not be practically useful, since the table size grows with the number of nodes in the network. Therefore, we have also proposed a technique to compress tables such that the effect on the original routing algorithm is minimal.
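The table-compression idea can be illustrated by merging destinations that share the same set of admissible outputs into a single region entry; the destination names and port sets below are made up for the example:

```python
from collections import defaultdict

# Illustrative sketch of routing-table compression: destinations sharing
# the same admissible-output set are merged into one region entry. The
# destination names and port sets are invented for the example.
def compress(table):
    """table maps destination -> frozenset of admissible output ports."""
    regions = defaultdict(list)
    for dest, outputs in table.items():
        regions[outputs].append(dest)
    # One entry per distinct output set instead of one per destination.
    return {tuple(sorted(dests)): outputs for outputs, dests in regions.items()}

table = {
    "A": frozenset({"North", "East"}),
    "B": frozenset({"East"}),
    "C": frozenset({"South", "East"}),
    "D": frozenset({"South", "East"}),
    "E": frozenset({"South", "East"}),
}
compressed = compress(table)
print(len(table), "->", len(compressed))  # 5 -> 3 region entries
```

In hardware, each merged entry corresponds to one region register, so the table no longer grows linearly with the number of nodes.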
5. REFERENCES
ramesh116@hotmail.com
drvkannan62@yahoo.com
aarunagiri@yic.edu.sa
cpooranachandran@hotmail.com
Abstract
For a management consultant to successfully assist an organization in creating new actionable knowledge (knowledge that is used to create value), the consultant must be aware of a knowledge dimension called coalescent knowledge. The process for creating actionable knowledge in this dimension is a dialogue process. For example, product development is guided by several kinds of expert knowledge, including critical process relationships that are dynamic, derived from experience and often nonlinear. A new inductive learning intelligent technique, rough set theory, to be used along with coalescent knowledge, is described and proposed. The framework of classic management is discussed and modified using the rough set method.
Transition      | NK knowledge form  | Coalescent new knowledge form
Socialization   | Tacit to Tacit     | Tacit to Coalescent
Externalization | Tacit to Explicit  | Coalescent to Explicit
1.0 Introduction
For a management consultant to successfully assist an organization in creating new actionable knowledge (knowledge that is used to create value), the consultant must be aware of a new knowledge dimension called coalescent knowledge (Morgan, Morabito, Merino, Reilly, 2001). Knowledge in the coalescent dimension has the following attributes:
- Created via a dialogue process involving a group of experts
- Visible, expressible, shared and virtual
- Can be private or public knowledge
- Can be used to create a sustainable competitive advantage
- Facilitates the opportunity for a group of experts to act as if they have one mind (deciding by rough set) to accomplish organizational objectives
NK Knowledge
Internalization
Explicit to
Explicit to
Explicit
Coalescent
Explicit to Tacit
Coalescent to Tacit
Table 1
2.0 Rough Set Theory
The concept of rough sets was introduced by Zdzislaw Pawlak in the early 1980s as a new notion of set, and it has since been used in many applications and research areas. Rough Set theory is an important part of soft computing. The methodology is concerned with the classificatory analysis of imprecise, uncertain or incomplete information or knowledge expressed in terms of data acquired from experience. It generates as output a list of certain and possible rules.
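The distinction between certain and possible rules can be illustrated with a minimal sketch (the objects, attribute values and decision set below are hypothetical, not taken from the paper): objects that are indiscernible under the chosen condition attributes form equivalence classes, and a target set is approximated from below (certain knowledge) and from above (possible knowledge).

```python
from collections import defaultdict

def approximations(objects, target):
    """Rough-set lower and upper approximations of `target`.

    objects: dict mapping each object to its tuple of condition-attribute
    values; objects with identical tuples are indiscernible.
    target:  set of objects (e.g. those with a given decision value).
    """
    classes = defaultdict(set)
    for obj, values in objects.items():
        classes[values].add(obj)
    lower, upper = set(), set()
    for eq_class in classes.values():
        if eq_class <= target:   # class certainly inside -> certain rule
            lower |= eq_class
        if eq_class & target:    # class overlaps target -> possible rule
            upper |= eq_class
    return lower, upper

# Hypothetical expert opinions described only by an "experience" attribute
objects = {"e1": ("high",), "e2": ("high",), "e3": ("low",), "e4": ("low",)}
favourable = {"e1", "e2", "e3"}          # experts judging the outcome viable
lower, upper = approximations(objects, favourable)
# lower = {e1, e2} is certain; upper - lower = {e3, e4} is the uncertainty
# region, the part that rough-set refinement aims to eliminate
```

The boundary region `upper - lower` is exactly the uncertainty that the refined knowledge creation process seeks to reduce.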
4.0 Refined Knowledge Creation Process
Figure 2 shows the refined knowledge creation process, which includes the interaction of the people (experts) in the organization.

[Figure 2: Refined Knowledge Creation Process Flow — Experts No. 1 through No. N interact through Socialization, Externalization, Combination and Internalization, with a Rough Set stage refining Explicit, Coalescent and Tacit knowledge]
[Figure 3: Classic Management Process (PLOC) — Planning, Leading, Organizing and Controlling with a Feedback Loop]
6.0 Modified Classic Management Process by Rough Set Theory
By adopting the critical reflective practice principles defined by Van Aswegen (1998) [6] in an open group dialogue format, a management consultant can make use of the Coalescent knowledge derived from the tacit dimension. Based on the formulation of the uncertainty problems, the Coalescent knowledge can be refined by applying the rough set concept in order to eliminate the uncertainty and to minimize the deviation from the standards.

[Figure 4: Modified Classic Management Process with Critical Reflective Practice]
7.0 Conclusion
This paper describes the concept of the tacit knowledge dimension and the formation of Coalescent knowledge from the tacit dimension. As the tacit dimension grows, the uncertainty problem is eliminated through the Rough set method. The concept of the Classic Management Process is explained and is modified by the refined Coalescent knowledge through rough sets in order to minimize the deviation between the reference and the actual outcome.
References:
[1] Morgan, Morabito, Merino, and Reilly, 2001, Defining
Coalescent Knowledge: A Revision of
fouzul_hameed@yahoo.com
anggeetha@yahoo.com
I. INTRODUCTION
Business to customer (B2C) data exchanges such as online shopping, banking, gaming, ticket booking and travel booking have been made feasible by web applications. Web applications work with a back-end database to store and retrieve data. These data are accepted from the user, or retrieved from the database and given to the user dynamically by embedding them in HTML code. The input supplied by the user could be benign or malicious. If the data is malicious, undesired data could be extracted from the database. This type of attack is called a SQL injection attack (SQLIA).
An example of SQL injection is given here. A SQL injection attack occurs when an attacker causes the web application to generate SQL queries that are functionally different from what the user interface programmer intended. For example, suppose the application contains the following code:

Query = "SELECT * FROM members WHERE login='" + request.getParameter("login") + "' AND password='" + request.getParameter("password") + "';"

Here the web application retrieves the user inputs from the login and password fields and concatenates these two inputs into the query, which is used to authenticate the user. Suppose the attacker enters admin into the login field and xyz' OR '1'='1 into the password field. The query string now becomes:

SELECT * FROM members WHERE login='admin' AND password='xyz' OR '1'='1'

The password field, which should contain only one string, has been replaced with substrings, and since the injected OR clause is always true, the query finally leads to the retrieval of the administrator's data, which is undesirable.
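The effect of the concatenated query above can be reproduced with a small sketch (the table, its data, and the use of sqlite3 as a stand-in for the web application's back-end database are illustrative assumptions): the injected OR clause makes the WHERE condition true for every row, while a parameterized query treats the same input as plain data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE members (login TEXT, password TEXT)")
conn.execute("INSERT INTO members VALUES ('admin', 'secret')")

login, password = "admin", "xyz' OR '1'='1"

# Vulnerable: user input is concatenated straight into the SQL text,
# so the quote in `password` closes the literal and the OR clause is
# parsed as SQL (always true because AND binds tighter than OR)
q = ("SELECT * FROM members WHERE login='" + login +
     "' AND password='" + password + "'")
leaked = conn.execute(q).fetchall()       # the admin row is returned

# Safe: placeholders keep the whole input as one string value
safe = conn.execute(
    "SELECT * FROM members WHERE login=? AND password=?",
    (login, password)).fetchall()         # no rows match
```

The same malformed input that leaks the administrator row through the concatenated query matches nothing when bound as a parameter.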
An effective and easy method for detecting SQL injection attacks and protecting existing web applications from them is the need of the moment, and would help organizations secure themselves against these attacks. Detection and prevention of SQLIA is a topic of active research in industry and academia. The most common injection mechanisms [13] are:
1. Injection through user input
2. Injection through cookies
3. Injection through server variables
4. Second-order injection
The proposed approach addresses injection through user input.
The rest of the paper is organized as follows. Section 2
discusses the related work carried out in this area, Section 3
illustrates the system design. Section 4 evaluates the
effectiveness of the proposed approach and Section 5
concludes the paper.
II. RELATED WORK
Researchers have proposed a wide range of techniques to address the problem of SQL injection, ranging from development best practices to fully automated frameworks for detecting and preventing SQLIAs [12]. One technique used to mitigate SQLIA is defensive coding practice. Since the root cause of SQL injection vulnerabilities is insufficient input validation, the straightforward solution for eliminating them is to apply suitable defensive coding practices, such as input type checking, encoding of inputs, pattern matching and identification of all input sources. But defensive coding is prone to human error and is not as rigorously and completely applied as automated techniques. While most developers do make an effort to code safely, it is difficult to apply defensive coding practices correctly to all sources of input.
III. SYSTEM DESIGN
A. System Architecture
[Figure: system architecture — the HTTP interceptor, query extractor, parse tree generator, syntax analyzer, prototype document and comparator surrounding the web application]

B. Normal Phase
The normal phase is similar to the general operation of the web application without any SQLIA detection mechanism. The HTTP interceptor intercepts the HTTP request sent by the user and forwards it to the web application. The web application then sends the request to the database and retrieves the data. The data is framed as an HTTP response, but instead of being sent to the user, the response is sent to the comparator for evaluation.
C. Validator Phase
In this phase, the intercepted HTTP request is sent to the query interceptor; the query is extracted from the HTML and sent to the parse tree generator and the syntax analyzer. The syntax analyzer uses XSLs pattern matching.
The syntax analyzer works as follows:
1. Accepts the query from the query interceptor and tokenizes it.
2. Transforms the query into XML format.
3. Uses the XSLs pattern matching algorithm to find the prototype model corresponding to the received query.
4. From the prototype query, identifies the user input data.
5. Receives the parse tree from the parse tree generator and fits in the non-leaf nodes.
6. Validates the resulting parse tree using the validation algorithm and identifies whether the query is benign or malicious.
7. Sends the prototype query and the result to the comparator.
1) XSLs Pattern Matching: When the query is submitted by the query interceptor, it is first analysed and tokenized into elements. The prototype document contains the queries pertaining to that particular application, together with the intended output for each query model.
For example, as explained in Section I, the input query is:

SELECT * FROM members WHERE login='admin' AND password='XYZ' OR 1=1

When this query is received it is converted into XML format using an XML schema. The resulting XML would be:
<SELECT>
<*>
<FROM>
<members>
<WHERE login= admin>
<AND password= XYZ>
<OR 1=1>
</OR>
</AND>
</WHERE>
</members>
</FROM>
</*>
</SELECT>
Using pattern matching, the elements are searched so that the nested elements correspond to the query tokens. The matching XML mapping is:
<SELECT>
<identifier>
<FROM>
<identifier>
<WHERE id_list= userip>
<AND id_list=userip>
</AND>
</WHERE>
</identifier>
</FROM>
</identifier>
</SELECT>
When the match is found, the corresponding prototype query would be:

SELECT identifier FROM identifier WHERE identifier op userip AND identifier op userip

which is used to identify the user input data. This search is fast because it is based on text and string comparison; its time complexity is O(n). This improves the effectiveness of the system and reduces the latency.
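The prototype lookup can be sketched roughly as follows (a deliberately simplified stand-in for the XSL-based matching: instead of an XML schema, the query is tokenized with a small regular expression and its literals are generalized to userip, so that structurally identical queries map to the same prototype):

```python
import re

# Quoted strings and bare numbers are the user-input positions in a query
TOKEN = re.compile(r"'[^']*'|\d+|\w+|[^\s\w]")

def prototype(query):
    """Generalize a SQL query: replace literal values with 'userip'."""
    tokens = TOKEN.findall(query)
    return " ".join("userip" if t.startswith("'") or t.isdigit() else t
                    for t in tokens)

benign = "SELECT * FROM members WHERE login='admin' AND password='xyz'"
injected = benign + " OR 1=1"
# The benign query maps onto the stored prototype
# "SELECT * FROM members WHERE login = userip AND password = userip",
# while the injected one gains an extra "OR userip = userip" clause and
# therefore fails to match the stored model.
proto_benign = prototype(benign)
proto_injected = prototype(injected)
```

Because the comparison is plain string matching over the generalized tokens, each lookup stays linear in the query length, which is consistent with the O(n) claim above.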
2) Syntax analyzer: The syntax analyzer uses the prototype query model to recognize the user inputs, and these inputs are inserted into the parse tree generated by the parse tree generator. Now the syntax analyzer has the parse tree and the user inputs. Figure 3 shows the parse tree for the benign query in Section I. Every user input string can find a non-leaf node in the parse tree such that the leaf nodes of its subtree comprise the entire user input string. An example application of the parse tree generator is given in Figure 4. This algorithm for identifying malicious input is used in SQLProb [14]. The system uses the validation algorithm to decide whether the request is benign or malicious. The password field is parsed into the set U_{i=1..n} leaf(u_i) with five leaf nodes: XYZ, OR, 1, = and 1. The analyzer algorithm is given in Table 1. Next, we do a depth-first search upward from these five leaf nodes. The traversed paths intersect at a non-leaf node, SQLExpression. Finally, we do a breadth-first search from SQLExpression to reach all the leaf nodes of its subtree, which form a proper superset of U_{i=1..n} leaf(u_i), implying that the input string is malicious.
The algorithm takes quadratic time: steps 2 and 3 take time n·h, where n is the number of leaf nodes parsed from the input and h is the average number of layers from the leaf nodes to the covering non-leaf node in the parse tree, and step 4, the breadth-first search, takes O(n^2). Therefore the overall time complexity is O(n^2).
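The validation step can be sketched as follows (a simplified reading of the SQLProb-style check, with a hand-built parse tree rather than one produced by a parser generator): walk up from the user-input leaves to the node covering all of them, collect every leaf under that node, and flag the input as malicious when the collected set strictly exceeds the input's own leaves.

```python
class Node:
    def __init__(self, label, children=()):
        self.label, self.children, self.parent = label, list(children), None
        for child in self.children:
            child.parent = self

def leaves(node):
    """All leaf nodes of the subtree rooted at `node`."""
    if not node.children:
        return {node}
    return {leaf for child in node.children for leaf in leaves(child)}

def is_malicious(input_leaves):
    """Climb from the user-input leaves to the node covering all of them;
    the input is malicious if that subtree holds leaves beyond its own."""
    target = set(input_leaves)
    ancestor = input_leaves[0]
    while not target <= leaves(ancestor):   # walk up toward the common node
        ancestor = ancestor.parent
    return leaves(ancestor) > target        # extra leaves -> spans structure

# WHERE clause of:  password='xyz' OR 1=1  (user typed  xyz' OR 1=1)
inj = [Node(t) for t in ["xyz", "OR", "1", "=", "1"]]
where = Node("SQLExpression",
             [Node("cond", [Node("password"), Node("="), inj[0]]),
              inj[1],
              Node("cond", [inj[2], inj[3], inj[4]])])

# Benign input occupies a single value slot of one condition only
ok = Node("xyz")
benign_where = Node("cond", [Node("password"), Node("="), ok])
```

For the injected input the covering node is the whole SQLExpression, whose leaves also include password and =, so the check fires; the benign input is covered by its own leaf and passes.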
3) Comparator Phase: The validator phase sends its result, indicating whether the request is benign or malicious depending on the analysis it carried out, to the comparator. The comparator now has the results retrieved from the database; if the analyzer has certified the request as benign, the requested data is immediately sent to the user without any delay. If the request is analyzed to be malicious, the comparator extracts the intended output of the model query from the prototype document and compares it with the actual output from the database encoded in the HTTP response. If the results are the same, the requested data is sent to the user as the HTTP response. Otherwise an HTTP response with a "Data not available" reply is sent to the user.
[Figure 3: Parse tree for the benign query SELECT * FROM members WHERE login='admin' AND password='XYZ'. Figure 4: Parse tree for the injected query SELECT * FROM members WHERE login='admin' AND password='XYZ' OR 1=1, with nodes such as select_list, Table_list, Where_cond, SQLExpression, SQLAndExpression, selectItem, factor, Operator and ID]
IV. EVALUATION
A. Experimental Setup
The validation check of Table 1 concludes with:
if U_{i=1..n} leaf(u_i) ⊂ U_{k=1..m} leaf(node_k) then return True; else return False;

[Figure: response time in seconds (0 to 0.16) of the actual application versus COMPVAL across the test web applications]
V. CONCLUSION
COMPVAL, as introduced in Section I, is a novel system for detecting SQLIAs efficiently. The proposed system makes no changes to the source code of the application, so the source code need not be known to COMPVAL. In the proposed system the detection and prevention of SQLIA is fully automated. A prototype of COMPVAL was built to measure its performance, and it worked successfully with a 100% detection rate. Future work will focus on benchmarking the system against standard algorithms and studying alternate techniques for the detection and prevention of SQLIA using dynamic analysis.
REFERENCES
[1] C. Anley, Advanced SQL Injection in SQL Server Applications, White paper, Next Generation Security Software Ltd., 2002.
[2] S.W. Boyd and A.D. Keromytis, SQLrand: Preventing SQL Injection Attacks, Proc. ACNS 04, pp. 292-302, June 2004.
[3] Gregory Buehrer, Bruce W. Weide and Paolo A. G. Sivilotti, Using Parse Tree Validation to Prevent SQL Injection Attacks, Proc. International Workshop on Software Engineering and Middleware, 2005.
[4] C. Gould, Z. Su and P. Devanbu, JDBC Checker: A Static Analysis Tool for SQL/JDBC Applications, Proc. International Conference on Software Engineering 04, pp. 697-698, 2004.
[5] W. G. Halfond and A. Orso, AMNESIA: Analysis and Monitoring for NEutralizing SQL-Injection Attacks, Proc. ACM International Conference on Automated Software Engineering 05, November 2005.
[6] Y. Huang, F. Huang, T. Lin and C. Tsai, Web Application Security Assessment by Fault Injection and Behavior Monitoring, Proc. International World Wide Web Conference 03, May 2003.
[7] V.B. Livshits and M.S. Lam, Finding Security Errors in Java Programs with Static Analysis, Proc. Usenix Security Symposium 05, pp. 271-286, August 2005.
[8] Y. Huang, F. Yu, C. Hang, C.H. Tsai, D.T. Lee and S.Y. Kuo, Securing Web Application Code by Static Analysis and Runtime Protection, Proc. International World Wide Web Conference 04, May 2004.
[9] D. Scott and R. Sharps, Abstracting Application-level Web Security, Proc. International Conference on the World Wide Web 02, pp. 396-407, 2002.
[10] Zhendong Su and Gary Wassermann, The Essence of Command Injection Attacks in Web Applications, Proc. ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages 06, January 2006.
[11] Burp proxy. [Online]. Available: http://www.portswinger.net/proxy.
[12] JJTree. [Online]. Available: http://javacc.dev.java.net/doc/JJTree.html.
[13] W. Halfond, J. Viegas and A. Orso, A Classification of SQL Injection Attacks and Countermeasures, Proc. International Symposium on Secure Software Engineering 06, March 2006.
[14] Anyi Liu, Yi Yuan, Duminda Wijesekera and Angelos Stavrou, SQLProb: A Proxy-based Architecture Towards Preventing SQL Injection Attacks, Proc. ACM Symposium on Applied Computing 09, pp. 2054-2061, 2009.
[15] W. Halfond, A. Orso and P. Manolios, Using Positive Tainting and Syntax-Aware Evaluation to Counter SQL Injection Attacks, Proc. ACM SIGSOFT Symposium on the Foundations of Software Engineering 06, November 2006.
[16] Adam Kieyzun, Philip J. Guo, Karthick Jayaraman and Michael D. Ernst, Automatic Creation of SQL Injection and Cross-Site Scripting Attacks, Proc. IEEE International Conference on Software Engineering 09, pp. 199-209, 2009.
[17] Konstantinos Kemalis and Theodoros Tzouramanis, SQL-IDS: A Specification-Based Approach for SQL-Injection Detection, Proc. ACM Symposium on Applied Computing 08, pp. 2153-2158, 2008.
[18] Xiang Fu and Kai Qian, SAFELI: SQL Injection Scanner using Symbolic Execution, Proc. Workshop on Testing, Analysis and Verification of Web Services and Applications 08, pp. 34-39, 2008.
[19] M. Johns and C. Beyerlein, SMask: Preventing Injection Attacks in Web Applications by Approximating Automatic Data/Code Separation, Proc. ACM Symposium on Applied Computing 07, March 2007.
hepsi.mku@gmail.com
alagar_samy54@rediffmail.com
justus.dr@gmail.com
I. INTRODUCTION
The static and dynamic processes involved in the development of software products are of significant concern in research and in practice. Since most of the activities involved in software development are loosely coupled, they need to be integrated into a single framework. Since the beginning of software development and software engineering in the early 1960s, research on the activities and processes governing the design and development of software products has been active, and it remains active to this day.
However, it emerges from the literature [3], [7], [12], [23], [26], [30] and from case studies that only a limited number of publications are available on the processes that govern software development, compared to other areas of research. There is also a need for a rigorous study of the process sets and key process areas that form the capsule of
TABLE I
LEVELS OF PROCESS MATURITY

Process Maturity Level | State                          | Actions                                                        | Result
Process Initiation     | Initial State                  | Implementation; Address Goals                                  | Identified KPAs
Process Development    | Static Process Set             | Develop Activities; Use dedicated resources                    | Process Set
Process Improvement    | Dynamic Process                | Refine Processes and Sets; Use Tools for improvement           | CMM level improvement
Process Extension      | Improved Process Set           | Plan and apply Engineering Practices; Expand to other projects | Process reusability
Hybrid Processes       | Componential Dynamic Processes | Activities for Specific needs; Derive new process sets         | Process regeneration and improvement
Static Processes
The processes involved in software development are intended for practice and improvement. Still, some of the processes remain unchanged, and these are termed Static Processes. They mainly deal with the developmental activity of the software and the processes that govern it. Many factors may influence them, but whatever the external factors, these processes have to remain static throughout the software development life cycle [4]. Attempt to
TABLE II
STATIC PROCESS SET

Production Processes
  Static: Design and Coding Processes; Finish the product on time and within budget; Finished product maintenance process; Expires at the end of the scope; Stick on to the processes defined for development and maintenance; Document preparation processes
  Dynamic: Interaction and Sharing of resources; Exchangeability of gained experience; Document revision & refinement; Interaction within the development team; Improvement and Generation of new processes

Management Processes
  Static: Populate Repositories; Plan and Control; Use Decision support tools
  Dynamic: Codify data to repositories; Generate feedback to the development team; Interaction with project leaders; Share expertise knowledge; Generate hybrid processes
[Table: Critical success factors — Creating process action teams, Interactions, Formal Methodology, Experimenting SPI, Process Awareness, Process Ownership, Hybrid Processes, Staff Involvement and Tailoring processes — with their frequency and percentage of occurrence in the literature (n=47) and in practice (n=23)]
[Table: Critical barriers — Inexperienced staff, Lack of awareness, Lack of formal methodology, Lack of resources, Organizational politics and Time pressure — with their frequency and percentage of occurrence in the literature (n=14) and in practice (n=23)]
REFERENCES
[1] Alagarsamy. K., Justus S and K. Iyakutti,
Implementation Specification of a SPI supportive Knowledge
Management Tool, IET Software, Vol. 2, No.2, pp. 123-133,
April 2008.
[2] Alagarsamy. K., Justus S and K. Iyakutti, Knowledge
based Software Process Improvement, Proc. of IEEE
International Conference on Software Engineering Advances,
France, 2007.
[3] Anne Walker, and Kathleen Millington Business
intelligence and knowledge management: tools to personalize
knowledge delivery Information Outlook, August, 2003.
http://www.findarticles.com/p/articles/mi_m0FWE/is_8_7/ai_
106863496
[4] Baccarini. D, The logical framework method for
defining project success, Project Management Journal,
pp.25-32 1999.
[5] Baddoo, N., Hall, T., De-motivators of software process improvement: an analysis of practitioners' views, Journal of Systems and Software, Vol. 66, No. 1, pp. 23-33, 2003.
[6] Baddoo, N., Hall, T., Wilson, D., Implementing a people
focused SPI programme, Proc. in 11th European Software
Control and Metrics Conference and the Third SCOPE
Conference on Software Product Quality, 2000
[7] Beecham, S., Hall, T., Rainer, A., Building a
requirements process improvement model, Department of
Computer Science, University of Hertfordshire, Technical
Report No. 378, 2003.
[8] Belout, A., Effects of human resource management on project effectiveness and success: toward a new conceptual framework, International Journal of Project Management, Vol. 16, pp. 21-26, 1998.
[9] Coolican, H., Research Methods and Statistics in
Psychology. Hodder and Stoughton, London, 1999.
[10] CMMI for Development, ver 1.2, SEI, Pittsburgh, PA 15213-390, CMU/SEI-2006-TR-008, 2006.
[11] David N Card, Research Directions in Software Process
Improvement, Proc. in 28th Annual International Computer
Software and Application Conference, 2004.
[12] Emam. K.E., J.N. Drouin, and W. Melo, SPICE:The
Theory and Practice of Software Process Improvement and
Capability Determination, IEEE Computer Society, Los
Alamitos, 1998.
[13] Haris Papoutsakis, and Ramon Salvador Valls, Linking
Knowledge Management and Information Technology to
Business Performance: A Literature Review and a Proposed
rkalpana@pec.edu
Keywords: Graph Theory, parallel
I. INTRODUCTION
Computing shortest paths is a basic operation with applications in various fields. The most common applications [2] of such a computation include route planning systems for bikes, cars and hikers, and timetable-based information systems for scheduled vehicles such as buses and trains. In addition, shortest path computation also finds application in spatial databases and even in web searching.
The core algorithm that serves as the base for the above applications is Dijkstra's algorithm [1] for the single-source shortest-path problem on a directed graph with non-negative edge weights. The particular graphs [3], [4], [5] considered for the above applications are huge, and the number of shortest path queries to be computed within a short time is huge as well. This is the main reason for the use of speed-up techniques for shortest-path computations. The focus of these techniques is to optimize the response time for the queries; that is, a speed-up technique is a technique that reduces the search space of Dijkstra's algorithm by using pre-computed values or problem-specific information. Often the underlying data contains geographic information, that is, a layout of the graph is easily obtained.
The length of a path (u_1, ..., u_k) is the sum of its edge lengths, Σ_{1 ≤ i < k} l(u_i, u_{i+1}).
A. Bidirectional Search
In bidirectional search [2], a normal or forward variant of the algorithm starting at the source and a reverse or backward variant starting at the destination are run simultaneously. The backward variant is applied to the reverse graph, that is, a graph having the same node set V as the given graph and the reversed edge set E' = {(u, v) | (v, u) ∈ E}. The algorithm can be terminated when a node has been labeled permanent by both the forward and the backward search. The shortest path is obtained by combining the path found by the forward search with the one found by the backward search.
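A compact sketch of the idea (plain Python with heapq on a small illustrative graph; the stopping rule used here — halt once some node is permanent in both searches, then combine the two distance labels over all commonly labeled nodes — is one standard variant of the termination condition):

```python
import heapq

def bidirectional_dijkstra(graph, rgraph, s, t):
    """Shortest s-t distance. graph maps node -> [(neighbor, weight)];
    rgraph is the reverse graph (equal to graph when it is undirected)."""
    dist = [{s: 0}, {t: 0}]           # tentative distances: forward, backward
    pq = [[(0, s)], [(0, t)]]
    settled = [set(), set()]
    while pq[0] and pq[1]:
        for side, g in enumerate((graph, rgraph)):
            d, u = heapq.heappop(pq[side])
            if u in settled[side]:
                continue              # stale queue entry
            settled[side].add(u)
            if u in settled[0] and u in settled[1]:
                # a node is permanent in both searches: combine the halves
                both = dist[0].keys() & dist[1].keys()
                return min(dist[0][v] + dist[1][v] for v in both)
            for v, w in g.get(u, []):
                nd = d + w
                if nd < dist[side].get(v, float("inf")):
                    dist[side][v] = nd
                    heapq.heappush(pq[side], (nd, v))
    return float("inf")

# Small undirected example: a ring a-b-c-d-e plus a long direct edge a-e
G = {"a": [("b", 1), ("e", 10)], "b": [("a", 1), ("c", 1)],
     "c": [("b", 1), ("d", 1)], "d": [("c", 1), ("e", 1)],
     "e": [("d", 1), ("a", 10)]}
```

On this graph both frontiers meet in the middle of the ring after settling only a few nodes each, which is the source of the node-count savings reported later in the evaluation.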
B. Goal-Directed Search
This technique [2], [8] alters the priority of the active nodes to change the order in which the nodes are processed. More precisely, a potential p_t(v) depending on a target t is added to the priority dist(v), so the modified priority of a node v ∈ V is dist(v) + p_t(v). With an appropriate potential the search can be directed toward the target, reducing the running time while still finding a shortest path.
An alternate way to modify the priority is to change the edge lengths so that the search is directed toward the target t. In this case, the length of an edge (u, v) ∈ E is replaced by l'(u, v) = l(u, v) − p_t(u) + p_t(v). It can easily be verified that a path from s to t is a shortest s-t path according to l' if and only if it is a shortest s-t path according to l, since the lengths of any s-t path under l and l' differ by the same constant p_t(s) − p_t(t).
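On a graph with coordinates, a distance-based potential toward the target is feasible, and the reduced lengths pull the search in the right direction. The sketch below (an illustrative unit-weight grid graph with a Manhattan-distance potential, not an example from the paper) shows that edges pointing at the target get reduced length 0 while edges pointing away get a positive length, and that the reduced length of a whole path differs from the original only by the constant p_t(s) − p_t(t):

```python
def p(node, t):
    """Manhattan-distance potential toward target t: feasible on a unit
    grid, i.e. p(u) <= l(u, v) + p(v) for every edge, so l' stays >= 0."""
    return abs(node[0] - t[0]) + abs(node[1] - t[1])

def reduced_length(u, v, t, l=1):
    """Goal-directed edge length l'(u, v) = l(u, v) - p_t(u) + p_t(v)."""
    return l - p(u, t) + p(v, t)

t = (3, 0)
path = [(0, 0), (1, 0), (2, 0), (3, 0)]      # a shortest path into t
red = [reduced_length(u, v, t) for u, v in zip(path, path[1:])]
# Every edge of this path points at the target, so each reduced length is 0;
# the edge (1,0)->(0,0), which points away, gets reduced length 2 instead.
```

Dijkstra's algorithm run with these reduced lengths processes nodes in exactly the order of the modified priority dist(v) + p_t(v), which is the equivalence claimed above.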
C. Multilevel Approach
This speed-up technique has a preprocessing step in which the input graph G is decomposed into l + 1 levels (l ≥ 1) and enriched with extra edges that represent shortest paths between certain nodes [2], [9], [10]. The decomposition depends on the nodes S_i selected at level i, such that S_0 := V ⊇ S_1 ⊇ ... ⊇ S_l. The decomposition of the node sets can be based on various criteria; selecting the desired number of nodes with the highest degree in the graph works out to be an appropriate criterion.
Three different types of edges are added to the graph: upward edges, going from a node that is not selected at a particular level to a node selected at that level; downward edges, connecting selected to non-selected nodes; and level edges, connecting selected nodes at one level.
For finding a shortest path between two nodes, it is sufficient to consider a smaller subgraph of the multilevel graph.
II. RELATED WORK
There is a high volume of ongoing research on shortest paths. The basic and combined speed-up techniques [2], [4], [7] for Dijkstra's algorithm, which usually reduce the number of nodes visited and the running time of the algorithm in practice, are discussed here.
D. Shortest-Path Containers
This speed-up technique requires a preprocessing computation of all shortest-path trees. For each edge e ∈ E, a node set S(e), containing the nodes to which a shortest path begins with e, is computed [2], [11]. By using a layout, for each edge e ∈ E, the bounding box of S(e) is stored in an associative array C with
A. Bidirectional Search
The bidirectional search algorithm [4] has a considerable portion of its operations suitable for parallel execution:
Initialization (Forward Search) [IFS]
Initialization (Backward Search) [IBS]
Node Selection (Forward Search) [SFS]
Distance updation (Forward Search) [UFS]
Node Selection (Backward Search) [SBS]
Distance updation (Backward Search) [UBS]
// Initialization phase [IFS, IBS]
for all nodes u ∈ V set dist(u) := ∞
initialize priority queue Q with source s and dist(s) := 0
for all nodes u ∈ V set rdist(u) := ∞
initialize priority queue rQ with target t and rdist(t) := 0
initialize perm to the empty set
// Node selection phase begins [SFS, SBS]
while priority queues Q and rQ are not empty
{
  get node u with smallest tentative distance dist(u) in Q
  if (u = t or u ∈ perm) halt
  add u to perm
  get node ru with smallest tentative distance rdist(ru) in rQ
  if (ru = s or ru ∈ perm) halt
  add ru to perm
  // update phase, forward search [UFS]
  for all neighbor nodes v of u
    set new-dist := dist(u) + w(u, v)
    update dist(v) in forward priority queue Q
  // update phase, reverse search [UBS]
  for all neighbor nodes rv of ru
    set new-dist := rdist(ru) + w(ru, rv)
    update rdist(rv) in reverse priority queue rQ
}
Algorithm 1: Bidirectional Dijkstra's Algorithm
// Initialization phase (parallelized using OpenMP sections)
#pragma omp parallel sections
{
  #pragma omp section
  {
    initialize forward priority queue Q with dist(u) := ∞ and dist(s) := 0
  }
  #pragma omp section
  {
    initialize reverse priority queue rQ with rdist(u) := ∞ and rdist(t) := 0
  }
}
initialize perm array to the empty set
// Node selection phase begins
while (priority queues Q and rQ are not empty)
{
  get node with min dist(u) in Q and assign it to u
  if (u = t or u ∈ perm) then exit else add u to perm
  get node with min rdist(u) in rQ and assign it to ru
  if (ru = s or ru ∈ perm) then exit else add ru to perm
  #pragma omp parallel sections
  {
    // Update phase, forward search
    #pragma omp section
    {
      for all neighbor nodes v of u
        set new-dist := dist(u) + w(u, v)
        update dist(v) in forward priority queue Q
    }
    // Update phase, reverse search
    #pragma omp section
    {
      for all neighbor nodes rv of ru
        set new-dist := rdist(ru) + w(ru, rv)
        update rdist(rv) in reverse priority queue rQ
    }
  }
}
Algorithm 2: Bidirectional Search with parallelized updates
TABLE V
RUNNING TIME (IN SECONDS) OF DIJKSTRA, BIDIRECTIONAL DIJKSTRA AND BIDIRECTIONAL DIJKSTRA WITH PARALLELIZED UPDATES

Run No. | Nodes | Dij    | Bi+Dij | Parallel Bi+Dij
1       | 1000  | 0.001  | 0.0015 | 0.003
5       | 5000  | 0.0115 | 0.006  | 0.0055
10      | 10000 | 0.0295 | 0.014  | 0.013
15      | 15000 | 0.0495 | 0.0225 | 0.0175
20      | 20000 | 0.0685 | 0.036  | 0.023
25      | 25000 | 0.103  | 0.043  | 0.0305
30      | 30000 | 0.126  | 0.05   | 0.0355
Total   |       | 1.692  | 0.7695 | 0.546
Average |       | 0.0564 | 0.0257 | 0.0182
Speed-up over Dij |     | 2.1988 | 3.0989

TABLE VI
NUMBER OF NODES VISITED IN DIJKSTRA AND BIDIRECTIONAL DIJKSTRA

Run No. | Nodes | Dij    | Bi+Dij
1       | 1000  | 576    | 55
5       | 5000  | 2103   | 134
10      | 10000 | 4190   | 170
15      | 15000 | 7263   | 236
20      | 20000 | 11633  | 271
25      | 25000 | 11932  | 202
30      | 30000 | 14692  | 275
Total   |       | 238203 | 6020
Average |       | 7940.1 | 200.67
Ratio   |       |        | 39.57
[Figure: number of nodes visited (0 to 16000) by Dijkstra (Dij) versus bidirectional Dijkstra (Bi+Dij) as the number of nodes grows from 1000 to 30000]

REFERENCES
[1] E. W. Dijkstra, A note on two problems in connexion with graphs, Numerische Mathematik, Vol. 1, Mathematisch Centrum, Amsterdam, The Netherlands, 1959, pp. 269-271.
[2] Martin Holzer, Frank Schulz, Dorothea Wagner, and Thomas Willhalm, Combining Speed-up Techniques for Shortest-Path Computations, ACM Journal of Experimental Algorithmics, Vol. 10, Article No. 2.5, 2005.
[3] D. B. Johnson, Efficient algorithms for shortest paths in sparse networks, Journal of the Association for Computing Machinery, 1977, pp. 1-13.
[4] G. N. Frederickson, Fast algorithms for shortest paths in planar graphs with applications, SIAM Journal on Computing, 16(6), 1987, pp. 1004-1022.
[7] T. Willhalm, Engineering shortest paths and layout algorithms for large graphs, Ph.D. thesis, Faculty of Informatics, University of Karlsruhe, 2005.
[8] Andrew V. Goldberg and Chris Harrelson, Computing the Shortest Path: A* Search Meets Graph Theory, Microsoft Research, 1065 La Avenida, Mountain View, CA 94062, 2005.
[9] Peter Sanders and Dominik Schultes, Highway Hierarchies Hasten Exact Shortest Path Queries, ESA 2005, LNCS 3669, Springer-Verlag Berlin Heidelberg, 2005, pp. 568-579.
[10] M. Holzer, Hierarchical speed-up techniques for shortest path algorithms, Tech. report, Dept. of Informatics, University of Konstanz, Germany, 2003.
[11] Dorothea Wagner and Thomas Willhalm, Geometric Containers for Efficient Shortest-Path Computation, ACM Journal of Experimental Algorithmics, 10(1.3), 2005.
[12] R. J. Gutman, Reach-based routing: A new approach to shortest path algorithms optimized for road networks, Proc. Sixth Workshop on Algorithm Engineering and Experiments and First Workshop on Analytic Algorithmics and Combinatorics, 2004.
[13] OpenMP website (http://www.openmp.org)
SMK_PRS2007@Yahoo.com
I. INTRODUCTION
A subfield of privacy preserving data mining is
knowledge hiding. The paper presents a novel approach
that strategically performs sensitive frequent itemset hiding
based on a new notion of hybrid database generation. This
approach broadens the regular process of data sanitization
by applying an extension to the original database instead of
either modifying existing transactions, or rebuilding the
dataset from scratch. The extended portion of the dataset
contains a set of carefully crafted transactions that lower the importance of the sensitive patterns to a degree
that they become uninteresting from the perspective of the
data mining algorithm, while minimally affecting the
importance of the nonsensitive ones. The hiding process is
A. Frequent Itemset:
Let I = {i1, i2, ..., iM} be a finite set of literals, called items, where M denotes the cardinality of the set. Any subset I ⊆ I is an item set over I. A transaction T over I is a pair T = (tid, I), where I is the item set and tid is a unique identifier, used to distinguish among transactions that correspond to the same item set. A transaction database D = {T1, T2, ..., TN} over I is an N x M table consisting of N transactions over I carrying different identifiers, where entry Tnm = 1 if and only if the mth item (m ∈ [1, M]) appears in the nth transaction (n ∈ [1, N]); otherwise, Tnm = 0. A transaction T = (tid, J) supports an item set I over I if I ⊆ J. Let S be a set of items; the notation p(S) denotes the powerset of S, which is the set of all subsets of S.
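The definitions above translate directly into code; a minimal sketch, where the toy database and item names are illustrative:

```python
from itertools import chain, combinations

def supports(transaction_items, itemset):
    """A transaction T = (tid, J) supports item set I iff I is a subset of J."""
    return set(itemset) <= set(transaction_items)

def support(D, itemset):
    """Number of transactions in D whose item set contains `itemset`."""
    return sum(1 for _, J in D if supports(J, itemset))

def powerset(S):
    """p(S): the set of all subsets of S."""
    S = list(S)
    return [set(c) for c in chain.from_iterable(
        combinations(S, r) for r in range(len(S) + 1))]

# Toy database over I = {1, 2, 3}: pairs (tid, item set).
D = [(1, {1, 2}), (2, {1, 2, 3}), (3, {2, 3}), (4, {1})]
print(support(D, {1, 2}))        # supported by transactions 1 and 2 -> 2
print(len(powerset({1, 2, 3})))  # 2^3 = 8 subsets
```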
B. Hiding Methodology
To properly introduce the hiding methodology, one needs to consider the existence of three databases, all depicted in binary format. They are defined as follows:
1) Construct the conditional pattern base for each node in the FP-tree.
2) Construct a conditional FP-tree from each conditional pattern base.
3) Recursively mine the conditional FP-trees and grow the frequent patterns obtained so far.
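The three mining steps can be approximated in a few lines. The sketch below grows patterns from conditional (projected) databases rather than materializing conditional FP-trees, which yields the same frequent itemsets on small data; it is an illustrative stand-in, not the paper's implementation.

```python
from collections import Counter

def pattern_growth(db, minsup, suffix=()):
    """Recursively mine frequent itemsets from a list-of-sets database.

    Each frequent item yields the pattern suffix + item (step 3); its
    conditional database (step 1) holds the transactions containing the
    item, restricted to items ordered after it, and is mined recursively
    (step 2).  Returns {pattern tuple: support}.
    """
    counts = Counter(item for t in db for item in t)
    result = {}
    for item, c in counts.items():
        if c >= minsup:
            pattern = tuple(sorted(suffix + (item,)))
            result[pattern] = c
            # Conditional database of `item`: keep only later-ordered items.
            cond_db = [{j for j in t if j > item} for t in db if item in t]
            result.update(pattern_growth(cond_db, minsup, suffix + (item,)))
    return result

db = [{'a', 'b'}, {'a', 'b', 'c'}, {'b', 'c'}, {'a'}]
print(pattern_growth(db, 2))   # frequent itemsets with their supports
```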
C. Border Revision
The lower bound Q on the size of the extension is

    Q = floor( sup(I_N, D_O) / mfreq ) − N + 1    (1)
C1 :
Item set I was frequent in DO and remains frequent in D.
C2 :
Item set I was infrequent in DO and is infrequent in D.
C3 :
Item set I was frequent in DO and became infrequent in D.
C4 :
Item set I was infrequent in DO and became frequent in D.
Since the borders are revised to accommodate an exact solution, the revised hyperplane is designed to be ideal in the sense that it excludes only the sensitive item sets and their supersets from the set of frequent patterns in D, leaving the rest of the item sets in their previous status as in database DO.
The first step in the hiding methodology rests on the identification of the revised borders for D. The hiding algorithm relies on both the revised positive and the revised negative borders, denoted as Bd+(F'D) and Bd−(F'D), respectively. After identifying the new (ideal) borders, the hiding process has to perform all the required minimal adjustments of the transactions in DX to enforce the existence of the new borderline in the result database.
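The positive and negative borders used above can be computed directly from a collection of frequent itemsets; a minimal sketch, with illustrative set representations:

```python
from itertools import combinations

def positive_border(F):
    """Bd+(F): maximal itemsets of F (no proper superset is in F)."""
    F = [frozenset(x) for x in F]
    return {I for I in F if not any(I < J for J in F)}

def negative_border(F, items):
    """Bd-(F): minimal infrequent itemsets, i.e. itemsets not in F
    all of whose proper subsets are in F."""
    # The empty set is trivially frequent.
    F = {frozenset(x) for x in F} | {frozenset()}
    border = set()
    for r in range(1, len(items) + 1):
        for c in combinations(sorted(items), r):
            I = frozenset(c)
            if I not in F and all(frozenset(s) in F
                                  for s in combinations(c, r - 1)):
                border.add(I)
    return border

F = [{'a'}, {'b'}, {'c'}, {'a', 'b'}]        # frequent itemsets
print(positive_border(F))                    # {a,b} and {c}
print(negative_border(F, {'a', 'b', 'c'}))   # {a,c} and {b,c}
```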
For a nonsensitive item set I that must remain frequent in D,

    sup(I, D_O) + Σ_{q=1..Q} Π_{i_m ∈ I} u_qm ≥ mfreq × (N + Q)    (3)

while for a sensitive item set that must become infrequent,

    sup(I, D_O) + Σ_{q=1..Q} Π_{i_m ∈ I} u_qm < mfreq × (N + Q)    (4)

where u_qm ∈ {0, 1} denotes whether item i_m appears in the qth transaction of the extension DX.
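With the extension DX encoded as Q binary rows over the M items, the frequency condition can be checked mechanically. The sketch below is illustrative (function names, toy numbers, and the row encoding are assumptions, not from the paper):

```python
def extended_support(sup_DO, ext_rows, itemset):
    """sup(I, D_O) plus the number of extension rows supporting I,
    i.e. the sum over q of the product of the u_qm variables."""
    return sup_DO + sum(1 for row in ext_rows
                        if all(row[m] == 1 for m in itemset))

def remains_frequent(sup_DO, ext_rows, itemset, mfreq, N):
    """Inequality (3): the itemset stays frequent in D = D_O union D_X."""
    Q = len(ext_rows)
    return extended_support(sup_DO, ext_rows, itemset) >= mfreq * (N + Q)

# N = 10 original transactions, mfreq = 0.3, extension of Q = 2 rows
# over items indexed 0..2.
ext = [[1, 1, 0], [1, 0, 1]]
print(remains_frequent(4, ext, [0, 1], 0.3, 10))  # (4+1) >= 0.3*12 -> True
print(remains_frequent(2, ext, [2], 0.3, 10))     # (2+1) <  0.3*12 -> False
```

A sensitive itemset satisfies the hiding requirement (4) exactly when `remains_frequent` returns False for it.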
Cover relations exist between the item sets of Bd+(F'D) and those of F'D. In the same manner, the item sets of Bd−(F'D) are generalized covers for all the item sets of P \ (Bd+(F'D) ∪ {∅}). Therefore, the item sets of the positive and the negative borders cover all item sets in P.
Optimal solution set C: The exact hiding solution, which is identical to the solution of the entire system of the 2^M − 1 inequalities, can be attained based on the item sets of the set

    C = Bd+(F'D) ∪ Bd−(F'D)

Based on (7), the item sets of the revised borders Bd+(F'D) and Bd−(F'D) can be used to produce the corresponding inequalities, which will allow for an exact hiding solution for DO.
D. Handling of Suboptimality
Since an exact solution may not always be feasible, the
hiding algorithm should be capable of identifying good
approximate solutions. There are two possible scenarios
that may lead to nonexistence of an exact solution. Under
the first scenario, DO itself does not allow for an optimal
solution due to the various supports of the participating
item sets. Under the second scenario, database DO is
capable of providing an exact solution, but the size of the
database extension is insufficient to satisfy all the required
inequalities of this solution. To tackle the first case, the
hiding algorithm assigns different degrees of importance to
different inequalities. To be more precise, while it is crucial to ensure that (4) holds for all sensitive item sets in D, so that they are properly protected from disclosure, satisfaction of (3) for an item set matters only insofar as it ensures the minimal possible impact of the sanitization process on DO.
This inherent difference in the significance of the two
inequalities, along with the fact that solving the system of
all inequalities of the form (4) always leads to a feasible
solution (i.e., for any database DO), allows the relaxation of
F. Validity of Transactions
The incorporation of the safety margin threshold in the
hiding process may lead to an unnecessary extension of DX.
V. CONCLUSION
Denial of Service: New Metrics and Their Measurement
Dr. Kannan Balasubramanian1, Kavithapandian P.2
Professor, Department of Computer Science & Engineering,
Mepco Schlenk Engineering College, Sivakasi, Tamilnadu, India.1
II M.E. [CSE], Anna University, Tiruchirappalli, Tamilnadu, India.2
kavithaasm@yahoo.com, muthukumar_ssce@yahoo.com
I. INTRODUCTION
Denial of service (DoS) is a major threat. DoS severely
disrupts legitimate communication by exhausting some
critical limited resource via packet floods or by sending
malformed packets that cause network elements to crash.
The large number of devices, applications, and resources
involved in communication offers a wide variety of
mechanisms to deny service. Effects of DoS attacks are
experienced by users as a severe slowdown, service quality
degradation, or service disruption.
DoS attacks have been studied through network simulation
or testbed experiments. Accurately measuring the
impairment of service quality perceived by human clients
during an attack is essential for evaluation and comparison
of potential DoS defenses, and for study of novel attacks.
Researchers and developers need accurate, quantitative,
and versatile DoS impact metrics whose use does not
require significant changes in current simulators and
experimental tools. Accurate metrics produce measures of service denial that agree with human perception of service denial.
Fig.: System flow: QoS measurement → introduce DoS (UDP bandwidth flood) → calculate DoS metrics → traffic analysis → graph representation.
C. Request/Response delay
Request/response delay is defined as the interval between
the time when a request is issued and the time when a
complete response is received from the destination. It
measures service denial of interactive applications (e.g.,
telnet) well, but fails to measure it for non-interactive
applications (e.g., email), which have much larger
thresholds for acceptable request/response delay. Further, it
is completely inapplicable to one-way traffic (e.g., media
traffic), which does not generate responses but is sensitive
to one-way delay, loss and jitter.
D. Transaction duration
It is the time needed for an exchange of a meaningful set of
messages between a source and a destination. This metric
Interactive applications such as Web, file transfer, telnet, email (between a user and a server), DNS, and ping involve
a human user requesting a service from a remote server,
and waiting for a response. Their primary QoS requirement
is that a response is served within a user-acceptable delay.
Research on human perception of Web traffic delay shows
that people can tolerate higher latencies for entire task
completion if some data is served incrementally.
Media applications such as conversational and streaming
audio and video have strict requirements for low loss, low
jitter, and low one-way delay. These applications further
involve a media channel (where the audio and video traffic
are sent, usually via UDP) and a control channel (for media
control). Both of these channels must provide satisfactory
service to the user. We treat control traffic as interactive
traffic requiring a 4-second partial delay.
Chat applications can be used for text and media transfer
between two human users. While the request/response
delays depend on human conversation dynamics, the receipt of user messages by the server must be acknowledged within a certain time. We model this delay requirement as a 4-second threshold on the round-trip time between the client and the
server. Additionally, we apply the QoS requirements for
media applications to the media channel of the chat
application.
DoS Metrics
We aggregate the transaction success/failure measures into several intuitive composite metrics.

Percentage of failed transactions (pft)
This metric directly captures the impact of a DoS attack on
network services by quantifying the QoS experienced by
users. For each transaction that overlaps with the attack, we
evaluate transaction success or failure applying Definition
3. A straightforward approach to the pft calculation is
dividing the number of failed transactions by the number of
all transactions during the attack. This produces biased
results for clients that generate transactions serially.
QoS-ratio
It is the ratio of the difference between a transaction's traffic measurement and its corresponding threshold,
divided by this threshold. The QoS metric for each
successful transaction shows the user-perceived service
quality, in the range (0, 1], where higher numbers indicate
better quality. It is useful to evaluate service quality
degradation during attacks. We compute it by averaging
QoS-ratios for all traffic measurements of a given
transaction that have defined thresholds.
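Both metrics reduce to a few lines. The record layout, field names, and the direction of the QoS-ratio (lower measurements are better, as for delays) are illustrative assumptions:

```python
def pft(transactions):
    """Percentage of failed transactions among those overlapping the attack."""
    overlapping = [t for t in transactions if t["overlaps_attack"]]
    failed = sum(1 for t in overlapping if not t["success"])
    return 100.0 * failed / len(overlapping)

def qos_ratio(measurements):
    """Average over (value, threshold) pairs of (threshold - value) / threshold.

    Higher is better; the result lies in (0, 1] when every measurement
    is below its threshold."""
    ratios = [(thr - val) / thr for val, thr in measurements]
    return sum(ratios) / len(ratios)

txns = [
    {"overlaps_attack": True,  "success": True},
    {"overlaps_attack": True,  "success": False},
    {"overlaps_attack": True,  "success": False},
    {"overlaps_attack": False, "success": True},   # ignored: outside attack
]
print(pft(txns))                            # 2 of 3 failed -> 66.66...
print(qos_ratio([(1.0, 4.0), (0.5, 4.0)]))  # mean of 0.75 and 0.875
```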
Topology
Four legitimate networks and two attack networks are
connected via four core routers. Each legitimate network
has four server nodes and two client nodes, and is
connected to the core via an access router.
Links between the access router and the core have 100Mbps bandwidth and 10-40-ms delay, while other links
have 1-Gbps bandwidth and no added delay. The location
of bottlenecks is chosen to mimic high-bandwidth local
networks that connect over a limited access link to an over
provisioned core. Attack networks host two attackers each,
and connect directly to core routers.
1) Programming in OPNET
Background Traffic
Each client generates a mixture of Web, DNS, FTP, IRC,
VoIP, ping, and telnet traffic. We used open-source servers
and clients when possible to generate realistic traffic at the
application, transport, and network level. For example, we
used an Apache server and wget client for Web traffic, bind
server and dig client for DNS traffic, etc. Telnet, IRC, and
VoIP clients and the VoIP server were custom-built in Perl.
Clients talk with servers in their own and adjacent
networks.
VII.CONCLUSION
Our metrics are usable and they offer the real opportunity to compare and
contrast different DoS attacks and defenses on an objective head-to-head
basis. This work will advance DoS research by providing a clear measure
of success for any proposed defense, and helping researchers gain insight
into strengths and weaknesses of their solutions.
REFERENCES
[1] B.N. Chun and D.E. Culler, User-Centric Performance Analysis of
Market-Based Cluster Batch Schedulers, Proc. Second IEEE Intl Symp.
Cluster Computing and the GridProc. Second IEEE/ACM Intl Conf.
Cluster Computing and the Grid (CCGRID 02), May 2002.
High Performance Evaluation of 600-1200 V, 1-40 A Silicon Carbide Schottky Barrier Diodes and Their Applications Using MATLAB
Manickavasagan K, P.G. Student, 401, Adhiparasakthi Engineering College,
Melmaruvathur, Tamil Nadu-603319.
kmanickavasagan@gmail.com
Abstract - The high performance of SiC Schottky Barrier Diodes in the 600-1200 V / 1-40 A range and their applications are experimentally evaluated. The SiC Schottky Barrier Diode (SBD) is commercially available in the 600-1200 V / 1-40 A range. The main advantage of a high voltage SiC SBD lies in its superior dynamic performance in the following respects: (a) the reverse recovery charge in the SiC SBD is extremely low (< 20 nC) and is the result of junction capacitance, not stored charge; furthermore, unlike the Si PiN diode, it is independent of di/dt, forward current and temperature; (b) higher junction temperature operation up to 175 °C; (c) reduction in the number of MOSFETs by 50%; (d) faster switching up to 500 kHz to reduce the EMI filter size and other passives; and (e) reduction or elimination of the active or passive snubber. The performance of a 600 V, 4 A silicon carbide (SiC) Schottky diode is experimentally evaluated. A 300 W boost power factor corrector with average current mode control (PFC) is considered as a key application. Measurements of overall efficiency, switch and diode losses and conducted electromagnetic interference (EMI) are performed both with the SiC diode and with two ultra-fast, soft-recovery silicon power diodes. The paper compares the results to quantify the impact of the recovery current reduction provided by the SiC diode on these key aspects of the converter behavior.
Keywords: SiC Schottky Barrier Diode (SBD), silicon carbide (SiC), Gallium Arsenide (GaAs)
1. Introduction

2. Electronic Properties
Although there are about 170 known crystal structures, or polytypes, of SiC, only two (4H-SiC and 6H-SiC) are available commercially. 4H-SiC is preferred over 6H-SiC for most electronics applications because it has higher and more isotropic electron mobility than 6H-SiC. Table 1 compares the key electronic properties of silicon and 4H-SiC.
3.
minute on/off cycles (3.5 min on / 3.5 min off) with the on-current set to the device rated current, a maximum junction temperature of 175 °C and a junction temperature delta of greater than 100 °C during the cycle.
The input voltage was 120 V RMS, and the output voltage was 370 V DC. The operating frequency was 90 kHz, and the gate resistance at the MOSFET was 50 Ω. The current rating of the MOSFET was higher than the average rating to accommodate the reverse recovery current of the diode, and to maintain a high efficiency of the circuit. Under full load conditions, a 600 Ω resistor was utilized, while at half load, a 1200 Ω resistor was used. Voltage and current measurements were taken on both the MOSFET and the diode, in order to estimate the power losses in these components. The input and output power was also measured to calculate the efficiency of the circuit. Under full load conditions, the temperature on the MOSFET case was measured with and without an external fan on the device. After all these measurements were taken using the ultrafast Si diode, they were repeated using Cree's 4 A, 600 V SiC SBD.
Fig. 11 shows the comparison of the switching energy losses per switching cycle in the MOSFET and diode under half load and full load conditions. Further, the turn-ON and turn-OFF losses within each device are separated. Under half load conditions, the total switching losses decrease by about 25%, from 266 µJ to 200 µJ, when the Si diode is replaced by the SiC SBD. The 50% decrease in diode turn-OFF losses and 27% decrease in MOSFET turn-ON losses are primarily responsible for this overall reduction in losses when a SiC SBD is used in the circuit as compared to when a Si diode is used. The MOSFET turn-OFF losses and diode turn-ON losses are similar when Si and SiC diodes are used in this circuit.
Under full load conditions, Diode turn OFF losses
decrease by 44%, MOSFET turn ON losses decrease by
39%, and Diode turn ON losses decrease by 29% when a
SiC diode is used in this circuit as compared to a Si diode.
The MOSFET turn OFF losses remain similar in both
cases. An overall decrease of 27% in switching losses is
measured when the circuit uses a SiC diode as compared to
a Si diode. It is worth noting that diode turn ON losses are
significantly lower as compared to Si PiN diodes under full
load conditions because of a slower turn ON process in a
PiN diode as compared to a SBD under higher current
operation. These results also show that the dominant
reduction in switching losses occurs due to the small
reverse recovery losses in the SiC diode as compared to the
case with Si diodes. Fig. 13 shows the comparison of the
measured efficiency of the entire PFC circuit between Si
and SiC diodes. At half load condition, the circuit
efficiency increases from 88.4% with the Si diode to 95%
with the SiC diode. At full load condition, the circuit
efficiency increases from 90% with the Si diode as
compared to 93% with the SiC diode. Ostensibly, the
slightly higher on-state losses in the SiC SBD result in the
relatively smaller gain in the overall circuit efficiency
under full load operating condition.
Table 1. Key electronic properties of silicon and 4H-SiC

Property                                          Silicon      4H-SiC
Band gap, Eg (eV)                                 1.12         3.26
Electron mobility, µn (cm²/V·s)                   1400         800
Hole mobility, µp (cm²/V·s)                       450          140
Intrinsic carrier concentration, ni (cm⁻³)        1.5×10^10    5×10^-9
Critical breakdown electric field, Ecrit (MV/cm)  0.25         2.2
Thermal conductivity (W/cm·K)                     1.5          3.0-3.8
Fig.: Measured waveforms: output voltage, output current, and AC input voltage.
8. CONCLUSION
It may be concluded that SiC SBDs offer significant
advantages over silicon PiN diodes in power electronic
applications such as PFC. SiC SBDs are commercially
available in 600-1200 V, 1-10 A range and can be utilized
today to enhance the performance of the PFC circuit by
improving the efficiency, reducing the switching losses in
the diode and the MOSFET, reducing the MOSFET case
temperature and reducing the number of MOSFETs.
Additionally, they can be used to simplify or even eliminate the snubber circuits, reduce the heat sink size, or increase the frequency and reduce the size of the magnetic components. In a typical 250 W PFC circuit, an
overall decrease of 27% in switching losses is measured
when the circuit uses a SiC diode as compared to a Si
diode. At full load condition, the circuit efficiency
increases from 90% with the Si diode as compared to 93%
with the SiC diode. Silicon Carbide is a wide band gap,
high breakdown field material allowing high voltage
Schottky diodes to be made. SiC Schottky diodes with 300,
600 and 1200-volt are commercially available at CREE.
The 600-volt diodes are available with 1, 2, 4, 6, 10 and
20-amp current ratings. The 1200-volt diodes are available
with 5, 10 and 20-amp current ratings. The main advantage
of a high voltage SiC Schottky diode lies in its superior
dynamic performance. Schottky diodes are majority carrier
devices and thus do not store charge in their junctions. The
reverse recovery charge in the SiC Schottky diode is
extremely low and is only the result of junction
capacitance, not stored charge. Furthermore, unlike the
silicon PiN diode, the reverse recovery characteristics of
SiC Schottkys are independent of di/dt, forward current and
junction temperature. The maximum junction temperature of 175 °C in the SiC Schottkys represents the actual
operational temperature. The ultra-low junction charge in
SiC Schottkys results in reduced switching losses in a
typical hard switched CCM PFC boost converter
application.
K. MANICKAVASAGAN
Place: 169, Irumbedu Village & Post, Vandavasi Taluk, Kildodungalur, Tiruvannamalai District, Tamil Nadu, India-604403.
Date of Birth: 30/07/1969
I. INTRODUCTION
With rapid development in computer based technology, new application areas for computer networks have emerged in Local Area Networks and Wide Area Networks. These networks have become an attractive target for abuse and a big vulnerability for the community. Securing this infrastructure has become an important research area, and network intrusion detection systems have become a standard component in security infrastructures. Intrusion detection is the process of monitoring the events occurring in a computer system or network and analyzing them for signs of intrusions, defined as attempts to compromise the confidentiality, integrity, or availability of a computer or network, or to bypass its security mechanisms.
There are generally two approaches in network intrusion detection. In misuse detection, each instance in a data set is labeled as normal or intrusion and a learning algorithm is trained over the labeled data. An anomaly detection technique builds models of normal behavior and automatically detects any deviation from it, flagging the latter as suspect. Attacks fall into four main categories [8].
A.
H_N ∪ H_S = H, where H_N is the subset of numerical attributes (e.g., number of bytes) and H_S is the subset of character attributes (e.g., service, protocol).

Notation 3: Let e_i = (h_i1, h_i2, ..., h_im), where e_i is a record, m is the number of attribute values and h_ij is the value of attribute H_j.

Notation 4: E = {e_1, e_2, ..., e_n}, where E is the set of records and n is the number of packets.

The similarity between two records over the p numerical attributes is

    Sim_P(e_i, e_j) = Σ_{k=1..p} (n_hik + n_hjk) / (n_hik × n_hjk) × A    (1)

    PN_i = (1/n) Σ_{j=1..n} h_ji,  i = 1, 2, ..., p  (p ≤ m)    (4)

The h_ji is the numerical attribute. P_S is a frequent character attribute set which consists of the q (q = m − p) most frequent character attributes.
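Because the extracted similarity formula is only partially recoverable, the sketch below shows one generic mixed-attribute similarity in the same spirit (normalized numeric distance plus exact categorical matching); it is an illustrative stand-in, not necessarily the paper's exact equation (1), and all names and values are assumptions:

```python
def similarity(ei, ej, numeric_idx, char_idx, ranges):
    """Mixed similarity of two records over numerical attributes H_N
    and character attributes H_S, scaled to [0, 1]."""
    # Numeric part: 1 - normalized absolute difference per attribute.
    num = [1 - abs(ei[k] - ej[k]) / ranges[k] for k in numeric_idx]
    # Character part: exact match per attribute (e.g. service, protocol).
    cat = [1.0 if ei[k] == ej[k] else 0.0 for k in char_idx]
    parts = num + cat
    return sum(parts) / len(parts)

# Records: (no. of bytes, duration, protocol, service)
r1 = (1000, 2.0, "tcp", "http")
r2 = (1500, 2.5, "tcp", "ftp")
ranges = {0: 10000, 1: 10.0}   # value ranges used for normalization
print(similarity(r1, r2, [0, 1], [2, 3], ranges))
```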
2)
Attribute Selection: The information gain measure
used in step (6) of above C4.5 algorithm is used to select
the test attribute at each node in the tree. Such a measure is
referred to as an attribute selection measure or a measure of
the goodness of split. The attribute with the highest
information gain is chosen as the test attribute for the
current node. This attribute minimizes the information
needed to classify the samples in the resulting partitions.
Such an information-theoretic approach minimizes the
expected number of tests needed to classify an object and
guarantees that a simple (but not necessarily the simplest)
tree is found.
3) Information Gain: Imagine selecting one case at random from a set S of cases and announcing that it belongs to some class Cj. The probability that an arbitrary sample belongs to class Cj is estimated by
    P_i = freq(C_j, S) / |S|    (5)

    Info(S) = − Σ_i P_i log2 P_i    (6)

    GainRatio(D, T) = Gain(D, T) / SplitInfo(D, T)    (10)
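These quantities can be exercised on a toy split; the class labels and the two-way partition below are illustrative:

```python
from math import log2

def info(labels):
    """Info(S) = -sum of p_i * log2(p_i) over the class distribution of S."""
    n = len(labels)
    ps = [labels.count(c) / n for c in set(labels)]
    return -sum(p * log2(p) for p in ps)

def gain_ratio(labels, partitions):
    """GainRatio = Gain / SplitInfo for a split of `labels` into
    `partitions` (a list of label sublists covering `labels`)."""
    n = len(labels)
    info_t = sum(len(part) / n * info(part) for part in partitions)
    gain = info(labels) - info_t
    split_info = -sum(len(part) / n * log2(len(part) / n)
                      for part in partitions)
    return gain / split_info

labels = ["normal"] * 6 + ["attack"] * 4
split = [["normal"] * 5 + ["attack"], ["normal"] + ["attack"] * 3]
print(round(gain_ratio(labels, split), 3))
```

C4.5 evaluates this ratio for every candidate test attribute and picks the best one for the current node.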
IV.
Fig.: Training instances are partitioned into clusters (cluster 1, ..., cluster n) with centroid distances d1 < d2 < ... < dn, and a decision tree (DT 1, DT 2, ..., DT k) is trained on each cluster.
TABLE I
CONFUSION MATRIX
A. Nearest-Consensus Rule
For clusters C1, C2, ..., Cn formed by applying the heuristic clustering method on the training instances, let r1, r2, ..., rn be the centroids of C1, C2, ..., Cn, respectively, and calculate the distances (d1, d2, ..., dn) to the centroids. In the nearest-consensus rule, the decision of the nearest candidate cluster in which there is consensus between the decisions of the heuristic and the C4.5 decision tree methods is taken as the final decision.
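The rule can be sketched as follows; the distance function, centroids, classifier callables, and the fallback when no cluster reaches consensus are illustrative stand-ins for the heuristic clustering and C4.5 components:

```python
def nearest_consensus(x, centroids, heuristic_label, dt_label, dist):
    """Among clusters ordered by centroid distance to x, return the
    decision of the nearest cluster whose heuristic and decision-tree
    labels agree; fall back to the nearest cluster's DT label."""
    order = sorted(range(len(centroids)), key=lambda i: dist(x, centroids[i]))
    for i in order:
        h, d = heuristic_label(i, x), dt_label(i, x)
        if h == d:                  # consensus between the two methods
            return h
    return dt_label(order[0], x)    # no consensus anywhere: fall back

# Toy 1-D example with two clusters.
centroids = [0.0, 10.0]
heur = lambda i, x: ["normal", "attack"][i]
dt   = lambda i, x: ["anomaly", "attack"][i]   # cluster 0 disagrees
dist = lambda a, b: abs(a - b)
print(nearest_consensus(1.0, centroids, heur, dt, dist))
# cluster 0 has no consensus, so the nearest consensus is cluster 1
```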
V. EXPERIMENTS AND RESULTS
In this section we discuss experiments with the heuristic clustering using the 10% subset of the KDD-99 data. The data in the experiment was acquired from the 1998 DARPA intrusion detection evaluation. They set up an environment to acquire raw TCP/IP dump data for a local-area network (LAN) simulating a typical U.S. Air Force LAN. More than 200 instances of 58 attack types were launched against victim UNIX and Windows NT hosts in three weeks of training data and two weeks of test data. For each TCP/IP connection, 41 quantitative and qualitative features were extracted. Attacks fall into four main categories:
DOS: denial of service
R2L: unauthorized access from a remote machine
U2R: unauthorized access to local super user (root)
Probing: surveillance and other probing
This dataset, acquired from the 1998 DARPA intrusion detection evaluation program, consists of 11000 connections and 22 types of intrusions in the test set. The test set consists of 9 subtests. In heuristic clustering the relations between categorical and continuous features are handled naturally, without any forced conversions (as in k-means) between these two types of features. The decision tree on each cluster refines the decisions by learning the subgroups within the cluster.
Actual \ Predicted      Normal      Abnormal
Normal                  TP          FP
Abnormal                FN          TN

    DetectionRate = TN / (FN + TN) × 100    (11)

    FalsePositiveRate = FP / (TP + FP) × 100    (12)

    ErrorRate = (FP + FN) / (TP + FP + FN + TN) × 100    (13)
TABLE II
DETECTION RATE, FALSE POSITIVE RATE AND ERROR RATE FOR INCREASING NUMBERS OF TRAINING INSTANCES

(a)
Training instances   Detection Rate   False Positive Rate   Error Rate
415                  90.9%            2.8%                  3.4%
820                  85.7%            2.5%                  2.9%
1200                 80.0%            4.4%                  5.4%
3000                 80.0%            6.4%                  7.5%
5200                 83.3%            2.4%                  3.1%
6130                 85.7%            1.9%                  2.7%
7117                 83.3%            3.1%                  3.9%
8045                 77.7%            6.1%                  7.3%
9300                 87.5%            2.8%                  3.6%
11245                85.7%            5.3%                  5.4%

(b)
Training instances   Detection Rate   False Positive Rate   Error Rate
415                  78.1%            5.0%                  7.2%
820                  80.0%            4.4%                  7.0%
1200                 79.4%            4.2%                  7.3%
3000                 77.7%            6.8%                  10.1%
5200                 82.0%            6.3%                  9.3%
6130                 80.5%            5.9%                  8.7%
7117                 80.4%            6.4%                  9.4%
8045                 79.5%            6.7%                  10.5%
9300                 82.6%            5.5%                  7.6%
11245                80.4%            6.4%                  10.7%
Fig.: Detection rate vs. iteration, with and without the NC rule.
Fig.: False positive rate vs. iteration, with and without the NC rule.
security
,
IEEE
Trans.Syst.,Man
and
Cybern.,
vol.35,
no.2,pp.302,Apr.2005.
[6] M. Thottan and C. Ji, Anomaly Detection in IP Networks, IEEE Trans. Signal Processing, vol. 51, no. 8, pp. 2191-2204, Aug. 2003.
[7] C. Kruegel and G. Vigna, Anomaly Detection of Web-Based Attacks, Proc. ACM Conf. Computer and Comm. Security, Oct. 2003.
[8] Z. Zhang, J. Li, C.N. Manikopoulos, J. Jorgenson, and J. Ucles, HIDE: A Hierarchical Network Intrusion Detection System Using Statistical Preprocessing and Neural Network Classification, Proc. 2001 IEEE Workshop Information Assurance, pp. 85-90, June 2001.
[9] S.T. Sarasamma, Q.A. Zhu, and J. Huff, Hierarchical Kohonen Net for Anomaly Detection in Network Security, IEEE Trans. Systems, Man, and Cybernetics-Part B, vol. 35, no. 2, pp. 450, Apr. 2005.
[10] J. Gomez and D. Dasgupta, Evolving Fuzzy Classifiers for Intrusion Detection, Proc. 2002 IEEE Workshop Information Assurance, June 2002.
[11] A. Ray, Symbolic Dynamic Analysis of Complex Systems for Anomaly Detection, Signal Processing, vol. 84, no. 7, pp. 1115-1130, 2004.
[12] N. Ye, S.M. Emran, Q. Chen, and S. Vilbert, Multivariate Statistical Analysis of Audit Trails for Host-Based Intrusion Detection, IEEE Trans. Computers, vol. 51, no. 7, pp. 810-820, 2002.
[13] H.S. Javitz and A. Valdes, The SRI IDES Statistical Anomaly Detector, Proc. IEEE Symp. Security and Privacy, pp. 316-326, May 1991.
[14] J. Kittler, M. Hatef, R.P.W. Duin, and J. Matas, On Combining Classifiers, IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20, no. 3, pp. 226-239, Mar. 1998.
[15] L.I. Kuncheva, Switching between Selection and Fusion in Combining Classifiers: An Experiment, IEEE Trans. Systems, Man, and Cybernetics, vol. 32, no. 2, pp. 146-156, Apr. 2002.
[16] R.P. Lippmann, D.J. Fried, I. Graf, J. Haines, K. Kendall, D. McClung, D. Weber, S. Webster, D. Wyschogrod, R.K. Cunningham, and M.A. Zissman, Evaluating Intrusion Detection Systems: The 1998 DARPA Off-Line Intrusion Detection Evaluation, Proc. DARPA Information Survivability Conf. and Exposition (DISCEX '00), pp. 12-26, Jan. 2000.
[17] The third international knowledge discovery and data mining tools competition dataset KDD99-Cup, http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html, 1999.
ACKNOWLEDGEMENT
The author thanks Bose of Anna University for his support in the completion of this work. Special thanks to Sam D Raj Kumar for his efforts in evaluating the system. Finally, the author is also very thankful to the reviewers for their detailed reviews and constructive comments, which have helped to significantly improve the quality of this paper.
REFERENCES
[1] S.R. Gaddam, V.V. Phoha, and K.S. Balagani, KMeans+ID3: A Novel Method for Supervised Anomaly Detection by Cascading K-Means Clustering and ID3 Decision Tree Learning Methods, IEEE Trans. Knowledge and Data Eng., vol. 19, no. 3, pp. 345, Mar. 2007.
[2] Zhi-Xin Yu, Jing-Ran Chen, and Tian-Qing Zhu, A novel adaptive intrusion detection system based on data mining, Proc. Fourth International Conference on Machine Learning and Cybernetics, vol. 4, 18-21 Aug. 2005.
[3] Jungsuk Song, Kenji Ohira, Hiroki Takakura, Yasuo Okabe, and Yongjin Kwon, A Clustering Method for Improving Performance of Anomaly-Based Intrusion Detection System, IEICE Trans. Inf. & Syst., vol. E91-D, no. 5, pp. 350, May 2008.
[4] A. Lazarevic, A. Ozgur, L. Ertoz, J. Srivastava, and V. Kumar, A Comparative Study of Anomaly Detection Schemes in Network Intrusion Detection, Proc. SIAM International Conference on Data Mining, May 2003.
[5] Suseela T. Sarasamma, Qiuming A. Zhu, and Julie Huff, Hierarchical Kohonen net for anomaly detection in network
ganesh_akash@yahoo.com
* Head Incharge, Department of Computer Applications, Bharathiar University, Coimbatore 641 046, Tamil Nadu, India.
2 tdevi@buc.edu.in
I. NATURAL COMPUTING
Nature has always fascinated mankind through all its processes. Inspired by Nature, many inventions have happened: from aeroplanes that simulated birds to computers which simulate the brain. Now, after the buzz of Information and Communication Technologies, this is the age of nanotechnology, which closely resembles Nature's technology. Nanotechnology is the engineering of nanoparticles, i.e. particles of size 1 to 100 nm (1 nm = 10^-9 m). Examples of nanoparticles are atoms, zinc oxide particles and DNA molecules. If Nature is closely observed, it can be inferred that any natural process is a nano process at the basic level. For example, raindrops are formed around nanoparticles called condensation nuclei, and a pumpkin under SEM (Scanning Electron Microscopy) will fascinate anyone with marvellous nanostructures which cannot be artificially manufactured. Thus Nature has been the source of inspiration behind every man-made technology.
E. Membrane Computing
Membrane Computing, the second form of natural computing, performs its computation in an imaginary membrane
region.
[Fig. 1. Example membrane structure, showing a region, an elementary membrane (labeled 3), and the environment.]
μ = [1 [2 ]2 ]1
w1 = aac, w2 = a
R1 = { r1 : c → (c, in2), r2 : c → (b, in2) }, P1 = { r1 > r2 }
R2 = { a → (a, out), ac → ... }, P2 = ∅
membrane. Also, in a P system, the rules are used in a maximally parallel manner, nondeterministically choosing the rules and the objects. A sequence of transitions constitutes a computation. A computation is successful if it halts, that is, it reaches a configuration where no rule can be applied to the existing objects, and the output region i0 still exists in the halting configuration. If the specified output region is an internal region, then the output is called an internal output, and the objects present in the output region in the halting configuration are the result of the computation. If i0 = 0, then the objects which leave the system during the computation are counted, and this is called an external output. A possible extension of the definition is to consider a terminal set of objects T and to count only the copies of objects from T present in the output region. Thus, basically, a P system computes a set of numbers. A system is said to be propagating if there is no rule which diminishes the number of objects in the system.
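As a toy illustration of these semantics, the maximally parallel, nondeterministic application of rules can be sketched in Python. This is a single-region sketch under simplifying assumptions (multisets as Counters, rules as plain string pairs, no membranes, target indications, priorities, or δ), not an implementation of any particular P system discussed here:

```python
import random
from collections import Counter

def step(ms, rules):
    """One maximally parallel step: nondeterministically apply rules until
    no left-hand side fits the remaining objects, then add the products."""
    ms = Counter(ms)
    produced = Counter()
    any_applied = False
    while True:
        applicable = [r for r in rules
                      if all(ms[o] >= n for o, n in Counter(r[0]).items())]
        if not applicable:
            break
        lhs, rhs = random.choice(applicable)   # nondeterministic choice
        ms.subtract(Counter(lhs))              # consume the left-hand side
        produced.update(Counter(rhs))          # products appear after the step
        any_applied = True
    ms.update(produced)
    return +ms, any_applied                    # unary + drops zero counts

def compute(ms, rules, max_steps=1000):
    """Iterate until a halting configuration (no rule applicable)."""
    for _ in range(max_steps):
        ms, applied = step(ms, rules)
        if not applied:
            return ms      # successful computation: the result is this multiset
    raise RuntimeError("computation did not halt")
```

For example, with the single rule a → bb, the multiset aa halts as bbbb after one step.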
III. DEFINING TRANSITION P SYSTEMS
A transition P system of degree n, n ≥ 1, is a construct
Π = (V, μ, w1, ..., wn, (R1, P1), ..., (Rn, Pn), i0), where:
i) V is an alphabet; its elements are called objects;
ii) μ is a membrane structure of degree n, with the membranes and the regions labeled in a one-to-one manner with elements of a given set;
iii) wi, 1 ≤ i ≤ n, are strings from V* associated with the regions 1, 2, ..., n of μ;
iv) Ri, 1 ≤ i ≤ n, are finite sets of evolution rules over V associated with the regions 1, 2, ..., n of μ; Pi is a partial order relation over Ri, 1 ≤ i ≤ n, specifying a priority relation among the rules of Ri. An evolution rule is a pair (u, v), written u → v, where u is a string over V and v = v' or v = v'δ, where v' is a string over (V × {here, out}) ∪ (V × {inj | 1 ≤ j ≤ n}) and δ is a special symbol not in V. The length of u is called the radius of the rule u → v;
v) i0 is a number between 1 and n which specifies the output membrane of Π.
Example:
A P system of degree 2 can be defined as follows:
Π = (V, μ, w1, w2, (R1, P1), (R2, P2), 1),
V = { a, b, c, d },
μ = [1 [2 ]2 ]1,
with initial multiset af and rules (the labels 4, 3, 2, 1 refer to the membranes shown in the accompanying figure):
a → ab', a → b'δ, f → ff,
b' → b, b → bc(c, in4),
ff → af > f → aδ
ACKNOWLEDGEMENT
I record my sincere thanks to Prof. Kamala Krithivasan,
Department of Computer Science and Engineering, IITM,
Chennai for kindling my interest in Membrane Computing.
Also I express my regards to Dr. Mrs. A. Pethalakshmi, Head, Department of Computer Science, M.V.M Govt Arts College (W), Dindigul, who constantly encouraged all my research activities.
REFERENCES
[1] P Systems webpage. [Online]: http://ppage.psystems.eu
[2] F. Bernardini, Membrane Systems for Molecular Computing and Biological Modelling, Ph.D. thesis, Department of Computer Science, University of Sheffield, Sheffield, 2005.
[3] S.N. Krishna, Languages of P Systems, Ph.D. thesis, Department of Mathematics, IIT Madras, Chennai, 2001.
[4] M. Madhu, Studies of P Systems as a Model of Cellular Computing, Ph.D. thesis, Department of Computer Science and Engineering, IIT Madras, Chennai, India, 2003.
[5] M. Oswald, P Automata, Ph.D. thesis, Faculty of Computer Science, TU Vienna, Vienna, Austria, 2003.
[6] Gh. Paun et al., DNA Computing: New Computing Paradigms, Texts in Theoretical Computer Science, 1998.
[7] Gh. Paun, Computing with membranes, Journal of Computer and System Sciences, 2000.
[8] Gh. Paun et al., A guide to membrane computing, Theoretical Computer Science, 2002.
[9] Gh. Paun, Introduction to Membrane Computing, Proc. Membrane Computing Workshop, 2004.
[10] Gh. Paun, Membrane Computing: An Introduction, Springer, Berlin, 2002.
V. CONCLUSIONS
Through this research paper, a preliminary review of membrane computing has been done. Membrane computing can be applied in computer science to simulate distributed computing environments and to solve the NP-complete class of problems. There are also many variants [5] of the basic P system, and some variants are used in modeling operating systems. With the advent of cloud computing and its huge parallel processing power, P systems can facilitate the development of a parallel processing language. In BioNano technology, membrane computing is used to model biological processes [2]. Still, there are many avenues where membrane computing can be applied.
1. INTRODUCTION
Mobile systems provide mobile users with global roaming services. To support this, numerous authentication approaches employ the public-key system to develop their protocols. A private authentication protocol has been proposed to prevent the home location register (HLR) from eavesdropping on communications between the roaming mobile station (MS) and the visited location register (VLR). Due to hardware limitations, the MS cannot support heavy encryption and decryption, and therefore wastes a lot of time on exponential computations. Lee and Yeh [5] proposed a delegation-based authentication protocol to solve the problems of data security, user privacy, computational load and communication efficiency in PCSs. Their protocol also adopted the public-key system to achieve the security requirements. To increase communication efficiency and save authentication time, their protocol employs off-line authentication processes, so that the VLR does not need to contact the HLR frequently and can rapidly re-authenticate the MS. Therefore, compared with related approaches such as GSM and MGSM [5][6][7], the protocol of Lee and Yeh not only has a lower computational load for the MS but also provides greater security. Though the protocol of Lee and Yeh exhibits non-repudiation in the on-line authentication process, it still has a weakness in off-line
authentications.
REFERENCES
[1] H.-Y. Lin, Security and authentication in PCS, Comput. Elect. Eng., vol. 25, no. 4, pp. 225-248, 1999.
[2] A. Mehrotra and L. Golding, Mobility and security management in the GSM system and some proposed future improvements, Proc. IEEE, vol. 86, pp. 1480-1496, July 1998.
[3] M. Long, C.-H. Wu, and J. D. Irwin, Localised authentication for internetwork roaming across wireless LANs, IEE Proc. Commun., vol. 151, no. 5, pp. 496-500, Oct. 2004.
[4] T.-F. Lee, C.-C. Chang, and T. Hwang, Private authentication techniques for the global mobility network, Wireless Personal Commun., vol. 35, no. 4, pp. 329-336, Dec. 2005.
[5] W.-B. Lee and C.-K. Yeh, A new delegation-based authentication protocol for use in portable communication systems, IEEE Trans. Wireless Commun., vol. 4, no. 1, pp. 57-64, Jan. 2005.
[6] M. Rahnema, Overview of the GSM system and protocol architecture, IEEE Commun. Mag., pp. 92-100, Apr. 1993.
ABSTRACT:
People today are demanding a communications infrastructure that is both secure and mobile. Meeting both these requirements poses a considerable challenge. Currently, mobile communication networks and systems are designed on the basis of detailed analysis of RF coverage and capacity requirements. Security and privacy issues can be addressed through good design, but eavesdropping remains a real vulnerability.
An emerging technology named FSS (frequency selective surface) is increasingly being proposed as an answer to the deployment of secure wireless systems for indoor wireless environments, taking advantage of innovative techniques in building design and the use of attenuating materials. FSSs could be deployed to allow certain frequencies to propagate into a room while reflecting other frequencies.
In this paper we analyze a frequency selective surface (FSS) attached to an existing common construction material at a certain distance, transforming it into a frequency selective filtering wall which isolates the indoor wireless system from external interference.
INTRODUCTION
The interference between co-existing indoor wireless systems is becoming an important issue. Unwanted interference not only degrades system performance but also compromises security. Interference mitigation can be achieved using advanced signal processing or antenna design, but in indoor environments a simpler and more effective approach may be to physically modify the propagation environment. More specifically, a building wall might be transformed into a frequency selective (FS) filter which filters
[Figure: the air, FSS, and wall sections of the structure, each characterized by its own wave coefficients (A, B, C, D) for the cascading analysis.]
II. FACTORS INFLUENCING THE FSS PERFORMANCE AND DESIGN
The performance and behavior of FSS filters depend on the following factors: the conductivity of the FSS conductor; the geometry of the FSS element (shape, width of the conductive strip lines, proximity of the conductive strip lines, thickness of the conductor); the permittivity of the FSS substrate; the period of the FSS array; the number of FSS arrays when these are employed in a cascade; the electrical distance between the FSS arrays in cascade configurations; the choice of element types in hybrid FSS configurations; and the finite number of periods and the metallic frames surrounding the FSS window.
III. EQUIVALENT CIRCUIT METHOD
Equivalent circuit modeling is the preferred modeling technique used in this research. Although it can only be used for linear polarization and is limited to certain element shapes, it offers the designer a quick and simple way to evaluate possible FSS solutions without intensive computing. EC modeling is ideal for use as an initial tool in FSS design. Each layer of a square-loop FSS can be represented by an equivalent circuit with an inductive component in series with a capacitive component. The FSS is modeled by separating the square-loop elements into vertical and horizontal strips (gratings). For TE wave incidence, the vertical strips contribute to the inductive impedance in the equivalent circuit, where the value L is calculated according to equation 1.
X_L/Z0 = ωL = (d/p) F(p, 2s, λ, φ)    ...(1)
Similarly, the horizontal gratings correspond to the capacitive component, and the C value is calculated according to equation 2:
B_C/Z0 = ωC = 4 (d/p) F(p, g, λ, φ) ε_eff    ...(2)
In the equivalent circuit model for the square-loop FSS, the function F is given by
F(p, w, λ, φ) = (p/λ) cos φ [ln(cosec(πw/2p)) + G(p, w, λ, φ)],
where the correction term is
G(p, w, λ, φ) = 0.5 (1 − β²)² [(1 − β²/4)(A+ + A−) + 4β² A+A−] / {(1 − β²/4) + β² (1 + β²/2 − β⁴/8)(A+ + A−) + 2β⁶ A+A−},
A± = 1/√(1 ± (2p sin φ)/λ − (p cos φ/λ)²) − 1,
β = sin(πw/2p).
It was suggested that with sufficient spacing (comparable
to a wavelength), the mutual influence between a wall and
its FSS cover can be neglected. This paper considers the
minimum spacing required between the FSS and the wall in
order that they may be modeled as acting essentially
independently.
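As a rough numerical sketch of equations 1 and 2, the leading term of F (with the correction G neglected) can be coded and the loop resonance located where X_L equals 1/B_C. The function and parameter names are illustrative, and ε_eff is treated as a constant:

```python
import math

def F(p, w, lam, phi=0.0):
    """Leading term of the grating function F(p, w, lambda, phi);
    the correction term G is neglected in this sketch."""
    return (p / lam) * math.cos(phi) * math.log(1.0 / math.sin(math.pi * w / (2 * p)))

def resonant_wavelength(d, p, s, g, eps_eff=1.0):
    """Scan wavelengths (same units as d, p, s, g) for the series L-C
    resonance of the square loop, i.e. where X_L/Z0 = 1/(B_C/Z0)."""
    best, best_err = None, float("inf")
    for k in range(50, 3000):                       # lam from 5 to 300 units
        lam = k * 0.1
        xl = (d / p) * F(p, 2 * s, lam)             # eq. (1), leading term
        bc = 4 * (d / p) * F(p, g, lam) * eps_eff   # eq. (2), leading term
        err = abs(xl - 1.0 / bc)
        if err < best_err:
            best, best_err = lam, err
    return best
```

For an assumed loop with d = 10 mm, p = 11 mm, s = 0.5 mm and g = 1 mm, this crude first-order model locates the resonance near a 39 mm wavelength (roughly 7.7 GHz); a real design would retain G and the substrate effects.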
IV. SIMULATION AND CASCADING MATRICES
The measurements presented in this paper were simulated at S-band frequencies. The frequency responses of the FS structure were obtained at various illumination angles. The transmission (S21) and reflection (S11) coefficients of both the FSS cover and the existing wall were obtained. Simulations were performed with different air spacings between the FSS and the wall surface (0, 10 and 20 mm). The FSS and the wall should act sufficiently independently for simple wave propagation.
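The cascading of matrices mentioned above can be sketched with ABCD (transmission) matrices: each air spacing is a transmission-line section, and the thin FSS sheet and the wall are approximated here as shunt admittances. This is a simplified illustration of the cascading idea, not the solver used for the reported results:

```python
import math

def line(beta_l, z0=1.0):
    """ABCD matrix of a lossless line of electrical length beta_l
    (models the air spacing between the FSS and the wall)."""
    c, s = math.cos(beta_l), math.sin(beta_l)
    return [[c, 1j * z0 * s], [1j * s / z0, c]]

def shunt(y):
    """ABCD matrix of a shunt admittance (thin-sheet FSS or wall model)."""
    return [[1, 0], [y, 1]]

def cascade(*ms):
    """Multiply ABCD matrices in propagation order."""
    out = [[1, 0], [0, 1]]
    for m in ms:
        out = [[out[0][0]*m[0][0] + out[0][1]*m[1][0],
                out[0][0]*m[0][1] + out[0][1]*m[1][1]],
               [out[1][0]*m[0][0] + out[1][1]*m[1][0],
                out[1][0]*m[0][1] + out[1][1]*m[1][1]]]
    return out

def s21(abcd, z0=1.0):
    """Transmission coefficient of a two-port between equal impedances."""
    (a, b), (c, d) = abcd
    return 2.0 / (a + b / z0 + c * z0 + d)

# e.g. FSS sheet, air gap, then a lossy wall sheet (all values illustrative):
t = abs(s21(cascade(shunt(1j * 0.5), line(0.8), shunt(0.2 + 0.1j))))
```

A lossless line alone transmits all power (|S21| = 1), which is a handy sanity check for the cascade.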
V. INTERACTION BETWEEN FSS AND WALL
The simulated FS wall performance at 0° signal incidence with varying air spacing is shown in Fig. 3. When the air spacing is 0 mm, fr for the FS wall is at 2 GHz, shifted from the designed fr of 2.4 GHz, which suggests that the overall response is significantly influenced by the wall structure. Conversely, when the air spacing is increased to 10 mm or 20 mm, fr does not shift from the designed 2.4 GHz. Therefore, 10 mm air spacing provides sufficient independence to ensure insignificant interaction between the FSS and the wall.
VI. RESULTS
The simulation carried out with 10 mm air spacing at 0° incidence gives a resonance at 2.15 GHz with 50 dB attenuation.
VII. APPLICATIONS OF FSS
Selective shielding of the electromagnetic interference from high-power microwave heating machines adjacent to wireless communication base stations. Selective shielding of communication frequencies in sensitive areas (military installations, airports, police, etc.). Protection from harmful electromagnetic radiation, especially in the 2-3 GHz band, arising externally (wireless communication base stations) or internally (microwave ovens) in the domestic environment, schools, hospitals, etc. Control of radiation at unlicensed frequency bands (e.g. Bluetooth applications, 2.45 GHz). Pico-cellular wireless communications in office environments, such as the Personal Handy-phone System, where to improve efficiency each room needs to prevent leakage of radio waves into adjacent rooms; this implies that windows, floors and ceilings need to be shielded. Isolation of unwanted
VIII. CONCLUSION AND FUTURE SCOPE
It is concluded that a 10 mm air spacing is sufficient to provide independence between the FSS cover and the wall. This allows engineers to focus on designing an FSS with the desired response without knowledge of the specific properties of each building or office wall. The wall material may affect the absolute attenuation but not the frequency selectivity. However, the spacing required could be reduced down to almost 0 mm with an appropriate FSS design and a careful choice of dielectric substrate to provide the spacing rather than air. Future work is concerned with simulating the FSS with various element structures.
Department of Master in Computer Applications, Sri Manakula Vinayagar Engineering College, Puducherry, India.
Learning approach
The preference data stores a set of student preferences
regarding the system customization. Most of the
preferences are gathered from the student, but some of
them are defined by the system administrator. The
attributes are:
Preferred presentation format
Video speed
Sound volume
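The preference attributes above can be held in a simple record; a sketch in Python, where the field names and default values are illustrative assumptions rather than the system's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Preferences:
    """One student's customization record (field names are illustrative)."""
    presentation_format: str = "video"  # preferred presentation format
    video_speed: float = 1.0            # playback-speed multiplier
    sound_volume: int = 70              # percent
    admin_defined: bool = False         # True if set by the administrator

# A student overrides only the fields they care about:
p = Preferences(video_speed=1.25)
```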
otherwise.
[Figure: system architecture - the system interface (GUI) interacting with Profiler, Learner, Content, Monitoring, Feedback, and Adaptation service agents and a Profile Database.]
F. Feedback service
The feedback service collects multiple feedback measures, such as reading time, number of scrolls, number of print/save actions, and a relational index on the chatting history, and stores them in the user profile.
G. Adaptation service
The feedback measures maintained in the user profile can be used to adapt and provide content to individual users and groups of users. The adaptation service consists of an adaptation algorithm which adapts the contents and provides personalized content to the e-learners.
H. Adaptation algorithm
The web pages are organized by topics which are structured in the e-content ontology. Each topic is attached with several keywords.
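One simple way such an adaptation step could rank topic pages is by a weighted overlap between each topic's keywords and keyword weights accumulated in the learner's profile. This sketch is illustrative only, not the paper's actual adaptation algorithm:

```python
def rank_pages(pages, profile_keywords):
    """pages: {page_id: set of topic keywords};
    profile_keywords: {keyword: weight learned from feedback}.
    Score each page by the summed weights of its matching keywords
    and return page ids from most to least relevant."""
    scores = {}
    for pid, kws in pages.items():
        scores[pid] = sum(profile_keywords.get(k, 0.0) for k in kws)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical topics and a profile built from feedback weights:
pages = {"stack": {"lifo", "push", "pop"},
         "queue": {"fifo", "enqueue"},
         "tree":  {"node", "traversal"}}
profile = {"push": 2.0, "pop": 1.5, "node": 0.5}
ranking = rank_pages(pages, profile)   # "stack" ranks first here
```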
XI. EXPERIMENTAL RESULTS
A prototype system has been implemented, and multiple feedback measures are recorded by the feedback extractor. From the feedback we obtained the learning performance curve over various iterations, and Fig. 2 shows the improvements. For system training purposes, we asked a group of students to do the following experiments:
Step 1: Select a topic such as Data Structures
1. Introduction
The use of most information systems depends on user
identification. The purpose of identification is
representation of the user or another system to the
information system being used. Based on the identification,
the information system will allow or deny access to the
system or one of its parts. With the integration of information systems, the high availability of e-services and ubiquitous solutions, identification mechanisms and efficient authorization schemes are becoming more and more important. The identification
mechanisms should guarantee first level security to the
information systems as well as the user, while the
authorization prevents unauthorized access to the
information, data and services.
The costs of public health care schemes are substantially
increasing and governments are calling for new strategies
[1]. One of the solutions is open integrated electronic
health system that is able to exchange the information
between all stakeholders in the public health care system.
The key stakeholders are: hospitals, medical centers, health
centers, insurance companies, government, patients,
employers, pharmacies, drugstores, pharmaceutical
companies, health care professionals (i.e. doctors, nurses,
pharmacists) and others. The central part of the electronic
health system is electronic health record (EHR), the
electronic document that contains the comprehensive data
of the past and present physical and mental state of health
of an individual. The EHR is highly sensitive in regard to
personal data protection as well as confidentiality, safety
and accuracy. The identification plays an important part in
fulfilling these requirements. The three basic principles of
identification are [2]:
individual itself.
In addition, the processing of sensitive data generally
requires prior approval from national data protection
authorities. In Slovenia, the law requires the approval of
the Information Commissioner. In Italy a detailed security
policy document is required, and specific technical
requirements must be met. In Spain, the processing of
health-related data triggers a requirement for more rigorous
security measures under Royal Decree 994/1999. After the
Member States have implemented the exceptions
differently and inconsistently, theEuropean Commission
Report on the implementation of the Data Protection
Directive (95/46/EC [6]) recognized the problem and the
Data Protection Working Party, an independent European
advisory body on data protection and privacy, issued the
Working Document on the processing of personal data
relating to health in electronic health records (EHR).The
document provides guidelines on the interpretation of the
applicable data protection legal framework for EHR
systems,
presents
the
general
principles
and
recommendations on eleven topics namely respecting self
determination, identification and authentication of patients
and health care professionals, authorization for accessing
EHR in order to read and write in EHR, use of EHR for
other purposes, organizational structure of an EHR system,
categories of data stored in EHR and modes of their
presentation, international transfer of medical records, data
security, transparency, liability issues and control
mechanisms for processing data in EHR.
In 2004 Slovenia adopted the Personal Data Protection Act
[7] that determines the rights, responsibilities, principles
and measures to prevent unconstitutional, unlawful and
unjustified encroachments on the privacy and dignity of an
individual in the processing of personal data. The
enforcement of the law is under the supervision of the
Information Commissioner of Republic of Slovenia. In
2008 the commissioner issued two important guidelines
[8]:
1. Guidelines for safeguarding the personal data in the Hospital Information Systems (HIS), and
2. Guidelines regarding the use of biometric technologies.
In the first guideline the commissioner points out that the
logging of user interaction could be at different levels: (1)
change log, (2) data access log and (3) full audit trail.
While the law does not require the audit trail, technically it
is possible to implement it within the HIS. The second
level is mandatory for sensitive data and must be
implemented regardless of the circumstances. The use of
group login accounts is not advisable and should be used
only in special cases (i.e. emergency team). However, for
these exceptions additional measures should take place
3. Technical Solutions
Something you know (usernames and passwords, PIN) is the most widely used identification principle for accessing IT resources today. The administrator, or the system itself using an algorithm, sets the username and password for a new user. The user can later partially (only the password) or fully (username and password) change it. This identification principle has several drawbacks:
identification principle has several drawbacks:
Reliability of identification - when the system
property.
Therefore, the use of biometric identification could be
more appropriate in healthcare systems for some users (for
example children, senior users and people with disabilities)
and special use cases (for example emergency).
4. Identification in the Slovenian Health System
In the year 2000 the Health Insurance Institute of Slovenia
(ZZZS) [9] introduced the Slovene Health Insurance Card
(KZZ) [9] as the official document applied in the
implementation of the rights deriving from the compulsory
and voluntary health insurance in Slovenia. Slovenia was
the first country to introduce an electronic card at a national
scale within the EU. The common objective of the EU member countries is to introduce an electronic document applicable
within a country and across its borders. The Slovenian card
is issued, free of charge, to every person upon the first
regulation of the compulsory health insurance status in
Slovenia and is used for the identification of the policy
owner, the patient, as well as the health service providers.
The card is made of plastic, measures 8.5 x 5.5 centimeters
and has the chip with the following data:
the card owner data,
References
1. Working Document on the processing of personal data relating to health in electronic health records (EHR), WP131, 2007.
2. Ramesh Subramanian, Computer Security, Privacy and Politics: Current Issues, Challenges and Solutions, IRM Press, 2008.
3. ISO/DIS 21091:2008, Health informatics - Directory services for security, communications and identification of professionals and patients.
4. ISACA, Control Objectives for Information and related Technology (COBIT), version 4.1, IT Governance Institute, 2007.
5. CEN/TR 15872, Health informatics - Guidelines on patient identification and cross-referencing of identities.
6. Directive 95/46/EC of the European Parliament and of the Council, http://ec.europa.eu/justice_home/fsj/privacy/docs/95-46ce/dir1995-46_part1_en.pdf (accessed March 19, 2010).
7. Personal Data Protection Act (Slovenia), http://ec.europa.eu/justice_home/fsj/privacy/docs/implementation/personal_data_protection_act_rs_2004.pdf (accessed March 19, 2010).
8. Information Commissioner (Slovenia), http://www.ip-rs.si/ (accessed March 19, 2010).
9. Health Insurance Institute of Slovenia, http://www.zzzs.si/indexeng.html.
raji_nav@yahoo.com
purushgct@yahoo.com
1. Introduction
Major technological developments and innovations in the field of information technology have made it easy for organizations to store a huge amount of data within affordable limits. Data mining techniques come in handy to extract valuable information for strategic decision making from voluminous data which is either centralized or distributed [1], [11].
The term data mining refers to extracting or mining
knowledge from a massive amount of data. Data mining
functionalities like association rule mining, cluster
analysis, classification, prediction etc. specify the different
kinds of patterns mined. Association Rule Mining (ARM)
finds interesting association or correlation among a large
2. Related Work
While practicing data mining, the database community has
identified several severe drawbacks. One of the drawbacks
frequently mentioned in many research papers is about
Page 407
3. Problem Definition
4. Proposed Approach
A new approach is proposed in this paper to
estimate the global frequent itemsets from multiple data
sources while preserving the privacy of the participating
sites. All sites in the mining process are assumed to be semi-honest. The semi-honest parties are honest but try to learn
more from received information. Any site can initiate the
mining process. To protect sensitive information of each
participating site, elliptic curve cryptography and
randomization are applied.
Algorithm 1
Input: n local frequent itemsets. Each set, denoted by Ai (1 ≤ i ≤ n), belongs to one party. s is the threshold support. Si denotes the local storage of each party. A1 denotes the commoner, i.e. i = 1. LL is the count of locally frequent itemsets received so far by the commoner; initially LL = 0. CLL is the total count of locally frequent itemsets.
Output: ∪i Ai (1 ≤ i ≤ n), that is, the set union of the locally frequent itemsets (the global candidate itemsets).
1. Compute the locally frequent itemsets at each site.
Make an entry in the local storage, Si of each party.
2. Encryption traversal:
Encrypt the locally frequent itemsets using the
4. Increment LL. If any received itemset is not present in the local storage Si, then add it to Si.
5. The process ends when LL equals CLL at the commoner.
The above algorithm meets the requirement that no party can determine any itemset's owner. Initially, each of the
participating sites generates their locally frequent itemsets
using any of the frequent itemset generation algorithms like
Apriori, FP growth tree algorithm, etc. The generated
locally frequent itemsets at each site, denoted as Ai, are
added to the local storage, Si of respective sites.
The purpose of the local storage is to avoid the infinite
looping of any itemset before reaching the commoner. This
is accomplished by forwarding any received itemset by
verifying its presence in the local storage. Thus the local
storage ensures that any itemset will pass through all the
participating sites only once, that is at the worst case.
The count of locally frequent itemsets generated at each of the participating sites is calculated to determine the end of Algorithm 1. For this, the commoner chooses a random number and adds it to its own count of locally frequent itemsets. This value is sent to the next participating site, which adds its count of locally frequent itemsets to the received value and forwards the result to the next site. The process is repeated until all the participating sites have added their counts; when the value reaches the commoner again, the commoner subtracts the random number to obtain the count of all locally frequent itemsets, denoted CLL.
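The random-offset counting round described above can be sketched as a simple ring sum; this is a minimal Python illustration (the function name is ours, not the paper's):

```python
import random

def secure_count_sum(local_counts):
    """Ring-based sum: the commoner masks its count with a random
    offset; each subsequent site adds its own count; the commoner
    removes the offset at the end to obtain CLL, without any site
    learning another site's individual count."""
    r = random.randrange(1, 10**9)   # commoner's random mask
    running = local_counts[0] + r    # commoner starts the ring
    for c in local_counts[1:]:       # each remaining site adds its count
        running += c
    return running - r               # commoner removes the mask

print(secure_count_sum([4, 7, 5]))  # → 16
```

Each intermediate site only ever sees a masked running total, so individual counts stay private as long as sites do not collude.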
The commoner maintains the count of received itemsets
(LL). On receiving any forwarded itemset, the commoner
increments LL and if it is not present in the local storage,
an entry is made. If it is present already then the received
itemset is discarded. Thus the commoner removes the
redundancies. When the other sites receive any forwarded
itemset, it is verified with the respective local storage and if
an entry is present, then the itemset is directly forwarded to
the commoner, else an entry is made and the itemset is
forwarded to random destinations. Thus we avoid the
repeated looping of any itemset.
Algorithm 1 ends when the commoner's count of received itemsets (LL) reaches CLL. The commoner's local storage then contains the union of the locally frequent itemsets of all the participating sites.
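The commoner's bookkeeping (LL, CLL, and the deduplicating local storage) can be sketched as follows. This toy Python model flattens the random forwarding into simple iteration and is illustrative only:

```python
def commoner_union(streams):
    """Toy model of Algorithm 1's termination at the commoner:
    LL counts every itemset received; the local storage keeps one
    copy of each distinct itemset, discarding redundancies."""
    storage, ll = set(), 0
    cll = sum(len(s) for s in streams)  # total locally frequent itemsets (CLL)
    for site_itemsets in streams:       # itemsets forwarded from each site
        for itemset in site_itemsets:
            ll += 1                     # commoner increments LL on every receipt
            storage.add(itemset)        # entry made only if absent
    assert ll == cll                    # process ends when LL equals CLL
    return storage                      # union of locally frequent itemsets

sites = [{("a",), ("a", "b")}, {("a",), ("b",)}, {("b",), ("a", "b")}]
print(sorted(commoner_union(sites)))
```

The result is the global candidate itemset collection; duplicates forwarded by multiple sites are absorbed by the set-based local storage, mirroring the redundancy removal described above.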
The following algorithm privately computes the
globally supported itemsets [5].
Algorithm 2
Input: Global candidate itemsets. Number of participating sites n ≥ 3. The site that initiates the mining process can be the commoner.
5. Performance Evaluation
A distributed environment with three participating sites was simulated to evaluate the performance of the proposed algorithm. Each site is a P-IV, 2.8 GHz machine with 2 GB RAM running the Windows operating system. The proposed method is implemented using Java and MS Access. Three data sources of size 10K, 50K, and 100K records are synthetically generated to study the performance of the proposed work. The locally frequent itemsets of the participating sites, which are the inputs for the algorithm, are generated using the FP-tree algorithm for supports varying from 10% to 30%.
[Figs. 1.1-1.3: Time complexity of encryption and decryption for the 10K, 50K and 100K data sources, comparing RSA and ECC, plotted against support (%) from 10% to 30%. Fig 1.3 shows the 100K data source.]
Data set | Support | No. of Sites | Accuracy
10K | 10% | 3 | 100%
10K | 20% | 3 | 100%
50K | 10% | 3 | 100%
50K | 20% | 3 | 100%
100K | 10% | 3 | 100%
100K | 20% | 3 | 100%
6. Conclusion and Future Work
Several existing algorithms for preserving privacy in a distributed environment have been analyzed, and an efficient algorithm for the same task has been proposed and implemented. The proposed method uses a mathematically rich branch of cryptography, namely elliptic curve cryptography. The proposed model assumes that the participating sites are semi-honest; it can be extended to work even for dishonest parties. The proposed model works only for homogeneous databases; the system can be improved to work for heterogeneous databases, where the attributes of the participating sites are different.
References
[32] R. Agrawal, R. Srikant, "Fast Algorithms for Mining Association Rules", in Proceedings of the 20th VLDB Conference, Santiago, Chile, 1994, pp. 487-499.
[33] M. Ashrafi, D. Taniar, K. Smith, "ODAM: An Optimized Distributed Association Rule Mining Algorithm", IEEE Distributed Systems Online, Vol. 5, No. 3, 2004.
[34] M. Ashrafi, D. Taniar, K. Smith, "Privacy-Preserving Distributed Association Rule Mining Algorithm", International Journal of Intelligent Information Technologies, Vol. 1, No. 1, 2005, pp. 46-69.
[35] M. Atallah, E. Bertino, A. Elmagarmid, M. Ibrahim, V. S. Verykios, "Disclosure limitation of sensitive rules", in Proceedings of the IEEE Knowledge and Data Exchange Workshop (KDEX'99), IEEE Computer Society, 1999, pp. 45-52.
[36] D. Cheung, V. Ng, A. Fu, Y. Fu, "Efficient Mining of Association Rules in Distributed Databases", IEEE Transactions on Knowledge and Data Engineering, Vol. 8, No. 6, 1996, pp. 911-922.
[37] C. Clifton, D. Marks, "Security and Privacy Implications of Data Mining", in Proceedings of the ACM SIGMOD Workshop on Data Mining and Knowledge Discovery, 1996, pp. 15-19.
[38] C. Clifton, "Secure Multiparty Computation Problems and Their Applications: A Review and Open Problems", in Proceedings of the Workshop on New Security Paradigms, Cloudcroft, New Mexico, 2001.
[39] C. Clifton, M. Kantarcioglu, J. Vaidya, "Defining privacy for data mining", book chapter in Data Mining: Next Generation Challenges and Future Directions, 2004.
[40] O. Goldreich, S. Micali, A. Wigderson, "How to play any mental game - a completeness theorem for protocols with honest majority", in 19th ACM Symposium on the Theory of Computing, 1987, pp. 218-229.
[41] O. Goldreich, "Secure Multiparty Computation" (working draft), 1998.
[42] J. Han, M. Kamber, Data Mining: Concepts and Techniques, Morgan Kaufmann Publishers, 2001.
[43] J. Han, J. Pei, Y. Yin, R. Mao, "Mining Frequent Patterns without Candidate Generation: A Frequent-Pattern Tree Approach", Data Mining and Knowledge Discovery, Vol. 8, No. 1, 2004, pp. 53-87.
[44] A. Inan, Y. Saygin, E. Savas, A. A. Hintoglu, A. Levi, "Privacy preserving clustering on horizontally partitioned data", in proceedings of
mustajab@yic.edu.sa
aarunagiri@yic.edu.sa
1. INTRODUCTION
2. HARMONICS
Effects of Harmonics:
Page 412
Proceedings of International Conference on Computers, Communication & Intelligence, July 22nd & 23rd 2010
6. Disturbances on communications networks and telephone lines.
The harmonics most frequently encountered (and consequently the most troublesome) on three-phase distribution systems are the odd-order harmonics (3rd, 5th, 7th, etc.). Utilities monitor harmonic orders 3, 5, 7, 11 and 13. It follows that conditioning of harmonics is imperative up to order 13 and ideally should include harmonics up to order 25.
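The %THD values reported in the measurement tables can be obtained from sampled waveforms via an FFT. The following is a minimal Python sketch (the function name, sampling rate and synthetic 50 Hz test waveform are illustrative, not from the paper):

```python
import numpy as np

def thd(signal, fs, f0, max_harmonic=25):
    """Total harmonic distortion: RMS of harmonics 2..max_harmonic
    relative to the fundamental at f0, from the FFT magnitude.
    The record length should span an integer number of fundamental
    cycles to avoid spectral leakage."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    def mag_at(f):
        return spectrum[np.argmin(np.abs(freqs - f))]

    v1 = mag_at(f0)                                   # fundamental magnitude
    harmonics = [mag_at(k * f0) for k in range(2, max_harmonic + 1)]
    return np.sqrt(sum(h * h for h in harmonics)) / v1

fs, f0 = 10000.0, 50.0
t = np.arange(0, 0.2, 1 / fs)                         # 10 cycles of 50 Hz
v = (np.sin(2 * np.pi * f0 * t)
     + 0.10 * np.sin(2 * np.pi * 3 * f0 * t)          # 10% 3rd harmonic
     + 0.05 * np.sin(2 * np.pi * 5 * f0 * t))         # 5% 5th harmonic
print(round(thd(v, fs, f0) * 100, 1))  # → 11.2 (%THD)
```

With only a 3rd and 5th harmonic present, the result matches the analytic value sqrt(0.10² + 0.05²) ≈ 11.2%.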
2.2 Selection of measurement devices:
2.2.1 Electronic regulator
2.2.2 Capacitive type
3.3
3.4 Operating principle of Digital Analyzers and data-processing techniques
3.5
3.5.1 Resistive type
[Table: measured readings across the test points; V1 falls from 100.5 to 0 while V2 stays near 110.6, W1 rises from 0.119 to 0.149, I1 ranges over 2.6-4.2, and W2 rises from 730 to 1880.]
V1 | W1 | V2 | W2
110 | 0.10 | 10.8 | 64
109.7 | 0.11 | 11.85 | 71
110.3 | 0.12 | 13.34 | 81.24
110.7 | 0.13 | 14.61 | 90.7
110.7 | 0.15 | 16.63 | 110.6
V1 | I1 | V2 | W2 | Cos(φ) | %THD (V) | %THD (I) | N (rpm)
120 | 0.09 | 5.45 | 61 | 0.5 | 1.5 | 5.6 | 690
120 | 0.124 | 10.65 | 86 | 0.675 | 1.4 | 4.1 | 1278
120 | 0.133 | 12.33 | 94 | 0.778 | 1.5 | 3.8 | 1450
120 | 0.136 | 12.72 | 94 | 0.781 | 1.7 | 5.6 | 1545
120 | 0.141 | 14.46 | 102.8 | 0.857 | 1.2 | 3.4 | 1700
120 | 0.152 | 17.2 | 113 | 0.942 | 1.6 | 4.7 | 1940
CONCLUSION
I. INTRODUCTION
At present, optical fiber communication plays an important role in cable communication technology for wideband, multimedia and high-speed applications [3], [5]. In order to manufacture wireless terminals for optical fiber links at reasonable cost, good agreement must be achieved between the photo-detector and the millimeter-wave circuit, together with small size and low weight. When a GaAs MESFET device is optically illuminated, absorption takes place in the gate-drain and gate-source regions, inducing both photoconductive and photovoltaic effects. The performance of a GaAs MESFET can be significantly enhanced by scaling down the device geometry. The radiation is allowed to fall on the semi-transparent Schottky gate of the device. When light is turned on, parameters such as the threshold voltage, channel charge, channel current, channel conductance and gate-to-source capacitance reach their steady-state values in less time than when light is turned off. The device performance is greatly improved by shortening the gate length. The OPFET performance also depends on the NEP (Noise Equivalent Power) and the signal-to-noise ratio (S/N) [4]. The received optical signal generates electron-hole pairs in the semiconductor, modulating the channel conductivity through the photoconductive effect and the channel conductance through the development of a forward photovoltage due to the photovoltaic effect. Electrical parameters such as the threshold voltage, drain-source current, gate capacitances and switching response affect the OPFET's performance. The diffusion process introduces less process-induced damage than ion implantation, which suffers from current reduction due to the large number of defects introduced by the implantation process.
II. THEORY
Fig. 1b Schematic structure of the device with fiber inserted partially into the substrate [8].
Fig. 1c Schematic structure of the device when the fiber is inserted up to the substrate-active layer interface [8].
Fig. 2. Switching time versus active layer thickness for dark and optical illumination conditions.
III.
Microwave and Optical Technology Letters, Vol. 26, No. 4, August 20, 2000.
[8] Nandita Saha Roy and B. B. Pal, "Frequency-Dependent OPFET Characteristics with Improved Absorption under Back Illumination", Journal of Lightwave Technology, Vol. 18, No. 4, April 2000.
CONCLUSION
kumarprasanna.r@gmail.com
travi675@gmail.com
INTRODUCTION
Many data mining applications deal with privacy sensitive
data. Financial transactions, health-care records, and network
communication traffic are some examples. Data mining in
such privacy-sensitive domains is facing growing concerns.
Therefore, we need to develop data mining techniques that are
sensitive to the privacy issue. This has fostered the
development of a class of data
2. RELATED WORK
There exists a growing body of literature on privacy sensitive
data mining. These algorithms can be divided into several
5.1 FFT AND SVD BASED DATA PERTURBATION
REFERENCES
[1] Hillol Kargupta, Souptik Datta, Qi Wang and Krishnamoorthy Sivakumar, "On the Privacy Preserving Properties of Random Data Perturbation Techniques".
[2] R. Agrawal and R. Srikant, "Privacy-preserving data mining".
[Figures: Accuracy vs Rank; Cut ratio vs RP; Rank vs RP; Cut ratio vs Rk; Rank vs Rk.]
XIII.
TABLE I
COMPARISON OF FFT AND SVD BASED DATA PERTURBATION METHODS

Measure | SVD | FFT
DP/M | 918.9826 | 921.5365
RP | 0.0036 | 0.0100
CP/CK | 5.23082 | 5.23082
Accuracy | 0.8114 | 0.8125
Time (s) | 8.79 | 3.24
XIV. CONCLUSIONS
In this paper we propose a Fast Fourier Transform (FFT)
based data perturbation method and compare its performance
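An FFT-based perturbation in this spirit can be sketched as follows. This is a minimal illustration only; the keep-ratio truncation rule and function name are our assumptions, not necessarily the paper's method:

```python
import numpy as np

def fft_perturb(x, keep_ratio=0.2):
    """Sketch of an FFT-based data perturbation: keep only the
    largest Fourier coefficients, which preserve the aggregate
    pattern of the data, and zero the rest, hiding record-level
    detail."""
    coeffs = np.fft.fft(np.asarray(x, dtype=float))
    k = max(1, int(len(coeffs) * keep_ratio))
    small = np.argsort(np.abs(coeffs))[:-k]  # indices of the smallest coefficients
    coeffs[small] = 0                        # discard fine-grained detail
    return np.fft.ifft(coeffs).real

rng = np.random.default_rng(1)
x = np.sin(np.linspace(0, 4 * np.pi, 128)) + 0.1 * rng.normal(size=128)
y = fft_perturb(x)
print(np.corrcoef(x, y)[0, 1] > 0.9)  # aggregate shape is preserved
```

Because most of the signal's energy sits in a few low-frequency coefficients, the perturbed series remains highly correlated with the original while individual values are altered.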
ksadaiyandi111@yahoo.co.in, zafrulla63@yahoo.co.in
I. INTRODUCTION
II.
Tm = Tmb − 6·γ·v0 / (0.0005736·d)            (1)

Tm / Tmb = 1 − β/d                           (2)

Tm / Tmb = 1 − β/(z·d)                       (3)

ad / a = 1 / (1 + K·d)                       (4)

where
K = …                                        (5)
IV. SIZE DEPENDENT MELTING TEMPERATURE CONSIDERING LATTICE CONTRACTION
In the present work, the size dependent lattice
constants for the tungsten nanoparticles are
TABLE I
SURFACE ENERGY (γ), BULK MELTING TEMPERATURE (Tmb), BULK MOLAR VOLUME (v0) AND β FOR TUNGSTEN [8]
γ (J/m² at 298 K) | Tmb (K) | v0 × 10⁻⁶ (m³) | β (nm)
2.753 | 3680 | 9.53 | 0.772

θD = C_Lind · [Tm / (M·v0^(2/3))]^(1/2)      (6)

[Fig. 1: Size-dependent melting temperature (2000-4000 K) of tungsten nanoparticles versus 1/size (0-0.6 nm⁻¹), comparing the liquid drop model with values from specific heat.]
⟨u²⟩ = 9·ħ²·T / (m·kB·θD²)                   (7)
V. DEBYE TEMPERATURE AND MEAN SQUARE DISPLACEMENTS
The forces between the atoms are reflected in the
Debye temperature and it is useful to have this as a reference
to characterise a crystal. The Debye temperature is a measure
of the vibrational response of the material and therefore
intimately connected with properties like the specific heat,
thermal expansion and vibrational entropy [30]. Lighter inert gas solids melt below their θD while the other crystals remain solids above it. The first theory explaining the mechanism of
melting in the bulk was proposed by Lindemann [11], who
used vibration of atoms in the crystal to explain the melting
transition. The average amplitude of thermal vibrations
increases when the temperature of the solid increases. At some
point the amplitude of vibration becomes so large that the
atoms start to invade the space of their nearest neighbors and
VI. SIZE DEPENDENT MELTING TEMPERATURE AND DEBYE TEMPERATURE FROM SPECIFIC HEAT MEASUREMENTS
The specific heat capacity Cv of a solid according to
Debye approximation is given as [13],
Cv = 9·N·kB·(T/θD)³ ∫₀ˣ t⁴·eᵗ/(eᵗ − 1)² dt,  where x = θD/T     (8)

[Fig. 3: Size effect on the MSDs (×10⁻²² m²) of tungsten nanoparticles at 300 K versus 1/size (nm⁻¹). The MSD values increase as the size is reduced, and the size effect is dominant below 10 nm.]

θD(T) = …                                                       (9)
[Fig. 2: Size dependent Debye temperature (200-290 K) for nanotungsten versus 1/size (0-0.6 nm⁻¹). The Debye temperatures determined from the idea proposed by Sadaiyandi [10] and the Debye temperatures calculated from specific heat data [13] agree well.]

[Fig. 4: Debye temperature for nanotungsten with size 8 nm versus temperature (0-4000 K). Points are the calculated values; the solid line is the fitting curve. The fitted value of θD is 271 K.]
VII.
Determinations of θD from specific heat measurements by Herr et al. [33] showed that θD decreases significantly for nanocrystalline materials. The depressed Debye temperature in a nanocrystalline sample implies a decrease in the cohesion of atoms in the nanocrystallites, which agrees well with the measured grain-size dependence of the lattice parameter [34].
[Fig. 5: Debye temperature (K) for nanotungsten with size 10 nm versus temperature (0-4000 K). Points are the calculated values; the solid line is the fitting curve. The fitted value of θD is 273.5 K.]
2) ACKNOWLEDGEMENT
The author wishes to acknowledge the management
of Velammal College of Engineering and Technology,
Madurai and Udaya School of Engineering, Nagercoil for
constant encouragement.
[Fig. 6: Debye temperature (K) for nanotungsten with size 12 nm versus temperature (0-4000 K). Points are the calculated values; the solid line is the fitting curve. The fitted value of θD is 278 K.]

3) REFERENCES
Abstract:
Introduction:
Cryptographic Properties:
The following cryptographic properties are ensured in the group key formation [2].
Member joins:
Consider that initially the group has n users {M1, M2, …, Mn}. When the group communication system announces the arrival of a new member, both the new member and the prior group members receive this notification simultaneously. The new
prior group.
Step 1: The new member M_{n+1} broadcasts its join request, together with its blinded key br_{n+1} (computed from its secret random r_{n+1}), to the current group C = {M1, …, Mn}. The sponsor M_s inserts the new member into the key tree BT(s).
Member Leaves:
Consider a group of n members from which a member M_d (d ≤ n) leaves. If d > 1, the sponsor M_s is the leaf node directly below the leaving member, i.e., M_{d-1}; otherwise, the sponsor is M2. Upon hearing about the leave
Conclusion:
References:
[1] Sangwon Lee, Yongdae Kim, Kwangjo Kim, "An Efficient Tree-based Group Key Agreement using Bilinear Map", Information and Communications University (ICU), 584 Hwaam-Dong, Yuseong-gu, Daejeon, 305-732, Korea.
muthuece.eng@gmail.com
Abstract
A biometric measures an individual's unique physical or
behavioral characteristics in order to recognize or
authenticate his identity. Biometric identification is made
up of two stages: enrolment and verification. Automatic
and reliable extraction of minutiae from fingerprint
images is a critical step in fingerprint matching. The
quality of input fingerprint images plays an important
role in the performance of automatic identification and
verification algorithms. Error-correcting codes are
proposed as a means of correcting iris readings for
authentication purposes. This paper presents a technique
that combined with a bimodal biometric verification
system that makes use of finger print images and iris
images. Each individual verification has been optimized
to operate in automatic mode and designed for security
and authentication access application.
1. INTRODUCTION
Fingerprint based identification has been one of the most successful biometric techniques used for personal identification. Each individual has unique fingerprints. A fingerprint is the pattern of ridges and valleys on the fingertip, and is thus defined by the uniqueness of the local ridge characteristics and their relationships. Minutiae points are the local ridge characteristics that occur either at a ridge ending or a ridge bifurcation [1]. A ridge ending is defined as the point where the ridge ends abruptly, and a ridge bifurcation is the point where the ridge splits into two or more branches. Automatic minutiae detection becomes a difficult task in low quality fingerprint images, where noise and contrast deficiency result in pixel configurations similar to those of minutiae. This is an important aspect that has been taken into consideration in this project for extraction of the minutiae with minimum error in a particular location.
2.1.1. Pre-Processing
[Block diagram: the fingerprint branch (normalisation, frequency and orientation estimation) and the iris branch (edge detection, normalisation, mask generation and matching, iris matching using correlation) are processed in parallel, and the two match results are combined.]
G(i, j) = M0 + sqrt( VAR0 · (I(i, j) − M)² / VAR )   if I(i, j) > M
G(i, j) = M0 − sqrt( VAR0 · (I(i, j) − M)² / VAR )   otherwise
where I(i, j) denotes the gray-level value at pixel (i, j), M and VAR denote the estimated mean and variance of I respectively, G(i, j) denotes the normalized gray-level value at pixel (i, j), and M0 and VAR0 are the desired mean and variance values respectively.
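The normalization formula above can be sketched directly in Python; a minimal illustration (the M0 and VAR0 values and the tiny test image are arbitrary):

```python
import numpy as np

def normalize(img, m0=100.0, var0=100.0):
    """Mean/variance normalization of a gray-level image following
    the piecewise formula above: pixels above the image mean are
    mapped above M0, the rest below it."""
    m, var = img.mean(), img.var()
    dev = np.sqrt(var0 * (img - m) ** 2 / var)
    return np.where(img > m, m0 + dev, m0 - dev)

img = np.array([[120.0, 80.0], [140.0, 60.0]])
out = normalize(img)
print(out.mean())  # mean pinned at M0 = 100
```

After normalization the image has the prescribed mean M0 and variance VAR0, which gives the subsequent filtering stages a predictable dynamic range.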
The estimation of the orientation of the image is then carried out as the next step. The whole image is divided into blocks of size 16×16 and the local orientation of each block, over a w×w window centred at (i, j), is computed by

Vx(i, j) = Σ_{u=i−w/2}^{i+w/2} Σ_{v=j−w/2}^{j+w/2} 2·∂x(u, v)·∂y(u, v)

Vy(i, j) = Σ_{u=i−w/2}^{i+w/2} Σ_{v=j−w/2}^{j+w/2} ( ∂x(u, v)² − ∂y(u, v)² )

θ(i, j) = (1/2)·tan⁻¹( Vx(i, j) / Vy(i, j) )
The following method is adopted for the calculation of the frequency of the local blocks. The x-signature of each block is computed along the direction perpendicular to the orientation angle in each block, using a window of size 16×32:

X[k] = (1/w) Σ_{d=0}^{w−1} G(u, v),   k = 0, 1, …, l−1,

where
u = i + (d − w/2)·cos O(i, j) + (k − l/2)·sin O(i, j),
v = j + (d − w/2)·sin O(i, j) + (l/2 − k)·cos O(i, j).

The frequency is then computed from the distance between the peaks obtained in the x-signature.
2.1.3 Post-Processing
The minutiae points obtained in the above step may contain many spurious minutiae. These may occur due to ridge breaks in the image itself which could not be repaired even after enhancement, resulting in false minutiae points which need to be removed. These unwanted minutiae points are removed in the post-processing stage. False minutiae points will be obtained at the borders, as the image ends abruptly; these are deleted using the segmented mask. As a first step, a segmented mask is created. This is created during the segmentation carried out in the pre-processing stage and contains ones in the blocks which have higher variance than the threshold and zeros for the blocks having lower variance. Finally, after enhancement and removal of false minutiae, the fingerprint is matched against the template to decide whether the person is authenticated. This is the first matching output of this paper.
2.2. IRIS MATCHING
value of local variances is taken for carrying out the process
of filtering. A Gabor filter takes care of both the frequency
components as well as the spatial coordinates. The inputs
required to create a Gabor mask are frequency, orientation
angle and variances along x and y directions. Filtering is
done for each block using the local orientation angle and
frequency. Pre-processing of the image is completed by the
steps as mentioned and the enhanced image is obtained.
Normalization is done by the same process used for the fingerprint.
2.2.2 Post-Processing
If we choose to work with a Reed-Solomon code over GF(2^8), this has a maximum length of 255 octets. The ground field of 2^8 lends a natural ease to dealing with octet-based calculations, an obvious advantage when working with bytes of data. A further advantage is that it is of characteristic two, which means that the operations of addition and subtraction are identical. This greatly simplifies arithmetic in the field, allowing us to make use of the XOR (exclusive-OR) function.
Let the code length, n, be equal to 255, the maximum value it can attain in this code. The number of information symbols, k, cannot be greater than 200, as we know that this is the maximum number of valid bytes in a good Caucasian image. In addition, the check digits are effectively unhidden and thus could be accessed by an unscrupulous systems operator. As such, we must ensure that their number does not exceed n/2, as otherwise this would allow the codeword to be deciphered.
Finally, the results of the fingerprint and iris matching are combined with an XOR operation to prove that both the fingerprint and the iris come from the same person, in order to authenticate the user.
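The characteristic-two property described above can be seen directly: addition and subtraction in GF(2^8) are both bitwise XOR. A minimal Python sketch (the byte values are illustrative):

```python
# In GF(2^8), addition and subtraction coincide and reduce to XOR,
# which is what makes a characteristic-two field convenient for
# byte-oriented Reed-Solomon arithmetic.
def gf_add(a, b):
    return a ^ b  # addition == subtraction in GF(2^8)

a, b = 0x53, 0xCA
s = gf_add(a, b)
print(hex(s))             # 0x99
print(gf_add(s, b) == a)  # "subtracting" b recovers a
```

Because every element is its own additive inverse, no separate subtraction routine (or sign tracking) is needed anywhere in the codec.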
4. CONCLUSION
The main benefit of this method is its fast running speed. The method identifies the unrecoverably corrupted areas in the fingerprint and removes them from further processing. By treating the enrolment iris reading as a codeword, and the verification reading as a corrupted codeword, we have shown that it is possible to reliably match iris readings using Reed-Solomon codes. Such an approach provides greater security than matching algorithms currently in use, as it only requires that a hashed version of the iris reading be stored, rather than an explicit template.
5. REFERENCES
[1] D. H. Ballard, "Generalizing the Hough Transform to Detect Arbitrary Shapes", Pattern Recognition, vol. 13, no. 2, pp. 111-122, 1981.
[2] A. Ross, J. Shah, and A. Jain, "Towards Reconstructing Fingerprints from Minutiae Points", Proc. SPIE Conf. Biometric Technology for Human Identification II, pp. 68-80, 2005.
[3] J. Daugman, "High confidence visual recognition of persons by a test of statistical independence", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 11, pp. 1148-1161, Nov 1993.
[4] J. Daugman, "How iris recognition works", IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 21-30, Jan 2004.
[5] A. K. Jain, L. Hong, S. Pantanki and R. Bolle, "An Identity Authentication System Using Fingerprints", Proc. of the IEEE, vol. 85, no. 9, pp. 1365-1388, 1997.
ABSTRACT
Nowadays most paper documents are kept in electronic format because of quick access and smaller storage, so retrieving the relevant documents from a large database is a major issue. Clustering documents into relevant groups is an active field of research, finding various applications in text mining, topic tracking systems, intelligent web search engines and question answering systems. This paper proposes a framework for comparing the existing document clustering algorithms. Using the framework, the details regarding the algorithms, their capabilities, evaluation metrics, data sets and the performance of the various methods are analyzed. The analysis reveals that the majority of the surveyed works use the vector space model for document representation, and that F-measure, Isim & Esim, Precision and Recall are the most frequently used metrics. Document clustering still has future scope in incremental document clustering, semi-supervised clustering, ontology based clustering, topic detection and document summarization.
KEY WORDS: Text mining; Document clustering; Unsupervised learning; Semi-supervised learning; Ontology.
INTRODUCTION
Information extraction plays a vital role in today's life. How efficiently and effectively the relevant documents are extracted from the World Wide Web is a challenging issue [16]. As today's search engines do just string matching, the documents retrieved may not be relevant to the user's query. A good document clustering approach can assist computers in organizing the document corpus automatically into a meaningful cluster hierarchy for efficient browsing and navigation, which is very valuable for overcoming the deficiencies of traditional information
PROCEDURE FOR DOCUMENT CLUSTERING
ALGORITHM/
TECHNIQUES
DATA SET
METRICS
PERFORMANCE
Recent Trends in
Data Mining (DM):
Document
clustering of DM
Publications
2005
K-means
Implemented
in
CLUTO - Toolkit
1436
papers
from different
data
mining
publications
Human
Inspection
Hierarchical
Document
Clustering - 2006
100k
documents
F-measure
Improved
Accuracy
A Hybrid Strategy
for Clustering Data
Mining
Documents- 2006
A
qualitative
method
which
extract noun/phrases
instead of words
from the documents.
Isim Intracluster s
Similarity
Esim intercluster
similarity
Incorporating User
Provided
constraints
into
Document
clustering's - 2007
SS-NMF : semi
supervised
nonnegative
matrix
factorization
framework
MEDLINE
Database,
Reuters-21578
Text Collection
and
Foreign
Broadcast
Information
Service Data
Confusion
matrix
Accuracy
metric
Topic
Oriented
Semi- Supervised
Document
Clustering - 200
7
Ontology
based
Document
topic
semantic annotation
(HOWNET
ontology knowledge
base)
Chinese
web
pages about Li
Ming same
name different
personality
football player
who
belongs
different clubs
Time analysis
for computing
Dissimilarity
matrix and
Dimensionality
analysis
A
document
Clustering
and
Ranking system for
Exploring
MEDLINE
Society
of
surgical
oncology
Bibliography
10 categories of
Document
clustering is a
part
of
Information
retrieval in this
Bisecting
K
means
clustering
algorithm
Scalability
and
Page 446
Proceedings of International Conference on Computers, Communication & Intelligence, July 22nd & 23rd 2010
PAPER TITLE
ALGORITHM/
TECHNIQUES
DATA SET
METRICS
citations - 2007
Spectroscopy
cancer
paper
Text
Document
Clustering based
on Frequent word
meaning sequences
Clustering based on
Frequent
Word
Sequences(CFWS)
and
Clustering
based on Frequent
Word
Meaning
Sequences
(CFWMS)
CLUTO kit
9
categories
from
Reuters
data set, Classic
data set and
200 documents
from
search
engine results.
F-measure
Purity
Similarity
Measures for Text
Document
Clustering
Similarity measure
Euclidian, Jaccard
coefficient,Cosine
simiarity, Pearson
correlation
coefficient, Average
Kullback Leibler
Divergence
20 newsgroups,
Classic
academic
papers, wap
web pages,
web knowldege
base - webkb
Purity
Entropy
Research
Field
Discovery Based
on Text Clustering
Newman
Fast
clustering algorithm
for network model
Research
proposals
of
National
Natural Science
and Foundation
3423 relevant
proposals
Modularity
Topic Detection by
Clustering
Keywords
Identifies
most
informative
keyword
by
Probability
,
Clustering
using
Bisecting K-means
algorithm
Wikipedia
articles
of
various
categories like
architecture,
popmusic etc
F-measure
Implemented
using
3
similarity measures like
Cosine,
Jensen
shanon
divergence for term and
document
distribution.
Jensen shanon for term gives
better result
PERFORMANCE
Page 447
Proceedings of International Conference on Computers, Communication & Intelligence, July 22nd & 23rd 2010
PAPER TITLE
ALGORITHM/
TECHNIQUES
DATA SET
METRICS
PERFORMANCE
An Active Learning Framework for Semi-supervised Document Clustering with Language Modeling
Algorithm/Techniques: Devised a gain function for selecting document pairs; language model with term-term dependence instead of bag-of-words; designed an active semi-supervised framework.
Data set: 20 Newsgroups, Reuters newswire stories, and the TDT (Topic Detection and Tracking) project.
Metrics: F-measure.

Document Clustering Description Extraction and its Applications
Algorithm/Techniques: Machine learning approaches such as SVM and a multiple linear regression model, with two benchmarks.
Data set: 2000 documents from the Information Center for Social Science of RUC.
Metrics: Precision, Recall.
Performance: SVM performs better than the other 4 machine learning methods.

A Semantic Approach for Document Clustering
Algorithm/Techniques: A framework that parses documents syntactically and semantically and converts them to a semantic graph model; algorithms: K-Nearest Neighbor, HAC, single-pass clustering.
Data set: Reuters set.
Metrics: F-measure, Entropy.
Performance: Clustering quality outperforms VSM.

Ontology Enhanced Clustering Based Summarization of Medical Documents
Data set: PubMed documents.
Metrics: Precision, Recall, NMI (normalized mutual information).
Performance: MeSH-descriptor-based summarization gives better summaries.
Text Document Clustering Based on Neighbors
Algorithm/Techniques: 3 methods: 1. a new method for selecting the initial cluster centroid; 2. similarity links and cosine similarity; 3. a heuristic function for splitting the cluster.
Data set: Reuters and MEDLINE.
Metrics: F-measure, Purity.

A Comparison of Document Clustering Techniques (2000)
Algorithm/Techniques: Similarity measure, centroid, UPGMA and inter-cluster similarity.
Data set: Reuters; WebACE (a web agent for document categorization and exploration).
Metrics: F-measure.
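Purity and the F-measure, the metrics that recur most often in the comparison above, can be computed directly from cluster assignments and gold class labels. The sketch below uses the standard formulas with made-up labels; it is a generic illustration, not any surveyed paper's code:

```python
from collections import Counter

def purity(clusters, labels):
    """Fraction of documents assigned to their cluster's majority class."""
    total = sum(len(c) for c in clusters)
    majority = sum(Counter(labels[i] for i in c).most_common(1)[0][1]
                   for c in clusters)
    return majority / total

def f_measure(clusters, labels):
    """For each class, take the best F1 over clusters; weight by class size."""
    total = len(labels)
    score = 0.0
    for cls, n_cls in Counter(labels).items():
        best = 0.0
        for c in clusters:
            tp = sum(1 for i in c if labels[i] == cls)  # class members in cluster
            if tp == 0:
                continue
            p, r = tp / len(c), tp / n_cls
            best = max(best, 2 * p * r / (p + r))
        score += (n_cls / total) * best
    return score

labels = ["sports", "sports", "politics", "politics"]   # gold classes
clusters = [[0, 1], [2, 3]]                             # a perfect clustering
print(purity(clusters, labels), f_measure(clusters, labels))
```

Both metrics reach 1.0 for a perfect clustering and fall as clusters mix classes, which is why the surveyed papers report them side by side.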
[Figure: bar chart of metric usage across the surveyed papers - Precision & Recall, Confusion matrix, Inter & intra cluster similarity, Entropy & Modularity, NMI, Misclassification index, Purity, F-measure]

[Figure: bar chart of data set usage across the surveyed papers - Reuters, 20 Newsgroups, MEDLINE, Wikipedia, Classic, PubMed]
CONCLUSION
In this paper a framework to compare existing document clustering algorithms was proposed, which summarizes recent research work on various algorithms in document clustering. This framework precisely states the details of the algorithms, data sets, metrics and performance. Document clustering still offers scope for work on various issues such as incremental document clustering, semi-supervised clustering, topic detection and document summarization.
Abstract
Optimization has significant practical importance, particularly for machining operations. To increase the accuracy of the finished product, the cutting tool must be kept in good condition as far as possible. The tool must be in good condition especially when machining high-precision components, i.e. when the tolerances are very close, and for this optimized cutting conditions are needed. To keep the tool in good condition, machining parameters such as speed, feed, depth of cut and average gray intensity level should be optimized. This paper aims to find safe cutting conditions that minimize tool wear by optimizing the input parameters using a genetic algorithm technique.
Introduction
The growing demand for product quality and economy has forced the incorporation of process-parameter monitoring into automated manufacturing systems. The greatest limitation of automation in the machining operation is tool wear: as tool wear increases, tool life decreases. Tool wear is therefore minimized by selecting optimal cutting parameters such as speed and depth of cut. The main machining parameters considered as variables of the optimization are speed and depth of cut, and the required output is minimum tool wear.
The optimization of tool wear is very important in modern machining processes. The cutting tool is the most critical part of the machining system, and recent advances in tool technology mean that many traditionally difficult materials
tool wear, i.e., the gray levels. With the given speed, feed and depth of cut, a shaping operation is performed on the work piece, and an image of the tool is captured through the machine vision system. The image is imported to the computer and, using image processing techniques, the average gray level is found.
Genetic algorithm
A genetic algorithm (GA) is a search technique used in computing to find exact or approximate solutions to optimization and search problems. It is a global population-based search technique founded on the operations of natural genetics, mimicking the natural biological process. A genetic algorithm first encodes all variables into finite-length binary strings called chromosomes. Each chromosome represents a possible solution, and an initial population of chromosomes is formed. Each chromosome is decoded and evaluated according to the fitness function.
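The encode-decode-evaluate loop described above can be sketched as a minimal GA in Python. The parameter bounds and the `tool_wear` fitness function below are hypothetical placeholders for the experimentally derived wear relation, not the authors' model:

```python
import random

# Illustrative parameter ranges (assumed, not the paper's data)
BOUNDS = {"speed": (80.0, 140.0), "feed": (0.04, 0.08), "doc": (0.4, 0.5)}
BITS = 10                      # bits per variable in the chromosome
N_VARS = len(BOUNDS)

def decode(chrom):
    """Map a binary chromosome to real-valued machining parameters."""
    vals = {}
    for i, (name, (lo, hi)) in enumerate(BOUNDS.items()):
        gene = chrom[i * BITS:(i + 1) * BITS]
        frac = int("".join(map(str, gene)), 2) / (2 ** BITS - 1)
        vals[name] = lo + frac * (hi - lo)
    return vals

def tool_wear(p):
    """Hypothetical wear model: wear grows with speed, feed and depth of cut."""
    return 0.01 * p["speed"] + 5.0 * p["feed"] + 2.0 * p["doc"]

def fitness(chrom):
    return -tool_wear(decode(chrom))   # GA maximizes fitness = minimizes wear

def evolve(pop_size=30, generations=50, pc=0.8, pm=0.02):
    pop = [[random.randint(0, 1) for _ in range(BITS * N_VARS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            # Tournament selection of two parents
            p1 = max(random.sample(pop, 3), key=fitness)
            p2 = max(random.sample(pop, 3), key=fitness)
            c1, c2 = p1[:], p2[:]
            if random.random() < pc:               # single-point crossover
                cut = random.randrange(1, len(c1))
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for c in (c1, c2):                     # bit-flip mutation
                for j in range(len(c)):
                    if random.random() < pm:
                        c[j] ^= 1
            new_pop += [c1, c2]
        pop = new_pop[:pop_size]
    return decode(max(pop, key=fitness))

best = evolve()
print(best)   # decoded parameter set with the lowest predicted wear
```

Replacing `tool_wear` with a fitted model of measured gray-level wear would turn this sketch into the optimization described in the paper.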
Input parameters
Training parameters
Experimental results
Conclusion
In this research an adaptive genetic algorithm has been used to predict the optimum cutting conditions of a tool, and the tool wear in a machining operation, by employing the GA learning algorithm. The following conclusion can be drawn from this research: the reason for introducing the genetic algorithm technique in the present study is that the machining process is complex and uncertain in nature, and the present machining theories are not adequate for practical purposes.
References
1. E. Alegre, R. Alaiz, J. Barreiro and J. Ruiz, Assessment and visualization of machine tool wear using computer vision (2008).
2. C. Felix Prasad, S. Jayabal and U. Natarajan, Optimization of tool wear in turning using genetic algorithm (2007).
3. C. Bradley and Y.S. Wong, Surface texture indicators of tool wear - a machine vision approach (2001).
4. Sukhomay Pal, P. Stephan Heyns, Burkhard H. Freyer, Nico J. Theron and Surjya K. Pal, Tool wear monitoring and selection of optimum cutting conditions with progressive tool-wear effect and input uncertainties (2009).
5. H.H. Shahabi and M.M. Ratnam, Assessment of flank wear and nose radius wear from workpiece roughness profile in turning operation using machine vision (2008).
6. Rick Riolo and Bill Worzel, Genetic Programming Theory & Practice (2006).
7. Wen-Tung Chien and Chung-Shay Tsai, The investigation on the prediction of tool wear and determination of optimum cutting conditions in machining of stainless steel (2003).
ABSTRACT
Optical wavelength division multiplexing (WDM) networking
technology has been identified as a suitable candidate for future
wide area network (WAN) environments, due to its potential
ability to meet rising demands of high bandwidth and low latency
communication. Networking protocols and algorithms are being
developed to meet the changing operational requirements in future
optical WANs. Simulation is used in the study and evaluation of
such new protocols, and is considered a critical component of
protocol design. In this paper, we construct an optical WDM
network simulation tool, which facilitates the study of switching
and routing schemes in WDM networks. This tool is designed as an
extension to the network simulator ns2. In this work, the
effectiveness of the proposed approach has been verified through
numerical example and simulated results for various network
scenarios, such as ring, mesh and interconnection topologies and
we analyse the blocking probability of distributed light path
establishment in wavelength-routed WDM networks with no
wavelength conversion. We discuss the basic types of connection blocking: i) blocking due to the dimension of the network; ii) blocking due to the offered network load; iii) blocking due to the network load in Erlang; and packet delay due to load in Erlang.
Key words: Optical Networking, WDM network, topology,
simulator, blocking probability, delay.
I. INTRODUCTION
Optical fiber communications was mainly confined to transmitting a single optical channel until the late 1980s. Because of fibre attenuation, this channel required periodic regeneration, which included detection, electronic processing and optical retransmission. Such regeneration causes a high-speed optoelectronic bottleneck and can handle only a single wavelength. The development of new-generation amplifiers enabled high-speed repeaterless single-channel transmission. A WDM system enables the fiber to carry more throughput. By using wavelength-selective devices, independent signal routing can also be accomplished. Two common network topologies can use WDM, namely the mesh and the ring networks. Each node in the mesh has a transmitter and a receiver, with the transmitter connected to one of the central passive mesh's inputs and the receiver connected to one of the mesh's outputs. WDM networks can also be of the ring variety. Rings are popular because so many electrical networks use this topology and because rings are easy to implement for any network geographical configuration.
In an ideal WDM network, each user would have its own
unique signature wavelength. Routing in such a network would
be straightforward. This situation may be possible in a small
network, but it is unlikely in a large network whose number of
users is larger than the number of provided wavelengths. In
fact, technologies that can provide and cope with 20 distinct
wavelengths are the state of the art. There are some
technological limitations in providing a large number of
wavelengths, for instance: due to channel broadening effects
and non-ideal optical filtering, channels must have minimum
wavelength spacing. Wavelength range, accuracy, and stability
are extremely difficult to control. Therefore, it is quite possible
that a given network may have more users than available
wavelengths, which will necessitate the reuse of a given set of
wavelengths at different points in the network. The same
wavelength could be reused by any of the input ports to access a
completely different output port and establish an additional
connection. This technique increases the capacity of a WDM
network. The recent emergence of high-bit-rate IP networking applications is creating the need for on-demand provisioning of wavelength routing with service-differentiated offerings within the transport layer. To fulfill these requirements, different WDM
optical network architectures have been proposed. For a service-differentiated offering network, an accurate engineering of the WDM span design is needed. So factors such as the additive nature of signal degradation, the limited cascadability of optical components and traffic-dependent signal quality should be taken into account for accurate WDM span design.
The topologies considered are ring, mesh and interconnected rings.

1. RING TOPOLOGY
1.2 Snapshot of the simulation for 9 node ring topology
2. MESH TOPOLOGY
2.1 Mesh Topology
transmission impairments and service requirements, we consider an example where networks are designed with four different link types and three link parameters considered for routing: transmission degradation, reliability and delay.
4. PERFORMANCE ANALYSIS
In order to demonstrate that our approach performs better
than that reported in the literature and to investigate the
performance of algorithms, we must resort to simulation studies
based upon the ns-2.0 Network Simulator.
Unable to find a suitable simulator that could support our proposed DWP algorithm, we designed and developed a simulator to implement routing and wavelength assignment in all-optical networks for various topologies such as ring, mesh and interconnected rings. The simulator accepts input parameters such as the number of nodes in the network, link information with weights, the number of wavelengths per fiber, and connection requests.
shows that DWP methods are useful for mesh networks, while
they bring less benefit for rings.
With an increasing number of nodes, the blocking probability decreases for mesh networks and increases for ring networks. This is because DWP can use only one alternative path in bi-directional ring structures, which is N-n hops long (N being the number of nodes, n the length of the shortest path); if selected, it allocates even more network resources.
This is due to the fact that with larger ring networks, larger amounts of network resources are allocated, delaying future calls whenever a service is accommodated; moreover, DWP can use only one alternative path in bi-directional ring structures, while it can use more candidate paths in a mesh structure.
4.3 Blocking due to Load
The call blocking probabilities are obtained as a function of
load in Erlang for the network for all topologies. Here, the
traffic load and other conflicts are kept constant. Then the graph
is plotted between the load in Erlang traffic and the blocking
probability.
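As a rough analytical baseline for such curves (assuming Poisson arrivals offered to the W wavelengths of a single link, which is a simplification of the simulated network), the Erlang B formula relates offered load in Erlang to blocking probability:

```python
def erlang_b(load_erlang: float, channels: int) -> float:
    """Blocking probability for a load of `load_erlang` Erlang offered to
    `channels` wavelengths, via the numerically stable recurrence
    B(0) = 1, B(k) = A*B(k-1) / (k + A*B(k-1))."""
    b = 1.0
    for k in range(1, channels + 1):
        b = load_erlang * b / (k + load_erlang * b)
    return b

# Blocking rises with offered load and falls with more wavelengths per fiber
print(erlang_b(4.0, 8))
print(erlang_b(8.0, 8))
```

The recurrence avoids the factorials of the closed-form Erlang B expression, so it stays accurate even for large wavelength counts.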
At low arrival rates, the load on the network is the primary cause of blocking; however, as the load in Erlang increases by increasing the traffic arrival rate while the offered load remains constant, the blocking due to load doesn't increase by much. However, the blocking due to conflicting connection requests increases as expected and becomes the dominant source of blocking for networks with low loads. We also observe in the simulation that the blocking due to load increases slightly as the arrival rate increases. Although the offered load remains constant, the actual network utilization increases, since connections which are blocked still reserve network resources for a short period of time, leading to a slight increase in
blocking probability. Furthermore, during the connection setup
process when resources are being reserved, the reserved
resources will go unused for a short period of time before the
connection can begin transmitting data. This resource
reservation overhead will be higher when the connections are
established for shorter time duration, and the number of
connections being established is higher. Thus, as the arrival rate
increases, the overall load in the network will tend to increase.
Then the graph is plotted between the delay time and the load in Erlang. The load in Erlang is increased by increasing either the traffic arrival rate or the traffic holding time. In the simulation, we observed that the delay time increases with the load in Erlang. This is due to the fact that larger amounts of network resources are allocated, delaying future calls whenever a service is accommodated.
5. CONCLUSION
In this paper, we proposed a new approach to constraint-based path selection for dynamic routing and wavelength allocation in optical networks based on WDM. Our approach considered service-specific path quality attributes, such as physical-layer impairments, reliability, policy and traffic conditions. Here we considered a dynamic routing method. Although this method requires a long setup time, it is more efficient than other methods in terms of blocking probability. To validate the network modeling, we presented the details of the revised OWns architecture, which is designed to fulfill the key characteristics of WDM networks, and we implemented the behavior of the DWP algorithm for the dynamic routing and wavelength assignment problem for various networks such as ring, mesh and interconnected topologies with no wavelength conversion, comparing the blocking probability and packet delay across the topologies. The study of blocking and delay is very useful for identifying the suitable topology for various network sizes, traffic offered loads and loads due to conflicting factors of a network.
BIBLIOGRAPHY
1) A. Jukan and Gerald Franzl, Path selection methods with multiple constraints in service-guaranteed WDM networks, IEEE/ACM Transactions on Networking, vol. 12, pp. 59-71 (2004).
2) A. Jukan and H. R. van As, Service-specific resource allocation in WDM networks with quality constraints, IEEE Journal on Selected Areas in Communications, vol. 18, pp. 2051-2061 (2000).
3) J. P. Jue and Gaoxi Xiao, Analysis of blocking probability for connection management schemes in optical networks, IEEE/ACM Transactions on Networking, vol. 17, pp. 1546-1550 (2005).
4) Ashwin Sridharan and Kumar N. Sivarajan, Blocking in all-optical networks, IEEE/ACM Transactions on Networking, vol. 12, pp. 384-396 (2004).
5) Paramjeet Singh, Ajay K. Sharma and Shaveta Rani, Routing and wavelength assignment strategies in optical networks, Optical Fiber Technology, vol. 13, pp. 191-197 (2007).
6) Vinh Trong Le, Xiaohong Jiang, Yasushi Inoguchi, Son Hong Ngo and Susumu Horiguchi, A novel dynamic survivable routing in WDM optical networks with/without sparse wavelength conversion, Optical Switching and Networking, vol. 3, pp. 173-190 (2006).
7) C. Siva Ram Murthy and Mohan Gurusamy, "Wavelength Rerouting Algorithms" (Chapter 4), WDM Optical Networks: Concepts, Design and Algorithms, PHI, 2002.
8) R. Ramaswami, et al., Optical Networks: A Practical Perspective, Morgan Kaufmann Publishers Inc., 2002.
Mr. P. Poothathan, Senior Lecturer, Department of Physics, Velammal College of Engg. & Tech., Madurai, completed his M.Sc. in Physics at Madurai Kamaraj University. He received his M.Tech. degree in Optoelectronics & Optical Communication from Rajiv Gandhi Technical University, Bhopal. He has 14 years of rich teaching experience and has published many papers in various national and international conferences. His areas of interest are optical communication and nano-computing.
Ms. S. Devipriya received her M.Sc. and M.Phil. degrees in Physics from Pondicherry University, India. She is currently working as a Lecturer in the Department of Physics at Velammal College of Engg. & Tech., Madurai. She has 5 years of teaching experience, and her research areas of interest are nonlinear physics and nanotechnology. She was a university rank holder, securing second position in her M.Sc. Physics.
Dr. S. John Ethilton, Assistant Professor & Head, Department of Physics, Velammal College of Engg. & Tech., Madurai, completed his M.Sc. in Physics at Madurai Kamaraj University and obtained his doctoral degree from Manonmaniam Sundaranar University, Tirunelveli. He has 10 years of teaching experience in total, with 3 international publications and 1 national publication. His research interests are nanotechnology and fuel cells.
mar@vcet.ac.in
rameshkumaredu@gmail.com
I.
INTRODUCTION
There is a heavy demand for advanced materials with high strength, high hardness, high temperature resistance and a high strength-to-weight ratio in present-day technologically advanced industries such as the automobile, aeronautics, nuclear and gas turbine industries. This necessity has led to the evolution of advanced materials like high-strength alloys, ceramics and fibre-reinforced composites. In machining these materials, conventional manufacturing processes are increasingly being replaced by more advanced techniques, yet it is difficult to attain a good surface finish and close tolerances. The appropriate range of feeds and cutting speeds which provide a satisfactory tool life is very limited [1,2].
Machinability of a material provides an indication of its adaptability to manufacture by a machining process. In general, machinability can be defined as an optimal combination of factors such as low cutting force, high material removal rate, good surface integrity, accurate and consistent workpiece geometrical characteristics, low tool feed rate, and good curling and breakdown of chips [3].
II.
EXPERIMENTATION
A number of experiments were conducted to study the effects of various machining parameters on the machining process. These studies have been undertaken to investigate the effects of cutting speed, feed rate and depth of cut on
surface roughness. In this work, EN 24 steel is considered as the work piece material and its composition is given in Table 1.
TABLE 1
COMPOSITION OF EN 24 STEEL
Elements: Si, Mn, Cr, Mo, Ni
Designation: 40 Ni 2 Cr 1 Mo 28
III.
RESPONSE SURFACE METHODOLOGY
Response surface methodology (RSM) is a collection of mathematical and statistical techniques useful for the modelling and analysis of problems in which the output or response is influenced by several variables and the goal is to find the correlation between the response and the variables. It can be used for optimising the response [9,10,11]. It is an empirical modelling technique devoted to the evaluation of the relations existing between a group of controlled experimental factors and the observed results of one or more selected criteria. A prior knowledge of the studied process is thus necessary to achieve a realistic model.
The first step of RSM is to define the limits of the
experimental domain to be explored. These limits are made as
wide as possible to obtain a clear response from the model.
The cutting speed, feed rate and depth of cut are the
machining variables, selected in this investigation. The
different levels retained for this study are depicted in Table 2.
In the next step, the experiments are planned by means of Response Surface Methodology (RSM) using a Central Composite Design (CCD) with three variables: eight cube points, four central points, six axial points and two centre points in the axial direction, 20 runs in total. The total number of experiments conducted with the combinations of machining parameters is presented in Table 2, and the levels are also depicted in that table. The Central Composite Design is used since it gives a comparatively accurate prediction of all response variables related to quantities measured during experimentation. CCD offers the advantage that certain level adjustments are allowed and it can be used in two-step chronological response surface methods [12].
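The CCD layout described above (eight cube points, axial points at +/-1.68179 and centre points) can be enumerated in coded units. The sketch below reproduces only the design geometry and is independent of the measured data; the six-centre-point count follows the text's "four central points ... and two centre points in axial":

```python
from itertools import product

ALPHA = 1.68179          # axial distance for a rotatable 3-factor CCD
N_CENTER = 6             # 4 central + 2 centre points in axial, per the text

def ccd_points(n_factors=3):
    """Coded design points of a central composite design."""
    cube = [list(p) for p in product((-1.0, 1.0), repeat=n_factors)]
    axial = []
    for i in range(n_factors):
        for sign in (-ALPHA, ALPHA):
            pt = [0.0] * n_factors
            pt[i] = sign
            axial.append(pt)
    center = [[0.0] * n_factors for _ in range(N_CENTER)]
    return cube + axial + center

runs = ccd_points()
print(len(runs))          # 8 cube + 6 axial + 6 centre = 20 runs
```

Each coded point maps back to actual units through the levels of Table 2, e.g. a coded cutting speed x gives 110 + 30x m/min for factor A.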
Run | A | B | C | Ra (μm)
1 | 1 | -1 | 1 | 0.678
2 | -1 | 1 | -1 | 0.589
3 | 0 | 0 | 0 | 1.366
4 | 1.68179 | 0 | 0 | 0.510
5 | 0 | 0 | 0 | 0.866
6 | 0 | 0 | 0 | 1.051
7 | 1 | -1 | -1 | 0.483
8 | 0 | 0 | -1.68179 | 0.716
9 | 1 | 1 | -1 | 0.983
10 | 0 | 0 | 0 | 0.516
11 | -1.68179 | 0 | 0 | 0.561
12 | 0 | 0 | 0 | 0.983
13 | -1 | -1 | -1 | 0.489
14 | 0 | 0 | 1.68179 | 0.733
15 | 0 | 1.68179 | 0 | 0.689
16 | 0 | -1.68179 | 0 | 0.438
17 | -1 | 1 | 1 | 0.923
18 | 1 | 1 | 1 | 0.899
19 | 0 | 0 | 0 | 0.466
20 | -1 | -1 | 1 | 0.456

TABLE 2
DIFFERENT VARIABLES USED IN THE EXPERIMENTATION AND THEIR LEVELS
Variable | Coding | Level 1 (-1) | Level 2 (0) | Level 3 (1)
Cutting speed | A | 80 | 110 | 140
Feed rate | B | 0.04 | 0.06 | 0.08
Depth of cut | C | 0.4 | 0.45 | 0.5
Ra = b0 + b1x1 + b2x2 + b3x3 + b11x1^2 + b22x2^2 + b33x3^2 + b12x1x2 + b13x1x3 + b23x2x3    (1)
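Assuming the 20 coded runs and their Ra values are available as arrays, the second-order model of Eq. (1) can be fitted by ordinary least squares. This NumPy sketch is generic and is not the statistical-package computation behind Table 4; the synthetic check at the end simply verifies the fitting routine on a known surface:

```python
import numpy as np

def design_matrix(X):
    """Columns: 1, x1, x2, x3, x1^2, x2^2, x3^2, x1x2, x1x3, x2x3."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1**2, x2**2, x3**2,
                            x1*x2, x1*x3, x2*x3])

def fit_quadratic(X, y):
    """Least-squares estimates of the coefficients b of Eq. (1)."""
    M = design_matrix(X)
    b, *_ = np.linalg.lstsq(M, y, rcond=None)
    return b

# Synthetic check: noise-free data generated from known coefficients
rng = np.random.default_rng(0)
X = rng.uniform(-1.68, 1.68, size=(20, 3))
true_b = np.array([0.87, 0.04, 0.13, 0.03, -0.1, -0.09, -0.03,
                   0.02, -0.02, 0.01])
y = design_matrix(X) @ true_b
print(np.allclose(fit_quadratic(X, y), true_b))
```

Substituting the coded run matrix and measured Ra column from the table above for `X` and `y` would yield estimates comparable to the coefficient table reported by the authors.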
TABLE 4
ANOVA TABLE FOR SURFACE ROUGHNESS, ESTIMATED REGRESSION

Source | DF | Seq SS | Adj SS | Adj MS | F | P
Regression | 9 | 0.49487 | 0.494868 | 0.054985 | 0.76 | 0.652
Linear | 3 | 0.24668 | 0.246681 | 0.082227 | 1.14 | 0.378
Square | 3 | 0.23974 | 0.239742 | 0.079914 | 1.11 | 0.390
Interaction | 3 | 0.00844 | 0.008445 | 0.002815 | 0.04 | 0.989
Residual Error | 10 | 0.71925 | 0.719246 | 0.071925 | |
Lack-of-Fit | 5 | 0.13928 | 0.139282 | 0.027856 | 0.24 | 0.928
Pure Error | 5 | 0.57996 | 0.579963 | 0.115993 | |
Total | 19 | 1.21411 | | | |
Term | Coef | SE Coef | T | P
Constant | 0.87157 | 0.10938 | 7.968 | 0.000
A | 0.03663 | 0.07257 | 0.505 | 0.625
B | 0.12522 | 0.07257 | 1.726 | 0.115
C | 0.03226 | 0.07257 | 0.445 | 0.666
A*A | -0.09969 | 0.07065 | -1.411 | 0.189
B*B | -0.08979 | 0.07065 | -1.271 | 0.233
C*C | -0.03286 | 0.07065 | -0.465 | 0.652
A*B | 0.01925 | 0.09482 | 0.203 | 0.843
A*C | -0.02375 | 0.09482 | -0.250 | 0.807
B*C | 0.01100 | 0.09482 | 0.116 | 0.910
Also, in the 3D graphic of Fig. 1, it can be seen that when the cutting speed is low and the feed rate is high, the surface roughness is at its maximum. The 3D surface graphs for surface roughness have a curvilinear profile in accordance with the quadratic model, and the adequacy of the model is thus verified. In Fig. 2, the elliptical contours indicate a strong, positive, second-degree relationship between the cutting speed and the feed rate.
V. CONCLUSIONS
The present study develops surface roughness models for three parameters, namely cutting speed, feed rate and depth of cut, for the machining of EN 24 steel using the response surface method. The second-order response models have been validated with analysis of variance. It is found that all three machining parameters and some of their interactions have a significant effect on the surface roughness considered in the present study. With the model equations obtained, a designer can subsequently select the best combination of design variables to achieve optimum surface roughness. This will eventually reduce the machining time and save cutting tools.
References
Fig. 1 Surface plot of surface roughness vs B, A
Fig. 2 Contour plot of surface roughness vs B, A (hold value C = 0)

[1] L.N. Lopez de Lacalle, J. Perez, J.I. Llorente and J.A. Sanchez, Advanced cutting conditions for the milling of aeronautical alloys, Journal of Materials Processing Technology, vol. 100, no. 1-3, pp. 1-11, 2000.
[2] E. Brinksmeier, U. Berger and R. Jannsen, High speed milling of Ti-6Al-4V for aircraft application, First French and German Conference on High Speed Machining, Conf. Proceedings, Metz, pp. 295-306, 1997.
[3] M.Y. Noordin, V.C. Venkatesh, S. Sharif, S. Elting and A. Abdullah, Application of response surface methodology in describing the performance of coated carbide tools when turning AISI 1045 steel, Journal of Materials Processing Technology, vol. 145, no. 1, pp. 46-58, 2004.
[4] R. Snoeys and F. Van Dijck, Investigations of EDM operations by means of thermo-mathematical models, Annals of the CIRP, vol. 20 (1), pp. 35, 1971.
[5] A. Senthil Kumar, A. Raja Durai and T. Sornakumar, Machinability of hardened steel using alumina based ceramic cutting tools, International Journal of Refractory Metals and Hard Materials, vol. 21, pp. 109-117, 2003.
[6] T. Sornakumar, R. Krishnamurthy and C.V. Gogularathnam, Machining performance of ZTA cutting tool, 12th ISME Conference, vol. 2, 2001.
[7] A.J. Thomas and J. Antony, A comparative analysis of the Taguchi and DOE techniques in an aerospace environment, International Journal of Productivity and Performance Management, vol. 54, pp. 56, 2001.
[8] D. Mandal, S.K. Pal and P. Saha, Modeling of electrical discharge machining process using back propagation neural network and multi-objective optimization using non-dominating sorting genetic algorithm-II, Journal of Materials Processing Technology, vol. 186, pp. 154-162, 2007.
[9] K. Wang, Hirpa L. Gelgele, Yi Wang, Qingfeng Yuan and Minglung Fang, A hybrid intelligent method for modeling the EDM process, International Journal of Machine Tools & Manufacture, vol. 43, pp. 995-999, 2003.
[10] P.J. Wang and K.M. Tsai, Semi-empirical model on work removal and tool wear in electrical discharge machining, Journal of Materials Processing Technology, vol. 114, issue 1, pp. 1-17, 2001.
[11] K. Palanikumar, Modeling and analysis for surface roughness in machining glass fiber reinforced plastics using response surface methodology, Materials and Design, vol. 28, pp. 2611-2618, 2007.
[12] D.C. Montgomery, Design and Analysis of Experiments (Second Edition), John Wiley and Sons, New York, 1984.
[13] L. Robert Mason, Richard F. Gunst and James L. Hess, Statistical Design and Analysis of Experiments with Applications to Engineering and Science (Second Edition), John Wiley & Sons, 2003.