1. EVALUATION OF FRUIT RIPENESS USING ELECTRONIC NOSE
This paper describes the use of an electronic nose, an artificial nose that mimics the
behavior of the human nose. An electronic nose is defined as an instrument which comprises a
sensor for recognizing simple or complex odors. One of the main concerns of the food industry is
the systematic determination of fruit ripeness under harvest and post-harvest conditions, because
variability in ripeness is perceived by consumers as a lack of quality. Most of the traditional
methods that have been used to assess fruit ripeness are destructive and thus cannot be readily
applied. Hence an ethylene gas sensor is used to detect fruit ripeness, as ethylene is the key
component in fruit maturation. A good correlation between sensor signals and some fruit quality
indicators was also found. These results show that the e-nose can be used as a quality control
tool for continuous monitoring of fruit freshness at the point of sale and during shipment.
2. MALICIOUS NODE DETECTION IN A QOS ORIENTED DISTRIBUTED
APPROACH
Anitha.M, Gobinath T, Gomathi B J, Gowthaman D, Prabhakaran T
Department of Electronics and Communication Engineering
SNS College of Technology
NCFC2T15
With the advent of cloud computing, data owners are motivated to outsource their
complex data management systems from local sites to the commercial public cloud for great
flexibility and economic savings. But for protecting data privacy, sensitive data have to be
encrypted before outsourcing, which obsoletes traditional data utilization based on plaintext
keyword search. Thus, enabling an encrypted cloud data search service is of paramount
importance. Considering the large number of data users and documents in the cloud, it is
necessary to allow multiple keywords in the search request and return documents in the order of
their relevance to these keywords. Related works on searchable encryption focus on single
keyword search or Boolean keyword search, and rarely sort the search results. In this paper, for
the first time, we define and solve the challenging problem of privacy-preserving multi-keyword
ranked search over encrypted data in cloud computing (MRSE). We establish a set of strict
privacy requirements for such a secure cloud data utilization system. Among various
multi-keyword semantics, we choose the efficient similarity measure of coordinate matching, i.e., as
many matches as possible, to capture the relevance of data documents to the search query. We
further use inner product similarity to quantitatively evaluate such similarity measure. We first
propose a basic idea for the MRSE based on secure inner product computation, and then give
two significantly improved MRSE schemes to achieve various stringent privacy requirements in
two different threat models. To improve search experience of the data search service, we further
extend these two schemes to support more search semantics. Thorough analysis investigating
privacy and efficiency guarantees of the proposed schemes is given. Experiments on a real-world
data set further show that the proposed schemes indeed introduce low overhead on computation and
communication.
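As a rough illustration of the coordinate-matching idea named above (plaintext only, not the paper's encrypted inner-product construction), a minimal sketch with an invented keyword dictionary and documents:

```python
# Coordinate matching via inner product: documents and queries are binary
# keyword vectors, and the inner product counts matched keywords.
# The dictionary and documents are illustrative, not from the paper.
dictionary = ["cloud", "privacy", "search", "encryption", "ranking"]

def to_vector(keywords):
    """Binary indicator vector over the keyword dictionary."""
    return [1 if w in keywords else 0 for w in dictionary]

docs = {
    "doc1": to_vector({"cloud", "privacy", "search"}),
    "doc2": to_vector({"encryption"}),
    "doc3": to_vector({"cloud", "search", "ranking"}),
}

def rank(query_keywords):
    q = to_vector(query_keywords)
    # Inner product = number of coordinate matches (the relevance score).
    scores = {d: sum(x * y for x, y in zip(v, q)) for d, v in docs.items()}
    return sorted(scores, key=scores.get, reverse=True)

ranking = rank({"cloud", "search"})
```

In MRSE this inner product would be computed over encrypted, randomized vectors; the sketch only shows the relevance measure itself.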
5. AN EMBEDDED SYSTEM-ON-CHIP ARCHITECTURE FOR REAL-TIME
FEATURE DETECTION AND MATCHING
E.Sabarinathan, M.Senthilkumar
Department of Electrical and Electronics Engineering
K.S.R. College of Engineering
Lung cancer has killed many people in recent years. Early diagnosis is very important to
enhance the patient's chance of survival: the overall 5-year survival rate for lung cancer patients
increases from 14% to 49% if the disease is detected in time. A pulmonary nodule is the initial
indication of lung cancer. A CAD system adopted for the diagnosis of lung cancer takes lung
CT images as input and, based on an algorithm, helps radiologists perform image analysis.
The algorithm begins with a preprocessing step that improves the images by removing distortion
and enhancing the important features. This preprocessing step leads into the subsequent image
segmentation stage, using Otsu's thresholding, and the image classification stage, using an SVM
classifier. This paper proposes GLCM feature extraction and an SVM classifier to determine, at an
early stage, whether a given lung CT image is benign or malignant.
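A minimal sketch of the GLCM feature extraction named above, on a toy 4-level "image" — the CT preprocessing, GLCM parameters, and SVM training data are not given in the abstract, so everything here is illustrative:

```python
import numpy as np

# Toy gray-level co-occurrence matrix: horizontal neighbour, distance 1.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
levels = 4

glcm = np.zeros((levels, levels))
for i in range(img.shape[0]):
    for j in range(img.shape[1] - 1):
        glcm[img[i, j], img[i, j + 1]] += 1
glcm /= glcm.sum()  # normalise to a joint probability matrix

# Two classic Haralick texture features that would feed the SVM.
idx = np.arange(levels)
contrast = float(((idx[:, None] - idx[None, :]) ** 2 * glcm).sum())
energy = float((glcm ** 2).sum())
```

In practice a library routine (e.g. scikit-image's GLCM functions) would compute several distances and angles, and the resulting feature vectors would be used to train the SVM classifier.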
7. SECURE IRIS AUTHENTICATION USING VISUAL CRYPTOGRAPHY
S.Lavanya
Department of Master of Computer Applications
K.S. Rangasamy College of Technology
Content-based image retrieval (CBIR) is one of the most popular rising research
areas of digital image processing. Most of the available image search tools, such as Google
Images and Yahoo! Image search, are based on textual annotation of images. In these tools,
images are manually annotated with keywords and then retrieved using text-based search
methods. The performance of these systems is not satisfactory. The goal of CBIR is to extract
visual content of an image automatically, like color, texture, or shape. This project aims to
introduce the problems and challenges concerned with the design and the creation of CBIR
systems, which is based on the accurate image search mechanism. For efficient data
management, a system is proposed which generates metadata for image contents. This system
uses a content-based image retrieval (CBIR) system based on MPEG-7 descriptors. First,
low-level features are extracted from the query image without metadata and the images with similar
low-level features are retrieved from the CBIR system. Metadata of the result images which are
similar to the query image are extracted from the metadata database. From the resulting
metadata, common keywords are extracted and proposed as the keywords for the query image.
The extraction of color features from digital images depends on an understanding of the theory
of color and the representation of color in digital images. Color spaces are an important
component for relating color to its representation in digital form. The transformations between
different color spaces and the quantization of color information are primary determinants of a
given feature extraction method. The approach is found to be robust, with an accuracy of
92.4% across five categories.
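A hedged sketch of the color-feature extraction described above: per-channel histograms compared by histogram intersection. The bin count and the random "images" are assumptions for illustration, not the MPEG-7 descriptors the system actually uses:

```python
import numpy as np

def color_histogram(image, bins=4):
    """Per-channel RGB histogram, concatenated and normalised."""
    hist = []
    for ch in range(3):
        h, _ = np.histogram(image[..., ch], bins=bins, range=(0, 256))
        hist.append(h)
    hist = np.concatenate(hist).astype(float)
    return hist / hist.sum()

def intersection(h1, h2):
    """Histogram intersection similarity in [0, 1]."""
    return float(np.minimum(h1, h2).sum())

rng = np.random.default_rng(0)
img_a = rng.integers(0, 256, (32, 32, 3))  # stand-in query image
img_b = rng.integers(0, 256, (32, 32, 3))  # stand-in database image

sim_self = intersection(color_histogram(img_a), color_histogram(img_a))
sim_cross = intersection(color_histogram(img_a), color_histogram(img_b))
```

An image is maximally similar to itself (intersection 1.0); retrieval ranks database images by this similarity to the query.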
10. DATA INTEGRITY IN CLOUD COMPUTING
K.Latha
Department of Master of Computer Applications
K.S. Rangasamy College of Technology
Cloud computing has been envisioned as the de-facto solution to the rising storage costs
of IT enterprises. With the high costs of data storage devices, as well as the rapid rate at which
data is being generated, it proves costly for enterprises or individual users to frequently update
their hardware. Apart from reducing storage costs, outsourcing data to the cloud also helps in
reducing maintenance. Cloud storage moves the user's data to large, remotely located data
centers over which the user does not have any control. However, this unique feature of the
cloud poses many new security challenges which need to be clearly understood and resolved. This
paper provides a scheme which gives a proof of data integrity in the cloud which the customer can
employ to check the correctness of his data in the cloud. This proof can be agreed upon by both
the cloud and the customer and can be incorporated in the Service level agreement (SLA).
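A simplified challenge-response flavor of such an integrity proof can be sketched with an HMAC; this is an illustrative assumption, not the paper's scheme (real protocols let the customer verify without re-reading the full data — here the verifier keeps a copy only to keep the sketch short):

```python
import hashlib
import hmac
import os

key = os.urandom(16)                     # secret held by the customer
data = b"outsourced file contents"       # what the cloud should be storing

def prove(stored_data, challenge):
    """Cloud side: compute a proof over the data it actually stores."""
    return hmac.new(key, challenge + stored_data, hashlib.sha256).digest()

def verify(proof, challenge):
    """Customer side: recompute the expected proof and compare."""
    expected = hmac.new(key, challenge + data, hashlib.sha256).digest()
    return hmac.compare_digest(proof, expected)

challenge = os.urandom(16)               # fresh challenge per audit
ok = verify(prove(data, challenge), challenge)           # intact data
tampered = verify(prove(b"corrupted", challenge), challenge)  # modified data
```

The fresh random challenge prevents the cloud from replaying an old proof, which is the property an SLA-backed audit would rely on.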
OSNs provide built-in mechanisms enabling users to communicate and share contents
with other members. OSN users can post statuses and notes, upload photos and videos in their
own spaces, tag others to their contents, and share the contents with their friends. On the other
hand, users can also post contents in their friends' spaces. The shared contents may be connected
with multiple users. In interest-based online social media networks, users can easily create and
share personal content of interest, such as tweets, photos, music tracks, and videos. The
large-scale user-contributed content contains rich social media information such as tags, views,
favorites, and comments, which are very useful for mining social influence. The social links such
as views, favorites, and retweets indicate certain influence in the community. Since the content
of interest is essentially topic-specific, the underlying social influence is topic-sensitive. TSIM
aims to mine topic-specific influential nodes in the networks. In particular, we take Flickr, one of
the most popular photo-sharing websites, as the social media platform in our study. A novel
Topic-Sensitive Influencer Mining (TSIM) framework is used for interest-based social media
networks. TSIM aims to find topical influential users and images. The influence estimation is
determined with a hypergraph learning approach. In the hypergraph, the vertices represent users
and images, and the hyperedges are utilized to capture multi-type relations, including
visual-textual content relations among images and social links between users and images.
13. COLLEGE BUS LOCATOR
V.Surryaprabbha, M.R.Subashini, S.Sangavi, N.Sivaranjani
Department of Information Technology
V.S.B. Engineering College
Mobile Phone Tracker is a tracking application with which you can track mobile
phones. Using this application we can have a variety of tracking systems that are ideal for
personal tracking, child tracking, and elderly tracking, or for business uses like vehicle
tracking, fleet management, and bus tracking. It will also allow your relatives or college to
track your location. This is especially useful for security and business applications. Instead of
using complex and costly GPS tracking devices, you can convert your mobile phone into a
powerful tracking device. You can see the live tracking using Google Earth or GMAP through a
website.
14. INTEGRATION OF ASPECT-BASED OPINION MINING USING NAIVE BAYES
CLASSIFIER FOR PRODUCT REVIEW
V.Priyadharsini, N.Subashree, M.Sindhuja, V.Kavitha
Department of Information Technology
Nandha College of Technology
It is a common practice that merchants selling products on the Web ask their customers to review
the products and associated services. As e-commerce is becoming more and more popular, the number of
customer reviews that a product receives grows rapidly. For a popular product, the number of reviews
can be in the hundreds. This makes it difficult for a potential customer to read them all in order to make a
decision on whether to buy the product. In this project, we aim to summarize all the customer reviews of
a product. This summarization task is different from traditional text summarization because we are only
interested in the specific features of the product that customers have opinions on and also whether the
opinions are positive or negative. We do not summarize the reviews by selecting or rewriting a subset of
the original sentences from the reviews to capture their main points as in the classic text summarization.
In this paper, we only focus on mining opinion/product features that the reviewers have commented on.
A number of techniques are presented to mine such features. The proposed system analyzes the
customer reviews, finds the aspects in each review, and classifies whether each review is
positive or negative. It also compares multiple products and ranks them automatically based on
their reviews.
Big data is an evolving term that describes any voluminous amount of structured, semi-structured, and unstructured data. The term is often used when speaking about petabytes and exabytes of data. Security and privacy are huge, standing challenges in big data
storage. There are many ways to compromise data because of insufficient authentication,
authorization, and audit (AAA) controls, such as deletion or alteration of records without a
backup of the original content. The existing research work showed that it can fully support
authorized auditing and fine-grained update requests. However, such schemes in existence suffer
from several common drawbacks: (1) maintaining the storage can be a difficult task, and (2) it
requires high resource costs for implementation. This paper proposes a formal analysis
technique called full-grained updates. It includes efficient searching for downloading the
uploaded file and also focuses on designing an auditing protocol to improve the server-side
protection for efficient data confidentiality and data availability.
16. NETWORK SECURITY
Prathicksha Viswanathan, Sushmitha Selvam
Department of Information Technology
Nandha College of Technology
In the field of networking, the specialist area of network security consists of the
provisions made in an underlying computer network infrastructure, policies adopted by
the network administrator to protect the network and the network-accessible resources from
unauthorized access, and consistent and continuous monitoring and measurement of its
effectiveness (or lack thereof), combined together. That way, if something terrible is happening
you can detect it. Therefore, all the tasks that have to be done in network security break down
into three phases or classes: protection, where we configure our systems and networks as
correctly as possible; detection, where we identify that the configuration has changed or that
some network traffic indicates a problem; and reaction, where, after quickly identifying a
problem, we respond to it and return to a safe state as rapidly as possible.
17. COMBINING TAGGING SHARE FOR SOCIAL NETWORKS USING MULTI
ACCESS CONTROL AND COMMENT SPAM FILTER
Aishwaryamanju.S, Nithya.E, Soundharyam.S, Sowndharya.S, Saveetha.P
Department of Information Technology
Nandha College of Technology
In most group key management protocols, group members are authenticated by the group
leader one by one. That is, n authentication messages are required to authenticate n group
members. Then, these members share one common group key for the group communication. In
authentication protocols, users are simultaneously authenticated by the requester. That is, one
authentication message is required to authenticate n session peers. Then, the requester negotiates
one secret key with each user instead of sharing one group key among all users. Spam is
commonly defined as unsolicited messages and the goal of spam categorization is to distinguish
between spam and legitimate messages. Spam used to be considered a mere nuisance, but due to
the abundant amounts of spam being sent today, it has progressed from being a nuisance to
becoming a major problem. Naive Bayes spam filters calculate the probability of a
message being spam based on its contents. Unlike other simple filters, Bayesian spam filtering
learns from both spam and good messages, resulting in a very robust, adaptive, and efficient
anti-spam approach that, best of all, returns hardly any false positives.
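A toy version of the Bayesian spam-probability calculation described above, with an invented two-class corpus and Laplace smoothing (the real filter would be trained on a large mail corpus):

```python
import math

# Tiny made-up training corpora of tokenised messages.
spam_msgs = [["win", "money", "now"], ["free", "money"], ["win", "free", "prize"]]
ham_msgs = [["meeting", "tomorrow"], ["project", "report", "now"]]

def word_probs(msgs):
    """Return a smoothed P(word | class) estimator from token counts."""
    counts = {}
    for m in msgs:
        for w in m:
            counts[w] = counts.get(w, 0) + 1
    total = sum(counts.values())
    # Laplace smoothing so an unseen word does not zero the product.
    return lambda w: (counts.get(w, 0) + 1) / (total + 1)

p_w_spam = word_probs(spam_msgs)
p_w_ham = word_probs(ham_msgs)

def spam_score(message):
    """Naive-Bayes log-odds that the message is spam (positive => spam)."""
    prior = math.log(len(spam_msgs) / len(ham_msgs))
    return prior + sum(math.log(p_w_spam(w) / p_w_ham(w)) for w in message)

s1 = spam_score(["win", "free", "money"])   # spam-like words
s2 = spam_score(["meeting", "report"])      # ham-like words
```

The naive independence assumption lets per-word likelihood ratios simply add in log space, which is what makes the filter cheap to train and evaluate.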
18. SUMMARIZED AUTOMATED HASH-TAG TWEET SEGMENTATION
K.K.Deepa, K.Gowthamapriya, B.Keerthana, T.Krishnakaarthik
Department of Information Technology
Nandha College of Technology
An increasing amount of data on online social networks has created tremendous issues,
making the data hard to access and use. So there is a need to improve the efficiency of tweet
maintenance and searching. While information extraction algorithms facilitate the extraction of
structured relations, they are often expensive and inaccurate. The proposed system of the project
presents an automatic annotation approach with semantic content extraction. It first aligns the data
units in a cluster into different groups such that the data in the same group have the same semantics.
The main objective of the project is extracting the labels from the documents. The documents are
predicted and prioritized by labels. Then, each group is annotated from different aspects, and
the different annotations are aggregated to predict a final annotation label for it. An annotation
wrapper for the document is automatically constructed and can be used to annotate new result pages
from the same web database. The proposed system performs data extraction in a better way than the
existing one.
19. AN ITRUST MISBEHAVIOR DETECTION SCHEME IN DELAY-TOLERANT
NETWORKS
P.Sakthivel, C.Selvarathi
Department of Computer Science And Engineering
M.Kumarasamy College of Engineering
The idea of iTrust is to introduce a periodically available Trusted Authority (TA) to judge a
node's behavior based on the collected routing evidence and probabilistic checking. We model
iTrust as an inspection game and use game-theoretic analysis to demonstrate that, by setting an
appropriate investigation probability, the TA can ensure the security of DTN routing at a reduced
cost. To improve the efficiency of the proposed scheme, the TA collects the forwarding history
evidence from a node's upstream and downstream nodes, which further reduces the detection
cost. The extensive analysis results demonstrate the effectiveness and efficiency
of the proposed scheme.
20. ADVANCED APPLICATIONS FOR SMART MOBILE USING 9 AXIS
GYROSCOPIC SENSOR ON ANDROID PLATFORM
G.Senthil Kumar,
Dr.Nallini Institute of Engineering And Technology
S.Sridharan
Department of Computer Science And Engineering
Pollachi Institute of Engineering And Technology
Nowadays, one of the most important devices in our lives is the mobile phone. Mobile
phones were designed primarily to support voice communication. The rapid development of
technology is placing an enormous demand for mobile phones and similar devices as we are
requiring more and more applications from these mobile devices like weather forecasting,
navigation, gaming, entertainment, health monitoring, seismic exploration, map orientation,
gesture recognition, vibration, tap and tilt detection, etc. These applications can be realized
with a 9-axis gyroscopic sensor, which is a combination of three sensors: an accelerometer, a
gyroscope, and a compass. By using 9-axis gyroscopic sensors in smart mobiles we can have
smart sensing such as sensitive gesture detection, tap detection, tilt detection, angular
velocity detection, direction detection, vibration detection, etc.
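One of the sensing tasks listed above, tilt detection, can be sketched from a raw 3-axis accelerometer reading; the axis convention and the sample values are assumptions, and a real device would also filter the noisy signal:

```python
import math

def tilt_angles(ax, ay, az):
    """Pitch and roll (degrees) from the gravity components of an
    accelerometer reading, using a common axis convention."""
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Device lying flat on a table: gravity entirely on the z axis (m/s^2).
pitch, roll = tilt_angles(0.0, 0.0, 9.81)
```

The gyroscope and compass in the 9-axis package are typically fused with this estimate to remove drift and give an absolute heading.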
21. IMPLEMENTATION OF M-LEARNING SYSTEMS IN SMARTPHONES
S.Krishna , R.Ramya
Department of Information Technology
Tejaa Shakthi Institute of Technology For Women
This paper first analyzes the concept and features of micro lecture, mobile learning, and
ubiquitous learning, then presents the combination of micro lecture and mobile learning, to
propose an overlay of micro-learning through mobile terminals. Details are presented of a
micro lecture mobile learning system (MMLS) that can support multi-platforms, including PC
terminals and smart phones. The system combines intelligent push, speech recognition, video
annotation, Lucene full-text search, clustering analysis, Android development, and other
technologies. The platform allows learners to access micro lecture videos and other high-quality
micro lecture resources wherever and whenever they like, in whatever time intervals
they have available. Teachers can obtain statistical analysis results of the micro lectures in
MMLS to provide teaching/learning feedback and an effective communication platform. MMLS
promotes the development of micro lectures and mobile learning. A statistical analysis of the
implementation of the system shows that students using MMLS to assist their learning had
improved results on their final exams and gave a higher evaluation of the curriculum than those
who did not. The advantages and disadvantages of MMLS are also analyzed.
The prevalent blocks used in digital signal processing hardware are the adder, multiplier,
and delay elements. The better the performance of the adder structure, the better the overall
performance of the multipliers. Reducing power dissipation, delay, and area at the circuit level
is considered one of the major factors in developing low-power systems. In this paper we
introduce (i) a new 8-transistor (8T) full adder and (ii) a proposed Shannon-based 8T adder
using pass-transistor logic, which have better power and delay performance than the existing adders.
The performance of the proposed 8T adder has been compared with the 10T SERF, 10T CLRCL, and
existing 14T full adders. The proposed 8T full adder
structure has improved performance characteristics and is suitable for Array, Carry Save, and Dadda
multipliers. Also, three versions of a 3-tap FIR filter, namely the Broadcast, Unfolded Broadcast,
and Unfolded and Retimed Broadcast structures, have been implemented using three different
multipliers. Each of the multipliers used for the filters is implemented using all the existing
full adders and the proposed 8T full adder. Results show that circuits implemented using the
proposed 8T full adder have better power, delay, and cascaded performance when compared with
their peers. All the simulations were carried out using the TSMC Complementary Metal Oxide
Semiconductor (CMOS) 120 nm technology file with a supply voltage of 1.8 V. The tools used are
the Tanner EDA tools.
23. ENHANCED AUTHENTICATED ANONYMOUS SECURE ROUTING FOR
MANETS
Divya.D, Girija.M, Gokul Krishnan.T, Manoj Kumar.P, Jagadhesh.M
Department of Electronics And Communication Engineering
SNS College of Technology
Anonymous communications are important for many applications of the mobile ad hoc
networks (MANETs). A major requirement on the network is to provide unidentifiability and
unlinkability for mobile nodes and their traffic. MANETs use anonymous routing protocols
that hide node identities and routes from outside observers in order to provide anonymity
protection. However, the existing routing protocol, Authenticated Anonymous Secure Routing
(AASR), does not provide anonymity protection to data sources, destinations, and routes. In this
paper, we propose an enhanced authenticated anonymous secure routing (EAASR) to offer high
anonymity protection and to prevent the attacks. This dynamically partitions the network field
into zones and randomly chooses nodes in zones as intermediate relay nodes, which form a
non-traceable anonymous route. In addition, it hides the data initiator and receiver to strengthen
source and destination anonymity protection. It effectively counters intersection and timing
attacks. We implemented the node creation, route discovery, data transmission and attacker
11
NCFC2T15
prevention using Network simulator2 (NS2) tool. Then the performance analysis is done for
different parameters and it is shown in Xgraph.
24. ADOPTION OF SELF CONTAINED PUBLIC KEY MANAGEMENT SCHEME IN
AN ACKNOWLEDGEMENT BASED INTRUSION DETECTION SYSTEM FOR
MANETS
Deepa.M, Parvathi.M
Department of Computer Science And Engineering
Nandha Engineering College
MANETs have become a popular trend nowadays, marking a migration from wired
networks to wireless networks. A MANET gains popularity because of its self-configuring ability
and dynamic topology. It is highly used in mission-critical applications and emergency disasters.
Because of its dynamic nature, a MANET is prone to security risks and attacks. In this paper, a
self-contained public key management scheme is presented for use in the acknowledgement-based
intrusion detection system to authenticate the acknowledgement packets. This scheme achieves
near-zero communication overhead while providing security services. A small number of
cryptographic keys are given as input to all nodes prior to deployment in the network.
Mathematical combinations of pairs of keys, both public and private, are used for better
utilization of storage space. This means a combination of more than one key pair is utilized by
nodes for the encryption and decryption of messages.
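The storage idea above — a few preloaded key pairs yielding many usable combinations — can be sketched with placeholder key material (the strings stand in for real public/private key pairs, which the abstract does not specify):

```python
from itertools import combinations

# Keys loaded into a node before deployment; placeholder names only.
stored_pairs = ["kp1", "kp2", "kp3", "kp4"]

# Every unordered pair of stored key pairs is a usable combination,
# so k stored pairs yield C(k, 2) combinations: 4 pairs -> 6.
usable = list(combinations(stored_pairs, 2))
```

This is why combining key pairs improves storage utilization: the number of usable combinations grows quadratically while the stored material grows linearly.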
25. VEHICLE SHOWROOM MANAGEMENT SYSTEM
C.Dinesh Kumar, G.Manikandan, C.Kirubakaran, R.Sengottaiyan, M.Karthika
Department of Computer Science And Engineering
Nandha Engineering College
Data mining is the process of extracting hidden knowledge from databases.
Clustering is one of the important functionalities of data mining. Clustering is an adaptive
methodology in which objects are grouped together, based on the principle of maximizing the
intra-class similarity and minimizing the inter-class similarity. Various clustering algorithms
have been developed resulting in a better performance on datasets for clustering. In k-means
clustering, we are given a set of n data points in d-dimensional space Rd and an integer k and
the problem is to determine a set of k points in Rd, called centers, so as to minimize the mean
squared distance from each data point to its nearest center. A popular heuristic for k-means
clustering is Lloyd's algorithm. In this paper, we present a simple and efficient implementation
of Lloyd's k-means clustering algorithm, which we call the filtering algorithm, along with a
genetic k-means clustering algorithm and a pre-k-means clustering model.
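A minimal Lloyd's-algorithm implementation on toy 2-D data, matching the assignment/update iteration described above (the paper's filtering, genetic, and pre-k-means variants are not reproduced here):

```python
import numpy as np

def lloyd_kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: alternate nearest-center assignment
    and center recomputation as the cluster mean."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assignment step: nearest center for every point.
        dists = np.linalg.norm(points[:, None] - centers[None, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each center moves to the mean of its cluster
        # (keep the old center if a cluster happens to be empty).
        centers = np.array([
            points[labels == c].mean(axis=0) if np.any(labels == c) else centers[c]
            for c in range(k)
        ])
    return centers, labels

pts = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
centers, labels = lloyd_kmeans(pts, 2)
```

Each iteration can only decrease the mean squared distance objective, which is why the heuristic converges, though only to a local minimum dependent on the initialization.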
K.Rajamurugan, K.Gunasekar
Department of Computer Science And Engineering
Nandha Engineering College
Data size has grown rapidly in recent years with the development of the internet,
bringing in the big data era; with cloud computing, users are able to store large
amounts of data with ease. Users now use both structured and unstructured data. In big
data, due to its large size, all tasks consume more time. Internet users also share their
private data, such as health records and financial transaction records, for mining or data
analysis purposes; during that time, data anonymization is used for hiding identities or
sensitive intelligence so that data owners do not suffer economic loss. Anonymizing
large-scale data within a short span of time is a challenging task; to overcome this, an
Enhanced Top-Down Specialization (ETDS) approach can be developed, which is an
enhancement of the Two-Phase Top-Down Specialization (TPTDS) approach.
Search engine optimization (SEO) is the process of affecting the visibility of a website or
a web page in a search engine's "natural" or un-paid ("organic") search results. In general, the
earlier (or higher ranked on the search results page), and more frequently a site appears in the
search results list, the more visitors it will receive from the search engine's users. SEO may target
different kinds of search, including image search, local search, video search, academic search,
news search, and industry-specific vertical search engines. As an Internet marketing strategy, SEO
considers how search engines work, what people search for, the actual search terms or keywords
typed into search engines and which search engines are preferred by their targeted audience.
Optimizing a website may involve editing its content, HTML and associated coding to both
increase its relevance to specific keywords and to remove barriers to the activities of search
engines. Promoting a site to increase the number of back links, or inbound links, is another SEO
tactic.
Early detection of breast cancer can improve survival rates to a great extent.
Inter-observer and intra-observer errors occur frequently in the analysis of medical images, given
the high variability between the interpretations of different radiologists. To offset this variability and to
standardize the diagnostic procedures, efforts are being made to develop automated techniques
for diagnosis and grading of breast cancer images. This review aims at providing an overview
about recent advances and developments in the field of Computer Aided Diagnosis(CAD) of
breast cancer using mammograms, specifically focusing on the mathematical aspects of the
same, aiming to act as a mathematical primer for intermediates and experts in the field.
32. A RELIABLE DATA TRANSMISSION FOR CLUSTER-BASED WIRELESS
SENSOR NETWORKS
Jeevitha.A, Satheeshkumar.S
Department of Computer Science And Engineering
We propose two secure and efficient data transmission (SET) protocols for
cluster-based wireless sensor networks (CWSNs), called SET-IBS, using the identity-based digital
signature (IBS) scheme, and SET-IBOOS, using the identity-based online/offline digital
signature (IBOOS) scheme. Because these applications require packet delivery from
one or more senders to multiple receivers, provisioning security in group communications is
pointed out as a critical and challenging goal. In this paper, we study secure data transmission
for cluster-based wireless sensor networks (CWSNs). The results show that the proposed
protocols have better performance than the existing secure protocols for CWSNs, in terms of
security overhead and energy consumption.
33. A LIGHTWEIGHT PROACTIVE SOURCE ROUTING PROTOCOL TO IMPROVE
OPPORTUNISTIC DATA FORWARDING IN MANET
Elackya E.C, Sasirekha.S
Department of Computer Science And Engineering
Nandha Engineering College
Opportunistic data forwarding has drawn much attention in the research community of
multi hop wireless networking, with most research conducted for stationary wireless networks.
One of the reasons why opportunistic data forwarding has not been widely utilized in mobile ad
hoc networks (MANETs) is the lack of an efficient lightweight proactive routing scheme with
strong source routing capability. In this paper, a lightweight proactive source routing (PSR)
protocol is proposed. PSR can maintain more network topology information than distance vector
(DV) routing to facilitate source routing, although it has much smaller overhead than traditional
DV-based protocols [e.g., destination-sequenced DV (DSDV)], link state (LS)-based routing
[e.g., optimized link state routing (OLSR)], and reactive source routing [e.g., dynamic source
routing (DSR)]. PSR yields similar or better data transportation performance than all the other
baseline protocols.
34. ANALYSIS OF ITEMS IN LARGE TRANSACTIONAL DATABASE USING
FREQUENT AND UTILITY MINING
B.Sangameshwari, P.Uma
Department of Computer Science And Engineering
Nandha Engineering College
There are groups of techniques, strategies, and distinct areas of investigation that are
valuable and marked as essential fields of data mining advancements. Various MNCs and
large organizations operate in different places across different countries, and each place of
operation may generate large volumes of data. Corporate executives require access to all such
sources to take crucial decisions. The data warehouse delivers basic business value by improving
the effectiveness of managerial decision making. In an uncertain and highly competitive
environment, the value of strategic information systems such as these is easily recognized;
however, in today's environment, efficiency or speed is not the only key to competitiveness.
Such huge amounts of data, available as tera- to petabytes, have clearly changed the scope of
science and engineering. To analyze, manage, and make decisions over such enormous
amounts of data requires techniques called data mining, which are changing many fields.
This paper discusses the extent of data mining that will be helpful in the business arena.
35. A SECURE AUDITING PROTOCOL FOR DATA SHARING IN
CLOUD COMPUTING ENVIRONMENT
T.Esther Dyana,S.Maheswari
Department of Computer Science And Engineering
Nandha Engineering College
Cloud Computing is an internet based computing where virtual shared servers provide
software, infrastructure, platform, devices and many other resources and hosting to customers on
a pay-as-you-use basis. Cloud computing customers do not own the physical infrastructure;
rather, they rent usage from a third-party provider. Data owners host their data on cloud servers and
users can access the data from cloud servers. Data outsourcing, however, also introduces new
security challenges, which require an auditing service to check data integrity in the cloud. Some
existing remote integrity checking methods can only serve static archive data and thus cannot be
applied to this auditing service, since data in the cloud can be dynamically updated. An efficient
and secure dynamic auditing protocol is therefore needed to convince data owners that their data
are correctly stored in the cloud. In this paper, we first design an auditing framework for cloud
storage systems and then propose an efficient and privacy-preserving auditing protocol that
supports dynamic data operations and is provably secure in the random oracle model.
36. SEMANTIC ENHANCED WEB-PAGE RECOMMENDATION BASED ON
ONTOLOGY USING WEB USAGE MINING
Shrigowtham.M.N, S.Kavitha
Department of Computer Science And Engineering
Nandha Engineering College
Opinion mining (also known as sentiment analysis) aims to analyse people's opinions,
sentiments, and attitudes toward entities such as products, services, and their attributes.
Information retrieval is the process of extracting information based on the occurrences of terms
in documents. We discuss a method to identify features from online reviews by exploiting the
difference in opinion-feature statistics across two corpora: a domain-specific corpus and a
domain-independent corpus. Using a set of syntactic dependency rules, we extract a list of
candidate opinion features from the domain review corpus. For each extracted candidate feature,
we estimate its intrinsic domain relevance (IDR), which represents the statistical association of
the candidate with the given domain corpus, and its extrinsic domain relevance (EDR), which
reflects the statistical relevance of the candidate to the domain-independent corpus. Candidates
with IDR scores exceeding a predefined intrinsic relevance threshold and EDR scores below an
extrinsic relevance threshold are confirmed as valid opinion features.
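The IDR/EDR selection rule described above can be sketched in a few lines. This is an illustrative reading only, not the paper's exact statistics: the relevance score here is simply the fraction of documents mentioning the candidate, standing in for whatever statistical association measure the authors actually use.

```python
# Hypothetical sketch of the IDR/EDR filtering rule; the relevance
# measure below is a stand-in, not the paper's definition.
def relevance(candidate, corpus):
    """Toy relevance score: fraction of documents in the corpus that
    mention the candidate feature."""
    hits = sum(1 for doc in corpus if candidate in doc)
    return hits / len(corpus)

def select_opinion_features(candidates, domain_corpus, independent_corpus,
                            idr_threshold, edr_threshold):
    """Keep candidates strongly tied to the domain corpus (high IDR)
    but weakly tied to the domain-independent corpus (low EDR)."""
    selected = []
    for c in candidates:
        idr = relevance(c, domain_corpus)
        edr = relevance(c, independent_corpus)
        if idr > idr_threshold and edr < edr_threshold:
            selected.append(c)
    return selected
```

A domain-specific word like "battery" in phone reviews scores a high IDR and a low EDR and survives both thresholds; a generic word like "great" appears in both corpora and is filtered out.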
38. AN IMPROVED RESOURCE ALLOCATION MECHANISM FOR VM-BASED
DATA CENTERS USING QUALITY OF SERVICE
V.Manimaran, S.Prabhu
Department of Computer Science And Engineering
Nandha Engineering College
Cloud computing allows business customers to scale their resource usage up and down
based on their needs. Many of the touted benefits of the cloud model come from resource
multiplexing through virtualization. Dynamic consolidation of virtual machines (VMs) is an
effective approach to improving resource utilization and energy efficiency in cloud data
centers. Deciding when to reallocate VMs from an overloaded host is an aspect of dynamic VM
consolidation that directly influences the resource utilization and quality of service (QoS)
delivered by the system. In this paper, we introduce VM migration with optimal precedence, a
technique that migrates only the working set of an idle VM and supports green computing by
optimizing the number of servers in use. We use a maximum-precedence algorithm to reduce the
load on virtual machines, and we develop a set of heuristics that prevent overload in the system
effectively while saving energy.
This paper addresses job scheduling in a cloud environment. Cloud computing is a
model for delivering information technology services in which resources are retrieved from the
Internet through web-based tools and applications rather than through a direct connection to a
server. Users can set up and boot the resources they need and pay only for what they use. A
mechanism for efficient resource management and assignment will therefore be an important
objective of cloud computing. The challenge is to manage multiple virtualization platforms and
migrate virtual machines across physical machines without disruption. We discuss how to
ensure load balance when multiple virtual machines run on multiple physical machines, and we
present a system that implements optimized dynamic resource allocation (DRA) for virtual
machines on physical machines. Experimental results show that when a virtual machine's load
becomes too high, it is automatically migrated to a lightly loaded physical machine without
service interruption. We find that this approach yields a tractable solution for scheduling
applications in the public cloud. It also saves electricity, which accounts for a significant
portion of the operational expenses of large data centers. We develop a set of heuristics that
prevent overload in the system effectively while saving energy, and trace-driven simulation and
experimental results demonstrate that our algorithm achieves good performance.
40. A FRAMEWORK TO CREDIT CARD ENDORSEMENT USING FINGERPRINT
AND ONE TIME PASSWORD FOR AUTHENTICATION COMBINED WITH SSO
PROTOCOL IN CLOUD
V.Karunya, Dr.S.Prabhadevi
Department of Computer Science And Engineering
Nandha Engineering College
Cloud computing is one of the emerging technologies taking network users to the
next level. Cloud is a technology where resources are paid for per use rather than owned. One of
the biggest challenges in this technology is security. Although users consume service providers'
resources, there is considerable reluctance on the users' end because of the significant security
threats that come with this technology. Research in this area has produced a number of solutions
to overcome these security barriers, each with its own pros and cons. This paper presents a new
security model wherein users provide multiple biometric fingerprints during enrolment for a
service. These templates are stored at the cloud provider's end. Users are authenticated against
these fingerprint templates, which must be provided in an order determined by random numbers
generated at each login. Both the fingerprint templates and the images provided at each login are
encrypted for enhanced security. For credit card transactions, SSO solutions allow users to sign
on only once and have their identities automatically verified by each application or service they
subsequently access. We build on proxy signature schemes to introduce a public-key
cryptographic approach to single sign-on.
Mobile ad hoc networks (MANETs) are dynamic networks in which mobile nodes
operate without any fixed infrastructure. Link breakages occur due to the high mobility of
nodes, leading to frequent path failures and route discoveries. A neighbor-coverage and
probabilistic mechanism significantly decreases the number of retransmissions and thereby
reduces routing overhead. Since security is also a challenge in ad hoc networks, a secure and
efficient routing concept is combined with NCPR: a new trust approach based on the extent
of friendship between nodes, which makes nodes cooperate and prevents flooding attacks in
an ad hoc environment. All nodes in an ad hoc network are categorized as friends,
acquaintances or strangers based on their relationships with their neighboring nodes. At
network initiation, all nodes are strangers to each other. A trust estimator in each node
evaluates the trust level of its neighboring nodes. This approach combines the advantages of
neighbor-coverage knowledge and the probabilistic mechanism, significantly decreasing the
number of retransmissions so as to reduce routing overhead and improve security. In
particular, throughput and packet delivery ratio improve significantly.
42. LINK SCHEDULING FOR EXPLOITING SPATIAL REUSE IN MULTIHOP MIMO
NETWORKS
A.Mohan Kumar,K.Gunasekaran
Department of Computer Science And Engineering
Nandha Engineering College
stream control, with any interference range, number of antennas, and average hop length of data
flows. 2) The traffic-aware scheduling is complementary to link scheduling based on the ROIS
model. Accordingly, the two scheduling schemes can be combined to further enhance network
throughput.
43. EFFICIENT AND SECURE WIRELESS COMMUNICATIONS FOR ADVANCED
METERING INFRASTRUCTURE IN SMART GRIDS
Venkateswaran.D, Satheeshkumar.S
Department of Computer Science And Engineering
Nandha Engineering College
Cloud computing has been considered a solution to the application distribution and
configuration challenges of the traditional software sales model. Migrating from traditional
software to the cloud enables ongoing revenue for software providers. However, in order to
deliver hosted services to customers, SaaS companies have to either maintain their own
hardware or rent it from infrastructure providers, which means SaaS providers incur extra
costs. While minimizing the cost of resources, it is also important to satisfy a minimum service
level for customers. This paper therefore proposes resource allocation algorithms for SaaS
providers who want to minimize infrastructure cost and SLA violations. Our proposed
algorithms are designed to ensure that SaaS providers can manage the dynamic arrival and
departure of customers, map customer requests to infrastructure-level parameters, and handle
the heterogeneity of virtual machines. We take into account customers' quality-of-service
parameters, such as response time, and infrastructure-level parameters, such as service
initiation time. The paper also presents an extensive evaluation study demonstrating that our
proposed algorithms minimize the SaaS provider's cost and the number of SLA violations in a
dynamic resource-sharing cloud environment.
Cloud computing allows business customers to scale up and down their resource usage
based on needs. Many of the touted gains in the cloud model come from resource multiplexing
through virtualization technology. In this paper, we present a system that uses virtualization
technology to allocate data center resources dynamically based on application demands and
support green computing by optimizing the number of servers in use. We introduce the concept
of skewness to measure the unevenness in the multidimensional resource utilization of a
server. By minimizing skewness, we can combine different types of workloads nicely and
improve the overall utilization of server resources. We develop a set of heuristics that prevent
overload in the system effectively while saving energy. Trace-driven simulation and
experiment results demonstrate that our algorithm achieves good performance.
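The skewness metric described above can be sketched as follows. The exact formula is an assumption, a plausible reading of "unevenness in the multidimensional resource utilization," not necessarily the authors' definition.

```python
import math

def skewness(utilizations):
    """Unevenness of a server's per-resource utilizations (CPU, memory,
    network, ...): zero when all resources are equally loaded, growing as
    the load becomes lopsided. Assumed form: sqrt(sum((u_i/u_mean - 1)^2))."""
    mean = sum(utilizations) / len(utilizations)
    return math.sqrt(sum((u / mean - 1.0) ** 2 for u in utilizations))

# A balanced server has zero skewness; minimizing skewness when placing VMs
# steers complementary workloads (CPU-heavy plus memory-heavy) onto one host.
```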
46. WORKLOAD BALANCING AND ADAPTIVE RESOURCE MANAGEMENT FOR
THE SWIFT STORAGE SYSTEM ON CLOUD
E.Keerthi, S.Maheshwari
Department of Computer Science And Engineering
Nandha Engineering College
The demand for big data storage and processing has become a challenge in today's
industry. To meet it, an increasing number of enterprises are adopting distributed storage
systems. In these systems, storage nodes intensively holding hotspot data can become system
bottlenecks, while storage nodes without hotspot data may leave computing resources
underutilized. This stems from the fact that almost all typical distributed storage systems
provide only data-amount-oriented balancing mechanisms, without considering the differing
access loads of data. To eliminate system bottlenecks and optimize resource utilization, such
distributed storage systems need a workload balancing and adaptive resource management
framework. In this paper, we propose such a framework for Swift, a widely used and typical
distributed storage system on the cloud. In this framework, we design workload monitoring and
analysis algorithms for discovering overloaded and underloaded nodes in the cluster. To balance
the workload among those nodes, Split, Merge and Pair algorithms are implemented to regulate
physical machines, while a Resource Reallocate algorithm is designed to regulate virtual
machines on the cloud. In addition, by leveraging the mature architecture of distributed storage
systems, the framework resides in the hosts and operates through API interception. Experiments
demonstrate that the framework achieves its goals.
Big data is defined as data of such volume that new technologies and architectures are
required to extract value from it through capture and analysis. Due to the sheer size of the data,
effective analysis with existing traditional techniques becomes very difficult. Big data's
properties (volume, velocity, variety, variability, value and complexity) pose many challenges.
Since big data is a recent technology that can bring huge benefits to business organizations, the
challenges and issues in bringing in and adapting to it must be brought to light. This paper
introduces big data technology, its importance in the modern world, and existing projects that
are changing the concept of science into big science and affecting society as well. The various
challenges and issues in adopting big data technology and its tools (such as Hadoop) are
discussed in detail, along with the problems Hadoop faces. The paper concludes with good big
data practices to follow.
48. A SIMPLE BUT POWERFUL HEURISTIC METHOD FOR ACCELERATING K-MEANS
CLUSTERING OF LARGE-SCALE DATA IN LIFE SCIENCE
S.Monisha,D.Vanathi
Department of Computer Science And Engineering
Nandha Engineering College
K-means clustering has been widely used to gain insight into biological systems from
large-scale life science data. To quantify the similarities among biological data sets, Pearson
correlation distance and standardized Euclidean distance are used most frequently; however,
optimization methods for them have been largely unexplored. These two distance measures are
equivalent in the sense that they yield the same k-means clustering result for identical sets of k
initial centroids; thus, an efficient algorithm for one is applicable to the other. Several
optimization methods are available for the Euclidean distance and can be used for the
standardized Euclidean distance, but they are not customized for this context. We instead
approached the problem by studying the properties of the Pearson correlation distance, and we
devised a simple but powerful heuristic method that markedly prunes unnecessary computation
while retaining the final solution. Tests using real biological data sets with 50-60K vectors of
dimensions 10-2001 (about 400 MB in size) demonstrated marked reductions in computation
time for k = 10-500 in comparison with other state-of-the-art pruning methods such as Elkan's
and Hamerly's algorithms.
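The claimed equivalence of the two distance measures can be checked numerically: for vectors standardized to zero mean and unit (population) variance, the squared Euclidean distance equals 2n(1 - r), where r is the Pearson correlation, so both distances rank candidate centroids identically. A small sketch, illustrative rather than taken from the paper:

```python
import math

def standardize(v):
    """Shift to zero mean and scale to unit (population) standard deviation."""
    n = len(v)
    mean = sum(x for x in v) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in v) / n)
    return [(x - mean) / std for x in v]

def pearson_distance(x, y):
    """1 - Pearson correlation coefficient."""
    n = len(x)
    sx, sy = standardize(x), standardize(y)
    r = sum(a * b for a, b in zip(sx, sy)) / n
    return 1.0 - r

def sq_euclidean(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y))

# For standardized vectors, ||x - y||^2 == 2 * n * (1 - r): the two
# distances are monotonically related, so k-means assignments agree for
# the same set of initial centroids.
```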
49. HIGHLY COMPARATIVE FEATURE-BASED TIME-SERIES CLASSIFICATION
Subhashini Padma.M, Vanitha.D
Department of Computer Science And Engineering
In many text mining applications, side-information is available along with the text
documents. Such side-information may be of different kinds, such as document provenance
information, the links in the document, user-access behavior from web logs, or other non-textual
attributes which are embedded into the text document. Such attributes may contain a tremendous
amount of information for clustering purposes. However, the relative importance of this side-information may be difficult to estimate, especially when some of the information is noisy. In
such cases, it can be risky to incorporate side-information into the mining process, because it can
either improve the quality of the representation for the mining process, or can add noise to the
process. Therefore, we need a principled way to perform the mining process, so as to maximize
the advantages from using this side information. In this paper, we design an algorithm which
combines classical partitioning algorithms with probabilistic models in order to create an
effective clustering approach. We then show how to extend the approach to the classification
problem. We present experimental results on a number of real data sets in order to illustrate the
advantages of using such an approach.
51. A COOPERATIVE SEARCH BASED SOFTWARE ENGINEERING APPROACH
FOR CODE SMELL DETECTION
S.Kiruthika, S.Karuppusamy
Department of Computer Science And Engineering
Nandha Engineering College
Social networking has extended its popularity from the Internet to mobile domains.
Nowadays, the Internet can work collaboratively with cellular networks and self-organized
mobile ad hoc networks to offer advanced pervasive social networking (PSN) at any time and in
any place. It is important to secure data communications in PSN for protecting crucial instant
social activities and supporting reliable social computing and data mining. Obviously, trust plays
an important role in PSN for reciprocal activities among strangers. It helps people overcome
perceptions of uncertainty and risk and engage in trusted social behaviors. In this paper, we
utilize two dimensions of trust levels, evaluated by either a trusted server or individual PSN
nodes or both, to control PSN data access in a heterogeneous manner on the basis of attribute-based
encryption. We formally prove the security of our scheme and analyze its communication and
computation complexity. Extensive analysis and performance evaluation based on
implementation show that our proposed scheme is highly efficient and provably secure under
relevant system and security models.
54. SECURE AND EFFICIENT DATA TRANSMISSION FOR CLUSTER
BASED WIRELESS SENSOR NETWORK.
B.Sharmila, M.Parvathi
Department of Computer Science And Engineering
Nandha Engineering College
Secure data transmission is a critical issue for wireless sensor networks (WSNs).
Clustering is an effective and practical way to enhance the system performance of WSNs. In
this paper, we study a secure data transmission for cluster-based WSNs (CWSNs), where the
clusters are formed dynamically and periodically. We propose two secure and efficient data
transmission (SET) protocols for CWSNs, called SET-IBS and SET-IBOOS, by using the
identity-based digital signature (IBS) scheme and the identity-based online/ offline digital
signature (IBOOS) scheme, respectively. In SET-IBS, security relies on the hardness of the
Diffie-Hellman problem in the pairing domain. SET-IBOOS further reduces the
computational overhead for protocol security, which is crucial for WSNs, while its security
relies on the hardness of the discrete logarithm problem. We show the feasibility of the SET-IBS and SET-IBOOS protocols with respect to the security requirements and security
analysis against various attacks. The calculations and simulations are provided to illustrate
the efficiency of the proposed protocols. The results show that the proposed protocols have
better performance than the existing secure protocols for CWSNs, in terms of security overhead
and energy consumption.
55. INFORMATION SECURITY IN BIG DATA: PRIVACY AND DATA MINING
R.Gowthamy, P.Uma
Department of Computer Science And Engineering
Nandha Engineering College
The growing popularity and development of data mining technologies bring serious threats
to the security of individuals' sensitive information. An emerging research topic in data mining,
known as privacy-preserving data mining (PPDM), has been extensively studied in recent years.
The basic idea of PPDM is to modify the data in such a way so as to perform data mining
algorithms effectively without compromising the security of sensitive information contained in
the data. Current studies of PPDM mainly focus on how to reduce the privacy risk brought by
data mining operations, while in fact, unwanted disclosure of sensitive information may also
happen in the process of data collecting, data publishing, and information (i.e., the data mining
results) delivering. In this paper, we view the privacy issues related to data mining from a wider
perspective and investigate various approaches that can help to protect sensitive information. In
particular, we identify four different types of users involved in data mining applications, namely,
data provider, data collector, data miner, and decision maker. For each type of user, we discuss
his privacy concerns and the methods that can be adopted to protect sensitive information. We
briefly introduce the basics of related research topics, review state-of-the-art approaches, and
present some preliminary thoughts on future research directions. Besides exploring the privacy-preserving approaches for each type of user, we also review the game-theoretical approaches,
which are proposed for analyzing the interactions among different users in a data mining
scenario, each of whom has his own valuation on the sensitive information. By differentiating the
responsibilities of different users with respect to security of sensitive information, we would like
to provide some useful insights into the study of PPDM.
56. ISSUES, CURRENT PROPOSALS AND FUTURE ENHANCEMENTS IN WIRELESS
SENSOR NETWORKS
S.Anbumalar, Dr.S.Prabhadevi
Department of Computer Science And Engineering
Nandha Engineering College
As the foundation of routing, topology control should minimize the interference among
nodes, and increase the network capacity. With the development of mobile ad hoc networks
(MANETs), there is a growing requirement of quality of service (QoS) in terms of delay. In order
to meet the delay requirement, it is important to consider topology control in delay constrained
environment, which is contradictory to the objective of minimizing interference. In this paper,
we focus on the delay-constrained topology control problem, and take into account delay and
interference jointly. We propose a cross-layer distributed algorithm, the interference-based
topology control algorithm for delay-constrained (ITCD) MANETs, which considers both the
interference constraint and the delay constraint, in contrast to previous work. The
transmission delay, contention delay and the queuing delay are taken into account in the
proposed algorithm. Moreover, the impact of node mobility on the interference-based topology
control algorithm is investigated and the unstable links are removed from the topology. The
simulation results show that ITCD can reduce the delay and improve the performance effectively
in delay-constrained mobile ad hoc networks.
58. SECURE DATA AGGREGATION TECHNIQUE FOR WIRELESS SENSOR
NETWORKS IN THE PRESENCE OF COLLUSION ATTACKS
P. Bhuvaneswari, P. Thirumoorthi
Department of Computer Science And Engineering
Nandha Engineering College
Due to limited computational power and energy resources, aggregation of data from
multiple sensor nodes at the aggregating node is usually accomplished by simple methods such
as averaging. However, such aggregation is known to be highly vulnerable to node-compromising
attacks. Since WSNs are usually unattended and lack tamper-resistant hardware, they are
highly susceptible to such attacks. Thus, ascertaining the trustworthiness of data and the
reputation of sensor nodes is crucial for WSNs. As the performance of very low-power
processors dramatically improves, future aggregator nodes will be capable of performing more
sophisticated data aggregation algorithms, making WSNs less vulnerable. Iterative filtering
algorithms hold great promise for this purpose: they simultaneously aggregate data from
multiple sources and provide a trust assessment of these sources, usually in the form of
corresponding weight factors assigned to the data provided by each source. In this paper we
demonstrate that several existing iterative filtering algorithms, while significantly more robust
against collusion attacks than simple averaging methods, are nevertheless susceptible to a novel
sophisticated collusion attack that we introduce. To address this security issue, we propose an
improvement to iterative filtering techniques that provides an initial approximation for such
algorithms, making them not only collusion-robust but also more accurate and faster to converge.
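A minimal iterative-filtering loop of the kind discussed above might look as follows. This is a generic sketch, not the authors' algorithm or their proposed initial approximation: weights are simply set inversely proportional to each sensor's squared distance from the current estimate.

```python
def iterative_filtering(readings, rounds=10):
    """Generic iterative-filtering sketch: alternately estimate the true
    value as a trust-weighted average of the sensors' readings and update
    each sensor's trust weight inversely to its squared distance from
    that estimate. Colluding or faulty outliers end up with tiny weights."""
    n = len(readings)
    estimate = sum(readings) / n            # start from the plain average
    weights = [1.0 / n] * n
    for _ in range(rounds):
        # Trust a sensor less the further it sits from the current estimate.
        raw = [1.0 / ((r - estimate) ** 2 + 1e-9) for r in readings]
        total = sum(raw)
        weights = [w / total for w in raw]
        estimate = sum(w * r for w, r in zip(weights, readings))
    return estimate, weights
```

With three honest sensors near 10 and one reporting 30, the estimate settles near 10 while the outlier's weight collapses toward zero.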
59. PRIVACY-PRESERVING MULTI-KEYWORD RANKED SEARCH OVER
ENCRYPTED CLOUD DATA
K.Deepa, S.Prabhu
Department of Computer Science And Engineering
Nandha Engineering College
With the advent of cloud computing, data owners are motivated to outsource their
complex data management systems from local sites to commercial public cloud for great
flexibility and economic savings. But for protecting data privacy, sensitive data has to be
encrypted before outsourcing, which obsoletes traditional data utilization based on plaintext
keyword search. Thus, enabling an encrypted cloud data search service is of paramount
importance. Considering the large number of data users and documents in the cloud, it is crucial for
the search service to allow multi-keyword query and provide result similarity ranking to meet the
effective data retrieval need. Related works on searchable encryption focus on single keyword
search or Boolean keyword search, and rarely differentiate the search results. In this paper, for
the first time, we define and solve the challenging problem of privacy-preserving multi-keyword
ranked search over encrypted cloud data (MRSE), and establish a set of strict privacy
requirements for such a secure cloud data utilization system to become a reality. Among various
multi-keyword semantics, we choose the efficient principle of coordinate matching, i.e., as
many matches as possible, to capture the similarity between search query and data documents,
and further use inner product similarity to quantitatively formalize such a principle for similarity
measurement. We first propose a basic MRSE scheme using secure inner product computation,
and then significantly improve it to meet different privacy requirements in two levels of threat
models. Thorough analysis investigating privacy and efficiency guarantees of proposed schemes
is given, and experiments on a real-world dataset further show that the proposed schemes indeed
introduce low overhead on computation and communication.
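In plaintext form, the coordinate matching principle chosen above reduces to an inner product of binary keyword vectors; in the actual MRSE scheme these vectors are encrypted so the cloud server computes the score without seeing the keywords. A plaintext sketch, with all names illustrative:

```python
def coordinate_matching_score(doc_keywords, query_keywords, dictionary):
    """Coordinate matching ("as many matches as possible") expressed as an
    inner product: documents and queries become binary vectors over a
    keyword dictionary, and the score is their dot product, i.e. the
    number of query keywords the document contains."""
    doc_vec = [1 if w in doc_keywords else 0 for w in dictionary]
    query_vec = [1 if w in query_keywords else 0 for w in dictionary]
    return sum(d * q for d, q in zip(doc_vec, query_vec))
```

Ranking documents by this score gives the result-similarity ranking the abstract describes; the MRSE construction's contribution is computing the same inner product over encrypted vectors.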
60. A STUDY ON MULTIMODAL BIOMETRICS AUTHENTICATION AND
TEMPLATE PROTECTION
N.Aravindhraj, Dr.N.Shanthi
Department of Computer Science And Engineering
Nandha Engineering College
Drought affects a large number of people and causes greater losses to society than other
natural disasters. India is a drought-prone country, and the frequent occurrence of drought poses
an increasingly severe threat to Indian agricultural production. Drought is a very complex
phenomenon that is difficult to quantify accurately because of its immense spatial and temporal
variability. The existing system implements the ISDI model and evaluates its accuracy and
effectiveness. Although the ISDI model draws on a variety of methods and data, work remains
for future research because of the complex spatial and temporal characteristics of drought. To
overcome this limitation, drought can be measured using the spatial and temporal characteristics
of the information. We collect datasets from different regions along with time-varying
information. In the proposed system, we predict drought conditions using supervised learning,
implemented with a Bayesian supervised machine-learning algorithm. Through this algorithm
we achieve good accuracy and performance, and improve the effectiveness of predicting various
drought conditions.
62. FIDELITY-BASED PROBABILISTIC Q-LEARNING FOR CONTROL OF
QUANTUM SYSTEMS
A.Gomala Priya, S.Sasireka
Department of Computer Science And Engineering
Nandha Engineering College
The balance between exploration and exploitation is a key problem for reinforcement
learning methods, especially for Q-learning. In this paper, a fidelity-based probabilistic Q-learning (FPQL) approach is presented to naturally solve this problem and is applied to learning
control of quantum systems. In this approach, fidelity is adopted to help direct the learning
process and the probability of each action to be selected at a certain state is updated iteratively
along with the learning process, which leads to a natural exploration strategy instead of a pointed
one with configured parameters. A probabilistic Q-learning (PQL) algorithm is first presented to
demonstrate the basic idea of probabilistic action selection. Then the FPQL algorithm is
presented for learning control of quantum systems. One example (a spin-1/2 system) is
demonstrated to test the performance of the FPQL algorithm. The results show that FPQL
algorithms attain a better balance between exploration and exploitation, and can also avoid local
optimal policies and accelerate the learning process.
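The probabilistic action selection idea can be sketched as below. The update rule is an illustrative stand-in, not the paper's FPQL update (which incorporates fidelity): probability mass shifts toward the currently best-valued action, so exploration decays naturally rather than through a hand-tuned epsilon schedule.

```python
import random

def select_action(probs):
    """Sample an action index according to the current distribution."""
    r, acc = random.random(), 0.0
    for a, p in enumerate(probs):
        acc += p
        if r < acc:
            return a
    return len(probs) - 1

def update_probabilities(probs, best_action, k=0.1):
    """Illustrative probabilistic-selection update: move the greedy
    action's probability toward 1 and scale the others toward 0.
    The distribution stays normalized (the total shift cancels out)."""
    new = []
    for a, p in enumerate(probs):
        if a == best_action:
            new.append(p + k * (1.0 - p))   # move toward 1
        else:
            new.append(p * (1.0 - k))       # move toward 0
    return new
```

Early on the distribution is near-uniform (pure exploration); as one action keeps winning, its probability approaches 1 (pure exploitation), which is the balance the abstract refers to.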
63. A PRIVACY CONSERVATION FRAMEWORK FOR SECURE ENCRYPTED DATA
IN OSN
R.Krishnamoorthy,V.Aruna
Department of Computer Science And Engineering
Nandha Engineering College
We present a secure data sharing scheme for OSNs based on ciphertext-policy attribute-based
proxy re-encryption and secret sharing. The system provides a multiparty access control model,
which enables a disseminator to update the access policy of a ciphertext if their attributes satisfy
the existing access policy. It also provides checkability of the outputs returned from the OSN
service provider, to guarantee the correctness of partially decrypted ciphertexts. Security and
performance analysis indicates that the proposed scheme is secure and efficient in OSNs. Key-policy
attributes are used to describe the encrypted data and the policy embedded in the user's
key, while the ciphertext policy is the access structure on the ciphertext; the access structure can
be either monotonic or non-monotonic. This protection is achieved through a new approach to
building secure systems: practical systems that compute on encrypted data, without access to the
decryption key. In this setting, a database system (CryptDB), a web application platform
(Mylar), and two mobile systems were individually designed and built, along with new
cryptographic schemes for them.
64. AN ENHANCED AND RELIABLE AUTHENTICATION PROTOCOL AGAINST
KEYLOGGING ATTACKS
B.Amutha, R.Indhumathi, M.Jayashree, M.Mathumathi
Department of Computer Science And Engineering
Sasurie College of Engineering
Intelligent video surveillance systems deal with the real-time monitoring of persistent
and transient objects within a specific environment. An automated video surveillance and
alarming system provides surveillance and alerts the security guard of any undesired activity via
his cell phone. It would be a promising replacement of traditional human video surveillance
system. It provides a high degree of security. Detection of moving objects in video streams is the
first relevant step of information extraction in many computer vision applications. Aside from the
intrinsic usefulness of being able to segment video streams into moving and background
components, detecting moving objects provides a focus of attention for recognition,
classification, and activity analysis, making these later steps more efficient. We propose an
approach based on self-organization through artificial neural networks, widely applied in human
image processing systems and more generally in cognitive science. The proposed approach can
handle scenes containing moving backgrounds, gradual illumination variations, and camouflage;
it has no bootstrapping limitations and can incorporate shadows cast by moving objects into the
background model.
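As a rough illustration of background-model-based moving object detection (a simple running-average model, not the self-organizing neural network proposed in the abstract):

```python
# Minimal running-average background subtraction on grayscale frames,
# represented as 2-D lists of pixel intensities. Pixels that deviate
# strongly from the slowly adapting background model are foreground.
def update_background(bg, frame, alpha=0.05):
    """Blend the new frame into the background model."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def foreground_mask(bg, frame, threshold=30):
    """1 where the frame differs strongly from the background, else 0."""
    return [[1 if abs(f - b) > threshold else 0 for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

bg = [[10, 10], [10, 10]]          # static background estimate
frame = [[10, 200], [10, 10]]      # a bright moving object appears
mask = foreground_mask(bg, frame)
assert mask == [[0, 1], [0, 0]]
bg = update_background(bg, frame)  # background slowly absorbs the change
```

The slow blending step is what lets such models tolerate gradual illumination changes while still flagging fast-moving objects.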
66. AUTOMATIC SUMMARISATION OF BUG REPORT
Deivasigamani.M, Mukesh Kumar.K, Bibin K Biju, Jafar Shadiq.S
Department of Computer Science And Engineering
Sasurie College of Engineering
A bug tracking and reporting system, or defect tracking system, is a software application
that keeps track of reported software bugs in software development projects. It may be regarded
as a type of issue tracking system. The Bug Tracking System not merely detects defects but
provides complete information regarding each detected defect, ensuring that any user who needs
details about an identified defect can obtain them. Using this system, no defect in the
developed application will go unfixed. The developer develops the project as per customer
requirements, and the defect details in the database table are accessible to both the project
manager and the developer.
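A defect table of the kind described, shared between project manager and developer, could be modeled minimally as follows (all table and column names are hypothetical, chosen only for illustration):

```python
import sqlite3

# In-memory defect table shared (conceptually) by manager and developer.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE defects (
    id INTEGER PRIMARY KEY,
    title TEXT, severity TEXT, status TEXT DEFAULT 'open')""")
conn.execute("INSERT INTO defects (title, severity) VALUES (?, ?)",
             ("Login button unresponsive", "high"))

# Developer fixes the defect and updates its status.
conn.execute("UPDATE defects SET status = 'fixed' WHERE id = 1")
status = conn.execute("SELECT status FROM defects WHERE id = 1").fetchone()[0]
assert status == "fixed"
```

Tracking status transitions (`open` to `fixed`) in one shared table is what guarantees no reported defect silently goes unfixed.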
67. AUTOMATIC LICENSE PLATE RECOGNITION USING ADAPTIVE
THRESHOLDING AND CC ANALYSIS TECHNIQUE
Aiswarya.V, Nandhine Shree.N, Rajesh Kumar.M, Prasanna Manikandan.S
Department of Information Technology
Karpagam Institute of Technology
The project presents a license plate recognition system using connected component
analysis and a template matching model for accurate identification. Automatic license plate
recognition (ALPR) is the extraction of vehicle license plate information from an image and its
verification against stored license plate samples. The system model uses already-captured images
for the recognition process. The recognition system starts with character identification based on
number plate extraction, character splitting, and template matching. Adaptive thresholding is
used to suppress the background and detect the foreground region having brighter pixels. Morphological
filtering is used to reduce noise objects from background. Connected component analysis is
utilized to split the segmented objects for extraction of individual objects geometric features.
ALPR as a real life application has to quickly and successfully process license plates under
different environmental conditions, such as indoors, outdoors, day or night time. It plays an
important role in numerous real-life applications, such as automatic toll collection, traffic law
enforcement, parking lot access control, and road traffic monitoring. The system uses different
templates for identifying the characters from input image. After character recognition, an
identified group of characters will be compared with database number plates for authentication.
The proposed model has low complexity and is less time-consuming in terms of number plate
segmentation and character recognition, which improves system performance and makes the
system more efficient when relevant samples are taken.
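The connected-component step can be sketched in isolation (a 4-connected flood-fill labeling pass over a binary mask; the full pipeline with adaptive thresholding, morphological filtering, and template matching is not reproduced here):

```python
def label_components(mask):
    """4-connected component labeling via flood fill on a binary mask."""
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not labels[r][c]:
                current += 1            # start a new component
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols \
                            and mask[y][x] and not labels[y][x]:
                        labels[y][x] = current
                        stack += [(y+1, x), (y-1, x), (y, x+1), (y, x-1)]
    return labels, current

# Two separate "characters" in a tiny binary image after thresholding.
mask = [[1, 1, 0, 1],
        [1, 0, 0, 1],
        [0, 0, 0, 1]]
labels, count = label_components(mask)
assert count == 2
```

Each labeled component would then be cropped and matched against the character templates.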
68. PRIVACY PRESERVING PUBLIC AUDITING FOR SHARED DATA IN THE
CLOUD USING MD5
A.Afzal Ahmed, K.Jayaram, R.Logeshwaran, K.SamGodwin, P.UmaMaheshwari
Department of Information Technology
Karpagam Institute of Technology
Using cloud storage, users can remotely store their data and enjoy the on-demand high-quality applications and services from a shared pool of configurable computing resources without
the burden of local data storage and maintenance. However, the fact that users no longer have
physical possession of the outsourced data makes the data integrity protection in cloud computing a
formidable task, especially for users with constrained computing resources. Moreover, users should
be able to just use the cloud storage as if it is local, without worrying about the need to verify its
integrity. Sharing data in a multi-owner manner while preserving data and identity privacy from
an untrusted cloud is still a challenging issue, due to the frequent change of the membership. Data
access control is an effective way to ensure the data security in the cloud. Due to data outsourcing
and untrusted cloud servers, the data access control becomes a challenging issue in cloud storage
systems. MD5 is regarded as one of the most suitable technologies for data access control in cloud
storage, because it gives data owners more direct control on access policies. In this paper, we
design an expressive, efficient data access control scheme for multi-authority cloud storage
systems, where multiple authorities co-exist and each authority is able to issue attributes
independently. The scheme efficiently audits the integrity of shared data with dynamic groups.
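A minimal sketch of the MD5-based integrity check implied above, using Python's standard `hashlib` (real public auditing schemes use homomorphic authenticators so a third party can audit without the data; this only illustrates the digest comparison):

```python
import hashlib

def md5_digest(data: bytes) -> str:
    """Hex MD5 digest of a data block."""
    return hashlib.md5(data).hexdigest()

# Owner records a digest before outsourcing the block to the cloud.
block = b"shared group data block"
recorded = md5_digest(block)

# Auditor later re-hashes the stored block and compares digests.
assert md5_digest(block) == recorded              # block is intact
assert md5_digest(b"tampered block") != recorded  # modification detected
```

Note that MD5 is no longer collision-resistant, so production integrity checks typically prefer SHA-256 or keyed MACs.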
69. SECURED DATA AGGREGATION SCHEME IN SENSOR NETWORKS
K.E.Eswari,K.Gunasekar
Department of Computer Science And Engineering
Nandha Engineering College
Mobile devices such as smart phones are gaining an ever-increasing popularity. Most
smart phones are equipped with a rich set of embedded sensors such as camera, microphone,
GPS, accelerometer, ambient light sensor, gyroscope, and so on. The data generated by these
sensors provide opportunities to make sophisticated inferences not only about people but also
about their surroundings, and thus can help improve people's health and lives. This paper studies
how an untrusted aggregator in mobile sensing can periodically obtain desired statistics over the
data contributed by multiple mobile users, without compromising the privacy of each user.
Although there are some existing works in this area, they either require bidirectional
communications between the aggregator and mobile users in every aggregation period, have
high computation overhead, or cannot support large plaintext spaces. Also, they do not consider
the Min aggregate, which is quite useful in mobile sensing. To address these problems, we
propose an efficient protocol to obtain the Sum aggregate, which employs additively
homomorphic encryption and a novel key management technique to support a large plaintext
space. We also extend the sum aggregation protocol to obtain the Min aggregate of time-series
data. The proposed scheme has three contributions. First, it is designed for a multi-application
environment. The base station extracts application-specific data from aggregated cipher texts.
Next, it mitigates the impact of compromising attacks in single-application environments.
Finally, it reduces the damage from unauthorized aggregations.
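The Sum-aggregate idea can be sketched as follows: each user masks its value with a key share, the shares sum to zero mod M, and so the aggregator recovers only the total, never any individual value (a simplified illustration of the additive-masking principle, not the paper's full key management protocol):

```python
import secrets

M = 2**32  # modulus bounding the plaintext space (illustrative)

def make_keys(n):
    """Per-user masking keys that sum to zero mod M."""
    keys = [secrets.randbelow(M) for _ in range(n - 1)]
    keys.append((-sum(keys)) % M)
    return keys

def encrypt(value, key):
    """Each user masks its reading with its key."""
    return (value + key) % M

def aggregate(ciphertexts):
    """Aggregator sums masked values; keys cancel, revealing only the sum."""
    return sum(ciphertexts) % M

values = [12, 30, 7]              # private per-user readings
keys = make_keys(len(values))
cts = [encrypt(v, k) for v, k in zip(values, keys)]
assert aggregate(cts) == sum(values)
```

Because every ciphertext is uniformly distributed mod M, the untrusted aggregator learns nothing about any single user's reading.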
70. AUTHENTICATE EMAIL BY FRACTAL RECOGNITION AND SECURE SPAM
FILTERING
S.Merlin Subidha
Department of Computer Science And Engineering
Jerusalem Engineering College
Effective network security targets a variety of threats and stops them from entering or
spreading on your network. Network security consists of the provisions and policies to prevent
and monitor unauthorized access, misuse, modification, or denial of a computer network and
network-accessible resources. Network security hinges on simple goals such as keeping
unauthorized persons from gaining access to resources and ensuring that authorized persons can
access the resources they need. Authentication is the process of confirming the identity of a
user that is trying to log on or access resources. Network operating systems require that a user be
authenticated in order to log onto the network. This can be done by entering a password,
inserting a smartcard and entering the associated PIN, or providing a fingerprint, voice pattern
sample, or retinal scan, but the user still faces insecurity in authentication. Spam is
increasingly a core problem affecting network security and performance. Indeed, it has been
estimated that 80% of all email messages are spam. In the existing system the facial recognition
technique is introduced for authentication and spam filtering technique is implemented for
detecting spam mail. However, these techniques are not effective because any change in the face
results in false authentication, and in spam detection spammers use different names or words
to intrude into users' mail. The proposed system introduces a fractal recognition technique for
authentication and blocks the spammer's domain to avoid intrusion. In fractal recognition, the
user's skull is detected to provide effective authentication, and in spam blocking the spammer's
domain ID is traced and blocked to provide secure spam
filtering. This project ensures efficient similarity matching, reducing storage utilization in mails
and securing email from spam.
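The domain-blocking half of the proposal can be sketched as a simple sender-domain blocklist (the domain names below are hypothetical placeholders):

```python
# Minimal domain-blocklist spam filter sketch.
BLOCKED_DOMAINS = {"spamhub.example", "bulkmail.example"}

def sender_domain(address: str) -> str:
    """Extract the domain part of an email address, case-folded."""
    return address.rsplit("@", 1)[-1].lower()

def is_spam(address: str) -> bool:
    """Flag mail whose sender domain has been traced and blocked."""
    return sender_domain(address) in BLOCKED_DOMAINS

assert is_spam("offers@SpamHub.example")
assert not is_spam("alice@university.example")
```

Blocking at the domain level defeats spammers who merely rotate usernames or display names, which is exactly the evasion the abstract describes.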