
Faculty Project Titles May-2024

S.No. | Name of the faculty | Title of project | Description | Requirements, if any | Remarks
1. Generate Concept Hierarchy Automatically from the Dataset
In real-world applications, concept hierarchies play a major role, mainly in online supermarkets, product identification, service identification, etc. Currently, domain experts are employed to generate the concept hierarchies. In this project, the focus needs to be on generating a concept hierarchy automatically from the given dataset.
2. Fake News Detection
In social networks, many people share news or other information, and identifying the trustworthiness of a news item is difficult. In this project, a classification method needs to be worked out to identify fake news from social media data.
Faculty 1: Dr. M. Kumara Swamy (Requirements: Python, programming skills, algorithms)

3. Legal Document Analysis
Judiciary data includes many legal documents such as the constitution, previous judgements, etc. While creating a judgement, a judge goes through not only the constitution but also previous judgements. In this project, legal repositories need to be analysed to come up with an approach for creating a legal graph that can be used by legal experts.
4. Recommender System
In online shopping, recommending an item to a new user is done by a recommender system. In this project, an efficient method needs to be identified using a collaborative filtering algorithm.
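The collaborative filtering idea can be sketched in a few lines of plain Python: predict an unseen rating as a similarity-weighted average of other users' ratings. The users, items, and ratings below are invented illustration data, not a real dataset.

```python
from math import sqrt

# Toy user-item ratings (invented illustration data).
ratings = {
    "alice": {"laptop": 5, "phone": 3, "tablet": 4},
    "bob":   {"laptop": 4, "phone": 3, "tablet": 5, "camera": 2},
    "carol": {"laptop": 1, "phone": 5, "camera": 4},
}

def cosine(u, v):
    """Cosine similarity over the items two users have both rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = sqrt(sum(u[i] ** 2 for i in common))
    nv = sqrt(sum(v[i] ** 2 for i in common))
    return dot / (nu * nv)

def predict(user, item):
    """Similarity-weighted average of the other users' ratings for the item."""
    num = den = 0.0
    for other, their in ratings.items():
        if other == user or item not in their:
            continue
        sim = cosine(ratings[user], their)
        num += sim * their[item]
        den += sim
    return num / den if den else None

# Estimate how alice would rate 'camera' from bob's and carol's ratings.
print(round(predict("alice", "camera"), 2))  # → 2.81
```

A production system would work on sparse matrices and precompute similarities, but the prediction rule is the same.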
5. ICTs for Agriculture
Crops are chosen to suit a given agro-climatic environment. This project needs to investigate an approach to recommend a crop for a given agro-climatic environment.
1. Driver Drowsiness Detection System for Accident Prevention
Drowsy driving can be extremely dangerous; many road accidents are related to the driver falling asleep while driving and subsequently losing control of the vehicle. However, initial signs of fatigue and drowsiness can be detected before a critical situation arises. Driver drowsiness detection is a car safety technology that helps prevent accidents caused by the driver getting drowsy.
Faculty 2: Dr. S. Rasheed uddin

2. Secure Persona Prediction and Data Leakage Prevention System using Python
Businesses must customize according to their target audience because this helps them better understand their customers' needs and preferences and tailor their products, services, and marketing strategies accordingly. By customizing their offerings to meet the specific needs and preferences of their target audience, businesses can increase the relevance of their products and services, which helps attract more customers and improve customer satisfaction.
3. Detecting Brain Tumors and Alzheimer's Using Python
The brain is the most important organ in our body, as it regulates the rest of our organs. Brain disorders have been recognized as the world's second leading cause of mortality when they are not detected early and treated appropriately. Early discovery and diagnosis of a tumor may help save the patient's life.
4. Customer Targeted E-Commerce
Customers get many benefits via online shopping, which helps e-commerce companies build long-lasting and profitable relationships with their customers. To build strong relationships with these users, it is very important to focus on the customer as a whole and to make sense of a flood of real-time information that goes well beyond demographics or shopping behavior.
5. Diabetes Prediction Using Data Mining
Diabetes is one of the deadliest diseases on the planet. It is not just an ailment but also a cause of various other conditions such as heart attack, blindness, and kidney disease. The typical diagnostic process requires patients to visit a diagnostic center, consult their doctor, and wait a day or more for their reports.
1. Fake Logo Detection
The objective of this study is to recognize a fake product logo and assess how closely it resembles a legitimate product logo. We use the YOLO technique and the Darknet framework to construct a fake logo detector. Brand visibility monitoring, copyright violation detection, and social media brand management are all example uses of logo detection.
2. Arduino Fingerprint Door Lock Sensor
This is an Arduino-based fingerprint door lock prototype. When a finger is placed on the sensor, the program takes an image of it and matches it against the enrolled fingerprints. If the finger is recognized, the door automatically opens for a time specified in the code and then closes on its own.
Faculty 3: Mr. N. Venkatesh

3. Intelligent Waste Management System Using Deep Learning with IoT
Waste management handles the disposal of waste through recycling and landfilling. Deep learning and the Internet of Things (IoT) offer an agile solution for classification and real-time data monitoring, respectively. This project proposes a capable architecture for a waste management system based on deep learning and IoT.
4. Multiple Disease Detection Using ML
TinyML is an emerging area of machine learning whose algorithms can be used to detect multiple diseases in lung CT scan images with better accuracy and in less time. Such a tool can aid doctors in the diagnosis and treatment of patients and help increase the efficiency of the treatment process.
5. A Depiction of Visualization in Algorithms
An algorithm visualizer is a tool designed to help users understand how algorithms work by graphically representing their step-by-step execution. It provides visual feedback for each operation, allowing users to observe the algorithm's behavior in real time. This can cover sorting algorithms, search algorithms, graph algorithms, and more.
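As a minimal sketch of the idea, the visualizer's core can be a sorting routine that emits a snapshot after every operation; a GUI front end would then draw each snapshot as bars or boxes. This is an illustrative skeleton, not a full visualizer.

```python
def bubble_sort_steps(data):
    """Yield a snapshot of the list after every swap, one 'frame' per step."""
    a = list(data)
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                yield list(a)  # snapshot for the visualizer to draw

for frame in bubble_sort_steps([4, 2, 3, 1]):
    # A GUI would redraw bars here; we simply print the intermediate states.
    print(frame)
```

Any comparison sort (or a graph traversal yielding visited nodes) can be instrumented the same way.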
1. Review-based Recommender System Using Fuzzy Logic
In this study, we make use of the information contained in user reviews, as well as the available rating scores, to develop a review-based rating prediction system. The proposed scheme attempts to handle the uncertainty of rating histories by fuzzifying the given ratings. Another advantage of the proposed system is its use of a word-embedding representation model for textual reviews.
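The fuzzification step can be illustrated with triangular membership functions, a common choice in fuzzy logic. The linguistic terms and breakpoints below are illustrative assumptions, not taken from the proposed scheme.

```python
def triangular(x, a, b, c):
    """Triangular membership function: 0 at a and c, peak of 1 at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzify(rating):
    """Map a 1-5 star rating to degrees of 'low', 'medium', 'high'.
    The breakpoints are made-up illustration values."""
    return {
        "low":    triangular(rating, 0, 1, 3),
        "medium": triangular(rating, 1, 3, 5),
        "high":   triangular(rating, 3, 5, 6),
    }

print(fuzzify(4))  # a 4-star rating is partly 'medium', partly 'high'
```

A fuzzy rating prediction would then aggregate these degrees instead of the crisp star values.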
Faculty 4: Mrs. Suhasini T S

2. Online Payment Fraud Detection Using Machine Learning in Python
As we approach modernity, the trend of paying online is increasing tremendously. Paying online is very beneficial for the buyer, as it saves time; however, online payment methods open the door to fraud that can happen when using any payment app.
3. Fake News Detection Using Machine Learning
Fake news refers to incorrect and misleading articles published mostly for the purpose of making money through page views. Most work on machine learning methods for fake news detection has focused on classifying online reviews and publicly available social media posts. This project could be used in practice by any media company to automatically predict whether circulating news is fake, without having humans manually review thousands of news-related articles.
4. Smart Farming Using Machine Learning
With the advances in machines and technologies used in smart farming, useful and accurate information plays a significant role. This project focuses on predicting the appropriate crop based on climatic conditions, and the yield of the crop based on historic data, using supervised machine learning algorithms. The only remedy to the agrarian crisis is to do all that is possible to make agriculture a profitable enterprise and encourage farmers to continue crop production.
5. Count Number of Faces Using Python – OpenCV
In this project, we use image processing to detect and count the number of faces in an image. We do not need to extract all the features of a face; instead, the objective is to obtain a bounding box, i.e. the coordinates of each face in the image, and to count the faces from the number of detected bounding boxes.
1. Automatic Crop and Soil Monitoring Using Artificial Intelligence
Artificial Intelligence (AI) can be used for monitoring soil and crop conditions with the aid of automatic systems driven by advanced AI algorithms, yielding better crop produce. This project describes the steps involved in training such automatic CNN systems and elaborates on their potential merits and demerits.
2. Plant Disease Detection Using Image Processing (MATLAB) (Requirements: MATLAB, image processing, CNN)
Nowadays plants suffer from many diseases due to the widespread use of pesticides and sprays, but identifying rotten areas of plants at an early stage can save them. Examination of plant disease literally means examining various observable patterns on plants. Manually detecting disease in plants can be a tiresome process, hence image processing can do wonders in this context. Plant disease can be seen in different parts such as the stem, root, shoot, and even the fruit. Automatic detection of plant disease not only reduces time but can also save the plant while the disease is still in its beginning stage. We use different image processing techniques to predict the problem in plants.
Faculty 5: Mrs. M. Sabitha (Requirements: programming language Python; operating system: any OS, such as Windows or Ubuntu)

3. Traffic Sign Recognition Using Python
A traffic sign recognition system automatically recognizes traffic signs along the street, including speed limit signs, caution signs, merge signs, and so forth. Being able to automatically recognize traffic signs enables us to build "smarter cars". Traffic signs are a vital part of our road infrastructure. They give essential information, and at times compelling instructions, to road users, who are expected to adjust their driving behaviour to comply with whatever road regulation is currently in force. Without such valuable signs, we would in all likelihood face more accidents, as drivers would not be given basic input on how fast they could safely go, or be informed about road works, sharp turns, or school crossings ahead. Millions of people die on roads each year, and this number would be far higher without our street signs. Naturally, autonomous vehicles must also abide by road legislation and therefore perceive and understand traffic signs. Traditionally, computer vision strategies were used to detect and classify traffic signs, and a system like this can help the driver avoid mishaps and have a safe drive.
4. Human Activity Recognition Using a Deep Learning Model
Human activity recognition (HAR) using smartphone sensors such as the accelerometer is an active research topic. HAR is a time-series classification problem. In this project, various machine learning and deep learning models are worked out to obtain the best final result. In particular, an LSTM (long short-term memory) model of the recurrent neural network (RNN) family can be used to recognize various human activities such as standing and climbing up and down stairs.
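Before any LSTM sees the data, the raw accelerometer stream is typically segmented into fixed-length, possibly overlapping windows, each of which becomes one training example. A minimal sketch, with made-up sensor values:

```python
def sliding_windows(signal, size, step):
    """Split a 1-D sensor stream into fixed-length, possibly overlapping windows."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

# Fake accelerometer magnitudes; a real HAR dataset would be e.g. 50 Hz tri-axial data.
stream = [0.1, 0.2, 0.9, 1.1, 0.3, 0.2, 0.8, 1.0, 0.2, 0.1]
windows = sliding_windows(stream, size=4, step=2)
print(len(windows))   # → 4
print(windows[0])     # → [0.1, 0.2, 0.9, 1.1]
# Each window, paired with its activity label, becomes one LSTM training example.
```

The same windowing applies per axis for tri-axial data, giving the (window, timesteps, channels) tensors an LSTM expects.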
5. Human Scream Detection and Analysis for Controlling Crime Rate (Requirements: Python, neural networks)
As the title suggests, the project will be a desktop application that works in the background and, using machine learning and deep learning concepts, detects and analyzes human screams in a real-time environment. If the application detects something serious in its surroundings, it automatically sends an alert message, including the user's location, to the nearest police station. The application will also be capable of distinguishing clear human sound from background noise.

1. Classify Song Genres from Audio Data
In the Classify Song Genres machine learning project, you will use a song dataset to classify songs into two categories: 'Hip-Hop' or 'Rock'. You will check the correlation between features, normalize the data using scikit-learn's StandardScaler, apply PCA (Principal Component Analysis) to the scaled data, and visualize the results. After that, you will use scikit-learn's Logistic Regression and Decision Tree models to train and validate the results. In this project, you will also learn advanced techniques such as class balancing and cross-validation to reduce model bias and overfitting. Classifying Song Genres from Audio Data is a guided project; you can replicate the result on a different dataset, such as the Hotel Booking Demand dataset, to predict whether a customer will cancel a booking.
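The StandardScaler normalization step can be sketched without scikit-learn as a plain z-score transform; this mirrors StandardScaler's default of dividing by the population standard deviation. The tempo values are invented illustration data.

```python
from math import sqrt

def standardize(column):
    """Z-score a feature column: subtract the mean, divide by the
    population standard deviation (as scikit-learn's StandardScaler does)."""
    mean = sum(column) / len(column)
    std = sqrt(sum((x - mean) ** 2 for x in column) / len(column))
    return [(x - mean) / std for x in column]

# Hypothetical 'tempo' feature values for four songs.
tempo = [90.0, 110.0, 130.0, 150.0]
scaled = standardize(tempo)
print([round(x, 2) for x in scaled])  # → [-1.34, -0.45, 0.45, 1.34]
```

Scaling each feature to zero mean and unit variance matters here because PCA directions are otherwise dominated by whichever feature has the largest raw scale.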
Faculty 6: Mrs. B. Sree Saranya

2. Predict Blood Donations
In the Give Life: Predict Blood Donations project, you will predict whether or not a donor will give blood in a given time window. The dataset used in the project is from a mobile blood donation vehicle in Taiwan; as part of a blood donation drive, the blood transfusion service center drives to various universities to collect blood. In this project, you process the raw data and feed it to the TPOT Python AutoML (Automated Machine Learning) tool, which searches hundreds of machine learning pipelines to find the best one for the dataset. You then use the information from TPOT to create a model with normalized features and get an even better score. Give Life: Predict Blood Donations is a guided project; you can replicate the result on a different dataset, such as Unicorn Companies, using TPOT to predict whether a company reaches a valuation of over $5 billion.
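TPOT's pipeline search can be shown in miniature as a loop that scores candidate preprocessing-plus-model combinations and keeps the best one. This is only an illustration of the idea: TPOT's real search uses genetic programming over scikit-learn pipelines, and the dataset and classifier below are toy stand-ins.

```python
def identity(xs):
    return list(xs)

def minmax(xs):
    """Scale values to the [0, 1] range."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def nearest_centroid(train_x, train_y):
    """Fit per-class means on a 1-D feature; predict the class with the nearest mean."""
    mu = {c: sum(x for x, y in zip(train_x, train_y) if y == c) / train_y.count(c)
          for c in set(train_y)}
    return lambda x: min(mu, key=lambda c: abs(x - mu[c]))

# Toy 1-D dataset (invented): feature = months since last donation, label = donated.
X = [1, 2, 3, 4, 10, 11, 12, 13]
y = [1, 1, 1, 1, 0, 0, 0, 0]

# Try every preprocessing/model combination and keep the best scorer
# (a real AutoML tool would score each pipeline on a held-out split).
best_pipe, best_acc = None, -1.0
for prep_name, prep in [("raw", identity), ("minmax", minmax)]:
    xs = prep(X)
    clf = nearest_centroid(xs, y)
    acc = sum(clf(x) == label for x, label in zip(xs, y)) / len(y)
    if acc > best_acc:
        best_pipe, best_acc = (prep_name, "nearest-centroid"), acc

print(best_pipe, best_acc)
```

TPOT automates exactly this loop, but over a far larger space of operators and with proper cross-validation.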
3. Taxi Demand Prediction
In this ML project, we convert a time-series problem into a supervised machine learning problem to predict driver demand. Exploratory analysis has to be performed on the time series to identify patterns, and the Auto-Correlation Function (ACF) and Partial Auto-Correlation Function (PACF) are applied to analyse it. A regression model must then be built and used to solve the time-series problem. Once the training model is prepared, spot testing is performed on it. Following this, driver demand prediction is performed using Random Forest and XGBoost as the ensemble models.
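The ACF mentioned above can be computed directly: the autocorrelation at lag k is the covariance between the series and a k-shifted copy of itself, normalized by the variance. A sketch on a made-up demand series with an obvious period of 4:

```python
def acf(series, max_lag):
    """Sample autocorrelation of a series for lags 0..max_lag."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    out = []
    for lag in range(max_lag + 1):
        cov = sum((series[t] - mean) * (series[t - lag] - mean)
                  for t in range(lag, n))
        out.append(cov / var)
    return out

# A made-up hourly demand series with a strong period of 4.
demand = [10, 20, 30, 20] * 6
r = acf(demand, max_lag=4)
print([round(v, 2) for v in r])  # → [1.0, 0.0, -0.92, 0.0, 0.83]
```

The spike at lag 4 is how the ACF reveals the seasonality that lag features in the supervised reformulation should capture; libraries such as statsmodels provide the same computation with confidence bands.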
4. Age Detection
The goal of this AI case is to automate the process of determining a person's age. This information can be incredibly valuable for companies, as it helps them better understand their target audience. But applications of age detection tools are not limited to analytics. For example, they can be used to: prevent the sale of tobacco products or alcohol to minors at self-service checkouts; limit access to adult content based on a profile image; choose the communication style in bot-human conversations; and help authorities recognize people via their old photos.
5. Ted Talks Recommendation System with Machine Learning
Ted Talks are a good source of learning and inspiration. These days every platform has a recommendation system to provide a better user experience, and most applications collect data to recommend similar content according to the interests of the user. We can use the same strategy to recommend Ted Talks. This project builds a Ted Talks recommendation system with machine learning using Python. The system has to be based purely on content rather than on user data: users generally watch videos on YouTube and other applications mostly to be entertained, but they watch Ted Talks to find inspiration, so user data has little role here. To recommend Ted Talks to a user, we need to create a content-based recommendation system in which talks are recommended based on the content of the videos the user watched earlier. To create such a system, we can use the concept of cosine similarity in machine learning.
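The cosine-similarity step can be sketched in plain Python over bag-of-words term counts; a real system would typically use TF-IDF or embedding vectors, and the talk descriptions here are invented.

```python
from collections import Counter
from math import sqrt

def cosine_similarity(text_a, text_b):
    """Cosine similarity between bag-of-words term-count vectors."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = (sqrt(sum(c * c for c in a.values()))
            * sqrt(sum(c * c for c in b.values())))
    return dot / norm

# Hypothetical talk descriptions (invented illustration data).
talks = {
    "t1": "how machines learn from data",
    "t2": "teaching machines to learn",
    "t3": "the history of jazz music",
}
watched = talks["t1"]
scores = {tid: cosine_similarity(watched, desc)
          for tid, desc in talks.items() if tid != "t1"}
print(max(scores, key=scores.get))  # → t2
```

Ranking all talks by similarity to the user's previously watched talk is the entire recommendation logic of a content-based system.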
1. Web Application Guide for Efficient Greenhouse Farming
Discover efficient greenhouse farming with our comprehensive web guide. From site selection to crop management, optimize yields while conserving resources. Get tips on climate control, irrigation, and data insights. Join our community for support, use interactive tools, and stay updated on the latest advancements. Start maximizing your greenhouse's potential today.
2. Guidance Platform for Entrepreneurship
Empower your entrepreneurial journey with our guidance platform. From idea generation to business planning, legal compliance to funding, marketing to operations, we've got you covered. Connect with mentors, access resources, and join a supportive community of fellow entrepreneurs. Start building your dream business today!
Faculty 7: Mrs S. Priyanka

3. Detecting Hazardous Vehicle Movement in Accident-Prone Areas through Image Detection
Using advanced image detection technology, our system monitors accident-prone areas for hazardous vehicle movements. Through real-time analysis of video feeds, it identifies speeding, reckless driving, and other risky behaviors. Alerts are promptly generated to authorities, enhancing road safety and preventing accidents.
4. Online Persona! An AI Bot of Expertise
Online Persona is your AI expert companion, ready to provide personalized assistance across various topics. With vast knowledge and adaptive learning, it offers accurate guidance in real time, making it your go-to resource for diverse interests and challenges.
5. Mass Communication Tool for Educational Institutions
Our mass communication tool streamlines communication within educational institutions, connecting administrators, teachers, students, and parents on a single platform. Features include announcements, event scheduling, assignment sharing, progress tracking, and messaging, enhancing collaboration and engagement for all stakeholders.
1. Using Deep Learning to Detect Diabetic Retinopathy on Handheld Non-mydriatic Retinal Images Acquired by Field Workers in Community Settings
In conclusion, our study highlights the efficacy of automated deep learning-based detection of referable diabetic retinopathy (DR) and diabetic macular edema (DME) using handheld non-mydriatic retinal images in community settings. Our findings have particular relevance for policy makers in low- and middle-income countries (LMICs) aiming to implement cost-effective, scalable and sustainable DR screening programmes.
2. Generative AI for Transformative Healthcare: A Comprehensive Study of Emerging Models, Applications, Case Studies, and Limitations
Generative AI (GAI) models, which have recently been catching attention, have promising potential in healthcare. This paper discusses various applications in which different GAI models enhance healthcare operations. The diverse applications, including medical imaging, drug discovery, personalized patient treatment, medical simulation and training, clinical trial optimization, mental health support, healthcare operations and resource management, chatbots, human movement simulation and analysis, and text generation and summarization, indicate how flexible and reliable this technology is and how it can be implemented in healthcare.
Faculty 8: Mrs. B. Revathi

3. Augmenting Interpretable Models with Large Language Models during Training
Recent large language models (LLMs), such as ChatGPT, have demonstrated remarkable prediction performance on a growing array of tasks. We explore two instantiations of Aug-imodels in natural-language processing: Aug-Linear, which augments a linear model with decoupled embeddings from an LLM, and AugTree, which augments a decision tree with LLM feature expansions. Across a variety of text-classification datasets, both outperform their non-augmented, interpretable counterparts. Aug-Linear can even outperform much larger models, e.g. a 6-billion-parameter GPT-J model, despite having 10,000x fewer parameters and being fully transparent.
4. Improving Language Understanding by Generative Pre-Training
We introduce a framework for achieving strong natural language understanding with a single task-agnostic model through generative pre-training and discriminative fine-tuning. By pre-training on a diverse corpus with long stretches of contiguous text, our model acquires significant world knowledge and the ability to process long-range dependencies, which are then successfully transferred to solving discriminative tasks such as question answering, semantic similarity assessment, entailment determination, and text classification.
5. Multi-class Hate Speech Detection in the Norwegian Language Using FAST-RNN and Multilingual Fine-tuned Transformers
This research advances the field of multi-class hate speech (HS) detection by introducing an effective model for the Norwegian language, employing a BiLSTM-GRU architecture known as FAST-RNN. Through rigorous regularization and hyperparameter tuning, the FAST-RNN model has demonstrated superior performance over the baseline across all evaluation metrics. The application of supervised fastText embedding has proven especially beneficial for categorical classification tasks.
Faculty 9: Mr. Lal Bahadur Pandey

1. Recommendation System Using DL
Building a recommendation system with deep learning involves several steps. Here's a high-level overview of the process:
1) Data Collection: Gather data on user preferences, item attributes, and interactions (e.g., ratings, purchases, clicks).
2) Data Preprocessing: This involves cleaning the data, handling missing values, and encoding categorical variables. For example, you might represent users and items as unique IDs.
3) Model Selection: Choose a deep learning architecture suitable for recommendation tasks. Common choices include: i) Matrix Factorization Models, which learn latent representations of users and items and predict ratings or interactions based on the dot product of these representations; ii) Deep Neural Networks, where fully connected networks or more complex architectures like convolutional neural networks (CNNs) or recurrent neural networks (RNNs) learn embeddings and make recommendations; iii) Autoencoders, neural networks trained to reconstruct their input, which can be used to learn low-dimensional representations of users and items.
4) Model Training: Train the selected model on the preprocessed data. You can use techniques like mini-batch gradient descent and regularization to improve generalization.
5) Evaluation: Assess the model's performance using appropriate evaluation metrics such as accuracy, precision, recall, or mean squared error (MSE), depending on the type of recommendation task (e.g., rating prediction, top-N recommendation).
6) Hyperparameter Tuning: Fine-tune hyperparameters (e.g., learning rate, batch size, number of layers) to improve performance further. Techniques like grid search or random search can be used for this purpose.
7) Deployment: Deploy the trained model in a production environment, integrating it with your application or website. Ensure that the recommendation process is scalable and efficient.
8) Monitoring and Maintenance: Continuously monitor the model's performance in production and update it periodically as new data becomes available or user preferences change.
Throughout this process, it's essential to consider factors such as scalability, interpretability, and fairness to ensure that the recommendation system is effective and ethical. Additionally, you may need to address challenges like cold-start problems (e.g., recommending items for new users or items with few interactions) and model explainability.
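The matrix-factorization option in step 3 can be sketched in plain Python: learn small latent vectors for users and items by stochastic gradient descent so that their dot products approximate the observed ratings. The ratings below are toy data, and a practical system would add regularization and bias terms.

```python
import random

def factorize(ratings, n_users, n_items, k=2, lr=0.05, epochs=2000, seed=0):
    """Learn user/item latent vectors whose dot products approximate the ratings."""
    rng = random.Random(seed)
    U = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    V = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - sum(U[u][f] * V[i][f] for f in range(k))
            for f in range(k):
                # Update both factors simultaneously from the old values.
                U[u][f], V[i][f] = (U[u][f] + lr * err * V[i][f],
                                    V[i][f] + lr * err * U[u][f])
    return U, V

# Toy observed ratings as (user, item, rating) triples; invented data.
obs = [(0, 0, 5), (0, 1, 3), (1, 0, 4), (1, 2, 1), (2, 1, 4), (2, 2, 5)]
U, V = factorize(obs, n_users=3, n_items=3)
pred = sum(U[0][f] * V[0][f] for f in range(2))
print(round(pred, 1))  # close to the observed rating of 5
```

Unobserved (user, item) dot products then serve as predicted ratings for ranking recommendations.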

2. Sentiment Analysis Using ML or DL
Here's an outline for a sentiment analysis project using Machine Learning (ML) or Deep Learning (DL):
1. Problem Definition: Define the goal of the project. In
sentiment analysis, the aim is to automatically classify the
sentiment of a piece of text (e.g., positive, negative, or
neutral).
2. Data Collection: Gather a labeled dataset of text samples
along with their corresponding sentiment labels. You can
obtain such datasets from sources like social media
platforms, customer reviews, or specialized sentiment
analysis datasets.
3. Data Preprocessing: Clean the text data by removing noise such as punctuation, special characters, and stopwords. Tokenize the text into individual words or subwords and convert them into numerical representations (e.g., word embeddings).
4. Feature Engineering: Extract features from the
preprocessed text data. This might involve techniques like
Bag-of-Words (BoW), TF-IDF (Term Frequency-Inverse
Document Frequency), or more advanced methods like
word embeddings (e.g., Word2Vec, GloVe) or contextual
embeddings (e.g., BERT, GPT).
5. Model Selection:
o Machine Learning Approach: You can use traditional ML
algorithms such as Support Vector Machines (SVM), Naive
Bayes, or Random Forests. These algorithms take the
extracted features as input and learn to classify the
sentiment.
o Deep Learning Approach: Deep learning models like
Recurrent Neural Networks (RNNs), Long Short-Term
Memory networks (LSTMs), or Convolutional Neural
Networks (CNNs) can learn hierarchical representations of
text data for sentiment classification.
6. Model Training: Split the dataset into training, validation,
and test sets. Train the selected model on the training
data, optimizing the chosen objective function (e.g., cross-
entropy loss) using techniques like gradient descent or its
variants (e.g., Adam optimizer). Monitor the model's
performance on the validation set to prevent overfitting.
7. Evaluation: Evaluate the trained model on the test set
using appropriate evaluation metrics such as accuracy,
precision, recall, F1-score, or ROC-AUC, depending on the
nature of the sentiment classification task.
8. Hyperparameter Tuning: Fine-tune the hyperparameters of
the model (e.g., learning rate, dropout rate, number of
layers) using techniques like grid search or random search
to improve performance further.
9. Deployment: Deploy the trained model into production,
integrating it with your application or system. Ensure that
the inference process is efficient and scalable.
10. Monitoring and Maintenance: Continuously monitor the model's performance in production, retraining it periodically with new data and updating it as needed to maintain its effectiveness.
Throughout the project, consider ethical implications such as bias detection and mitigation, privacy protection, and transparency in decision-making. Also, document the entire process thoroughly for reproducibility and future reference.
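Steps 3 to 6 can be illustrated end-to-end with a tiny multinomial Naive Bayes classifier over bag-of-words counts, one of the traditional ML options named in step 5. The training sentences are invented; a real project would train on thousands of labeled reviews.

```python
from collections import Counter
from math import log

# Tiny labeled corpus (invented examples).
train = [
    ("i love this movie it is great", "pos"),
    ("great acting and a great story", "pos"),
    ("i hate this boring movie", "neg"),
    ("terrible acting and a boring story", "neg"),
]

counts = {"pos": Counter(), "neg": Counter()}
for text, label in train:
    counts[label].update(text.split())
vocab = set(w for c in counts.values() for w in c)

def predict(text):
    """Multinomial Naive Bayes with add-one (Laplace) smoothing, uniform priors."""
    scores = {}
    for label, c in counts.items():
        total = sum(c.values())
        scores[label] = sum(log((c[w] + 1) / (total + len(vocab)))
                            for w in text.split())
    return max(scores, key=scores.get)

print(predict("what a great story"))   # → pos
print(predict("boring and terrible"))  # → neg
```

The DL variants in step 5 replace the count features with learned embeddings, but the train/evaluate loop around them is the same.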
3. Stock Market Prediction System Using ML
Creating a stock market prediction system using Machine Learning (ML) involves several steps. Here's a structured approach to building such a system:
1. Data Collection: Gather historical stock market data
including features like opening price, closing price, highest
price, lowest price, trading volume, etc. Additionally, you
might include external factors like economic indicators,
news sentiment, or social media activity that could impact
stock prices.
2. Data Preprocessing: Clean the collected data by handling
missing values, removing outliers, and normalizing
numerical features. Also, consider engineering new
features that might be relevant for prediction, such as
moving averages, volatility measures, or technical
indicators.
3. Feature Selection/Extraction: Select the most relevant
features for predicting stock prices. You can use
techniques like correlation analysis, feature importance
ranking, or dimensionality reduction methods (e.g., PCA) to
identify the most informative features.
4. Model Selection:
o Regression Models: Traditional regression algorithms like
Linear Regression, Ridge Regression, Lasso Regression, or
ElasticNet can be used for predicting continuous stock
price values.
o Time Series Models: Techniques like Autoregressive
Integrated Moving Average (ARIMA), Seasonal
Autoregressive Integrated Moving-Average (SARIMA), or
Exponential Smoothing Methods (e.g., Holt-Winters) are
specifically designed for time series forecasting.
o Machine Learning Models: Models like Random Forest
Regression, Gradient Boosting Regression (e.g., XGBoost,
LightGBM), or Support Vector Regression (SVR) can capture
non-linear relationships between features and stock prices.
o Deep Learning Models: Recurrent Neural Networks (RNNs),
Long Short-Term Memory networks (LSTMs), or Gated
Recurrent Units (GRUs) can learn complex temporal
patterns from historical stock data.
5. Model Training: Split the dataset into training and testing
sets. Train the selected model on the training data,
optimizing the chosen objective function (e.g., Mean
Squared Error for regression models) using techniques like
gradient descent or its variants (e.g., Adam optimizer).
6. Model Evaluation: Evaluate the trained model on the test
set using appropriate evaluation metrics such as Mean
Absolute Error (MAE), Mean Squared Error (MSE), Root
Mean Squared Error (RMSE), or R-squared (R2) score to
assess its predictive performance.
7. Hyperparameter Tuning: Fine-tune the hyperparameters of
the model (e.g., learning rate, regularization parameters,
number of layers) using techniques like grid search,
random search, or Bayesian optimization to improve
performance further.
8. Deployment: Deploy the trained model into production,
integrating it with your stock trading platform or
investment system. Ensure that the prediction process is
efficient and scalable, and provide mechanisms for real-time or batch predictions.
9. Monitoring and Maintenance: Continuously monitor the
model's performance in production, retraining it periodically
with new data, and updating it as needed to adapt to
changing market conditions.
Remember that predicting stock prices is inherently
uncertain and influenced by various factors, so it's essential
to manage expectations and consider the risks associated
with trading based on predictions. Additionally, be mindful of
ethical considerations such as fairness, transparency, and
regulatory compliance when developing and deploying
predictive models for financial applications.
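Two of the building blocks above, moving-average feature engineering (step 2) and RMSE evaluation (step 6), can be sketched in plain Python. The closing prices are invented, and the "prediction" is a naive last-value baseline used only to exercise the metric.

```python
from math import sqrt

def moving_average(prices, window):
    """Simple moving average; the first window-1 positions have no value (None)."""
    out = [None] * (window - 1)
    for i in range(window - 1, len(prices)):
        out.append(sum(prices[i - window + 1:i + 1]) / window)
    return out

def rmse(actual, predicted):
    """Root Mean Squared Error between two equal-length sequences."""
    return sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

# Invented daily closing prices, not real market data.
close = [100.0, 102.0, 101.0, 105.0, 107.0, 106.0]
ma3 = moving_average(close, window=3)
print(ma3)  # → [None, None, 101.0, ...]

# Naive baseline: predict tomorrow's close = today's close.
print(round(rmse(close[1:], close[:-1]), 2))  # → 2.28
```

A serious model must beat this last-value baseline on RMSE to demonstrate any predictive value, which is a useful sanity check before deploying anything.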
4. Climate Change Prediction Using ML or DL
Predicting climate change involves forecasting various aspects of the Earth's climate system, such as temperature patterns, precipitation levels, sea level rise, and extreme weather events. Machine Learning (ML) and Deep Learning (DL) can be powerful tools for analyzing climate data and making predictions. Here's a general framework for building a climate change prediction model using ML or DL:
1. Data Collection: Gather historical climate data from various
sources such as weather stations, satellites, ocean buoys,
and climate model simulations. This data may include
variables like temperature, humidity, wind speed, atmospheric pressure, ocean temperature, sea ice extent,
greenhouse gas concentrations, etc.
2. Data Preprocessing: Clean the collected data by handling
missing values, removing outliers, and performing quality
control checks. Convert the data into a suitable format for
analysis, such as time series or spatial grids. Additionally,
you may need to aggregate or interpolate data to ensure
consistency across different spatial and temporal scales.
3. Feature Engineering: Extract relevant features from the
raw climate data that capture important patterns and
relationships. This might involve computing statistical
summaries (e.g., mean, variance, trend) over specific time
periods, deriving climatic indices (e.g., El Niño Southern
Oscillation indices), or incorporating domain knowledge
about climate processes.
4. Model Selection:
o Regression Models: Linear regression, Ridge regression,
Lasso regression, or Random forest regression can be used
for predicting continuous climate variables like
temperature or precipitation.
o Time Series Models: Autoregressive Integrated Moving
Average (ARIMA), Seasonal ARIMA (SARIMA), or
Exponential Smoothing methods can capture temporal
dependencies in climate data and make short-term
forecasts.
o Deep Learning Models: Recurrent Neural Networks (RNNs),
Long Short-Term Memory networks (LSTMs), or
Convolutional Neural Networks (CNNs) can learn complex
spatiotemporal patterns from climate data and make both
short-term and long-term predictions.
5. Model Training: Split the dataset into training, validation,
and test sets. Train the selected model on the training
data, optimizing the chosen objective function (e.g., Mean
Squared Error for regression models) using techniques like
gradient descent or its variants (e.g., Adam optimizer).
Monitor the model's performance on the validation set to
prevent overfitting.
6. Model Evaluation: Evaluate the trained model on the test
set using appropriate evaluation metrics such as Mean
Absolute Error (MAE), Mean Squared Error (MSE), Root
Mean Squared Error (RMSE), or correlation coefficient
(e.g., Pearson's r) to assess its predictive performance.
7. Hyperparameter Tuning: Fine-tune the hyperparameters of
the model (e.g., learning rate, batch size, number of layers)
using techniques like grid search, random search, or
Bayesian optimization to improve performance further.
8. Deployment: Deploy the trained model into production,
integrating it with climate monitoring systems, weather
forecasting agencies, or decision support tools for
policymakers. Ensure that the prediction process is
efficient and scalable, and provide mechanisms for real-
time or batch predictions.
9. Monitoring and Maintenance: Continuously monitor the
model's performance in production, retraining it
periodically with new data, and updating it as needed to
adapt to changing climate conditions. Collaborate with
domain experts and stakeholders to validate model
outputs and incorporate feedback into the prediction
process.
It's important to acknowledge the inherent uncertainties and
limitations in climate predictions due to the complexity of
Earth's climate system and the inherent variability in natural
processes. Therefore, climate models should be used as tools
to inform decision-making rather than precise predictors of
future outcomes. Additionally, keep ethical considerations such as fairness, transparency, and inclusivity in mind when developing and deploying predictive models for climate-related applications.
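As a minimal illustration of the regression option in step 4, the sketch below fits a linear trend to a short annual temperature-anomaly series with ordinary least squares and extrapolates one year ahead. The data values and variable names are invented for the example; a real project would use far more data and a vetted library.

```python
# Minimal sketch: fit a linear trend y = a + b*x by ordinary least squares
# to a hypothetical annual temperature-anomaly series, then forecast the
# next year. The numbers are invented for illustration only.

def fit_trend(xs, ys):
    """Return intercept a and slope b of the least-squares line."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = mean_y - b * mean_x
    return a, b

years = [2018, 2019, 2020, 2021, 2022, 2023]
anomaly = [0.82, 0.95, 1.01, 0.84, 0.89, 1.17]   # hypothetical values in °C

a, b = fit_trend(years, anomaly)
forecast_2024 = a + b * 2024
print(f"trend: {b:.4f} °C/year, 2024 forecast: {forecast_2024:.2f} °C")
# prints: trend: 0.0400 °C/year, 2024 forecast: 1.09 °C
```

The same fitted line would be evaluated on held-out years with the MAE/RMSE metrics of step 6 before being trusted for any forecast.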
5. Disease Detection System using ML
Developing a disease detection system using Machine Learning (ML) involves several steps. Below is a structured approach to building such a system:
1. Problem Definition: Define the target disease(s) that the
system will detect. It could be a specific disease like cancer,
diabetes, or cardiovascular disease, or a broader category
like infectious diseases.
2. Data Collection: Gather labeled medical data relevant to
the target disease(s). This data may include patient
demographics, symptoms, medical history, laboratory test
results, imaging scans (e.g., X-rays, MRIs), biopsy reports,
genetic markers, and treatment outcomes.
3. Data Preprocessing: Clean the collected data by handling
missing values, removing outliers, and normalizing
numerical features. Encode categorical variables and
handle imbalanced classes if present. Ensure compliance
with data privacy regulations (e.g., HIPAA) by anonymizing
sensitive information.
4. Feature Selection/Extraction: Identify the most informative
features for disease detection. This may involve domain
knowledge, statistical analysis, or feature importance
ranking techniques. Feature extraction techniques like
Principal Component Analysis (PCA) or deep learning-based
feature extraction can also be used to derive meaningful
representations from raw data.
5. Model Selection:
o Classification Models: Choose appropriate classification
algorithms such as Logistic Regression, Support Vector
Machines (SVM), Random Forests, Gradient Boosting
Machines (GBM), or Neural Networks (e.g., Multilayer
Perceptron) for binary or multi-class disease classification
tasks.
o Anomaly Detection Models: For diseases with rare
occurrences or unusual patterns, anomaly detection
techniques like One-Class SVM or Isolation Forest can be
used to identify abnormal cases.
o Deep Learning Models: Convolutional Neural Networks
(CNNs), Recurrent Neural Networks (RNNs), or
Transformer-based architectures (e.g., BERT) can learn
complex patterns from raw medical data such as images,
time-series, or text.
6. Model Training: Split the dataset into training, validation,
and test sets. Train the selected model on the training
data, optimizing the chosen objective function (e.g., cross-
entropy loss for classification tasks) using techniques like
gradient descent or its variants (e.g., Adam optimizer).
Monitor the model's performance on the validation set to
prevent overfitting.
7. Model Evaluation: Evaluate the trained model on the test
set using appropriate evaluation metrics such as accuracy,
precision, recall, F1-score, receiver operating characteristic
(ROC) curve, and area under the curve (AUC) to assess its
performance in disease detection.
8. Hyperparameter Tuning: Fine-tune the hyperparameters of
the model (e.g., learning rate, batch size, regularization
parameters) using techniques like grid search, random
search, or Bayesian optimization to optimize performance
further.
9. Deployment: Deploy the trained model into production,
integrating it with healthcare systems, electronic medical
records (EMRs), or mobile health applications. Ensure that
the prediction process is efficient, scalable, and compliant
with regulatory requirements (e.g., FDA approval for
medical devices).
10. Monitoring and Maintenance: Continuously
monitor the model's performance in production, retraining
it periodically with new data, and updating it as needed to
adapt to evolving disease patterns or changes in patient
populations. Collaborate with healthcare professionals to
validate model outputs and incorporate clinical expertise
into the decision-making process.
Throughout the development process, prioritize patient
privacy, data security, and ethical considerations. Ensure
transparency in model predictions and provide explanations
for decision-making to build trust among healthcare
providers and patients. Additionally, seek feedback from
domain experts and stakeholders to improve the system's
effectiveness and usability in real-world clinical settings.
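The evaluation metrics named in step 7 can be computed directly from a model's predictions. A minimal sketch, with invented labels standing in for real patient outcomes:

```python
# Minimal sketch: compute accuracy, precision, recall, and F1-score for a
# binary disease classifier from its predictions. Labels are invented.

def evaluate(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = disease present (hypothetical)
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # hypothetical model output

acc, prec, rec, f1 = evaluate(y_true, y_pred)
print(f"accuracy={acc:.2f} precision={prec:.2f} recall={rec:.2f} f1={f1:.2f}")
# prints: accuracy=0.75 precision=0.75 recall=0.75 f1=0.75
```

In medical settings recall (missed cases) and precision (false alarms) usually matter more than raw accuracy, which is why step 7 lists them separately.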
1. Using Deep Learning for Image-Based Plant Disease Detection
Crop diseases are a major threat to food security, but their rapid identification remains difficult in many parts of the world due to the lack of the necessary infrastructure. The combination of increasing global smartphone penetration and recent advances in computer vision made possible by deep learning has paved the way for smartphone-assisted disease diagnosis. Training deep learning models on increasingly large, publicly available image datasets presents a clear path toward smartphone-assisted crop disease diagnosis on a massive global scale.
2. Development of an Efficient CNN Model for Tomato Crop Disease Identification
Tomato is an important vegetable crop cultivated worldwide, second only to potato. However, the crop can be damaged by various diseases, and it is important for the farmer to know the type of disease for timely treatment of the crop. Leaves have been observed to be a clear indicator of specific diseases. A number of Machine Learning (ML) algorithms and Convolutional Neural Network (CNN) models have been proposed in the literature for identification of tomato crop diseases.
10. Mr. Himanshu Nayak
3. Vision-Based Pest Detection Based on SVM Classification Method
Pest management in field and greenhouse conditions has been one of the main concerns for agricultural scientists and producers. Reductions in production loss and crop damage, which can severely affect marketable yields, force farmers to use different methods to control and protect fields against pest damage. In the present century, the use of pesticides has increased due to their initially low cost, easy accessibility, quick effect, and the lack of knowledge on the part of growers, resulting in dangerous consequences.
4. Modified Convolutional Neural Network Architecture Analysis for Facial Emotion Recognition
Facial emotions play an important role in communication among humans and help us to understand the intentions of others and how they feel. Humans have a strong tendency to express emotions, and emotions play an essential role in our daily lives. Humans spend a great amount of time understanding the emotions of others, decoding what these signals mean, and then determining how to respond to and deal with them.
5. Exploring Artificial Intelligence Techniques for Detection of Pigeon Pea Sterility Mosaic Disease
Pigeon pea is economically the most significant pulse crop in the world, and its production has increased significantly over the years. The pigeon pea crop is affected by several diseases that occur in mild to severe forms; among these, sterility mosaic disease is a major disease caused by pigeon pea sterility mosaic virus (PPSMV), transmitted by the eriophyid mite vector (A. cajani). Sterility mosaic disease is one of the most damaging diseases and is endemic in most pigeon pea producing regions of India.
11. Mr. U. Nagaiah
1. Speech Emotion Detection System using Python
While having a face-to-face conversation with another person, it is often possible to gauge their emotions through cues such as their expressions, body language, etc. However, during a telephonic conversation, it becomes very difficult to get a sense of the emotional state of an individual. Following the Covid-19 pandemic, a lot of us were confined to our homes. In such trying times, it is essential to keep tabs on the mental health and well-being of your family, friends, co-workers, etc. This Speech Emotion Detection project has been designed to help detect a person's emotions based on their voice.
2. Depression Detection System using Python
Depression is a leading cause of mental illness, and it has been linked to an increased risk of premature death. Furthermore, it is a major contributor to suicidal thoughts and causes significant impairment in daily life. Every year, one in every 15 adults suffers from depression, affecting 300 million people worldwide. Several previous empirical studies have shown that certain linguistic characteristics can be analyzed and correlated to likely depression symptoms, as well as help predict self-destructive behavior. This Depression Detection System detects the type of depression (anxiety, PTSD, or bipolar) and recommends nearby clinics for consulting a psychiatrist. The user must speak for one minute about themselves while their facial expressions are being recorded, and must also take a quiz and answer all of the questions.
3. Skin Disease Detection System Using CNN
Skin disease is common among humans, and millions of people suffer from various kinds of skin diseases. These diseases often carry hidden dangers, leading not only to a lack of self-confidence and psychological depression but also to a risk of skin cancer. According to the World Health Organization (WHO), around 30% to 70% of the population has fallen victim to skin disease, and most of these individuals don't know much about the classification of skin disease. To tackle this problem, we have designed a Skin Disease Detection System Using CNN. The idea behind this project is to make it possible for the common man to get a sense of the disease affecting his/her skin, so they can get a head start in preparing for its treatment, and the doctor in charge can get an idea about the type of cancer, which ultimately helps in faster and more efficient diagnosis.
4. Predicting House Price Using Decision Tree
Data mining is the process of investigating hidden patterns of information from various perspectives for categorization into useful data, which is collected and assembled in specific areas such as data warehouses, for efficient analysis, data mining algorithms, decision support, and other data requirements, ultimately cutting costs and generating revenue. The decision tree is one of the most powerful and widely used classification and prediction tools. A decision tree is a flowchart-like tree structure, with each internal node representing a test on an attribute, each branch representing a test outcome, and each leaf node (terminal node) holding a class label.
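To make the flowchart analogy concrete, the sketch below grows a one-level regression tree (a "stump") for house prices: the single internal node tests the area attribute against the threshold that minimizes squared error, and each leaf predicts the mean price of its side. All areas, prices, and units are invented for the example.

```python
# Minimal sketch: a one-level regression "decision stump" for house prices.
# The internal node tests area <= threshold; each leaf predicts the mean
# price of the training points that reach it. Data are invented.

def best_stump(areas, prices):
    """Try every midpoint between sorted areas; return (threshold, left_mean, right_mean)."""
    best = None
    pts = sorted(zip(areas, prices))
    for i in range(1, len(pts)):
        thr = (pts[i - 1][0] + pts[i][0]) / 2
        left = [p for a, p in pts if a <= thr]
        right = [p for a, p in pts if a > thr]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = sum((p - lm) ** 2 for p in left) + sum((p - rm) ** 2 for p in right)
        if best is None or sse < best[0]:
            best = (sse, thr, lm, rm)
    return best[1:]

areas = [750, 800, 850, 1600, 1700, 1800]    # sq ft (hypothetical)
prices = [31, 33, 35, 78, 80, 84]            # lakhs (hypothetical)

thr, left_mean, right_mean = best_stump(areas, prices)
predict = lambda a: left_mean if a <= thr else right_mean
print(f"split at area <= {thr:.0f}: predict {left_mean:.1f} else {right_mean:.1f}")
# prints: split at area <= 1225: predict 33.0 else 80.7
```

A full decision tree simply applies this split search recursively to each leaf until a stopping criterion is met.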
5. Fake Product Review Monitoring & Removal for Genuine Ratings (PHP)
Most people want reviews of a product before spending their money on it, so they come across various reviews on websites; however, the user cannot tell whether these reviews are genuine or fake. On some review websites, good reviews are added by the product company's own people in order to produce falsely positive product reviews: they give good reviews for many different products manufactured by their own firm, and the user is not able to find out whether a review is genuine or fake. To find fake reviews on a website, this "Fake Product Review Monitoring and Removal for Genuine Online Product Reviews Using Opinion Mining" system is introduced. The system finds fake reviews posted as fake comments about a product by identifying the IP address along with review posting patterns.
12. Mr. Ch. Malleshwar Rao
1. Multiple Disease Prediction System using Machine Learning
When anyone is afflicted with an illness, they must see a doctor, which is both time-consuming and expensive. It can also be difficult for users who are out of reach of doctors and hospitals, because the illness cannot be detected. So, if the above procedure can be done using automated software that saves time and money, it could be better for the patient, making the process go more smoothly.
By keeping this in mind, we have developed our Multiple
Disease Prediction System using Machine Learning. It is a
web-based program that predicts a user’s disease based on
the symptoms they have. It will enable end users to predict
chronic diseases without having to visit a physician for a diagnosis. The aim is to identify various diseases by observing the symptoms of patients and applying various Machine Learning techniques.
2. Secure Persona Prediction and Data Leakage Prevention System using Python
Businesses must customize according to their target audience because this helps them better understand their customers' needs and preferences, and tailor their products, services, and marketing strategies accordingly. By customizing their offerings to meet the specific needs and preferences of their target audience, businesses can increase the relevance of their products and services, which can help to attract more customers and improve customer satisfaction. To help businesses get to know their target audience better, we have introduced Secure Persona Prediction. It can be used in marketing and user experience design to anticipate the needs, wants, and behaviours of specific groups of individuals or personas. It involves analyzing data and gathering insights about a group of users, such as their demographics, and using that information to recommend products accordingly.
3. Heart Failure Prediction System
According to the World Health Organization, 12 million deaths occur yearly due to heart disease. The burden of cardiovascular disease has been rapidly increasing all over the world in the past few years. Early detection of cardiac diseases can decrease the mortality rate and overall complications. However, it is not possible to monitor patients accurately every day in all cases, and 24-hour consultation with a doctor is not available, since it requires more patience, time, and expertise. Our Heart Failure Prediction System is intended to assist patients in recognizing their heart condition early and receiving treatment at an earlier stage, allowing them to avoid serious conditions. We have designed this system using a Machine Learning model to predict the future possibility of heart disease by implementing the Logistic Regression algorithm.
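A minimal sketch of the logistic-regression idea behind such a system, trained by gradient descent on a single made-up risk feature. The feature values and labels are invented, not clinical data; a real system would use many clinical features and a vetted library such as scikit-learn.

```python
import math

# Minimal sketch: logistic regression with one feature, trained by batch
# gradient descent on the cross-entropy loss. Data are invented, not clinical.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# x: a single standardized risk score; y: 1 = heart disease, 0 = healthy
xs = [-2.0, -1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0]
ys = [0, 0, 0, 0, 1, 1, 1, 1]

w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    grad_w = sum((sigmoid(w * x + b) - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum((sigmoid(w * x + b) - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

risk = sigmoid(w * 1.2 + b)   # predicted probability for a new, hypothetical patient
print(f"learned w={w:.2f}, P(disease | x=1.2)={risk:.2f}")
```

The learned sigmoid maps any risk score to a probability between 0 and 1, which is what lets the system report a "future possibility" of heart disease rather than a bare yes/no.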
4. Learning Disability Detector and Classifier System
A learning disability, also known as a learning disorder, is best described as a condition in the brain that causes difficulties comprehending or processing information, and it can be caused by several different factors. There are different types of learning disabilities, each with its own unique causes, symptoms, and treatment methods. Every child requires education to mould how they perceive and learn about the world around them. However, children with learning disabilities often find it difficult to cope with conventional teaching methods, and because of this they might get left behind. Developed using Python, the Classification and Detection of Learning Disability Among Students and Personalizing Learning Materials Using ML project helps gauge the learning disability that a student is suffering from using a series of tests.
5. Traffic Sign Recognition System using CNN
Traffic sign detection and recognition have gained importance with advances in image processing due to the benefits that such a system may provide. The recent developments and interest in self-driving cars have also increased the interest in this field. Automatic detection and recognition of traffic signs is very important and could potentially be used for driver assistance to reduce accidents, and eventually in driverless automobiles. There are many sign boards on roads that people are unaware of: they may have seen traffic or road signs all the time but not understand what those signs indicate. Our Traffic Sign Recognition System detects and recognizes road or traffic signs from an image or video, and will also recognize signs in real time using a CNN. Here, a deep Convolutional Neural Network (CNN) is used to develop autonomous traffic or road sign detection.
1. Visualising and Forecasting Stocks using Dash
Students will create a single-page web application using Dash (a Python framework) and some machine learning models, which will show company information (logo, registered name, and description) and stock plots based on the stock code given by the user. The ML model will also enable the user to get predicted stock prices for a date entered by the user.
2. Exploratory Analysis of Geolocational Data
This project involves the use of K-Means clustering to find the best accommodation for students in any city of your choice, by classifying accommodation for incoming students on the basis of their preferences for amenities, budget, and proximity to the location.
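The K-Means idea can be sketched in a few lines of Lloyd's algorithm: repeatedly assign each point to its nearest centre, then move each centre to the mean of its points. The 2-D points below are invented stand-ins for geolocational features (e.g. scaled coordinates plus budget); a real project would use scikit-learn on actual amenity data.

```python
# Minimal sketch: K-Means clustering (k=2) via Lloyd's algorithm on invented
# 2-D points standing in for geolocational features.

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, centers, iters=20):
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:                      # assignment step
            i = min(range(len(centers)), key=lambda c: dist2(p, centers[c]))
            clusters[i].append(p)
        # update step: each centre becomes the mean of its cluster
        centers = [tuple(sum(v) / len(cl) for v in zip(*cl)) for cl in clusters]
    return centers, clusters

# two obvious groups of hypothetical "accommodation" points
points = [(1.0, 1.1), (0.9, 1.0), (1.1, 0.9),
          (5.0, 5.1), (4.9, 5.0), (5.1, 4.9)]
centers, clusters = kmeans(points, [points[0], points[3]])
print([len(c) for c in clusters])             # prints: [3, 3]
```

Here the initial centres are picked deterministically, one per visible group, so the toy run converges immediately; real k-means implementations use smarter seeding such as k-means++.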
13. Mr. Radhe Shyam Panda
3. Mental Health Tracker using Flutter and Firebase
The application allows users to track their moods, highlighting the importance of self-care. By using Firebase and Flutter technologies, the application ensures a seamless user experience, advancing the accessibility of mental health support in the digital age.
4. Language Translation Model
The Transformer model extracts features for each word using a self-attention mechanism to learn the importance of each word in the sentence.
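Self-attention can be sketched directly: each word's query vector is compared with every word's key vector, the scores are softmax-normalized into importance weights, and the output is the weighted sum of value vectors. The tiny 2-dimensional Q/K/V vectors below are invented; in a real Transformer they come from learned projections of word embeddings.

```python
import math

# Minimal sketch: scaled dot-product self-attention for a 3-word sentence,
# with invented 2-dimensional query/key/value vectors.

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    d = len(K[0])
    out = []
    for q in Q:
        # score this word's query against every word's key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)   # importance of every word for this word
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

Q = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
ctx = attention(Q, K, V)
print([[round(x, 2) for x in row] for row in ctx])
```

Each output row is a context vector: a mixture of all the value vectors, weighted by how relevant each word is to the word being processed.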
5. Sentiment Analysis of Social Media Posts
This project deals with social media sentiment analysis: the process of collecting and analyzing information on the emotions behind how people talk about your brand on social media. Rather than a simple count of mentions or comments, sentiment analysis considers feelings and opinions.
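At its simplest, sentiment can be scored against an opinion lexicon. The tiny word lists below are invented stand-ins for a real lexicon (such as VADER's), and real systems typically use trained classifiers instead:

```python
# Minimal sketch: lexicon-based sentiment scoring of social media posts.
# The word lists are invented; real systems use full lexicons or classifiers.

POSITIVE = {"love", "great", "awesome", "good", "happy"}
NEGATIVE = {"hate", "terrible", "awful", "bad", "sad"}

def sentiment(post):
    words = [w.strip(".,!?#@").lower() for w in post.split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this brand, great support!"))   # prints: positive
print(sentiment("Terrible update, I hate the new UI"))  # prints: negative
print(sentiment("Just ordered the new phone"))          # prints: neutral
```

This captures the project's key point: the score reflects the feeling expressed, not merely that the brand was mentioned.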
1. Stroke Prediction System using Linear Regression
Our Stroke Prediction System is designed to help predict stroke risk and find nearby hospitals in an emergency. A stroke is defined as an acute neurological disorder of the blood vessels in the brain that occurs when the blood supply to an area of the brain stops and the brain cells are deprived of the necessary oxygen. According to the World Stroke Organization, 13 million people suffer a stroke each year, and approximately 5.5 million of them die as a result. Stroke is a leading cause of death and disability worldwide, and its imprint is serious in all aspects of life. It affects not only the patient but also the patient's social environment, family, and workplace. In addition, contrary to popular belief, it can happen to anyone, at any age, regardless of gender or physical condition.
14. Ms. G.S. Sandhya Rani
2. Automatic Pronunciation Mistake Detector
Our Automatic Pronunciation Mistake Detection project is an efficient automatic correction system for English pronunciation errors for new language learners. Given the drawbacks of traditional English pronunciation correction systems, such as failure to provide timely feedback and correct learners' pronunciation errors, slow improvement of learners' English proficiency, and even misleading learners, it is critical to develop a scientific and efficient automatic correction system for English pronunciation errors. The system is designed to enable students and users to improve their pronunciation skills. By using speech recognition, PyAudio, and pyttsx3, the project aims to efficiently diminish the error rate and enhance the accuracy of error detection.
3. Movie Success Prediction System using Python
Users can predict the success of a movie before its release; the system will make a prediction if relevant data exists in the database. Today, the trouble is that the more things change, the more they stay the same. Filmmaking in India is now a multimillion-dollar industry employing over 6 million workers and reaching millions of people worldwide; in 2008 the industry was valued at 107.1 billion rupees. With such a fortune and the employment of so many people at stake every Friday, it is of immense interest to producers to know the probability of success or failure of a movie. Producers and distributors of new movies need to forecast box-office results in an attempt to reduce uncertainty in the motion picture business. This application helps predict movie success rates based on past data.
4. Facial Emotion Recognition and Detection in Python using Deep Learning
Humans often have different moods, and their facial expressions change accordingly. Human emotion recognition plays a very important role in social relations, and the automatic recognition of emotions has been an active research topic from early eras. In this deep learning system, the user's emotions are detected from their facial expression: real-time detection of the face and interpretation of different facial expressions such as happy, sad, angry, afraid, surprised, disgusted, and neutral. The system can detect six basic human emotions plus a neutral state.
5. Signature Verification System Using CNN
Every person has a unique signature that is used primarily for personal identification and verification of important documents or legal transactions, most commonly to authenticate cheques, drafts, certificates, approvals, letters, and other legal documents. Because a signature is used in such critical activities, verification of its authenticity is essential. This type of verification is critical in preventing document forgery and falsification in a variety of financial, legal, and commercial settings. Traditionally, signatures were verified manually by comparing them to copies of genuine signatures. This simple method may not be sufficient as technology advances, bringing with it new techniques for forgery and falsification of signatures.
1. Enhancing Natural Language Understanding with Transformer-Based Models
This project aims to improve natural language understanding (NLU) by leveraging transformer-based models such as BERT, GPT, and T5. The research focuses on developing techniques to enhance the performance of these models on various NLU tasks, including text classification, question answering, and sentiment analysis. By experimenting with different model architectures, fine-tuning strategies, and data augmentation methods, the goal is to achieve state-of-the-art results on benchmark datasets and in practical applications.
15. Mr. B. Bala Krishna
2. AI-Powered Healthcare Diagnostics Using Multi-Modal Data
This project explores the development of AI-powered diagnostic systems that integrate multi-modal data, including medical imaging, electronic health records, and genomic data. The research aims to create machine learning models that can accurately diagnose diseases and predict patient outcomes by combining and analyzing diverse data sources. The study will investigate deep learning techniques for image analysis, natural language processing for text data, and data fusion methods to provide comprehensive and accurate healthcare diagnostics. Each of these research projects addresses a unique aspect of AI, ranging from natural language processing and predictive maintenance to autonomous vehicles, image synthesis, and healthcare diagnostics, and they offer opportunities to contribute to cutting-edge developments in the field of artificial intelligence.
3. AI-Driven Predictive Maintenance for Industrial IoT Systems
This research project explores the application of AI techniques to predictive maintenance for industrial Internet of Things (IoT) systems. The study involves developing machine learning models to predict equipment failures and maintenance needs based on sensor data collected from industrial machinery. The project will investigate the use of time-series analysis, anomaly detection, and deep learning models to provide accurate and timely predictions, ultimately reducing downtime and maintenance costs in industrial settings.
4. Unsupervised Learning for Anomaly Detection in Cybersecurity
This research focuses on the use of unsupervised learning methods to detect anomalies in network traffic for cybersecurity purposes. The study compares various algorithms, such as autoencoders, clustering, and generative adversarial networks (GANs), to determine their effectiveness in identifying potential security threats without prior labeled data.
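As a simple statistical baseline for such a comparison (far simpler than the autoencoders and GANs named above, and not a substitute for them), the sketch below fits a mean and standard deviation to a window of normal traffic and flags new measurements whose z-score exceeds a threshold. All byte counts are invented.

```python
import statistics

# Minimal sketch: unsupervised anomaly detection via z-scores on per-minute
# network byte counts. A real study would compare this simple baseline
# against autoencoders, clustering, and GANs. All values are invented.

def fit(baseline):
    """Learn mean and (population) std dev from unlabeled normal traffic."""
    return statistics.mean(baseline), statistics.pstdev(baseline)

baseline = [1200, 1180, 1250, 1210, 1195, 1230, 1220, 1205]  # bytes/min, normal
mu, sigma = fit(baseline)

new = [1215, 9800, 1190]                     # incoming measurements
flags = [abs(v - mu) / sigma > 3.0 for v in new]
print(flags)                                 # prints: [False, True, False]
```

No labels are needed: the notion of "anomalous" falls out of how far a value sits from the traffic the detector has already seen, which is the defining property of the unsupervised methods this project compares.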
5. Explainable AI: Interpreting Machine Learning Models for Healthcare Decision-Making
This study explores the field of explainable AI (XAI) with a focus on healthcare applications. The research aims to develop methods that provide transparent and interpretable insights from complex machine learning models, enabling healthcare professionals to make informed decisions based on model predictions, thereby increasing trust and reliability in AI-assisted diagnostics.
1. Design of Virtual Reality Systems for Education: A Cognitive Approach
There has been spectacular growth in the development of virtual reality systems, and particularly of special-purpose applications for education, over recent years. Youngblut's work includes and describes over seventy educational applications. Other noteworthy examples of the interest this technology has raised in the teaching field are the appearance of electronic journals accessible via the internet and specialized in this issue (like 'Virtual Reality in Schools'), monographic journal issues (like 'Presence', volume 8), and the growing number of university and private research centres involved in creating virtual worlds for education.
16. Mr. J. Srikanth
2. Implications of Virtual Reality (VR) for School Teachers and Instructional Designers: An Empirical Investigation
Virtual reality (VR) is a promising technological advancement that provides learners with immersive and interactive experiences that closely resemble reality. The immersive environment is created by utilising the visual, auditory, tactile, and olfactory senses. The increasing prevalence of virtual reality in diverse educational domains such as science, history, language, biology, and medicine is a noteworthy development. These technologies exhibit the capability to augment engagement, knowledge dissemination, empathy, and learner autonomy. The progress made in virtual reality technology has significantly improved the level of immersion and interactivity within virtual environments, and it is believed that with ongoing endeavours the desired level of learning performance can be attained. Several studies have indicated that low-cost head-mounted displays (HMDs) are more effective for instructional purposes than expensive immersive virtual reality (VR) technology.
3. Real-Time Analysis of Virtual Reality Art Design Based on Immersive Communication
Virtual reality technology has a clear development direction: high-quality, fast graphics and excellent processing capacity. Through the continuous improvement of these aspects, it can promote the development of output and input equipment, strengthen processing capacity, and finally strengthen the development of virtual reality technology in environmental art design, reform the essence and methods of environmental art design, and fully express the potential of environmental art design. However, virtual reality technology still has shortcomings as an imperfect technology, and there is a lot of room for development in the field of design. We should improve the demand for interpersonal and human activities, implement the idea of serving the people throughout the whole design process, and put people's spiritual and material needs, that is, people's needs for the environment, in an important position. Given the complexity of the specific design process, designers should put customers' feelings first.
4. Design, Development, and Evaluation of a Virtual Reality Game-Based Application to Support Computational Thinking
Computational thinking (CT) has become an essential skill nowadays. For young students, CT competency is required to prepare them for future jobs, and it can facilitate students' understanding of programming knowledge, which has been a challenge for many novices pursuing a computer science degree. This study focuses on designing and implementing a virtual reality (VR) game-based application (iThinkSmart) to support CT knowledge. The study followed the design science research methodology to design, implement, and evaluate the first prototype of the VR application. An initial evaluation of the prototype was conducted with 47 computer science students from a Nigerian university who voluntarily participated in an experimental process. To determine what works and what needs to be improved in the iThinkSmart VR game-based application, two groups were randomly formed, consisting of the experimental and the control groups respectively. Our findings suggest that VR increases motivation and therefore increases students' CT skills, which contributes knowledge regarding the affordances of VR in education and particularly provides evidence on the use of visualization of CT concepts to facilitate programming education. Furthermore, the study revealed that immersion, interaction, and engagement in a VR educational application can promote students' CT competency in higher education institutions (HEIs). In addition, it was shown that students who played the iThinkSmart VR game-based application gained higher cognitive benefits, increased interest, and a better attitude toward learning CT concepts. Although further investigation is required to gain more insights into students' learning processes, this study made significant contributions in positioning CT in the HEI context and provides empirical evidence regarding the use of educational VR mini-games to support students' learning achievements.

5. Research on landscape design system based on 3D virtual reality and image processing technology

With the continuous development of building technology, designing corresponding landscapes for people in the community has become a new choice. The garden landscape is the soul of architectural design and directly represents its grade and quality. Generally, garden landscapes can be divided into two categories: scenes perceived through experience, and scenes displayed directly. Usually, in the process of landscape design, traditional methods have been unable to present the design results directly to people. In architectural design, 3D virtual imaging technology is usually used to show the design drawings concretely. With this technology, graphic design drawings are displayed in three-dimensional form, so that the design results are close to the object itself. Aiming at the problems of low image resolution and the single system structure of traditional imaging systems, this paper designs a 3D virtual imaging system for landscape design.
17 Mrs. G.Mrunalini

1. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications.
2. LargeEA: Aligning Entities for Large-scale Knowledge Graphs

Entity alignment (EA) aims to find equivalent entities in different knowledge graphs (KGs). Current EA approaches suffer from scalability issues, limiting their usage in real-world EA scenarios. To tackle this challenge, we propose LargeEA to align entities between large-scale KGs. LargeEA consists of two channels, i.e., a structure channel and a name channel. For the structure channel, we present METIS-CPS, a memory-saving mini-batch generation strategy, to partition large KGs into smaller mini-batches. LargeEA, designed as a general tool, can adopt any existing EA approach to learn entities' structural features within each mini-batch independently. For the name channel, we first introduce NFF, a name feature fusion method, to capture rich name features of entities without involving any complex training process; we then exploit name-based data augmentation to generate seed alignments without any human intervention.
3. Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity

In deep learning, models typically reuse the same parameters for all inputs. Mixture-of-Experts (MoE) models defy this and instead select different parameters for each incoming example. The result is a sparsely activated model, with an outrageous number of parameters but a constant computational cost. However, despite several notable successes of MoE, widespread adoption has been hindered by complexity, communication costs, and training instability. We address these with the introduction of the Switch Transformer. We simplify the MoE routing algorithm and design intuitive improved models with reduced communication and computational costs. Our proposed training techniques mitigate the instabilities, and we show large sparse models may be trained, for the first time, with lower-precision (bfloat16) formats. We design models based off T5-Base and T5-Large (Raffel et al., 2019) to obtain up to 7x increases in pre-training speed with the same computational resources.
4. A systematic review of the applications of artificial intelligence and machine learning in autoimmune diseases

Autoimmune diseases are chronic, multifactorial conditions. Through machine learning (ML), a branch of the wider field of artificial intelligence, it is possible to extract patterns within patient data and exploit these patterns to predict patient outcomes for improved clinical management. Here, we surveyed the use of ML methods to address clinical problems in autoimmune disease. A systematic review was conducted using the MEDLINE, Embase, and Computers and Applied Sciences Complete databases. Relevant papers included "machine learning" or "artificial intelligence" and the autoimmune disease search term(s) in their title, abstract, or keywords. Exclusion criteria: studies not written in English, no real human patient data included, publication prior to 2001, studies that were not peer reviewed, non-autoimmune disease comorbidity research, and review papers. 169 (of 702) studies met the criteria for inclusion. Support vector machines and random forests were the most popular ML methods used. ML models using data on multiple sclerosis, rheumatoid arthritis, and inflammatory bowel disease were most common. A small proportion of studies (7.7%, or 13/169) combined different data types in the modeling process.
5. Skin Disease Detection using Convolutional Neural Network

Dermatology remains one of the most uncertain and complicated branches of science because of the sheer number of diseases that affect the skin and the uncertainty surrounding their diagnosis. The variation in these diseases arises from many environmental, geographical, and genetic factors, and the human skin is itself one of the most uncertain and troublesome terrains, particularly due to the presence of hair, deviations in tone, and other similar mitigating factors. Skin disease diagnosis at present involves a series of pathological laboratory tests for the identification of the correct disease, and among skin diseases, cancers are some of the worst. Skin cancers can prove fatal, particularly if not treated at an initial stage. The Convolutional Neural Network system proposed in this paper aims at identifying seven skin lesion classes: Melanocytic nevi, Melanoma, Benign keratosis-like lesions, Basal cell carcinoma, Actinic keratoses, Vascular lesions, and Dermatofibroma. The dataset used is "Skin Cancer MNIST: HAM10000", obtained from Kaggle. It has a disproportionate number of images for each disease class; some classes have well over a thousand while others have a few hundred.
1. Liver Cirrhosis Prediction System using Random Forest

The liver is the cleaning and detoxification mechanism of our body. If there is any problem with our liver, our body cannot properly dispose of its wastes, which can lead to several other problems. Liver diseases are responsible for around 2% of the world's deaths. Detecting liver cirrhosis in its early stages is very important to prevent adverse effects in the future, and early diagnosis helps in preventing deaths. We have developed a Liver Cirrhosis Prediction System using Random Forest to help medical professionals detect liver disease early and help reduce its rates. Using this system, the medical professional inputs various liver-function measurements, and the random forest algorithm predicts whether the person suffers from liver cirrhosis or not.
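The prediction step described above can be sketched with scikit-learn's RandomForestClassifier. This is a minimal illustration only: the three feature names and the synthetic training data are placeholders, not the project's real liver-function dataset.

```python
# Minimal sketch of a random-forest cirrhosis predictor.
# Features and labels below are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Hypothetical liver-function measurements: bilirubin, albumin, ALT
X = rng.normal(size=(200, 3))
# Synthetic rule standing in for real cirrhosis labels
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# A medical professional would enter one patient's measurements here
patient = [[1.2, -0.8, 0.3]]
print("cirrhosis" if model.predict(patient)[0] == 1 else "no cirrhosis")
```

In a real system the features would come from standard liver-function tests and the model would be validated on held-out patient data before clinical use.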
18 Mr. G.Venkateswarlu

2. Multiple Disease Prediction System using Machine Learning

When anyone is afflicted with an illness, they must see a doctor, which is both time-consuming and expensive. It can also be difficult for users who are out of reach of doctors and hospitals, because the illness then cannot be detected. If this procedure can instead be done using automated software, it saves time and money and makes the process smoother for the patient. Keeping this in mind, we have developed our Multiple Disease Prediction System using Machine Learning. It is a web-based program that predicts a user's disease based on the symptoms they have. It enables end users to predict chronic diseases without having to visit a physician for a diagnosis. The aim is to identify various diseases by observing the symptoms of patients and applying various machine learning techniques.
3. Stroke Prediction System using Linear Regression

A stroke is defined as an acute neurological disorder of the blood vessels in the brain that occurs when the blood supply to an area of the brain stops and the brain cells are deprived of the necessary oxygen. According to the World Stroke Organization, 13 million people suffer a stroke each year, and approximately 5.5 million of them die as a result. Stroke is the leading cause of death and disability worldwide, and its imprint is serious in all aspects of life. It affects not only the patient but also the patient's social environment, family, and workplace. In addition, contrary to popular belief, it can happen to anyone, at any age, regardless of gender or physical condition. To help save the life of someone who might be at risk of stroke, we have designed a Stroke Prediction System using Linear Regression. The system is implemented on a web platform in order to reach as many individuals as possible. The development of this ML model could aid in the early detection of stroke and the subsequent mitigation of its severe consequences.
4. Yoga Pose Detection using OpenPose

Yoga, a centuries-old practice originally from India, is globally famous for its numerous spiritual, physical, and mental benefits, and it involves complex postures. As with any other exercise, it is critical to practice yoga correctly, because any incorrect position during a session can be ineffective and potentially harmful. This necessitates the presence of a trainer to supervise the session and correct the individual's stance. Since not every user has access to a trainer, an artificial-intelligence-based application can be used to detect yoga poses and provide customized feedback to help people improve their form. Our Yoga Pose Detection System is designed and developed to recognize yoga stances and respond with customized feedback to help users improve their postures. The system detects the following poses: Chair, Cobra, Dog, Shoulder Stand, Triangle, Tree, Warrior, and No Pose.
5. Topic Detection by Clustering Keywords

The goal is to find the prominent topics in a collection of documents. We propose a system that detects topics using a topic model, a type of statistical model for discovering the topics that occur in a collection of documents. One would expect particular words to appear in a document more or less frequently: "dog" and "bone" will appear more often in documents about dogs, "cat" and "meow" will appear in documents about cats, and "the" and "is" will appear equally in both. A document typically concerns multiple topics in different proportions; thus, in a document that is 10% about cats and 90% about dogs, there would probably be about nine times more dog words than cat words. Our proposed system captures this intuition in a mathematical framework: it extracts keywords that occur often and clusters them with a clustering algorithm in order to discover the topics of a particular set of documents. The system takes the co-occurrence of terms into account, which improves the results. It can be useful for web crawlers and for web users: when a user searches for a particular topic, the system extracts keywords from the documents matching the topic name, clusters them, and provides topic-related information, so users quickly get information on the topic they are searching for.
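The extract-then-cluster pipeline described above can be sketched in plain Python: pull frequent terms from each document, then merge keyword sets that co-occur. The stop-word list, the top-k cutoff, and the single-link merge rule are all illustrative choices, not the project's fixed design.

```python
# Toy keyword extraction + co-occurrence clustering.
from collections import Counter

STOP = {"the", "is", "a", "and", "in", "of", "to"}

def keywords(doc, k=3):
    """Return the k most frequent non-stop-words of a document."""
    words = [w for w in doc.lower().split() if w not in STOP]
    return [w for w, _ in Counter(words).most_common(k)]

def cluster_keywords(docs):
    """Merge keyword sets that share at least one term (single-link)."""
    clusters = []
    for doc in docs:
        kws = set(keywords(doc))
        overlapping = [c for c in clusters if c & kws]
        for c in overlapping:          # fold overlapping clusters in
            clusters.remove(c)
            kws |= c
        clusters.append(kws)
    return clusters

docs = [
    "the dog chased a bone and the dog barked",
    "dog bone dog play",
    "cat meow cat purr",
]
print(cluster_keywords(docs))  # two clusters: a dog topic and a cat topic
```

A production system would replace raw term frequency with TF-IDF and the greedy merge with a proper clustering algorithm such as k-means, but the topic-grouping intuition is the same.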
19 Mrs. G.Sumangala

1. Heart Disease Prediction using Machine Learning Algorithms

Preventing heart disease has become more necessary than ever. Good data-driven systems for predicting heart disease can improve the entire research and prevention process, making sure that more people can live healthy lives; this is where machine learning comes into play. Machine learning helps in predicting heart disease, and the predictions made are quite accurate. The project involves analysis of a heart disease patient dataset with proper data processing. Different models are then trained and predictions made with different algorithms: KNN, Decision Tree, Random Forest, SVM, Logistic Regression, etc. It builds on the Jupyter notebook code and dataset used for the Kaggle kernel "Binary Classification with Sklearn and Keras", applying a variety of machine learning algorithms, implemented in Python, to predict the presence of heart disease in a patient. This is a classification problem, with a variety of parameters as input features and a binary target variable indicating whether heart disease is present or not.
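The multi-model comparison described above can be sketched as a loop over scikit-learn classifiers. The synthetic features and binary target below are stand-ins for the real heart disease dataset, and the model set is a subset of the algorithms listed.

```python
# Train several classifiers on the same binary task and compare accuracy.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 5))             # stand-in clinical features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # stand-in binary target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "KNN": KNeighborsClassifier(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Logistic Regression": LogisticRegression(),
}
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
          for name, m in models.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.2f}")
```

On the real dataset the same loop applies unchanged; only the data loading and feature preprocessing differ.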
2. Predict Taxi Fares with Random Forests using ML

In the Predict Taxi Fares project, you will predict the locations and times that earn the biggest fares using the New York taxi dataset. You use tidyverse for data processing and visualization. To predict location and time, you will experiment with tree-based models such as Decision Trees and Random Forests. The Predict Taxi Fares project is a guided project, but you can replicate the result on a different dataset, such as Seoul's Bike Sharing Demand. Working on a completely new dataset will help you with code debugging and improve your problem-solving skills.
3. Stock Price Prediction using AI

A stock market is a public market where you can buy and sell shares of publicly listed companies. The stocks, also known as equities, represent ownership in the company, and the stock exchange is the mediator that allows the buying and selling of shares. Stock price prediction using machine learning algorithms helps you discover the future value of company stock and other financial assets traded on an exchange. The entire idea of predicting stock prices is to gain significant profits. Predicting how the stock market will perform is a hard task: physical and psychological factors, rational and irrational behavior, and other influences all combine to make share prices dynamic and volatile, which makes it very difficult to predict stock prices with high accuracy.
4. Face Detection System using Deep Learning

Develop a deep learning model that scans images and identifies people's faces, matching them against face data in a database; when there is a match, the system displays the name of the identified individual. Convolutional neural networks (CNNs) let the system process visual data and detect faces accurately. You can improve the accuracy of your project with deep learning techniques like the Single Shot MultiBox Detector (SSD) or You Only Look Once (YOLO). Face recognition is a subset of object recognition, where the focus is on detecting instances of semantic objects. The applications of this project are wide-ranging, from facial recognition for security purposes to sentiment analysis, so it is used in various industries like marketing, healthcare, e-commerce, social media, etc.
5. Animal Species Prediction using Artificial Intelligence

In machine learning and computer vision, predicting animal species means creating an AI system that recognizes an animal's species from an image. The aim is to build a model that can reliably categorize animal species using visual characteristics such as shape, color, and texture. Because it involves dealing with a vast and diverse range of animals with varying physical characteristics, predicting animal species is difficult; however, recent developments in deep learning and computer vision have made significant advances possible in this field.

20 Mrs A. Amara jyothi (Requirements: Python)

1. Sentiment classification with emotion labels
Consider textual documents and classify each document into its corresponding emotion using language models.

2. Recommending serendipitous items to users
Consider users' historical data on e-commerce websites and suggest items by collaborative filtering. The suggested items should be interesting to the users, and users should feel a thrill on seeing them.

3. Conversational recommender system
The proposed approach should be able to identify and capture user interests during conversations with users (chatbot conversations) and make relevant suggestions to them.

4. Graph-based recommender systems
User interests are captured and represented in graph notation. Interrelationships are identified through the graphs to predict and recommend relevant items to new users.

5. Community detection in social media
The proposed approach should be able to identify groups with similar interests.
21 Mr. D Siva Raja Kumar

1. Novel architectures for generative adversarial networks (GANs) and variational autoencoders (VAEs)
Novel architectures for Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) have been a hotbed of research in the field of deep learning. These architectures aim to enhance the generation capabilities, stability, and interpretability of these generative models.

2. Building machine learning models to predict equipment failures and maintenance needs in manufacturing processes
Building machine learning models to predict equipment failures and maintenance needs in manufacturing processes is crucial for minimizing downtime, reducing maintenance costs, and maximizing overall operational efficiency.

3. Using machine learning models to predict traffic congestion and optimize traffic flow in urban areas
Using machine learning models to predict traffic congestion and optimize traffic flow in urban areas is a promising approach to alleviate congestion, reduce travel times, and improve overall transportation efficiency.

4. Incorporating real-time data from traffic sensors, GPS devices, and other sources to improve prediction accuracy
Incorporating real-time data from traffic sensors, GPS devices, and other sources is crucial for enhancing prediction accuracy in traffic management and congestion forecasting.

5. Developing predictive models to forecast energy consumption in residential, commercial, or industrial settings
Developing predictive models to forecast energy consumption in residential, commercial, or industrial settings is crucial for efficient energy management, cost reduction, and sustainability efforts.
22 Mrs. M.Soujanya

1. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodologies, and practices. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more.
2. Language Models are Unsupervised Multitask Learners

Major studies have suggested that around 20% of all road accidents are fatigue-related. Drowsy driving can be extremely dangerous: many road accidents are related to the driver falling asleep while driving and subsequently losing control of the vehicle. However, initial signs of fatigue and drowsiness can be detected before a critical situation arises. Driver drowsiness detection is a car safety technology that helps prevent accidents caused by the driver getting drowsy. In this project, we aim to design and develop driver drowsiness detection using image processing: we detect the driver's eyes and measure how long they stay closed. If the eyes are closed for more than 20 seconds, a speaker included in the system sounds an alert, waking the driver and preventing an accident.
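The eyes-closed timing rule described in this project can be sketched as a simple frame counter. The 20-second threshold comes from the project description; the 30 fps camera rate and the per-frame boolean eye state (supplied by an assumed upstream eye detector) are illustrative assumptions.

```python
# Alarm logic: fire once the eyes have been closed for ALERT_SECONDS.
FPS = 30                        # assumed camera frame rate
ALERT_SECONDS = 20              # closed-eye duration that triggers the alarm
ALERT_FRAMES = FPS * ALERT_SECONDS

def drowsiness_monitor(eye_states):
    """Yield True for each frame at which the alarm should sound."""
    closed_run = 0
    for eyes_closed in eye_states:
        closed_run = closed_run + 1 if eyes_closed else 0
        yield closed_run >= ALERT_FRAMES

# Example: eyes open for 10 frames, then closed for 21 seconds of frames
frames = [False] * 10 + [True] * (FPS * 21)
alerts = list(drowsiness_monitor(frames))
print(any(alerts))  # True: the 20-second threshold is crossed
```

In the full system, `eye_states` would be produced per frame by an eye detector (e.g., a facial-landmark model), and a True alert would drive the speaker.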
3. Signature Verification using Convolutional Neural Network

Signatures are widely used to validate the authentication of an individual, yet a robust method that can correctly certify the authenticity of a signature is still awaited. The proposed solution in this paper helps individuals determine whether a signature is forged or genuine. In our system, we aim to automate the process of signature verification using Convolutional Neural Networks. Our model is constructed on top of a pre-trained Convolutional Neural Network, VGG-19. We evaluated our model on widely accredited signature datasets with a multitude of genuine signature samples sourced from ICDAR [3], CEDAR [1] and Kaggle [2], achieving accuracies of 100%, 88%, and 94.44% respectively. Our analysis shows that the proposed model can classify forged signatures correctly when they do not closely resemble the genuine signature.
4. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter

As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP), operating these large models on the edge and/or under constrained computational training or inference budgets remains challenging. In this work, we propose a method to pre-train a smaller general-purpose language representation model, called DistilBERT, which can then be fine-tuned with good performance on a wide range of tasks like its larger counterparts. While most prior work investigated the use of distillation for building task-specific models, we leverage knowledge distillation during the pre-training phase and show that it is possible to reduce the size of a BERT model by 40%, while retaining 97% of its language understanding capabilities and being 60% faster. To leverage the inductive biases learned by larger models during pre-training, we introduce a triple loss combining language modeling, distillation and cosine-distance losses. Our smaller, faster and lighter model is cheaper to pre-train, and we demonstrate its capabilities for on-device computations in a proof-of-concept experiment and a comparative on-device study.
5. Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes

Deploying large language models (LLMs) is challenging because they are memory-inefficient and compute-intensive for practical applications. In reaction, researchers train smaller task-specific models by either fine-tuning with human labels or distilling using LLM-generated labels. However, fine-tuning and distillation require large amounts of training data to achieve comparable performance to LLMs. We introduce Distilling step-by-step, a new mechanism that (a) trains smaller models that outperform LLMs, and (b) achieves this by leveraging less training data than needed by fine-tuning or distillation. Our method extracts LLM rationales as additional supervision for small models within a multi-task training framework. We present three findings across 4 NLP benchmarks. First, compared to both fine-tuning and distillation, our mechanism achieves better performance with much fewer labeled/unlabeled training examples. Second, compared to LLMs, we achieve better performance using substantially smaller model sizes. Third, we reduce both the model size and the amount of data required to outperform LLMs; our 770M T5 model outperforms the 540B PaLM model using only 80% of available data on a benchmark task.
1. Cartoonify Image with Machine Learning

Transform images into cartoons. The objective of this machine learning project is to CARTOONIFY images: we will build a Python application that transforms an image into its cartoon using machine learning libraries.
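As a dependency-light illustration of the idea, the sketch below approximates the cartoon effect with NumPy alone: flatten colors into a few levels and darken strong edges. Real implementations typically use OpenCV (bilateral filtering plus adaptive thresholding); every parameter value here is illustrative.

```python
# Toy cartoon effect: color quantization + dark edge outlines.
import numpy as np

def cartoonify(img, levels=4):
    """img: HxWx3 uint8 array -> cartoon-style uint8 array."""
    step = 256 // levels
    quant = (img // step) * step + step // 2       # flatten color regions
    gray = img.mean(axis=2)
    # crude edge map from horizontal/vertical intensity gradients
    gx = np.abs(np.diff(gray, axis=1, prepend=gray[:, :1]))
    gy = np.abs(np.diff(gray, axis=0, prepend=gray[:1, :]))
    edges = (gx + gy) > 40
    out = quant.copy()
    out[edges] = 0                                  # draw dark outlines
    return out.astype(np.uint8)

demo = np.random.default_rng(0).integers(0, 256, (8, 8, 3)).astype(np.uint8)
print(cartoonify(demo).shape)  # (8, 8, 3)
```

With OpenCV, the quantization step is usually replaced by repeated `cv2.bilateralFilter` passes, which smooth colors while keeping edges sharp.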
2. Loan Prediction using Machine Learning

The idea behind this ML project is to build a model that classifies how much loan a user can take, based on the user's marital status, education, number of dependents, and employment. You can build a linear model for this project.

3. Text Summarization Tool

This project focuses on implementing machine learning techniques for text summarization, which makes reading easy. It summarizes long texts, helps people understand information fast, and saves time. The summary should be fluent and concise throughout.
23 Mrs. N Swaroopa

4. Customer Segmentation using Machine Learning

Customer segmentation is a technique in which we divide customers based on their purchase history, gender, age, interests, etc. This information helps a store personalize marketing and provide customers with relevant deals. With the help of this project, companies can run user-specific campaigns and provide user-specific offers rather than broadcasting the same offer to all users.
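A common way to realize this segmentation is k-means clustering. The sketch below assumes each customer is described by just (age, annual spend); the toy data and the choice of two segments are made up for demonstration.

```python
# Segment customers into groups with k-means.
import numpy as np
from sklearn.cluster import KMeans

customers = np.array([
    [22, 200], [25, 250], [23, 220],    # young, low spend
    [45, 900], [48, 950], [50, 880],    # older, high spend
], dtype=float)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
labels = kmeans.labels_
print(labels)  # two distinct segment labels
```

In practice, features with different units should be standardized (e.g., with `StandardScaler`) before clustering, and the number of segments chosen with a heuristic such as the elbow method.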
5. Driver Drowsiness Detection using Python

This is a Python project which enables us to detect the drowsiness of a driver while he/she is driving a vehicle. The driver's expressions are detected and compared against the dataset to give the desired output on a particular scale. Many drivers feel lazy or sleepy at times, which can lead to fatal accidents. To reduce these accidents, a system should be developed which can identify the expressions of the driver and alert the person in advance. This could save a lot of lives, and this project will be helpful in that case.
24
25 Mr. A Laxminaranya

1. Enhancing Robustness of Deep Learning Models against Adversarial Attacks
Enhancing the robustness of deep learning models against adversarial attacks is a critical area of research in machine learning security. Adversarial attacks involve deliberately perturbing input data in a way that is imperceptible to humans but can significantly mislead a model's predictions.

2. Interpretable Deep Neural Networks for Medical Image Diagnosis
Interpretable deep neural networks (DNNs) for medical image diagnosis aim to provide not only accurate predictions but also insights into the decision-making process of the model, which is crucial for gaining trust from clinicians and ensuring patient safety.

3. Privacy-Preserving Federated Learning for Healthcare Data
Privacy-preserving federated learning for healthcare data addresses the challenge of leveraging data from multiple institutions or individuals while preserving the privacy and confidentiality of sensitive medical information.

4. Continual Learning in Resource-Constrained Environments
Continual learning in resource-constrained environments refers to the challenge of continuously learning from a stream of data while operating under limitations in computational resources, memory, or energy. This scenario is common in edge computing, Internet of Things (IoT) devices, and other systems where resources are limited.

5. Graph Neural Networks for Social Influence Prediction
Graph Neural Networks (GNNs) for social influence prediction leverage the inherent structure of social networks to model the influence propagation process and predict the influence of individuals within the network.
26 Ms B.Divyasri

1. Neurosymbolic Reasoning for Explainable AI
Neurosymbolic reasoning for explainable AI combines the strengths of neural networks and symbolic reasoning to enhance the interpretability and transparency of AI systems.

2. Self-Supervised Learning for Natural Language Understanding
Self-supervised learning for natural language understanding (NLU) is an approach where a model learns to understand language from unlabeled data by generating supervisory signals automatically from the data itself.

3. Deep Reinforcement Learning for Autonomous Driving
Deep reinforcement learning (DRL) for autonomous driving involves training artificial agents to navigate and control vehicles in complex environments by interacting with the environment and learning from feedback signals.

4. Fairness-Aware Machine Learning for Loan Approval Systems
Fairness-aware machine learning for loan approval systems addresses the challenge of ensuring that automated loan approval systems make decisions that are fair and unbiased across different demographic groups.

5. Multi-Modal Fusion for Video Captioning and Understanding
Multi-modal fusion for video captioning and understanding involves integrating information from multiple modalities (such as visual, auditory, and textual) to generate accurate and comprehensive captions that describe the content of videos.
