ISSN- 2394-5125 VOL 7, ISSUE 19, 2020

REVIEW AND COMPARISON OF VARIOUS TECHNOLOGIES FOR PREDICTING STUDENTS’ ACADEMIC PERFORMANCE
Asish Bhandari 1, Himanshu Thapliyal 2, Shailesh Singh Panwar 3, Y. P. Raiwani 4
1 M.Tech, Department of C.S.E., H.N.B.G. (Central) University, Srinagar Garhwal, India
2 M.Tech, Department of C.S.E., H.N.B.G. (Central) University, Srinagar Garhwal, India
3 Research Scholar, Department of C.S.E., H.N.B.G. (Central) University, Srinagar Garhwal, India
4 Professor, Department of C.S.E., H.N.B.G. (Central) University, Srinagar Garhwal, India

Emails: 1 asish.bhandari14@gmail.com, 2 himu.thapliyal@gmail.com, 3 shaileshpanwar23@gmail.com, 4 yp_raiwani@yahoo.com

Received: 14 March 2020; Revised and Accepted: 8 July 2020

ABSTRACT: The world around us is facing a technological revolution with the dawn of the digital age, and we are in some ways compelled to rethink our education system and its components. With the tools and techniques available to us today, we must reconsider how to use them to improve our education system. The purpose of this paper is to outline the data mining techniques used to predict student outcomes. Educational data mining techniques can help improve students' achievement and performance more effectively, which will benefit students, educators, and academic institutions.

KEYWORDS: Educational Data Mining (EDM), Predicting Students’ Academic Performance

I. INTRODUCTION
Owing to the tremendous growth of data related to students' academic success, innovation in the educational sector has grown rapidly in recent decades. Data mining techniques can be used to uncover useful and relevant information from large volumes of data. Evaluating the learning process of students is a very difficult activity. Data mining, as an analytical tool, offers essential information and expertise that can help enhance the decision-making process. It helps identify at-risk students during the course period rather than at the end of the course.

Students, teachers, administrators, and academic planners can all benefit from applying data mining to higher education data. Predicting the outcome of the learning process may help an institution improve the results of a new group of students by altering the variables that shaped past performance. Xindong Wu et al. [1] explicitly review the prevailing algorithms in the field of data mining, examine their effectiveness in various domains, and identify potential directions for study. EDM principally investigates better approaches for extracting knowledge from educational data to assess the effectiveness of learning systems [3]. Predicting and assessing student academic performance is fundamental to student academic achievement and is an overwhelming undertaking because of the many variables that influence student performance, for example, family background, psychological profile, past schooling, prior performance, and student contact with their peers and teachers [2].

This technique would assist in forecasting the performance of students by assessing their academic record, for example, internal semester assessments, submission of assignments, and level of attendance. The methods discussed in this paper are clustering, classification, and a few others that apply data mining strategies to specific requirements. A systematic review of the related work is presented.

1. Prediction factors
The two factors important in predicting students' results are the attributes and the methods of prediction. Table 1 shows general student attributes, and Tables 2, 3, and 4 describe previous related work on EDM using Decision Trees, Naive Bayes, and Neural Networks by various research scholars, along with their results. Section 1.1 covers the important student characteristics used by the various researchers to estimate student academic performance, and Section 1.2 covers the prediction methods used to predict the academic performance of the student.

1.1. Important contributing attributes

This section describes the different attributes used for predicting the academic achievement of students in various studies. They include the current academic result, previous accomplishments, general factors, subjects, finance, and so on. Several studies have found that the most significant factors for predicting student achievement are attendance marks and cumulative grade point average (CGPA). These variables are critical because, in higher education, these marks are an integral factor.

The authors in [7] analyzed data of students from the University of Pune pursuing a postgraduate degree in Computer Application (MCA). They obtained a dataset of 60 MCA students and found different association rules between attributes.

In [11] the authors addressed three models: Decision Tree (DT), Artificial Neural Network (ANN), and Linear Regression, using the data mining tool SAS Enterprise Miner to determine students' cumulative grade point average (CGPA) upon graduation.

Ahmed Mueen, Bassam Zafar, and Umar Manzoor [9] analyzed a dataset to classify the factors that cause students to lose their academic status because of poor academic results. The students' low achievement was attributed to a lack of involvement in the online platform.

In [8] the authors studied students of a university in Nigeria to predict their grades. The Decision Tree algorithm used by the authors was ID3 (Iterative Dichotomiser 3). The model achieved an accuracy of 79.5 percent.

The study [6] indicated that the achievements of students in academic institutions may be influenced by specific aspects such as financial condition, level of motivation, gender, and the result obtained at the previous (pre-requisite) institution.

The author in [10] used the decision tree technique. To predict the end-semester results, the author used different attributes such as attendance percentage, internal exam marks, seminar marks, and various assignments given throughout the year.

Table 1. General attributes of students

Factors / attributes | Description and subdivision
Academic performance | Grade, CGPA, internal assessment, practicals
Academic classification | Attendance, participation in extracurricular activities (quizzes, interviews, etc.), university performance
Common factors | Date of birth, type of program, subject interest, level of motivation, university facilities
Forum (online courses) | Forum login time, logout time, messages written, forum replies, total number of words written, time spent
Gender | Male, female, etc.
Financing | Scholarships, financial strength, etc.

1.2. Methods / models

The aim of data mining is to extract unknown and unexpected information. Typically, data mining involves numerous methods and procedures to find interesting facts and patterns in the available data sets.


Several methods are used to construct predictive models, including classification, regression, and categorization. In our analysis, we found that classification algorithms are the most commonly used for prediction. Specific data mining methods such as k-Nearest Neighbour, Bayesian classification, and Decision Trees were used to find patterns and predict the academic success of students.

1.2.1 Decision Tree: This is one of the most common classification and prediction techniques. It is a type of supervised learning and can be used for both classification and regression. It forms a tree-shaped structure in which nodes are tests on an attribute, branches are the possible choices, and leaves are outcomes representing sets of decisions. A root attribute is selected, and the procedure starts from the root of the tree, assigning a label to a record by comparing the record's attribute values against the test at each node. Subsequent comparisons at each node produce the branches that form the Decision Tree. Classification of a dataset is thus based on a set of decision-generating rules. ID3 and C4.5 are decision tree induction algorithms developed by the researcher Ross Quinlan.
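
As an illustration of the approach described above, the following is a minimal sketch, not taken from any of the reviewed papers (which use tools such as Weka and SAS Enterprise Miner), of training a decision tree on hypothetical student attributes using Python's scikit-learn. The attribute names, data, and pass/fail labels are assumptions for illustration only.

```python
# Minimal sketch: decision tree on hypothetical student records (illustrative only).
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical attributes: [attendance %, internal marks, CGPA]
X = [
    [85, 78, 8.2],
    [60, 45, 5.1],
    [92, 88, 9.0],
    [55, 40, 4.8],
    [75, 65, 7.0],
    [48, 35, 4.2],
]
y = ["pass", "fail", "pass", "fail", "pass", "fail"]  # hypothetical outcomes

# criterion="entropy" selects splits by information gain, as in ID3/C4.5
clf = DecisionTreeClassifier(criterion="entropy", max_depth=3, random_state=0)
clf.fit(X, y)

# The learned tree can be inspected as IF-THEN style rules
print(export_text(clf, feature_names=["attendance", "internal_marks", "cgpa"]))
print(clf.predict([[70, 60, 6.5]]))  # predicted outcome for a new student
```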

Solomon, Kolo, and John in their study [6] used the CHAID approach. The score is the dependent variable, meaning the analysis is concerned with estimating a student's score from past scores; the score attribute indicated a higher benefit for the tree's root node than the other nodes. The research shows that aspects such as financial condition, motivation, gender, and the academic result obtained in previous courses can affect students' performance in an institution.

The authors in [9] used the C4.5 algorithm, a widely used data mining analysis algorithm, with Weka 3.6 as the data mining tool. They obtained an accuracy of 79.2 percent using all attributes and 80.5 percent using the best attributes. Data of undergraduate students enrolled in the Fundamental Programming and Advanced Operating System courses from August 2014 to May 2015 were taken for review.

In the paper [10] the authors used the ID3 (Decision Tree) algorithm on a data set of 50 students. They used information gain to determine the split and the gain ratio of each attribute to generate different IF-THEN rules, and found that PSM (previous semester marks) showed the highest gain, so this attribute was selected as the root node.
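
The root-selection step used in [10] can be made concrete with a short, self-contained sketch of the information-gain computation that ID3 relies on; the attribute values ("good"/"poor" PSM) and labels below are hypothetical, not the data of that study.

```python
# Minimal sketch of ID3-style information gain (hypothetical data, illustrative only).
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in Counter(labels).values())

def information_gain(attribute_values, labels):
    """Entropy of the labels minus the weighted entropy after splitting on the attribute."""
    total = len(labels)
    split_entropy = 0.0
    for value in set(attribute_values):
        subset = [lab for val, lab in zip(attribute_values, labels) if val == value]
        split_entropy += (len(subset) / total) * entropy(subset)
    return entropy(labels) - split_entropy

# Hypothetical example: previous semester result (PSM) vs. pass/fail label
psm    = ["good", "good", "poor", "poor", "good", "poor"]
result = ["pass", "pass", "fail", "fail", "pass", "pass"]
print(information_gain(psm, result))  # the attribute with the highest gain becomes the root
```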

Zaidah Ibrahim and Daliela Rusli in their research [11] used SAS Enterprise Miner to estimate students' CGPA upon graduation. The researchers built both a 2-way and a 4-way split tree, tested both, and found that the tree structure is identical for the two categories, with a Root Average Squared Error (RASE) of 0.1769 for both trees.

Mr. Quadri and Dr. Kalyankar in their paper [13] addressed the skills and strengths of data mining technology in finding students who drop out. They used the Weka program and, for prediction, the J4.8 algorithm, an improved and later version of C4.5. To measure the dropouts and the influence of each risk factor, they combined decision tree techniques, for defining the factors driving dropouts, with logistic regression.

In their analysis [14], Dr. Vijayalakshmi and Anupama applied C4.5 to a data set of students' internal grades to predict their success in the annual examination, compared it with the ID3 algorithm, and found that C4.5 had better precision.

The algorithm used for classification in the paper [15] is J48 (the Java implementation of C4.5). The model's accuracy is 60.46%. The high school common entrance test was found to be the most significant attribute in predicting student success. The authors used the WEKA tool to build the model.

1.2.2 Naive Bayes: This classifier is based on the application of Bayes' theorem. It is a probabilistic classifier built on the assumption of strong independence among the features; the presumption is that the predictors are independent, that is, the presence of one particular trait does not affect another, hence the name "naive". Naive Bayes can be used for classification by using density-based approximations, classifying data based on prior information. The classifier is fast and simple to apply, but its greatest drawback is that the features must be independent of each other. In reality, the predictors are often dependent, which limits the effectiveness of the classifier.
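
A minimal sketch of how such a classifier might be applied to student data follows, assuming numeric attributes and a pass/fail label; the feature names, the generated data, and the use of scikit-learn's Gaussian Naive Bayes are illustrative assumptions, combined with the kind of 10-fold cross-validation reported in several of the reviewed papers.

```python
# Minimal sketch: Gaussian Naive Bayes with 10-fold cross-validation (illustrative only).
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical features: [attendance %, internal marks, CGPA]; label: 1 = pass, 0 = fail
X = rng.normal(loc=[70, 60, 6.5], scale=[15, 20, 1.5], size=(200, 3))
y = (X[:, 0] * 0.3 + X[:, 1] * 0.5 + X[:, 2] * 5 > 84).astype(int)

model = GaussianNB()                     # assumes conditional independence of the features
scores = cross_val_score(model, X, y, cv=10)  # 10-fold cross-validation
print(f"mean accuracy: {scores.mean():.3f}")
```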

In their study [9], Ahmed Mueen, Bassam Zafar, and Umar Manzoor checked all 38 attributes using 10-fold cross-validation and obtained 86.0 percent accuracy, and 85.7 percent while using the best attributes. They compared it with a Decision Tree (C4.5) and a Multilayer Perceptron, and the Naive Bayes classifier outperformed the other two.


In [18] the author used the WEKA software package and applied the naive Bayes algorithm, comparing it with MLP and C4.5; the result indicated that the naive Bayes classifier outperformed the other two with 76.65 percent predictive accuracy. Success was assessed against the passing grade, and the impact of different variables such as attitude towards learning, socio-demographic factors, and entrance results was investigated.

In the study [19] the authors considered only the university's previous year results and no demographic or socio-economic features. They used RapidMiner for recursive feature elimination (RFE) with several criteria, weight by information gain (IG), weight by Gini index (GI), weight by chi-squared (Chi-SS), and weight by rule induction, and applied it to the data to obtain 75.65% accuracy without feature selection and 74.78% with feature selection.
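
As a rough illustration of that kind of attribute weighting, the sketch below ranks hypothetical features by chi-squared score and by information gain (mutual information) using scikit-learn; it is a stand-in under assumed data, not the authors' actual RapidMiner pipeline.

```python
# Minimal sketch: ranking attributes by chi-squared and information gain (illustrative only).
import numpy as np
from sklearn.feature_selection import chi2, mutual_info_classif

rng = np.random.default_rng(2)

# Hypothetical non-negative features: [first-year marks, second-year marks, attendance %]
X = rng.integers(low=0, high=100, size=(150, 3)).astype(float)
y = (X[:, 1] > 50).astype(int)  # hypothetical pass/fail label driven by second-year marks

chi_scores, _ = chi2(X, y)                              # weight by chi-squared
ig_scores = mutual_info_classif(X, y, random_state=2)   # weight by information gain
for name, c, ig in zip(["year1", "year2", "attendance"], chi_scores, ig_scores):
    print(f"{name}: chi2={c:.1f}, info_gain={ig:.3f}")
```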

Gray, McGuinness, and Owende in their study [21] used the RapidMiner tool and considered psychometric parameters in addition to the usual data such as students' academic results, grade point average, semester marks, and general features like age and gender. They obtained an accuracy of 76.51 percent using 10-fold cross-validation.

R. Sumitha and E.S. Vinoth Kumar in their study [22] used a different method to predict the outcome. The authors worked with the Weka tool, with CGPA, attendance, and 12th-standard marks as the main attributes, and reported a prediction accuracy of 85.92 percent for Naive Bayes, while the highest accuracy was 97.27 percent with J48. Analyzing and predicting students' academic outcomes helps them work on their weak topics and focus on boosting their performance.

1.2.3 Neural Network: This is a machine learning technique that is used for classification. It is mainly used to build complex functions that predict dynamically and mimic human-like thought. A neural network can be described as a large number of neurons connected as input-output units in a chain, with every connection associated with a weight. The network learns continuously by changing weights during the learning process, which allows it to predict the correct class for an input tuple. The key features of neural networks are self-learning, self-organizing, adaptation, tolerance to noise, and real-time operation. Back Propagation, Radial Basis Function networks, and the multilayer perceptron are some of the neural network algorithms.
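
To make the idea concrete, here is a minimal sketch of a multilayer perceptron trained by backpropagation on hypothetical student features; the use of scikit-learn, the feature names, the network size, and the generated data are assumptions for illustration, not the setup of any reviewed paper.

```python
# Minimal sketch: multilayer perceptron classifier (illustrative only).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Hypothetical features: [internal assessment, external assessment, attendance %]
X = rng.uniform(low=[0, 0, 40], high=[100, 100, 100], size=(300, 3))
y = (X[:, 0] + X[:, 1] > 100).astype(int)  # hypothetical pass/fail rule

# One hidden layer of 10 neurons; weights are adjusted by backpropagation during fit()
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(10,), activation="logistic",
                  max_iter=2000, random_state=1),
)
model.fit(X, y)
print(model.predict([[55, 60, 80]]))  # predicted class for a new student
```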

Zaidah Ibrahim and Daliela Rusli in their paper [11] used students' academic and demographic profiles as parameters for the model. The models were developed using SAS Enterprise Miner. To find the smallest RASE (Root Average Squared Error), the authors tried different ANN models. The RASE value was 0.208 for training and 0.1714 for validation. The ANN yielded a better outcome compared with linear regression and the Decision Tree.

In the study [16], D. M. S. Anupama Kumar (2012) achieved 98 percent accuracy using a neural network. The key attributes that the author took into account were the internal and external assessments.

Pauziah, Norlida, and Jamalul-lail in their analysis [17] used the Levenberg-Marquardt (LM) algorithm, one of the fastest known neural network training techniques. The MSE (Mean Square Error) for diploma students was found to be 0.0488 and the coefficient of correlation R was 0.9245. They considered the grade point (GP) of the fundamental subjects to be the elementary attribute. The output of the model was evaluated on the basis of the correlation and mean square error values.
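
The two figures reported there can be computed for any model from its predictions; the following is a minimal sketch of calculating mean square error and the correlation coefficient R, using hypothetical grade-point values rather than the data of [17].

```python
# Minimal sketch: evaluating predictions with MSE and correlation R (illustrative only).
import numpy as np

actual    = np.array([3.2, 2.8, 3.6, 2.1, 3.9])  # hypothetical true grade points
predicted = np.array([3.0, 2.9, 3.5, 2.4, 3.7])  # hypothetical model outputs

mse = np.mean((actual - predicted) ** 2)          # mean square error
r = np.corrcoef(actual, predicted)[0, 1]          # Pearson correlation coefficient R
print(f"MSE = {mse:.4f}, R = {r:.4f}")
```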

Edin Osmanbegović and Mirza Suljic considered the social and academic aspects of the students in their study [18]. The authors used the MLP (Multilayer Perceptron) algorithm to construct the prediction model, and the prediction accuracy was 71.2%. The neural network took comparatively more time to construct the model, with a learning time of 4.13 seconds.

Raheela Asif, Saman Hina, and Saba Izhar Haque in their analysis [19] used academic results such as grades from previous institutions and previous year results of the graduate program. They did not consider the socio-demographic parameters of the students. The model gives a reasonable accuracy of 70.43% and a kappa value of 0.497 using RapidMiner.

The authors in their study [20] used a multilayer perceptron neural network with a linear sigmoid activation function to predict student output. The model, trained with a feed-forward backpropagation algorithm, gives 84.6 percent accuracy. Data of 150 students were taken into account, of which 60 percent were used for training, 30 percent for testing the results, and 10 percent for cross-validation.
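
A data split along those proportions could be sketched as follows; the 60/30/10 figures come from [20], while the code, the generated data, and the variable names are illustrative assumptions.

```python
# Minimal sketch: 60% train / 30% test / 10% validation split (illustrative only).
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(150, 10))          # 150 students, 10 hypothetical attributes
y = rng.integers(0, 2, size=150)        # hypothetical pass/fail labels

# First carve off 60% for training, then split the remaining 40% into 30% test and 10% validation
X_train, X_rest, y_train, y_rest = train_test_split(X, y, train_size=0.6, random_state=3)
X_test, X_val, y_test, y_val = train_test_split(X_rest, y_rest, train_size=0.75, random_state=3)
print(len(X_train), len(X_test), len(X_val))  # 90, 45, 15
```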

Nick Z. Zacharis in their paper [23] used the multilayer perceptron (MLP) algorithm to build the model. The author used four parameters from the student's online activity: messages, the collaborative creation of new content (CCC), online quizzes, and files viewed. The neural network was trained using the back-propagation algorithm. The model was 98.3% accurate in predicting student success and failure.

2. Previous work
A comprehensive study in the field of educational data mining is presented below in the form of tables that summarize important work done in this field using Decision Trees, Naive Bayes, and Neural Networks.

Table 2. Description of previous related work on EDM using Decision Tree

Author / Year | Algorithm / Tool | Attributes | Result
Ashutosh, Deepak and Mukesh (2020) | Decision Tree – CHAID, ID3, C4.5 | Different social and academic factors | ID3 shows maximum accuracy of the three
Ahmed Mueen, Bassam Zafar, Umar Manzoor (2016) | Decision Tree – C4.5 / Weka 3.6 | 38 different attributes and 7 best attributes | 79.2% on all attributes and 80.5% using best attributes
Kolo David Kolo, Solomon A. Adepoju, John Kolo Alhassan (2015) | Decision Tree – CHAID / SPSS | Score, status, gender | 55.10% of the male students and 59.03% of the female students predicted
S. Anupama Kumar and Dr. Vijayalakshmi M.N (2011) | Decision Tree – J48 (C4.5) and ID3 / Weka | Internal marks, external marks, CGPA | J48 is more accurate than ID3
R. R. Kabra and R. S. Bichkar (2011) | Decision Tree – J48 / Weka | HSCCET | Accuracy of the model is 60.46%
Brijesh Kumar Baradwaj, Saurabh Pal (2011) | Decision Tree – ID3 | PSM, SEM, ASM | PSM has the highest gain and is used as the root node
Mr. M. N. Quadri and Dr. N.V. Kalyankar (2010) | Decision Tree – J48 / Weka | Academic trouble, academic preferences, financial position | Low income level has a great influence on dropout
Zaidah Ibrahim, Daliela Rusli (2007) | Decision Tree – 2-way and 4-way split tree / SAS Enterprise Miner | CGPA | RASE for both trees is 0.1769


Table 3. Description of previous related work on EDM using Naive Bayes

Author / Year | Algorithm / Tool | Attributes | Result
Li Peiliang, Hu Peipei (2018) | Bayesian classifier / SPSS | Credit and integrity of college students | Confidence degree of students' credit evaluation is high
Raheela Asif, Saman Hina and Saba (2017) | Bayesian classifier – Naïve Bayes / RapidMiner | High school marks, first year and second year marks | 75.65% accuracy without feature selection and 74.78% with feature selection
Ahmed Mueen, Bassam Zafar and Umar Manzoor (2016) | Bayesian classifier – Naïve Bayes | 38 attributes using 10-fold cross-validation | Accuracy of 86.0%
R. Sumitha and E.S. Vinoth Kumar (2016) | Bayesian classifier – Naïve Bayes / Weka | CGPA, attendance, 12th grades | Accuracy of 85.92%
G. Gray, C. McGuinness, and P. Owende (2014) | Bayesian classifier – Naïve Bayes / RapidMiner | GPA, sem-1, sem-2 grades | Accuracy of 76.51%
Edin Osmanbegović and Mirza Suljic (2012) | Bayesian classifier – Naïve Bayes / WEKA | 12 different attributes such as gender, distance, GPA, etc. | Prediction accuracy of 76.65%
Table 4. Description of previous related work on EDM using Neural Network

Author / Year | Algorithm / Tool | Attributes | Result
S. Arumugam, A. Kovalan, A.E. Narayanan (2019) | Neural Network – RBFNN / Weka | 13-attribute data set from UCI | Neural network based classifier provides better accuracy
Raheela Asif, Saman Hina, and Saba Izhar Haque (2017) | Neural Network – ANN / RapidMiner | High school marks, first year and second year marks | Accuracy = 70.43%
Samy Abu Naser, Ihab Zaqout, Mahmoud Abu Ghosh, Rasha Atallah and Eman Alajrami (2015) | Neural Network – MLP (Multilayer Perceptron) | 10 different attributes such as high school score, CGPA, gender, etc. | Accuracy = 84.6%
Pauziah Mohd Arsad, Norlida Buniyamin, Jamalul-lail Ab Manan (2013) | Neural Network – Levenberg-Marquardt (LM) | Grade point (GP) of fundamental subjects | MSE = 0.0488, R = 0.9245
Anupama and Vijayalakshmi (2012) | Neural Network – ANN / Weka | Internal assessment, external assessment | Accuracy of 98%
Edin Osmanbegović and Mirza Suljic (2012) | Neural Network – MLP (Multilayer Perceptron) | 12 different attributes such as gender, distance, GPA, etc. | Accuracy = 71.2%, learning time = 4.13 s
Zaidah Ibrahim and Daliela Rusli (2007) | Neural Network – ANN | CGPA | RASE = 0.208 (training), 0.1714 (validation)

II. CONCLUSION
Educational data mining (EDM) describes a field of research in which data mining, machine learning, and statistics are applied to data produced in educational settings. The field aims to establish and enhance methods for analyzing these data to uncover new insights into how people learn in these settings. We have reviewed several papers on educational data mining and predicting students' academic performance and found that the Decision Tree classifier is the most commonly used classifier for prediction, although Naive Bayes and Neural Networks have sometimes shown higher accuracy with certain attributes. Different papers have taken different attributes and touched on many aspects that impact a student's performance. The results of these studies have already helped many students to improve their scores and do better. Many institutions have also benefited from these studies and will continue to do so.

III. REFERENCES

[1] Xindong Wu, Vipin Kumar, J. Ross Quinlan, Joydeep Ghosh, Qiang Yang, Hiroshi Motoda, Geoffrey J.
McLachlan, Angus F. M. Ng, Bing Liu, Philip S. Yu, Zhihua Zhou, Michael Steinbach, David J. Hand,
Dan Steinberg “Top 10 algorithms in data mining,” Knowl. Inf. Syst. 14(1), pp.1-37, 2008.
[2] Araque, F., Roldan, C., & Salguero, A. “Factors influencing university dropout rates,” Journal of Computer
& Education, 53, pp.563–574, 2009.
[3] Kotsiantis, S. B. “Use of machine learning techniques for educational proposes: A decision support system
for forecasting students grades,” Artificial Intelligence Review, 37(4), pp.331–344, 2012.
[4] Baker, R.S., Corbett, A.T., Koedinger, K.R. “Detecting Student Misuse of Intelligent Tutoring Systems,”
Proceedings of the 7th International Conference on Intelligent Tutoring Systems, pp.531-540, 2004.
[5] Romero, C., & Ventura, S.“Data mining in Education,” Wiley Interdisciplinary Reviews: Data Mining and
Knowledge Discovery, 3(1), pp.12–27, 2013.
[6] Kolo David Kolo, Solomon A. Adepoju, John Kolo Alhassan, “A Decision Tree Approach for Predicting
Students Academic Performance” 2015 Published by MECS Publisher
[7] Suchita Borkar, K. Rajeswari, “Predicting Students Academic Performance Using Education Data
Mining”. IJCSMC, Vol. 2, Issue. 7, July 2013, pg.273 – 279
[8] Ogunde A.O., Ajibade D.A., “A Data Mining System for Predicting University Students’ Graduation Grade Using ID3 Decision Tree Approach”, Journal of Computer Science and Information Technology, Volume 2(1), 2014.
[9] Ahmed Mueen, Bassam Zafar, Umar Manzoor “Modeling and Predicting Students’ Academic
Performance Using Data Mining Techniques”, I.J. Modern Education and Computer Science, 2016, 11,
36-42
[10] Brijesh Kumar Baradwaj, Saurabh Pal, “Mining Educational Data to Analyze Students‟ Performance”,
(IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 2, No. 6, 2011
[11] Zaidah Ibrahim, Daliela Rusli, “Predicting Students’ Academic Performance: Comparing Artificial Neural Network, Decision Tree and Linear Regression”, 21st Annual SAS Malaysia Forum, 5th September 2007, Shangri-La Hotel, Kuala Lumpur.


[12] Behrouz Minaei-Bidgoli, Deborah A. Kashy, Gerd Kortemeyer, William F. Punch, “Predicting Student Performance: An Application of Data Mining Methods with an Educational Web-Based System”, 33rd ASEE/IEEE Frontiers in Education Conference, Session T2A, November 5-8, 2003, Boulder, CO.
[13] Mr. M. N. Quadri, Dr. N.V. Kalyankar, “Drop Out Feature of Student Data for Academic Performance
Using Decision Tree Techniques” Vol. 10 Issue 2 (Ver 1.0), April 2010, GJCST Computing Classification
[14] S. Anupama Kumar and Dr. Vijayalakshmi M.N “EFFICIENCY OF DECISION TREES IN
PREDICTING STUDENT’S ACADEMIC PERFORMANCE”, D.C. Wyld, et al. (Eds): CCSEA 2011, CS
& IT 02, pp. 335–343, 2011
[15] R. R. Kabra and R. S. Bichkar “Performance Prediction of Engineering Students using Decision Trees”,
International Journal of Computer Applications (0975 – 8887) Volume 36– No.11, December 2011
[16] D. M. S. Anupama Kumar, “Appraising the significance of self-regulated learning in higher education
using neural networks”, International Journal of Engineering Research and Development Volume 1 (Issue
1) (2012) 09–15.
[17] Pauziah Mohd Arsad, Norlida Buniyamin, Jamalul-lail Ab Manan, “A Neural Network Students’
Performance Prediction Model (NNSPPM)”, Proc. of the IEEE International Conference on Smart
Instrumentation, Measurement and Applications (ICSIMA) 26-27 November 2013, Kuala Lumpur,
Malaysia
[18] Edin Osmanbegović and Mirza Suljic, Data Mining Approach For Predicting Student Performance,
Economic Review – Journal of Economics and Business, Vol. X, Issue 1, May 2012
[19] Raheela Asif, Saman Hina and Saba, "Predicting Student Academic Performance using Data Mining
Methods", International Journal of Computer Science and Network Security (IJCSNS), VOL.17, 2017
[20] Samy Abu Naser, Ihab Zaqout, Mahmoud Abu Ghosh, Rasha Atallah, and Eman Alajrami, “Predicting
Student Performance Using Artificial Neural Network: in the Faculty of Engineering and Information
Technology”, International Journal of Hybrid Information Technology Vol.8, No.2 (2015), pp.221-228
[21] G. Gray, C. McGuinness, P. Owende, An application of classification models to predict learner progression
in tertiary education, in Advance Computing Conference (IACC), 2014 IEEE International, IEEE, 2014,
pp. 549–554
[22] R. Sumitha and E.S. Vinoth Kumar, Prediction of Students Outcome Using Data Mining Techniques,
International Journal of Scientific Engineering and Applied Science (IJSEAS) – Volume-2, Issue-6, June
2016 ISSN: 2395-3470
[23] Nick Z. Zacharis, “Predicting Student Academic Performance In Blended Learning Using Artificial Neural
Networks” International Journal of Artificial Intelligence and Applications (IJAIA), Vol. 7, No. 5,
September 2016
[24] Ashutosh Shankhdhar, Deepak Sharma, Mukesh Pushkarna, Akash and Suryansh, “Intelligent Decision Support System Using Decision Tree Method for Student Career”, 2020 International Conference on Power Electronics & IoT Applications in Renewable Energy and its Control (PARC), GLA University, Mathura, UP, India, Feb 28-29, 2020.
[25] S. Arumugam, A.Kovalan. A.E. Narayanan, “A Learning Performance Assessment Model Using Neural
Network Classification Methods of e-Learning Activity Log Data”, Second International Conference on
Smart Systems and Inventive Technology (ICSSIT 2019)
[26] Li Peiliang, Hu Peipei, “Construction of College Students' Integrity Evaluation Model Based on
Bayesian Classifier”, 2018 International Conference on Information Systems and Computer Aided
Education (ICISCAE)
