
Computerized Adaptive Test based on Item Response Theory in E-Learning System

International Journal of Computer Applications 81(6):6-11 · November 2013

Yeni Kustiyahningsih · Andharini Dwi Cahyani
Universitas Trunojoyo Madura



Computerized Adaptive Test based on Item Response Theory in E-Learning System

Yeni Kustiyahningsih
Informatics Management Department, Universitas Trunojoyo Madura, Indonesia

Andharini Dwi Cahyani
Informatics Engineering Department, Universitas Trunojoyo Madura, Indonesia

ABSTRACT
Computerized Adaptive Test (CAT) is a computer-based test framework which has the ability to customize the question items given to the learner based on their estimated ability. In this research, the CAT system is built using Item Response Theory (IRT) techniques to develop an adaptive system based on question items' difficulty level and students' ability level. Moreover, to figure out the effectiveness of this CAT system, we conduct experiments comparing the average post-test scores of students in the CAT system and in a conventional system. The experimental results reveal that the average post-test score of students in the CAT system is much higher than the average post-test score of students in the traditional test system.

General Term: e-learning, Adaptive Test

Keywords: Computerized Adaptive Test, Item Response Theory, students' ability level, maximum likelihood estimation

1. INTRODUCTION

1.1 Computer Adaptive Test
Nowadays, many universities, corporations, and educational organizations develop and deliver online course materials for distance learning programs (Georgieva, Todorov, & Smrikarov, 2003). E-learning, also known as distance learning or web-based learning, is self-learning using electronic resources through the Intranet or Internet (Barker, 2002). The reason for the rapid growth of e-learning is its convenience and efficiency, so that the learning process can take place anywhere and at any time (Kabassi & Virvou, 2004). One perspective of using e-learning is to solve the limitations of conventional learning by offering a flexible schedule, better interactivity, and better quality and variety of lecture materials (Jayasimman & George, 2013).

To help the learner learn more efficiently, the web-based learning system needs to be personalized (Xu & Wang, 2006; Brusilovsky, 1999; Andharini, 2012). Some personalization systems consider the learner's experiences, preferences, goals, and existing knowledge (Huang, Huang, & Chen, 2007). The term personalized e-learning has become popular in the learning context over this decade. By using personalization in an e-learning system, the learner can be uniquely identified and their abilities can be individually monitored and assessed (Alavi & Leidner, 2001). One of the personalization systems in e-learning is a computer testing system that encourages the learner to improve their abilities (Davidovic, Warren, & Trichina, 2003).

Most of the educational assessment models applied in e-learning still employ the same test questions for all learners. These practices are based on the assumption that learners of the same age or level of education have the same abilities. Meanwhile, learners' abilities differ even when they are of the same age or in the same class: in the same class there are not only intelligent and bright learners but also slower ones. The adaptive test model can be used to overcome this problem. Moreover, in this model, the test questions given to each learner are different and based on the learner's ability.

In the beginning, the CAT system calibrates the difficulty level of each test question using Item Response Theory in a non-adaptive manner (Van der Linden & Hambleton, 1997; Wiliam, 2011). Then, the system tries to recognize the learner's ability by giving a test question of middle difficulty. If the learner's answer is correct, the CAT system gives the learner a question with a higher difficulty level; otherwise, the CAT system will give the learner a question with a lower difficulty level. By using this system, intelligent learners feel more challenged and slower learners are encouraged at once.

There are various IRT models, with different complexity levels. The simplest IRT model is the Rasch model, in which a learner's response to a question item depends on the learner's ability and the question item's difficulty (Van der Linden & Hambleton, 1997; Kim, 2006). More complex IRT models include additional parameters, such as a discrimination parameter, a pseudo-guessing parameter, or effects of person or item characteristics (Ozaki & Toyoda, 2006; Kim, 2006; Wauters, Desmet, & Van den Noortgate, 2010).

1.2 ITEM RESPONSE THEORY (IRT)
Item response theory (IRT) is a psychometric theory based on the idea that the probability of a correct response to an item is a mathematical function of person and item parameters (Kim, 2006). IRT provides a basis for estimating the parameters, determining how well the data conform to the model, and investigating how to measure item properties. Figure 1 shows the diagram of the IRT algorithm.

[Figure 1: a loop of three steps: calculate P(θ); estimate the next student ability; calculate the Item Information Function.]

Figure 1. Steps in IRT algorithm

The following is the explanation of each step.

Calculate P(θ)
In item response theory, there is an IRT main curve (Baker, 2001) that represents the characteristics of a problem and expresses the possibility that a learner with a certain ability (θ) can answer the question correctly. This curve, namely the Item Response Function (IRF), is denoted by P(θ). Furthermore, there are 3 item parameters of the IRF curve. The parameters simply determine the shape of the IRF and in most cases have a direct interpretation. Those 3 parameters are a (discriminant factor), b (question's difficulty level) and c (pseudo-guessing parameter). In our study, we use 2-parameter logistic (2PL). Equation (1) shows the formula of the 2PL model:

P(θ) = 1 / (1 + e^(-a(θ - b)))    (1)

where:
P(θ) = probability that a learner with ability θ answers the question correctly
θ = estimated ability of the learner
a = discriminant factor of the question
b = difficulty level of the question
e = exponential constant (2.718)

Estimate the next learner's ability
To estimate the next learner's ability, we apply maximum likelihood estimation (MLE) theory. This process begins with the estimated ability of the learner and some parameters. In this process, we predict the learner's probability to answer the next question item correctly. This Maximum Likelihood Estimation (MLE) theory is quite efficient and able to distribute the error normally (Wang, 2006). Furthermore, estimating the learner's ability is an iterative process that starts with an initial value for the learner's ability. In this research, we set the initial value = 0.5.

The learner's ability estimation formula is a modification of the Newton-Raphson iterative model. Equation (2) presents the formula of maximum likelihood estimation:

θ_{s+1} = θ_s + [ Σ_{i=1..n} a_i (u_i - P_i(θ_s)) ] / [ Σ_{i=1..n} a_i^2 P_i(θ_s) Q_i(θ_s) ]    (2)

where:
θ_s = learner's ability estimate at iteration s
a_i = discriminant parameter of item i, with i = 1, 2, …, n
u_i = corresponding answer on item i, score = 1 for a correct answer, otherwise score = 0
P_i(θ_s) = probability that the learner answers question i correctly
Q_i(θ_s) = 1 - P_i(θ_s)

Calculate the item information function (IIF) for each question
After we estimate the next learner's ability, the next step is to calculate the IIF for each item in the question bank. Then, the system will select the item question with the greatest IIF value. The item with the greatest IIF value will then be given as the next test question. To calculate the IIF, we can use equation (3):

I_i(θ) = a_i^2 P_i(θ) Q_i(θ)    (3)

After performing the IRT algorithm, the system needs criteria to stop the test. Usually, the stopping criteria are as follows:
a. Fixed length: after a certain number of questions, the test will be terminated.
b. Time limit: after reaching a certain time limit, the test will be terminated.
In this research, we use fixed length to terminate the test.

2. THE PROPOSED COMPUTERIZED ADAPTIVE TEST (CAT)
This section describes the system requirements, components and mechanism of the CAT application.

2.1 System Requirement Analysis
This computerized adaptive system is integrated within an e-learning application. Therefore, learners are supposed to learn the material before they perform the test. Figure 2 below illustrates the responsibilities of each actor (teacher and student) towards the e-learning application. In this system there are three actors: admin/teacher, students/members and the public. Each actor has different access rights to the system. The admin/teacher manages the material, questions, syllabus, terms, user guide, polls, forums, guestbook and users. The access rights of students as members are working on the adaptive test, following the forum and setting profiles. The access rights of the public are searching materials, filling the guestbook and polls, viewing help, viewing course materials and viewing the syllabus.

2.2 Flowchart and Mechanism of the CAT Application
A flowchart is a chart that shows the work flow or processes done on the system as a whole and explains the sequence of procedures that exist in the system. The following is a description of each process in figure 3:
1. The first step begins with the input of items into the database; there are 180 items, a combination of chapters 1 to 9.
2. Next, the initialization of ability gives each student an initial value of 0.5.
3. Then the system displays the first question, taken at random, with an assumed starting difficulty level of 5/10 = 0.5, i.e. 5 correct answers compared with the 10 questions done.
4. The system then checks the student's response. After getting the response from the student, the system calculates the IRT value, followed by the MLE and finally the IIF.
5. If the student answers incorrectly, the next question displayed will have a difficulty level below 0.5; if the student answers correctly, the next question given will have a difficulty level above 0.5, i.e. the difficulty level will rise.
6. Then the next question is displayed according to the response to the student's answers.
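Equations (1)-(3) can be sketched in a few lines of Python. This is an illustrative sketch only; the function and variable names are ours, not from the original system (which was implemented in JavaScript):

```python
import math

def p_2pl(theta, a, b):
    """Equation (1): 2PL probability that a learner with ability theta
    answers an item with discrimination a and difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def mle_update(theta, items, responses):
    """Equation (2): one Newton-Raphson / MLE step for the ability
    estimate, given (a, b) item parameters and 0/1 responses."""
    num = 0.0  # sum of a_i * (u_i - P_i(theta))
    den = 0.0  # sum of a_i^2 * P_i(theta) * Q_i(theta)
    for (a, b), u in zip(items, responses):
        p = p_2pl(theta, a, b)
        num += a * (u - p)
        den += a * a * p * (1.0 - p)
    return theta + num / den

def iif(theta, a, b):
    """Equation (3): item information function I_i(theta)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)
```

Note that at θ = b the 2PL gives P(θ) = 0.5 and the information a²·P·Q is at its maximum, which is why selecting the item with the greatest IIF tends to pick questions whose difficulty is near the learner's current ability estimate.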


[Figure 2 is a use case diagram. Use cases include: Login, Do Adaptive Test, Manage Item, Manage Guest, Manage Material, Manage User, Accept Information, Download and view learning material, Do Registration, Searching Material, and Fill Guestbook/polling (Public).]

Figure 2. Use Case Diagram of the CAT system

[Figure 3 is a flowchart: start → Input items → Input initial ability → Show items → Answer item question → Calculate IRT → if the answer is correct, the item difficulty increases, otherwise it decreases → Show next items → repeat until the stopping criterion is met → Show result → End.]

Figure 3. CAT flowchart system
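The flowchart steps can be turned into a runnable sketch. The following Python loop follows the process described in section 2.2 with the fixed-length stopping rule; the small item bank and the answer function are hypothetical stand-ins for the real 180-item database and the student:

```python
import math
import random

def p_2pl(theta, a, b):
    # Equation (1): probability of a correct answer under the 2PL model
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def next_item(theta, bank, asked):
    # Pick the unasked item with the greatest item information, equation (3)
    return max((i for i in bank if i not in asked),
               key=lambda i: bank[i][0] ** 2
                             * p_2pl(theta, *bank[i])
                             * (1 - p_2pl(theta, *bank[i])))

def run_cat(bank, answer_fn, theta=0.5, length=10):
    """Fixed-length CAT session: show an item, record the response,
    update the ability by one MLE step, select the next item by max IIF."""
    asked, responses = [], []
    item = random.choice(list(bank))      # first item is taken at random
    for _ in range(length):
        u = answer_fn(item)               # learner's 0/1 response
        asked.append(item)
        responses.append(u)
        # One Newton-Raphson step over all answered items, equation (2)
        num = sum(bank[i][0] * (r - p_2pl(theta, *bank[i]))
                  for i, r in zip(asked, responses))
        den = sum(bank[i][0] ** 2
                  * p_2pl(theta, *bank[i]) * (1 - p_2pl(theta, *bank[i]))
                  for i, r in zip(asked, responses))
        if den > 0:
            theta += num / den
        if len(asked) == length:          # stopping rule: fixed length
            break
        item = next_item(theta, bank, asked)
    return theta, asked, responses
```

With equal discrimination values, the max-IIF selection reduces to choosing the unasked item whose difficulty b is closest to the current ability estimate, which matches the flowchart's "harder after a correct answer, easier after a wrong one" behaviour.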

The number of questions for a given test is 10. If the system has not yet met the specified number of questions, it displays the next question and returns to step 4 until the number is met, then the system stops. When it meets the number of questions, it displays the results of the adaptive test work that has been done by the student.

In general, the principle of CAT begins with initialization of the student's ability, assuming the student can answer questions of medium difficulty. The selection of the next question is based on the examinee's response to the question that was given: if the answer is correct, the next question will have a higher level of difficulty, but if the answer is wrong, the next question will have a lower level of difficulty. There are three main steps in CAT. First, all questions in the question bank that have not yet been given are evaluated to select the best question based on the estimated current ability; this process is also known as item analysis. Second, the best question is issued and the student answers that question. Third, the ability level is newly estimated based on the answers to all questions given. Steps 1 through 3 are repeated continuously until a limit with certain criteria is reached: for example, the number of questions has reached a certain limit, the ability of the learner can be determined, the test has encompassed some particular topic, or some other criterion holds.

3. APPLICATION OF COMPUTERIZED ADAPTIVE TEST (CAT)
The CAT (Computer Adaptive Test) system is implemented on localhost, using the programming languages HTML, CSS and Javascript. The software is displayed in a web browser. The menus available in this e-learning system are as follows.

3.1 Adaptive Test Menu
The adaptive test menu is a special menu that can be accessed by students. This menu contains a collection of Indonesian-language questions as a learning evaluation. Adaptive tests are provided in the form of questions with multiple-choice models; the number of questions worked out by the students depends on the admin (teacher). The menu is dynamically adaptive, meaning that at any time the admin (teacher) can make changes to the questions. The workings of this test are as follows: the student works on a question, then checks the answer by pressing the answer button; the system will automatically select and display the next question based on the student's answers. If the student answers correctly, the next question that appears will have a higher difficulty level; if the student answers incorrectly, the next question will have a lower difficulty level. The results page of a student's test contains the student's grades and a description of the value and ability of the student (in figures and graphs). The system also provides feedback in the form of comments to students: a compliment if the student has performed well and a warning if the student has not yet done well. The adaptive test menu also allows students to perform exercises again at a later time, so that the system provides the progress of the students' value development.

3.2 Adaptive Test Form
In this form students work on the adaptive items provided. The items are created with multiple-choice models. After a question is answered, the next question to be displayed is chosen in accordance with the correctness of the student's answer to that question. This can be seen in figure 4.

[Figure 4 shows the adaptive test form with a sample item (in Indonesian): "Hallo Feni. 2) Mengantuk, gangguan pencernaan, mulut kering, retensi urine dalam teks petunjuk penggunaan obat, kalimat di atas disebut dengan ….. A. Komposisi, B. Cara Kerja obat, C. Indikasi, D. Efek Samping" (English: "Drowsiness, indigestion, dry mouth, urinary retention" in a text of drug usage instructions; the sentence above is called: A. Composition, B. How the drug works, C. Indication, D. Side effects).]

Figure 4. Adaptive Test Form

3.3 Graph Form
After working on the items, the students can see graphs of the difficulty level of the questions and the ability of the students, and also their scores. This can be seen in Figure 5 and Figure 6.

[Figure 5 is a chart titled "Grafik Tingkat kesulitan" (difficulty level graph), plotting the difficulty level (between roughly 0.485 and 0.51) of questions ("soal") 1 through 7.]

Figure 5. Graph form of item difficulty


[Figure 6 is a chart titled "Tingkat Kemampuan Siswa" (student ability level), plotting ability over questions ("soal") 1 through 10.]

Figure 6. Graph form of student ability

3.4 Scores Menu
The scores menu is a menu for students that contains the history of adaptive tests that have been done. The scores displayed are based on an existing session id. Each score of each session id can display the graphs of the tests that have been followed; the graph shows the level of difficulty, the value and the ability of the student. When the results of an adaptive test are pressed, the system will display the adaptive test results and graphs for that session id. This can be seen in Figure 7.

3.5 Scores Form for each Session Id
In this form the student's development is presented in the form of graphs based on the existing session ids. This page also displays the historical results of adaptive tests ever done.

[Figure 7 shows the "e-Adaptive Test SMP Negeri 1 Sumenep" score detail page ("Detail Nilai"), listing session id, date and score:
IDSESSION | TANGGAL (date) | NILAI (score)
3167672473 | 2013-06-04 | 70
4165820989 | 2013-06-03 | 100
2472198751 | 2013-06-03 | 60
4423153936 | 2013-06-03 | 60
1519529189 | 2013-06-02 | 50]

Figure 7. Scores form for each session id

4. EXPERIMENTAL RESULT

4.1 Experimental Design

4.1.1 Data used
The data used in this research are data from classes of SMP Negeri 1 Sumenep. The number of items used is 180 items. The test is divided into 2 groups, each group consisting of 88 students. The first group uses the adaptive test, while the second group does not use the adaptive test but conventional tests. In the first scenario, each group conducted a pretest to determine the students' initial ability. The average pretest score for group 1 was 72.46, while for group 2 it was 72.67. Then the first group did adaptive testing while the second group did non-adaptive testing. The adaptive test was conducted to determine the ability and the difficulty level for each participant, from which the average value is derived, whereas the non-adaptive or conventional tests were done to determine the average test value; the results can be seen in Table 2. After performing the adaptive and non-adaptive tests, a post-test was done in order to determine how much the scores increase from pre-test to post-test in both groups.

4.2 Experimental Analysis
In the adaptive test trials, the following example is taken from the results of one student. It is clear that the level of difficulty of the questions given to the student follows the responses of the student's answers, as indicated in Table 1. The student, named Adinda, started the skills test at the initial ability; the first question was given with difficulty level 0.5. When answering that question, the answer was wrong, so the student's ability fell, for question no.2, to -0.833333, with difficulty level 0.488889. On answering correctly, the student's ability rose to -0.0749264, and the next question was given a 0.5 degree of difficulty; on an incorrect response the student's ability went down again, to 2.3208. Up to question no.10, the ability of the student continued to go up or down depending on the responses, and the questions given were in accordance with the responses to the student's answers.

Table 1. Result of student 1
NO | Id item | a | b | Respon
1 | 153 | 1.5 | 0.5 |
2 | 14 | 1.5 | 0.488889 |
3 | 163 | 1.5 | 0.5 |
4 | 50 | 1.5 | 0.464286 |
5 | 167 | 1.5 | 0.5 |
6 | 64 | 1.5 | 0.514286 |
7 | 38 | 1.5 | 0.457143 |
8 | 117 | 1.5 | 0.521739 |
9 | 168 | 1.5 | 0.454545 |
10 | 58 | 1.5 | 0.444444 |

Further trials using adaptive and non-adaptive tests were done in stages; the students did 3 sessions. From the results of the trials that have been conducted, the average value of using adaptive tests can be compared with the average value of using non-adaptive tests, as seen in Table 2.
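As a quick arithmetic check, the average increases reported for the two groups can be recomputed directly from the pre-test and post-test averages given in Table 3 (the numbers below are copied from the paper):

```python
# Pre-test and post-test group averages reported in Table 3 of the paper
pre = {"adaptive": 72.46, "non_adaptive": 72.67}
post = {"adaptive": 78.57, "non_adaptive": 76.65}

# Average increase in value for each group (post-test minus pre-test)
increase = {group: round(post[group] - pre[group], 2) for group in pre}
print(increase)  # {'adaptive': 6.11, 'non_adaptive': 3.98}
```

This reproduces the increases of 6.11 (adaptive) and 3.98 (non-adaptive) discussed in the analysis.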


Table 2. Comparison between Adaptive Test and non-Adaptive Test
Test | Adaptive test average | Non-adaptive test average
1 | 52.80193 | 55.34091
2 | 76.42512 | 62.84091
3 | 86.18357 | 72.15909

After all students went through this stage, a posttest was performed for both groups; the results can be seen in Table 3 below.

Table 3. List of scores of pretest and posttest results
Test | Adaptive test average | Non-adaptive test average
Pre Test | 72.46 | 72.67
Post Test | 78.57 | 76.65
Average increase in value | 6.11 | 3.98

The table above shows that the average student score after the adaptive test is higher than with the conventional test. The average increase in the posttest score is 6.11 for the adaptive test, whereas for the conventional or non-adaptive test it is 3.98, so the use of the CAT method is better than the conventional one.

5. CONCLUSIONS
The conclusion that can be drawn from this research is that with adaptive testing, or CAT (Computer Adaptive Test), the evaluation system is more accurate in measuring the ability of students and can accommodate the diversity of user abilities, providing learning materials according to the students' level of proficiency. Comparing the average values when using the adaptive and non-adaptive tests, the average increase between pre-test and post-test is 6.11 for students who used the adaptive test and 3.98 for those who used the conventional or non-adaptive test, so students' average scores increased further by using CAT or adaptive tests compared with conventional tests.

6. REFERENCES
[1] Andharini, D. C., & Ari Basuki. (2012). Personalized Learning Path of a Web-based Learning System. International Journal of Computer Applications, 53(7), 17-
[2] Alavi, M., & Leidner, D. E. (2001). Research commentary: Technology-mediated learning - A call for greater depth and breadth of research. Information Systems Research, 12(1), 1-
[3] Brusilovsky, P. (1999). Adaptive and intelligent technologies for web-based education. Künstliche Intelligenz, 13(4), 19-25.
[4] Baker, F. (2001). The basics of item response theory. ERIC Clearinghouse on Assessment and Evaluation.
[5] Barker, K. (2002). E-learning in three steps. In School Business Affairs. Available: http://www.asbointl.org
[6] Davidovic, A., Warren, J., & Trichina, E. (2003). Learning benefits of structural example-based adaptive tutoring systems. IEEE Transactions on Education, 46(2), 241-251.
[7] Georgieva, G., Todorov, G., & Smrikarov, A. (2003). A model of a Virtual University: some problems during its development. In Proceedings of the international conference on Computer systems and technologies: e-Learning. Bulgaria: ACM Press.
[8] Hamdi, M. S. (2007). MASACAD: A multi-agent approach to information customization for academic advising of students. Applied Soft Computing, 7, 746-771.
[9] Huang, M. J., Huang, H. S., & Chen, M. Y. (2007). Constructing a personalized e-learning system based on genetic algorithm and case-based reasoning approach. Expert Systems with Applications, 33, 551-
[10] Kabassi, K., & Virvou, M. (2004). Personalised adult e-training on computer use based on multiple attribute decision making. Interacting with Computers, 16, 132.
[11] Kim, S. (2006). A comparative study of IRT fixed parameter calibration methods. Journal of Educational Measurement, 43(4), 355-381.
[12] Jayasimman, L., & George Dharma Prakash, E. (2013). A Soft Computing Approach for User Preference in Web Based Learning. International Journal of Computer Applications, 61(21), 25-29.
[13] Ozaki, K., & Toyoda, H. (2006). Paired comparison IRT model by 3-value judgment: estimation of item parameters prior to the administration of the test. Behaviormetrika, 33(2), 131-147.
[14] Van der Linden, W. J., & Hambleton, R. K. (1997). Handbook of modern item response theory. New York: Springer.
[15] Wauters, K., Desmet, P., & Van den Noortgate, W. (2010). Adaptive item-based learning environments based on the item response theory: possibilities and challenges. Journal of Computer Assisted Learning, 26(6), 549-562.
[16] Wiliam, D. (2011). What is assessment for learning? Studies in Educational Evaluation, 37(1), 3-
[17] Wang, F.-H. (2006). Application of componential IRT model for diagnostic test in a standard conformant e-learning system. In Sixth International Conference on Advanced Learning Technologies (ICALT'06).
[18] Xu, D., & Wang, H. (2006). Intelligent agent supported personalization for virtual learning environments. Decision Support Systems, 42, 825-843.
