
Evaluating Student’s Knowledge of Learned Concepts in Online Courses Using Graphical

Interface of Knowledge Structure

Nora Alrashed

Introduction

In many learning courses, assessments are essential in determining a student's

comprehension level of the course materials. Often, online courses rely on multiple-choice

questions as one method of evaluation. According to Douglas, Wilson, and Ennis (2012), using

multiple-choice questions has proven beneficial to online courses in multiple ways, including

impartiality and ease of marking. Moreover, they allow instructors to assess a learner on a

broader range of topics. However, these assessments, being quick and easy to administer and

score, have several downsides. Zimmerman et al. (2018) claim that multiple-choice questions

rely on a dichotomous scoring approach: correct or incorrect. A student who provides a wrong

answer receives no credit. Furthermore, the only information available to explain the failure of the

student is the distractors or lures provided in the assessment. Further, Zimmerman et al. (2018)

expound that students who give a wrong answer might have misinterpreted the question, held a

misconception, or guessed the answer wrongly. Similarly, Gronlund (2006) indicates that

multiple-choice questions only favor recognition of answers rather than the ability to recall

information from memory. As McConnell, St-Onge, and Young (2015) inform, students retain

more information when they can retrieve information from memory rather than relying on

recognition. In contrast, open response assessments, such as essays and online discussions,

provide a learner with the chance to express himself or herself in the language of the discipline

(Paxton, 2000). Importantly, they give the teacher rich details about a student's level of

knowledge. Though they also have limitations, such as a lengthy marking process, they promote

integrative skills and knowledge organization, among other benefits (Zimmerman et al., 2018). Along these

lines, the Graphical Interface of Knowledge Structures (GIKS) is a free web-based automated

essay scoring system that can operate on any browser on both computers and tablets

(Zimmerman et al., 2018). The ease of training and use makes the system more appealing to

teachers (Zimmerman et al., 2018). Moreover, students receive a visual representation of how

their understanding of learned concepts aligns or differs from the course content (Zimmerman et

al., 2018). Because GIKS provides a knowledge structure, students can quickly identify the areas they need to improve. A notable challenge with GIKS is that it relies on specific and technical terms in its analysis of vocabulary (Kim, 2017). Nonetheless, it remains revolutionary in how both teachers and

students receive information regarding a student’s performance. Thus, the research will attempt

to evaluate the impact of using open response assessments in an online course and how they

nurture higher knowledge levels compared to multiple-choice questions, as well as offer insights

into how the GIKS scoring mechanism assists students in understanding their comprehension of

course content.

With many teachers adopting computer-automated scoring systems, such as GIKS, for essays, concerns have arisen about their use because such systems can be costly to acquire and often provide only numeric scores accompanied by generic feedback. As Kim, Clariana, and Kim (2018) elucidate, providing formative feedback on students' writing assessments is very labor-intensive for instructors. However, an automated system such as GIKS can play an indispensable role by providing supportive visual feedback and helping to develop conceptual knowledge of learning activities (Kim et al., 2018). Therefore, it is

crucial to assess the impact of GIKS on the quality of students’ knowledge acquisition and

retention when taking an online course.



Research Questions

The research questions for this study will focus on understanding whether open response assessments are more beneficial than multiple-choice questions. They will also focus on

understanding how GIKS supports undergraduate students taking an online course to acquire

knowledge and learning skills that reflect their conceptual perception when writing an

assessment. Specifically, the research will attempt to answer these questions:

a. Are open response assessments, such as essays and online discussions, more effective in

reinforcing a student's knowledge acquisition and retention compared to multiple-choice

questions?

b. Is the quality of essay summaries better when using GIKS compared to other common

methods?

Literature Review

Reading, writing, and learning have an intimate relationship, and they contribute to a

learner's level of knowledge. As Kim (2017) highlights, learning and acquiring new information primarily depend on how a learner organizes information in his or her memory. From this perspective, learning progress is best illustrated through the notion of knowledge structures (KS), which are characterized by visible, directional changes in a student's organization of concepts (Kim, 2017). Similarly,

Tawfik, Law, Ge, Xing, and Kim (2018) inform that the ability of a student to solve a particular problem relies on their KS, comprising the rules and patterns they employ when making an

argument. Clariana, Wolfe, and Kim (2014) indicate that instructors should evaluate learning

outcomes with a network framework, which can identify and empirically test the

interrelationships among the knowledge constructs. Such a network captures the relational properties of a learner's knowledge and is therefore most effective in describing and representing a student's KS.
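To make the network idea concrete, the short Python sketch below compares a hypothetical student's knowledge structure, treated as a set of concept-to-concept links, against an expert referent structure by computing the proportion of shared links. The concept names, links, and overlap measure are illustrative assumptions and are not drawn from GIKS itself.

# Illustrative sketch: a knowledge structure (KS) as a set of concept links,
# compared against an expert referent by simple link overlap.
# The concepts and links below are hypothetical examples.

def link_overlap(student_links, expert_links):
    """Proportion of expert links that also appear in the student's structure."""
    if not expert_links:
        return 0.0
    return len(student_links & expert_links) / len(expert_links)

def make_links(pairs):
    """Store links as unordered concept pairs."""
    return {frozenset(pair) for pair in pairs}

expert = make_links([("mean", "variance"), ("variance", "standard deviation"),
                     ("sample", "population"), ("mean", "sample")])
student = make_links([("mean", "variance"), ("sample", "population"),
                      ("mean", "median")])

print(f"Link overlap with the expert structure: {link_overlap(student, expert):.2f}")

A higher overlap would indicate that the student's organization of the concepts more closely resembles the referent structure.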

Notably, using GIKS does not depend on the language used in conducting the course.

Instead, it relies on pattern matching, which captures specific keywords related to a master text.

GIKS can be used within any language context for comparison and analysis purposes (Kim,

2018). GIKS allows an instructor to detect textual patterns and to weigh the differences among them according to text type. As Kim (2018) explains, a text varies depending on its context and the author's level of knowledge. Moreover, it depends on the writer's genre or purpose, and one can organize a text based on specific predominant patterns.
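As a rough, simplified illustration of this kind of pattern matching (the actual GIKS algorithm is not reproduced here; the keyword list and the sentence-level co-occurrence rule are assumptions made for the sketch), key terms taken from a master text can be located in a student's essay and linked whenever they appear in the same sentence.

import re
from itertools import combinations

# Hypothetical list of key terms drawn from a "master" (instructor) text.
KEYWORDS = ["assessment", "feedback", "knowledge", "retention", "recall"]

def extract_links(essay, keywords=KEYWORDS):
    """Link two key terms whenever they co-occur in the same sentence;
    a simplified stand-in for GIKS-style pattern matching."""
    links = set()
    for sentence in re.split(r"[.!?]+", essay.lower()):
        present = [k for k in keywords if k in sentence]
        for a, b in combinations(sorted(present), 2):
            links.add((a, b))
    return links

essay_text = ("Frequent assessment with feedback supports retention. "
              "Recall of knowledge improves when the feedback is specific.")
print(extract_links(essay_text))

The resulting set of links could then be compared with an expert structure in the manner of the earlier sketch.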

GIKS has proved successful across several studies and in diverse domains. Clariana et al.

(2014) report that it has been successfully used to measure knowledge convergence in a group of

online learners. Further studies have demonstrated the fruitful use of GIKS in establishing text structure by comparing students' text construction in Chinese, Korean, Dutch, and English. Thus,

while a multitude of approaches exists for assessing a student's knowledge, such as multiple-choice tests and interviews, using a domain-specific method like GIKS can help to

provide a comprehensive and accurate picture of a student's understanding of course contents.

Method

Context

In this study, the participants will be enrolled in an undergraduate-level online course

offered at Northern Illinois University. The online course will be chosen because it is a general introductory course required for different majors. The course will be run on the course

management system, and students will be assigned different course readings. After completion of

the assigned readings, students will be tested using a quiz and a written essay. The essays will be

run through GIKS, and the scores from this system will be compared with the scores assigned by an instructor to determine the correlation between the two.
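As a sketch of what that correlation check might look like, the example below compares hypothetical GIKS scores with hypothetical instructor-assigned rubric scores using Pearson's correlation coefficient; the scores are invented, and the choice of Pearson's r is an assumption, since the proposal does not commit to a specific statistic.

from statistics import correlation  # Pearson's r; available in Python 3.10+

# Hypothetical scores for the same five essays; real data would come from
# GIKS output and the instructor's rubric.
giks_scores = [0.62, 0.75, 0.48, 0.90, 0.55]
instructor_scores = [70, 78, 52, 93, 60]

r = correlation(giks_scores, instructor_scores)
print(f"Correlation between GIKS and instructor scores: r = {r:.2f}")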



Participants

The study will involve undergraduate students undertaking an online class at Northern Illinois University.

Data Collection

I will collect primary data from students taking an online undergraduate course. The

students will complete a pre-test in their first week. I will provide the students with a consent

form requesting that they allow their results to be used in this research. Students who permit me

to proceed with data collection will complete a post-test in their final week of the course. The

students will also complete a demographic survey and respond to different essay prompts. The instructor will manually score the assignments based on a rubric. Additionally, these essays will be rated using GIKS, and the results will be tabulated for analysis.
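For the pre-test/post-test portion of the design, a minimal tabulation of score gains might look like the sketch below; the scores are hypothetical, and summarizing gains by their mean and standard deviation is an illustrative choice rather than the study's specified analysis.

from statistics import mean, stdev

# Hypothetical pre- and post-test scores for five consenting students.
pre_scores = [55, 60, 48, 72, 66]
post_scores = [68, 71, 59, 80, 75]

gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
print(f"Mean gain: {mean(gains):.1f} points (SD = {stdev(gains):.1f})")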

References

Clariana, R. B., Wolfe, M. B., & Kim, K. (2014). The influence of narrative and expository

lesson text structures on knowledge structures: Alternate measures of knowledge

structure. Educational Technology Research and Development, 62(5), 601–616. https://doi.org/10.1007/s11423-014-9348-3

Douglas, M., Wilson, J., & Ennis, S. (2012). Multiple-choice question tests: A convenient,

flexible and effective learning tool? A case study. Innovations in Education and

Teaching International, 49(2), 111-121. https://doi.org/10.1080/14703297.2012.677596

Gronlund, N. (2006). Assessment of student achievement. Third custom edition for the

University of Alberta. Toronto: Pearson Education, Inc.

Kim, K. (2017). Graphical interface of knowledge structure: A web-based research tool for

representing knowledge structure in text. Technology, Knowledge and Learning, 18(1-2),

1-7. Retrieved from https://www.researchgate.net/publication/317977469_Graphical_Interface_of_Knowledge_Structure_A_Web-Based_Research_Tool_for_Representing_Knowledge_Structure_in_Text

Kim, K. (2018). An automatic measure of cross-language text structures. Technology,

Knowledge and Learning, 23, 301-314. https://doi.org/10.1007/s10758-017-9320-5

Kim, K., Clariana, R. B., & Kim, Y. (2018). Automatic representation of knowledge structure: Enhancing learning through knowledge structure reflection in an online course. Educational Technology Research and Development, 67, 105-122. https://doi.org/10.1007/s11423-018-9626-6

McConnell, M., St-Onge, C., & Young, M. (2015). The benefits of testing for learning on later

performance. Advances in Health Sciences Education, 20(2), 305-320.

https://doi.org/10.1007/s10459-014-9529-1

Paxton, M. (2000). A linguistic perspective on multiple choice questioning. Assessment &

Evaluation in Higher Education, 25(2), 109–119.

Tawfik, A. A., Law, V., Ge, X., Xing, W., & Kim, K. (2018). The effect of sustained vs. faded

scaffolding on students' argumentation in ill-structured problem solving. Computers in Human Behavior, 1-14. Retrieved from https://www.researchgate.net/publication/323020842_The_effect_of_sustained_vs_faded_scaffolding_on_students'_argumentation_in_ill-structured_problem_solving

Zimmerman, W., Bin Kang, H., Kim, K., Gao, M., Johnson, G., & Zhang, F. (2018). Computer-automated approach for scoring short essays in an introductory statistics course. Journal of Statistics Education, 26(1), 40-47. https://doi.org/10.1080/10691898.2018.1443047
