
Running Head: QUALITY OF COURSES

Utilizing Course Evaluations to Gauge Student Perception of the Quality of Courses

Brian Ford

University of North Alabama

Author Note

Brian Ford, Department of Interdisciplinary and Professional Studies, University of North

Alabama.

Correspondence concerning this article should be addressed to Brian Ford, University of North

Alabama, 1 Harrison Plaza, UNA Box 5005, Florence, AL 35630.

Contact: jbford1@una.edu

Abstract

This Capstone Project utilized existing course evaluations from the Fall 2015 and Fall 2016

semesters to determine if there was a difference in the quality of traditional (face-to-face) courses

and online courses, based on student perception or feedback. This paper builds upon existing

studies by gathering data from course evaluations at the institutional level, which allowed the

author to determine whether student perceptions differed not only between traditional and online

courses, but also between undergraduate and graduate students, and between the different

Colleges at a southeastern university.

Keywords: course evaluations, surveys, student perception, course quality, undergraduate,

graduate

Utilizing Course Evaluations to Gauge Student Perception of the Quality of Courses

Introduction

In today’s world of higher education, most, if not all, institutions utilize some form of course

evaluations to gather feedback from students and gauge their perception

of the quality of courses at their institution. According to Hurney, Harris, Bates Prins, and Kruck

(2014), “Incorporating opportunities for students to provide mid-semester course evaluations

allows instructors to collect formative feedback from students, which can provide valuable

insights regarding the impact and efficacy of course components on student learning” (p. 55).

Ongoing and continuous feedback, such as course evaluations that are conducted each semester,

gives students the opportunity to offer feedback on courses they are taking, which can result in

improvements to academic programs (Evans, 2015, p. 104).

Research Statement

This Capstone Project will focus on course evaluations, a common tool used in higher

education as a means of providing course feedback for instructors, and utilize them as a means

of determining student perception of the quality of courses at the University of North Alabama

(UNA). The data were based on identical questions that appeared on both the traditional and

online course evaluations. The questions aimed to address the overall quality of instruction and

instructional materials. The data reflected course evaluations from both graduate and

undergraduate students from the College of Business, the College of Arts and Sciences, the

College of Education and Human Sciences, and the College of Nursing. After extensive review

of the relevant literature, it was determined that the majority of previous studies created a survey

to gauge the quality of courses in a specific area or department at one or more institutions, but only two

studies utilized pre-existing course evaluations as a means of determining student perception of

the quality of courses in a particular discipline. None of the previous studies utilized pre-existing

course evaluations to look at student perception of the quality of courses at the institution as a

whole. This paper hopes to contribute to existing research by offering data that to this point does

not appear to exist from an institution-wide standpoint, and to serve as a source of information to

support a means of ensuring the quality of courses at UNA.

Justifications for the Study

Pre-existing Course Evaluations

Based on the literature review, only two of the previous studies utilized pre-existing

course evaluations as a means of determining the student perception of the quality of courses.

Other studies either created surveys specifically for their research, or conducted interviews.

Hancock (2013) utilized not only an ex post facto causal-comparative research design, but also

utilized secondary data analysis that was obtained from student evaluations of general education

courses that had been conducted at the institution (p. 53-54). Anderson, Tredway, and Calice

(2015) utilized the Instructor and Course Evaluation System survey instrument developed by the

institution where the study was conducted, and that survey is distributed in an online and paper

format to students at the end of each semester (p. 10). Other studies, such as Evans (2015),

utilized a cross-sectional analysis to analyze the data, creating a quantitative survey with

closed-ended questions combined with an interview using open-ended questions (p. 108). Yang and

Durrington (2010) utilized an online survey based on a list of quality benchmarks that were

published by the Institute for Higher Education Policy (p. 341). Hurney et al. (2014) gathered

data from small group instructional diagnosis, in which a consultant leads a guided, student

group discussion focused on getting feedback on issues regarding learning and other aspects of

the course (p. 55). Lee, Srinivasan, Trail, Lewis, and Lopez (2011) gathered data using a web-

based survey that was distributed and made available to a class a week before final exams (p.

160). These examples show that pre-existing course evaluations were not the source of data for

the majority of previously conducted studies of student perception of the quality of courses at an

institution. The study conducted for this Capstone Project focuses on data gathered directly from

course evaluations at an institution, which will contribute to the study of student perception of

courses, an area in which little data derived from course evaluations exists thus far.

University-Wide Student Perception

After extensive analysis of the literature, none of the previous studies looked at student

perception of the quality of courses at a university as a whole, but instead focused on specific

areas, disciplines, or course formats when gathering their data. Anderson et al.

(2015) gathered data from course evaluations of Nursing students from 9 courses in an RN-BSN

online program from 2011-2012 (p. 8). Evans (2015) obtained data by surveying, interviewing,

and gathering a cumulative average of grades in both online and face-to-face formats of one

MBA course, taught by the same instructor (p. 108). Hurney et al. (2014) gathered data from a

select number of instructors that volunteered to participate in a small group instructional

diagnosis during the Fall 2009 and Spring 2010 semesters (p. 56). Lee et al. (2011) chose an

introductory undergraduate online course in public health during the Spring 2010 semester (p.

160). Yang and Durrington (2010) gathered data from online programs by emailing the Division

of Continuing Education and instructors and faculty in the College of Business, College of

Education, and College of Arts and Sciences to encourage their students to complete the online

survey for the study (p. 347). Though Wang (2014) states that the data in their study was

gathered from the university as a whole, it only focused on online courses (p. 345). These

examples show that the majority of previous studies focused on specific disciplines or course

formats, rather than at a university as a whole, when examining student perception of the quality of

courses. The study conducted in this Capstone Project examines course evaluations from the

university as a whole, and does not exclude any disciplines or course formats. Since the majority

of the literature does not focus on student perspective of course quality from the standpoint of a

university as a whole, this Capstone Project will help provide further knowledge on the subject.

Population Size

Based on the literature review, previous studies gathered data from much smaller

populations than this Capstone Project did. Anderson et al.

(2015) had 339 students that completed the course evaluation, which was a 67% response rate (p.

11). Evans (2015) had 225 MBA students complete the online survey for their study, with a 28%

response rate (p. 109). Hurney et al. (2014) had 5,003 students who went through the small group

instructional diagnosis, but only 789 students were included in the study by completing the

online survey that was distributed at the end of each semester (p. 56). Lee et al. (2011) had 110

out of the 145 students in the introductory undergraduate online course in public health complete

the online survey used in the study (p. 160). Yang and Durrington (2010) obtained complete

survey responses from 176 respondents out of 781 online students for their study. Hancock

(2013) conducted the only study with a larger population size, with 4,751 student evaluation

submissions utilized as data (p. 57). While the study conducted by Hancock utilized a much

larger population size than the other studies, it was still substantially smaller than the population

size utilized in the study of this Capstone Project. For this Capstone Project, in Fall 2015 there

were 14,616 completed course evaluations out of 23,832 potential evaluations, and in Fall 2016

there were 15,498 completed course evaluations out of 24,459 potential evaluations (See Course

Evaluation Response Rates By Delivery document in Appendix). Of the 30,114 completed

course evaluations, 29,130 (96.7%) were traditional courses and 984 (3.3%) were online courses.

Though some of the previous studies had higher response rates relative to their total populations,

the data gathered for this Capstone Project reflected a significantly larger number of respondents.
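
As a check on these figures, the response rates and the traditional/online split follow directly

from the completed and potential evaluation counts reported above (a worked calculation using

only the numbers already given, not additional data from the appendix):

\[
\frac{14{,}616}{23{,}832} \approx 61.3\% \;\text{(Fall 2015)}, \qquad \frac{15{,}498}{24{,}459} \approx 63.4\% \;\text{(Fall 2016)}
\]

\[
\frac{29{,}130}{30{,}114} \approx 96.7\% \;\text{(traditional)}, \qquad \frac{984}{30{,}114} \approx 3.3\% \;\text{(online)}
\]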

Comparing Traditional to Online Courses

Upon analysis of the literature review, it was also determined that only a few of the

previous studies focused on a direct comparison of the student perception of the quality of

traditional to online courses. Evans (2015) utilized three of the principles of adult learning

theory in their study to look at graduate students’ perception of course effectiveness when

comparing traditional to online courses (p. 104). Kaushik, Saxena, and Garg (2012) conducted a

study to determine the student perception and satisfaction of traditional and online courses in

terms of quality, cost effectiveness, and employment opportunities of students at several colleges

in India (p. 87). The majority of other studies focused on one particular course format.

Anderson et al. (2015) focused on online nursing courses from Spring 2011 to Summer 2012 for

their study (p. 11). Hurney et al. (2014) focused on how different types of small group

instructional diagnoses had an effect on students’ perception of the learning environment of their

course (p. 56). The study conducted by Lee et al. (2011) focused exclusively on an introductory

undergraduate online course in public health (p. 160). Yang and Durrington (2010) distributed a

survey to only online students by emailing three of the university’s Colleges, and the Division of

Continuing Education as a means of encouraging students to complete the survey (p. 347). As

mentioned in a previous justification, Wang (2014) states that the data in their study was

gathered from the university as a whole, but in actuality the study excludes traditional courses

and focuses exclusively on online courses (p. 345). Though a higher percentage of respondents

in the data gathered for this Capstone Project consisted of traditional students than online

students, this project, in comparison to what has been demonstrated in the literature, offers a far

more comprehensive, truly university-wide set of data from both traditional and online

courses.

Literature Review

Student Perception of Online Courses

When it comes to the topic of student perception of the quality of courses, student

perception of online courses is important because online courses are prominent in nearly all

institutions of higher education, and because online courses allow institutions of higher

education to attract students outside of their normal demographic. If student perception of the

quality of online courses is negative at a particular institution, it may discourage students from

continuing their education at that institution, by either not completing their collegiate education

or by transferring to an institution of higher education where the student perception of online

courses is more positive. The following literature consists of studies that focused on student

perception of online courses.

The study conducted by Anderson et al. (2015) included 72 students from a 2011-2012

RN-BSN online program, and drew on course evaluations from all of the courses that took place

from Spring 2011 to Summer 2012 (p. 8). A survey was given to the students enrolled during the

terms mentioned, a total of 509 enrollments, and it is the same survey that is utilized both online

and on paper to gather data across academic disciplines at the institution where the study was

conducted (Anderson et al., 2015, p. 10). The survey consisted of 46 questions, which included

35 that were Likert-type, multiple-choice questions; 3 questions that ask the course name,

number, and instructor; 5 that were open-ended; and 3 that were optional (Anderson et al., 2015,

p. 10). The students completed the survey anonymously online, and 339 students completed the

survey during the terms mentioned, which was a 67% response rate (Anderson et al., 2015, p.

11). The results of the survey were categorized based on the following common themes that

were identified based on the questions: instructional design, course facilitation, and infrastructure

and technology (Anderson et al., 2015, p. 12). The results of the study suggested that when

Merrill’s First Principles of Instruction exist in a course, the students’ perception of the

quality of the course is also high, and that the learning experience is positively affected

(Anderson et al., 2015, p. 16). The results also demonstrated a statistically significant difference

in how highly students rated the quality of a course when they perceived the course to include

the First Principles of Instruction, compared to when they did not recognize the presence of

the First Principles of Instruction (Anderson et al., 2015, p. 16). The results of the study also

suggested that a statistically significant association existed between the factor of instructional

design and whether students rated the quality of the course as high (Anderson et al., 2015, p. 17). It is

important to mention that for each of the courses included in the study the same instructional

designer was present, and there was also a program director to ensure that there was consistency

in both structure and format in the courses (Anderson et al., 2015, p. 9). The results from this

study are important to this Capstone Project because they suggest the importance of instructional

design in online courses.

Yang and Durrington (2010) conducted a study that was based upon three quality

benchmarks from the 24 Institute for Higher Education Policy benchmarks, due to those

benchmarks being greatly associated with student experiences (p. 346). The three benchmark

scales utilized were Teaching/Learning Process Benchmarks, Course Structure Benchmarks, and

Student Support Benchmarks, and an online survey was created that included questions based on

those benchmarks, and questions that were designed to gauge student perception of the quality of

the online course they were taking (Yang and Durrington, 2010, p. 347). The survey was

accessible online, and personnel in Continuing Education and the Colleges on campus were

asked to encourage their students to complete the survey (Yang and Durrington, 2010, p. 347).

176 out of the total 781 online students completed the survey, which was a 23% response rate

(Yang and Durrington, 2010, p. 347). The data gathered from this study demonstrated that the

quality of online teaching, online learning, course structure, and student support, as perceived by

nearly all students, at least met or exceeded their expectations (Yang and Durrington,

2010, p. 351). The results also suggested that three variables have a significant effect on student

perception of the quality of courses: peer interaction, feedback from instructors, and course

structure (Yang and Durrington, 2010, p. 353). Comparing undergraduate to graduate students,

the results of the study suggested that peer interaction had a greater influence on undergraduate

students’ perception of course quality, while feedback from instructors, course structure, and

student support had a greater influence on graduate students (Yang and Durrington, 2010, p.

354). The results from the study are relevant to this Capstone Project because they reiterate the

importance of course structure, and the effect it can have on student perception of the quality of

online courses. The results from the study also identify other factors that can influence

students’ perception when it comes to course quality.

The purpose of the study conducted by Lee et al. (2011) was to determine if students’

perception of support had a direct influence on their learning outcomes (p. 160). The study

included students from a lower level undergraduate online course in public health, and was

conducted during the spring 2010 semester (Lee et al., 2011, p. 160). An online survey was used

to collect data for the study, and the survey was made available to the students a week prior to

their final exam (Lee et al., 2011, p. 160). The survey consisted of questions that intended to

gauge student perception of instructional support, peer support, technical support, and course

satisfaction (Lee et al., 2011, p. 160). 110 out of the 145 students in the course completed the

online survey (Lee et al., 2011, p. 160). The results of the study showed that overall students

perceived the course to be a supportive environment for their learning, and a lack of

opportunities for group interaction in the course resulted in students rating peer support lower

than instructional and technical support (Lee et al., 2011, p. 160). The results of the study also

suggested that instructional support, peer support, and technical support had a positive impact

upon course satisfaction with students (Lee et al., 2011, p. 160). Though there was not a strong

relationship, the results of the study suggested that a relationship between course satisfaction and

final grades in the course did exist (Lee et al., 2011, p. 160). Results from the portion of the

survey that consisted of open ended questions pointed towards three key areas: student

perception of the opportunities for interaction in the course with the three support groups; student

perception of whether content learned could be applied outside of the course; and whether the

course had self-directed learning which allowed a student to work at their own pace (Lee et al.,

2011, p. 161). The study is relevant to this Capstone Project because it demonstrates how vital

various levels of support are when it comes to student perception of and satisfaction with an online course.

The study also demonstrates that though the relationship was not significant, course satisfaction

can affect the learning outcomes of students in a course.

Wang (2014) conducted a study with the goal of identifying both social and technical

factors that impact students’ perception of the trustworthiness of an online course, and how to

integrate those factors into a framework that would be likely to gain the trust of students (p. 347).

A student’s decision to take an online course is not always an easy one, especially considering

the typical lack of face-to-face interaction. This can prove to be even more the case for

students with disabilities (Wang, 2014, p. 346). For purposes of the study, student trust in an

online course was defined as “the degree to which a student is willing to rely on the e-learning

system and has faith and confidence in the instructor or the educational institution to take

appropriate steps that help the student achieve his or her learning objectives” (Wang, 2014, p.

347). The framework defined by this study categorized 12 trust-related factors into four

dimensions: credibility, which comes from previous experience or reputation of the learning

management system or instructor; the quality of the design and accessibility of the content in the

course; the instructor socio-communicative style, which is how the instructor behaves and

communicates in the course; and what privacy and security measures exist in the online learning

system (Wang, 2014, p. 349). To gather data for the study, an online survey was distributed to

students at a university via various delivery methods, such as university listservs, online

announcements, and faculty Twitter posts. Twelve $25 Barnes and Noble gift cards were also

given away in a random drawing to survey respondents as a means of encouraging participation

(Wang, 2014, p. 350). The first part of the survey focused on obtaining demographic

information of the students, the second part of the survey consisted of questions that focused on

the 12 trust-related factors in the proposed framework, and the third part of the study consisted of

questions that were relevant to students with disabilities (Wang, 2014, p. 351). 398 students

submitted responses to the online survey, but only 361 respondents were used due to unfinished

submissions (Wang, 2014, p. 351). Based on the results, Wang (2014) decided to change the proposed framework

model to two components rather than four. Those components were

named the course instruction dimension, which included reputation, design quality, and

instructor socio-communicative style; and the privacy and security dimension (p. 352). The

results of the study showed that both dimensions demonstrated parts of an online course that can

promote student trust. The results also indicated that there was not a statistically significant

difference between male and female students in regard to their average trust rating, and there

was not a statistically significant difference based on class level (Wang, 2014, p. 354). Fifteen of the

361 students identified themselves as having some form of a disability, and those students

answered the relevant section of the online survey (Wang, 2014, p. 354). The results of the study

demonstrated that student perception of the trustworthiness of a course could increase the

likelihood of a student self-disclosing a disability (Wang, 2014, p. 355). The study is relevant to

this Capstone Project because it helps define what is meant by the trustworthiness of an online

course, and it also demonstrates how students’ perception of the trustworthiness of an online

course can affect the overall perception of the course.

Tricker, Rangecroft, and Gilroy (2001) identified two key areas of online courses that

need to be considered when creating an online course. Those areas are the demographics of the

students that make up the student body and the logistical issues that may arise when

developing an online course (p. 166). Tricker et al. (2001) suggest that it can often be difficult

to obtain student feedback in an online course, because typically online students participate from

a distance (p. 167). The focus of the study conducted by Tricker et al. (2001) was to create an

evaluation template for post-graduate online courses that helped assess student satisfaction (p.

168). For this study, a questionnaire was created from brainstorming sessions that included the

research team and online students at two schools in the UK, utilizing a qualitative approach

(Tricker et al., 2001, p. 169). Based on these sessions, the questionnaire designed consisted of

five key areas: the decision to join the online course; course satisfaction; course materials; the

components of the course; and course assessment (Tricker et al., 2001, p. 170). The study

included students from the Management Sciences and Education online programs, and 285

students participated for a 61% response rate (Tricker et al., 2001, p. 170). The results of the

study indicated that both online programs were very similar in the rankings of the five

components (Tricker, et al., 2001, p. 171). The results of the study showed that course content,

personal development, and professional development were the three factors that ranked the

highest when it came to students deciding to join the online program (Tricker et al., 2001, p.

172). The results also showed that brochures of the online programs and influence from their

employers were the least effective when it came to influencing students’ decision to choose an

online program (Tricker et al., 2001, p. 172). When it came to course satisfaction, the one factor

that was highly ranked among both online programs was the opportunities for flexible study

(Tricker et al., 2001, p. 172). When it came to course materials, the quality of content was

determined to be most influential (Tricker et al., 2001, p. 173). When it came to components of

a course, the most important factors were course content, teaching methods of the instructor,

opportunities for interactions with tutors, and how often a student had to attend a synchronous

class session (Tricker et al., 2001, p. 174). The results of the study also determined that the most

important factors of course assessment were the quality of feedback from instructors; the

relevance of course assignments; and a clear identification of assessment (Tricker et al., 2001, p.

174). The study conducted is relevant to this Capstone Project because it helped identify what

factors influence students’ decisions to join an online program. These factors can have a direct

influence on student perception of the quality of online courses.



The purpose of the study conducted by Roby, Ashe, Singh, and Clark (2013) was to

define what factors improve student and instructor experiences in both online and blended

courses at a university (p. 31). Another reason for conducting the study was that administration at

the university where the study was conducted wanted to improve student enrollment, retention,

and graduation, and saw online and blended courses as a means of doing so (Roby et al., 2013, p.

31). A survey was created for this study for both students who had taken blended or online

courses and instructors who had taught blended or online courses. Both surveys had a similar

format, but with sections pertaining to each group (Roby et al., 2013, p. 31). 1139 students,

which is around 11% of the target population, completed the survey, and 49 instructors out of the

target population of 161 completed the survey (Roby et al., 2013, p. 31). 81% of the student

responses showed that difficulty of the course content in the online course had a direct impact on

their decision to take the course; 79% of the student responses showed that the topic of the online

course had a direct impact on their decision to take the course; and 71% of the student responses

showed that recommendations from other students who had taken the online course had a direct

impact on their decision to take the course (Roby et al., 2013, p. 32). When it came to content in

an online course, 97% of the student responses indicated that having a manageable amount of

assignments in the course was the most important factor (Roby et al., 2013, p. 32). For instructors,

the highest rated reasons that influenced their decision to teach online were to experience new

methods of teaching and to have a personal interest in the topic being taught (Roby et al., 2013, p.

32). Instructors ranked the availability of technical support as the most important resource when it

came to developing an online course (Roby et al., 2013, p. 32). Both students and instructors

identified instructor-student interaction as being important or very important in an online course

(Roby et al., 2013, p. 33). One of the biggest differences between students and instructors was in

relation to a sense of community in an online course, where 68% of the instructors felt that a

sense of community was important or very important, but 54% of the students felt that a sense of

community was slightly important or not important (Roby et al., 2013, p. 33). When it came to

effective class size for an online course, the results of the study showed that instructors identified

30 as the ideal class size for an online course, but students identified 50 as the ideal class size

(Roby et al., 2013, p. 33). The study is relevant to this Capstone Project because it helped define

what factors are considered most important among students and instructors when it comes to

online courses. The factors mentioned in the study are also beneficial to administration, as it

helps them determine how best to utilize online courses as a means of increasing student

enrollment and retention.

The study conducted by Uwagie-Ero (2008) gathered student perceptions of quality in

online courses, and the population of this study was limited to undergraduate students in online

programs at a university (p. 3). The total population for this study consisted of 588 students that

were enrolled in online courses, but the participants included in the data gathered for this study

only included 19 students in an online psychology course who volunteered to participate

(Uwagie-Ero, 2008, p. 83). The study utilized a qualitative approach, and consisted of interview

questions that were separated into four sections: participant demographics; student perception of

support systems for learning; questions based on the Institution of Higher Education Policy

benchmarks; and student factors that affect learning (Uwagie-Ero, 2008, p. 3). The participating

students were interviewed online via chat sessions, and any follow-up feedback was obtained via

email (Uwagie-Ero, 2008, p. 93). The majority of the participating students were female, the

majority of the participating students were psychology majors, and the majority of the students

had previously taken an online course (Uwagie-Ero, 2008, p. 94). The results of the study

showed that students must possess several characteristics to have a quality experience in online

learning, such as strong self-discipline to be successful; a strong commitment to independent

learning; access to a high-speed internet connection; and comfort with the internet,

including the ability to navigate online (Uwagie-Ero, 2008, p. 96). The results of the study

showed that students perceive a quality online course to have faculty who are understanding,

encouraging, give clear directions for assignments, and offer enough flexibility for students to

complete their work (Uwagie-Ero, 2008, p. 96). The results of the study also showed that

students expect a quality online course (or program) to possess a great technical support system

and administration who are both caring and understanding (Uwagie-Ero, 2008, p. 97). Some of

the reasons that were given by the participating students for choosing an online course or

program were: course availability, or what time of day the course was offered; the flexibility of

an online course schedule, which allowed time for personal and family activities; the online

course was required; or it allowed the students to pursue additional education and maintain a

career (Uwagie-Ero, 2008, p. 110). The study is relevant to this Capstone Project because it

helped identify what factors were considered to be the most important when deciding to take

online courses or to join online programs. The study also helped further identify some of the

reasons that students decide to take online courses or join online programs.

How Student Attitude Affects Student Perception

One factor that can directly affect student perception of course quality is student attitude.

Students’ attitude can include their opinion of the instructor teaching the course, the content in

the course, and their overall disposition. If a student has a negative attitude towards the course,

or if a student does not believe the instructor is a content knowledge expert, it can directly

influence their overall perception of the course. The following literature looked at how students’

attitude can affect students’ perception.

Fish and Snodgrass (2015) conducted a study that focused on student characteristics such

as graduate versus undergraduate, gender, and previous course experience. These characteristics

were then looked at in regard to students’ perception of online versus traditional courses in terms

of student motivation, discipline, independence, time and cost investment, preference, happiness

and appropriateness of the courses (p. 83). The study consisted of three undergraduate and three

graduate business classes, and the survey used to gather data for the study was delivered during

the last week of classes (Fish and Snodgrass, 2015, p. 85). The survey was divided into multiple

sections that were based upon whether students had taken at least one online course or if they

had never taken an online course (Fish and Snodgrass, 2015, p. 86). A total of 61 undergraduate

students (36 males, 25 females) and a total of 45 graduate students (30 males, 15 females)

completed the survey on a voluntary basis (Fish and Snodgrass, 2015, p. 86). The results of the

study showed that when it came to motivation, both online and traditional students preferred

traditional courses. Online students felt that there was more discipline and independence in

online courses (Fish and Snodgrass, 2015, p. 86). The results of the study also demonstrated that

there was not a significant difference between undergraduate and graduate students when it came

to student perception regarding motivation, discipline, independence, time and cost investment,

preference, happiness, and appropriateness in the learning environment (Fish and Snodgrass,

2015, p. 87). When it came to male and female students, the results of the study showed that

there was not much difference regarding their perceptions of online or traditional learning

environments (Fish and Snodgrass, 2015, p. 88). For online courses, the results showed that

happiness was significantly different for students that took a second online course, versus

students that only took one (Fish and Snodgrass, 2015, p. 89). The study was relevant to this

Capstone Project because it helped identify several factors that can affect students’ attitude. The

study also showed that while there did not appear to be much difference between undergraduate

versus graduate students, or male versus female students, when it came to student perspective,

there was a clear difference in happiness with online courses for students who already had

experience in an online environment.

Myers and Thorn (2013) conducted a study to determine if students’ motives for

communicating with instructors were related to their classroom effort and their perceptions of

course workload (p. 485). The motives defined in the study were the relational motive, functional

motive, participatory motive, excuse making motive, and the sycophancy motive (Myers and

Thorn, 2013, p. 485). The study included 119 students that were enrolled in a lower-level

communications course, and utilized three different instruments to collect data (Myers and

Thorn, 2013, p. 486). The results of the study indicated that effort in the course is only slightly

to moderately related to four of the students’ motives to establish communication with their

instructors, and course workload was not related at all (Myers and Thorn, 2013, p. 486). Myers

and Thorn (2013) suggest that the results of the study show that those students who put

in increased academic effort are more likely to be motivated to communicate with their

instructors (p. 486). Communication is a very important factor when it comes to a course,

whether the course is a traditional or online course. The study is relevant to the Capstone Project

because it demonstrated that student effort in a course could be related to student motivation.

Students who are motivated to participate in a course are likely to be more communicative with

not only their instructors, but with the other students in the course. Communication in a course

can help improve the quality of a course, in both a traditional and online environment.

The purpose of the study conducted by Edwards and Edwards (2013) was to determine if

ratings of instructors found on the internet from RateMyProfessors.com influenced student

perceptions of instructors, and the attitudes students had towards learning material in the course

(p. 412). The study consisted of 186 participants who were undergraduate students in a

traditional lower-level communications course (Edwards and Edwards, 2013, p. 416). For the

study, the participating students were put in three treatment groups that included positive,

negative, and mixed RateMyProfessors.com reviews and one control group that did not include

any RateMyProfessors.com reviews (Edwards and Edwards, 2013, p. 417). Participants were

then asked to watch a prerecorded twelve-minute lecture, and following the video they were

asked to complete a survey (Edwards and Edwards, 2013, p. 417). The results of the study

determined that overall, the treatment group that was given positive RateMyProfessors.com

reviews gave higher ratings of the recorded instructor than any of the other groups. There also

appeared to be no difference between the group with mixed RateMyProfessors.com reviews

when compared to the control group that did not have access to any RateMyProfessors.com

reviews (Edwards and Edwards, 2013, p. 420). The study is relevant to this Capstone Project

because it demonstrates how students’ perception of instructors can be directly influenced by

their peers. The students with access to positive reviews of the recorded instructor from

RateMyProfessors.com gave much higher ratings than those without access to the positive

reviews. Since students’ perception of the instructor of a course can impact overall perception of

the course, this suggests that student perception of the quality of courses can potentially be

affected by how their peers perceive the quality of the instructor.

Hancock (2013) conducted a study to determine if students’ responses on their

evaluations were directly related to the instructor’s content knowledge (p. 53). The study only

utilized student evaluations from general education courses at a junior college that had already

been completed as part of the instructor’s evaluation process (Hancock, 2013, p. 53-54). The

sample used for the study consisted of five general education subjects that were taught in a

traditional format from March 2008 to November 2011 at ten locations of the junior college

(Hancock, 2013, p. 55). The data collected for the study was gathered from 4,751 completed

student evaluations that covered 427 courses taught by 204 instructors (Hancock, 2013, p. 57).

The results of the data were separated into an independent variable of content knowledge, and

control variables of gender, morning session, afternoon session, formal training, and course

experience (Hancock, 2013, p. 77). There were also subscales created based on the content of

each question in the survey. These subscales were Comprehension, or if the instructor

demonstrates knowledge of the subject in the course; Transformation, or if the instructor is

prepared and manages class time effectively; Instruction, or if the instructor follows set

guidelines; and Evaluation, or if the instructor confirms that students are aware of how their work is evaluated

(Hancock, 2013, p. 63). Based on the results of the study, it was discovered that there was

insufficient evidence to determine if an instructor’s content knowledge had any additional effect

on student evaluations outside of the listed control variables and subscales of comprehension,

transformation, and instruction (Hancock, 2013, p. 76-78). The results of the study did,

however, show that courses offered in the morning typically received lower student evaluation

ratings for comprehension; that instructors with more experience teaching the course received

higher overall student evaluation ratings for comprehension; and that the same proved to be the

case for the subscale of transformation (Hancock, 2013, p. 78). Though the results of the study

did not indicate that content knowledge influenced the control variables and the subscales of

comprehension, transformation, and instruction, they do suggest that it had an effect on the

subscale of evaluation (Hancock, 2013, p. 78-79). The study is relevant to this Capstone Project

because it suggests that student evaluations of instructors to an extent can be influenced by

several factors. Overall, students perceived the quality of their instructors more negatively in

courses that were offered in the morning. Students’ perception of instructors also tends to be

more positive when the instructor has experience teaching the course.

Technology and Student Perception

A student’s comfort level with technology can have a direct effect on their perception of

the quality of a course. Also, the competency level of both the students and the instructor when

it comes to technology, and how technology is incorporated into the course, can affect students’

perception of the course. Another factor to consider is how useful students perceive the

technology incorporated into the course to be. The following literature looked at technology and

its effect on the perception of courses.

Galal, Mayberry, Chan, Hargis, and Halilovic (2015) conducted a study that focused on

student response systems and whether they could be effectively used as a learning tool that could

have an effect on not only student retention of the course material but also on students’ attitude

towards class participation and the incorporation of technology (p. 591). The participants in this

study consisted of students enrolled in an intro-level pharmacy course that was part of a three-

year accelerated PharmD program (Galal et al., 2015, p. 591). The study consisted of multiple

sessions, two sessions that made up the treatment group where a student response system was

incorporated, and the other sessions made up the control group, which did not utilize a student response

system (Galal et al., 2015, p. 592). The students in the treatment group were able to answer

questions asked via the student response system through use of either a computer or a smart

phone, while the students in the control group were asked to raise their hand based on the answer

they chose (Galal et al., 2015, p. 592). A survey was given to the participating students before

and after the implementation of the student response systems, and out of the 214 students in the

course, 153 students participated in the study with 68 students in the treatment group and 85

students in the control group (Galal et al., 2015, p. 592). The results of the study suggested that

the use of student response systems did not appear to influence student retention of the course

material (Galal et al., 2015, p. 593). However, the use of student response systems did appear to

influence student attitudes. Positive attitudes towards the use of student response systems were

more than three times higher in the treatment group compared to the control group, and there were

also significant differences between both groups when it came to attitude changes towards using

technology in the classroom (Galal et al., 2015, p. 593). Also, students in the control group

expressed a desire to use technology in the classroom and to be able to participate anonymously,

while students in the treatment group expressed a desire to do the opposite (Galal et al., 2015, p.

594). Another interesting result of the study is that data suggests that students in the treatment

group credited technology for their performances in the course. However, students in the control

group who performed positively did not feel the need for technology, while those who performed

negatively felt they might have performed better if the technology had been available to them

(Galal et al., 2015, p. 594). The study is relevant to this Capstone Project because it

demonstrated how technology could have a direct influence on students’ attitude. Technology

such as a student response system is often utilized in the modern classroom, so it is easy to

recognize how students’ perception of this type of technology could not only influence their

attitude towards using the technology, but also their perception of courses that use it.

Akcauglu and Bowman (2016) conducted a study to determine if there were any

differences between students who participated in an instructor-led Facebook group and



students who did not, in regard to their perception of the course content and of their closeness to

their instructor and peers in the course (p. 582). The study conducted was cross-sectional, and

students were asked to participate in a survey via course emails, university listservs, academic

areas, and social media. Participating students were given a chance to win a $25 Amazon.com

gift card to encourage participation (Akcauglu and Bowman, 2016, p. 584). Out of the 211

students who responded to the survey, only 87 students indicated that they had been in a course

that utilized a Facebook group, so those students were the focus of the study. Participating

students were from fifteen different universities in both Canada and the United States (Akcauglu

and Bowman, 2016, p. 584). Of the 87 participating students, 56 of them, whom the study

defined as adopters, identified themselves as actively participating in instructor-led Facebook

groups, and 31 of the students, whom the study defined as non-adopters, did not participate in

instructor-led Facebook groups (Akcauglu and Bowman, 2016, p. 585). The results of the study

determined that there was not a statistically significant difference between adopters and non-

adopters when it came to situational interest in course content, but a statistically significant

difference between the two groups did occur when it came to maintained interest in the course content

(Akcauglu and Bowman, 2016, p. 585). The results of the study also suggested that while

utilizing an instructor-led Facebook group could lead students to feel closer to the material in

the course, the data did not suggest that using a Facebook group led to students feeling closer to

their instructor or peers in the course (Akcauglu and Bowman, 2016, p. 586). The study is

relevant to this Capstone Project because it demonstrates how social media such as Facebook can

be utilized as a means of increasing students’ interest in a course. Students who want to become

more involved in learning the material being offered in a course are more likely to deem the

course to be of high quality.



Teclehaimanot, Mentzer, and Hickman (2011) conducted a study that focused on students

in the College of Education at an institution to determine why the students were not integrating

technology into their student teaching, even though it was determined that they were properly

trained and competent in the use of the technology (p. 5). To gather data for this study, an

annual survey that is mailed to graduating students was used; 47 students responded, a 16%

response rate. This survey asked questions such as the students’ comfort level with specific instructional

technology tools, if faculty had trained them on using these tools, if they had access to the tools

during student teaching, and if they utilized the tools during student teaching (Teclehaimanot et

al., 2011, p. 9). Findings from the data gathered from the student study were then compared to a

study based on faculty perceptions and attitudes (Teclehaimanot et al., 2011, p. 9). An

exploratory study that consisted of eight faculty members and two administrators in the College

of Education was also included in the data gathered. They were asked questions regarding the

integration of technology in the pre-service teacher program, how they contributed to

demonstrating technology as an instructional tool, and what technology they felt students should

be competent in (Teclehaimanot et al., 2011, p. 10). The results of the study showed that overall

students performed well in their educational technology course, suggesting that they were

competent in the use of the technology. 75% of the students also indicated that they utilized

technology in their student teaching (Teclehaimanot et al., 2011, p. 11). However, though

students did well academically in the educational technology course, they indicated that their

confidence in using the technology was not very high (Teclehaimanot et al., 2011, p. 11). The

results of the faculty perceptions study indicated that the most common reason for integrating

technology into the teacher education programs was to ensure that the students became more

competent with technology tools (Teclehaimanot et al., 2011, p. 13). The results of the faculty

perceptions study also indicated that among faculty, there was some disagreement in terms of

when and where technology integration should be taught (Teclehaimanot et al., 2011, p. 13). It

was also determined from the results of the faculty study that while the majority of the faculty

discussed technology integration, few gave their students opportunities such as exercises or

activities to develop their skills in using the technology (Teclehaimanot et al., 2011, p. 13). The

study is relevant to this Capstone Project because it shows that while it is important to

incorporate technology into courses, it is just as important to give students many opportunities to

develop the skills necessary to increase their confidence in using the technology. A student that

is confident in the technology is more likely to utilize that technology outside of the course.

Concannon, Flynn, and Campbell (2005) conducted a study as a way of addressing some

of the concerns expressed by individual students in terms of incorporating accounting-related

information technology into courses (p. 503). The study focused on a Principles of Accounting

course that consisted of 600 first-year undergraduate students at the University of Limerick. The

course was the first e-learning experience for all of the students, with the exception of overseas

students (Concannon et al., 2005, p. 503). The course was a blended learning course that

consisted of weekly lectures, tutorials, lab sessions that included online content, interactive

quizzes and Excel tasks (Concannon et al., 2005, p. 503). At the time the paper was published,

the university did not provide institutional support for e-learning at a campus-wide level, but

there were two additional courses that also used blended learning in economics and mathematics

(Concannon et al., 2005, p. 504). Data was gathered for the study utilizing the following

methods: web server log files; a survey that consisted of open and closed questions; and focus

groups (Concannon, et al., 2005, p. 504). The results of the study showed that though some

students reported having very limited or no experience with computers, none of the students

expressed difficulty in accessing a web site or taking online exams (Concannon et al., 2005, p.

506). When it came to when and how much time students spent studying, the results showed that

the amount of time varied and was dependent upon several factors, such as how far away their

parents lived or whether they had part-time jobs (Concannon et al., 2005, p. 506). The results

also showed that all of the students preferred to utilize lecture and tutorial notes and would refer

to a web site as a main resource for additional readings and any missed information. 81% of the

survey respondents indicated that course lectures and study groups resulted in more effective

student learning than an e-learning environment (Concannon et al., 2005, p. 508). Finally, the

results of the study suggested that students with plans of pursuing a career in accounting were

more likely to approve of using technology in the course, since they viewed the use of computers

as an essential skill that accountants needed to possess (Concannon et al., 2005, p. 508). The

study is relevant to this Capstone Project because it demonstrates that students who plan to

pursue a career where they perceive that specific technology will be utilized as part of the job are

more likely to look positively towards incorporating that technology into their coursework.

Gaining experience with technology through courses at an institution of higher education helps

students hone their craft, and thus makes them more confident when it comes to utilizing that

technology in the workplace upon the completion of their degree.

Student Ability and Student Perception

A student’s ability to participate in a course can have an impact on how the course is

perceived. If a student encounters difficulty participating in a course or utilizing course material,

they may perceive the course more negatively than a student who does not encounter those

issues. The following literature looks at how student ability can affect student perception.

Su (2007) conducted a study that examined how individual ability and achieving good

grades in team-based learning affected student perception of team-based learning (p. 806). The

study utilized a field experiment design, and the participants in the study consisted of 96

undergraduate students enrolled in a Service Quality Management course (Su, 2007, p. 811).

The study had two independent variables in the course: individual ability and favorable team

member scores. The favorable team member variable was the independent variable that was

manipulated, and students’ perception of the importance of a course grade was the moderating

variable (Su, 2007, p. 811). The dependent variables used in the study were student preference

of learning formats, different grading methods, and the perception of social loafing (Su, 2007, p.

811). Students’ individual ability was determined by their grade point averages, and the team

member scores were manipulated by assigning each student that participated in the study to a

team of three, where two of the team members’ individual ability was predetermined by the

instructor (Su, 2007, p. 813). The results of the study determined that students’ preference of

team-based learning was significantly different based upon their level of individual ability.

Students that possessed a medium level of individual ability preferred team-based learning the

most (Su, 2007, p. 815). The results of the study also determined that there was a significant

difference between students’ preference of team-based learning based on the level of favorable

team member scores. Students who scored high team member scores also possessed the highest

means of preference on team-based learning (Su, 2007, p. 815). The results also determined that

when it comes students with low levels of individual ability, if the favorable team member score

was medium or high, their preference of team-based learning was much higher (Su, 2007, p.

818). The results of the study also determined that students’ that perceived the course to be

important were more likely to have a negative response to social loafing (Su, 2007, p. 823). The

study is relevant to this Capstone Project because it shows that students’ ability can influence their perception of how they learn material in a course. Team-based learning is a method that is

commonly used in higher education courses. The study demonstrates how a student’s individual

ability can affect their perception of team-based learning.

Visser-Wijnveen, van der Rijst, and van Driel (2016) conducted a study to determine the factors that affect how students perceive research integration in their courses (p. 477). A questionnaire was created to measure student perception of research integration, and it was given to students during the final class period of five undergraduate courses: one Medicine course, two Language courses, and two Technology courses (Visser-Wijnveen et al., 2016, p. 479). A total of 221 students answered the questionnaire, but only the 208 students who completed it in full were included as participants in the study (Visser-Wijnveen et al., 2016, p. 479). The results of the questionnaire were separated into the following subscales: Research Integration, which included reflection, participation, current research, and motivation; Quality; and Beliefs (Visser-Wijnveen et al., 2016, p. 482).

The subscale reflection consisted of questions that focused on the way research results were produced; the data showed that students in the Medicine and Language courses recognized this subscale more than students in the other courses (Visser-Wijnveen et al., 2016, p. 482). The subscale participation focused on whether instructors attempted to introduce students to research analysis in their course. Only a small number of students in the Technology courses and the Medicine course expected to participate in research in their courses (Visser-Wijnveen et al., 2016, p. 483). When it came to the subscale current research, the Language courses scored the highest (Visser-Wijnveen et al., 2016, p. 483). The subscale motivation focused on questions related to students’ increased interest in and motivation for research in their course’s field.

One of the Language courses scored the highest not only on motivation but on all of the other subscales, suggesting that it was the most motivating course for research (Visser-Wijnveen et al., 2016, p. 483). When looking at the subscale beliefs, only students in one of the Technology courses believed that research was not important to their learning. Students in all of the courses deemed the overall quality of the courses to be relatively high, with the exception of the Medicine course, which was the only larger course that consisted primarily of lectures (Visser-Wijnveen et al., 2016, p. 484). The study suggests that student perception of and motivation for integrating research varies based on the subject matter of the course. The study is also relevant to this Capstone Project because it shows that the size of a course and how the course content is delivered can have an impact on students’ perception of integrating research into a course.

How Learning Effectiveness Affects Student Perception in Online versus Traditional

Courses

How students view learning effectiveness in courses can affect their perception of those courses. If students feel that a course is not very effective in helping them learn the course material, it can potentially affect how they perceive the course. The following literature compares traditional courses to online courses to determine how learning effectiveness can affect student perception.

Evans (2015) conducted a cross-sectional descriptive study of graduate students to gauge their perceptions of whether learning was more effective in a traditional or an online environment (p. 104). Data were gathered for the study by distributing a survey to MBA students, interviewing graduate-level students utilizing a qualitative design, and gathering the overall grades of graduate students in both a traditional and an online format of a course taught by the same

instructor (Evans, 2015, p. 108). The following three variables from Knowles’ adult learning theory were utilized in the study: foundation, self-concept, and orientation (Evans, 2015, p. 108). The survey was administered online and was completed by 225 of the 801 MBA students, including 119 students from the traditional format and 106 from the online format

(Evans, 2015, p. 109). The results of the study indicated that there were statistically significant

differences related to the dependent variable foundation. These results regarding the foundation

variable were broken down into three key areas, which were also identified as key differences

between traditional and online formats: support, social skills, and quality of instruction (Evans,

2015, p. 110). Findings from the student interviews suggested that students in both traditional and online formats believed that their course format enabled them to interact with others in the course, with the only difference being how they interacted based upon the course format

(Evans, 2015, p. 110). When it came to self-concept, the results of the study suggested that traditional learning was perceived as the more effective mode of learning, and that both achievement and quality of learning were deemed disadvantages of online learning (Evans,

2015, p. 112). The results of the study also determined that details, or the amount of work

assigned, and support, such as interaction with those in the course, were identified as major

differences between traditional and online formats (Evans, 2015, p. 112). When looking at

students’ grades, the results showed that the grades of online students were significantly lower

when compared to those of the traditional students, which suggests that the overall achievement level of

online students was low in comparison to traditional students (Evans, 2015, p. 113). The study is

relevant to this Capstone Project because it demonstrates that there are some clear differences in student perception of the effectiveness of course formats when it comes to comparing traditional

to online courses. The results of the study also seem to suggest that there is a major gap when it

comes to the learning effectiveness of online courses when compared to traditional courses.

Hurney et al. (2014) conducted a study that evaluated the impact of a course evaluation

process known as small group instructional diagnosis to gauge student perceptions of aspects of

teaching that promote effective learning (p. 55). Hurney et al. (2014) identified three benefits of utilizing a small group instructional diagnosis to conduct course evaluations: First, it takes place halfway through the semester, allowing the evaluation to be formative instead of summative. Second, a consultant guides the group discussion, ensuring that it relates directly to learning or other aspects of the course. Third, the small group instructional diagnosis allows the instructor to interact with the small group regarding its feedback (p. 55).

For purposes of the study, two types of small group instructional diagnosis were created: a

traditional small group instructional diagnosis and a learner-centered small group instructional

diagnosis (Hurney et al., 2014, p. 56). In this study, 2451 students participated in the learner-

centered small groups, while 2552 students participated in the traditional small groups (Hurney

et al., 2014, p. 56). A survey was given to the participating students, but only 789 out of the

5003 students who participated in the small group instructional diagnosis completed the survey.

Of those students, 471 were a part of the traditional small groups, while 372 students were a part

of the learner-centered small groups (Hurney et al., 2014, p. 56). The results of the study

indicated that students who participated in both types of small groups felt the experience had a

positive effect on their impression of the learning environment. However, a higher percentage of

students in the learner-centered small groups indicated that being a part of the experience

improved their perception of instructor enthusiasm (Hurney et al., 2014, p. 57). The results of

the study also indicated that a higher percentage of students in the learner-centered group felt

that their involvement in the small group improved their interactions with their peers, instructor,

and the course (Hurney et al., 2014, p. 58). The study is important to the overall focus of this Capstone Project because it focuses on an alternative method of course evaluation. The small group instructional diagnosis process allows students to offer much more relevant feedback in comparison to the survey-based course evaluations that are typically offered at the end of a semester.

Fydryszewski, Scanlan, Guiles, and Tucker (2010) conducted a study that examined students’ perception of course quality in relation to the Seven Principles for Good Practice in Undergraduate Education when comparing traditional and online formats of a phlebotomy certificate program (p. 39). The sample of students used in the study consisted of 19 students in the traditional phlebotomy program and 11 in the online program (Fydryszewski et al., 2010, p. 41). The data for the study were gathered from the final exam grades of the participating students and from the results of a course evaluation (Fydryszewski et al., 2010, p. 40). The results of the study showed that there was a significant relationship between the preferred program format and the education level of the student: in the online program, 64% of the students indicated that they had more than a high school education, while only 16% of the students in the traditional program indicated the same (Fydryszewski et al., 2010, p. 42). The results of the study also indicated that students in the traditional program rated student/instructor interaction significantly higher than students in the online program did. However, there was not a significant difference between the traditional and online programs when it came to overall course quality ratings (Fydryszewski et al., 2010, p. 42). Though the study indicated that a difference in educational level did exist between students in the online and traditional programs, this difference did not seem to affect how the students performed in the program (Fydryszewski et al., 2010, p. 43). The fact that the study

shows that an educational difference did exist in a certificate program is relevant to this Capstone

Project because it suggests there can be a difference in the educational level of students when

looking at traditional versus online formats. A student who prefers a traditional format is likely to have a more negative perception of online formats, while a student who prefers an online format is likely to have a more negative perception of traditional formats.

Kaushik et al. (2012) conducted a study to determine the perception and satisfaction of students in regard to traditional and online courses, in terms of quality, cost effectiveness, and employment opportunities, at several colleges in India (p. 87). A survey was utilized as a means of collecting data for the study and was distributed to different colleges in Haryana; 450 students completed the survey and were used as respondents in the study (Kaushik et al., 2012, p. 88). The results of the study indicated that a significant relationship existed between gender and choosing a traditional or online course. The results showed that

58.5% of male students chose online courses and 62% of female students chose traditional

courses (Kaushik et al., 2012, p. 88). The results of the study also showed that 61.5% of male

students felt that online courses were a better choice in terms of providing a good opportunity for

career growth, while 57.9% of female students felt traditional courses provided the better

opportunity (Kaushik et al., 2012, p. 88). Students of both online and traditional courses felt that

traditional courses provided more intriguing course material, but 47.2% of the students felt that

online courses were better in terms of the course materials provided for studying (Kaushik et al.,

2012, p. 89). The results also showed that students who felt that traditional courses were better for career growth were more likely to take additional traditional courses in the future, while online students were likely to choose either learning format (Kaushik et al., 2012, p. 89). Overall, both online and traditional students were happy with either learning format. However, student satisfaction was highest in regard to online courses (Kaushik et al., 2012, p. 89).

The results of the study are important to this Capstone Project because they suggest a student preference for online courses, which is in contrast to other studies that pointed toward a preference for traditional courses. This suggests some variance in students’ perception of which learning format is more effective.

Summation of the Literature Review

The literature reviewed in this Capstone Project addressed several key areas. When it comes to students’ perception of online courses, the literature identified

several relevant points. First, instructional design is an important element when it comes to

student perception of online courses. This is reiterated by other literature that emphasized the

importance of course structure. Second, various levels of support are also important when it

comes to student perception of online courses. Third, the trustworthiness of the course is also

important. Fourth, several factors can influence students’ decisions to join an online program,

which can have a direct impact on their perception of the quality of online courses. Finally, the

literature identified several factors that students consider to be the most important when it comes to online courses.

In relation to student attitude and how it affects students’ perception, the literature

identified several relevant points. First, several factors were identified that can affect students’

attitude, which can directly influence students’ perception of course quality. Second, it was

determined that student effort can be directly related to the motivation of the student in the

course. Third, the literature demonstrated how students’ perception of their instructors can be

directly influenced by their peers. Finally, the literature identified several student attitude-related

factors that can influence student evaluations of instructors.



Another key area addressed in the literature was technology and how it can affect student

perception. First, the literature demonstrated how specific technology, when implemented in courses, can have a direct influence on not only student attitude but also student perception.

Second, the literature demonstrated how social media could be utilized as a means of increasing

students’ interest in a course. Third, the literature showed that while it is important to

incorporate technology into courses, it is just as important to give students opportunities to develop

the skills necessary to increase their confidence in using the technology. Finally, the literature

demonstrated how certain career paths can influence students’ perception of incorporating

technology into their courses.

The literature also addressed how student ability can affect student perception. First, the

literature demonstrated how students’ ability can influence their perception of different methods of learning material in a course. The literature also suggested that student perception/motivation

when it comes to incorporating specific learning methods into the course can vary based upon

the subject matter of the course, and that the course size and how course content is delivered can

also be a factor.

Finally, the literature addressed how learning effectiveness can affect student perception

when comparing online and traditional courses. First, the literature suggested that there are some clear differences in students’ perception of the effectiveness of the two course formats. The literature also suggested that, in one study, a difference in the educational level of students did exist based on whether they chose an online or traditional course format. It is important to identify these key areas when utilizing course evaluations as a means of determining student perception of the quality of courses at an institution.



References

Akcaoglu, M., & Bowman, N. D. (2016). Using instructor-led Facebook groups to enhance

students' perceptions of course content. Computers In Human Behavior, 582.

doi:10.1016/j.chb.2016.05.029

Anderson, G., Tredway, C., & Calice, C. (2015). A Longitudinal Study of Nursing Students'

Perceptions of Online Course Quality. Journal Of Interactive Learning Research, 26(1),

5-21.

Concannon, F., Flynn, A., & Campbell, M. (2005). What campus-based students think about the

quality and benefits of e-learning. British Journal Of Educational Technology, 36(3),

501-512.

Edwards, A., & Edwards, C. (2013). Computer-Mediated Word-of-Mouth Communication: The

Influence of Mixed Reviews on Student Perceptions of Instructors and Courses.

Communication Education, 62(4), 412-424. doi:10.1080/03634523.2013.800217

Evans, N. (2015). A Cross-Sectional Descriptive Study of Graduate Students' Perceptions of

Learning Effectiveness in Face-to-Face and Online Courses. Academy Of Business

Research Journal, 1, 104-118.

Fish, L. A., & Snodgrass, C. R. (2015). Business Student Perceptions of Online versus Face-to-

Face Education: Student Characteristics. Business Education Innovation Journal, 7(2),

83-96.

Fydryszewski, N., Scanlan, C., Guiles, H., & Tucker, A. (2010). An exploratory study of live vs.

Web-based delivery of a phlebotomy program. Clinical Laboratory Science, 233-39.



Galal, S. M., Mayberry, J. K., Chan, E., Hargis, J., & Halilovic, J. (2015). Technology vs.

pedagogy: Instructional effectiveness and student perceptions of a student response

system. Currents In Pharmacy Teaching And Learning, (5), 590.

doi:10.1016/j.cptl.2015.06.004

Hancock, S. C. (2013, January 1). The Effect of Content Knowledge on Students' Perceptions of

Instructors' Teaching Effectiveness. ProQuest LLC.

Hurney, C. A., Harris, N. L., Bates Prins, S. C., & Kruck, S. E. (2014). The Impact of a Learner-

Centered, Mid-Semester Course Evaluation on Students. Journal Of Faculty

Development, 28(3), 55-62.

Kaushik, N., Saxena, P., & Garg, R. (2012). An Empirical Study of Assessing Student

Perception and Satisfaction between Regular and Distance Learning Formats. Indian

Journal Of Higher Education, 3(1), 87.

Lee, S. J., Srinivasan, S., Trail, T., Lewis, D., & Lopez, S. (2011). Examining the relationship

among student perception of support, course satisfaction, and learning outcomes in online

learning. The Internet And Higher Education, 14, 158-163.

doi:10.1016/j.iheduc.2011.04.001

Myers, S. A., & Thorn, K. (2013). The Relationship between Students' Motives to Communicate

with Their Instructors, Course Effort, and Course Workload. College Student Journal,

47(3), 485-488.

Roby, T., Ashe, S., Singh, N., & Clark, C. (2013). Shaping the online experience: How

administrators can influence student and instructor perceptions through policy and

practice. Internet & Higher Education, 17, 29-37. doi:10.1016/j.iheduc.2012.09.004



Su, A. (2007). The impact of individual ability, favorable team member scores, and student

perception of course importance on student preference of team-based learning and

grading methods. Adolescence, 42(168), 805-826.

Teclehaimanot, B., Mentzer, G., & Hickman, T. (2011). A Mixed Methods Comparison of

Teacher Education Faculty Perceptions of the Integration of Technology into Their

Courses and Student Feedback on Technology Proficiency. Journal Of Technology And

Teacher Education, 19(1), 5-21.

Tricker, T., Rangecroft, M., Long, P., & Gilroy, P. (2001). Evaluating distance education

courses: The student perception. Assessment & Evaluation In Higher Education, 26(2),

165-177. doi:10.1080/02602930020022002

Uwagie-Ero, F. V. (2008). Connections between student perceptions of quality in online distance

education and retention. Dissertation Abstracts International Section A, 69, 1249.

Visser-Wijnveen, G., van der Rijst, R., & van Driel, J. (2016). A questionnaire to capture

students' perceptions of research integration in their courses. Higher Education, 71(4), 473-488.

Wang, Y. D. (2014). Building student trust in online learning environments. Distance Education,

35(3), 345-359. doi:10.1080/01587919.2015.955267

Yang, Y., & Durrington, V. (2010). Investigation of Students' Perceptions of Online Course

Quality. International Journal On E-Learning, 9(3), 341-361.
