Criminal Justice Studies
Vol. 21, No. 3, September 2008, 223–238

RESEARCH ARTICLE
Assessing student learning and departmental effectiveness through an
undergraduate comprehensive exam
Kimberly Tobin a,* and Erika Gebo b

a Department of Criminal Justice, Westfield State College, Westfield, MA, USA; b Department of
Sociology, Suffolk University, Boston, MA, USA

Student assessment helps guide understanding of learning experiences and departmental
effectiveness. In this paper, one department's locally developed undergraduate
comprehensive exam measuring knowledge in core criminal justice areas is discussed
and preliminary results are evaluated. Exam data were collected from seniors and, as a
baseline, from freshmen. Regression results showed that seniors had significantly more
knowledge across all core areas than freshmen. Overall, this lends support to the idea that
the department was effective in imparting knowledge to students. There was, however,
a gender effect present and possible reasons for this result are explored. Limitations of
the assessment instrument and directions for future research are also discussed.
Keywords: student assessment; student learning outcomes; comprehensive exams;
departmental effectiveness

Assessment is a buzzword on college campuses today. Assessing the quality of higher
education in America, however, was not always a priority for colleges and universities. The
push for higher education assessment began in earnest in the mid-1980s (Banta, 1991;
Doherty, Riordan, & Roth, 2002), at the time when the Association of American Colleges
and Universities (AAC&U) produced a report declaring that most higher education institu-
tions were unaware of the effectiveness of their methods on student learning (1984). This
report was a loud call for institutions and departments to ensure that quality education was
being provided. Since that time, the number of higher education institutions engaged in the
process has grown tremendously; currently, almost all assess their educational goals
(Chun, 2002). Accreditation organiza-
tions also are increasingly requiring and providing assistance to institutions and departments
in their efforts to develop and refine assessment techniques (AAC&U, 2004; Weiss, 2002).
Three broad domains characterize assessment: (a) assessment of student learning, (b)
assessment of departmental effectiveness or program assessment, and (c) assessment of
institutional effectiveness. The combination of data from these three domains provides
evidence of the quality of learning in higher education. Research in these domains, however,
is highly varied. Institutional assessment has been documented by individual researchers
and, in particular, accreditation bodies and their subcommittees (i.e., AAC&U, 1984; Ewell,
2002; Palomba & Banta, 1999). In the field of social science, most assessment literature
tends to focus on individual course assessment (Wagenaar, 2002). Departmental assessment
literature, in contrast, is lacking; yet faculty need to write about their experiences with
assessment in order to further each discipline’s assessment process (Weiss, 2002).

*Corresponding author. Email: ktobin@wsc.ma.edu

Lack of departmental assessment is especially true in the field of criminal justice where
very few articles addressing this topic are found in scholarly works. To add to the literature
specific to criminal justice departmental assessment, this paper discusses the creation and
implementation of a comprehensive exam as one piece of a comprehensive assessment
process. Preliminary results from the exam are also explored. Lessons learned from assess-
ment and early exam results can assist other criminal justice departments in assessment
development and refinement, and can be particularly helpful for those who are employing
or are thinking of employing a comprehensive exam as an evaluation method.

The department
Brief discussions of the department under study and the creation of an assessment plan are
necessary to help situate the discussion of the comprehensive exam. The Criminal Justice
Department at Westfield State College, a small liberal arts college in Massachusetts, offers
the largest major on campus. The program is designed to serve students who may elect to
enter the criminal justice profession immediately upon graduation, students of the liberal
arts without professional interest in criminal justice, and students who desire to continue
their education through graduate studies. The major is interdisciplinary, predicated on a
common core of the humanities, social sciences, mathematics, and physical sciences. The
goal of the program is to develop in students an increased analytical awareness of the role
of law enforcement agencies, courts, and correctional institutions in the criminal justice
system. The foundation of this awareness is presented in required courses taken in Introduc-
tion to Criminal Justice, American Judicial Systems, Corrections, Law Enforcement,
Research Methods, and Theories of Crime. All students must take required courses, and
they are taken in sequence from freshman to junior years (for a full description of program
requirements, see Appendix 1).
The department has grown from one faculty member and four students in 1969 to
12 full-time faculty members and over 700 majors. The department also established a
collaborative satellite program with Worcester State College in the fall of 2001. At the time
of this study, the Worcester State site had three full-time faculty members and approxi-
mately 150 majors. The main campus program and satellite program adhere to the same
standards and there is very little differentiation in the administration of the programs. Both
programs primarily serve full-time, traditional undergraduates who attend the college for
their entire undergraduate careers. On both campuses almost all of the full-time faculty
members have PhDs in criminal justice and related fields. With few exceptions, required
major courses are offered by the full-time faculty. The department has occasionally used
adjuncts for introduction to criminal justice and law enforcement courses, but does not
rely on these instructors. The class size for all major required courses is capped at 35, with
the exception of methods, which is capped at 25. Upper level electives are also capped at
25 students.

Assessment plan
Prior to 2003, the department had no formal assessment plan. Departmental effectiveness
was primarily determined by traditional evaluation methods, such as application rates, grad-
uation rates, overall student GPA, and institutional alumni feedback surveys. The formal
assessment plan was developed as a result of external pressure by the state’s Board of
Higher Education. The department expedited the assessment process in order to receive
approval from the state to be a receptor site for students who wish to be recognized under
the state’s Quinn Bill legislation, which provides monetary incentives for law enforcement
officers for each earned higher education degree.
The department realized that in order to implement a successful assessment plan, well-
articulated departmental goals needed to be established and institutional goals needed to be
considered (see Appendix 2 for the department’s stated goals). Based on prior literature
and the complexity of departmental and institutional goals, the department recognized that
the comprehensive assessment process had to measure several areas of learning. The most
valid measures of assessment rely on a multidimensional approach (AAC&U, 2004); there-
fore, the department chose to assess departmental objectives in several ways. Methods
used by the department included analyzing student writing through work in completed
major courses; conducting focus groups with seniors, alumni, and employers to survey
their experiences with the criminal justice department; developing an advisory board; and
administering a comprehensive examination to graduating seniors. This paper focuses on
the comprehensive exam piece of the assessment plan.

Comprehensive exams
Comprehensive exams are a common method of assessment in higher education (Banta &
Schneider, 1986; Weiss, 2002) and in criminal justice programs (Winter & Fadale, 1991),
yet there has been some criticism of the use of this method to measure student learning.
Morris (n.d.) stated that comprehensive exams placed too much emphasis on memorization
and not enough emphasis on problem solving. Pelfrey and Hague (2000) also believed this
to be true in their review of one criminal justice department’s comprehensive exam. Such
general thinking, however, may sell short the value of the comprehensive exam. Recent
research has suggested that comprehensive exams can and do engage higher-order levels of
thinking, as discussed in Bloom’s taxonomy (1956), including application, synthesis, anal-
ysis, and evaluation (Banta & Schneider, 1986; Simkin & Kuechler, 2005).
Another major criticism of the comprehensive exam as an assessment tool is that it
cannot truly authenticate student learning (Wagenaar, 1991, 2004). A comprehensive exam,
however, may be an integral piece of overall program assessment because it does provide a
measure of quantitative, objective feedback to add to the more subjective and qualitative
information received from other assessment methods, such as focus groups and analysis of
student writing.
Other scholars believe that undergraduates may not take comprehensive exams seri-
ously if their grades are not affected (Meadows, Dyal, & Wright, 1998). Despite these crit-
icisms, researchers suggest that the departmental comprehensive exam may, in fact, be
taking on more importance because of its dual functions of assessing what majors should
know upon finishing coursework and assessing departmental effectiveness (Banta &
Schneider, 1986). Pelfrey and Hague (2000) echoed those purposes when they stated that
the comprehensive exam should ‘respond to the needs of the student and the needs of the
program’ (p. 168).
There also are indirect benefits to administering a comprehensive exam for departmental
assessment purposes. Prior research has shown that while some faculty members initially
may be reluctant to engage in assessment, especially those who are concerned that ‘their’
areas of expertise will show little improvement, assessment often brings faculty together to
strive for common goals in student learning. For example, Banta and Schneider (1986)
found that important indirect benefits of assessment were to act as a catalyst to increase
dialog about pedagogical issues among faculty and to increase consciousness about depart-
mental curricula and individual course learning objectives. These additional benefits have
been found to be especially common to departments engaged in local comprehensive exam
development (AAC&U, 2004; Banta & Schneider, 1986).
Departments can choose to administer a standardized comprehensive exam or to
produce their own ‘home-grown’ comprehensive exam. The most popular standardized test
is the Educational Testing Service’s Major Field Test (MFT) in criminal justice. Even so,
relatively few institutions (70 in total) utilize the Criminal Justice MFT (ETS, 2005), which
may affect the validity of the results by creating a ‘user-based’ norm that may not necessar-
ily compare well with a true, national norm (Erwin, 1991). Alternatively, developing a local
test also may better reflect departmental and institutional teaching and learning goals.
Unfortunately, the major problem with developing a local test as opposed to utilizing one
with national availability is that there are no standards for comparison with national data.

Exam scores and external factors
Regardless of which type of test is utilized, there are other factors external to actual knowl-
edge acquisition that may affect student exam scores (Larose, Robertson, Roy, & Legault,
1998). Controlling for such factors in the comprehensive exam is important. Living status
is one possible influence. Inman and Pascarella (1998) found that critical thinking skills
developed differently in college freshmen who lived on campus than those who commuted.
Similarly, Ishitani (2001) found that resident students spend more time studying than non-
resident students.
Previous academic performance also may influence exam score. Research has found that
high school GPA and SAT scores accurately reflected academic achievement in some
college students (Ishitani, 2001; Martin & Hanrahan, 2004); thus, prior measures of academic
performance should be included in comprehensive exam analyses. Social
integration also may have an effect. Pascarella and Terenzini (1991) found that students
who were involved in student life and had more faculty–student contact faired better
academically and socially than those who were not (also see Tinto, 1975).
Employment and gender also may make a difference (Magolda, 1992). Logically,
employment may negatively affect exam scores and grades. In a survey of entering fresh-
men criminology majors, Martin and Hanrahan (2004) found that as students’ expected
employment hours increased, the amount of time expected to be devoted to studies
significantly decreased; yet examination of first semester GPAs showed no effect of
employment. Even though students expected to study less, employed students did not study less
than their unemployed counterparts. While no significant differences between employment
and grades were found in those criminology students, further research is needed outside of
one institution in one location. While the comprehensive exams are primarily about knowl-
edge acquisition, the individual factors that may play a role in the learning process are more
difficult to control for in the exam format.
A small, but significant body of research has assessed gender differences in college
learning. Research has shown that there are no gender differences in general intelligence,
although females tend to have the advantage in verbal abilities and males tend to have the
advantage in visual–spatial and quantitative abilities (Halpern, 1997; also see Halpern, 1986
for a comprehensive review). Gender differences also may exist as a result of exam format.
There is some research showing that multiple-choice testing may advantage males (i.e., Bell
& Hay, 1987; Bolger & Kellaghan, 1990), but the research is far from clear as other research
has shown no significant gender differences on multiple-choice testing formats (Chan &
Kennedy, 2002; Greene, 1997). Further, Moreno and Mayer (1999) found that males and
females responded significantly differently to an open-ended question on student learning
across four study sites. When the authors reanalyzed the question, they found that the ques-
tion was gender biased based on traditional male/female roles in the general culture. Finally,
course instructors also could play a role in gender differences on exam scores as research
suggests that female students may learn more if they are better able to identify with their
instructors (Busch, 1995).
Individual internal factors also may vary by gender, which may in turn influence exam
performance. Literature has also shown that significant differences between males and females
do appear to exist in motivation and confidence. While females appear to be more motivated
to succeed in college than males (Martin & Hanrahan, 2004; Pascarella & Terenzini, 1991),
they also have less confidence in their abilities (Campbell, Wadia-Fascetti, Perlman, &
Hadley, 2002; Crawford & MacLeod, 1990). This is particularly true in male-dominated areas
(i.e., science and math) (Busch, 1995; Campbell et al., 2002). Females’ lack of confidence
in their abilities may be due to inattention from teachers (American Association of University
Women [AAUW], 1992), but these differences have not been shown to play out in national
test scores or in the criminology field (Martin & Hanrahan, 2004). Even so, any assessment
tool should be examined for gender differences.

Comprehensive exam development
The department in this study proceeded with implementing a comprehensive exam despite
the criticisms and the challenges of utilizing this assessment method. The department felt
that administering an exam was an integral piece of overall program assessment because it
allowed a measure of quantitative, objective feedback to add to the more subjective and
qualitative information received from the other assessment methods. The exam assisted in
determining the extent of criminal justice knowledge students gained during their under-
graduate career at the institution.
As previously stated, a decision had to be made whether to use a standardized test such
as the MFT or to construct a 'home-grown' exam. In the current study, issues of time and finan-
cial resources were major considerations during the exam decision process so the depart-
ment elected to develop a local exam. The home-grown exam better reflected departmental
and institutional teaching and learning goals. In other words, the locally developed exam
was better suited to capitalize on construct validity, matching questions with known curric-
ulum content, than the MFT. While the MFT may be more externally valid, if in fact current
departments utilizing it are representative of all criminal justice departments, the department
in this study felt that a home-grown exam was better suited to their specific needs. This
belief was based on the idea that a local exam could better link exam questions with
departmental goals and with students' knowledge acquisition of what was taught in
departmental courses.
A large department made the creation of an exam easier as there were multiple faculty
members in each area of expertise. All faculty members became involved in aspects of the
assessment process, including the construction of the exam. Their involvement was out of
necessity due to the short timeline for review. Although there may not have been initial buy-
in from everyone, faculty who were involved with the process became optimistic about the
exam.
The exam reflected key information that was presented in the required courses and rein-
forced in upper level electives. Each faculty member developed questions in his/her area of
expertise that measured quality, higher order thinking in required core courses. The ques-
tions were also directly related to prior mentioned departmental goals. As a group, depart-
ment experts in each core area determined which questions would best represent their area
on the exam. A multiple-choice format was employed for this exam as a way to enhance
objective scoring. Because the question and answer sets were developed by many different
faculty members, the final question and answer sets were then modified to establish consis-
tency across questions.
The resulting comprehensive exam consisted of 60 substantive multiple-choice ques-
tions, with 10 questions assessing student knowledge in each of six areas: Introduction to Criminal
Justice, American Judicial Systems, Corrections, Law Enforcement, Research Methods, and
Theories of Crime.1 While standardized tests in criminal justice do not normally include
introduction to the study of criminal justice as an area, the department chose to include it as
one of the areas of testing. The primary reason for its inclusion is that an introduction course
is required for the major, and the department believed that the information provided in this
course reflected integrated knowledge of the criminal justice system, rather than topic-
specific knowledge.
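
As an illustration of the scoring scheme this implies, the following sketch (in Python; all
column names and the answer-key format are hypothetical, since the article does not describe
how responses were recorded) computes the six 10-item subscale scores and the 60-item total:

```python
# Illustrative scoring sketch: 60 multiple-choice items, 10 per area, one
# point per correct answer. Column names and answer-key format are assumed.
import pandas as pd

AREAS = ["intro", "theories", "judicial", "corrections",
         "law_enforcement", "methods"]

def score_exam(responses: pd.DataFrame, key: dict) -> pd.DataFrame:
    scores = pd.DataFrame(index=responses.index)
    for area in AREAS:
        items = [f"{area}_q{i}" for i in range(1, 11)]  # e.g., intro_q1 ... intro_q10
        answer_key = pd.Series({q: key[q] for q in items})
        scores[area] = responses[items].eq(answer_key).sum(axis=1)
    scores["total"] = scores[AREAS].sum(axis=1)  # out of 60
    scores["total_pct"] = scores["total"] / 60 * 100
    return scores
```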
The department’s overall educational goals included more than just an increase in criminal
justice knowledge. Therefore, exam questions measured more than just knowledge acquisi-
tion. The department was interested in ethical and moral changes in perspective as well as
changes in tolerance of diversity over the students’ undergraduate careers. While these
concepts are difficult to measure, questions were developed to assess tolerance and ethics,
with the understanding that a measure of departmental ‘effectiveness’ of these areas is much
more unclear than straight knowledge acquisition. Ultimately, there was some internal
disagreement about operational definitions of these concepts, and thus questions to address
these areas are still being refined by the department.
Additional exam questions measured external factors in order to control for them. These
measures included standard demographic information, including race/ethnicity and gender.
Three questions delineated where students had taken classes: whether they took classes on
the Westfield or Worcester campus, whether they transferred from another institution,2 and
whether their major core classes were taken in the depart-
ment or elsewhere. As discussed, prior literature also directed a series of questions that
measured potential factors associated with learning outcomes, including commuter status,
employment, and motivation. Commuter status was measured with a question that asked if
students lived on campus, off campus in an apartment, off campus at home, or elsewhere.
Employment was measured with a question that asked if students worked limited part time
(less than 10 hours a week), part time (more than 10 hours, but less than 30 hours a week),
full time (30 or more hours), or if they were unemployed. Motivation was a one-item
measure, with a five-item response set, asking students to compare their academic motiva-
tion to that of their peers.

Exam implementation process
After much debate, the department decided that the exam should assess ‘unprepared knowl-
edge’ that measured students’ ingrained understanding of key subject information. As such,
the department did not want students to study or prepare for the exam in any way, so the
exam was unannounced and it was not used in grade calculations. The department felt that
the other assessment pieces (i.e., writing samples) could analyze prepared areas. To combat
potential motivation problems in the current study, faculty carefully explained that student
responses would provide a key piece of information in the assessment process which would
help shape the criminal justice curriculum. In addition, a number of students taking the
exam actually wanted their test scores provided to them after graduation, which suggests
that some took the exam seriously.

Researchers have pointed out that pilot testing plays a critical role in exam development
(Banta & Schneider, 1986), as it is critical in increasing reliability and validity of the exam.
The department administered a pilot test to 65 students in the spring of 2004. One large
change in exam administration occurred as a result. At first, the upperclassmen exam was
slated to occur during the last week of Research Methods, generally the last required course,
which is taken during the first semester of junior year. The department’s original view was that all
required major coursework would be taken and the test would reflect the knowledge gained
in these classes. During the pilot test, however, significant differences were found between
the knowledge of juniors and that of seniors who were taking Research Methods late in their
college careers. The conclusion drawn was that the knowledge presented during the required
courses is continually reinforced in upper level electives. Therefore, seniors were more
likely to retain and understand the information when compared to others. Based on this
observation the revised methodology was to give the exam to graduating seniors rather than
to any student after required coursework was complete.
The department realized that establishing baseline knowledge of incoming students was
important in gaining a better understanding of what graduating seniors actually learned.
Although creating baseline measures for exam assessment was discussed in the literature
(i.e., Banta & Schneider, 1986), there appeared to be no previous literature that discussed a
pre/posttest comparison. The use of freshman scores provided an aggregate measure of
criminal justice knowledge upon entrance to college, which was compared with knowledge
upon college exit. If the department is effective, then there should be significant differences
in knowledge from freshman year to senior level class standing.
One potential problem with giving a comprehensive exam to freshmen is that it may be
discouraging to them. They may feel deflated after taking an exam with no preparation and
perhaps little prior knowledge (Erwin, 1991). The department, however, felt that a freshmen
comparison group was essential to the study and tried to combat this potential problem by
encouraging freshmen to look at the exam from a different perspective. Faculty members
explained that the exam was based on key criminal justice material that freshmen would be
exposed to in their current and future criminal justice courses. It was hoped that this framing
of the exam would be seen by freshmen as furthering their interests in the subject, rather
than producing any negative feelings about themselves or the subject matter.
The resulting exam has been administered twice per academic year. The first data
collection period has occurred during the first two weeks of the fall semester. The exam has
been distributed in Introduction to Criminal Justice classes, generally taken by first semester
freshmen. Optimally, this assesses knowledge before formal exposure to criminal justice
coursework. The second data collection period has occurred during the last two weeks of
the spring semester. The exam has been distributed in upper level criminal justice elective
courses, where seniors are invited to participate. The current instrument was first imple-
mented in the fall of 2004. Eventually, after spring 2008, this sampling method will allow
for pre–post comparisons from individual freshman (Time 1) to senior (Time 2) scores four
years later.

Method
In this paper, preliminary comprehensive exam results are discussed comparing the freshman
sample from fall 2004 (n = 281) to the senior sample from spring 2005 (n = 56). The exam
was so recently developed that it was impossible to administer the test to graduating seniors
during their freshman year. Thus, a proxy measure for baseline knowledge was developed.
All first semester freshmen criminal justice majors were administered the comprehensive
exam during the month of September 2004. Their scores were compared to the comprehen-
sive exam scores of graduating seniors who were given the exam in April 2005.
The freshman sample consisted of 251 students from the Westfield campus and
30 students from the Worcester campus. The senior sample consisted of 49 students from
the Westfield campus and seven students from the Worcester satellite campus. The senior
sample represented approximately one-third of the senior class. It is important to note that
this small sample may be a problem in generalizing to the entire senior cohort and prevents
some more advanced statistical analyses.3 The Westfield and Worcester students were simi-
lar to one another with the exception of their employment status. Worcester students were
more likely to work more than 10 hours per week compared to Westfield students. There
were so few Worcester students in either sample, however, that their inclusion did not
significantly bias the analyses.
Gender, commuter status, and employment status were included as control variables. For
this analysis, the commuter status and employment variables were recoded into dichoto-
mous variables. Commuter status distinguished students who lived on campus from those who
lived off campus, regardless of where they lived off campus. Employment status distinguished
students who work less than 10 hours a week (including those not working) from those
working more than 10 hours a week (including those working full time). It should be
noted that motivation was included as a control variable on the spring 2005 exam and is
included in the spring 2005 senior sample analysis. Unfortunately, because this measure-
ment of motivation was not asked in fall 2004, no cross-sample analyses could be conducted
on this factor.
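
For concreteness, a minimal sketch of this recoding step (Python/pandas; the raw column
names are assumptions, as the article does not give the original coding of the survey items):

```python
# Illustrative recoding of the control variables into dichotomies; 'living'
# and 'work_hours' are hypothetical stand-ins for the survey items above.
import pandas as pd

def recode_controls(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    # Commuter status: on campus (0) vs. off campus anywhere (1)
    out["commuter"] = (out["living"] != "on campus").astype(int)
    # Employment: 10 hours/week or less, including not working (0), vs.
    # more than 10 hours/week, including full time (1)
    out["employed"] = (out["work_hours"] > 10).astype(int)
    return out
```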

Results
The freshman and senior samples were similar to one another (see Table 1). In both samples,
the incoming GPAs of criminal justice majors were approximately 2.9. There were slightly
more males than females in both samples, and most students in both samples were white.
The characteristics of students in both samples are similar to those of the criminal justice
student body as a whole, suggesting that the samples are representative of the criminal justice major. There
were two areas of significant differences between the samples. Significantly more seniors
lived off campus than freshmen (62% vs. 26%) and more seniors were employed than fresh-
men (82% vs. 60%). These figures reflect the natural social processes of the department’s
college students. As they advance in academic standing, they are more likely to move off
campus and to obtain employment. As discussed, the possible influences of these variables
on exam score are controlled for in subsequent analyses. Overall, the data also suggest that
the two groups can, with caution, be compared to one another.

Table 1. Comparison of freshmen and senior samples.a
Freshmen (n = 281) Seniors (n = 56)
Gender 63.35% Male 64.28% Male
Race 92.88% White 96.43% White
Campus 89.32% Westfield 87.50% Westfield
Commuter status 25.62% Off campus 62.50% Off campus**
Employment 59.96% Work 82.14% Work*
Entering GPAb 2.90 2.91
a Reference group is listed.
b GPA is derived from a calculation of all entering criminal justice students in the cohort.
**p < 0.01 (one-tailed); *p < 0.05 (one-tailed).

Table 2. Comparison of freshmen and senior exam and scale scores.a
Freshman (n = 281) Seniors (n = 56)
Total exam 25.84 (43.1%) 35.98** (59.9%)
Intro to CJ 4.46 6.57**
Theories of crime 2.84 4.67**
American judicial 5.15 6.52**
Corrections 6.26 8.21**
Law enforcement 3.79 6.46**
Research methods 3.34 4.09**
a Total exam score is based on 60 questions. Scale scores are based on 10 questions.
**p < 0.01 (one-tailed).

Bivariate analysis of the two samples showed a significant difference in comprehensive
exam scores (see Table 2). Comprehensive exam scores are based on a total of 60 questions
and each area score is based on 10 questions. Freshmen had an average comprehensive exam
score of 25.84 (43.1%), ranging from a low of 13% to a high of 75%, with a standard deviation
of 10.3%. The average comprehensive exam score for the senior cohort was 35.98 (59.9%),
ranging from a low of 30% to a high of 88%, with a standard deviation of 12.9%. While the depart-
ment had hoped that seniors would average a score higher than 60%, this result is consistent
with other research that found that the average student exam score across 40 departments in
one institution was between 60 and 65% (Banta & Schneider, 1986). As shown in Table 2,
not only did seniors score significantly higher on the overall exam, but also in each of the
areas. This suggests that students increased their knowledge in all core areas.
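
The article does not name the test behind these bivariate comparisons; one plausible
reconstruction, given the one-tailed significance levels reported in Table 2, is an
independent-samples t-test per scale, sketched here with hypothetical column names:

```python
# Illustrative one-tailed senior vs. freshman comparison on each scale,
# assuming a 'scores' DataFrame with a 'cohort' column and one column per area.
import pandas as pd
from scipy import stats

AREAS = ["intro", "theories", "judicial", "corrections",
         "law_enforcement", "methods", "total"]

def compare_cohorts(scores: pd.DataFrame) -> None:
    for area in AREAS:
        fresh = scores.loc[scores["cohort"] == "freshman", area]
        senior = scores.loc[scores["cohort"] == "senior", area]
        # Welch's t with a directional alternative; the article does not
        # specify which variant of the test was used.
        t, p = stats.ttest_ind(senior, fresh, equal_var=False,
                               alternative="greater")
        print(f"{area}: t = {t:.2f}, one-tailed p = {p:.4f}")
```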
Ideally, the control variables should predict very little of the variance in exam score,
as knowledge acquisition should be independent of such factors. A multivariate anal-
ysis of class standing and the control variables on overall exam score was conducted (see
Table 3). Class standing significantly predicted exam score, even after controlling for
other factors (β = 4.62**). The model itself was significant (adj. r2 = 0.25), but this was
due to class standing. While other factors, such as maturation, may influence exam score,
these findings cautiously suggest that students may indeed acquire and retain knowledge
throughout their undergraduate career. This knowledge, while introduced in required
classes, is continually reinforced throughout elective coursework. Commuter status was
also significantly related to exam score, but this was due to an interaction effect with class
standing: seniors are more likely to be commuters and more likely to score significantly
higher on the exam. To better understand the relationship between external factors and
senior exam score, a senior-only analysis was conducted.
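
A model in the spirit of Table 3 could be fit as follows (a sketch using statsmodels; the
variable names and the numeric coding of class standing are assumptions, not the authors'
actual specification):

```python
# Illustrative OLS model predicting exam percentage score from class standing
# and the dichotomous controls, roughly mirroring Table 3.
import pandas as pd
import statsmodels.formula.api as smf

def fit_combined_model(combined: pd.DataFrame):
    # 'combined' is a hypothetical data frame pooling both cohorts
    model = smf.ols(
        "exam_pct ~ class_standing + gender + commuter + employed",
        data=combined,
    ).fit()
    print(model.summary())  # coefficients, SEs, t statistics, adjusted R-squared
    return model
```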

Table 3. Combined sample regression model predicting exam score.
β (SE) t Sig.
Constant 39.44 (2.83) 13.93 0.00
Class standing 4.62 (0.57) 8.13 0.00
Gender −2.13 (1.26) −1.70 0.09
Commuter status 3.21 (1.49) 2.15 0.03
Employment −2.23 (1.27) −1.76 0.08
Adjusted r2 = 0.25**.
**p < 0.01.

Table 4. Senior sample regression model predicting exam score.
β (SE) t Sig.
Constant 74.40 (11.83) 6.29 0.00
Gender −8.84 (3.96) −2.23 0.03
Commuter status −1.66 (3.68) −0.45 0.65
Employment −1.47 (3.88) −0.38 0.71
Motivation 1.01 (2.20) 0.46 0.65
Adjusted r2 = 0.06.

Table 4 shows the multivariate model for seniors only. The overall model was indeed
insignificant (adj. r2 = 0.06). Motivation, employment, and commuter status were not signif-
icant. Given the significant association between exam score and year and the combined
multivariate model showing that year significantly predicts exam score, the present model
adds support to the contention that criminal justice learning drives exam score increase.
Interestingly, gender also significantly predicted exam score (β = −8.84, p = 0.03) in the senior
analysis. Males were more likely to score higher on the exam than females. The gender
effect on exam score is troublesome. It is imperative to determine why females scored lower
than males.

The gender effect
To understand the gender effect, a more detailed analysis of gender differences in the senior
sample was conducted. A further exploration of the exam scales showed that females in the
senior cohort scored lower than males on all exam scales (see Table 5). The difference,
however, was only significant on the law enforcement scale. Females had an average score
of 5.16 out of 10, while males averaged 7.17 out of 10.
To better understand gender differences, the same analysis was run with the freshmen
only data (analysis not shown), and no significant differences were found. There was no
gender effect with incoming freshmen, which may indicate a change occurring over the
course of students’ college careers. When the multivariate analysis was then rerun with law

Table 5. Comparison of male and female exam and scale scores in senior sample.a
Male (n = 36) Female (n = 20)
Total exam 37.69 (62.8%) 32.89** (54.8%)
Intro to CJ 6.94 5.90
Theories of crime 4.89 4.26
American judicial 6.69 6.20
Corrections 8.36 7.95
Law enforcement 7.17 5.16**
Research methods 4.09 4.11
a Total exam score is based on 60 questions. Scale scores are based on 10 questions.
**p < 0.01 (one-tailed).

Table 6. Senior sample regression model (excluding law enforcement subscale).
β (SE) t Sig.
Constant 66.61 (10.92) 6.10 0.00
Gender −5.97 (3.86) −1.55 0.13
Commuter status −1.99 (3.59) −0.55 0.58
Employment −1.47 (3.88) −0.10 0.92
Motivation 1.01 (2.20) 0.62 0.54
Adjusted r2 = −0.02.

This
suggested that scores on the law enforcement subscale, then, drove the results. It is difficult
to sort out the reasons for the gender effect without further data. There are several plausible
reasons for these differences. Although exam questions will be re-examined for gender bias,
there was no gender difference in the freshmen sample using the same questions, suggesting
that question bias is unlikely to be the cause.
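
Mechanically, the rerun reported in Table 6 amounts to rescoring the exam on the remaining
50 items and refitting the senior-only specification; a sketch under the same hypothetical
naming as the earlier examples:

```python
# Illustrative senior-only rerun with the law enforcement subscale excluded:
# rescore on the remaining 50 items, then refit the Table 4 specification.
import pandas as pd
import statsmodels.formula.api as smf

def fit_senior_model_no_le(seniors: pd.DataFrame):
    df = seniors.copy()
    df["score_no_le"] = (df["total"] - df["law_enforcement"]) / 50 * 100
    model = smf.ols(
        "score_no_le ~ gender + commuter + employed + motivation", data=df
    ).fit()
    print(model.summary())  # compare the gender coefficient against Table 6
    return model
```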
Females scoring significantly worse on the law enforcement portion of the exam could
be the result of natural patterns. In other words, females simply may be less interested than
males in pursuing law enforcement careers and therefore, take fewer upper level courses in
that area. Law enforcement knowledge, then, is not reinforced, resulting in lower exam
scores by females. The gender of the instructor may play a role, too. In the department, the
majority of instructors in law enforcement courses were male. While the effect, if any, of
male instructors is uncertain, the department will continue to conduct more detailed analysis
on gender differences in exam scores. As a result of the gender discrepancy, a question on
career aspiration will be added to the exam in order to determine whether the gender effect
is related to interest in pursuing law enforcement careers. In addition, the department is
considering ways to foster female achievement in law enforcement, including greater utili-
zation of female practitioners in the classroom.

Summary and discussion
The results suggest that the freshman sample and senior sample are similar across many vari-
ables. Overall, the results of this preliminary study very tentatively suggest that the depart-
ment is effective at imparting knowledge to undergraduates throughout their time at
Westfield State College. Compared to first year students, senior students scored higher on
the locally developed comprehensive exam. These results were also demonstrated across
areas of overall system understanding, judicial systems, law enforcement, corrections,
research methods, and theories of crime. More importantly the external effects of motivation,
employment, and commuter status did not mediate the exam results in the combined analysis.
The results of the multivariate analysis consistently showed that class standing was signifi-
cantly related to exam score. External factors were further explored in senior only analysis
with similar results. As discussed, females did score significantly lower than males in the
senior sample analysis. The gender effect is troublesome and it will continue to be examined.

Limitations
There are several limitations in the current preliminary research. Some of these limitations
can be overcome with relative ease in future iterations of the exam. Other limitations may
be more challenging to overcome. Foremost, there is no individual pre–post measurement
that would provide greater assurance that differences between freshmen and senior scores
are due to criminal justice course learning alone and not other factors. As discussed, pre/
posttest analysis will be conducted as the data becomes available. Comprehensive exam
scores, however, will always present difficulty in measuring the process of learning and
student learning outcomes. It is extremely difficult to measure how people learn and all of
the factors that influence this learning process.
Future administrations of the test could also include student names in order to make direct comparisons of learning.
Because of the short timeline from exam development to implementation, the department
decided to ask for IRB approval to administer the exam with no names or GPA checks for
ease of passage. In retrospect, the department should have pushed for IRB approval to
include names on exams so individuals can be tracked from freshman to senior year for a
more accurate picture of knowledge change.
In addition, tracking GPA would have added another measure of control to the exam.
GPA may influence exam score. Research has found that high school GPA and SAT scores
accurately reflected academic achievement (Ishitani, 2001; Martin & Hanrahan, 2004).
Future administrations of the exam will record student GPA. Other factors not controlled
for, such as academic and social integration (Pascarella & Terenzini, 1991; Tinto, 1975)
also may account for some differences in exam score.
In addition to measurement issues, there are limitations present in sampling. Future anal-
ysis can be strengthened by comparing characteristics of the sample, such as gender, race,
GPA, and commuter status with the overall criminal justice population at Westfield State
College to ensure that the sample is representative. The freshman exam is administered in
the Introduction to Criminal Justice courses, which include non-majors. While a ques-
tion was added at a later point in time to disaggregate majors and non-majors, it was not
present in the current analysis. There will also always be the concern that a non-major in
Introduction to Criminal Justice could later declare the major.
In the current analysis, only one-third of the senior cohort took the exam and this may
also affect the results, as these students may be different from all graduating criminal justice
seniors. There may be several reasons why participation rates were so low. It may be due to
the fact that it was the first time the exam was given to seniors. It is also the case that the
exam was given to seniors during the last two weeks of the spring semester when seniors
may have already mentally or physically ‘checked out.’ In the future the exam could be
given to seniors earlier and incentives, such as extra credit or entrance into a lottery, can be
used to increase exam participation.
In the current study, external validity is also a limitation. Again, however, so few criminal
justice programs use the ETS MFT that external validity also can be questioned for that
exam. Issues of external validity must be balanced against issues of internal validity. While
it is likely that both the department’s locally developed exam and the MFT in Criminal
Justice measure many of the same underlying themes, construct validity is most likely higher
in the local exam because of the focus on developing questions that accurately measure
departmental curricula. The department will be able to factor analyze exam questions after
more students take the exam to help ensure the validity of the exam questions.

Directions for future research
The current research is based on preliminary analysis of an ongoing local comprehensive
exam. One of the responsibilities of the department is to graduate well-rounded students who
can successfully enter into the field of criminal justice. This home-grown comprehensive
exam is one aspect of a larger assessment plan and its purpose is to provide objective infor-
mation on departmental effectiveness. Several important lessons come from this research
and help direct the future of the local home-grown comprehensive exam and departmental
assessment.
First, the department will continue to refine the exam in order to collect more complete
data in the future. The exam will never be able to fully control for all factors that impact the
learning process. However, other measures can be added that may impact learning or the
educational process. For example, a question regarding career aspirations would assist in
determining whether the gender effect is related to interest in pursuing law enforcement
careers. Departmental and college mission statements also refer to developing more than
just student knowledge. Concepts such as civic engagement, personal responsibility, and
diversity awareness are included in individual department and institutional objectives.
Future comprehensive exams should measure some of those goals. The department contin-
ues to struggle with how best to measure the concepts of tolerance and ethics, but that
should not be seen as a barrier to attempting to discover changes in beliefs, morals, and
values during the course of a student’s undergraduate career.
Continued analysis is necessary to fully understand student learning and departmental
effectiveness. The department expects that future exams, with stronger methodologies, will
continue to strengthen the data that is gathered to better assess the relationship between
department pedagogy and student achievement. In 2008, the data will be available to
conduct a true pretest–posttest cohort analysis. Control variables that have been added to the
exam since its inception will also strengthen future research, such as the inclusion of a ques-
tion measuring major/non-major that will allow the department to eliminate the small
minority of non-majors from the pool.
The department must also use the results of the comprehensive exam as a basis for dialog
and further exploration. While the senior scores are consistent with other comprehensive
exam averages, the department has engaged in dialog regarding the students who score very
low on the exam because their scores suggest that some students may be graduating with
minimal criminal justice proficiency. It is important to note that the results of the current
analysis are tentative and the department is not comfortable with acting on them at this time.
Future analysis may yield similar findings, but to emphasize a point made earlier, the results
of any comprehensive exam need to be treated with caution because there are many other
factors, not measurable in an exam, that could influence exam results.
In the future, the department will combine information gathered from the comprehensive
exam with other assessment methods. The multidimensional assessment plan that is in place
at Westfield State will allow the department to triangulate the data from the exam with focus
groups and writing assessment. As previously mentioned, student learning is affected by
several factors, not all of which are captured by an assessment exam. The comprehensive exam
results, combined with other assessment methods, will allow the department to thoroughly
evaluate student proficiency. Focus groups, for example, can be tailored to gain information
to better understand comprehensive exam results. If other methods of departmental assess-
ment (i.e., writing analysis) yield similar results, this shows that the department is fulfilling
one part of its responsibility. Combined, these measures are a way to continually monitor
and improve student learning and overall department quality.

Acknowledgement
A previous version of this manuscript was presented at the 2005 Annual Meeting of the American
Society of Criminology.

Notes
1. The comprehensive exam is available from the first author.
2. Transfer students make up approximately 20% of the criminal justice majors. They must take
most of their required courses on the Westfield campus.
3. Seniors constitute the lowest percentage of the overall criminal justice student population. This
is due to attrition and rapidly increasing numbers of incoming criminal justice student populations
on both campuses.

Notes on contributors
Kimberly Tobin is an Associate Professor at Westfield State College in Westfield, MA. Her teaching
and research interests are in the areas of youth gangs, juvenile justice, substance use, and criminolog-
ical theory. She published a textbook entitled Gangs: An Individual and Group Perspective (2008)
and co-authored Gangs and Delinquency in Developmental Perspective.

Erika Gebo is an Assistant Professor of Sociology at Suffolk University in Boston. Her publications,
teaching, and research interests are in the areas of juvenile justice, family violence, social policy, and
evaluation.

References
American Association of University Women Educational Foundation (AAUW). (1992). Overview
of how schools shortchange girls, with recommendations for educators and policymakers.
Washington, DC: Author.
Association of American Colleges and Universities (AAC&U). (1984). Integrity in the college
curriculum: A report to the academic community. Washington, DC: Author.
Association of American Colleges and Universities (AAC&U). (2004). Taking responsibility for the
quality of the baccalaureate degree. Washington, DC: Author, Greater Expectations Project on
Accreditation and Assessment.
Banta, T.W. (1991). Contemporary approaches to assessing student achievement of general educa-
tion outcomes. The Journal of General Education, 40, 203–223.
Banta, T.W., & Schneider, J.A. (1986). Using locally developed comprehensive exams for majors to
assess and improve academic program quality. ERIC. ED269472.
Bell, R.C., & Hay, J.A. (1987). Differences and biases in English language examination formats.
British Journal of Educational Psychology, 57, 212–220.
Bloom, B.S. (1956). Taxonomy of educational objectives: Handbook I. Cognitive domain. New
York: David McKay.
Bolger, N., & Kellaghan, T. (1990). Method of measurement and gender differences in scholastic
achievement. Journal of Educational Measurement, 27, 165–174.
Busch, T. (1995). Gender differences in self-efficacy and academic performance among students of
business administration. Scandinavian Journal of Educational Research, 39, 311–318.
Campbell, P., Wadia-Fascetti, S., Perlman, L., & Hadley, E.N. (2002). Confidence and continuation:
The Northeastern first year engineering student survey. Proceedings of the 2002 WEPAN
Annual Meeting, Lafayette, IN.
Chan, N., & Kennedy, P.E. (2002). Are multiple-choice exams easier for economic students? A
comparison of multiple choice and equivalent constructed response exam questions. Southern
Economic Journal, 68, 957–971.
Chun, M. (2002). Looking where the light is better: A review of the literature on assessing higher
education quality. Peer Review, 4, 16–25.
Crawford, M., & MacLeod, M. (1990). Gender in the college classroom: An assessment of the
‘chilly climate’ for women. Sex Roles, 23, 101–122.
Doherty, A., Riordan, T., & Roth, J. (2002). Student learning: A central focus for institutions of
higher education. Milwaukee, WI: Alverno College Institute.
Educational Testing Service (ETS). (2005). Institutions administering the major field test – Criminal
justice. Retrieved August 18, 2006 from http://www.ets.org/Media/Tests/MFT/pdf/appcrimjust03-04.pdf
Erwin, T.D. (1991). Assessing student learning and development. San Francisco: Jossey-Bass.
Ewell, P. (2002). CHEA workshop on accreditation and student learning outcomes. Retrieved
August 18, 2006 from http://www.chea.org/pdf/workshop_outcomes_ewell_02.pdf
Greene, B. (1997). Verbal abilities, gender and the introductory economics course: A new look at an
old assumption. Journal of Economic Education, 28, 13–30.
Halpern, D.F. (1986). Sex differences in cognitive abilities. Hillsdale, NJ: Lawrence Erlbaum.
Halpern, D.F. (1997). Sex differences in intelligence. American Psychologist, 52, 1091–1102.
Inman, P., & Pascarella, E. (1998). The impact of college residence on the development of critical
thinking skills in college freshmen. Journal of College Student Development, 39, 557–568.
Ishitani, T. (2001). Does campus involvement affect academic performance? Comparisons between
resident and commuter students. Terre Haute, IN: Indiana State University, Office of Institutional
Research and Effectiveness.
Larose, S., Robertson, D.U., Roy, R., & Legault, F. (1998). Nonintellectual learning factors as deter-
minants for success in college. Research in Higher Education, 39, 275–297.
Magolda, B. (1992). Cocurricular influences on college students’ intellectual development. Journal
of College Student Development, 33, 203–213.
Martin, J.S., & Hanrahan, K. (2004). Criminology freshmen: Preparation, expectations, and college
performance. Journal of Criminal Justice Education, 15, 251–269.
Meadows, R.B., Dyal, A.B., & Wright, J.V. (1998). Preparing educational leaders through the use of
portfolio assessment: An alternative comprehensive examination. Journal of Instructional
Psychology, 25, 94–100.
Moreno, R., & Mayer, R.E. (1999). Gender differences in responding to open-ended problem-
solving questions. Learning and Individual Differences, 11, 355–365.
Morris, J.D. (n.d.). The case against the comprehensive exam. ERIC. ED225520.
Palomba, C.A., & Banta, T.W. (1999). Assessment essentials: Planning, implementing, and improv-
ing assessment in higher education. San Francisco: Jossey-Bass.
Pascarella, E., & Terenzini, P. (1991). Studying college students in the 21st century: Meeting new
challenges. The Review of Higher Education, 21, 151–165.
Pelfrey, W.V., & Hague, J.L. (2000). Examining the comprehensive examination: Meeting educa-
tional program objectives. Journal of Criminal Justice Education, 11, 167–177.
Simkin, M.G., & Kuechler, W.L. (2005). Multiple-choice tests and student understanding: What is
the connection? Decision Sciences Journal of Innovative Education, 3, 73–97.
Tinto, V. (1975). Dropout from higher education: A theoretical synthesis of recent research. Review
of Educational Research, 45, 89–125.
Wagenaar, T.C. (1991). Goals for the discipline. Teaching Sociology, 19, 92–95.
Wagenaar, T.C. (2002). Outcomes assessment in sociology: Prevalence and impact. Teaching
Sociology, 30, 403–413.
Wagenaar, T.C. (2004). Assessing sociological knowledge: A first try. Teaching Sociology, 32,
232–238.
Weiss, G.L. (2002). The current status of assessment in sociology departments. Teaching Sociology,
30, 391–402.
Winter, G.M., & Fadale, L.M. (1991). Outcome assessment in postsecondary occupational
programs: A consortia approach (ERIC #337-237). Albany, NY: NYS Educational Department.

Appendix 1. Major requirements for Criminal Justice
Coursework in the Criminal Justice major consists of 39 credits. There are six required courses (18
credits) in the major, including:

Introduction to Criminal Justice
Theories of Crime
Law Enforcement and Society
American Judicial Systems
Introduction to Corrections
Research Methods in Criminal Justice.

The remaining coursework (21 credits) is taken from an extensive offering of criminal justice elec-
tives. Students transferring in from other institutions must complete at least 21 credits of criminal
justice coursework at Westfield State College.

Appendix 2. Educational goals for Westfield State Criminal Justice undergraduates
Several learning objectives have been identified that align with the overall mission and goals of the
Criminal Justice Program at Westfield State College. Recipients of a Bachelor of Science in Criminal
Justice from Westfield State College should:

(1) Possess the capacity to comprehend and evaluate qualitative and quantitative social
science research, including at least a basic familiarity with introductory level statistical
concepts.
(2) Comprehend the Constitutional concepts of due process, equal protection, and fundamen-
tal fairness in policing, courts, and corrections.
(3) Understand the role of ethics and moral reasoning throughout the criminal justice system.
(4) Understand issues of diversity, including but not limited to gender, race, ethnic, cultural,
and class issues, in the administration of criminal justice.
(5) Possess writing, research, communications, and computer literacy skills sufficient to
enable graduates to obtain bachelor-level entry into criminal justice and allied professional
fields, or alternatively, to pursue graduate studies in such fields and disciplines.
(6) Be able to employ critical reasoning skills across the criminal justice curriculum.
(7) Exhibit an understanding of traditional and contemporary theories of crime causation, and
their implications for public policy.
(8) Be cognizant of the history, development, fundamental concepts, and current operation
of American law and our principal criminal justice institutions, together with their rela-
tionships to each other and to the larger social and political context.