RESEARCH ARTICLE
Assessing student learning and departmental effectiveness through an
undergraduate comprehensive exam
Kimberly Tobin(a)* and Erika Gebo(b)

(a)Department of Criminal Justice, Westfield State College, Westfield, MA, USA; (b)Department of Sociology, Suffolk University, Boston, MA, USA
Criminal Justice Studies
ISSN 1478-601X (print)/1478-6028 (online)
DOI: 10.1080/14786010802355362
© 2008 Taylor & Francis
*Corresponding author. Email: ktobin@wsc.ma.edu
Assessment is a buzzword on college campuses today. Assessing the quality of higher
education in America, however, was not always a priority for colleges and universities. The
push for higher education assessment began in earnest in the mid-1980s (Banta, 1991;
Doherty, Riordan, & Roth, 2002), when the Association of American Colleges and
Universities (AAC&U) produced a report declaring that most higher education institutions
were unaware of the effectiveness of their methods on student learning (1984). This
report was a loud call for institutions and departments to ensure that quality education was
being provided. Since that time, the number of higher education institutions engaged in the
process has grown tremendously, and currently almost all institutions of higher education
are engaged in assessment of their education goals (Chun, 2002). Accreditation organiza-
tions also are increasingly requiring and providing assistance to institutions and departments
in their efforts to develop and refine assessment techniques (AAC&U, 2004; Weiss, 2002).
Three broad domains characterize assessment: (a) assessment of student learning, (b)
assessment of departmental effectiveness or program assessment, and (c) assessment of
institutional effectiveness. The combination of data from these three domains provides
evidence of the quality of learning in higher education. Research in these domains, however,
is highly varied. Institutional assessment has been documented by individual researchers
and, in particular, accreditation bodies and their subcommittees (i.e., AAC&U, 1984; Ewell,
2002; Palomba & Banta, 1999). In the field of social science, most assessment literature
tends to focus on individual course assessment (Wagenaar, 2002). Departmental assessment
literature, in contrast, is lacking; yet faculty need to write about their experiences with
assessment in order to further each discipline’s assessment process (Weiss, 2002).
Lack of departmental assessment is especially true in the field of criminal justice where
very few articles addressing this topic are found in scholarly works. To add to the literature
specific to criminal justice departmental assessment, this paper discusses the creation and
implementation of a comprehensive exam as one piece of a broader assessment
process. Preliminary results from the exam are also explored. Lessons learned from assess-
ment and early exam results can assist other criminal justice departments in assessment
development and refinement, and can be particularly helpful for those who are employing
or are thinking of employing a comprehensive exam as an evaluation method.
The department
Brief discussions of the department under study and the creation of an assessment plan are
necessary to help situate the discussion of the comprehensive exam. The Criminal Justice
Department at Westfield State College, a small liberal arts college in Massachusetts, offers
the largest major on campus. The program is designed to serve students who may elect to
enter the criminal justice profession immediately upon graduation, students of the liberal
arts without professional interest in criminal justice, and students who desire to continue
their education through graduate studies. The major is interdisciplinary, predicated on a
common core of the humanities, social sciences, mathematics, and physical sciences. The
goal of the program is to develop in students an increased analytical awareness of the role
of law enforcement agencies, courts, and correctional institutions in the criminal justice
system. The foundation of this awareness is presented in required courses taken in Introduc-
tion to Criminal Justice, American Judicial Systems, Corrections, Law Enforcement,
Research Methods, and Theories of Crime. All students must take required courses, and
they are taken in sequence from freshman to junior years (for a full description of program
requirements, see Appendix 1).
The department has grown from one faculty member and four students in 1969 to
12 full-time faculty members and over 700 majors. The department also established a
collaborative satellite program with Worcester State College in the fall of 2001. At the time
of this study, the Worcester State site had three full-time faculty members and approxi-
mately 150 majors. The main campus program and satellite program adhere to the same
standards and there is very little differentiation in the administration of the programs. Both
programs primarily serve full-time, traditional undergraduates who attend the college for
their entire undergraduate careers. On both campuses almost all of the full-time faculty
members have PhDs in criminal justice and related fields. With few exceptions, required
major courses are offered by the full-time faculty. The department has occasionally used
adjuncts for introduction to criminal justice and law enforcement courses, but does not
rely on these instructors. The class size for all major required courses is capped at 35, with
the exception of methods, which is capped at 25. Upper level electives are also capped at
25 students.
Assessment plan
Prior to 2003, the department had no formal assessment plan. Departmental effectiveness
was primarily determined by traditional evaluation methods, such as application rates, grad-
uation rates, overall student GPA, and institutional alumni feedback surveys. The formal
assessment plan was developed as a result of external pressure by the state’s Board of
Higher Education. The department expedited the assessment process in order to receive
approval from the state to be a receptor site for students who wish to be recognized under
Criminal Justice Studies 225
the state’s Quinn Bill legislation, which provides monetary incentives for law enforcement
officers for each earned higher education degree.
The department realized that in order to implement a successful assessment plan, well-
articulated departmental goals needed to be established and institutional goals needed to be
considered (see Appendix 2 for the department’s stated goals). Based on prior literature
and the complexity of departmental and institutional goals, the department recognized that
the comprehensive assessment process had to measure several areas of learning. The most
valid measures of assessment rely on a multidimensional approach (AAC&U, 2004); there-
fore, the department chose to assess departmental objectives in several ways. Methods
used by the department included analyzing student writing through work in completed
major courses; conducting focus groups with seniors, alumni, and employers to survey
their experiences with the criminal justice department; developing an advisory board; and
administering a comprehensive examination to graduating seniors. This paper focuses on
the comprehensive exam piece of the assessment plan.
Comprehensive exams
Comprehensive exams are a common method of assessment in higher education (Banta &
Schneider, 1986; Weiss, 2002) and in criminal justice programs (Winter & Fadale, 1991),
yet there has been some criticism of the use of this method to measure student learning.
Morris (n.d.) stated that comprehensive exams placed too much emphasis on memorization
and not enough emphasis on problem solving. Pelfrey and Hague (2000) also believed this
to be true in their review of one criminal justice department’s comprehensive exam. Such
general thinking, however, may sell short the value of the comprehensive exam. Recent
research has suggested that comprehensive exams can and do engage high-order levels of
thinking, as discussed in Bloom’s taxonomy (1956), including application, synthesis, anal-
ysis, and evaluation (Banta & Schneider, 1986; Simkin & Kuechler, 2005).
Another major criticism of the comprehensive exam as an assessment tool is that it
cannot truly authenticate student learning (Wagenaar, 1991, 2004). A comprehensive exam,
however, may be an integral piece of overall program assessment because it does provide a
measure of quantitative, objective feedback to add to the more subjective and qualitative
information received from other assessment methods, such as focus groups and analysis of
student writing.
Other scholars believe that undergraduates may not take comprehensive exams seri-
ously if their grades are not affected (Meadows, Dyal, & Wright, 1998). Despite these crit-
icisms, researchers suggest that the departmental comprehensive exam may, in fact, be
taking on more importance because of its dual functions of assessing what majors should
know upon finishing coursework and assessing departmental effectiveness (Banta &
Schneider, 1986). Pelfrey and Hague (2000) echoed those purposes when they stated that
the comprehensive exam should ‘respond to the needs of the student and the needs of the
program’ (p. 168).
There also are indirect benefits to administering a comprehensive exam for departmental
assessment purposes. Prior research has shown that while some faculty members initially
may be reluctant to engage in assessment, especially those who are concerned that ‘their’
areas of expertise will show little improvement, assessment often brings faculty together to
strive for common goals in student learning. For example, Banta and Schneider (1986)
found that important indirect benefits of assessment were to act as a catalyst to increase
dialog about pedagogical issues among faculty and to increase consciousness about depart-
mental curricula and individual course learning objectives. These additional benefits have
226 K. Tobin and E. Gebo
across four study sites. When the authors reanalyzed the question, they found that it was
gender biased, reflecting traditional male/female roles in the general culture. Finally,
course instructors also could play a role in gender differences on exam scores as research
suggests that female students may learn more if they are better able to identify with their
instructors (Busch, 1995).
Individual internal factors also may vary by gender, which may in turn influence exam
performance. The literature has shown that significant differences between males and females
exist in motivation and confidence. While females appear to be more motivated
to succeed in college than males (Martin & Hanrahan, 2004; Pascarella & Terenzini, 1991),
they also have less confidence in their abilities (Campbell, Wadia-Fascetti, Perlman, &
Hadley, 2002; Crawford & MacLeod, 1990). This is particularly true in male-dominated areas
(i.e., science and math) (Busch, 1995; Campbell et al., 2002). Females’ lack of confidence
in their abilities may be due to inattention from teachers (American Association of University
Women [AAUW], 1992), but these differences have not been shown to play out in national
test scores or in the criminology field (Martin & Hanrahan, 2004). Even so, any assessment
tool should be examined for gender differences.
on the exam. A multiple-choice format was employed for this exam as a way to enhance
objective scoring. Because the question and answer sets were developed by many different
faculty members, the final question and answer sets were then modified to establish consis-
tency across questions.
The resulting comprehensive exam consisted of 60 substantive multiple-choice ques-
tions. Ten questions in each area assess student knowledge: Introduction to Criminal
Justice, American Judicial Systems, Corrections, Law Enforcement, Research Methods, and
Theories of Crime.1 While standardized tests in criminal justice do not normally include
introduction to the study of criminal justice as an area, the department chose to include it as
one of the areas of testing. The primary reason for its inclusion is that an introduction course
is required for the major, and the department believed that the information provided in this
course reflected integrated knowledge of the criminal justice system, rather than topic-
specific knowledge.
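The scoring scheme described above, a 60-item total with six 10-item area subscales, can be sketched as follows. This is an illustrative sketch only; the area labels and answer-key layout are assumptions, not the department's actual materials.

```python
# Illustrative sketch of the scoring scheme described above: a 60-item
# multiple-choice exam scored into a total and six 10-item area subscales.
# Area names and answer-key layout are assumptions for illustration only.
AREAS = ["Intro to CJ", "American Judicial Systems", "Corrections",
         "Law Enforcement", "Research Methods", "Theories of Crime"]

def score_exam(responses, answer_key):
    """Return (total correct, per-area correct) for 60-item response lists,
    where items 0-9 belong to the first area, 10-19 to the second, and so on."""
    correct = [r == k for r, k in zip(responses, answer_key)]
    subscales = {area: sum(correct[i * 10:(i + 1) * 10])
                 for i, area in enumerate(AREAS)}
    return sum(correct), subscales
```

For example, a student who answers the first 36 items correctly earns a total of 36, full credit on the first three subscales, and 6 on the fourth.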
The department’s overall educational goals included more than just increases in criminal
justice knowledge. Therefore, exam questions measured more than knowledge acquisition.
The department was interested in ethical and moral changes in perspective as well as
changes in tolerance of diversity over the students’ undergraduate careers. While these
concepts are difficult to measure, questions were developed to assess tolerance and ethics,
with the understanding that a measure of departmental ‘effectiveness’ of these areas is much
more unclear than straight knowledge acquisition. Ultimately, there was some internal
disagreement about operational definitions of these concepts, and thus questions to address
these areas are still being refined by the department.
Additional exam questions measuring external factors were also included. These
measures included standard demographic information, including race/ethnicity and gender.
Three questions identified where students had taken their classes: whether they took classes
on the Westfield or Worcester campus, whether they transferred from another institution,2
and whether their major core classes were taken in the department or elsewhere. As
discussed, prior literature also directed a series of questions that
measured potential factors associated with learning outcomes, including commuter status,
employment, and motivation. Commuter status was measured with a question that asked if
students lived on campus, off campus in an apartment, off campus at home, or elsewhere.
Employment was measured with a question that asked if students worked limited part time
(less than 10 hours a week), part time (more than 10 hours, but less than 30 hours a week),
full time (30 or more hours), or if they were unemployed. Motivation was a one-item
measure, with a five-item response set, asking students to compare their academic motiva-
tion to that of their peers.
Researchers have pointed out that pilot testing plays a critical role in exam development
(Banta & Schneider, 1986), increasing the reliability and validity of the exam.
The department administered a pilot test to 65 students in the spring of 2004. One large
change in exam administration occurred as a result. At first, the upperclassman exam was
slated to occur during the last week of research methods, generally the last required course
taken during the first semester junior year. The department’s original view was that all
required major coursework would be taken and the test would reflect the knowledge gained
in these classes. During the pilot test, however, significant differences were found in the
knowledge of juniors compared to seniors, who were taking research methods late in their
college career. The conclusion drawn was that the knowledge presented during the required
courses is continually reinforced in upper level electives. Therefore, seniors were more
likely to retain and understand the information when compared to others. Based on this
observation the revised methodology was to give the exam to graduating seniors rather than
to any student after required coursework was complete.
The department realized that establishing baseline knowledge of incoming students was
important in gaining a better understanding of what graduating seniors actually learned.
Although creating baseline measures for exam assessment was discussed in the literature
(i.e., Banta & Schneider, 1986), there appeared to be no previous literature that discussed a
pre/posttest comparison. The use of freshman scores provided an aggregate measure of
criminal justice knowledge upon entrance to college, which was compared with knowledge
upon college exit. If the department is effective, then there should be significant differences
in knowledge from freshman year to senior level class standing.
One potential problem with giving a comprehensive exam to freshmen is that it may be
discouraging to them. They may feel deflated after taking an exam with no preparation and
perhaps little prior knowledge (Erwin, 1991). The department, however, felt that a freshmen
comparison group was essential to the study and tried to combat this potential problem by
encouraging freshmen to look at the exam from a different perspective. Faculty members
explained that the exam was based on key criminal justice material that freshmen would be
exposed to in their current and future criminal justice courses. It was hoped that this framing
of the exam would be seen by freshmen as furthering their interests in the subject, rather
than producing any negative feelings about themselves or the subject matter.
The resulting exam has been administered twice per academic year. The first data
collection period has occurred during the first two weeks of the fall semester. The exam has
been distributed in Introduction to Criminal Justice classes, generally taken by first semester
freshmen. Optimally, this assesses knowledge before formal exposure to criminal justice
coursework. The second data collection period has occurred during the last two weeks of
the spring semester. The exam has been distributed in upper level criminal justice elective
courses, where seniors are invited to participate. The current instrument was first imple-
mented in the fall of 2004. Eventually, after spring 2008, this sampling method will allow
for pre–post comparisons from individual freshman (Time 1) to senior (Time 2) scores four
years later.
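The planned Time 1/Time 2 comparison amounts to linking each senior's score back to that same student's freshman score. A minimal sketch in pandas, on synthetic records; the student_id field is an assumption, since the paper does not specify how records are linked:

```python
import pandas as pd

# Hypothetical sketch of the planned pre-post matching; student_id is an
# assumption -- the paper does not specify how records are linked.
t1 = pd.DataFrame({"student_id": [101, 102, 103],   # freshman (Time 1) scores
                   "score_t1":   [24, 30, 22]})
t2 = pd.DataFrame({"student_id": [102, 103, 104],   # senior (Time 2) scores
                   "score_t2":   [38, 35, 40]})

# An inner join keeps only students tested at both time points.
matched = t1.merge(t2, on="student_id")
matched["gain"] = matched["score_t2"] - matched["score_t1"]
```

The inner join naturally drops students who appear at only one time point (e.g., transfers in or out), leaving the matched cohort for a true pretest-posttest analysis.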
Method
In this paper, preliminary comprehensive exam results are discussed comparing the freshman
sample from fall 2004 (n = 281) to the senior sample from spring 2005 (n = 56). Because the
exam was developed only recently, it could not have been administered to graduating seniors
during their freshman year. Thus, a proxy measure for baseline knowledge was developed.
All first semester freshmen criminal justice majors were administered the comprehensive
exam during the month of September 2004. Their scores were compared to the comprehen-
sive exam scores of graduating seniors who were given the exam in April 2005.
The freshman sample consisted of 251 students from the Westfield campus and
30 students from the Worcester campus. The senior sample consisted of 49 students from
the Westfield campus and seven students from the Worcester satellite campus. The senior
sample represented approximately one-third of the senior class. It is important to note that
this small sample may be a problem in generalizing to the entire senior cohort and prevents
some more advanced statistical analyses.3 The Westfield and Worcester students were simi-
lar to one another with the exception of their employment status. Worcester students were
more likely to work more than 10 hours per week compared to Westfield students. There
were so few Worcester students in either sample, however, that their inclusion did not
significantly bias the analyses.
Gender, commuter status, and employment status were included as control variables. For
this analysis, the commuter status and employment variables were recoded into dichotomous
variables. Commuter status distinguished students who lived on campus from those who
lived off campus, regardless of where they lived off campus. Employment status
distinguished students who worked less than 10 hours a week (including those not working)
from those working more than 10 hours a week (including those working full time). It should be
noted that motivation was included as a control variable on the spring 2005 exam and is
included in the spring 2005 senior sample analysis. Unfortunately, because this measure-
ment of motivation was not asked in fall 2004, no cross-sample analyses could be conducted
on this factor.
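The dichotomous recodes described above can be sketched in pandas on synthetic records; the column names and category labels here are illustrative assumptions, not the instrument's actual wording.

```python
import pandas as pd

# Sketch of the dichotomous recodes described above, on synthetic records;
# column names and category labels are illustrative assumptions.
df = pd.DataFrame({
    "residence":  ["on campus", "off campus apartment", "off campus home"],
    "employment": ["unemployed", "limited part time", "full time"],
})

# 1 = commuter (lives anywhere off campus), 0 = lives on campus
df["commuter"] = (df["residence"] != "on campus").astype(int)
# 1 = works more than 10 hours a week (part time or full time), 0 = otherwise
df["works_10plus"] = df["employment"].isin(["part time", "full time"]).astype(int)
```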
Results
The freshman and senior samples were similar to one another (see Table 1). In both samples,
the incoming GPAs of criminal justice majors were approximately 2.9. There were slightly
more males than females in both samples, and most students in both samples were white.
The results from students in both samples are similar to the criminal justice student body as
a whole, suggesting that the sample is representative of the criminal justice major. There
were two areas of significant differences between the samples. Significantly more seniors
lived off campus than freshmen (62% vs. 26%) and more seniors were employed than fresh-
men (82% vs. 60%). These figures reflect the natural social processes of the department’s
college students. As they advance in academic standing, they are more likely to move off
campus and to obtain employment. As discussed, the possible influences of these variables
on exam score are controlled for in subsequent analyses. Overall, the data also suggest that
the two groups can, with caution, be compared to one another.
Bivariate analysis of the two samples showed a significant difference in comprehensive
exam scores (see Table 2). Comprehensive exam scores are based on a total of 60 questions
and each area score is based on 10 questions. Freshmen had an average comprehensive exam
score of 25.84 (43.1%), with a low of 13% to a high of 75%. The standard deviation was
10.3%. The average comprehensive exam score for the senior cohort was 35.98 (59.9%),
with a low of 30% to a high of 88%. The standard deviation was 12.9%. While the depart-
ment had hoped that seniors would average a score higher than 60%, this result is consistent
with other research that found that the average student exam score across 40 departments in
one institution was between 60 and 65% (Banta & Schneider, 1986). As shown in Table 2,
not only did seniors score significantly higher on the overall exam, but also in each of the
areas. This suggests that students increased their knowledge in all core areas.
Ideally, a multivariate model should predict very little of the variance in exam score,
as knowledge acquisition should be independent of control variables. A multivariate anal-
ysis of class standing and the control variables on overall exam score was conducted (see
Table 3). Class standing significantly predicted exam score, even after controlling for
other factors (β = 4.62**). The model itself was significant (adj. r2 = 0.25), but this was
due to class standing. While other factors, such as maturation, may influence exam score,
these findings cautiously suggest that students may indeed acquire and retain knowledge
throughout their undergraduate career. This knowledge, while introduced in required
classes, is continually reinforced throughout elective coursework. Commuter status was
also significantly related to exam score, but this was due to an interaction effect with class
status. Seniors are more likely to be commuters and more likely to score significantly
higher on the exam. To better understand the relationship between external factors and
senior exam score, a senior only analysis was conducted.
Table 4 shows the multivariate model for seniors only. The overall model was indeed not
significant (adj. r2 = 0.06), and motivation, employment, and commuter status were not
significant predictors. Given the significant association between exam score and year and the combined
multivariate model showing that year significantly predicts exam score, the present model
adds support to the contention that criminal justice learning drives exam score increase.
Interestingly, gender also significantly predicted exam score (β = −8.84**) in the senior
analysis. Males were more likely to score higher on the exam than females. The gender
effect on exam score is troublesome. It is imperative to determine why females scored lower
than males.
Table 5. Comparison of male and female exam and scale scores in senior sample.a

                     Male (n = 36)    Female (n = 20)
Total exam           37.69 (62.8%)    32.89** (54.8%)
Intro to CJ           6.94             5.90
Theories of crime     4.89             4.26
American judicial     6.69             6.20
Corrections           8.36             7.95
Law enforcement       7.17             5.16**
Research methods      4.09             4.11

aTotal exam score is based on 60 questions. Scale scores are based on 10 questions.
**p < 0.01 (one-tailed).
enforcement questions removed, the gender differences disappeared (see Table 6). This
suggested that scores on the law enforcement subscale drove the results. It is difficult
to sort out the reasons for the gender effect without further data. There are several plausible
reasons for these differences. Although exam questions will be re-examined for gender bias,
there was no gender difference in the freshmen sample using the same questions, suggesting
this is not the case.
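The one-tailed gender comparison reported in Table 5 can be illustrated on simulated subscale scores near the table's law enforcement means (not the study's data). Halving the two-tailed p-value is valid only when the observed difference falls in the hypothesized direction.

```python
import numpy as np
from scipy import stats

# Illustration of a one-tailed gender comparison on a 10-item subscale,
# using simulated scores (not the study's data) near the Table 5 means.
rng = np.random.default_rng(2)
male   = rng.normal(loc=7.17, scale=1.5, size=36)
female = rng.normal(loc=5.16, scale=1.5, size=20)

t_stat, p_two = stats.ttest_ind(male, female, equal_var=False)
# One-tailed p (hypothesis: males score higher) -- halve the two-tailed p
# only because the observed difference is in the hypothesized direction.
p_one = p_two / 2 if t_stat > 0 else 1 - p_two / 2
```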
Females scoring significantly worse on the law enforcement portion of the exam could
be the result of natural patterns. In other words, females simply may be less interested than
males in pursuing law enforcement careers and therefore, take fewer upper level courses in
that area. Law enforcement knowledge, then, is not reinforced, resulting in lower exam
scores by females. The gender of the instructor may play a role, too. In the department, the
majority of instructors in law enforcement courses were male. While the effect, if any, of
male instructors is uncertain, the department will continue to conduct more detailed analysis
on gender differences in exam scores. As a result of the gender discrepancy, a question on
career aspiration will be added to the exam in order to determine whether the gender effect
is related to interest in pursuing law enforcement careers. In addition, the department is
considering ways to foster female achievement in law enforcement, including greater utili-
zation of female practitioners in the classroom.
Limitations
There are several limitations in the current preliminary research. Some of these limitations
can be overcome with relative ease in future iterations of the exam. Other limitations may
exam is one aspect of a larger assessment plan and its purpose is to provide objective infor-
mation on departmental effectiveness. Several important lessons come from this research
and help direct the future of the local home-grown comprehensive exam and departmental
assessment.
First, the department will continue to refine the exam in order to collect more complete
data in the future. The exam will never be able to fully control for all factors that impact the
learning process. However, other measures can be added that may impact learning or the
educational process. For example, a question regarding career aspirations would assist in
determining whether the gender effect is related to interest in pursuing law enforcement
careers. Departmental and college mission statements also refer to developing more than
just student knowledge. Concepts such as civic engagement, personal responsibility, and
diversity awareness are included in individual department and institutional objectives.
Future comprehensive exams should measure some of those goals. The department contin-
ues to struggle with how best to measure the concepts of tolerance and ethics, but that
should not be seen as a barrier to attempting to discover changes in beliefs, morals, and
values during the course of a student’s undergraduate career.
Continued analysis is necessary to fully understand student learning and departmental
effectiveness. The department expects that future exams, with stronger methodologies, will
continue to strengthen the data that is gathered to better assess the relationship between
department pedagogy and student achievement. In 2008, the data will be available to
conduct a true pretest–posttest cohort analysis. Control variables that have been added to the
exam since its inception will also strengthen future research, such as the inclusion of a ques-
tion measuring major/non-major that will allow the department to eliminate the small
minority of non-majors from the pool.
The department must also use the results of the comprehensive exam as a basis for dialog
and further exploration. While the senior scores are consistent with other comprehensive
exam averages, the department has engaged in dialog regarding the students who score very
low on the exam because their scores suggest that some students may be graduating with
minimal criminal justice proficiency. It is important to note that the results of the current
analysis are tentative and the department is not comfortable with acting on them at this time.
Future analysis may yield similar findings, but to emphasize a point made earlier, the results
of any comprehensive exam need to be treated with caution because there are many other
factors, not measurable in an exam, that could influence exam results.
In the future, the department will combine information gathered from the comprehensive
exam with other assessment methods. The multidimensional assessment plan that is in place
at Westfield State will allow the department to triangulate the data from the exam with focus
groups and writing assessment. As previously mentioned, student learning is impacted by
several things. Not all are picked up by an assessment exam. The comprehensive exam
results, combined with other assessment methods, will allow the department to thoroughly
evaluate student proficiency. Focus groups, for example, can be tailored to gain information
to better understand comprehensive exam results. If other methods of departmental
assessment (e.g., writing analysis) yield similar results, this shows that the department is
fulfilling one part of its responsibility. Combined, these measures are a way to continually monitor
and improve student learning and overall department quality.
Acknowledgement
A previous version of this manuscript was presented at the 2005 Annual Meeting of the American
Society of Criminology.
Notes
1. The comprehensive exam is available from the first author.
2. Transfer students make up approximately 20% of the criminal justice majors. They must take
most of their required courses on the Westfield campus.
3. Seniors constitute the lowest percentage of the overall criminal justice student population. This
is due to attrition and rapidly increasing numbers of incoming criminal justice student populations
on both campuses.
Notes on contributors
Kimberly Tobin is an Associate Professor at Westfield State College in Westfield, MA. Her teaching
and research interests are in the areas of youth gangs, juvenile justice, substance use, and criminolog-
ical theory. She published a textbook entitled Gangs: An Individual and Group Perspective (2008)
and co-authored Gangs and Delinquency in Developmental Perspective.
Erika Gebo is an Assistant Professor of Sociology at Suffolk University in Boston. Her publications,
teaching, and research interests are in the areas of juvenile justice, family violence, social policy, and
evaluation.
References
American Association of University Women Educational Foundation (AAUW). (1992). Overview of how schools shortchange girls, with recommendations for educators and policymakers. Washington, DC: Author.
Association of American Colleges and Universities (AAC&U). (1984). Integrity in the college curriculum: A report to the academic community. Washington, DC: Author.
Association of American Colleges and Universities (AAC&U). (2004). Taking responsibility for the quality of the baccalaureate degree. Washington, DC: Author, Greater Expectations Project on Accreditation and Assessment.
Banta, T.W. (1991). Contemporary approaches to assessing student achievement of general education outcomes. The Journal of General Education, 40, 203–223.
Banta, T.W., & Schneider, J.A. (1986). Using locally developed comprehensive exams for majors to assess and improve academic program quality. ERIC Document ED269472.
Bell, R.C., & Hay, J.A. (1987). Differences and biases in English language examination formats. British Journal of Educational Psychology, 57, 212–220.
Bloom, B.S. (1956). Taxonomy of educational objectives: Handbook I. Cognitive domain. New York: David McKay.
Bolger, N., & Kellaghan, T. (1990). Method of measurement and gender differences in scholastic achievement. Journal of Educational Measurement, 27, 165–174.
Busch, T. (1995). Gender differences in self-efficacy and academic performance among students of business administration. Scandinavian Journal of Educational Research, 39, 311–318.
Campbell, P., Wadia-Fascetti, S., Perlman, L., & Hadley, E.N. (2002). Confidence and continuation: The Northeastern first year engineering student survey. Proceedings of the 2002 WEPAN Annual Meeting, Lafayette, IN.
Chan, N., & Kennedy, P.E. (2002). Are multiple-choice exams easier for economic students? A comparison of multiple choice and equivalent constructed response exam questions. Southern Economic Journal, 68, 957–971.
Chun, M. (2002). Looking where the light is better: A review of the literature on assessing higher education quality. Peer Review, 4, 16–25.
Crawford, M., & MacLeod, M. (1990). Gender in the college classroom: An assessment of the ‘chilly climate’ for women. Sex Roles, 23, 101–122.
Doherty, A., Riordan, T., & Roth, J. (2002). Student learning: A central focus for institutions of higher education. Milwaukee, WI: Alverno College Institute.
Educational Testing Service (ETS). (2005). Institutions administering the major field test – Criminal justice. Retrieved August 18, 2006 from http://www.ets.org/Media/Tests/MFT/pdf/appcrimjust03-04.pdf
Erwin, T.D. (1991). Assessing student learning and development. San Francisco: Jossey-Bass.
Ewell, P. (2002). CHEA workshop on accreditation and student learning outcomes. Retrieved August 18, 2006 from http://www.chea.org/pdf/workshop_outcomes_ewell_02.pdf
Greene, B. (1997). Verbal abilities, gender and the introductory economics course: A new look at an old assumption. Journal of Economic Education, 28, 13–30.
Halpern, D.F. (1986). Sex differences in cognitive abilities. Hillsdale, NJ: Lawrence Erlbaum.
Halpern, D.F. (1997). Sex differences in intelligence. American Psychologist, 52, 1091–1102.
Inman, P., & Pascarella, E. (1998). The impact of college residence on the development of critical thinking skills in college freshmen. Journal of College Student Development, 39, 557–568.
Ishitani, T. (2001). Does campus involvement affect academic performance? Comparisons between resident and commuter students. Terre Haute, IN: Indiana State University, Office of Institutional Research and Effectiveness.
Larose, S., Robertson, D.U., Roy, R., & Legault, F. (1998). Nonintellectual learning factors as determinants for success in college. Research in Higher Education, 39, 275–297.
Magolda, B. (1992). Cocurricular influences on college students’ intellectual development. Journal of College Student Development, 33, 203–213.
Martin, J.S., & Hanrahan, K. (2004). Criminology freshmen: Preparation, expectations, and college performance. Journal of Criminal Justice Education, 15, 251–269.
Meadows, R.B., Dyal, A.B., & Wright, J.V. (1998). Preparing educational leaders through the use of portfolio assessment: An alternative comprehensive examination. Journal of Instructional Psychology, 25, 94–100.
Moreno, R., & Mayer, R.E. (1999). Gender differences in responding to open-ended problem-solving questions. Learning and Individual Differences, 11, 355–365.
Morris, J.D. (n.d.). The case against the comprehensive exam. ERIC Document ED225520.
Palomba, C.A., & Banta, T.W. (1999). Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco: Jossey-Bass.
Pascarella, E., & Terenzini, P. (1991). Studying college students in the 21st century: Meeting new challenges. The Review of Higher Education, 21, 151–165.
Pelfrey, W.V., & Hague, J.L. (2000). Examining the comprehensive examination: Meeting educational program objectives. Journal of Criminal Justice Education, 11, 167–177.
Simkin, M.G., & Kuechler, W.L. (2005). Multiple-choice tests and student understanding: What is the connection? Decision Sciences Journal of Innovative Education, 3, 73–97.
Tinto, V. (1975). Dropout from higher education: A theoretical synthesis of recent research. Review of Educational Research, 45, 89–125.
Wagenaar, T.C. (1991). Goals for the discipline. Teaching Sociology, 19, 92–95.
Wagenaar, T.C. (2002). Outcomes assessment in sociology: Prevalence and impact. Teaching Sociology, 30, 403–413.
Wagenaar, T.C. (2004). Assessing sociological knowledge: A first try. Teaching Sociology, 32, 232–238.
Weiss, G.L. (2002). The current status of assessment in sociology departments. Teaching Sociology, 30, 391–402.
Winter, G.M., & Fadale, L.M. (1991). Outcome assessment in postsecondary occupational programs: A consortia approach (ERIC #337-237). Albany, NY: NYS Educational Department.
The remaining coursework (21 credits) is taken from an extensive offering of criminal justice electives. Students transferring in from other institutions must complete at least 21 credits of criminal justice coursework at Westfield State College.
(1) Possess the capacity to comprehend and evaluate qualitative and quantitative social science research, including at least a basic familiarity with introductory level statistical concepts.
(2) Comprehend the Constitutional concepts of due process, equal protection, and fundamental fairness in policing, courts, and corrections.
(3) Understand the role of ethics and moral reasoning throughout the criminal justice system.
(4) Understand issues of diversity, including but not limited to gender, race, ethnic, cultural, and class issues, in the administration of criminal justice.
(5) Possess writing, research, communications, and computer literacy skills sufficient to enable graduates to obtain bachelor-level entry into criminal justice and allied professional fields, or alternatively, to pursue graduate studies in such fields and disciplines.
(6) Be able to employ critical reasoning skills across the criminal justice curriculum.
(7) Exhibit an understanding of traditional and contemporary theories of crime causation, and their implications for public policy.
(8) Be cognizant of the history, development, fundamental concepts, and current operation of American law and our principal criminal justice institutions, together with their relationships to each other and to the larger social and political context.