Assessing Student Learning Outcomes Across A Curriculum
Educational researchers have been investigating the relationship between knowledge bases and professional
role performances, with little consensus established across disciplines and professions. Outside pressures on
educators to demonstrate critical thinking, problem solving, and communication competencies are being felt all
across the educational spectrum, from secondary schools to graduate and professional schools. Additionally, many of
today's professional problems require interprofessional solutions, meaning that experienced professionals need to be
prepared to use their knowledge base when engaging their metacognitive processes. In "Assessing Student Learning
Outcomes Across a Curriculum," Mentkowski et al. present a performance assessment instrument to help faculty
identify and address issues related to validity and the interpretation of results across learning outcomes. This article examines the challenges of assessing student learning outcomes across disciplines and professions, as well as the difficulty of building consensus definitions of professional constructs when developing and implementing assessments.
Postsecondary Perspectives and Theoretical
Frameworks
The Collegiate Learning Assessment (CLA) is being used more frequently in higher education to
measure student learning outcomes. Although this method may be useful for establishing
accountability, some faculty members question its validity and its ability to identify specific
disciplinary or professional problems. The American Association of Colleges and Universities
(AAC&U) created the VALUE project in an effort to make program evaluation findings more
useful to faculty. This project uses student-created portfolios to obtain evidence that can be
used to identify broad problems and optimize student learning. Alverno College also participates
in this study, and focuses on assessment design criteria such as connecting the assessment to
the curriculum, providing individual feedback, and training faculty members and educational
researchers to be assessors.
Alverno’s gradually evolving criteria for assessment design include:

1. The assessment is designed to investigate and resolve questions raised by faculty about the quality and effectiveness of student learning. Thus, the rationale for an assessment is problem-based and focused on the learning of individuals and groups.

2. Assessments are specifically connected to courses taught in the faculty’s current curriculum, because when they are connected, faculty are more likely to make curricular changes that benefit student learning.

3. Individual students are not only demonstrating what they have learned in an assessment; they are also learning while they are completing it. Challenging tasks across modes are likely to stimulate student learning.

4. Assessors provide students with individual feedback. Students are challenged and motivated by feedback to do their best work on an assessment.

5. Faculty are motivated because they are designing an assessment to judge whether students can integrate content and competence, that is, knowledge systems and the ability to use them via demonstrated abilities/competencies. Faculty have assurance that students are able to adapt and transfer deep learning to unfamiliar problems in new settings.

6. Students complete such instruments in an assessment center located outside of their classes. The purpose is to create distance from immediate coursework and, in some instances, to give students the experience of engaging outside assessors from the local professional and business community.

7. Faculty members from across the disciplines and professions are trained as assessors and are interested in the results and in ensuring that all students succeed.

8. Educational researchers are motivated because they are working side by side with faculty, using disciplinary and professional learning principles (for example, learning is integrative and transferable) that are connected to assessment principles (for example, assessment requires integration of knowledge systems and competencies, and their transfer across assessment modes and contexts unfamiliar to students) (Bransford et al. 2000; Mentkowski et al. 2000).
Purposes and Problems to Consider for Developing
a New General Education Assessment Across
Disciplines and Professions
PURPOSES
plus seven assessment center staff participated in training conducted by EAS team members (a natural scientist and a social scientist).
During the assessor training sessions, faculty began to engage each other interactively—across
the disciplines and professions—about what it meant to resolve validity issues regarding their
all-too-human judgments related to criteria.
Faculty members not only began to insist on studying the consistency and reliability of their judgments; they also elaborated on the basis for those judgments, assisted by faculty in the STEM disciplines. Other disciplines also weighed in on the basis for judgment, which led to several clarifications of what it meant for students to provide a null hypothesis, and what it meant for students to provide a declarative sentence as a hypothesis statement. The resolution of this issue was cross-disciplinary: assessors were instructed during subsequent training sessions that both types of hypotheses were to be acknowledged as having met criteria.
Evidence for Reliability of Faculty Judgment
Assessors explained to the student the judgments they made about the student’s performance, indicating where the assessor decided the performance met criteria, partially met criteria, or did not meet criteria. The assessor’s role was not to instruct students in the integrated constructs and competencies/capabilities, but rather to communicate to the student how the assessor made an overall judgment of succeeded or did not succeed based on the pattern of performance the student demonstrated: met, partially met, did not meet. Whether or not the student’s performance succeeded, the assessor engaged the learner in creating a plan for further learning. Students whose performances were unsuccessful were thus assisted in using assessor feedback to develop a plan for further learning, a benefit intended to engage the student and communicate that learning would continue. The authors do not claim that this benefit is equivalent to that experienced by the successful student; however, both successful and unsuccessful performances are communicated to students, and each receives a similar message from the assessor: learning is a continuous process.
Establishing Validity of Assessor Final Judgment and
Fairness to Each Student
When a student did not succeed, an experienced coach present during the assessment provided an independent view of the performance, usually by taking the assessor through his or her decision-making process for arriving at a final judgment. Thus, a student knew when she left the assessment whether or not she had succeeded. This is evidence of procedural fairness to students.
Evidence for Consequential Validity of Assessment
Policies and Procedures
Faculty assessors during the training sessions began to discuss issues related to
consequential validity as defined by Messick (1994). As a result of discussions about the
consequences of the assessment for individual students, in particular by the humanities
faculty assessors, the EAS decided that students who did not meet criteria would be invited
to an intervention workshop.
As noted earlier, all students, whether or not they succeed on the assessment, are asked to develop a picture of their strengths and weaknesses and a plan for further learning. The Subgroup argued that students who had developed their plans should be invited to a follow-up workshop, so that they would gain the benefits of further instruction and also be reassessed. So far, all of the students who did not succeed have attended the intervention workshop. Afterward, students completed a shorter version of the assessment with different problems. So far, only two students have not passed this reassessment.
Feedback from Assessors
Following students’ completion of the assessment, faculty assessors met with each student to provide individual feedback. Faculty strove to provide feedback that was accurate, conceptual, diagnostic, prescriptive, and inferred from performance.
Intervention Workshops
During the workshops, students are given an opportunity to share insights related to their experience with the assessment process and the assessment itself. Prior to attending the workshop, students complete a reflection on their preparation for the general education assessment and on the ability areas that caused them the most difficulty. Students then complete portions of scientific reasoning and quantitative literacy activities using a combination of instructional approaches. All students are provided with a solution set for the entire activity, including questions that were not used during the workshop session. They are also provided with another scientific reasoning and quantitative literacy activity, with analytical problem solutions, to use for practice on their own and to reinforce the strategies developed during the workshop.
Summary of Findings
Validity issues included:
(1) achieving clarity of purpose for out-of-class assessments (integration of knowledge and
competencies and their adaptation and transfer);
(3) who completes the assessment (two-year students and students who entered from other
colleges);
(4) whether policies and procedures were rigorously reasoned and fair to students and
assessors;
(6) whether consequences were appropriate for both students who succeeded and those who did
not succeed, given faculty learning and assessment principles;
(7) who infers recommendations from analyses of student performances and implications for
curricular improvement (assessment council or general faculty); and
(8) whether program evaluation and student learning purposes compete (Mentkowski 1991, 1998,
2006; Loacker and Rogers 2004). Because not all students succeeded (77%), the instrument can be
used for program evaluation of the integration and transfer of content with competence, but not,
argued faculty, without intervention workshops and comparable reassessments for ensuring
mastery.
Conclusions and Implications for Professions
Education