
Assessing Student Learning Outcomes Across a Curriculum


Marcia Mentkowski, Jeana Abromeit, Heather Mernitz, Kelly Talley, Catherine Knuteson, William H. Rickards, Lois
Kailhofer, Jill Haberman and Suzanne Mente

Prepared by: BABY JANE C. PAIRAT


Introduction
Welcome to this presentation on assessing student learning outcomes across a curriculum. As educators, we
all want to ensure that our students are learning and growing throughout their academic journey. However, it can be
challenging to know if our teaching methods are effective without proper assessment.

Educational researchers have been investigating the relationship between knowledge bases and professional
role performances, with little consensus established across disciplines and professions. Outside pressures on
educators to demonstrate critical thinking, problem solving, and communication competencies are being felt all
across the educational spectrum, from secondary schools to graduate and professional schools. Additionally, many of
today's professional problems require interprofessional solutions, meaning that experienced professionals need to be
prepared to use their knowledge base when engaging their metacognitive processes. In "Assessing Student Learning
Outcomes Across a Curriculum," Mentkowski et al. present a performance assessment instrument to help faculty
identify and address issues related to validity and interpretation of results across learning outcomes. This article examines the challenges of assessing student learning outcomes across disciplines and professions, as well as the work of building consensus definitions of professional constructs for developing and implementing assessments.
Postsecondary Perspectives and Theoretical Frameworks
The Collegiate Learning Assessment (CLA) is being used more frequently in higher education to
measure student learning outcomes. Although this method may be useful for establishing
accountability, some faculty members question its validity and its ability to identify specific
disciplinary or professional problems. The Association of American Colleges and Universities
(AAC&U) created the VALUE project in an effort to make program evaluation findings more
useful to faculty. This project uses student-created portfolios to obtain evidence that can be
used to identify broad problems and optimize student learning. Alverno College also participates
in this study, and focuses on assessment design criteria such as connecting the assessment to
the curriculum, providing individual feedback, and training faculty members and educational
researchers to be assessors.
Postsecondary Perspectives and Theoretical Frameworks
Alverno’s gradually evolving criteria for assessment design include:

• The assessment is designed to investigate and resolve questions raised by faculty about the quality and effectiveness of student learning. Thus, the rationale for an assessment is problem-based and focused on the learning of individuals and groups.

• Assessments are specifically connected to courses taught in the faculty’s current curriculum, because when they are connected, faculty are more likely to make curricular changes that benefit student learning.

• Individual students are not only demonstrating what they have learned in an assessment; they are also learning while they are completing it. Challenging tasks across assessment modes are likely to extend student learning.

• Assessors provide students with individual feedback. Students are challenged and motivated by feedback to do their best work on an assessment.

• Assessors assist students in designing their plans for further learning.


Postsecondary Perspectives and Theoretical Frameworks
Alverno’s gradually evolving criteria for assessment design include:

• Faculty are motivated because they are designing an assessment to judge whether students can integrate content and competence, that is, knowledge systems and the ability to use them via demonstrated abilities/competencies. Faculty have assurance that students are able to adapt and transfer deep learning to unfamiliar problems in new settings.

• Students complete such instruments in an assessment center, located outside of their classes. The purpose is to create distance from immediate coursework and, in some instances, for students to experience engaging outside assessors from the local professional and business community.

• Faculty members from across the disciplines and professions are trained as assessors and are interested in the results and in ensuring all students succeed.

• Educational researchers are motivated because they are working side by side with faculty in using disciplinary and professional learning principles (for example, learning is integrative and transferable) that are connected to assessment principles (for example, assessment requires integration of knowledge systems and competencies, and their transfer across assessment modes and contexts unfamiliar to students) (Bransford et al. 2000; Mentkowski et al. 2000).
Purposes and Problems to Consider for Developing a New General Education Assessment Across Disciplines and Professions

PURPOSES

1. To examine the assumption that the investments in and benefits of performance assessments for undergraduates develop as campus faculty, across disciplines and professions, design instruments for assessment center administration outside of classes.

2. To assess, with the faculty-designed technique described here, for integration and transfer of student learning outcomes across selected prior coursework in general education and over time.
Nature of the Faculty-Identified Problem
1. The design team relied on their own experiences teaching and assessing current students in their classes.

2. Some students recycle to earlier forms of understanding, falling short of what they had earlier been able to demonstrate (such as ways of thinking about their knowledge and abilities), when they are faced with unfamiliar problems in unfamiliar contexts.

3. Some faculty on the EAS were aware of findings in the broader literature that directly named the problem of a lack of integrative learning and transfer.
Methods
• Assessor training for faculty assessors, and establishing the validity of that training: fifty-two faculty members from across the disciplines and professions (humanities, natural sciences and mathematics, social and behavioral sciences, arts and technology, nursing, business and management, education, professional communications, community leadership), plus seven assessment center staff, participated in training conducted by EAS team members (a natural scientist and a social scientist).

• Assessor training sessions occurred over an academic year. Action research involved assessors in processes designed to improve assessor training as a result of their instruction.
Data Sources for Establishing Design-Based Validity and Reliability

An independent evaluator recorded faculty questions during training sessions. Mentkowski used a combination of deliberative inquiry (Harris 1991), qualitative methods, and action research (O’Brien 2001; Reason and McArdle 2008) to examine assessor training procedures, the construct validity of the instrument, and the procedures for instrument administration. During each of the training sessions she observed, Mentkowski identified faculty questions related to:
1. the purposes of general education assessments for student learning;
2. student learning outcomes that were being assessed with the technique;
3. procedures for the administration of the assessment technique by the assessment center;
4. the faculty assessor role during the assessment;
5. purposes for particular procedures during the assessment process; and
6. Alverno assessment policies related to how students were learning during the time they were
completing the assessment.
Evidence for Faculty-Identified Validity Issues

During the assessor training sessions, faculty began to engage each other interactively—across
the disciplines and professions—about what it meant to resolve validity issues regarding their
all-too-human judgments related to criteria.

Faculty members not only began to insist on studying the consistency and reliability of their judgments (a standard statistic for doing so is sketched at the end of this section); they also elaborated on the basis for those judgments, assisted by faculty in the STEM disciplines.

Other disciplines weighed in on the basis for judgment as well, which led to several clarifications of what it meant for students to provide a null hypothesis (for example, “Fertilizer has no effect on plant growth”) and what it meant for students to provide a declarative sentence as a hypothesis statement (“Fertilizer increases plant growth”). The resolution to this issue was cross-disciplinary: assessors were instructed during subsequent training sessions that both types of hypotheses were to be acknowledged as “met criteria.”
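
The article does not report which statistic, if any, the faculty used to study the consistency of their judgments; Cohen’s kappa is a standard choice for two raters using a categorical scale such as met / partially met / did not meet. The following Python sketch is a minimal illustration under that assumption, with hypothetical ratings data.

from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    # Chance-corrected agreement between two assessors who rated the same
    # performances on a categorical scale (met / partially met / did not meet).
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Agreement expected by chance, from each assessor's marginal frequencies.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in freq_a.keys() | freq_b.keys())
    return (observed - expected) / (1 - expected)

# Hypothetical ratings of ten student performances by two trained assessors.
rater_1 = ["met", "met", "partially met", "did not meet", "met",
           "met", "partially met", "met", "did not meet", "met"]
rater_2 = ["met", "met", "met", "did not meet", "met",
           "met", "partially met", "met", "partially met", "met"]
print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")  # kappa = 0.62

Values near 1 indicate agreement well beyond chance; values near 0 indicate agreement no better than chance.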
Evidence for Reliability of Faculty Judgment

Assessors explained the judgments they made about the student’s performance to the student, indicating where the assessor decided the performance met criteria, partially met criteria, or did not meet criteria. The assessor’s role was not to begin instructing students in the integrated constructs and competencies/capabilities, but rather to communicate to the student how the assessor arrived at an overall judgment of succeeded or did not succeed based on the pattern of performance the student demonstrated: met, partially met, did not meet (a hypothetical form of such a decision rule is sketched below). Whether or not the student’s performance succeeded, the assessor engaged the learner in creating a plan for further learning. Students whose performances were unsuccessful were thus assisted in using assessor feedback to develop a plan for further learning, a benefit intended to engage them and to communicate that they would continue to learn. The authors do not claim that this benefit is equivalent to that for the successful student; however, both successful and unsuccessful performances are communicated to students, and each student receives the same message from the assessor: learning is a continuous process.
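
The article describes this overall judgment as derived from the pattern of criterion-level ratings but does not publish the decision rule itself. The Python sketch below shows one plausible form such a rule could take; the threshold and the criterion names are illustrative assumptions, not Alverno’s actual policy.

def overall_judgment(criterion_ratings):
    # criterion_ratings maps each assessed criterion to one of:
    # "met", "partially met", "did not meet".
    counts = {"met": 0, "partially met": 0, "did not meet": 0}
    for rating in criterion_ratings.values():
        counts[rating] += 1
    total = len(criterion_ratings)
    # Assumed rule: any unmet criterion, or fewer than half of the criteria
    # fully met, means the performance did not succeed.
    if counts["did not meet"] > 0 or counts["met"] < total / 2:
        return "did not succeed"
    return "succeeded"

# Hypothetical criterion ratings for one student performance.
ratings = {
    "states a testable hypothesis": "met",
    "interprets quantitative evidence": "partially met",
    "draws a warranted conclusion": "met",
}
print(overall_judgment(ratings))  # succeeded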
Establishing Validity of Assessor Final Judgment and
Fairness to Each Student

When a student did not succeed, an experienced coach present during the assessment provided an independent view of the performance, usually by taking the assessor through his or her decision-making process for arriving at a final judgment. Thus, a student knew when she left the assessment whether she had succeeded or not. This is evidence for procedural fairness to students.
Evidence for Consequential Validity of Assessment
Policies and Procedures

Faculty assessors during the training sessions began to discuss issues related to consequential validity as defined by Messick (1994). As a result of discussions about the consequences of the assessment for individual students, driven in particular by the humanities faculty assessors, the EAS decided that students who did not meet criteria would be invited to an intervention workshop.

As noted earlier, all students, whether or not they succeed on the assessment, are asked to develop a picture of strengths and weaknesses and a plan for further learning. The Subgroup argued that students who had developed their plans should be invited to a follow-up workshop, so that they would gain the benefits of further instruction and also be reassessed. So far, all of the students who did not succeed have attended the intervention workshop. Afterward, students completed a shorter version of the assessment with different problems (a minimal model of this workflow is sketched below). So far, only two students have not passed this second assessment.
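
As a compact summary of the policy just described, the following Python sketch models the flow for a single student: every student develops a plan for further learning, and an unsuccessful student is additionally invited to the intervention workshop and then reassessed with a shorter form. The names and record structure are illustrative assumptions, not the assessment center’s actual system.

from dataclasses import dataclass, field

@dataclass
class StudentRecord:
    name: str
    outcome: str                      # "succeeded" or "did not succeed"
    history: list = field(default_factory=list)

def apply_policy(record):
    record.history.append(f"initial assessment: {record.outcome}")
    # Every student, successful or not, develops a plan for further learning.
    record.history.append("developed a plan for further learning")
    if record.outcome == "did not succeed":
        record.history.append("invited to the intervention workshop")
        record.history.append("reassessed with a shorter form and different problems")

# Hypothetical unsuccessful student working through the policy.
student = StudentRecord("A. Learner", "did not succeed")
apply_policy(student)
print("\n".join(student.history))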
Feedback from Assessors

Following students’ completion of the assessment, faculty assessors met with each student to provide individual feedback. Faculty strove to provide feedback that was accurate, conceptual, diagnostic, inferred from performance, and prescriptive.
Intervention Workshops
• Students are given an opportunity to share insights related to their experience with the assessment process and the assessment itself.

• Prior to attending the workshop, students complete a reflection on their preparation for the general education assessment and on the ability areas that caused them the most difficulty on the assessment.

• Students complete portions of scientific reasoning and quantitative literacy-based activities using a combination of instructional approaches.

• All students are provided with a solution set for the entire activity, including questions that were not used during the workshop session. They are also provided with another scientific reasoning and quantitative literacy activity, with analytical problem solutions, to use for practice on their own and to reinforce the strategies developed during the workshop.
Summary of Findings
Validity issues included:

(1) achieving clarity of purpose for out-of-class assessments (integration of knowledge and
competencies and their adaptation and transfer);

(2) relationship between knowledge/skills assessed and courses completed;

(3) who completes the assessment (two-year students and students who entered from other
colleges);

(4) whether policies and procedures were rigorously reasoned and fair to students and
assessors;
Summary of Findings
Validity issues included:

(5) whether summative assessor judgments were reliable;

(6) whether consequences were appropriate for both students who succeeded and those who did
not succeed, given faculty learning and assessment principles;

(7) who infers recommendations from analyses of student performances and implications for curricular improvement (assessment council or general faculty); and

(8) whether program evaluation and student learning purposes compete (Mentkowski 1991, 1998, 2006; Loacker and Rogers 2004). Because not all students succeeded (77%), the instrument can be used for program evaluation of the integration and transfer of content with competence, but not, argued faculty, without intervention workshops and comparable reassessments for ensuring mastery.
Conclusions and Implications for Professions Education

Assessors in a wide range of disciplines may prioritize values (autonomy, open-mindedness) differently than those in the professions (teamwork, client service). Yet faculty in this integrated liberal arts and professions curriculum reached consensus on assessment purposes for a general education assessment.

Humanities faculty challenged consequential validity (Do majors in the humanities need scientific reasoning? What policies support unsuccessful students?), and agreed to serve as assessors because faculty themselves should demonstrate general education outcomes for students. When humanities learning outcomes are integrated into professions curricula, assessments should demonstrate consequential validity.