Prospectus Draft 2.6


Amanda Wilson

October 24, 2010

EdS Thesis Prospectus

How Students Perceive Their Learning

An exploration of student reflections on assessment

Statement of the Problem

There has been a shift in higher education from teaching objectives to student learning outcomes. Much of this drive has stemmed from a change in focus among regional accreditors of schools. The focus on student learning outcomes fosters a culture of continual improvement across the educational institution as a whole, one that strives to engage the entire academic community. Authentic assessment measures are becoming more common as supplements to more traditional assessment measures, providing the valuable data needed to inform this culture of change and improvement. The outcomes assessment movement in foreign language classes has embraced these authentic assessment measures, mostly in the form of portfolio-style projects for students. Unfortunately, there is less research on how students are receiving these various forms of assessment and what they perceive as the benefits or drawbacks of each. The purpose of this study is to explore the reflections of students in a beginning-level Spanish class on various forms of traditional and authentic assessment tools.

Significance of the Problem

References to Cross & Hendricks and Background/Interest

Literature Review

A Focus Shift: Student Learning Outcomes


A culture change is occurring in higher education that shifts the primary focus of educators from teaching the subject matter of specific disciplines to a perspective of student learning (Allen 2004, p. 1). “As departmental, organizational, and institutional cultures undergo change, and as the focus of that change is less on teaching and more on learning, a commitment to sustainable outcomes assessment becomes essential” (Hernon, et al. 2006, p. 1). According to Allen, this type of assessment occurs when “empirical data on student learning is used to refine programs and improve student learning” (2004, p. 2).

At the classroom level, Palmer encourages viewing student assessment as “a strategic tool for enhancing teaching and learning” (2004, p. 194). A few pages later, he adds that “continuous assessment starting early in the semester has the benefit of quickly identifying those students falling behind and perhaps at risk of dropping out, so remedial action can be taken” (p. 198).

Allen comments that “while classroom assessment examines learning in the day-to-day classroom, program assessment systematically examines student attainment in the entire curriculum” (2004, p. 1). In their 2001 work, Ratcliff, et al. point out that “the continuous improvement cycle should begin with clear departmental goals that identify what a student can expect to gain as a result of studying a particular field or discipline” (p. 25). Hernon, et al. add that “programs and institutions need to develop a strong and sustainable commitment to assessment as a process and as a means to improve learning based on explicit student learning outcomes” (2006, p. 11). Ratcliff, et al. further point out that, “While a college’s or university’s general goals for student achievement can be measured at the university level, the accreditation self-study must address student academic achievement in the discipline. Therefore, departments and programs must contribute to the accreditation self-study by assessing their students’ learning at the department level” (2001, p. 32).

Classroom, program (or department), and institutional assessment together support a foundation for accreditation. As this culture shift continues to push the focus toward student learning, accreditation standards increasingly link student outcomes assessment to the continued accreditation of programs and schools (Ratcliff, et al. 2001, p. 13).

Regional Accreditors on Assessment

Allen tells us on page 18 of her 2004 text that “accrediting organizations…generally focus on two major issues: capacity and effectiveness.” She goes on to explain that capacity is the bean counting of the process: tallies are taken of the resources an institution has to support its students, such as libraries, technology, physical space, and student support services. The focus, however, has turned more toward a long-term commitment to improving student learning (Hernon, et al. 2006, p. 1). Allen continues that accrediting organizations “expect campuses to document their impact on student learning” (2004, p. 18) and that “when accrediting bodies require assessment, campuses pay attention” (2004, p. 2). She cautions, however, that “assessment…should be implemented because it promotes student learning, not because an external agency requires it” (Allen 2004, p. 2).

Closing the Loop

On pages 163 and 164, Allen imparts some “friendly suggestions,” one of which is to “close the loop,” stating that “good assessment has impact” (2004). The foundation of all this assessment is that it will drive change toward the continual improvement of the quality of the educational system. As Ratcliff, et al. put it, “Assessment and accreditation are both premised on the importance of quality assurance” (2001, p. 17).


A Community of Assessors: Collaboration is Key

To move toward these lofty goals, faculty, institutional research offices, and “everyone in the educational enterprise, has [the] responsibility for maintaining and improving the quality of services and programs” (Ratcliff, et al. 2001, p. 17). Assessment of student learning outcomes “includes all members of the [educational] community as they strive to contribute to and enhance the educational enterprise” as a whole (Ratcliff, et al. 2001, p. 17).

Authentic Assessment as Means to Focus on Student Learning Outcomes

Brown, et al. expound the functions of assessment on page 47 of their 1999 work as six points:

1. Capturing student time and attention.
2. Generating appropriate student learning activity.
3. Providing timely feedback which students pay attention to.
4. Helping students to internalize the discipline’s standards and notions of quality.
5. Marking: generating marks or grades which distinguish between students or which enable pass/fail decisions to be made.
6. Quality assurance: providing evidence for others outside the course (such as external examiners) to enable them to judge the appropriateness of standards on the course.

With these purposes in mind, assessment methods can be examined to determine their validity. On pages 62 and 63, the authors discuss traditional unseen written exams and how they function as assessments: “In particular, this assessment format seems to be at odds with the most important factors underpinning successful learning…there is cause for concern that traditional unseen written exams do not really measure the learning outcomes which are the intended purposes of higher education” (Brown, et al. 1999). Palmer echoes these concerns on page 194 of his 2004 paper on authenticity in assessment, stating that “traditional forms of assessment can encourage surface learning rather than deep learning.” Banta goes a bit further, with a colorful simile discouraging the purchase of more traditional assessment measures for the purpose of improving student learning: “Just as weighing a pig will not make it fatter, spending millions to test college students is not likely to help them learn more” (2007, p. 2).

Watson points out “a need for more authentic, learner-friendly methods to encourage [student] engagement” (2008, p. 1), which seems to align with Brown, et al.’s first and second functions of assessment listed above (1999, p. 47). Watson goes on to note that “the assessment of authentic performance…has the potential to address a number of contemporary criticisms of assessment” (2008, p. 1), such as those quoted above from Banta, Palmer, and Brown, et al. Banta also notes that “authentic and valid assessment approaches must be developed and promoted as viable alternatives to scores on single-sitting, snapshot measures of learning that do not capture the difficult and demanding intellectual skills that are the true aim of a college education” (2009, p. 3). She continues in the same paper that the point “is knowledge creation, not knowledge reproduction” (Banta 2009, p. 4).

Brown, et al. bring it together nicely when they state that “ultimately, assessment should be for students…[as] a formative part of their learning experience,” and that students who develop their test-taking skills “the best tend to succeed in assessment,” whether or not they are the most qualified in their field (1999, p. 58). In other words, their first two functions of assessment, “capturing student time and attention” and “generating appropriate student learning activity,” along with the fourth and sixth, “helping students to internalize the discipline’s standards and notions of quality” and providing for “quality assurance,” are just as important as the fifth, “marking,” which tends to get all the attention yet is the function most prone to partial invalidity in many traditional assessment measures (Brown, et al. 1999, p. 47). Authentic assessment, by contrast, brings the focus of assessment to student learning outcomes because “the onus is on lecturers to be able to demonstrate that assessment is measuring well what it is intended to measure,” thereby forcing an increase in the validity of the assessments (Brown, et al. 1999, p. 59).

It is important to note that the point is not to throw away traditional assessment measures; they can still serve some of the functions of assessment well. As Ratcliff, et al. state on page 28 of their 2001 text, “formative and summative assessment methodologies provide the department or program with evidence of their students’ learning.” While traditional summative assessments can, and should, support the functions of assessment processes, their results cannot stand alone to inform the process of continual improvement of learning (Ratcliff, et al. 2001, p. 28). To really get at that sixth purpose of “quality assurance,” a balance is needed (Brown, et al. 1999, p. 47).

Outcomes Assessment in Foreign Languages

Trends over the last couple of decades in foreign language methodologies have espoused “communicative goals of instruction…yet, examinations in foreign language courses typically are pen and paper exercises that single out discrete points of grammar or vocabulary” (Higgs 1987, p. 1). Higgs raised the warning almost twenty-five years ago that if foreign language educators really want to set communication as a goal for their students, then “assessment procedures must test for communicative function” (1987, p. 1). While these pen and paper exams are still commonplace, language classes have seen an influx of authentic assessment measures (Sullivan 2006, p. 590).

Most of these assessments have come in the form of portfolio-style projects. Banta commented in her 2007 article on assessment that portfolio assessments would be the most authentic because students develop the content themselves (p. 4). Studies of English as a Foreign Language (EFL) learners have found that portfolio-style assessments contribute to student learning, especially when combined with other assessment measures, and that portfolio assessments help students take ownership of their learning (Barootchi, et al. 2002; Caner 2010). Additionally, one study found that some EFL students in writing courses preferred portfolio assessments over more traditional assessments (Caner 2010, p. 1).

There are many styles of portfolio assessments depending upon the specific needs of the assessment, but seemingly the most popular version in language learning is the self-assessment portfolio. The European Language Portfolio (ELP) was the model for the American adaptations: LinguaFolio and the Global Language Portfolio (Cummings, et al. 2009, p. 1). These portfolios present a “learner-empowering alternative” to traditional assessment (Cummings, et al. 2009, p. 1).

Moeller points toward these self-assessment models as more valid forms of assessment than more traditional assessment models (Moeller 2010, Self-assessment in the foreign language classroom). She describes LinguaFolio, in particular, as “a portfolio that focuses on student self-assessment, goal setting and collection of evidence of language achievement” (Moeller 2010, LinguaFolio). The students set their language goals and, based on the evidence of their own work that they collect, determine when their goals are met (Fasciano 2010, slide 9). Moeller points out that if language educators are using LinguaFolio effectively, it will necessitate moving away from teacher-centered methodologies and toward learning-centered outcomes, because it is by its very definition a learner-centered self-assessment tool that facilitates the processes of goal setting and self-reflection and establishes intrinsic motivation in students (Moeller 2010, LinguaFolio). Brown, et al. also mention that a major advantage of these types of assessments is that they promote intrinsic motivation through personal involvement, because the student is taking charge of their own learning (1999, p. 75).

Student Perception on Assessments

“At present, students often feel that they are excluded from the assessment culture, and that they have to use trial and error to make successive approximations towards the performances that are being sought in assessed work” (Brown, et al. 1999, p. 58). Because of the reflections gathered from their students, Brown, et al. encourage “innovation in assessment” (1999, p. 81). They also state that, “to some students, conventional forms of assessment appear to have no relevance to anything outside the university and are all about judging them, sometimes on a somewhat arbitrary basis, rather than involving them in genuine learning” (Brown, et al. 1999, p. 81). Instead, they found that “students appreciate assessment tasks which help them to develop knowledge, skills and abilities which they can take with them and use in other contexts such as in their subsequent careers,” and they encourage “assessment which incorporates elements of choice” because “it can give students a greater sense of ownership and personal involvement in the work and avoid the demotivating perception that they are simply going through routine tasks” (Brown, et al. 1999, p. 81).

Research Questions

What are the perceptions of undergraduate students related to traditional and authentic assessments used in an introductory Spanish course?

1. What do students think are the benefits or limitations of each type of assessment on their learning? Do students think these assessments reflect their learning?
2. How do students feel that these assessments can enhance or detract from their learning experience?
3. What factors do students feel affect the impact of each type of assessment on their learning?
4. What preferences do students express toward each type of assessment? What is their reasoning for these preferences?
5. What recommendations do students have for enhancing the perceived effectiveness of each type of assessment?

Methodology

Context/Setting

My research will be conducted by gathering data from former students of my fall 2010 beginning Spanish college courses. I teach at a public, state-funded institution, and although we are moving toward being a more research-focused institution, the current focus is more aligned with teaching. I am fortunate to have a great deal of freedom and control in my classroom. While it is true that my general curriculum and my textbook are mandated by the tenured faculty of my department, I am free to choose whatever path I believe will best help me achieve the course goals. That is to say, while I am not free to choose what I teach, I am free to determine how to teach it. While I also receive feedback from peers once a year, I feel no other demands from any supervisors on my teaching methods. This allows me to constantly experiment with ways to improve how I teach my classes. I am currently in my fifth semester teaching these courses, and I can say with certainty that no two semesters have held much in common outside of my general teaching philosophy. I am constantly trying to improve my methods based on what I have perceived as being effective. This hands-off situation created by the administration allows me to be fluid in my methods. While the freedom to teach the way I feel is best has many advantages, it also carries a heavy responsibility: I have to rely on my own perceptions of my students’ learning with little feedback from anyone else.

The classes I teach are capped at 28 students, a moderate number for a beginning foreign language class. While it would be ideal to have a smaller number, because it would allow for more individualized attention, there are advantages to this class size as well. With this many students it is easier to employ group learning strategies, allowing students to facilitate their own and each other’s learning processes. Classes this size also lend themselves to traditional assessment measures, which are simple to administer and grade even in a large group. Authentic assessments, like portfolio-style assessments, are more challenging to administer and evaluate because they take longer to facilitate, collect, and assess, but they can provide richer, more detailed feedback to students.

Participants

For this project, there will be two primary participant groups. First, I will invite the 79 students taking one of the three sections of beginning Spanish 1 in fall 2010 to participate in a broad attitude survey (see Appendix B). These students run the gamut in class rank, from freshmen to seniors; in age, from 17 to over thirty; and in educational experiences and majors. I hope to see at least 40% of these students respond to the survey to ensure a valid sample.

The second group of students I plan to solicit for this study will ideally number nine. I would like to ask three students from each of the three sections to participate in individual interviews. I will choose which students to invite based on their performance levels in class. Ideally, I will find one high-, one mid-, and one low-performing student in each section to ensure a broader range of perspectives on the assessment measures. I will use a semi-structured interview guide (see Appendix A) and record the interviews digitally.

Research Plan

I will complete my research by two methods: individual interviews and a broad attitude survey. First, I will employ a semi-structured interview method with the nine students described above to explore their reflections on the various forms of traditional and authentic assessment tools used in their previous semester of beginning-level Spanish. I will use the semi-structured interview guide (see Appendix A) to solicit their opinions on these assessments. I will record the interviews digitally, after having each student sign an informed consent form (see Appendix C).

Additionally, I will email an invitation to all 79 students taking one of the three sections of beginning Spanish 1 in fall 2010 to participate in the broad attitude survey (see Appendix B), which will be housed online. This survey will ask these former students to comment on their engagement and motivation levels and on the perceived effectiveness, for their learning experiences, of the various assessment tools used throughout the course. I hope that at least 32 of the students surveyed will respond, which would be approximately 40% and enough for a valid sample. I plan to send out an invitation to these students in early January, after grades have posted for the semester, asking them to complete the survey by February 1, 2011. On January 31, 2011, I plan to send an email reminder asking students to complete the survey. If my results are still under 50% participation, I will send a final email request during the second week of February 2011.

Plan for Evaluation of Data


Time Line

Validity

look at Commitment AR study and Hendricks

Ethics and Subjectivity

look at Commitment AR study and Hendricks
