
UNDERGRADUATE THESIS PROJECT PROPOSAL

School of Engineering and Applied Science


University of Virginia

Computer Science Education:


A System for Evaluating Teaching Methods

Submitted by
Edward Pan
Computer Science

STS 401
Section 16 (2 p.m.)
November 7, 2006
On my honor as a University student, on this assignment I have neither given nor
received unauthorized aid as defined by the Honor Guidelines for papers in Science,
Technology, and Society courses.
Signed
Approved

Date
Technical Advisor Thomas Horton

Approved

Date
Science, Technology, and Society Advisor
Bryan Pfaffenberger

Contents

Abstract
Introduction
Rationale and Objectives
Preliminary Analysis of Social Context and Ethical Implications
Literature Review
Project Activities
   Activities
   Schedule
   Personnel
   Resources
Expected Outcomes
Bibliography
   Science and Engineering Education
   Education Design
   Educational Research
   Assessment and Evaluation
   Metaevaluation
   Miscellaneous
References
Appendix A: Budget and Equipment Checklist
Appendix B: Biographical Sketch of Student
Appendix C: Preliminary Outline of Thesis
Appendix D: Guiding Principles for Evaluators
Appendix E: The Program Evaluation Standards
Appendix F: Thesis Project Gantt Chart


Abstract
Enrollment and retention in computer science are declining, with the brunt of the
impact occurring among freshman and sophomore students. CS201 is an introductory
course in software engineering that is typically taken by freshman and sophomore
engineering students. CS201 has been plagued by student dissatisfaction and frustration.
Instructors would like to improve the class by trying new teaching methods, but they lack
a system with which to evaluate these methods. With the help of Professor Thomas
Horton of the Department of Computer Science, the students of CS201, Professor Jerry
Short of the Curry School of Education, and Professor Bryan Pfaffenberger of the
Department of Science, Technology, and Society, this project will establish educational
design guidelines for CS201, develop a system for evaluating teaching methods, examine
the current state of CS201 and identify problems, and test three solutions to these
problems using the system developed.

Introduction
Enrollment in computer science is declining. According to an article from
Microsoft (2005), the percentage of incoming undergraduates indicating plans to major
in computer science declined by more than 60 percent between fall 2000 and 2004, and is
now 70 percent lower than its peak in the early 1980s. Attrition in computer science is
increasing (Cohoon & Chen, 2003). Both of these factors combine to create a dismal
situation for computer science in higher education. Attrition appears to be highest among
freshman and sophomore students (Cohoon & Chen, 2003), who are usually in
introductory and lower-level computer science courses. Microsoft (2005) points to the
teaching methods in these courses as reasons for decreased enrollment, and Cohoon and
Chen (2003) point to them as reasons for increased attrition. Improving the way that
these courses are taught can make a positive difference. CS201 is an example of such a
course.
CS201, Software Development Methods, is an introductory course in software engineering, required of all Computer Engineering, Computer Science, Electrical Engineering, and Systems Engineering majors (University of Virginia, 2006a, 2006b, 2006c, and 2006e). CS201 is typically taken during an engineering student's first or second year, a time when many students are unsure of their final course of study and career plans, and the very period identified as having the highest attrition in computer science nationally. In the past, many students have experienced difficulty and frustration
in CS201. The instructor wants to improve the course, but is faced with a multitude of
different theories and teaching methods to try, and lacks a reliable way to determine whether a given method is truly effective. With the help of Professor Thomas
Horton of the Department of Computer Science, the students of CS201, Professor Jerry
Short of the Curry School of Education, and Professor Bryan Pfaffenberger of the
Department of Science, Technology, and Society, this project will establish educational
design guidelines for CS201, develop a system for evaluating teaching methods, examine
the current state of CS201 and identify problems, and test three solutions to these
problems using the system developed.

Rationale and Objectives
The goal of this project is to create a system for evaluating teaching methods in
CS201. Each time instructors try to teach material in a different way, they are essentially
conducting an experiment and attempting to evaluate the outcome. Instructors need a
reliable and valid way to perform this evaluation in CS201, in other computer science
courses, other engineering courses, and all the other kinds of courses throughout the
university.
The objectives of this project are:
o To create guidelines for choosing educational methods
o To create a standardized way of conducting an educational method
experiment in an introductory-level CS class such as CS201
o To create a standardized way of assessing students in the experiment to
produce data
o To create a standardized way of evaluating the data produced by
assessments to draw reasonable conclusions regarding the effectiveness of
the method
o To select a specific educational method and conduct an experiment,
assessment, and evaluation as specified
o To evaluate the usefulness of the proposed approach
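
As a concrete, purely illustrative sketch of how these objectives fit together, the following Python fragment shows one possible shape for a record in the planned toolkit: a teaching method, the design guidelines it draws on, its experimental procedure, its assessment, and a toy evaluation rule. Every field name and the evaluation rule are hypothetical placeholders, not a committed design.

    # Hypothetical sketch only: one possible record tying together a teaching
    # method experiment, its assessment, and a toy evaluation rule.
    from dataclasses import dataclass, field

    @dataclass
    class TeachingExperiment:
        method: str                # teaching method under trial
        guideline_refs: list[str]  # design guidelines the method draws on
        procedure: str             # how the experiment is conducted
        assessment: str            # how student outcomes are measured
        scores: list[float] = field(default_factory=list)  # assessment data

        def evaluate(self, threshold: float) -> str:
            """Toy rule: compare the mean score against a chosen threshold."""
            if not self.scores:
                return "no data yet"
            mean = sum(self.scores) / len(self.scores)
            return "promising" if mean >= threshold else "needs revision"

    # Example usage with invented values.
    experiment = TeachingExperiment(
        method="pair programming in lab",
        guideline_refs=["active learning"],
        procedure="two lab sections, one using the method",
        assessment="common lab quiz",
        scores=[78.0, 85.0, 90.0],
    )
    print(experiment.evaluate(threshold=80.0))  # prints "promising"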

Preliminary Analysis of Social Context and Ethical Implications


Teachers have an ethical and social responsibility to continuously improve their
teaching. Scriven (1982) says, in his paper on professorial ethics, that one has an "obligation to be experimenting with new procedures in one's teaching, and that means one must have some grasp of classroom action research" (p. 314). He also says that
teachers must stay current in their subject, know about promising new instructional
devices and procedures, and use the most valid methods for measuring their teaching
performance (Scriven, 1982). The National Academy of Engineering supports the idea of continuing professional development of engineering faculty in the scholarship of engineering education, so much so that it created a center for the scholarship of teaching and learning in engineering (Wulf, 2002). As William Wulf, the President of the National Academy of Engineering, says, "we have to start thinking about how we can teach smarter" (Wulf, 2002). This work fits within these sentiments.
Teachers have an ethical responsibility to treat all students fairly (Scriven, 1982). Fairness in education means giving each student what he or she needs, not treating all students the same (Lavoie & Peter Rosen Productions, 1989). This means that one must be cognizant of different learning styles, and teach to them appropriately. The Virginia Department of Education (2000) has made it a requirement of its teachers to match instruction to the needs of the students, and to make the material meaningful to all students. These ideas have made their way into engineering education, and are reflected in Felder and Brent's (2005) work regarding student differences, and in O'Brien, Bernold, and Akroyd's (1998) study regarding the role of the Myers-Briggs Type Indicator in engineering students' performance. Adapting instruction to different learning styles requires experimentation with teaching methods, which is where this work comes into play. Keeping the fairness issue in mind, it is not yet clear how the student population should be divided among experimental conditions when conducting these experiments.

How does one distinguish between research and evaluation? Between a full-blown research program that requires Research Board oversight, and what every teacher does when experimenting with new teaching methods in a classroom? Cohen, Manion, and Morrison (2000) make a distinction between research and evaluation, but the work here meets some of their criteria for both. If the experiments in this project are considered "research," then they must be governed by governmental regulations regarding research on human subjects (U.S. Department of Health, Education, and Welfare, 1979; U.S. Department of Health and Human Services, 2005) and by the University's own regulations (University of Virginia, 2006d). If they are merely considered "evaluation," then perhaps they do not require such heavy-handed measures. This work will opt for a place in the fuzzy center, treating the experiments as something less than formal research but more than informal evaluation, while still adhering to the guidelines for research on human subjects.
Every instance of assessment and evaluation carries with it an implied value
system (Cohen, Manion, & Morrison, 2000). There are two main views of education: one
that views education as predictive, following the laws of natural science, and knowledge
as something that can be transferred from teacher to student; and another view that
education is dependent upon complex social factors and the variability of individuals, and
that learning is a process of constructing knowledge based upon individual perception
(Cohen, Manion, & Morrison, 2000). The first view leads to the use of quantitative
measurement, whereas the second leads to the use of more qualitative measurement
(Cohen, Manion, & Morrison, 2000). Therefore, choosing a method of measurement
(and evaluation of the results of such measurement) carries with it the corresponding
perspective. It would be nonsensical to espouse the view that learning is based upon individual perception, and yet rely solely upon quantitative measures and expect the
results to be predictive. The different views of education also lead to different preferred
methods of teaching: the first leading to the dominant use of direct instruction (lecturing),
and the second to constructivist techniques such as hands-on exercises or group work
(Snowman & Biehler, 2006). These differing viewpoints make necessary the presence of
the educational design guidelines component in this project. This component will declare
and define all assertions regarding education upon which the projects system of
assessments and evaluations are based.
Perhaps the most important ethical aspect of this work is the determination of the validity of this system. An evaluation of the system must be conducted, and this means an evaluation of an evaluation, a metaevaluation (Stufflebeam, 2001). The metaevaluation checks the validity of an evaluation by comparing it against a set of ethical principles and standards for evaluations (Stufflebeam, 2001). The metaevaluation, itself an evaluation, must also adhere to these principles and standards. At the same time, one must keep in mind the human aspect of the evaluation process, and not let a system override the intuitive conclusions of the evaluators. Intuitive conclusions are themselves colored by individual perspectives, and the definitions of evaluation criteria (such as the definition of "effectiveness" or "academic performance") may not be a matter of objective agreement. Making an effort to clarify these amorphous aspects, and attempting to formalize and systematize the process, is better than making no attempt to address a problem that obviously needs fixing.

Literature Review
A large part of this project consists of surveys of the relevant literature in various
areas. These areas include science and engineering education (including computer
science education), education design, educational research, assessment and evaluation,
and metaevaluation (the method of evaluating evaluations). Cohen, Manion, and Morrison (2000) have identified many styles of educational research, including naturalistic and ethnographic; historical; surveys, longitudinal, cross-sectional, and trend studies; case studies; correlational; ex post facto; experiments, quasi-experiments, and single-case research; and action research. There are many strategies for research, including the use of questionnaires, interviews, accounts, observation, tests, personal constructs, multi-dimensional measurement, and role-playing (Cohen, Manion, & Morrison, 2000). This project will pick the research styles and strategies that best fit the educational design guidelines determined in the project. Wulf (2002) said that "there is a tremendous amount known about the way that people learn. The cognitive psychologists have made major strides in understanding both the physiological and psychological properties of learning. Very little of that has been applied to the pedagogy of engineering education" (p. 8). This project will delve into this untapped wealth of information and apply more of it to the pedagogy of computer science education.
See the Bibliography section for a preliminary list of documents that will be
surveyed.

Project Activities
The primary objective of this project is to produce a system for evaluating
teaching methods in introductory computer science courses. This objective will be
achieved through the steps enumerated in the following Activities section. The degree of
success in achieving the primary objective will be measured through the use of Stufflebeam's (2001) process of metaevaluation. Metaevaluation is, simply stated, the evaluation of an evaluation, wherein one compares an evaluation against a set of evaluation standards (Stufflebeam, 2001). The set of standards against which the primary objective will be evaluated consists of the American Evaluation Association's (2004) Guiding Principles for Evaluators (see Appendix D) and The Program Evaluation Standards (see Appendix E) of the Joint Committee on Standards for Educational Evaluation (1994). Meeting all of the standards would be an ideal metaevaluation result, but metaevaluations are flexible, and success criteria may change during the course of the project. Stufflebeam (2001) recommends treating all standards with equal importance to begin with, expecting the weightings to change later, possibly to the point of determining that some standards do not apply at all.
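
As an illustration only, the following Python sketch mechanizes the bookkeeping just described: every standard starts with equal weight, weights can be revised, and a weight of None records a determination that a standard does not apply. The standard identifiers, ratings, and weights shown are invented placeholders, not values drawn from the AEA or Joint Committee materials.

    # Hypothetical sketch: scoring a metaevaluation checklist with adjustable
    # weights, in the spirit of Stufflebeam (2001). A weight of None marks a
    # standard judged not applicable, so it is excluded from the score.
    def metaevaluation_score(ratings, weights):
        """Return a 0-1 compliance score over the applicable standards.

        ratings: dict of standard id -> rating on a 0-4 scale
        weights: dict of standard id -> weight, or None if not applicable
        """
        total = 0.0
        maximum = 0.0
        for standard, rating in ratings.items():
            weight = weights.get(standard, 1.0)  # default: equal importance
            if weight is None:
                continue  # standard judged not applicable
            total += weight * rating
            maximum += weight * 4  # 4 is the highest possible rating
        return total / maximum if maximum else 0.0

    # Invented ratings for a few Program Evaluation Standards identifiers.
    ratings = {"U1": 3, "U2": 4, "F1": 2, "P3": 4, "A5": 3}
    weights = {"P3": 2.0, "A5": None}  # P3 weighted up; A5 ruled out

    print(f"Compliance: {metaevaluation_score(ratings, weights):.0%}")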
Two types of metaevaluations will be performed: formative and summative.
Formative metaevaluations will occur as part of the ongoing iterative process of
developing the system for evaluating teaching methods. A formal summative
metaevaluation will be performed at the end of the project to gauge the level of success in
achieving the primary objective. Selecting the final criteria for success and performing
the summative metaevaluation are part of the project activities.

The secondary objective of this project is to conduct three teaching method experiments. The success of these experiments will be measured using the system produced for the primary objective.
Activities
1. I will perform a literature survey to determine the current methods used in
computer science and engineering education to evaluate the effectiveness of
teaching methods.
2. I will perform a literature survey to determine the current methods used in the
field of education to evaluate the effectiveness of teaching methods.
3. I will talk to Professor Thomas Horton, the instructor of CS201, to determine
what methods he uses to evaluate the effectiveness of teaching methods.
4. I will perform a literature survey to determine what specific teaching methods and
educational theories are promoted in computer science and engineering education.
5. I will perform a literature survey to determine what specific teaching methods and
educational theories are promoted in the field of education.
6. I will talk to Professor Thomas Horton to determine what teaching methods he
currently uses, and what educational theories he supports.
7. I will assemble a collection of education design guidelines that are co-constructed
with the help of Professor Thomas Horton.
8. I will create a general framework for creating and conducting teaching method
experiments.
9. I will create a general system for creating assessments of the teaching method
experiments.
10. I will create a general system for evaluating data produced by the assessments of the teaching method experiments (a minimal sketch of such an analysis follows this list).
11. I will assemble the education design guidelines, the framework for creating and
conducting teaching method experiments, the system for creating the assessments
for the teaching method experiments, and the system for evaluating the data
produced by those assessments into a single artifact called the Teaching
Experiment Toolkit.
12. I will perform a summative metaevaluation of the Teaching Experiment Toolkit.
13. I will observe CS201 for the first three weeks of the Spring 2007 semester in
order to identify specific problems that require addressing.
14. I will compare notes from the Spring 2007 observations with notes already taken at
the beginning of the Fall 2006 semester to identify problems that are persistent
between the two semesters.
15. Professor Thomas Horton and I will co-construct three specific teaching method
experiments, according to the method proposed by this project, aimed at
addressing problems identified in the Spring 2007 CS201 class, with emphasis on
problems that have persisted from the Fall 2006 semester.
16. Professor Thomas Horton and I will co-construct the assessments for the three
teaching method experiments, according to the method proposed by this project.
17. Professor Thomas Horton and I will co-construct the evaluation criteria specific to
the three teaching method experiments, according to the method proposed by this
project.
18. Professor Thomas Horton will execute the first teaching method experiment.
19. Professor Thomas Horton will administer the assessment for the first teaching
method experiment.
20. Professor Thomas Horton will execute the second teaching method experiment.
21. Professor Thomas Horton will administer the assessment for the second teaching
method experiment.
22. Professor Thomas Horton will execute the third teaching method experiment.
23. Professor Thomas Horton will administer the assessment for the third teaching
method experiment.
24. Professor Thomas Horton and I will jointly evaluate the outcomes of the
assessments using the evaluation criteria according to the method proposed by
this project.
25. Professor Thomas Horton and I will evaluate the result of the evaluations of the
teaching method experiments to judge the usefulness of the system proposed by
this project.
26. I will record the results and reflect on the experience, recording these data in the
final thesis report.
27. I will assemble the final thesis document.
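
As referenced in activity 10, the sketch below illustrates one plausible quantitative slice of the evaluation system: comparing assessment scores from a section taught with an experimental method against a comparison section using Welch's t-test, which does not assume equal variances. The scores are invented, and a real evaluation would weigh such a result alongside qualitative evidence, consistent with the value-system discussion in the Preliminary Analysis section.

    # Hypothetical sketch of one quantitative piece of the evaluation system
    # from activity 10. All scores below are invented for illustration.
    from scipy import stats

    experiment_scores = [82, 74, 91, 68, 88, 79, 85, 90, 77, 84]
    comparison_scores = [75, 70, 83, 64, 80, 72, 78, 81, 69, 76]

    # Welch's t-test: does not assume the two sections have equal variances.
    t_stat, p_value = stats.ttest_ind(experiment_scores, comparison_scores,
                                      equal_var=False)

    mean_diff = (sum(experiment_scores) / len(experiment_scores)
                 - sum(comparison_scores) / len(comparison_scores))
    print(f"Mean difference: {mean_diff:.1f} points")
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
    if p_value < 0.05:
        print("Unlikely to be chance alone; weigh against other evidence.")
    else:
        print("No statistically significant difference detected.")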
Schedule
See Appendix F for task start date, end date, estimated duration, and Gantt chart.

Personnel
The following people are involved in this project.

Edward Pan (the student):
o Manages the project
o Meets with the technical advisor at least once a month, and more often during the execution, assessment, and evaluation of teaching method experiments
o Performs the literature surveys
o Creates the system for creating the teaching method experiments, assessments, and evaluations
o Assembles the Teaching Experiment Toolkit
o Performs formative and summative metaevaluations of the Teaching Experiment Toolkit on component and system levels
o Performs field observations of CS201
o Identifies problems in CS201 that require addressing
o Co-constructs the education design guidelines
o Recommends teaching methods to be used in the specific teaching method experiment test cases
o Co-constructs teaching method experiments, assessments, and evaluations
o Extracts data from assessments
o Jointly performs evaluations of teaching method experiments
o Jointly evaluates and judges the usefulness of the system proposed by this project
o Records results and reflections on the experience
o Writes and delivers the thesis document

Thomas Horton (technical advisor and CS201 instructor):
o Meets with the student at least once a month, and more often during the execution, assessment, and evaluation of teaching method experiments
o Identifies problems in CS201 that require addressing
o Co-constructs the education design guidelines
o Selects and approves recommended teaching methods to be used in specific teaching method experiment test cases
o Co-constructs teaching method experiments, assessments, and evaluations
o Executes teaching method experiments and administers assessments
o Jointly performs evaluations of teaching method experiments
o Jointly evaluates and judges the usefulness of the system proposed by this project
o Approves the thesis document and the work that it represents

Bryan Pfaffenberger (STS advisor):
o Evaluates the thesis document
o Recommends improvements and possible approaches to the project
o Approves the thesis document and the work that it represents

Jerry Short (education advisor):
o Checks the reasonableness of the educational aspects of the work
o Recommends improvements and possible approaches to the project

CS201 Students:
o Participate in teaching method experiments
o Complete teaching method experiment assessment instruments

Resources
The key resources that may be required include:
o Internet access: This can reasonably be expected to be available.
o Internet databases (ACM Portal, JSTOR): In the event that these are unavailable, attempts will be made to execute searches and obtain copies of articles via the UVA libraries.
o UVA libraries (Engineering and Education): These can reasonably be expected to be available.
o Access to CS201: This may be a problem if my Spring 2007 class schedule changes and a course in which I am registered occupies that time slot (WF 1200-1250). If this happens, I will not be able to perform field observations and must rely upon notes from the Fall 2006 semester observations and recommendations from Professor Horton.
o Access to CS201 student data: These data include historical tracking of individual students' academic performance and attendance. If I am unable to view student data (most likely due to privacy issues), there are three options: 1) the student data are identified through pseudonyms whose relations to the actual students are hidden from me (a minimal pseudonymization sketch follows this list), 2) Professor Horton processes the student data without me, or 3) we do not use student data.
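
As an illustration of option 1 above, the pseudonym table could be generated by someone with authorized access, such as Professor Horton, before any records reach me. The sketch below is hypothetical: it derives stable pseudonyms from student IDs using a keyed hash (HMAC), so the mapping cannot be reversed without the secret key, which would never be shared.

    # Hypothetical sketch of option 1: an authorized person derives
    # irreversible pseudonyms from student IDs with a keyed hash (HMAC).
    # Without the secret key, pseudonyms cannot be linked back to students.
    import hashlib
    import hmac

    SECRET_KEY = b"held-only-by-the-instructor"  # never shared with me

    def pseudonym(student_id: str) -> str:
        digest = hmac.new(SECRET_KEY, student_id.encode("utf-8"),
                          hashlib.sha256).hexdigest()
        return f"student-{digest[:8]}"  # short, stable, non-reversible label

    # I would receive only pseudonymized records such as these.
    for sid in ["mst3k", "abc2de"]:
        print(sid, "->", pseudonym(sid))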

Expected Outcomes
When the thesis is submitted, I expect to have successfully created a useful system for evaluating teaching methods in computer science classes. The system will be general enough to apply across engineering education, yet specific enough to be practical for an individual class such as CS201. The system will provide instructors with the tools they need to try innovative teaching methods in their classes, with the goal of improving the educational experience for both the instructor and the students. Because CS201 is directly involved in this work, the course itself stands to improve as a result. Professor Horton will learn something new about education and be empowered to improve his teaching. I will know far more about education than I did before undertaking this thesis.

Bibliography
The following list of sources includes those cited in this paper as well as key
sources that I plan to use during the project. For a list of sources cited in this paper alone,
see the References section that follows.

Science and Engineering Education


Brent, R., Felder, R., Regan, T., Walser, A., Carlson-Dakes, C., Evans, D., Malave, C.,
Sanders, K., & McGourty, J. (2000). Engineering faculty development: a
multicoalition perspective. In 2000 Annual ASEE Conference.
Carlson, L. E., & Sullivan, J. F. (1999). Hands-on engineering: learning by doing in the
integrated teaching and learning program. International Journal of Engineering
Education, 15(1), 20-31.
Cohoon, J. M., & Chen, L. (2003, March). Migrating out of computer science.
Computing Research News, 15(2), 2-3.
Dym, C. L., Agogino, A. M., Eris, O., Frey, D. D., & Leifer, L. J. (2005, January).
Engineering design thinking, teaching, and learning. Journal of Engineering
Education, 103-120.
Felder, R. M., Woods, D. R., Stice, J. E., & Rugarcia, A. (2000). The future of engineering education: II. Teaching methods that work. Chemical Engineering Education, 34(1), 26-39.
Ibrahim, A. (1999). Current issues in engineering education quality. Global Journal of
Engineering Education, 3(3), 301-305.
Margolis, J., & Fisher, A. (2002). Unlocking the clubhouse: women in computing. Cambridge, MA: MIT Press.
Microsoft. (2005, September 12). More than Fun and Games: New Computer Science
Courses Attract Students with Educational Games. Retrieved September 13,
2006, from http://www.microsoft.com/presspass/features/2005/sep05/0912CSGames.mspx
National Science Foundation. (2006, October). Funding - engineering education
programs. Retrieved November 4, 2006, from
http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=13374&org=NSF&more=Y
O'Brien, T. P., Bernold, L. E., & Akroyd, D. (1998). Myers-Briggs Type Indicator and academic achievement in engineering education. International Journal of Engineering Education, 14(5), 311-315.
Sanders, K. E., & McCartney, R. (2003, February). Program assessment tools in computer science: a report from the trenches. In SIGCSE '03 (pp. 31-35).
Williams, L., & Upchurch, R. L. (2001, March). In support of student pair-programming.
ACM SIGCSE Bulletin, 33(1), 327-331.
Wulf, W. A. (2002). The urgency of engineering education reform. Journal of SMET
Education, 3/3&4 July-December, 3-9.

Education Design
Bain, K. (2004). What the best college teachers do. Cambridge, MA: Harvard University
Press.
Dick, W., Carey, L., & Carey, J. O. (2005). The systematic design of instruction (6th ed.).
Boston: Pearson.
Felder, R. M., & Brent, R. (2005, January). Understanding student differences. Journal of
Engineering Education, 57-72.
Fink, L. D. (2003). Creating significant learning experiences. San Francisco: Jossey-Bass.
Hagler, M. O., & Marcy, W. M. (1999, January). Strategies for designing engineering
courses. Journal of Engineering Education, 11-13.
Kulik, C. C., & Kulik, J. A. (1990). Effectiveness of mastery learning programs: a meta-analysis. Review of Educational Research, 60(2), 265-299.
Kulik, J. A., Cohen, P. A., & Ebeling, B. J. (1980). Effectiveness of programmed
instruction in higher education: a meta-analysis of findings. Educational
Evaluation and Policy Analysis, 2(6), 51-64.
Mager, R. F. (1997). How to turn learners on...without turning them off (3rd ed.). Atlanta: CEP Press.
Mager, R. F. (1997). Making instruction work (2nd ed.). Atlanta: CEP Press.
McKeachie, W. J., & Kulik, J. A. (1975). Effective college teaching. Review of Research
in Education, 3, 165-209.
McKeachie, W. J., & Svinicki, M. (2006). McKeachies teaching tips : strategies,
research, and theory for college and university teachers. Boston: Houghton
Mifflin.
National Research Council. (2000). How people learn: brain, mind, experience, and school. Washington, DC: National Academy Press.
Scriven, M. (1982). Professorial ethics. The Journal of Higher Education, 53(3), 307-317.
Snowman, J., & Biehler, R. (2006). Psychology applied to teaching (11th ed.). Boston:
Houghton Mifflin.
Springer, L., Stanne, M. E., & Donovan, S. S. (1999). Effects of small-group learning on undergraduates in science, mathematics, engineering, and technology: a meta-analysis. Review of Educational Research, 69(1), 21-51.

Educational Research
Cohen, L., Manion, L., & Morrison, K. (2000). Research methods in education (5th ed.).
Abingdon: RoutledgeFalmer.
Creswell, J. W., & Miller, D. L. (2000). Determining validity in qualitative inquiry. Theory Into Practice, 39(3), 124-130.
U.S. Department of Health, Education, and Welfare. (1979, April 18). The Belmont
report: ethical principles and guidelines for the protection of human subjects of
research. Retrieved October 26, 2006, from
http://www.hhs.gov/ohrp/humansubjects/guidance/belmont.htm
U.S. Department of Health and Human Services. (2005, June 23). Code of federal
regulations, title 45 public welfare, part 46 protection of human subjects.
Retrieved October 26, 2006, from
http://www.hhs.gov/ohrp/humansubjects/guidance/45cfr46.htm#46.305
University of Virginia. (2006). SBS guide for researchers: education. Retrieved October
26, 2006, from http://www.virginia.edu/vprgs/irb/sbs_help_education.html

Assessment and Evaluation


Blandford, D. K., & Hwang, D. J. (2003). Five easy but effective assessment methods. In
Proceedings of the 34th SIGCSE technical symposium on computer science
education, 41-44.
Costin, F., Greenough, W. T., & Menges, R. J. (1971, December). Student ratings of
college teaching: reliability, validity, and usefulness. Review of Educational
Research, 41(5), 511-535.
Jones, T. (2003). Self-reported instrument for measuring student learning outcomes. In
Proceedings of the 2001 American Society for Engineering Education Annual
Conference & Exposition.
Kulik, J. A., & McKeachie, W. J. (1975). The evaluation of teachers in higher education.
Review of Research in Education, 3, 210-240.
Mager, R. F. (1997). Measuring instructional results (3rd ed.). Atlanta: CEP Press.
McGourty, J. (1999, October). Four strategies to integrate assessment into the
engineering educational environment. Journal of Engineering Education, 391-395.
McGourty, J., Sebastian, C., & Swart, W. (1998, October). Developing a comprehensive
assessment program for engineering education. Journal of Engineering
Education, 355-361.
Olds, B. M., Moskal, B. M., & Miller, R. L. (2005, January). Assessment in engineering
education: evolution, approaches and future collaborations. Journal of
Engineering Education, 13-25.
Parker, J. R., & Becker, K. (2003, September). Measuring effectiveness of constructivist
and behaviourist assignments in CS102. ACM SIGCSE Bulletin, 35(3), 40-44.
Waters, R. & McCracken, M. (1997). Assessment and evaluation in problem-based
learning. In 1997 Frontiers in Education Conference (pp. 689-693).
Wise, J., Lee, S. H., Litzinger, T. A., Marra, R. M., & Palmer, B. (2001). Measuring
cognitive growth in engineering undergraduates: a longitudinal study. In
Proceedings of the 2001 American Society for Engineering Education Annual
Conference & Exposition.

Metaevaluation
American Evaluation Association. (2004, July). Guiding principles for evaluators.
Retrieved November 4, 2006, from
http://www.eval.org/Publications/GuidingPrinciples.asp
Joint Committee on Standards for Educational Evaluation. (1994). The program
evaluation standards. Retrieved November 4, 2006, from
http://www.wmich.edu/evalctr/jc/PGMSTNDS-SUM.htm
Schwartz, R., & Mayne, J. (2004). Assuring the quality of evaluative information: theory
and practice. Evaluation and Program Planning, 28(2005), 1-14.
St. Pierre, R. G. (1979). The role of multiple analyses in quasi-experimental evaluations.
Educational Evaluation and Policy Analysis, 1(6), 29-35.
St. Pierre, R. G. (1982). Follow through: a case study in meta-evaluation research.
Educational Evaluation and Policy Analysis, 4(1), 47-55.
Stufflebeam, D. L. (2001). The metaevaluation imperative. American Journal of
Evaluation, 22(2), 183-209.

Miscellaneous
Lavoie, R. D. (Creator, Writer), & Peter Rosen Productions (Producer). (1989).
Understanding learning disabilities: how difficult can this be? [DVD].
Greenwich, CT: Eagle Hill Foundation, Inc., & PBS Video.
National Education Association. (1975). Code of ethics of the education profession.
Retrieved November 5, 2006, from http://www.nea.org/aboutnea/code.html
University of Virginia. (2006). Computer engineering. Undergraduate Record 2006-2007.
Retrieved September 13, 2006, from http://records.ureg.virginia.edu/
preview_program.php?catoid=7&poid=850
University of Virginia. (2006). Computer science. Undergraduate Record 2006-2007.
Retrieved September 13, 2006, from http://records.ureg.virginia.edu/
preview_program.php?catoid=7&poid=848
University of Virginia. (2006). Electrical engineering. Undergraduate Record 2006-2007.
Retrieved September 13, 2006, from http://records.ureg.virginia.edu/
preview_program.php?catoid=7&poid=849
University of Virginia. (2006). Systems engineering. Undergraduate Record 2006-2007.
Retrieved September 13, 2006, from http://records.ureg.virginia.edu/
preview_program.php?catoid=7&poid=855
Virginia Department of Education, Division of Teacher Education and Licensure. (2000,
January 6). Guidelines for uniform performance standards and evaluation criteria
for teachers, administrators, and superintendents. Richmond: Author.

References
American Evaluation Association. (2004, July). Guiding principles for evaluators.
Retrieved November 4, 2006, from
http://www.eval.org/Publications/GuidingPrinciples.asp
Cohen, L., Manion, L., & Morrison, K. (2000). Research methods in education (5th ed.).
Abingdon: RoutledgeFalmer.
Cohoon, J. M., & Chen, L. (2003, March). Migrating out of computer science.
Computing Research News, 15(2), 2-3.
Felder, R. M., & Brent, R. (2005, January). Understanding student differences. Journal of
Engineering Education, 57-72.
Joint Committee on Standards for Educational Evaluation. (1994). The program
evaluation standards. Retrieved November 4, 2006, from
http://www.wmich.edu/evalctr/jc/PGMSTNDS-SUM.htm
Lavoie, R. D. (Creator, Writer), & Peter Rosen Productions (Producer). (1989).
Understanding learning disabilities: how difficult can this be? [DVD].
Greenwich, CT: Eagle Hill Foundation, Inc., & PBS Video.
Microsoft. (2005, September 12). More than Fun and Games: New Computer Science
Courses Attract Students with Educational Games. Retrieved September 13,
2006, from http://www.microsoft.com/presspass/features/2005/sep05/0912CSGames.mspx
O'Brien, T. P., Bernold, L. E., & Akroyd, D. (1998). Myers-Briggs Type Indicator and
academic achievement in engineering education. International Journal of
Engineering Education, 14(5), 311-315.
Scriven, M. (1982). Professorial ethics. The Journal of Higher Education, 53(3), 307-317.
Snowman, J., & Biehler, R. (2006). Psychology applied to teaching (11th ed.). Boston:
Houghton Mifflin.
Stufflebeam, D. L. (2001). The metaevaluation imperative. American Journal of
Evaluation, 22(2), 183-209.
U.S. Department of Health, Education, and Welfare. (1979, April 18). The Belmont
report: ethical principles and guidelines for the protection of human subjects of
research. Retrieved October 26, 2006, from
http://www.hhs.gov/ohrp/humansubjects/guidance/belmont.htm
U.S. Department of Health and Human Services. (2005, June 23). Code of federal
regulations, title 45 public welfare, part 46 protection of human subjects.
Retrieved October 26, 2006, from
http://www.hhs.gov/ohrp/humansubjects/guidance/45cfr46.htm#46.305
University of Virginia. (2006a). Computer engineering. Undergraduate Record 2006-2007. Retrieved September 13, 2006, from http://records.ureg.virginia.edu/
preview_program.php?catoid=7&poid=850
University of Virginia. (2006b). Computer science. Undergraduate Record 2006-2007.
Retrieved September 13, 2006, from http://records.ureg.virginia.edu/
preview_program.php?catoid=7&poid=848
University of Virginia. (2006c). Electrical engineering. Undergraduate Record 2006-2007. Retrieved September 13, 2006, from http://records.ureg.virginia.edu/
preview_program.php?catoid=7&poid=849
University of Virginia. (2006d). SBS guide for researchers: education. Retrieved
October 26, 2006, from
http://www.virginia.edu/vprgs/irb/sbs_help_education.html
University of Virginia. (2006e). Systems engineering. Undergraduate Record 2006-2007.
Retrieved September 13, 2006, from http://records.ureg.virginia.edu/
preview_program.php?catoid=7&poid=855
Virginia Department of Education, Division of Teacher Education and Licensure. (2000,
January 6). Guidelines for uniform performance standards and evaluation criteria
for teachers, administrators, and superintendents. Richmond: Author.
Wulf, W. A. (2002). The urgency of engineering education reform. Journal of SMET
Education, 3/3&4 July-December, 3-9.

Appendix A: Budget and Equipment Checklist


No budgetary or equipment requirements are expected for this project.

Appendix B: Biographical Sketch of Student


Edward Pan is a 4th-year undergraduate computer science student at the
University of Virginia (UVA). He has a deep interest in education, especially higher
education. He would like to work to improve education, and plans to pursue graduate
studies in education. While at UVA, he has taken a few education courses, including
Introduction to Educational Psychology (which he has audited twice after having
originally taken it for credit); Introduction to Child Development; Gender, Technology,
and Education; and an independent study in Educational Psychology (for which he wrote
a semester paper on possible ways that teaching could be improved in the School of
Engineering and Applied Science at UVA). He has taken many computer science courses
at UVA, including CS201, and also an independent study in which he looked at a way of
improving a university seminar class in robotics run by the Department of Computer
Science. He previously attended Northern Virginia Community College, and The George
Washington University. He worked for ten years in information technology, the majority
of which he spent as a computer network engineer. He received an A.S. in Computer
Science, and an A.S. in Mathematics, both from Northern Virginia Community College.

Appendix C: Preliminary Outline of Thesis


The preliminary outline of the thesis will be submitted at a later date.

Appendix D: Guiding Principles for Evaluators
A. Systematic Inquiry: Evaluators conduct systematic, data-based inquiries about
whatever is being evaluated.
B. Competence: Evaluators provide competent performance to stakeholders.
C. Integrity/Honesty: Evaluators ensure the honesty and integrity of the entire evaluation
process.
D. Respect for People: Evaluators respect the security, dignity and self-worth of the
respondents, program participants, clients, and other stakeholders with whom they
interact.
E. Responsibilities for General and Public Welfare: Evaluators articulate and take into
account the diversity of interests and values that may be related to the general and
public welfare.
(American Evaluation Association, 2004)

Appendix E: The Program Evaluation Standards


Utility Standards
The utility standards are intended to ensure that an evaluation will serve the information
needs of intended users.

U1 Stakeholder Identification Persons involved in or affected by the evaluation should be identified, so that their needs can be addressed.
U2 Evaluator Credibility The persons conducting the evaluation should be both
trustworthy and competent to perform the evaluation, so that the evaluation
findings achieve maximum credibility and acceptance.
U3 Information Scope and Selection Information collected should be broadly selected to
address pertinent questions about the program and be responsive to the needs and
interests of clients and other specified stakeholders.
U4 Values Identification The perspectives, procedures, and rationale used to interpret the
findings should be carefully described, so that the bases for value judgments are
clear.
U5 Report Clarity Evaluation reports should clearly describe the program being
evaluated, including its context, and the purposes, procedures, and findings of the
evaluation, so that essential information is provided and easily understood.
U6 Report Timeliness and Dissemination Significant interim findings and evaluation
reports should be disseminated to intended users, so that they can be used in a
timely fashion.
U7 Evaluation Impact Evaluations should be planned, conducted, and reported in ways
that encourage follow-through by stakeholders, so that the likelihood that the
evaluation will be used is increased.

Feasibility Standards
The feasibility standards are intended to ensure that an evaluation will be realistic,
prudent, diplomatic, and frugal.

F1 Practical Procedures The evaluation procedures should be practical, to keep disruption to a minimum while needed information is obtained.
F2 Political Viability The evaluation should be planned and conducted with anticipation
of the different positions of various interest groups, so that their cooperation may
be obtained, and so that possible attempts by any of these groups to curtail
evaluation operations or to bias or misapply the results can be averted or
counteracted.
F3 Cost Effectiveness The evaluation should be efficient and produce information of
sufficient value, so that the resources expended can be justified.

Propriety Standards
The propriety standards are intended to ensure that an evaluation will be conducted
legally, ethically, and with due regard for the welfare of those involved in the
evaluation, as well as those affected by its results.

P1 Service Orientation Evaluations should be designed to assist organizations to address
and effectively serve the needs of the full range of targeted participants.
P2 Formal Agreements Obligations of the formal parties to an evaluation (what is to be
done, how, by whom, when) should be agreed to in writing, so that these parties
are obligated to adhere to all conditions of the agreement or formally to
renegotiate it.
P3 Rights of Human Subjects Evaluations should be designed and conducted to respect
and protect the rights and welfare of human subjects.
P4 Human Interactions Evaluators should respect human dignity and worth in their
interactions with other persons associated with an evaluation, so that participants
are not threatened or harmed.
P5 Complete and Fair Assessment The evaluation should be complete and fair in its
examination and recording of strengths and weaknesses of the program being
evaluated, so that strengths can be built upon and problem areas addressed.
P6 Disclosure of Findings The formal parties to an evaluation should ensure that the full
set of evaluation findings along with pertinent limitations are made accessible to
the persons affected by the evaluation and any others with expressed legal rights
to receive the results.
P7 Conflict of Interest Conflict of interest should be dealt with openly and honestly, so
that it does not compromise the evaluation processes and results.
P8 Fiscal Responsibility The evaluator's allocation and expenditure of resources should
reflect sound accountability procedures and otherwise be prudent and ethically
responsible, so that expenditures are accounted for and appropriate.

Accuracy Standards
The accuracy standards are intended to ensure that an evaluation will reveal and convey
technically adequate information about the features that determine worth or merit
of the program being evaluated.

A1 Program Documentation The program being evaluated should be described and documented clearly and accurately, so that the program is clearly identified.
A2 Context Analysis The context in which the program exists should be examined in
enough detail, so that its likely influences on the program can be identified.
A3 Described Purposes and Procedures The purposes and procedures of the evaluation
should be monitored and described in enough detail, so that they can be identified
and assessed.
A4 Defensible Information Sources The sources of information used in a program
evaluation should be described in enough detail, so that the adequacy of the
information can be assessed.
A5 Valid Information The information-gathering procedures should be chosen or
developed and then implemented so that they will assure that the interpretation
arrived at is valid for the intended use.
A6 Reliable Information The information-gathering procedures should be chosen or
developed and then implemented so that they will assure that the information
obtained is sufficiently reliable for the intended use.
A7 Systematic Information The information collected, processed, and reported in an
evaluation should be systematically reviewed, and any errors found should be
corrected.
A8 Analysis of Quantitative Information Quantitative information in an evaluation should
be appropriately and systematically analyzed so that evaluation questions are
effectively answered.
A9 Analysis of Qualitative Information Qualitative information in an evaluation should
be appropriately and systematically analyzed so that evaluation questions are
effectively answered.
A10 Justified Conclusions The conclusions reached in an evaluation should be explicitly
justified, so that stakeholders can assess them.
A11 Impartial Reporting Reporting procedures should guard against distortion caused by
personal feelings and biases of any party to the evaluation, so that evaluation
reports fairly reflect the evaluation findings.
A12 Metaevaluation The evaluation itself should be formatively and summatively
evaluated against these and other pertinent standards, so that its conduct is
appropriately guided and, on completion, stakeholders can closely examine its
strengths and weaknesses.
(Joint Committee on Standards for Educational Evaluation, 1994)

Appendix F: Thesis Project Gantt Chart
