Undergraduate Thesis Project Proposal School of Engineering and Applied Science University of Virginia
Submitted by
Edward Pan
Computer Science
STS 401
Section 16 (2 p.m.)
November 7, 2006
On my honor as a University student, on this assignment I have neither given nor
received unauthorized aid as defined by the Honor Guidelines for papers in Science,
Technology, and Society courses.
Signed
Approved
Date
Technical Advisor Thomas Horton
Approved
Date
Science, Technology, and Society Advisor
Bryan Pfaffenberger
Contents
Abstract................................................................................................................................1
Introduction..........................................................................................................................1
Rationale and Objectives.....................................................................................................3
Preliminary Analysis of Social Context and Ethical Implications.......................................3
Literature Review................................................................................................................7
Project Activities..................................................................................................................8
Activities..........................................................................................................................9
Schedule.........................................................................................................................11
Personnel........................................................................................................................11
Resources.......................................................................................................................14
Expected Outcomes...........................................................................................................14
Bibliography......................................................................................................................15
Science and Engineering Education..............................................................................15
Education Design...........................................................................................................17
Educational Research.....................................................................................................18
Assessment and Evaluation...........................................................................................19
Metaevaluation..............................................................................................................20
Miscellaneous................................................................................................................21
References..........................................................................................................................23
Appendix A: Budget and Equipment Checklist.................................................................26
Appendix B: Biographical Sketch of Student....................................................................26
Appendix C: Preliminary Outline of Thesis......................................................................26
Appendix D: Guiding Principles for Evaluators................................................................27
Appendix E: The Program Evaluation Standards..............................................................28
Appendix F: Thesis Project Gantt Chart............................................................................33
Abstract
Enrollment and retention in computer science are declining, with the brunt of the
impact occurring among freshman and sophomore students. CS201 is an introductory
course in software engineering that is typically taken by freshman and sophomore
engineering students. CS201 has been plagued by student dissatisfaction and frustration.
Instructors would like to improve the class by trying new teaching methods, but they lack
a system with which to evaluate these methods. With the help of Professor Thomas
Horton of the Department of Computer Science, the students of CS201, Professor Jerry
Short of the Curry School of Education, and Professor Bryan Pfaffenberger of the
Department of Science, Technology, and Society, this project will establish educational
design guidelines for CS201, develop a system for evaluating teaching methods, examine
the current state of CS201 and identify problems, and test three solutions to these
problems using the system developed.
Introduction
Enrollment in computer science is declining. According to an article from
Microsoft (2005), the percentage of incoming undergraduates indicating plans to major
in computer science declined by more than 60 percent between fall 2000 and 2004, and is
now 70 percent lower than its peak in the early 1980s. Attrition in computer science is
increasing (Cohoon & Chen, 2003). Both of these factors combine to create a dismal
situation for computer science in higher education. Attrition appears to be highest among
freshman and sophomore students (Cohoon & Chen, 2003), who are usually in
Project Activities
The primary objective of this project is to produce a system for evaluating
teaching methods in introductory computer science courses. This objective will be
achieved through the steps enumerated in the following Activities section. The degree of
success in achieving the primary objective will be measured through the use of
Stufflebeam's (2001) process of metaevaluation. Metaevaluation is, simply stated, the
evaluation of an evaluation, wherein one compares an evaluation against a set of
evaluation standards (Stufflebeam, 2001). The set of standards against which the primary
objective will be evaluated consists of the American Evaluation Association's (2004)
Guiding Principles for Evaluators (see Appendix D) and The Program Evaluation
Standards (see Appendix E) of the Joint Committee on Standards for Educational Evaluation (1994).
Meeting all of the standards would be an ideal metaevaluation result, but metaevaluations
are flexible, and the success criteria may change during the course of the project.
Stufflebeam (2001) recommends initially treating all standards as equally important, with
the expectation that the weightings will change later; some standards may turn out not to
apply at all.
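As an illustrative sketch only (the standard names, ratings, and weights below are hypothetical, not drawn from the proposal), Stufflebeam's recommendation can be read as a weighted scoring scheme: begin with equal weights across all standards, then adjust the weights, possibly to zero for standards judged not to apply.

```python
# Hypothetical sketch of metaevaluation scoring: start with equal weights
# across standards, then reweight (a zero weight marks a standard as
# not applicable and removes it from the average).

def metaevaluation_score(ratings, weights=None):
    """ratings: {standard: compliance in [0, 1]}; weights default to equal."""
    if weights is None:
        weights = {s: 1.0 for s in ratings}  # equal importance to start
    applicable = {s: w for s, w in weights.items() if w > 0}
    total = sum(applicable.values())
    return sum(ratings[s] * w for s, w in applicable.items()) / total

# Hypothetical ratings against the four standard groups:
ratings = {"Utility": 0.8, "Feasibility": 0.6, "Propriety": 0.9, "Accuracy": 0.7}
equal = metaevaluation_score(ratings)  # equal weights
adjusted = metaevaluation_score(
    ratings,
    {"Utility": 2.0, "Feasibility": 1.0, "Propriety": 1.0, "Accuracy": 0.0},
)  # Utility emphasized; Accuracy judged not applicable
```

The same ratings produce different scores under the two weightings, which is the flexibility Stufflebeam describes.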
Two types of metaevaluations will be performed: formative and summative.
Formative metaevaluations will occur as part of the ongoing iterative process of
developing the system for evaluating teaching methods. A formal summative
metaevaluation will be performed at the end of the project to gauge the level of success in
achieving the primary objective. Selecting the final criteria for success and performing
the summative metaevaluation are part of the project activities.
Personnel
The following people are involved in this project.

Edward Pan (Student):
- Meets with the technical advisor at least once a month, and more often during the
execution, assessment, and evaluation of teaching method experiments
- Creates the system for creating the teaching method experiments, assessments,
and evaluations
- Jointly evaluates and judges the usefulness of the system proposed by this project

Thomas Horton (Technical Advisor):
- Meets with the student at least once a month, and more often during the
execution, assessment, and evaluation of teaching method experiments
- Jointly evaluates and judges the usefulness of the system proposed by this project

CS201 Students:
Resources
The key resources that may be required include:
Internet databases (ACM Portal, JSTOR): In the event that these are unavailable,
attempts will be made to execute searches and obtain copies of articles via the
UVA libraries.
Access to CS201 student data: These data include historical tracking of individual
students' academic performance and attendance. If I am unable to view student
data (most likely due to privacy issues), there are three options: 1) the student data
are identified through pseudonyms whose relations to the actual students are
hidden from me, 2) Professor Horton processes the student data without me, or 3)
we do not use student data.
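The first option can be sketched in a few lines; this is a hypothetical illustration (the record fields and IDs are invented, not from the project), showing how identifiers could be replaced with random pseudonyms while the mapping stays with the instructor.

```python
# Hypothetical sketch of option 1: replace student identifiers with random
# pseudonyms; the mapping is retained separately (e.g., by the instructor)
# and never shared with the evaluator.

import secrets

def pseudonymize(records, id_field="student_id"):
    """Return (pseudonymized records, secret real-id -> pseudonym mapping)."""
    mapping = {}
    out = []
    for rec in records:
        real_id = rec[id_field]
        if real_id not in mapping:
            mapping[real_id] = "S" + secrets.token_hex(4)  # e.g., "S3fa91c02"
        out.append({**rec, id_field: mapping[real_id]})
    return out, mapping

# Invented example records:
records = [{"student_id": "mst3k", "grade": 92},
           {"student_id": "abc2d", "grade": 85}]
anon, key = pseudonymize(records)  # 'anon' is shareable; 'key' is not
```

Repeated appearances of the same student receive the same pseudonym, so longitudinal tracking of performance and attendance remains possible without revealing identities.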
Bibliography
The following list of sources includes those cited in this paper as well as key
sources that I plan to use during the project. For a list of sources cited in this paper alone,
see the References section that follows.
Education Design
Bain, K. (2004). What the best college teachers do. Cambridge, MA: Harvard University
Press.
Dick, W., Carey, L., & Carey, J. O. (2005). The systematic design of instruction (6th ed.).
Boston: Pearson.
Felder, R. M., & Brent, R. (2005, January). Understanding student differences. Journal of
Engineering Education, 57-72.
Fink, L. D. (2003). Creating significant learning experiences. San Francisco: Jossey-Bass.
Hagler, M. O., & Marcy, W. M. (1999, January). Strategies for designing engineering
courses. Journal of Engineering Education, 11-13.
Kulik, C. C., & Kulik, J. A. (1990). Effectiveness of mastery learning programs: a
meta-analysis. Review of Educational Research, 60(2), 265-299.
Metaevaluation
American Evaluation Association. (2004, July). Guiding principles for evaluators.
Retrieved November 4, 2006, from
http://www.eval.org/Publications/GuidingPrinciples.asp
Joint Committee on Standards for Educational Evaluation. (1994). The program
evaluation standards. Retrieved November 4, 2006, from
http://www.wmich.edu/evalctr/jc/PGMSTNDS-SUM.htm
Schwartz, R., & Mayne, J. (2004). Assuring the quality of evaluative information: theory
and practice. Evaluation and Program Planning, 28, 1-14.
St. Pierre, R. G. (1979). The role of multiple analyses in quasi-experimental evaluations.
Educational Evaluation and Policy Analysis, 1(6), 29-35.
St. Pierre, R. G. (1982). Follow through: a case study in meta-evaluation research.
Educational Evaluation and Policy Analysis, 4(1), 47-55.
Stufflebeam, D. L. (2001). The metaevaluation imperative. American Journal of
Evaluation, 22(2), 183-209.
Miscellaneous
Lavoie, R. D. (Creator, Writer), & Peter Rosen Productions (Producer). (1989).
Understanding learning disabilities: how difficult can this be? [DVD].
Greenwich, CT: Eagle Hill Foundation, Inc., & PBS Video.
References
American Evaluation Association. (2004, July). Guiding principles for evaluators.
Retrieved November 4, 2006, from
http://www.eval.org/Publications/GuidingPrinciples.asp
Cohen, L., Manion, L., & Morrison, K. (2000). Research methods in education (5th ed.).
Abingdon: RoutledgeFalmer.
Cohoon, J. M., & Chen, L. (2003, March). Migrating out of computer science.
Computing Research News, 15(2), 2-3.
Felder, R. M., & Brent, R. (2005, January). Understanding student differences. Journal of
Engineering Education, 57-72.
Joint Committee on Standards for Educational Evaluation. (1994). The program
evaluation standards. Retrieved November 4, 2006, from
http://www.wmich.edu/evalctr/jc/PGMSTNDS-SUM.htm
Lavoie, R. D. (Creator, Writer), & Peter Rosen Productions (Producer). (1989).
Understanding learning disabilities: how difficult can this be? [DVD].
Greenwich, CT: Eagle Hill Foundation, Inc., & PBS Video.
Microsoft. (2005, September 12). More than Fun and Games: New Computer Science
Courses Attract Students with Educational Games. Retrieved September 13,
2006, from http://www.microsoft.com/presspass/features/2005/sep05/0912CSGames.mspx
O'Brien, T. P., Bernold, L. E., & Akroyd, D. (1998). Myers-Briggs Type Indicator and
academic achievement in engineering education. International Journal of
Engineering Education, 14(5), 311-315.
Feasibility Standards
The feasibility standards are intended to ensure that an evaluation will be realistic,
prudent, diplomatic, and frugal.
Propriety Standards
The propriety standards are intended to ensure that an evaluation will be conducted
legally, ethically, and with due regard for the welfare of those involved in the
evaluation, as well as those affected by its results.
Accuracy Standards
The accuracy standards are intended to ensure that an evaluation will reveal and convey
technically adequate information about the features that determine worth or merit
of the program being evaluated.