Taking Notes in The Digital Age
Existing studies of how note-taking tools affect student learning typically find that students who
choose to take notes on a computer perform worse on assessments than students who take notes on
paper. To our knowledge, the literature has not disentangled whether this result is due to the note-
taking process itself, or instead due to the characteristics of students who choose to use computers
to take notes. To answer this question, we employ a within-subject randomized controlled trial and conclude that taking notes on computers does not have a statistically meaningful effect on student assessment performance.
Introduction
The ability of computers to improve pedagogical outcomes has long been debated, and their
use in taking notes is increasingly prevalent. Yet, although taking notes is considered a vital part of
learning, particularly in lecture-oriented courses, the effectiveness of taking notes via computers is
unresolved. Latent student characteristics may be associated with course performance as well as the
choice to take notes via computer. This complicates the existing literature’s efforts to measure the
in the classroom by requiring students to take notes using paper in one trial and computers in
another. We then record, compare and analyze student performance on related assessments. The
results of our analysis indicate that for typical students, taking notes on the computer does not lead
to lower assessment scores, even in situations with questions that involve mathematical or graphical
representations or analysis. Our
finding has important implications for introductory business instructors wrestling with students' use
of computers in the classroom.
We begin the paper by briefly surveying the related literature on note-taking, and then follow
by describing the experiment, data, and the methodology. In the next section we present the results
of our study using t-tests for group comparisons and ordinary least squares (OLS) fixed effects
regressions for greater insights into the data. We then analyze the students' choice to use computers
for note-taking before concluding.
Literature review
The use of computers in the classroom can improve student engagement and educational
outcomes by fostering active learning and improved collaboration (Efaw, Hampton, Martinez, &
Smith, 2004; Fried, 2008; Trimmel & Bachmann, 2004; Wurst, Smarkola, & Gaffney, 2008). Yet
computers in the classroom also provide opportunities for distraction and, consequently, can have a
deleterious effect on course performance (Aguilar-Roca, Williams, & O'Dowd, 2012; Hembrooke &
Gay, 2003; Kraushaar & Novak, 2010; Patterson & Patterson, 2017; Sana, Weston, & Cepeda,
2013).
One prevalent classroom use of computers is by students for taking notes. The educational
psychology literature suggests note-taking affects learning through encoding, which occurs when
students create personalized, non-verbatim notes (Igo, Bruning, & McCrudden, 2005). Encoding requires
inference, integration, and structuring of the material being presented. Learning occurs because
students engage in selecting important information and summarizing it while still actively listening to
the lecture (Beck, 2014). The process of writing notes by hand is thought to be closely associated
with encoding and thus, hand-written notes are generally believed to be associated with better recall
than notes taken on a computer or other device (Olive & Barbier, 2017; Patterson & Patterson,
2017).
Most studies of how note-taking behavior relates to information recall
take place in laboratory environments, often testing students on the same day the material was
presented (Beck, 2014; Bui, Myerson, & Hale, 2013; Mueller & Oppenheimer, 2014; Olive &
Barbier, 2017; Sana et al., 2013; Wei, Wang, & Fass, 2014). Yet this approach may be problematic.
Laboratory studies that test recall on TED talks (Mueller & Oppenheimer, 2014) or lectures on non-fiction books (Bui et al.,
2013) do not capture the reality of student life in the classroom (Harrison & List, 2004). While some
laboratory-based studies do offer course extra credit (Beck, 2014) or some nominal amount of
course participation points (Bui et al., 2013; Igo et al., 2005; Sana et al., 2013), others rely on
volunteers (Kiewra, DuBois, Christian, McShane, et al., 1991; Olive & Barbier, 2017; Wei et al., 2014;
Wurst et al., 2008) or cash payments (Mueller & Oppenheimer, 2014). This raises the question of
whether such incentives reflect the stakes students face in actual courses.
Studies conducted in a classroom setting (field experiments) can overcome two important
issues that bedevil laboratory experiments. First, student recall and understanding is tested on a
timeframe consistent with a typical course schedule for giving quizzes and exams. Second, the stakes
become much more realistic in the classroom, as students’ grades – and hence their ability to move
forward with their university education – depend on recall and understanding of lecture content.
Patterson and Patterson (2017) leverage an institutional policy for a natural experiment at the U.S.
Military Academy and find that optional classroom computer use is negatively associated with GPA
outcomes across disciplines. In two additional field experiments, Fried (2008) and Aguilar-Roca et al.
(2012) allow students to self-select computer usage in the classroom. Both studies find that a
student’s choice to use computers to take notes during lectures is associated with lower course
performance. Carter, Greenberg and Walker (2017) find that for West Point Principles of
Economics students, those in sections allowed to take notes on a computer exhibited poorer
performance on the course’s common final exam (about 1.7 points on a 100-point scale). By
allowing students to choose their method of note-taking, field experiments conducted in the
classroom can still suffer from a variation of self-selection problems if the characteristics that lead
students to choose computers are also correlated with course performance.
Research question
Building on this previous literature, our research question is: does a student who uses a
computer to take notes in the classroom perform worse on course assessments than if (s)he had
taken notes on paper?
Participants
We conduct what Harrison and List (2004) call a framed field experiment. The experiment was administered
during the Fall 2016 semester at a U.S. regional comprehensive university located in the Upper-
Midwest. Participation in the experiment included completing pre- and post-experiment surveys
designed to collect demographic information and information on student studying and note-taking
preferences.1 During this semester, all students in all five of the scheduled Principles of
Microeconomics sections participated in the study. Approximately 45 students were enrolled in each
section for an initial sample of 230. Three sections met for 60 minutes on Mondays, Wednesdays
and Fridays; the remaining two sections met for 90 minutes on Tuesdays and Thursdays. Three of
the five sections met during the morning; the remaining sections met during the early afternoon.
Three different tenured professors used a whiteboard for primary instruction and Microsoft
PowerPoint slides as supplements to teach the five sections. Scores on the experiment's multiple-choice assessments counted toward students' course grades.
1 We had some mild attrition over the course of the semester, losing 11 students across all five
sections. This rate was statistically equivalent to the attrition rate for introductory microeconomics at
our university for all sections over the past two years.
Experimental design
A fourth tenured professor met with each section on two separate, rigorously scheduled
occasions for our experimental trials. For each trial, the fourth instructor lectured for the first thirty
minutes of class on a topic that, although common to introductory economics courses, could be
taught as a small, independent and stand-alone unit. These topics were not covered by the primary
instructors at all.2 The first topic was Consumer Theory and involved a short lecture on indifference
curves and budget constraints as an extension of demand analysis. The second topic was Income
Inequality as measured by Lorenz Curves and Gini Coefficients. The primary instructors could
answer specific questions about the topics and remind the students that they would be covered on
quizzes and exams. The primary instructors did not otherwise elaborate on these topics in lectures.
Students in each section were randomly assigned to one of two groups based on whether the last
digit of their student identification (ID) number was odd or even. In Trial 1, students whose ID
ended with an odd digit (ODDS) were instructed to use laptop computers to take notes, while the
second group (EVENS) were instructed to take notes on paper (Figure 1). Students could choose to
use their own paper notebooks or laptop computers. Paper, pencils and university laptops were
supplied to any student who requested them. ODDS could take notes using whatever computer
program they preferred, though most used Microsoft Word. In the second trial, which occurred
approximately one month later, the ODDS and EVENS reversed their note-taking restrictions.
2 Both topics are discussed in brief chapters in the recommended textbooks (Hubbard & O'Brien,
any edition; Miller, any edition; McEachern, 2017).
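The crossover assignment described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' actual procedure; the function name and example IDs are hypothetical, while the rule (odd last ID digit uses a computer in Trial 1, groups swap in Trial 2) comes from the text.

```python
# Illustrative sketch of the study's odd/even crossover assignment.
# The rule is from the paper; the function name and IDs are hypothetical.
def note_taking_assignment(student_id: int, trial: int) -> str:
    """Return the required note-taking tool for a student in a given trial."""
    odd_last_digit = (student_id % 10) % 2 == 1
    if trial == 1:
        return "computer" if odd_last_digit else "paper"  # Trial 1: ODDS use computers
    return "paper" if odd_last_digit else "computer"      # Trial 2: groups swap
```

For example, a student whose ID ends in 7 would take notes on a computer in Trial 1 and on paper in Trial 2.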
Our experimental design improves on studies that rely on either self-choice of computer use or studies that assign a student a single option.
First, because we randomly assign a note-taking method, sample selection bias is negated. Second,
because each student had to take notes using both paper and a computer, we are able to capture the
within-subject (same person) effect of switching note-taking tools. Though previous studies have
examined the actual notes taken by students for ‘number of ideas recorded’ or ‘number of words
recorded’ or other content measures, our experimental method obviates the necessity for this
because each individual takes notes using both methods. Thus, our design can test whether taking
notes by computer is more detrimental to assessment scores than taking notes by paper-and-pencil.
Figure 1
Experimental design

Trial                                   Must take notes on a laptop computer   Must take notes on paper
Trial 1 (Topic = Consumer Theory)       ODDS (N = 127)                         EVENS (N = 103)
Trial 2 (Topic = Economic Inequality)   EVENS (N = 103)                        ODDS (N = 127)
Instructor 4 taught the same material to all five sections and followed a highly structured
lecture outline using only the classroom whiteboard. Students were notified by their primary
instructor that they would be quizzed on their knowledge of the lecture material during the next
class period (in two days’ time). Students were also told the material would be covered on the next
course exam (two to three weeks later). Thus, the students were acutely aware that their knowledge
of the material would be assessed.
While the instructors set their own grading schemes, course schedules and textbooks,
everything related to the experiment was highly controlled, including the dates of the lectures and
quizzes, and the relative value of experimental points in the total course grade. Since we are only
interested in how students performed on the quiz and exam questions related to the special lecture
(trial) material, other variations in class organization are irrelevant to our study.
Identical ten-question, multiple-choice quizzes were administered in each section two days
after the experimental lecture to test student recall. The questions are representative of typical
introductory economics questions and are much like what is found in a standard textbook question
bank. Students faced an additional two multiple-choice questions on the experimental lecture topic
on the subsequent course exam.
Patterson and Patterson (2017) suggested that students in quantitative courses would
experience a greater penalty from taking notes on computers rather than paper. Mueller and
Oppenheimer (2014) supported this finding by showing that laptop note-takers perform equivalently
on recall questions but worse on quantitative questions relative to paper note-takers. Cognizant of
these concerns, and because mathematical and graphical content are standard in the course, we
identify a subset of three to four quiz questions in each trial that require a greater level of
quantitative or graphical analysis.
Data Collection
The surveys collected demographic data along with information regarding student note-taking and studying preferences. The pre-
experiment survey was done on the first day of class when the experiment was explained. The post-
experiment survey was completed on the same day as the last quiz.
Summary statistics are presented in Table 1. Our random selection mechanism placed
roughly 55% of the students in one group (ODDS) and 45% in the other (EVENS). While one
might suppose that each group should be essentially the same size, a random selection process need
not guarantee this. More than 90% of students attended the special lectures. Women made up 36%
of the sample. Most students were sophomores, with an average age slightly over 20 years.
The average student grade point average (GPA) was 3.145 and the mean ACT composite test score
was close to 23. Most of the students work while attending school; the average number of hours
worked per week during the semester was slightly over 11 hours.
Mean quiz grades were approximately 80% for each trial of the experiment; mean scores for
exam questions exhibited more variation with an average of less than one question correct in trial 1
and 1.3 questions correct in trial 2 (out of two total questions). We provide these separately and
summed into a “total” score (73.3% for trial 1 and 77.5% for trial 2). In addition, Table 1 reports the
average number correct from the subset of questions identified as “quantitative”. Students score
slightly lower on the quantitative questions, but the difference is not statistically significant.
Table 2 provides summary statistics separately by group allowing for simple unconditional
comparisons. The similarity in characteristics between the ODD and EVEN groups is indicative of
the randomness of our sorting. The groups are statistically insignificantly different from each other
in terms of gender, race, age, scholastic ability as measured by GPA or ACT score, and outside work
effort. We note that attendance in trial 2 decreased slightly in the group scheduled to take notes
Results
Examination of t-tests
Since the treatment and control groups were randomly assigned, unconditional mean
comparisons offer a meaningful and statistically relevant way to examine how note-taking influences
recall. To test for a statistically significant difference in assessment scores by note-taking
method, we conduct two-sample t-tests.
Table 3 presents the results of these t-tests (Ho: mean (handwritten) – mean (computer) =
0). The columns successively offer group means for each assessment. The first row of Table 3
features t-tests that assume equal variances in the groups’ scores. While a simple F-test fails to reject
the null hypothesis of equal variances between the groups’ scores, we also include t-test mean
comparisons that assume unequal variances in row 2. Insofar as we are measuring how note-taking
behavior affects assessment scores, it is also prudent to provide mean comparisons between groups
consisting only of students that attended each lecture. These results are presented in Row 3.
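The three rows of comparisons can be reproduced mechanically. The sketch below uses simulated scores, not the study's data, to illustrate the pooled and Welch variants of the test; the group sizes mirror the paper's, but every number here is made up.

```python
import numpy as np
from scipy import stats

# Simulated quiz scores for illustration only -- not the study's data.
rng = np.random.default_rng(0)
paper = rng.normal(loc=8.0, scale=1.6, size=103)     # paper note-takers
computer = rng.normal(loc=8.0, scale=1.6, size=127)  # computer note-takers

# Row 1: t-test assuming equal variances (Ho: mean(paper) - mean(computer) = 0)
t_eq, p_eq = stats.ttest_ind(paper, computer, equal_var=True)

# Row 2: Welch's t-test, which drops the equal-variance assumption
t_welch, p_welch = stats.ttest_ind(paper, computer, equal_var=False)

# Row 3 would repeat Row 1 after filtering both samples to lecture attendees.
```

With similar sample sizes and variances, the pooled and Welch statistics are nearly identical, which is consistent with rows 1 and 2 of Table 3 agreeing closely.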
We are not able to discern statistically significant differences between paper and computer
note-taker assessment scores in any of the group comparisons. Our t-tests indicate that when
students are randomly assigned to take notes by computer, it does not, in fact, lead to lower
assessment scores. These findings suggest that it may be the choice to take notes by computer that is
driving the negative results in previous studies, and not the computer notes, per se.
However, t-tests reveal little about the magnitude of the differences
between our groups. For example, t-tests cannot identify by how much note-taking method
influences assessment scores. To identify effect magnitudes, we employ OLS fixed effects regression
analysis.
Our experiment’s design allows us to measure changes to assessment scores from switching
between note-taking methods. Our within-subject design randomly assigns a student to take notes
by one method for one lecture-assessment pair (trial 1), and then assigns the same student to switch
methods for a second lecture-assessment pair (trial 2). This unique feature allows us to isolate the
effect of taking notes by computer for an individual student, abrogating the influence of choice.
Though actual examination of course notes could prove an interesting extension as to why
differences might occur, in this study we narrowly focus on the question of whether an individual
student's assessment performance changes with the note-taking tool.
If taking notes by computer negatively affects student scores in actual classroom settings,
then within-student (fixed effects) OLS estimations should result in statistically significant negative
coefficients on the computer note-taking indicator variable.3 Random assignment of groups means
the indicator is uncorrelated with unobserved student characteristics.
We proceed by first standardizing assessment scores, fitting them to the standard
normal distribution: z = (x − x̄)/s, where x̄ and s are the mean and standard deviation of each
assessment. Student scores are thus measured in
terms of standard deviations from the mean of each assessment. We identify the change in a
student's standardized score when he or she switches note-taking methods.
3 A fixed effects model controls for any unobserved, time-invariant factors (fixed effects) associated
with an individual, such as gender, race, ACT scores, etc. In doing so, the variation related to those
factors is removed, which then allows us to observe within-individual variation explained by
computer note-taking.
Table 4 presents the results for our four assessment measures (quiz questions, exam questions, combined
total and the quantitative subset). We do this for the whole sample and then once more including a
control that identifies students who attended a lecture and completed the quiz versus those who did
not.
The estimated coefficients are uniformly negative. They suggest that students taking notes
on computers scored between 0.03 and 0.12 of a standard deviation lower than students taking notes
on paper (roughly equivalent to 0.2 of a quiz question or 0.02 of an exam question worse). The
results are remarkably robust even when controlling for attendance. However, none of the results
are statistically significant, meaning we cannot conclude that the estimated coefficients are
meaningfully different from zero. Additionally, none of the results are of a magnitude that is
practically significant in determining final grades. In other words, a typical student taking notes by a
computer performs in a statistically equivalent way to when the same student takes notes on paper.
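As a sketch of this estimation approach (standardize scores within each assessment, then demean within student before estimating the slope on the computer indicator), the following uses simulated data; the frame layout and variable names are hypothetical, not the authors' code.

```python
import numpy as np
import pandas as pd

# Simulated within-subject panel for illustration -- two trials per student,
# each student using each note-taking method exactly once (crossover design).
rng = np.random.default_rng(1)
n_students = 200
student = np.repeat(np.arange(n_students), 2)
trial = np.tile([1, 2], n_students)
odds = student % 2                               # 1 if in the "ODDS" group
computer = np.where(trial == 1, odds, 1 - odds)  # ODDS use computers in trial 1

df = pd.DataFrame({"student": student, "trial": trial, "computer": computer})
df["score"] = rng.normal(loc=8.0, scale=1.6, size=2 * n_students)

# Standardize scores within each assessment: z = (x - mean) / st. dev.
df["z"] = df.groupby("trial")["score"].transform(lambda s: (s - s.mean()) / s.std())

# Within-student fixed effects via demeaning, then the OLS slope on the
# computer indicator -- the kind of coefficient reported in the text.
df["z_dm"] = df["z"] - df.groupby("student")["z"].transform("mean")
df["c_dm"] = df["computer"] - df.groupby("student")["computer"].transform("mean")
beta = (df["c_dm"] * df["z_dm"]).sum() / (df["c_dm"] ** 2).sum()
```

Demeaning within student is algebraically equivalent to including a dummy for each student, which is why time-invariant characteristics such as gender or ACT score drop out of the estimate.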
Discussion
To our knowledge, studies thus far have been unable to separate the student’s choice to take
notes on a computer from the process or effect of taking notes on a computer. As noted earlier,
previous studies conflate the two due to problems of experimental design and/or self-selection bias.
That we find no statistically significant difference in recall by individual students across computer
and paper note-taking tools suggests that the negative impact of computer note-taking is related to
the choice of students to use a computer in the classroom and not how notes are taken.
This suggests that students who choose to take notes by computer may harbor latent
characteristics correlated with lower assessment scores than students who opt to take notes on
paper.4 Table 5 compares the characteristics of students who primarily use laptop or tablet
computers to take notes with those of the other students. Our small sample of only 18 computer
note-takers constrains our ability to draw statistically strong conclusions regarding these students’
characteristics relative to the remaining students. Nevertheless, the results in Table 5 offer some
suggestive evidence that students who choose computers are somewhat different from students who
choose paper, and that these differences may correlate with lower assessment scores.
Based on these sometimes statistically significant differences between computer and paper
note-takers, we might presume the computer note-taking students would perform worse on
assessments, regardless of note-taking tool. Indeed, Table 6 presents estimates of simple OLS
regressions controlling only for students’ choice of primary note-taking tool. Like previously cited
studies, we allow characteristics that correlate with assessment performance and the likelihood to
choose computers for note-taking to vary between students. The often statistically significant
negative coefficient on the single control in the regressions cannot isolate whether it is the student’s
individual-specific characteristics that are causing lower assessment scores or the method of note-taking itself.
Aguilar-Roca et al. (2012) found that students had strong and largely uniform opinions
about use of computers in the classroom. We also find remarkable homogeneity of opinion across
our students regarding computer use. At the beginning of the semester 92.1% of our students
reported that they prefer to take notes on paper. Only one student took notes on a tablet with an
attached keypad, and seventeen students (7.5%) reported that they usually take notes on their
laptop computers.
4 Kirschner and Karpinski (2010) surmise that students more susceptible to distraction or students
with lower impulse control may be both more likely to choose to take notes on the computer and
more likely to fare poorly in the course, even when controlling for other factors.
Although 89.2% of students responded that they felt hand-written notes were associated
with better quiz and exam performance, not one student who primarily used computers to take
notes planned on switching to use paper and pencil after completing our experiment. Among
primary paper and pencil note-takers though, 2.7% decided to switch to using computers. In a more
general form of the question, 89.7% of students believed that hand-written notes were associated
with better quiz and exam scores for “the class in general.” Some of the student responses were
driven by the content of the course: 88.7% of students believed they “wrote down” less
mathematical or graphical content when using the computer. Overall, 41.9% of students agreed with
the statement that some methods of note-taking are better for some classes than others (7.39% of
students reported that electronic and paper note-taking were equivalent; 49.3% agreed with the
statement that taking notes by hand is superior; and 1.5% believed taking notes electronically was
superior). From these results it appears the students did not change their views on their method of
note-taking as a result of the experiment.
Conclusions
This paper attempts to answer whether the process of taking notes on the computer leads to
poorer test scores. Many existing studies demonstrate that students taking notes on computers
perform worse than students taking notes on paper. However, these studies’ experimental designs
and self-selection problems arrest their ability to disentangle whether the lower test scores are due to
the students’ choice to take notes on a computer or the process of taking notes on the computer.
5 Despite anecdotal evidence to the contrary, zero students reported "I do not regularly take notes"
on the pre-experiment survey and only 7.9% rated their note-taking skills as poor (zero rated their
note-taking skills as "very poor").
Our within-subject experimental design requires each student to use both types of note-taking. The results reveal that the process of computer note-taking does not have a
statistically or practically significant negative effect on assessment scores.
Our results highlight important avenues for future research. There is little comprehensive
understanding about why the choice to use laptops correlates with negative assessment outcomes.
We show, albeit with a very small sample, that students who choose to take notes on the computer
may be poorer students. The New York Times reported “The research is unequivocal: Laptops
distract from learning, both for users and for those around them” (Dynarski, 2017), but little has
been done to study whether the distractions are associated with a student’s choice to take notes
using a computer.
Where does this leave the typical introductory economics or business instructor? Instructors
should not ban computers and presume students will perform better if they are compelled to take
notes by using paper. Students view computers in the classroom positively and are strongly opposed
to laptop bans (Nemetz, Eager, & Limpaphayom, 2017). Fried (2008) suggests educators examine
how they can maximize student learning in a digital environment and minimize student distractions
from learning. For example, instructors can recommend software applications that allow students to
transform a tablet computer into computerized paper, or alternatively how to use a digital pen on a
tablet. They can help students to be more engaged in their note-taking by providing them with an
outline of a lecture. In either case, students can benefit from the cognitive process of handwriting
while still being able to take advantage of the conveniences offered by computers, especially for
storing and sharing lecture notes. Although the evidence supporting the benefits of taking notes
using computers is weak or nonexistent, computers are increasingly important in classrooms and are
not going away. Instructors need to be willing to work with their students to increase the efficacy of
note-taking in a digital environment.
References
Aguilar-Roca, N.M., Williams, A.E., & O'dowd, D.K. (2012). The impact of laptop-free zones on
student performance and attitudes in large lectures. Computers & Education, 59(4), 1300-1308.
Beck, K. (2014). Note Taking Effectiveness in the Modern Classroom. The Compass, 1(1), 1-14.
Bui, D.C., Myerson, J., & Hale, S. (2013). Note-taking with computers: Exploring alternative
Carter, S.P., Greenberg, K., & Walker, M.S. (2017). The impact of computer usage on academic
performance: Evidence from a randomized trial at the United States Military Academy.
Dynarski, S. (2017, November 22). Laptops are great. But not during a lecture or a meeting. The New
during-lecture-or-meeting.html
Efaw, J., Hampton, S., Martinez, S., & Smith, S. (2004). Miracle or menace: Teaching and learning
Fried, C.B. (2008). In-class laptop use and its effects on student learning. Computers & Education,
50(3), 906-914.
Haley, M.R., Johnson, M., & McGee, M.K. (2010). A framework for reconsidering the Lake
Harrison, G.W., & List, J.A. (2004). Field experiments. Journal of Economic Literature, 42(4), 1009-1055.
Hembrooke, H., & Gay, G. (2003). The laptop and the lecture: The effects of multitasking in
Hubbard, R.G. & O'Brien, A.P. (2006). Microeconomics. Upper Saddle River, NJ: Pearson.
Igo, L.B., Bruning, R., & McCrudden, M.T. (2005). Exploring differences in students' copy-and-paste
97(1), 103-116.
Kiewra, K.A., DuBois, N.F., Christian, D., McShane, A., et al. (1991). Note-taking functions and
Kirschner, P.A., & Karpinski, A.C. (2010). Facebook and academic performance. Computers in Human
Kraushaar, J.M., & Novak, D.C. (2010). Examining the effects of student multitasking with laptops
McEachern, W. A., & Trost, S. C. (2017). Econ micro: Principles of microeconomics. Boston, MA: Cengage
Learning.
Miller, R.L. (2016). Economics Today: Student value edition. New York, NY: Pearson.
Mueller, P. A., & Oppenheimer, D. M. (2014). The pen is mightier than the keyboard. Psychological
Nemetz, P. L., Eager, W. M., & Limpaphayom, W. (2017). Comparative effectiveness and student
choice for online and face-to-face classwork. Journal of Education for Business, 92(5), 210-219.
Olive, T., & Barbier, M. (2017). Processing time and cognitive effort of longhand note taking when
reading and summarizing a structured or linear text. Written Communication, 34(2), 210-219.
Patterson, R.W., & Patterson, R.M. (2017). Computers and productivity: Evidence from laptop use
Sana, F., Weston, T., & Cepeda, N.J. (2013). Laptop multitasking hinders classroom learning for
both users and nearby peers. Computers & Education, 62, 24-31.
Trimmel, M., & Bachmann, J. (2004). Cognitive, social, motivational and health aspects of students
Wurst, C., Smarkola, C., & Gaffney, M. A. (2008). Ubiquitous laptop usage in higher education:
Table 1
Summary statistics

Variable description                                               Obs.   Mean (St. Dev.)
Computer 1: assigned to use computer in trial 1 (ODDS)             230    0.552 (0.498)
Computer 2: assigned to use computer in trial 2 (EVENS)            230    0.448 (0.498)
Attendance 1: attended lecture in trial 1                          230    0.930 (0.255)
Attendance 2: attended lecture in trial 2                          230    0.913 (0.282)
Female: = 1 if student is a woman and 0 otherwise                  228    0.360 (0.481)
Non-white: = 1 if student's race is not white and 0 otherwise      228    0.105 (0.308)
Age: age of student in years                                       227    20.251 (2.936)
GPA: cumulative college grade point average (at start of the
  semester)a                                                       225    3.145 (1.727)
ACT: student composite ACT score                                   213    22.732 (3.140)
Hours: number of hours per week student works for pay              226    11.280 (11.703)
Quiz 1: score on trial 1 quiz (out of 10)                          220    8.018 (1.627)
Quiz 2: score on trial 2 quiz (out of 10)                          215    7.972 (1.694)
Quantitative Subset 1: trial 1 quiz questions (2), (4), (7), (8)   220    2.808 (1.018)
Quantitative Subset 2: trial 2 quiz questions (4), (7), (10)       215    2.340 (0.756)
Exam 1: score on trial 1 exam questions (out of 2)                 223    0.767 (0.722)
Table 3
t-tests of differences in mean assessment scores by note-taking method
(t-statistics, with p-values in parentheses and observations in braces)

                           Trial 1                                Trial 2
(1) Equal          0.928   0.288   1.100  -0.350        0.433   0.098   0.631   0.773
    variances     (0.355) (0.773) (0.273) (0.727)      (0.666) (0.922) (0.529) (0.441)
                  {220}   {223}   {217}   {219}        {215}   {219}   {212}   {215}
(2) Unequal        0.934   0.290   1.109  -0.350        0.430   0.098   0.630   0.770
    variances     (0.351) (0.772) (0.269) (0.727)      (0.668) (0.922) (0.530) (0.442)
                  {220}   {223}   {217}   {219}        {215}   {219}   {212}   {215}
(3) If students    1.030   0.259   0.986  -0.119        0.619   0.741   0.984   1.245
    attended      (0.304) (0.796) (0.325) (0.906)      (0.537) (0.460) (0.326) (0.214)
    each lecture  {208}   {209}   {207}   {207}        {203}   {202}   {200}   {203}
    (equal variances)
Note. We test for statistical significance in the characteristics’ differences between groups using
unpaired two-sample t-tests, assuming unequal variances when necessary, and the null
hypothesis: Ho: mean(paper preference) - mean(computer preference) = 0. t-statistics are in
parentheses; *** indicates statistical significance at better than the 1% level (p < 0.01); **
indicates statistical significance at better than the 5% level (p < 0.05). Heteroskedasticity
robust standard errors are clustered at the individual level.
Table 6
Simple OLS regressions of assessment scores, controlling only for primary note-taking tool
Note. t-statistics are in parentheses; *** indicates statistical significance at better than the 1% level (p
< 0.01); ** indicates statistical significance at better than the 5% level (p < 0.05). Heteroskedasticity
robust standard errors are clustered at the individual level.
Pre-experiment survey (selected questions)
Please answer the following questions by recording your answer on the provided scantron sheet.
There are 4 questions at the end of the survey which ask you to fill in the blank.
1. I identify my gender as
a. Female
b. Male
c. Other
4. To the best of my knowledge, I am eligible for federally-funded financial aid such as Pell
Grants, loans, or work-study.
a. Yes
b. No
6. I am currently employed for pay on or off campus (for 5 hours or more per week).
a. Yes
b. No
11. When taking notes on topics that involve graphical analysis or mathematics, I
a. Copy everything exactly, including labeling axes and all mathematical steps, even if I
know how to do the problem
b. I copy most of the material in the graph or most of the mathematics
c. I sketch the graph or just write down the main equation
d. I don’t usually make notes of graphs or math
12. How often do you consult your lecture notes to study for scheduled exams or quizzes?
a. I consult my notes several times prior to every assessment
b. I consult my notes only once prior to every assessment
c. I consult my notes prior to most, but not all, assessments
d. I consult my notes prior to assessments only when I am unsure of the subject matter
e. I rarely consult my notes
f. I never consult my notes
13. How useful would you say your notes typically are for performing well in your courses?
a. Very useful
b. Somewhat useful
c. Not very useful
d. Not useful at all
Please record your answer to the following questions in the accompanying blanks. If you are not
sure, please record to the best of your ability.
17. How many hours do you work for pay in an average week during the semester?
____________
1. Which method of note-taking do you think was associated with better quiz and exam scores
for yourself?
a. hand-written notes on paper
b. notes taken electronically on a laptop, tablet, or phone
c. no note-taking, but listening to lectures
d. hand-written notes and electronic notes are equivalent (there is no difference)
3. For the class in general, which method of note-taking do you think was associated with
better quiz and exam scores?
a. hand-written notes on paper
b. notes taken electronically on a laptop, tablet, or phone
c. no note-taking, but listening to lectures
d. hand-written notes and electronic notes are equivalent (there is no difference)
9. Please indicate which of the following statements best reflects your attitude about
participating in this study.
a. I highly enjoyed participating in this study
b. I generally enjoyed participating in this study
c. I was indifferent to participating in this study
d. I did not enjoy participating in this study
e. I strongly disliked participating in this study
10. Did you borrow a university laptop to take notes for either the consumer behavior or the
inequality lecture?
a. Yes
b. No
13. Did having the extra credit available from the experiment alter the amount of effort you
put into this course, relative to your other courses?
a. Yes
b. No
14. Did you ever feel the need, because of the experiment's design, to study with a student in
the other group (for instance, if you were in the even-numbered group, you wanted to study with
someone in the odd-numbered group)?
a. Yes
b. No
2) On the graph above, when the budget line shifts from BL2 to BL1, the consumer whose
preferences are shown in the graph above will buy
A) More of K and more of L
B) Less of K and less of L
C) More of K and less of L
D) Less of K and more of L
3) Which of the following happens to a consumer’s budget line if that consumer’s budget
(income) increases? The budget line _____________.
A) becomes steeper
B) shifts farther away from the origin of the graph
C) becomes more horizontal
D) shifts closer to the origin of the graph
6) When the price of a product rises, consumers shift their purchases to other products whose
prices are now relatively lower. This statement describes
A) the rationing function of prices
B) the substitution effect
C) the law of supply
D) the income effect
9) In the above figures, which one reflects an increase in the consumer’s income?
A) Figure A
B) Figure B
C) Figure C
D) Figure D
10) The substitution effect explains that when the price of a good increases, consumers will
consume
A) less of the good and more of some other good
B) more of the good and less of some other good
C) more of the good because their real incomes are lower after the price increase
D) less of the good because their real incomes are higher after the price increase
1) On the graph above, if the budget line shifts from BL2 to BL1, it is because the price of
A) K increased
B) K decreased
C) L increased
D) L decreased
1) When a Lorenz curve is as far away from the equality line as possible, there is
A) perfect income equality.
B) perfect income inequality.
C) a Gini coefficient close to zero.
D) an equal distribution of income.
9) Which of the following Gini coefficients indicates the highest degree of income
inequality?
A) 0.78
B) 0.65
C) 0.29
D) 0.42
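For reference, Gini coefficients like those in the question above can be computed directly from an income distribution. A minimal sketch using a standard rank-based formula on hypothetical income data:

```python
def gini(incomes):
    """Gini coefficient of an income distribution: 0 means perfect
    equality; values approaching 1 mean extreme inequality."""
    x = sorted(incomes)
    n = len(x)
    total = sum(x)
    # Weighted sum of incomes by ascending rank (1-indexed).
    rank_sum = sum(i * xi for i, xi in enumerate(x, start=1))
    return 2 * rank_sum / (n * total) - (n + 1) / n

print(gini([10, 10, 10, 10]))  # equal shares -> 0.0
print(gini([0, 0, 0, 100]))    # one person holds everything -> 0.75
```

A higher coefficient corresponds to a Lorenz curve that bows farther from the equality line, which is why 0.78 indicates the greatest inequality among the options above.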
1) Refer to the above diagram where curves (a) through (e) represent five different countries. If
curve (c) reflects the Lorenz curve in the U.S., which curves would reflect the Lorenz curve
of Brazil and of Canada?
A) curve (b) for Canada and curve (d) for Brazil.
B) curve (d) for Canada and curve (b) for Brazil.
C) curve (a) for Canada and curve (e) for Brazil.
D) curve (e) for Canada and curve (a) for Brazil.
Note. t-statistics are in parentheses; *** indicates statistical significance at better than the 1% level (p < 0.01); ** indicates
statistical significance at better than the 5% level (p < 0.05); * indicates statistical significance at better than the 10% level
(p < 0.10). Heteroskedasticity robust standard errors are clustered at the individual level.