Taking notes in the Digital Age:

Evidence from Classroom Random Control Trials

Benjamin Artz, Marianne Johnson, Denise Robson, Sarinda Taengnoi Siemers

University of Wisconsin Oshkosh

Existing studies of how note-taking tools affect student learning typically find that students who

choose to take notes on a computer perform worse on assessments than students who take notes on

paper. To our knowledge, the literature has not disentangled whether this result is due to the note-

taking process itself, or instead due to the characteristics of students who choose to use computers

to take notes. In order to answer this question we employ a within-subject random control trial

experiment and conclude that taking notes on computers does not have a statistically meaningful

impact on student performance.

Key Words: Computers, Classrooms, Note-Taking, Experiment

Corresponding author: Marianne Johnson, Professor of Economics, University of Wisconsin


Oshkosh. Email: johnsonm@uwosh.edu. Phone: (920) 424-1441. Mailing address: Department of
Economics, College of Business, 800 Algoma Blvd., UW Oshkosh, Oshkosh, WI, 54901, USA.


 



Taking notes in the Digital Age:

Evidence from Classroom Random Control Trials

Introduction

The ability of computers to improve pedagogical outcomes has long been debated, and their

use in taking notes is increasingly prevalent. Yet, although taking notes is considered a vital part of

learning, particularly in lecture-oriented courses, the effectiveness of taking notes via computers is

unresolved. Latent student characteristics may be associated with course performance as well as the

choice to take notes via computer. This complicates the existing literature’s efforts to measure the

effect of computer note-taking on academic performance. We deconstruct this problem by

employing a within-subject random control experimental design. Our experiment is operationalized

in the classroom by requiring students to take notes using paper in one trial and computers in

another. We then record, compare and analyze student performance on related assessments. The

results of our analysis indicate that for typical students, taking notes on the computer does not lead

to poorer performance on quizzes or exams in an introductory microeconomics course, even in

situations with questions that involve mathematical or graphical representations or analysis. Our

finding has important implications for introductory business instructors wrestling with students’ use

of computers in the classroom.

We begin the paper by briefly surveying the related literature on note-taking, and then follow

by describing the experiment, data, and the methodology. In the next section we present the results

of our study using t-tests for group comparisons and ordinary least squares (OLS) fixed effects

regressions for greater insights into the data. We then analyze the students’ choice to use computers


 



and their opinions on computer note-taking. We conclude the paper by discussing the implications

of this study and avenues for future research.

Literature review

The use of computers in the classroom can improve student engagement and educational

outcomes by fostering active learning and improved collaboration (Efaw, Hampton, Martinez, &

Smith, 2004; Fried, 2008; Trimmel & Bachmann, 2004; Wurst, Smarkola, & Gaffney, 2008). Yet

computers in the classroom also provide opportunities for distraction and, consequently, can have a

deleterious effect on course performance (Aguilar-Roca, Williams, & O'Dowd, 2012; Hembrooke &

Gay, 2003; Kraushaar & Novak, 2010; Patterson & Patterson, 2017; Sana, Weston, & Cepeda,

2013).

Computers’ increasingly ubiquitous presence in classrooms is largely due to their use by

students for taking notes. The educational psychology literature suggests note-taking affects

academic performance through an “encoding function”, which is the process of generating

personalized, non-verbatim notes (Igo, Bruning, & McCrudden, 2005). Encoding requires

inference, integration, and structuring of the material being presented. Learning occurs because

students engage in selecting important information and summarizing it while still actively listening to

the lecture (Beck, 2014). The process of writing notes by hand is thought to be closely associated

with encoding and thus, hand-written notes are generally believed to be associated with better recall

than notes taken on a computer or other device (Olive & Barbier, 2017; Patterson & Patterson,

2017).

Most studies tasked with observing how note-taking behavior relates to information recall

take place in laboratory environments, often testing students on the same day the material was

presented (Beck, 2014; Bui, Myerson, & Hale, 2013; Mueller & Oppenheimer, 2014; Olive &

Barbier, 2017; Sana et al., 2013; Wei, Wang, & Fass, 2014). Yet this approach may be problematic.


 



Laboratory studies that, for example, test volunteers' recall of TED talks (Mueller & Oppenheimer,

2014) or of lectures on non-fiction books (Bui et al., 2013) suffer from artificiality and selection bias

and do not capture the reality of student life in the classroom (Harrison & List, 2004). While some

laboratory-based studies do offer course extra credit (Beck, 2014) or some nominal amount of

course participation points (Bui et al., 2013; Igo et al., 2005; Sana et al., 2013), others rely on

volunteers (Kiewra, DuBois, Christian, McShane, et al., 1991; Olive & Barbier, 2017; Wei et al., 2014;

Wurst et al., 2008) or cash payments (Mueller & Oppenheimer, 2014). This raises the question of

whether laboratory results can fully speak to classroom outcomes.

Studies conducted in a classroom setting (field experiments) can overcome two important

issues that bedevil laboratory experiments. First, student recall and understanding are tested on a

timeframe consistent with a typical course schedule for giving quizzes and exams. Second, the stakes

become much more realistic in the classroom, as students’ grades – and hence their ability to move

forward with their university education – depend on recall and understanding of lecture content.

Patterson and Patterson (2017) leverage an institutional policy for a natural experiment at the U.S.

Military Academy and find that optional classroom computer use is negatively associated with GPA

outcomes across disciplines. In two additional field experiments, Fried (2008) and Aguilar-Roca et al.

(2012) allow students to self-select computer usage in the classroom. Both studies find that a

student’s choice to use computers to take notes during lectures is associated with lower course

performance. Carter, Greenberg and Walker (2017) find that for West Point Principles of

Economics students, those in sections allowed to take notes on a computer exhibited poorer

performance on the course’s common final exam (about 1.7 points on a 100-point scale). By

allowing students to choose their method of note-taking, field experiments conducted in the

classroom can still suffer from a variation of self-selection problems if the characteristics that lead


 



students to choose to take notes on the computer are also the characteristics that are associated with

poorer class performance (e.g., high distractibility).

Research question

Building on this previous literature, our research question is: does a student who uses a

computer to take notes in the classroom perform worse on course assessments than if (s)he had

taken notes on paper?

Experiment design and methodology

Participants

We employed a within-subjects random control trial experiment design, or a variation of

what Harrison and List (2004) call a framed field experiment. The experiment was administered

during the Fall 2016 semester at a U.S. regional comprehensive university located in the Upper-

Midwest. Participation in the experiment included completing pre- and post-experiment surveys

designed to collect demographic information and information on student studying and note-taking

preferences.1 During this semester, all students in all five of the scheduled Principles of

Microeconomics sections participated in the study. Approximately 45 students were enrolled in each

section for an initial sample of 230. Three sections met for 60 minutes on Mondays, Wednesdays

and Fridays; the remaining two sections met for 90 minutes on Tuesdays and Thursdays. Three of

the five sections met during the morning; the remaining sections met during the early afternoon.

Three different tenured professors used a whiteboard for primary instruction and Microsoft

PowerPoint slides as supplements to teach the five sections. Scores on the experiment’s multiple-

                                                            
1 We had some mild attrition over the course of the semester, losing 11 students across all five sections. This rate was statistically equivalent to the attrition rate for introductory microeconomics at our university for all sections over the past two years.

 



choice exam and quiz questions factored substantially into students’ course grades. Thus, students

were incentivized to treat the experiment and its assessments seriously.

Experimental design

A fourth tenured professor met with each section on two separate, rigorously scheduled

occasions for our experimental trials. For each trial, the fourth instructor lectured for the first thirty

minutes of class on a topic that, although common to introductory economics courses, could be

taught as a small, independent and stand-alone unit. These topics were not covered by the primary

instructors at all.2 The first topic was Consumer Theory and involved a short lecture on indifference

curves and budget constraints as an extension of demand analysis. The second topic was Income

Inequality as measured by Lorenz Curves and Gini Coefficients. The primary instructors could

answer specific questions about the topics and remind the students that they would be covered on

quizzes and exams. The primary instructors did not otherwise elaborate on these topics in lectures.

Students in each section were randomly assigned to one of two groups based on whether the last

digit of their student identification (ID) number was odd or even. In Trial 1, students whose ID

ended with an odd digit (ODDS) were instructed to use laptop computers to take notes, while the

second group (EVENS) were instructed to take notes on paper (Figure 1). Students could choose to

use their own paper notebooks or laptop computers. Paper, pencils and university laptops were  

supplied to any student who requested them. ODDS could take notes using whatever computer

program they preferred, though most used Microsoft Word. In the second trial, which occurred

approximately one month later, the ODDS and EVENS reversed their note-taking restrictions.

                                                            
2 Both topics are discussed in brief chapters in the recommended textbooks (Hubbard & O'Brien, any edition; Miller, any edition; McEachern, 2017).

 



This within-subjects random control trial experimental design provides a distinct advantage over

studies that rely on students' self-selection of computer use or that assign each student a single option.

First, because we randomly assign a note-taking method, sample selection bias is negated. Second,

because each student had to take notes using both paper and a computer, we are able to capture the

within-subject (same person) effect of switching note-taking tools. Though previous studies have

examined the actual notes taken by students for ‘number of ideas recorded’ or ‘number of words

recorded’ or other content measures, our experimental method obviates the necessity for this

because each individual takes notes using both methods. Thus, our design can test whether taking

notes by computer is more detrimental to assessment scores than taking notes by paper-and-pencil

for an individual student.

Figure 1

Experimental design

Trial                                    Must take notes on a laptop computer    Must take notes on paper
Trial 1 (Topic = Consumer Theory)        Odds (N = 127)                          Evens (N = 103)
Trial 2 (Topic = Economic Inequality)    Evens (N = 103)                         Odds (N = 127)
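
For concreteness, the crossover assignment summarized in Figure 1 can be sketched in code. The snippet below is a minimal illustration, not the authors' procedure; the column name student_id and the helper assign_groups are hypothetical.

```python
# Minimal sketch of the odd/even crossover assignment (illustrative only).
import pandas as pd

def assign_groups(roster: pd.DataFrame) -> pd.DataFrame:
    """Assign ODDS/EVENS from the last digit of the student ID, then build one
    row per student-trial recording which note-taking tool must be used."""
    roster = roster.copy()
    last_digit = roster["student_id"].astype(str).str[-1].astype(int)
    roster["group"] = last_digit.map(lambda d: "ODDS" if d % 2 == 1 else "EVENS")

    trials = []
    for trial, computer_group in [(1, "ODDS"), (2, "EVENS")]:  # roles swap in trial 2
        t = roster.copy()
        t["trial"] = trial
        t["computer_notes"] = (t["group"] == computer_group).astype(int)
        trials.append(t)
    return pd.concat(trials, ignore_index=True)

# Example: assign_groups(pd.DataFrame({"student_id": [1012345, 1012346]}))
```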

Instructor 4 taught the same material to all five sections and followed a highly structured

lecture outline using only the classroom whiteboard. Students were notified by their primary

instructor that they would be quizzed on their knowledge of the lecture material during the next

class period (in two days’ time). Students were also told the material would be covered on the next

course exam (two to three weeks later). Thus, the students were acutely aware that their knowledge


 



of the lecture material would affect their course grade. Primary instructors observed students

throughout the experimental lectures to ensure note-taking compliance.

While the instructors set their own grading schemes, course schedules and textbooks,

everything related to the experiment was highly controlled, including the dates of the lectures and

quizzes, and the relative value of experimental points in the total course grade. Since we are only

interested in how students performed on the quiz and exam questions related to the special lecture

(trial) material, other variations in class organization are irrelevant to our study.

Quiz and exam performance data

Identical ten-question, multiple-choice quizzes were administered in each section two days

after the experimental lecture to test student recall. The questions are representative of typical

introductory economics questions and are much like what is found in a standard textbook question

bank. Students faced an additional two multiple-choice questions on the experimental lecture topic

seeded into instructor-specific exams.

Patterson and Patterson (2017) suggested that students in quantitative courses would

experience a greater penalty from taking notes on computers rather than paper. Mueller and

Oppenheimer (2014) supported this finding by showing that laptop note-takers perform equivalently

on recall questions but worse on quantitative questions relative to paper note-takers. Cognizant of

these concerns, and because mathematical and graphical content are standard in the course, we

identify a subset of three to four quiz questions in each trial that require a greater level of

quantitative knowledge (as compared to more definitional questions).

Data collection


 



Students completed pre-experiment and post-experiment surveys that provide demographic

data along with information regarding student note-taking and studying preferences. The pre-

experiment survey was done on the first day of class when the experiment was explained. The post-

experiment survey was completed on the same day as the last quiz.

Summary statistics are presented in Table 1. Our random selection mechanism placed

roughly 55% of the students in one group (ODDS) and 45% in the other (EVENS). While one

might suppose that each group should be essentially the same size, a random selection process need

not guarantee this. More than 90% of students attended the special lectures. Women made up 36%

of the sample. Most students were sophomores, and the average age was slightly over 20 years.

The average student grade point average (GPA) was 3.145 and the mean ACT composite test score

was close to 23. Most of the students work while attending school; the average number of hours

worked per week during the semester was slightly over 11 hours.

Mean quiz grades were approximately 80% for each trial of the experiment; mean scores for

exam questions exhibited more variation with an average of less than one question correct in trial 1

and 1.3 questions correct in trial 2 (out of two total questions). We provide these separately and

summed into a “total” score (73.3% for trial 1 and 77.5% for trial 2). In addition, Table 1 reports the

average number correct from the subset of questions identified as “quantitative”. Students score

slightly lower on the quantitative questions, but the difference is not statistically significant.

Table 2 provides summary statistics separately by group allowing for simple unconditional

comparisons. The similarity in characteristics between the ODD and EVEN groups is indicative of

the randomness of our sorting. The two groups are statistically indistinguishable from each other

in terms of gender, race, age, scholastic ability as measured by GPA or ACT score, and outside work

effort. We note that attendance in trial 2 decreased slightly in the group scheduled to take notes


 



using a computer. There is no clear reason for this, other than normal midterm attrition, and the

difference is not statistically significant.

Results

Examination of t-tests

Since the treatment and control groups were randomly assigned, unconditional mean

comparisons offer a meaningful and statistically relevant way to examine how note-taking influences

recall. In order to test for a statistically significant difference in assessment scores by note-taking

method, we first employ simple, unpaired two-sample t-tests.

Table 3 presents the results of these t-tests (Ho: mean (handwritten) – mean (computer) =

0). The columns report the comparison for each assessment in turn. The first row of Table 3

features t-tests that assume equal variances in the groups’ scores. While a simple F-test fails to reject

the null hypothesis of equal variances between the groups’ scores, we also include t-test mean

comparisons that assume unequal variances in row 2. Insofar as we are measuring how note-taking

behavior affects assessment scores, it is also prudent to provide mean comparisons between groups

consisting only of students that attended each lecture. These results are presented in Row 3.
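
As an illustration, the comparisons in rows 1 and 2 of Table 3 correspond to standard pooled-variance and Welch two-sample t-tests of the kind sketched below. This is not the authors' code, and the score arrays are made-up values for demonstration.

```python
# Minimal sketch of the unpaired two-sample t-tests reported in Table 3.
import numpy as np
from scipy import stats

scores_paper = np.array([8, 9, 7, 10, 8, 6, 9])      # illustrative quiz scores
scores_computer = np.array([7, 9, 8, 8, 10, 7, 8])

# Row 1 analogue: pooled (equal) variances
t_eq, p_eq = stats.ttest_ind(scores_paper, scores_computer, equal_var=True)

# Row 2 analogue: Welch's t-test, allowing unequal variances
t_uneq, p_uneq = stats.ttest_ind(scores_paper, scores_computer, equal_var=False)

print(f"equal variances:   t = {t_eq:.3f}, p = {p_eq:.3f}")
print(f"unequal variances: t = {t_uneq:.3f}, p = {p_uneq:.3f}")
```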

We are not able to discern statistically significant differences between paper and computer

note-taker assessment scores in any of the group comparisons. Our t-tests indicate that when

students are randomly assigned to take notes by computer, it does not, in fact, lead to lower

assessment scores. These findings suggest that it may be the choice to take notes by computer that is

driving the negative results in previous studies, and not the computer notes, per se.

OLS fixed effects regression results



While indicative, mean comparisons offer few insights regarding the lack of difference

between our groups. For example, t-tests cannot identify by how much note-taking method

influences assessment scores. To identify effect magnitudes, we employ OLS fixed effects regression

analysis.

Our experiment’s design allows us to measure changes to assessment scores from switching

between note-taking methods. Our within-subject design randomly assigns a student to take notes

by one method for one lecture-assessment pair (trial 1), and then assigns the same student to switch

methods for a second lecture-assessment pair (trial 2). This unique feature allows us to isolate the

effect of taking notes by computer for an individual student, removing the influence of choice.

Though actual examination of course notes could prove an interesting extension as to why

differences might occur, in this study we narrowly focus on the question of whether an individual

student performs differently when taking notes using a different method.

If taking notes by computer negatively affects student scores in actual classroom settings,

then within-student (fixed effects) OLS estimations should result in statistically significant negative

coefficients on the computer note-taking indicator variable.3 Random assignment of groups means

that other covariates should be unnecessary to assess performance.

We proceed by first standardizing assessment scores by fitting them to the standard normal

distribution using all observations, where $z_{ia} = (score_{ia} - \overline{score}_a)/sd_a$, with $i$ indexing students,

$a$ indexing assessments, and $\overline{score}_a$ and $sd_a$ the mean and standard deviation of assessment $a$.

Student scores are thus measured in terms of standard deviations from the mean of each assessment.

We identify the change in a

                                                            
3 A fixed effects model controls for any unobserved, time-invariant factors (fixed effects) associated with an individual, such as gender, race, ACT scores, etc. In doing so, the variation related to those factors is removed, which then allows us to observe the within-individual variation explained by computer note-taking.



standardized assessment score when a student switches between note-taking methods. Table 4

presents the results for our four assessment measures (quiz questions, exam questions, combined

total and the quantitative subset). We do this for the whole sample and then once more including a

control that identifies students who attended a lecture and completed the quiz versus those who did

not attend the lecture but still completed the quiz.
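
A rough sketch of this standardization and within-student estimation follows. It assumes a hypothetical long-format data set with one row per student-trial and columns student_id, trial, score, computer_notes, and attended; including student indicator variables is one common way to implement the fixed effects and is not necessarily the authors' implementation.

```python
# Minimal sketch of standardized scores and within-student (fixed effects) OLS.
import pandas as pd
import statsmodels.formula.api as smf

def standardize(scores: pd.Series) -> pd.Series:
    """z-score: deviations from the assessment mean in standard-deviation units."""
    return (scores - scores.mean()) / scores.std()

def fit_within_student(df: pd.DataFrame):
    """Regress standardized scores on the computer-notes indicator with student
    fixed effects; standard errors are clustered at the individual level."""
    df = df.copy()
    # Standardize each assessment (trial) separately, using all observations.
    df["z_score"] = df.groupby("trial")["score"].transform(standardize)
    # Student dummies absorb time-invariant characteristics (gender, ACT, ...),
    # so the computer_notes coefficient reflects only within-student variation.
    model = smf.ols("z_score ~ computer_notes + attended + C(student_id)", data=df)
    return model.fit(cov_type="cluster", cov_kwds={"groups": df["student_id"]})
```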

The estimated coefficients are uniformly negative. They suggest that students taking notes

on computers scored between 0.03 and 0.12 of a standard deviation lower than students taking notes

on paper (roughly equivalent to 0.2 of a quiz question or 0.02 of an exam question worse). The

results are remarkably robust even when controlling for attendance. However, none of the results

are statistically significant, meaning we cannot conclude that the estimated coefficients are

meaningfully different from zero. Additionally, none of the results are of a magnitude that is

practically significant in determining final grades. In other words, a typical student taking notes by a

computer performs in a statistically equivalent way to when the same student takes notes on paper.

Discussion

Computers or the choice to use computers?

To our knowledge, studies thus far have been unable to separate the student’s choice to take

notes on a computer from the process or effect of taking notes on a computer. As noted earlier,

previous studies conflate the two due to problems of experimental design and/or self-selection bias.

That we find no statistically significant difference in recall by individual students across computer

and paper note-taking tools suggests that the negative impact of computer note-taking is related to

the choice of students to use a computer in the classroom and not how notes are taken.

This suggests that students who choose to take notes by computer may harbor latent

characteristics correlated with lower assessment scores than students who opt to take notes on



paper.4 In order to test this hypothesis we use the pre-experiment survey to identify the students

who primarily use laptop or tablet computers to take notes. Our small sample of only 18 computer

note-takers constrains our ability to draw statistically strong conclusions regarding these students’

characteristics relative to the remaining students. Nevertheless, the results in Table 5 offer some

suggestive evidence that students who choose computers are somewhat different from students who

choose paper, and that these differences may correlate with lower assessment scores.

Based on these sometimes statistically significant differences between computer and paper

note-takers, we might presume the computer note-taking students would perform worse on

assessments, regardless of note-taking tool. Indeed, Table 6 presents estimates of simple OLS

regressions controlling only for students’ choice of primary note-taking tool. Like previously cited

studies, we allow characteristics that correlate with assessment performance and the likelihood to

choose computers for note-taking to vary between students. The often statistically significant

negative coefficient on the single control in the regressions cannot isolate whether it is the student’s

individual-specific characteristics that are causing lower assessment scores or the method of note-

taking the student chooses.
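
A between-student regression of this kind can be sketched as follows. This is not the authors' code; quiz_score and computer_preference are hypothetical column names, and a plain heteroskedasticity-robust variance estimator stands in for the errors reported in Table 6.

```python
# Minimal sketch of the simple between-student OLS underlying Table 6.
import pandas as pd
import statsmodels.formula.api as smf

def fit_between_students(cross_section: pd.DataFrame):
    """OLS of an assessment score on a 0/1 indicator for preferring computer notes.
    Unlike the fixed effects model, this compares different students, so the
    coefficient mixes the note-taking tool with student characteristics."""
    model = smf.ols("quiz_score ~ computer_preference", data=cross_section)
    return model.fit(cov_type="HC1")  # heteroskedasticity-robust standard errors
```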

Student opinions of note-taking

Aguilar-Roca et al. (2012) found that students had strong and largely uniform opinions

about use of computers in the classroom. We also find remarkable homogeneity of opinion across

our students regarding computer use. At the beginning of the semester 92.1% of our students

reported that they prefer to take notes on paper. Only one student took notes on a tablet with an

attached keyboard, and seventeen students (7.5%) reported that they usually take notes on their

                                                            
4 Kirschner and Karpinski (2010) surmise that students more susceptible to distraction or students with lower impulse control may be both more likely to choose to take notes on the computer and more likely to fare poorly in the course, even when controlling for other factors.


laptop.5 This is significantly lower than the estimated 25% to 30% of students using computers for

note-taking reported by Aguilar-Roca et al. (2012).

Although 89.2% of students responded that they felt hand-written notes were associated

with better quiz and exam performance, not one student who primarily used computers to take

notes planned on switching to use paper and pencil after completing our experiment. Among

primary paper and pencil note-takers though, 2.7% decided to switch to using computers. In a more

general form of the question, 89.7% of students believed that hand-written notes were associated

with better quiz and exam scores for “the class in general.” Some of the student responses were

driven by the content of the course: 88.7% of students believed they “wrote down” less

mathematical or graphical content when using the computer. Overall, 41.9% of students agreed with

the statement that some methods of note-taking are better for some classes than others (7.39% of

students reported that electronic and paper note-taking were equivalent; 49.3% agreed with the

statement that taking notes by hand is superior; and 1.5% believed taking notes electronically was

superior). From these results it appears the students did not change their views on their method of

note-taking as a result of this experiment.

Conclusions

This paper attempts to answer whether the process of taking notes on the computer leads to

poorer test scores. Many existing studies demonstrate that students taking notes on computers

perform worse than students taking notes on paper. However, these studies’ experimental designs

and self-selection problems limit their ability to disentangle whether the lower test scores are due to

the students’ choice to take notes on a computer or the process of taking notes on the computer.

                                                            
5 Despite anecdotal evidence to the contrary, zero students reported "I do not regularly take notes" on the pre-experiment survey and only 7.9% rated their note-taking skills as poor (zero rated their note-taking skills as "very poor").


Our experiment tackles such shortcomings by requiring each student in the sample to participate in

both types of note-taking. The results reveal that the process of computer note-taking does not have a

statistically significant impact on student performance on multiple-choice quizzes and exams.

Our results highlight important avenues for future research. There is little comprehensive

understanding about why the choice to use laptops correlates with negative assessment outcomes.

We show, albeit with a very small sample, that students who choose to take notes on the computer

may be weaker students. The New York Times reported "The research is unequivocal: Laptops

distract from learning, both for users and for those around them” (Dynarski, 2017), but little has

been done to study whether the distractions are associated with a student’s choice to take notes

using a computer.

Where does this leave the typical introductory economics or business instructor? Instructors

should not ban computers and presume students will perform better if they are compelled to take

notes by using paper. Students view computers in the classroom positively and are strongly opposed

to laptop bans (Nemetz, Eager, & Limpaphayom, 2017). Fried (2008) suggests educators examine

how they can maximize student learning in a digital environment and minimize student distractions

from learning. For example, instructors can recommend software applications that allow students to

transform a tablet computer into computerized paper, or show them how to use a digital pen on a

tablet. They can help students to be more engaged in their note-taking by providing them with an

outline of a lecture. In either case, students can benefit from the cognitive process of handwriting

while still being able to take advantage of the conveniences offered by computers, especially for

storing and sharing lecture notes. Although the evidence supporting the benefits of taking notes

using computers is weak or nonexistent, computers are increasingly important in classrooms and are

not going away. Instructors need to be willing to work with their students to increase the efficacy of

their computer use.



References

Aguilar-Roca, N.M., Williams, A.E., & O'Dowd, D.K. (2012). The impact of laptop-free zones on

student performance and attitudes in large lectures. Computers & Education, 59(4), 1300-1308.

Beck, K. (2014). Note taking effectiveness in the modern classroom. The Compass, 1(1), 1-14.

Bui, D.C., Myerson, J., & Hale, S. (2013). Note-taking with computers: Exploring alternative

strategies for improved recall. Journal of Educational Psychology, 105(2), 299-309.

Carter, S.P., Greenberg, K., & Walker, M.S. (2017). The impact of computer usage on academic

performance: Evidence from a randomized trial at the United States Military Academy.

Economics of Education Review, 56(C), 118-132.

Dynarski, S. (2017, November 22). Laptops are great. But not during a lecture or a meeting. The New

York Times. Retrieved from https://www.nytimes.com/2017/11/22/business/laptops-not-

during-lecture-or-meeting.html

Efaw, J., Hampton, S., Martinez, S., & Smith, S. (2004). Miracle or menace: Teaching and learning

with laptop computers in the classroom. Educause Quarterly, 27(3), 10 – 19.

Fried, C.B. (2008). In-class laptop use and its effects on student learning. Computers & Education,

50(3), 906-914.

Haley, M.R., Johnson, M., & McGee, M.K. (2010). A framework for reconsidering the Lake

Wobegon Effect. The Journal of Economic Education, 41(2), 95-109.

Harrison, G.W., & List, J.A. (2004). Field experiments. Journal of Economic Literature, 42(4), 1009-1055.

Hembrooke, H., & Gay, G. (2003). The laptop and the lecture: The effects of multitasking in

learning environments. Journal of Computing in Higher Education, 15(1), 46-64.

Hubbard, R.G. & O'Brien, A.P. (2006). Microeconomics. Upper Saddle River, NJ: Pearson.

Igo, L.B., Bruning, R., & McCrudden, M.T. (2005). Exploring differences in students' copy-and-paste



decision making and processing: A mixed-methods study. Journal of Educational Psychology,

97(1), 103-116.

Kiewra, K.A., DuBois, N.F., Christian, D., McShane, A., et al. (1991). Note-taking functions and

techniques. Journal of Educational Psychology, 83(2), 240-245.

Kirschner, P.A., & Karpinski, A.C. (2010). Facebook and academic performance. Computers in Human

Behavior, 26(6), 1237-1245.

Kraushaar, J.M., & Novak, D.C. (2010). Examining the effects of student multitasking with laptops

during the lecture. Journal of Information Systems Education, 21(2), 241-251.

McEachern, W. A., & Trost, S. C. (2017). Econ micro: Principles of microeconomics. Boston, MA: Cengage

Learning.

Miller, R.L. (2016). Economics today: Student value edition. New York, NY: Pearson.

Mueller, P. A., & Oppenheimer, D. M. (2014). The pen is mightier than the keyboard. Psychological

Science, 25(6), 1159-1168.

Nemetz, P. L., Eager, W. M., & Limpaphayom, W. (2017). Comparative effectiveness and student

choice for online and face-to-face classwork. Journal of Education for Business, 92(5), 210-219.

Olive, T., & Barbier, M. (2017). Processing time and cognitive effort of longhand note taking when

reading and summarizing a structured or linear text. Written Communication, 34(2), 210-219.

Patterson, R.W., & Patterson, R.M. (2017). Computers and productivity: Evidence from laptop use

in the college classroom. Economics of Education Review, 57(C), 66-79.

Sana, F., Weston, T., & Cepeda, N.J. (2013). Laptop multitasking hinders classroom learning for

both users and nearby peers. Computers & Education, 62, 24-31.

Trimmel, M., & Bachmann, J. (2004). Cognitive, social, motivational and health aspects of students

in laptop classrooms. Journal of Computer Assisted Learning, 20(2), 151-158.



Wei, F.F., Wang, Y.K., & Fass, W. (2014). An experimental study of online chatting and notetaking

techniques on college students’ cognitive learning from a lecture. Computers in Human

Behavior, 34, 148-156.

Wurst, C., Smarkola, C., & Gaffney, M. A. (2008). Ubiquitous laptop usage in higher education:

Effects on student achievement, student satisfaction, and constructivist measures in honors

and traditional classrooms. Computers & Education, 51(4), 1766-1783.



Table 1

Summary statistics
Variable description Obs. Mean
(St Dev.)
Computer 1: assigned to use computer in trial 1 ODDS 230 0.552
(0.498)
Computer 2: assigned to use computer in trial 2 EVENS 230 0.448
(0.498)
Attendance 1: attended lecture in trial 1 230 0.930
(0.255)
Attendance 2: attended lecture in trial 2 230 0.913
(0.282)
Female: = 1 if student is a woman and 0 otherwise 228 0.360
(0.481)
Non-white: = 1 if student’s race is not white and 0 otherwise 228 0.105
(0.308)
Age: age of student in years 227 20.251
(2.936)
GPA: cumulative college grade point average (at start of the 225 3.145
semester)a (1.727)
ACT: student composite ACT score 213
22.732
(3.140)
Hours: number of hours per week student works for pay 226
11.280
(11.703)
Quiz 1: score on trial 1 quiz (out of 10) 220
8.018
(1.627)
Quiz 2: score on trial 2 quiz (out of 10) 215 7.972
(1.694)
Quantitative Subset 1: score on trial 1 quiz subset 220 2.808
Questions (2), (4), (7) and (8) (1.018)
Quantitative Subset 2: score on trial 2 quiz subset 215 2.340
Questions (4), (7) and (10) (0.756)
Exam 1: score on trial 1 exam questions (out of 2) 223 0.767
(0.722)



Exam 2: score on trial 2 exam questions (out of 2) 219 1.329
(0.761)
Total 1: sum of trial 1 quiz and exam scores (out of 12) 217 8.806
(1.903)
Total 2: sum of trial 2 quiz and exam scores (out of 12) 212 9.311
(2.028)
a We rely on student-reported GPA and ACT measures rather than institutional data. Haley, Johnson, & McGee (2010) demonstrate that self-reported and institutional measures of GPA and ACT scores are distributionally equivalent and can therefore be used interchangeably in research of this type.



Table 2

Summary statistics by group

Variable      ODDS (computer group in Trial 1)    EVENS (computer group in Trial 2)
Female        0.376 (0.486)                       0.340 (0.476)
Non-white     0.128 (0.335)                       0.078 (0.269)
Age           20.097 (1.985)                      20.437 (3.780)
GPA           3.289 (2.293)                       2.970 (0.452)
ACT           22.839 (3.323)                      22.600 (2.908)

                            Trial 1                                      Trial 2
Variable                    Computer group    Paper group (EVENS)    Computer group    Paper group (ODDS)
                            (ODDS)                                   (EVENS)
Attendance 0.929 0.932 0.883 0.937
(0.259) (0.253) (0.322) (0.244)

Quiz Score (Out of 10) 7.924 8.129 7.916 8.017


(1.688) (1.553) (1.760) (1.645)

Quantitative Subset Score 2.831 2.782 2.295 2.375


(Out of 4; Out of 3) (1.032) (1.006) (0.770) (0.745)

Exam Score (Out of 2) 0.754 0.782 1.323 1.333


(0.742) (0.701) (0.740) (0.781)

Total Score 8.675 8.960 9.213 9.390


(Quiz + Exam, Out of 12) (1.995) (1.786) (2.047) (2.017)



Table 3

Unpaired two-sample t-tests

Trial 1 assessments Trial 2 assessments

Model Quiz 1 Exam 1 Total 1 Quantitative Quiz 2 Exam 2 Total 2 Quantitative


Subset 1 Subset 2

(1) Equal 0.928 0.288 1.100 -0.350 0.433 0.098 0.631 0.773
variances (0.355) (0.773) (0.273) (0.727) (0.666) (0.922) (0.529) (0.441)
{220} {223} {217} {219} {215} {219} {212} {215}

(2) Unequal 0.934 0.290 1.109 -0.35 0.430 0.098 0.630 0.770
variances (0.351) (0.772) (0.269) (0.727) (0.668) (0.922) (0.530) (0.442)
{220} {223} {217} {219} {215} {219} {212} {215}

(3) If students 1.030 0.259 0.986 -0.119 0.619 0.741 0.984 1.245
attended (0.304) (0.796) (0.325) (0.906) (0.537) (0.460) (0.326) (0.214)
each lecture {208} {209} {207} {207} {203} {202} {200} {203}
(equal
variances)

Note. Ho: mean(handwritten) - mean(computer) = 0.


Format: t-statistic, (p-value in parentheses), {number of observations in brackets}



Table 4

Fixed effects OLS regressions of standardized scores (z scores)


All Students Controlling for Lecture Attendance
Variable Quiz Exam Total Subset Quiz Exam Total Subset
Computer notes    -0.123    -0.027    -0.119    -0.056      -0.123     -0.025    -0.119    -0.055
                  (-1.47)   (-0.37)   (-1.53)   (-0.61)     (-1.49)    (-0.34)   (-1.54)   (-0.59)

Attended class    -----     -----     -----     -----       0.688***   0.067     0.491**   0.314
                                                            (3.12)     (0.25)    (2.25)    (1.16)

Constant          0.061     0.013     0.059     0.278       -0.590***  -0.050    -0.406**  -0.270
                  (1.47)    (0.37)    (1.53)    (0.61)      (-2.82)    (-0.20)   (-1.98)   (-1.06)
Observations 435 442 431 437 435 442 431 437
R-squared 0.002 0.000 0.003 0.000 0.029 0.002 0.023 0.007
F-statistic 2.16 0.13 2.34 0.37 5.62*** 0.10 3.36** 0.79
Note. t-statistics are in parentheses; *** indicates statistical significance at better than the 1% level (p < 0.01); ** indicates
statistical significance at better than the 5% level (p < 0.05). Heteroskedasticity robust standard errors are clustered at the
individual level.



Table 5

Select descriptive statistics by note-taking preferences: mean (st. dev.)


Computer Paper
Variable preference preference
(n = 18) (n = 212)

Attendance: = 1 if student attended both trial lectures 0.722 0.882


and 0 otherwise (0.461) (0.324)

Note-taking skills: = 1 if “very good” (best chosen 0.000*** 0.118***


category) and 0 otherwise (0.000) (0.323)
0.722 0.811
Note-taking skills: = 1 if “good” and 0 otherwise
(0.461) (0.392)

Note-taking skills: = 1 if “poor” (lowest chosen category) 0.278* 0.061*


and 0 otherwise (0.461) (0.240)
Study habits: =1 if student studies notes several times 0.500 0.656
before every assessment (best chosen category) and 0
otherwise (0.514) (0.476)

Useful notes: = 1 if student thinks notes are typically 0.333* 0.580*


“very useful” for performing well in courses and 0
otherwise (0.485) (0.495)

Note. We test for statistical significance in the characteristics' differences between groups using
unpaired two-sample t-tests, assuming unequal variances when necessary, and the null
hypothesis Ho: mean(paper preference) - mean(computer preference) = 0. Standard deviations are in
parentheses; *** indicates statistical significance at better than the 1% level (p < 0.01); **
indicates statistical significance at better than the 5% level (p < 0.05); * indicates statistical
significance at better than the 10% level (p < 0.10). Heteroskedasticity robust standard errors are
clustered at the individual level.



Table 6

Simple OLS assessment scores controlling only for primary note-taking tool

Trial 1 assessments Trial 2 assessments

Variable      Quiz 1    Concept      Exam 1    Total 1    Quiz 2    Concept      Exam 2    Total 2
                        Subset 1                                    Subset 2

Computer -0.721* -0.063 -0.230 -0.939** -0.645 -0.367** -0.152 -0.810**


preference (-1.807) (-0.264) (-1.354) (-2.200) (-1.636) (-1.978) (-0.670) (-2.108)

Constant 8.074*** 2.813*** 0.785*** 8.880*** 8.020*** 2.367*** 1.340*** 9.372***


(70.877) (38.956) (15.527) (65.831) (66.426) (44.254) (25.409) (63.622)

Observations 220 219 223 217 215 215 219 212

R-squared 0.014 0.000 0.008 0.018 0.010 0.016 0.003 0.011

F-statistic 3.26* 0.07 1.83 4.84** 2.68 3.91** 0.45 4.44**

Note. t-statistics are in parentheses; *** indicates statistical significance at better than the 1% level (p
< 0.01); ** indicates statistical significance at better than the 5% level (p < 0.05); * indicates statistical
significance at better than the 10% level (p < 0.10). Heteroskedasticity robust standard errors are
clustered at the individual level.



Pre-Experiment Survey

Please answer the following questions by recording your answer on the provided scantron sheet.
There are 4 questions at the end of the survey which ask you to fill in the blank.

1. I identify my gender as
a. Female
b. Male
c. Other

2. My college class status is


a. Freshman
b. Sophomore
c. Junior
d. Senior
e. Super Senior
f. Other

3. I identify my racial or ethnic background as


a. Black
b. Hispanic/Latino(a)
c. Native American
d. Asian/Pacific Islander
e. Caucasian/White
f. Other

4. To the best of my knowledge, I am eligible for federally-funded financial aid such as Pell
Grants, loans, or work-study.
a. Yes
b. No

5. I am a veteran of the U.S. Armed Forces.


a. Yes
b. No

6. I am currently employed for pay on or off campus (for 5 hours or more per week).
a. Yes
b. No

7. When taking notes in class, I primarily use


a. Pen/pencil and paper (notebook)
b. A tablet computer with keyboard
c. A tablet computer with stylus
d. A laptop computer
e. I do not regularly take notes

8. I would rate my note-taking skills as




a. Very good
b. Good
c. Poor
d. Very Poor

9. When the instructor speaks I will typically


a. write down everything the instructor says
b. write down some of the things the instructor says
c. write down none of the things the instructor says

10. When the instructor writes on the board I will typically


a. write down everything written on the board
b. write down some of the things written on the board
c. write down none of the things written on the board

11. When taking notes on topics that involve graphical analysis or mathematics, I
a. Copy everything exactly, including labeling axes and all mathematical steps, even if I
know how to do the problem
b. I copy most of the material in the graph or most of the mathematics
c. I sketch the graph or just write down the main equation
d. I don’t usually make notes of graphs or math

12. How often do you consult your lecture notes to study for scheduled exams or quizzes?
a. I consult my notes several times prior to every assessment
b. I consult my notes only once prior to every assessment
c. I consult my notes prior to most, but not all, assessments
d. I consult my notes prior to assessments only when I am unsure of the subject matter
e. I rarely consult my notes
f. I never consult my notes

13. How useful would you say your notes typically are for performing well in your courses?
a. Very useful
b. Somewhat useful
c. Not very useful
d. Not useful at all

Please record your answer to the following questions in the accompanying blanks. If you are not
sure, please record to the best of your ability.

14. My age is ______________

15. My current cumulative college GPA is ____________

16. My composite ACT or SAT score was _____________

17. How many hours do you work for pay in an average week during the semester?
____________



Post-Experiment Survey

1. Which method of note-taking do you think was associated with better quiz and exam scores
for yourself?
a. hand-written notes on paper
b. notes taken electronically on a laptop, tablet, or phone
c. no note-taking, but listening to lectures
d. hand-written notes and electronic notes are equivalent (there is no difference)

2. Based on your experiences this semester, are you


a. more likely to switch your note-taking habit and take notes by hand and on paper in the
future
b. more likely to switch your note-taking habit and take notes electronically (computer,
tablet, phone, etc.) in the future
c. unlikely to change your note-taking habits

3. For the class in general, which method of note-taking do you think was associated with
better quiz and exam scores?
a. hand-written notes on paper
b. notes taken electronically on a laptop, tablet, or phone
c. no note-taking, but listening to lectures
d. hand-written notes and electronic notes are equivalent (there is no difference)

4. When taking notes electronically, what method did you use?


a. I typed my notes in Word or an equivalent program
b. I used a stylus and tablet to take notes
c. I used another program or method

5. When you were asked to take notes electronically, you found


a. the note-taking harder than expected
b. the note-taking easier than expected
c. neither harder nor easier than expected

6. When taking notes about graphs or mathematics electronically, you


a. ‘wrote down’ less than you would if you had taken notes by hand (e.g. not label axis, not
use subscripts, not label lines in the graph)
b. ‘wrote down’ as much as you would if you had taken notes by hand
c. ‘wrote down’ more than you would if you had taken notes by hand (e.g. labeled more
lines, wrote more descriptive explanations, used subscripts, labeled more lines on the
graphs)

7. Please indicate which statement you agree with the most


a. taking notes by hand and electronically are both equally good methods
b. taking notes by hand is superior
c. taking notes electronically is superior
d. some methods are better for some classes than others



8. Please rate the following statement on a scale of “a” to “e”, with “a” being “I am not at all
distracted” and “e” indicating “I am highly distracted.”

I am distracted when students around me use their computers or tablets in class.

a b c d e

Not at all distracted Highly distracted

9. Please indicate which of the following statements best reflects your attitude about
participating in this study.
a. I highly enjoyed participating in this study
b. I generally enjoyed participating in this study
c. I was indifferent to participating in this study
d. I did not enjoy participating in this study
e. I strongly disliked participating in this study

10. Did you borrow a university laptop to take notes for either of the consumer behavior or
inequality lectures?
a. Yes
b. No

13. Did having the extra credit available from the experiment alter the amount of effort you
put into this course, relative to your other courses?
a. Yes
b. No

14. Did you ever feel the need, because of the experiment’s design, to study with a student in
the other group (for instance, if you are even, you wanted to study with an odd)?
a. Yes
b. No



Trial 1 Quiz and Exam Questions

1) An indifference curve shows all combinations of two goods that


A) are equally preferable by a consumer
B) a consumer does not prefer to purchase
C) could be available to the consumer in a given time period
D) a consumer could buy with a given income

2) On the graph above, when the budget line shifts from BL2 to BL1, the consumer whose
preferences are shown in the graph above will buy
A) More of K and more of L
B) Less of K and less of L
C) More of K and less of L
D) Less of K and more of L

3) Which of the following happens to a consumer’s budget line if that consumer’s budget
(income) increases? The budget line _____________.
A) becomes steeper
B) shifts farther away from the origin of the graph
C) becomes more horizontal
D) shifts closer to the origin of the graph



4) The graph above indicates that the consumer prefers combination
A) A more than B.
B) C more than B.
C) B more than D.
D) D more than A.

5) All points on or below a budget constraint


A) are attainable with the given income.
B) are equally desirable.
C) represent market basket combinations that exhaust the income available.
D) are described, in part, by answers a, b, and c above.

6) When the price of a product rises, consumers shift their purchases to other products whose
prices are now relatively lower. This statement describes.
A) the rationing function of prices
B) the substitution effect
C) the law of supply
D) the income effect

7) As long as all prices remain constant, an increase in money income results in


A) an increase in the slope of the budget line.
B) a decrease in the slope of the budget line.
C) an increase in the intercepts of the budget line.
D) a decrease in the intercepts of the budget line.



8) In the above figures, which one reflects an increase in the price of chicken?
A) Figure A
B) Figure B
C) Figure C
D) Figure D

9) In the above figures, which one reflects an increase in the consumer’s income?
A) Figure A
B) Figure B
C) Figure C
D) Figure D

10) The substitution effect explains that when the price of a good increases, consumers will
consume
A) less of the good and more of some other good
B) more of the good and less of some other good
C) more of the good because their real incomes are lower after the price increase
D) less of the good because their real incomes are higher after the price increase



Trial 1 Exam Questions

1) On the graph above, if the budget line shifts from BL2 to BL1, it is because the price of
A) K increased
B) K decreased
C) L increased
D) L decreased

2) The graph above indicates that the consumer


A) at point A is indifferent between a of apples and b of butter
B) at point A is consuming either a of apples or b of butter.
C) is indifferent between a of apples plus b of butter on the one hand and c of apples
plus d of butter on the other.
D) is correctly described by all of the above.



Trial 2 Quiz and Exam Questions

1) When a Lorenz curve is as far away from the equality line as possible, there is
A) perfect income equality.
B) perfect income inequality.
C) a Gini coefficient close to zero.
D) an equal distribution of income.

2) The closer the Gini coefficient is to zero, the


A) greater the degree of income inequality.
B) less the degree of income inequality.
C) greater the degree of discrimination.
D) less the degree of discrimination.

3) Compared to the distribution of income, the distribution of wealth is


A) about the same
B) much more equal
C) much less equal
D) a little more equal

4) Which of the following statements is false?


A) The Lorenz curve expresses the relationship between cumulative percentage of
households and cumulative percentage of income.
B) If there was perfect income equality, the Lorenz curve would be a 45-degree line.
C) The Gini coefficient is a measurement of the degree of inequality in the income
distribution.
D) The Gini coefficient is a number between 0 and 100.

5) The Lorenz curve for the United States economy has:


A) shifted further away from the line of equality over the years.
B) shifted toward the line of perfect equality over the years.
C) equaled the line of perfect equality in recent years.
D) become a vertical line in recent years.

6) The Lorenz curve is helpful in visualizing the:


A) Tradeoff between unemployment and inflation
B) Relationship between the prices received and paid by farmers
C) Degree of inequality in the distribution of income
D) Relationship between education and income



7) According to the graph above, which curve shows the most unequal distribution of
income?
A) A.
B) B.
C) C.
D) D.

8) The Gini coefficient:


A) measures the relative extent of poverty in a nation.
B) compares the income of persons, households, or households at the 90th percentile
of the income distribution to the income at the 10th percentile.
C) is a numerical measure of the overall dispersion of income in a nation.
D) is a ratio consisting of (in the numerator) the entire area below and to the right of
the equality line in a Lorenz diagram and (in the denominator) the area between
the equality line and Lorenz curve.

9) Which of the following Gini coefficients indicates the highest degree of income
inequality?
A) 0.78
B) 0.65
C) 0.29
D) 0.42

10) Which of the following statements is true?


A) If the Gini coefficient is large, then the space between the Lorenz curve and the
line of perfect income equality will be small.
B) If the Gini coefficient is 1.00, then the Lorenz curve and the line of perfect
income equality coincide.
C) If the government succeeds in decreasing income inequality, then the Gini
coefficient will decrease.
D) If the Gini coefficient is 0.50, then one-half of the households earn one-half of the
income, and the other half of the households earn the other half.



Trial 2 Exam Questions

1) Refer to the above diagram where curves (a) through (e) represent five different countries. If
curve (c) reflects the Lorenz curve in the U.S., which curves would reflect the Lorenz curve
of Brazil and of Canada?
A) curve (b) for Canada and curve (d) for Brazil.
B) curve (d) for Canada and curve (b) for Brazil.
C) curve (a) for Canada and point (e) for Brazil.
D) point (e) for Canada and curve (a) for Brazil.

2) Wealth in the United States is:


A) distributed in a way that reduces the degree of income inequality.
B) more unequally distributed than is income.
C) less unequally distributed than is income.
D) distributed in a way that has no effect on income inequality.



OLS Full Specification Cross-Sections

Mean Trial 1 Assessments Trial 2 Assessments


(St. Dev.) Quiz 1 Exam 1 Quiz 2 Exam 2
(1) (2) (3) (4)
Computer -0.209 -0.104 -0.108 0.079
(-0.979) (-1.043) (-0.472) (0.788)
Attendance 0.940 -0.099 1.108* 0.045
(1.509) (-0.410) (1.948) (0.191)
Female -0.493** 0.254** -0.113 -0.008
(-1.996) (2.179) (-0.444) (-0.071)
Non-white -0.395 0.042 -0.222 -0.116
(-0.990) (0.203) (-0.580) (-0.610)
Age -0.093* -0.027 -0.131 -0.049*
(-1.725) (-0.862) (-1.442) (-1.904)
ACT 0.094* 0.065*** 0.154*** 0.051***
(1.807) (3.743) (4.222) (3.406)
GPA 0.167*** -0.007 -0.025 -0.005
(4.256) (-0.427) (-0.864) (-0.461)
Hours 0.004 -0.001 -0.001 0.001
(0.329) (-0.163) (-0.133) (0.205)
Sophomore: = 1 if student is a sophomore and 0 0.548 -0.104 -0.019 0.230 0.087
otherwise (0.499) (-0.398) (-0.161) (0.882) (0.775)
Financial aid eligible: = 1 if student is eligible for 0.826 0.231 0.064 -0.894*** -0.003
financial aid and 0 otherwise (0.380) (0.724) (0.452) (-3.278) (-0.024)
Veteran: = 1 if student is an armed forces veteran 0.013 0.833 -0.360 0.874* 0.438*
and 0 otherwise (0.114) (1.370) (-1.457) (1.712) (1.670)
Computer preference: = 1 if student primarily uses a 0.078 -0.148 -0.337* -0.599 -0.351*
computer or tablet to take notes and 0 otherwise (0.269) (-0.337) (-1.822) (-1.034) (-1.781)
Very good note-taker: = 1 if student thinks she is 0.109 0.373 0.110 0.149 0.218
a very good note-taker and 0 otherwise (0.312) (0.941) (0.701) (0.475) (1.403)
Poor note-taker: = 1 if student thinks she is a poor 0.078 -0.098 0.155 0.145 -0.080
note-taker and 0 otherwise. (0.269) (-0.198) (0.743) (0.270) (-0.305)
Writes all instructor says: = 1 if student writes down 0.139 0.216 -0.039 0.165 -0.185
all the instructor says and 0 otherwise (0.347) (0.773) (-0.283) (0.584) (-1.442)
Writes all instructors writes: = 1 if student writes 0.748 -0.201 -0.025 0.184 0.020
down all the instructor writes and 0 otherwise (0.435) (-0.788) (-0.208) (0.704) (0.172)
Writes all math and graphs: = 1 if students writes all 0.691 0.139 0.090 -0.311 0.041
math and graphs from the lecture and 0 otherwise (0.463) (0.534) (0.796) (-1.216) (0.358)
Studies often: = 1 if student studies notes several 0.643 -0.221 0.166 0.110 -0.050
times before each assessment and 0 otherwise (0.480) (-0.938) (1.541) (0.457) (-0.496)
Notes are useful: = 1 if students thinks her notes are 0.561 0.373 0.019 0.890*** 0.076
very useful and 0 otherwise (0.497) (1.527) (0.176) (3.601) (0.770)
Section 2 0.009 -0.452*** 0.188 0.570***



(0.026) (-2.920) (0.522) (3.402)
Section 3 -0.065 -0.113 -0.147 0.618***
(-0.222) (-0.722) (-0.396) (3.974)
Section 4 -0.404 -0.166 0.295 0.788***
(-0.972) (-0.998) (0.783) (5.450)
Section 5 0.314 -0.057 -0.389 -0.086
(0.949) (-0.341) (-1.121) (-0.509)
Constant 6.418*** -0.134 6.370*** 0.642
(3.247) (-0.160) (2.672) (0.825)
Observations 202 204 198 202
R squared 0.183 0.196 0.299 0.337
F-stat 2.58*** 2.76*** 4.30*** 6.35***

t-statistics are in parentheses; *** indicates statistical significance at better than the 1% level (p < 0.01); ** indicates
statistical significance at better than the 5% level (p < 0.05); * indicates statistical significance at better than the 10% level
(p < 0.10). Heteroskedasticity robust standard errors are clustered at the individual level.
