
IN THE UNITED STATES DISTRICT COURT
FOR THE NORTHERN DISTRICT OF GEORGIA
ATLANTA DIVISION

CORECO JA’QAN PEARSON, VIKKI TOWNSEND CONSIGLIO, GLORIA KAY GODWIN, JAMES KENNETH CARROLL, CAROLYN HALL FISHER, CATHLEEN ALSTON LATHAM, and BRIAN JAY VAN GUNDY,

Plaintiffs,

v.

BRIAN KEMP, in his official capacity as Governor of Georgia, BRAD RAFFENSPERGER, in his official capacity as Secretary of State and Chair of the Georgia State Election Board, DAVID J. WORLEY, in his official capacity as a member of the Georgia State Election Board, REBECCA N. SULLIVAN, in her official capacity as a member of the Georgia State Election Board, MATTHEW MASHBURN, in his official capacity as a member of the Georgia State Election Board, and ANH LE, in her official capacity as a member of the Georgia State Election Board,

Defendants.

CIVIL ACTION FILE NO. 1:20-cv-04809-TCB

INTERVENORS’ MOTION TO EXCLUDE TESTIMONY OF SHIVA AYYADURAI, RUSSELL JAMES RAMSLAND, JR., MATTHEW BRAYNARD, WILLIAM M. BRIGGS, RONALD WATKINS, BENJAMIN A. OVERHOLT, ERIC QUINNELL, S. STANLEY YOUNG, AND “SPYDER”
TABLE OF CONTENTS

TABLE OF AUTHORITIES
I. INTRODUCTION
II. LEGAL STANDARD
III. ARGUMENT
A. Ayyadurai is not qualified and fails to disclose his methods.
B. Ramsland is not qualified and fails to disclose his methods.
C. Braynard is not qualified and his report does not utilize generally accepted methodology.
D. Briggs’ report is built on a faulty foundation and is not helpful.
E. Watkins is not qualified and his report rests entirely on speculation.
F. Overholt discloses no relevant qualifications and his report contains serious errors.
G. Quinnell and Young are not qualified and their declarations are unreliable.
H. It is impossible to assess the qualifications of the unnamed individual known as “Spyder” and his declaration consists of nothing more than speculation.
IV. CONCLUSION
TABLE OF AUTHORITIES

CASES

Bowers v. Norfolk S. Corp., 300 Fed. App’x 700 (11th Cir. 2008)
Chapman v. Proctor & Gamble Distrib., LLC, 766 F.3d 1296 (11th Cir. 2014)
Daubert v. Merrell Dow Pharm., Inc., 509 U.S. 579 (1993)
Greater Hall Temple Church of God v. S. Mutual Church Ins. Co., 820 Fed. App’x 915 (11th Cir. 2020)
Horton v. Maersk Line Ltd., 603 Fed. App’x 791 (11th Cir. 2015)
McClain v. Metabolife Int’l, Inc., 401 F.3d 1233 (11th Cir. 2005)
McCorvey v. Baxter Healthcare Corp., 298 F.3d 1253 (11th Cir. 2002)
McDowell v. Brown, 392 F.3d 1283 (11th Cir. 2004)
Redmond v. City of East Point, Georgia, No. 1:00-CV-2492-WEJ, 2004 WL 6060552 (N.D. Ga. Mar. 29, 2004)
Rider v. Sandoz Pharm. Corp., 295 F.3d 1194 (11th Cir. 2002)
Robinson v. City of Montgomery, Civil Action No. 2:01cv40-CSC, 2005 WL 6743206 (M.D. Ala. Mar. 2, 2005)
Smith v. Ortho Pharm. Corp., 770 F. Supp. 1561 (N.D. Ga. 1991)
United Fire & Cas. Co. v. Whirlpool Corp., 704 F.3d 1338 (11th Cir. 2013) (per curiam)
United States v. Frazier, 387 F.3d 1244 (11th Cir. 2004)
United States v. Wilk, 572 F.3d 1229 (11th Cir. 2009)

OTHER AUTHORITIES

Chris Francescani, The men behind QAnon, ABC News (Sept. 22, 2020), https://abcnews.go.com/Politics/men-qanon/story?id=73046374
Fed. R. Evid. 702
Mark Niesse, Absentee ballots can begin to be opened, but not counted, in Georgia, The Atlanta Journal-Constitution (Oct. 19, 2020), https://www.ajc.com/politics/absentee-ballots-can-begin-to-be-opened-but-not-counted-in-georgia/BRBLHVUJOFHB5OEHAMZV34HPDA/
I. INTRODUCTION

In an attempt to support their claims of a multi-national conspiracy to rig the

results of the presidential election for President-Elect Joseph R. Biden, Jr.—which

Plaintiffs allege was accomplished by methods ranging from “ballot stuffing” at

voting machines via a hidden software algorithm to illegally processing tens of

thousands of absentee ballots—Plaintiffs have filed multiple “expert” declarations

and reports. But the individuals put forward by Plaintiffs as “experts” are wildly

unqualified. For example, a former Trump staffer who has publicly stated that he is

working hand in glove with the Trump campaign to get the election overturned and

delivered to the President purports to offer a statistical analysis of election data

despite having had no relevant training, skill, or experience. Others’ grounding in

their claimed areas of expertise is equally suspect. The analyses they offer rely on

patently incomplete or faulty data. Over and over, the reports fail to disclose the

methods employed by their authors, error rates, or even how underlying data was

obtained. Where their methodology is discernable, Plaintiffs’ “experts” regularly

use methods that are not at all standard or trusted in the relevant field, and draw

conclusions that are nothing more than speculation.1

1 Some reports were attached as exhibits to the Complaint, while others are referenced in Plaintiffs’ motion for temporary restraining order, and some are not referenced in any motion or pleading at all. It is therefore unclear which reports Plaintiffs plan to rely on in support of their motion for temporary restraining order; in any event, all should be excluded.
Plaintiffs attempt to use these unreliable reports written by unqualified

individuals to seek extraordinary relief, including an order de-certifying the

November 2020 election results and a declaration that Georgia’s electoral college

votes will be awarded to President Trump despite Georgia voters’ clear decision

choosing President-Elect Biden. None of these reports supports this relief, and none

is sufficient to pass the Daubert standard for admissibility. All should be excluded.

II. LEGAL STANDARD

Courts may only admit expert testimony when “(1) the expert is qualified to

testify regarding the subject of the testimony; (2) the expert’s methodology is

sufficiently reliable as determined by the sort of inquiry mandated in Daubert [v.

Merrell Dow Pharm., Inc., 509 U.S. 579 (1993)]; and (3) the expert’s testimony will

assist the trier of fact in understanding the evidence or determining a fact at issue.”

Chapman v. Proctor & Gamble Distrib., LLC, 766 F.3d 1296, 1304 (11th Cir. 2014)

(quotation marks and citation omitted); see also Fed. R. Evid. 702. As proponents of

the expert testimony at issue, Plaintiffs bear the burden to establish these

requirements. Chapman, 766 F.3d at 1304.

An expert is qualified if they can testify competently regarding the matters addressed by virtue of their education, training, experience, knowledge, or skill.

United States v. Frazier, 387 F.3d 1244, 1260-61 (11th Cir. 2004). Where a proposed

expert fails to demonstrate experience, training, or other qualifications in the field

and in the methodologies that they utilize to provide their opinion, they cannot be

qualified as an expert. Smith v. Ortho Pharm. Corp., 770 F. Supp. 1561, 1566 (N.D.

Ga. 1991).

In determining whether proffered expert testimony is reliable, courts consider

whether: (1) the expert’s methodology has been tested or is capable of being tested;

(2) the theory or technique has been subjected to peer review and publication; (3)

there is a known or potential error rate of the methodology; and (4) the technique

has been generally accepted in the relevant scientific community. United Fire &

Cas. Co. v. Whirlpool Corp., 704 F.3d 1338, 1341 (11th Cir. 2013) (per curiam)

(citing Daubert, 509 U.S. at 593-94). Failure to disclose the data or methodology

that form the basis of an expert’s conclusions warrants exclusion. Robinson v. City

of Montgomery, Civil Action No. 2:01cv40-CSC, 2005 WL 6743206, at *3 (M.D.

Ala. March 2, 2005).

Finally, and most fundamentally, the Court must ensure that the expert’s

testimony “is relevant to the task at hand.” Chapman, 766 F.3d at 1306 (quotation

marks and citation omitted). If the Court determines that the testimony is not

relevant, the Court should exclude even reliable expert testimony. See Daubert, 509

U.S. at 591; United States v. Wilk, 572 F.3d 1229, 1235 (11th Cir. 2009).

III. ARGUMENT
A. Ayyadurai is not qualified and fails to disclose his methods.

Shiva Ayyadurai is an engineer with training in mechanical engineering and

biomedical engineering. He seeks to testify regarding voting patterns in certain

Georgia counties. See Declaration of Shiva Ayyadurai (“Ayyadurai Decl.”), ECF

No. 6-1, at ¶¶ 3, 15-17, 30. Ayyadurai, however, does not possess relevant

education, experience, or background to offer opinions on these topics and, even if

he did, he fails to disclose his methodology.

Ayyadurai is not qualified to opine on voting behavior, projections, statistical

analysis of ethnicity data in relation to voting behavior, or cumulative voting

analysis. He has not been previously qualified to speak on these topics and his

report identifies no education or experience that equips him to offer these opinions.

Though Ayyadurai has degrees in engineering and computer science, applied

mechanics, and systems biology, he does not explain how these credentials qualify

him to offer the opinions at issue. See Horton v. Maersk Line Ltd., 603 Fed. App’x

791, 798-99 (11th Cir. 2015) (finding witness unqualified to opine on corner casing

defects even though witness knew how to repair corner casings). Ayyadurai claims

to be “an engineer” with “vast experience in engineering systems, pattern

recognition, [and] mathematical and computational modeling and analysis.”

Ayyadurai Decl. at ¶ 2. His report, however, does not indicate how this “vast

experience” qualifies him to testify about or analyze voting behavior. See Horton,

603 Fed. App’x at 798-99. Indeed, it appears this is the first time in his entire career

that he has even contemplated these issues. His lack of qualifications alone warrants

exclusion. See, e.g., Smith, 770 F. Supp. at 1566; Chapman, 766 F.3d at 1313-14.

In addition, Ayyadurai’s report is inadmissible because he fails to disclose

the methods he used and, even if any method can be discerned, it is obviously

unreliable. McDowell v. Brown, 392 F.3d 1283, 1298 (11th Cir. 2004). Ayyadurai

summarizes his conclusions as follows: (1) there are improbable vote pattern

anomalies, including instances of “High Republicans, Low Trump” vote patterns in

certain precincts; (2) in three counties the “only plausible explanation for the vote

distribution was that President Trump received near zero Black votes,” Ayyadurai

Decl. at 27-28; and (3) an unidentified “‘weighted race’ algorithm” transferred

“approximately 48,000 votes from President Trump to Mr. Biden,” id. at 28. As

noted by Intervenors’ rebuttal expert Jonathan Rodden, however, Ayyadurai

“provides no indications about his data sources,” “does not explain how he

measures his variables,” and “[h]is claims about race and ethnicity are, frankly,

inscrutable, and thus difficult to evaluate with data analysis.” Report of Jonathan

Rodden (“Rodden Rep.”) at 24.2

For but one example, Ayyadurai summarizes demographic data from

undisclosed sources purportedly related to the percentage of Republican-,

Democratic-, and Independent-affiliated individuals within certain counties, as well

as the “ethnic” makeup of those counties, again by percentage. Ayyadurai Decl. at

¶¶ 14-21. Ayyadurai then references graphs that he claims show that as the

percentage of Republicans in certain precincts increases, the overall percentage of

Republican votes for President Trump decreases. See id. at ¶¶ 15-21. He does not

explain why this is problematic or how, as he also contends, these graphs show

fraud. As Dr. Rodden notes, such a pattern is not surprising—and it certainly is not

an indication of election fraud. See Rodden Rep. at 24-35. In fact, Ayyadurai’s

“phrase—‘high Republican but low Trump’—describes something that we saw not

only in Savannah, [Georgia], but in metro areas around Georgia and the United

States: white metro-area voters who typically vote for Republican candidates

continued to do so in down-ballot races, but a number of them voted for the

2 Dr. Rodden is a tenured Professor of Political Science at Stanford University and the founder and director of the Stanford Spatial Social Science Lab. See Rodden Rep. He is an established expert on election data analysis and has appeared—and been credited—as an expert in numerous voting rights and election-related lawsuits across the country. Rodden Rep. at 3-6.
Democratic candidate in the presidential race.” Id. at 30. Ayyadurai does not

account for the explanation provided by Dr. Rodden, provide the methodology used

to reach his conclusion or identify the source of his data.

The entirety of Ayyadurai’s report suffers from the same flaws: a

conspicuous failure to disclose the source(s) of the data relied on, how conclusions

were reached, and what methodology, if any, underlies the opinions. See, e.g.,

Ayyadurai Decl. ¶¶ 15(g)-(h), 16(g)-(h), 17(g)-(h) (failing to identify source or

relevance of data as well as method underlying opinion); ¶¶ 30-31 (lacking

reference to data source, explanation of algorithm or how votes were transferred

from Trump to Biden); Rodden Rep. at 24, 32-35. The report fails to disclose

enough about the methods employed or relied upon so that those methods can be

reviewed, tested, duplicated, and verified. See McDowell, 392 F.3d at 1298. Reports

that omit even a minimal disclosure of the underlying methods are inadmissible.

See Frazier, 387 F.3d at 1260. Ayyadurai’s declaration should be excluded. Id. at

1265 (“[t]he court’s gatekeeping function requires more than simply ‘taking the

expert’s word for it.’”).

B. Ramsland is not qualified and fails to disclose his methods.

Russell James Ramsland, Jr. offers opinions regarding whether the use of

certain voting machines influenced the outcome of the 2020 presidential election in

Georgia. See, e.g., Declaration of Russell James Ramsland, Jr. (“Ramsland Decl.”)

ECF No. 1-10, ¶ 8. Ramsland’s report should be excluded because Ramsland is not

qualified as an expert and fails to disclose the information relied on and the

methodology he (or others) utilized to reach his conclusions.

First, Ramsland is a businessman who lacks the qualifications necessary to

offer expert opinion testimony on the impact, if any, on the 2020 presidential

election from the use of certain voting machines. See id. at ¶ 2. In his declaration,

Ramsland candidly admits his lack of relevant knowledge, education, and

experience, stating that he “relied on [his current employer’s] experts and

resources,” noting that his employer, “which provides a range of security services,”

“contract[s] with statisticians when needed,” and employs a “wide variety of cyber

and cyber forensic analysts as employees, consultants and contractors.” Id.

Ramsland does not disclose, however, who these unidentified “experts” are, which

of them were utilized, the sources of data they relied upon, the manner in which

they performed whatever work they might have done, and in what way Ramsland,

in turn, relied on that work to prepare his own report. Id.; Bowers v. Norfolk S.

Corp., 300 Fed. App’x 700, 703 (11th Cir. 2008).

Instead, Ramsland appears to be parroting analyses from other unidentified

individuals who claim to possess expertise that he does not. This alone is more than

sufficient to exclude his report. See Redmond v. City of East Point, Georgia, No.

1:00-CV-2492-WEJ, 2004 WL 6060552, at *15 (N.D. Ga. Mar. 29, 2004) (noting

that, under Daubert, “[a] scientist, however well credentialed he may be, is not

permitted to be the mouthpiece of a scientist in a different specialty”).

Even if Ramsland were qualified (and he assuredly is not), his report is

inadmissible because it utterly fails to disclose the data or methodology he (or

others) used, as well as the bases for his (or others’) analyses and conclusions. See

Frazier, 387 F.3d at 1264-65. Indeed, the report can be searched in vain for

Ramsland’s data sources, the statistical analyses conducted, margins of error, or

virtually anything that might suggest serious scholarly or expert analysis.

And, to the extent any methodology can be discerned from the scant

information in the report, that methodology is unreliable. McDowell, 392 F.3d at

1298. The proffered opinions are therefore inadmissible. See Fed. R. Evid. 702.

For example, Ramsland references a “regression analysis” used to “develop

a model/equation to predict in any county what percentage of vote could reasonably

be expected to go to candidate Biden,” noting that the model does a “good job”

predicting Biden’s percentage of votes in “most counties.” Ramsland Decl. at ¶ 8.

But Ramsland fails to describe that “regression analysis,” or the “model/equation”

developed from it. He is remarkably silent as to the inputs for the regression

analysis, the method itself, any assumptions, the predictive findings, and the error

rate. He claims that the undescribed model does a “good job” predicting Biden’s

percentage of votes in “most counties,” but nothing more is provided: How accurate

is a “good job”? How many counties is “most counties”? This isn’t even close to

an appropriate or reliable statistical analysis.

Similarly, Ramsland concludes that, in counties that used certain voting

machines or devices, candidate Biden “over-performed” beyond the expected

results using the undisclosed predictive model, resulting in 123,725 votes in

Georgia that are “statistically invalid.” Id. at ¶ 10. He opines that Biden’s

“overperformance” is “highly indicative (and 99.9% statistically significant)

that something strange is occurring with the [voting] machines.” Id. at ¶ 11

(emphasis in original). Again, no details regarding these calculations—including

how it was determined that the results are statistically significant or how statistical

significance of “strangeness” might be measured—are disclosed. The exact type of

“strangeness” at issue is left to the reader’s imagination. Ramsland’s other opinions

suffer from the same issues. See, e.g., id. at ¶ 13 (failing to disclose data or explain

method underlying plot purportedly showing widespread fraud); ¶¶ 15, 18-19

(estimating magnitude of “fraudulent[] and erroneous[]” vote attribution without

providing data or explaining methodology).

Moreover, an examination of the possible methodologies underlying

Ramsland’s opinions reveals deep flaws. As noted in Dr. Rodden’s rebuttal report,

Ramsland relies on “idiosyncratic, non-standard statistical techniques” that are ill-

suited for the analysis he attempts to conduct. Rodden Rep. at 36. Among the many

flaws identified by Dr. Rodden are: (1) inappropriate reliance on a correlation that is driven

primarily by cross-state variation; (2) failure to address causal inference problems

including that Democratic-leaning counties were more likely to adopt Dominion

voting systems; and (3) failure to include fixed effects, which is standard practice in

the type of social science research Ramsland attempted. Id. at 36-43. In short, “the

research design used in the Ramsland report is ill-equipped to detect differences in

vote shares that are caused by use of particular voting systems.” Id. at 46. The

rebuttal report of Kenneth R. Mayer identifies additional errors including, for

example, that the data Ramsland relies on from undisclosed sources does not match

the actual data from the state. Report of Kenneth R. Mayer (“Mayer Rep.”) at 4-5.3

Ramsland’s failure to provide or even describe the methodology underlying

his opinions as well as the lack of reliability in the methodology that can be

3 Kenneth R. Mayer has a Ph.D. in political science from Yale University and is on the faculty of the political science department of the University of Wisconsin-Madison. Mayer Rep. at 2. He has authored articles on election administration and has been qualified as an expert in numerous matters. Id. at 2-3.
ascertained from his report mandate exclusion of Ramsland’s testimony. See, e.g.,

McCorvey v. Baxter Healthcare Corp., 298 F.3d 1253, 1256 (11th Cir. 2002)

(affirming exclusion of testimony where proffered expert did not test or consider

alternatives); McClain v. Metabolife Int’l, Inc., 401 F.3d 1233, 1240 (11th Cir.

2005) (determining it inappropriate to admit expert testimony that was “not

support[ed] . . . with sufficient data or reliable principles” and did not “follow the

basic methodology” used by experts in the relevant field).

C. Braynard is not qualified and his report does not utilize generally
accepted methodology.
Nearly a week after filing their motion, on December 3, Plaintiffs filed a report

from Matthew Braynard. Braynard seeks to offer opinions on the estimated number

of Georgia voters: (1) who received an absentee ballot but did not request one; (2)

who returned an absentee ballot but the state database reflects the voter as not having

returned a ballot; (3) recorded as having voted but who deny voting; (4) who were

not Georgia residents when they voted; (5) who were registered with a postal box

disguised as a residential address; and (6) who voted in multiple states. Report of

Matthew Braynard (“Braynard Rep.”) ECF No. 45-1, at 7-10. Braynard, however,

does not have the appropriate qualifications to opine on these topics, he does not

follow standard methodology in the relevant scientific field, and the survey

underlying several of his opinions is fatally flawed.


Braynard has a Bachelor of Business Administration and a Master of Fine Arts

in “Writing.” Braynard Rep. at Ex. 1. He has worked for, among others, the

Republican National Committee and Donald J. Trump for President. See id.

Braynard does not identify any education or experience in political science,

statistics, or survey design, nor does he list any publications, research projects, or

speaking engagements on those or any other subjects. He has not offered any expert

testimony in court or deposition in the last four years, if ever. Id. at 4. While he has

worked in the data analysis field, including in analysis of voter data, nothing in his

resume indicates education, experience, or knowledge in survey design or statistical

methods in social sciences. Because he lacks the requisite education, training,

experience, knowledge, and skill to offer his opinions, his report should be excluded.

See, e.g., Smith, 770 F. Supp. at 1566; Chapman, 766 F.3d at 1313-14.

Even if Braynard could qualify as an expert in the relevant fields, his report is

unreliable and therefore inadmissible. As more fully explained in the rebuttal report

of Stephen Ansolabehere, “none of the[] claims meets scientific standards” in the

appropriate field and Braynard has “no scientific basis for drawing any inferences

or conclusions from the data presented.”4 Rebuttal Report of Stephen Ansolabehere Regarding Braynard (“Ansolabehere Rep. I”) ¶ 3.

4 Dr. Ansolabehere is the Frank G. Thompson Professor of Government in the Department of Government at Harvard University. Ansolabehere Rep. I at ¶ 10. His areas of expertise include statistical methods in social sciences and survey research methods. Id. at ¶ 12.

Troublingly, none of Braynard’s

estimates are presented with a measure of statistical precision or uncertainty, which

is standard in the field. Id. at ¶¶ 17, 23. Measures of uncertainty such as standard

errors, confidence intervals, or margins of error “are necessary for gauging how

informative estimates are, and what inferences and conclusions may be drawn,” and

“[w]ithout such quantities it is impossible to draw statistical inferences from data.”

Id. at ¶¶ 22-23. Moreover, Braynard’s conclusions are couched as having a

“reasonable degree of scientific certainty,” Braynard Rep. at 7-10, but that phrase is

meaningless in scientific research. Ansolabehere Rep. I at ¶¶ 24-26. As Dr.

Ansolabehere explains, errors in recordkeeping readily account for each of the

claims made in Braynard’s report. See id. at 30-33. Finally, the study on which

several of Braynard’s opinions rely is riddled with errors as more fully explained in

Section III.D below. See id. at ¶¶ 34-68. Braynard’s opinions should be excluded.

See McClain, 401 F.3d at 1240 (determining it inappropriate to admit expert

testimony that was “not supported with sufficient data or reliable principles” and did

not “follow the basic methodology” used by experts in the relevant field).

D. Briggs’ report is built on a faulty foundation and is not helpful.

Briggs has a Ph.D. in Statistics and considers himself a “statistical

consultant.” Declaration of William M. Briggs (“Briggs Decl.”) ECF No. 1-1, at 3,

21. But his report, which purports to quantify the magnitude of “troublesome”5

unreturned absentee ballots, is unreliable because, among other reasons, it rests

entirely on faulty data collected by a fatally flawed survey and fails to account for a

variety of unremarkable reasons for the existence of the so-called “troublesome”

ballots. Additionally, Briggs’ conclusion that there may have been “error[s] of some

kind” for certain ballots does not assist the trier of fact in that Briggs does not

conclude or even suggest that these purported “errors” had the possibility to change

the result of the presidential election in Georgia.6

Briggs’ report is based entirely on data from a survey performed by

Braynard. Id. at 1. Briggs notes at the outset that his analysis “assume[s] survey

respondents are representative and the data is accurate.” Id. at 2. Briggs, however,

offers no explanation as to why it is reasonable for him to assume the data is accurate or the sample size representative. McDowell, 392 F.3d at 1299 (“[S]omething doesn’t become ‘scientific knowledge’ just because it’s uttered by a scientist; nor can an expert’s self-serving assertion that his conclusions were ‘derived by the scientific method’ be deemed conclusive.” (citation omitted)).

5 Briggs categorizes an unreturned absentee ballot as “troublesome” if it is: (1) a ballot sent to a voter who did not request one, or (2) a voted ballot that was returned but not recorded. Briggs Rep. at 1.

6 Briggs’ report includes information relating to multiple states. Though there are errors in the survey methodology and data analysis for the other states, Intervenors focus only on issues relating to Briggs’ analysis of Georgia ballots. See, e.g., Ansolabehere Rep. II at ¶¶ 20-24, 63, 67-74.

As fully described in the Rebuttal Report of Stephen Ansolabehere Regarding

Briggs (“Ansolabehere Rep. II”), Briggs’ report is unreliable. First, the survey used

to collect the data on which Briggs’ opinions are based was flawed because, among

other reasons, it allowed individuals other than the survey “target,” individuals

whose ballots were marked as unreturned, to answer survey questions. This error

contaminates the data, “and is of sufficient magnitude to alter the results

significantly.” Ansolabehere Rep. II at ¶ 51. Second, Braynard’s survey had an

unacceptably low response rate. Braynard was only able to reach 0.4% of the

individuals he sought to interview. Id. at ¶ 39. Put another way, 99.6% of the

individuals targeted by the survey did not respond. Id. This is not an acceptable

response rate. Id. at ¶ 41. Further compounding this issue, without information about

the target population or the responding population, it is impossible to know whether

the responding population is representative and therefore whether there is any

scientific value to the survey. Id. at ¶ 42. Third, Briggs’ report fails to account for

unremarkable reasons, such as late arriving ballots, missing or mismatched signature

rejections, or spoiled or voided ballots, for why returned absentee ballots might not

be recorded or counted. Id. at ¶ 58. These serious issues render the report unreliable

and warrant its exclusion. Chapman, 766 F.3d at 1305-06 (finding a court “is free to

‘conclude that there is simply too great an analytical gap between the data and the

opinion proffered.’”).

Finally, Briggs’ report is not relevant to the question presented to the Court.

Setting aside the problems with the data, Briggs does not opine regarding the exact

nature of the “errors” or how any error would or even could have impacted the

outcome of the election. Plaintiffs claim that “[t]ens of thousands of votes counted

toward Vice President Biden’s final tally were the product of illegality, and physical

and computer-based fraud leading to ‘outright ballot stuffing.’” Pl.’s Mot. for TRO,

ECF No. 6, at 1. Briggs’ report, however, does not speak to these issues and is

therefore not helpful to the Court. The report should be disregarded on this ground

as well. See Fed. R. Evid. 702(a).

E. Watkins is not qualified and his report rests entirely on speculation.

On December 1, Plaintiffs belatedly filed a declaration from Ronald Watkins.

Watkins is a “network and information defense analyst and a network security

engineer” with nine years of experience. Declaration of Ronald Watkins (“Watkins

Decl.”) ECF No. 31-1, at ¶ 5. He was the administrator of 8chan, an

anonymous online forum, and administered its successor forum, 8kun.

Chris Francescani, The men behind QAnon (Sept. 22, 2020), ABC NEWS,

https://abcnews.go.com/Politics/men-qanon/story?id=73046374. Watkins seeks to

provide testimony to “alert the public and let the world know the truth about actual

voting tabulation software designed . . . to facilitate digital ballot stuffing.” Watkins

Decl. at ⁋ 4. While the declaration is not labeled as an expert report (though Watkins

claims he is an expert) and it is missing key components of an expert report (for

example, Watkins’ CV), to the extent Plaintiffs seek to offer Watkins as an expert in

support of their motion, it should be excluded.

Plaintiffs present no evidence that Watkins is qualified to offer any opinion

regarding election software. Watkins’ stated experience—as a “network and

information defense analyst and a network security engineer”—does not qualify him

to offer testimony regarding purported vulnerabilities in voting systems. See id. at ¶ 5.

Moreover, it is not clear whether Watkins has ever used or even examined the

software at issue or whether he has any experience in election administration.

Second, Watkins’ opinions are not helpful. His declaration appears to consist

entirely of unsupported speculation regarding purported vulnerabilities in election

software based on a review of publicly available documents including user manuals.

See, e.g., id. at ¶¶ 6-13. If it wishes, the Court can review these public documents

itself; Watkins’ speculation is not helpful. Such testimony should be disregarded.

See Frazier, 387 F.3d at 1260; see also Rider v. Sandoz Pharm. Corp., 295 F.3d

1194, 1202 (11th Cir. 2002) (“caution[ing courts] not to admit speculation,

conjecture, or inference that cannot be supported by sound scientific principles”).

F. Overholt discloses no relevant qualifications and his report contains serious errors.

Plaintiffs recently filed the Affidavit of Benjamin A. Overholt (“Overholt

Aff.”). Overholt seeks to offer opinions on whether “anomalies existed that could

change the outcome of the presidential race in the 2020 General Election” based on

a review of public data from the Georgia Secretary of State. Overholt Aff., ECF No.

45-3, at ¶ 4. As with many of the other proffered experts, Overholt provides only a

cursory explanation of his credentials and his report is riddled with errors.

Overholt states that he has a Ph.D. in Applied Statistics and Research

Methods, that he is an “active federal civil servant,” and that he has spent time reviewing

“election results” for the Civil Rights Division of the Justice Department. Id. at ¶ 2.

He does not further describe his education, experience, or other credentials, or how

his prior work relates, if at all, to the work he performed for this matter.

He does not appear to have any experience with Georgia elections or analyzing

Georgia election data. The only other information that Overholt provides is his

assertion that he is qualified “[b]ased on [his] experience and because of [his]


personal interest in the matter.” Id. at ¶ 4. This is patently insufficient to qualify

Overholt to offer opinions on whether there are “anomalies” in Georgia election data

that could change the outcome of the 2020 presidential election, and his opinions

should not be considered. See, e.g., Smith, 770 F. Supp. at 1566; Chapman, 766 F.3d

at 1313-14.

Even if the Court finds Overholt qualified, his affidavit contains serious

errors. As Intervenors’ rebuttal expert Dr. Mayer explains, “the claims made by . . . Overholt are unsupported and incorrect.” Mayer Rep. at 1. Overholt does not know

“even the basics of . . . election administration or how elections are actually

conducted in Georgia or how election practices changed in 2020.” Id. Moreover,

Overholt’s report contains “inaccurate definitions of crucial terms (such as what a

‘spoiled’ ballot is), make[s] completely unsubstantiated claims based on pure

speculation and personal opinion, and reach[es] unsupported and incorrect

inferences about what the data show.” Id. For example, among other issues,

Overholt’s claim that there are 500,000 missing votes is completely wrong. See id.

at ¶ 20. There are, in fact, no missing votes; Overholt used the absentee voter request

file for his analysis, which is not a record of all individuals who voted in the 2020

election but instead a record of all absentee ballot requests. Mayer Rep. at 9. This

failure to understand the data being analyzed is a serious error and is one of many

examples demonstrating the unreliability of Overholt’s report. See also Mayer Rep. at 6-

9. Overholt’s report should be excluded. See McClain, 401 F.3d at 1240; Rider, 295

F.3d at 1202.

G. Quinnell and Young are not qualified and their declarations are
unreliable.

The Court should exclude the declarations of Eric Quinnell and S. Stanley

Young. Neither is qualified to offer opinions on voting patterns in Georgia, and,

unsurprisingly, the opinions they do offer are not reliable. Two reports authored by

Quinnell (the second in collaboration with S. Stanley Young) have been submitted

in this matter. The first purports to analyze the results of the 2020 general election

in Fulton County. Declaration of Eric Quinnell (“Quinnell Decl.”) ECF No. 1-27.

The second seeks to corroborate Quinnell’s original findings. Declarations of Eric

Quinnell and S. Stanley Young (“Quinnell/Young Decl.”) ECF No. 45-2.

As pointed out by Intervenors’ rebuttal expert Dr. Rodden, Quinnell’s

methodologies are nonsensical, and his data analysis is flawed and meaningless.

Rodden Rep. at 7-8. Quinnell’s novel opinion is that election results should display

a normal distribution—a bell curve—and any departure from this indicates nefarious

activity, such as voter fraud. Quinnell Decl. at ¶ 18. As Dr. Rodden explains,

academically accepted literature dating back decades (as well as common sense)

confirms that partisan preferences are not uniformly distributed. Rodden Rep. at 9-11. More frequently—as simply digesting the news over the course of the last few

decades would confirm—relevant social groups (such as young people, racial

minorities, or college graduates) are clustered and it is typical to see skewed voting

distributions. Id. at 11.

Quinnell’s second report is equally flawed. In that report, Quinnell and Young

sought to corroborate Quinnell’s earlier findings and identify what they characterize

as “anomalies in the voting patterns or new inferences that may explain some

existing results.” Quinnell/Young Decl. at ¶ 6. Primarily, they assert that their data

shows that nearly all of the absentee ballots for Trump were received by November

4, while the vast majority of absentee votes for Biden were received on or after

November 5, resulting in a distribution for Biden that “mathematically represents a

peculiar, non-linear external constraint unexplainable and unrelated to the arrival

and counting of absentee ballots.” Id. at ¶ 13.

Rather than corroborate Quinnell’s earlier report, the second report merely

compounds its errors. As Dr. Rodden explains in detail in his supplemental report

addressing the Quinnell/Young Declaration (“Rodden Supp. Rep.”), the declaration

utilizes unofficial data that may not reflect the running total of votes. Rodden Supp.

Rep. at 3-4. The report is also riddled with numerous unexplained, unsubstantiated,

and questionable assumptions built into their data and analysis (which, as before, is

not provided). Rodden Supp. Rep. at 4-7. In addition, the Quinnell/Young

Declaration claims that there is a “pattern” that represents a worrying anomaly in

voting patterns. Quinnell/Young Decl. at ¶ 6. But this is nonsense. As Dr. Rodden

explains, this “pattern” they purportedly discovered, even if it did exist, is entirely

consistent with what could happen naturally and is far from being anomalous.

Rodden Supp. Rep. at 8-9. This is because, at a very high level, there are many

precincts in Fulton County that are small and/or have very few absentee votes for

Trump. Id. at 9-11.

In addition, Quinnell/Young make fundamental errors in their analysis. For

instance, they note that “[a]ccording to the rules established in Georgia for the 2020

election, absentee ballots were allowed to be opened and counted for a full 3 weeks

leading up to and including election day.” Quinnell/Young Decl. at ¶ 20. But, as was

widely reported, this is false; Georgia election workers were only permitted to open

and scan—but not count—absentee ballots beginning 15 days before election day.7 In the

analysis they conducted, where every day impacts the distribution, such a gross error speaks to their lack of familiarity with the subject matter. The Quinnell and Young declarations should be excluded.

7 See, e.g., Mark Niesse, Absentee ballots can begin to be opened, but not counted, in Georgia, The Atlanta Journal-Constitution (Oct. 19, 2020), https://www.ajc.com/politics/absentee-ballots-can-begin-to-be-opened-but-not-counted-in-georgia/BRBLHVUJOFHB5OEHAMZV34HPDA/.

H. It is impossible to assess the qualifications of the unnamed individual known as “Spyder” and his declaration consists of nothing more than speculation.
In support of their motion, Plaintiffs cite the “expert testimony” of an

individual whose name is redacted but is referred to by Plaintiffs as “Spyder.” See

Mot. to File Under Seal, ECF 5 at 9. Spyder claims to be an “electronic intelligence

analyst . . . with experience gathering SAM missile system electronic intelligence”

and “extensive experience as a white hat hacker used by some of the top election

specialists in the world.” Declaration of “Spyder” (“Spyder Decl.”) ECF No. 1-9, ¶

2. Other than claiming to work for “top election specialists,” Spyder does not

disclose whether s/he has any experience with election administration or the

companies, software and machines used by states to conduct elections. Because

Spyder is not named, it is impossible to verify or even research what Spyder’s

credentials may be. On the record before the Court, Spyder cannot qualify as an

expert given his/her lack of relevant education, training, experience, knowledge, and

skill. See, e.g., Smith, 770 F. Supp. at 1566.

Spyder’s declaration should also be disregarded because it relies on nothing

more than speculation and s/he uses no discernable methodology in reaching his/her

conclusions. See Frazier, 387 F.3d at 1260; see also Rider, 295 F.3d at 1202; Greater

Hall Temple Church of God v. S. Mutual Church Ins. Co., 820 Fed. App’x 915, 919

(11th Cir. 2020). Following a dizzying array of screenshots, Spyder comes to the

startling conclusion that “Dominion Voter Systems and Edison Research” were

“accessible” and “compromised by rogue actors” and that these companies

“intentionally provided access to their infrastructure in order to monitor and

manipulate elections, including the most recent one in 2020.” Spyder Decl. ¶ 21. It

does not appear that Spyder applied any methodology other than a series of online

and other searches in reaching this conclusion, which appears to rest entirely on

speculation regarding purported security issues and connections between various

individuals and entities. See McClain, 401 F.3d at 1240. And, in any event, s/he does

not opine that any alleged interference changed or had the ability to change the

trajectory of the election, making his/her opinions unhelpful to the Court. See

Chapman, 766 F.3d at 1304, 1306-07; Fed. R. Evid. 702(a). Spyder’s declaration

should not be considered.

IV. CONCLUSION

For the reasons set forth above, Intervenors respectfully request that the

Court exclude these “experts” and their reports in their entirety.

Dated: December 5, 2020 Respectfully submitted,

Adam M. Sparks
Halsey G. Knapp, Jr.
Joyce Gist Lewis
Susan P. Coppedge
Adam M. Sparks
KREVOLIN AND HORST, LLC
One Atlantic Center
1201 W. Peachtree Street, NW, Ste. 3250
Atlanta, GA 30309
Telephone: (404) 888-9700
Facsimile: (404) 888-9577
hknapp@khlawfirm.com
jlewis@khlawfirm.com
coppedge@khlawfirm.com
sparks@khlawfirm.com

Marc E. Elias*
Amanda R. Callais*
PERKINS COIE LLP
700 Thirteenth Street, N.W., Suite 800
Washington, D.C. 20005-3960
Telephone: (202) 654-6200
Facsimile: (202) 654-6211
MElias@perkinscoie.com
ACallais@perkinscoie.com

Kevin J. Hamilton*
Amanda J. Beane*
PERKINS COIE LLP
1201 Third Avenue, Suite 4900
Seattle, WA 98101-3099
Telephone: (206) 359-8000
Facsimile: (206) 359-9000
KHamilton@perkinscoie.com
ABeane@perkinscoie.com
Matthew J. Mertens*
Georgia Bar No: 870320
PERKINS COIE LLP
1120 NW Couch Street, 10th Floor
Portland, OR 97209
Telephone: (503) 727-2000
Facsimile: (503) 727-2222
MMertens@perkinscoie.com

Counsel for Intervenors-Defendants

*Admitted Pro Hac Vice

IN THE UNITED STATES DISTRICT COURT
FOR THE NORTHERN DISTRICT OF GEORGIA
ATLANTA DIVISION

CORECO JA’QAN PEARSON, VIKKI TOWNSEND CONSIGLIO, GLORIA KAY GODWIN, JAMES KENNETH CARROLL, CAROLYN HALL FISHER, CATHLEEN ALSTON LATHAM, and BRIAN JAY VAN GUNDY,

Plaintiffs,

v.

BRIAN KEMP, in his official capacity as Governor of Georgia, BRAD RAFFENSPERGER, in his official capacity as Secretary of State and Chair of the Georgia State Election Board, DAVID J. WORLEY, in his official capacity as a member of the Georgia State Election Board, REBECCA N. SULLIVAN, in her official capacity as a member of the Georgia State Election Board, MATTHEW MASHBURN, in his official capacity as a member of the Georgia State Election Board, and ANH LE, in her official capacity as a member of the Georgia State Election Board,

Defendants.

CIVIL ACTION FILE NO. 1:20-cv-04809-TCB

CERTIFICATE OF COMPLIANCE
I hereby certify that the foregoing document has been prepared in

accordance with the font type and margin requirements of L.R. 5.1, using font type

of Times New Roman and a point size of 14.

Dated: December 5, 2020. Adam M. Sparks


Counsel for Intervenor-Defendants
IN THE UNITED STATES DISTRICT COURT
FOR THE NORTHERN DISTRICT OF GEORGIA
ATLANTA DIVISION

CORECO JA’QAN PEARSON, VIKKI TOWNSEND CONSIGLIO, GLORIA KAY GODWIN, JAMES KENNETH CARROLL, CAROLYN HALL FISHER, CATHLEEN ALSTON LATHAM, and BRIAN JAY VAN GUNDY,

Plaintiffs,

v.

BRIAN KEMP, in his official capacity as Governor of Georgia, BRAD RAFFENSPERGER, in his official capacity as Secretary of State and Chair of the Georgia State Election Board, DAVID J. WORLEY, in his official capacity as a member of the Georgia State Election Board, REBECCA N. SULLIVAN, in her official capacity as a member of the Georgia State Election Board, MATTHEW MASHBURN, in his official capacity as a member of the Georgia State Election Board, and ANH LE, in her official capacity as a member of the Georgia State Election Board,

Defendants.

CIVIL ACTION FILE NO. 1:20-cv-04809-TCB

CERTIFICATE OF SERVICE
I hereby certify that on December 5, 2020, I electronically filed the

foregoing with the Clerk of the Court using the CM/ECF system, which will send a

notice of electronic filing to all counsel of record.

Dated: December 5, 2020. Adam M. Sparks


Counsel for Intervenor-Defendants
IN THE UNITED STATES DISTRICT COURT
FOR THE NORTHERN DISTRICT OF GEORGIA
ATLANTA DIVISION

CORECO JA’QAN PEARSON, VIKKI TOWNSEND CONSIGLIO, GLORIA KAY GODWIN, JAMES KENNETH CARROLL, CAROLYN HALL FISHER, CATHLEEN ALSTON LATHAM, and BRIAN JAY VAN GUNDY,

Plaintiffs,

v.

BRIAN KEMP, in his official capacity as Governor of Georgia, BRAD RAFFENSPERGER, in his official capacity as Secretary of State and Chair of the Georgia State Election Board, DAVID J. WORLEY, in his official capacity as a member of the Georgia State Election Board, REBECCA N. SULLIVAN, in her official capacity as a member of the Georgia State Election Board, MATTHEW MASHBURN, in his official capacity as a member of the Georgia State Election Board, and ANH LE, in her official capacity as a member of the Georgia State Election Board,

Defendants.

CIVIL ACTION FILE NO. 1:20-cv-04809-TCB

ATTORNEY DECLARATION OF AMANDA R. CALLAIS


I, Amanda R. Callais, state as follows:

1. My name is Amanda R. Callais. I am over 18 years of age and have

personal knowledge of the below facts, which are true and accurate to the best of

my knowledge and belief.

2. I am an attorney with the firm of Perkins Coie LLP and counsel for

Intervenor-Defendants the Democratic Party of Georgia, Inc., DSCC, and DCCC

(“Intervenors”). I make this declaration in support of Intervenors’ Opposition to

Plaintiffs’ Motion for Emergency Injunctive Relief.

3. Attached hereto as Exhibit 1 is a true and correct copy of the expert

report of Dr. Stephen Ansolabehere responding to Matthew Braynard.

4. Attached hereto as Exhibit 2 is a true and correct copy of the expert

report of Dr. Stephen Ansolabehere responding to Dr. William Briggs.

5. Attached hereto as Exhibit 3 is a true and correct copy of the expert

report of Dr. Jonathan Rodden responding to Russell Ramsland, Dr. Eric Quinnell,

and Dr. Shiva Ayyadurai.

6. Attached hereto as Exhibit 4 is a true and correct copy of the expert

report of Dr. Kenneth R. Mayer responding to Russell Ramsland and Dr. Benjamin

Overholt.

7. Attached hereto as Exhibit 5 is a true and correct copy of the expert

report of Dr. Jonathan Rodden and William Marble responding to Dr. Eric

Quinnell and Dr. S. Stanley Young.

Dated: December 5, 2020 Amanda R. Callais


Amanda R. Callais*
PERKINS COIE LLP
700 Thirteenth Street NW, Suite 800
Washington, D.C. 20005
Telephone: (202) 654-6200
Facsimile: (202) 654-6211
acallais@perkinscoie.com
Counsel for Intervenor-Defendants
*Admitted Pro Hac Vice

IN THE UNITED STATES DISTRICT COURT
FOR THE NORTHERN DISTRICT OF GEORGIA
ATLANTA DIVISION

CORECO JA’QAN PEARSON, VIKKI TOWNSEND CONSIGLIO, GLORIA KAY GODWIN, JAMES KENNETH CARROLL, CAROLYN HALL FISHER, CATHLEEN ALSTON LATHAM, and BRIAN JAY VAN GUNDY,

Plaintiffs,

v.

BRIAN KEMP, in his official capacity as Governor of Georgia, BRAD RAFFENSPERGER, in his official capacity as Secretary of State and Chair of the Georgia State Election Board, DAVID J. WORLEY, in his official capacity as a member of the Georgia State Election Board, REBECCA N. SULLIVAN, in her official capacity as a member of the Georgia State Election Board, MATTHEW MASHBURN, in his official capacity as a member of the Georgia State Election Board, and ANH LE, in her official capacity as a member of the Georgia State Election Board,

Defendants.

CIVIL ACTION FILE NO. 1:20-cv-04809-TCB

CERTIFICATE OF COMPLIANCE
I hereby certify that the foregoing document has been prepared in

accordance with the font type and margin requirements of L.R. 5.1, using font type

of Times New Roman and a point size of 14.

Dated: December 5, 2020. Adam M. Sparks


Counsel for Intervenor-Defendants
IN THE UNITED STATES DISTRICT COURT
FOR THE NORTHERN DISTRICT OF GEORGIA
ATLANTA DIVISION

CORECO JA’QAN PEARSON, VIKKI TOWNSEND CONSIGLIO, GLORIA KAY GODWIN, JAMES KENNETH CARROLL, CAROLYN HALL FISHER, CATHLEEN ALSTON LATHAM, and BRIAN JAY VAN GUNDY,

Plaintiffs,

v.

BRIAN KEMP, in his official capacity as Governor of Georgia, BRAD RAFFENSPERGER, in his official capacity as Secretary of State and Chair of the Georgia State Election Board, DAVID J. WORLEY, in his official capacity as a member of the Georgia State Election Board, REBECCA N. SULLIVAN, in her official capacity as a member of the Georgia State Election Board, MATTHEW MASHBURN, in his official capacity as a member of the Georgia State Election Board, and ANH LE, in her official capacity as a member of the Georgia State Election Board,

Defendants.

CIVIL ACTION FILE NO. 1:20-cv-04809-TCB

CERTIFICATE OF SERVICE
I hereby certify that on December 5, 2020, I electronically filed the

foregoing with the Clerk of the Court using the CM/ECF system, which will send a

notice of electronic filing to all counsel of record.

Dated: December 5, 2020. Adam M. Sparks


Counsel for Intervenor-Defendants
Response to Matthew Braynard Expert Report

Stephen Ansolabehere

December 4, 2020
I. Statement of Inquiry

1. I was asked to evaluate the expert report of Matthew Braynard dated November 20,

2020, and to determine whether the claims made therein and the related data collection supporting

them meet scientific standards for reliability and accuracy in my fields of research, which include

survey research and design, data science, and election analysis.

II. Summary

2. Matthew Braynard’s report makes six Claims:

(1) 18.39 percent of registered voters of Georgia who were sent but did not return absentee

ballots did not request absentee ballots;

(2) 33.29 percent of voters who were sent absentee ballots but were not recorded as having

returned absentee ballots stated that they did mail their ballots back;

(3) 1.53 percent of registered voters of Georgia who changed addresses before the election

and were recorded as having voted stated that they did not cast a vote;

(4) 20,312 absentee voters were not residents of the State of Georgia when they voted;

(5) 1,043 early and absentee ballots were cast by people who were registered at post office

box addresses; and

(6) 234 Georgians voted in multiple states.

3. None of these claims meets scientific standards of my fields of research, including

survey research, political science, statistics and data sciences. There is no scientific basis for

drawing any inferences or conclusions from the data presented. None of the estimates are

presented with statistical measures that meet standards for evaluating evidence.
4. Each of the claims is couched with the phrase “to a reasonable degree of scientific

certainty.” This phrase is meaningless in scientific journals and disciplines. The National Institute

of Standards and Technology has warned against use of such a phrase by experts in legal

proceedings and concluded that “the term ‘reasonable degree of scientific [or discipline] certainty’

has no place in the judicial process.” It has no place in the scientific research process.

5. The survey on which Claims (1) and (2) are based is riddled with errors and biases that

render it invalid for purposes of drawing inferences about the quantities at issue here. There are

data errors in the topline summaries of the survey data and obvious errors in the design of the

survey that produced the results.1 Specifically, individuals who may not have been the correct

person were allowed to answer the survey. Further, registration-based surveys such as this rely on

matching phone numbers to registration records, a process that is prone to error. The results

observed by Mr. Braynard can easily be explained by mismatches of phone numbers to voter

records in conducting the survey.

6. The survey used to support Claims (1) and (2) and the survey used to support Claim (3)

have unacceptably low response rates, and no effort is made to correct for non-response bias. Less

than one percent of people who were targeted for contact ultimately responded to these surveys.

The report naively extrapolates from the data, assuming that the 99 percent of people who could

not be contacted or who refused to participate are just like the 1 percent who did participate. In

my professional experience, data with such low response rates are either not accepted as valid or

must be proven to be representative and accurate before they are relied on to draw scientifically

valid inferences and conclusions. The report provides no information about the descriptive characteristics of the sample or the population studied and provides no assessment of whether the data are in fact representative or accurate.

1 “Topline” data generally represents a summary of the figures collected and relied upon in a survey or study.

7. Claims (3), (4), and (6) are based on list matching. The list matching methodologies

are not described adequately. The lack of a complete description of list matching methodology

fails to meet scientific standards of transparency and data presentation. What little information is

presented suggests that it is based on methodologies that have been debunked by statisticians and

by the US Civil Rights Commission for producing large numbers of incorrect matches.

8. Claim (5) is based on analysis of addresses. This analysis does not meet scientific

standards of my fields of research. The statistics that are presented reveal that there is no

uniformity of coding and assessment and that the results are not reliable.

9. Claim (6) is asserted but there is no further information in Mr. Braynard’s report to

support it beyond the claim.

III. Qualifications

10. I am the Frank G. Thompson Professor of Government in the Department of

Government at Harvard University in Cambridge, MA. Formerly, I was an Assistant Professor at

the University of California, Los Angeles, and I was Professor of Political Science at the

Massachusetts Institute of Technology, where I held the Elting R. Morison Chair and served as

Associate Head of the Department of Political Science. I am the Principal Investigator of the

Cooperative Congressional Election Study (CCES), a survey research consortium of over 250

faculty and student researchers at more than 50 universities, directed the Caltech/MIT Voting

Technology Project from its inception in 2000 through 2004, and served on the Board of Overseers

of the American National Election Study from 1999 to 2013. I am a consultant to CBS News’
Election Night Decision Desk. I am a member of the American Academy of Arts and Sciences

(inducted in 2007). My curriculum vitae is attached to this report as Appendix B.

11. I have worked as a consultant to the Brennan Center in the case of McConnell v. FEC,

540 U.S. 93 (2003). I have testified before the U.S. Senate Committee on Rules, the U.S. Senate

Committee on Commerce, the U.S. House Committee on Science, Space, and Technology, the

U.S. House Committee on House Administration, and the Congressional Black Caucus on matters

of election administration in the United States. I filed an amicus brief with Professors Nathaniel

Persily and Charles Stewart on behalf of neither party to the U.S. Supreme Court in the case of

Northwest Austin Municipal Utility District Number One v. Holder, 557 U.S. 193 (2009) and an

amicus brief with Professor Nathaniel Persily and others in the case of Evenwel v. Abbott, 136 S. Ct. 1120 (2016). I have served as a testifying expert for the Gonzales intervenors in State of Texas v.

United States before the U.S. District Court in the District of Columbia (No. 1:11-cv-01303); the

Rodriguez plaintiffs in Perez v. Perry, before the U. S. District Court in the Western District of

Texas (No. 5:11-cv-00360); for the San Antonio Water District intervenor in LULAC v. Edwards

Aquifer Authority in the U.S. District Court for the Western District of Texas, San Antonio

Division (No. 5:12cv620-OLG); for the Department of Justice in State of Texas v. Holder, before

the U.S. District Court in the District of Columbia (No. 1:12-cv-00128); for the Guy plaintiffs in

Guy v. Miller in U.S. District Court for Nevada (No. 11-OC-00042-1B); for the Florida

Democratic Party in In re Senate Joint Resolution of Legislative Apportionment in the Florida

Supreme Court (Nos. 2012-CA-412, 2012-CA-490); for the Romo plaintiffs in Romo v. Detzner

in the Circuit Court of the Second Judicial Circuit in Florida (No. 2012 CA 412); for the

Department of Justice in Veasey v. Perry, before the U.S. District Court for the Southern District

of Texas, Corpus Christi Division (No. 2:13cv00193); for the Harris plaintiffs in Harris v.
McCrory in the U. S. District Court for the Middle District of North Carolina (No.

1:2013cv00949); for the Bethune-Hill plaintiffs in Bethune-Hill v. Virginia State Board of

Elections in the U.S. District Court for the Eastern District of Virginia (No. 3: 2014cv00852); for

the Fish plaintiffs in Fish v. Kobach in the U.S. District Court for the District of Kansas ( No. 2:16-

cv-02105-JAR); and for intervenors in Voto Latino, et al. v. Hobbs, in the U.S. District Court for

the District of Arizona (No. 2:19-cv-05685-DWL). I served as an expert witness and filed an

Affidavit in the North Carolina State Board of Elections hearings regarding absentee ballot fraud

in the 2018 election for Congressional District 9 in North Carolina.

12. My areas of expertise include American government, with particular expertise in

electoral politics, representation, and public opinion, as well as statistical methods in social

sciences and survey research methods. I have authored numerous scholarly works on voting

behavior and elections, the application of statistical methods in social sciences, legislative politics

and representation, and distributive politics. This scholarship includes articles in such academic

journals as the Journal of the Royal Statistical Society, American Political Science Review,

American Economic Review, the American Journal of Political Science, Legislative Studies

Quarterly, Quarterly Journal of Political Science, Electoral Studies, and Political Analysis. I have

published articles on issues of election law in the Harvard Law Review, Texas Law Review,

Columbia Law Review, New York University Annual Survey of Law, and Election Law Journal,

for which I am a member of the editorial board. I have coauthored three scholarly books on

electoral politics in the United States, The End of Inequality: Baker v. Carr and the Transformation

of American Politics, Going Negative: How Political Advertising Shrinks and Polarizes the

Electorate, and The Media Game: American Politics in the Media Age. I am coauthor, with Benjamin Ginsberg and Ken Shepsle, of American Government: Power and Purpose. I am being
compensated at the rate of $550 an hour. My compensation is not dependent on my conclusions in

any way.

IV. Sources

13. I have relied on the expert report of Matthew Braynard in this case.

14. I have relied on the report of Dr. William Briggs in King v. Whitmer in the District

Court in the Eastern District of Michigan (No. 2:20-cv-13134). The Topline Tables appended to

Dr. Briggs’ report provide information on the response rates, design and implementation of, and

responses to the surveys used in Claims (1) and (2) of Matthew Braynard’s report. This information

was not disclosed in Mr. Braynard’s report in this case.

15. I have relied on the Election Assistance Commission, “Election Administration and Voting Survey (EAVS) for 2018,” https://www.eac.gov/research-and-data/studies-and-reports. I

present data from 2018 because it is the most recent federal election for which data on absentee

and permanent absentee voting is available. The 2018 data are instructive about the magnitude of

permanent absentee voters and of the magnitude of unreturned, late, rejected, and spoiled absentee

ballots. The 2020 data are not yet reported.

V. Findings

16. Matthew Braynard’s report makes six Claims:

(1) 18.39 percent of registered voters of Georgia who were sent absentee but did not return

absentee ballots did not request absentee ballots;

(2) 33.29 percent of voters who were sent absentee ballots but were not recorded as having

returned absentee ballots stated that they did mail their ballots back;
(3) 1.53 percent of registered voters of Georgia who changed addresses before the election

and were recorded as having voted stated that they did not cast a vote;

(4) 20,312 absentee voters were not residents of the State of Georgia when they voted;

(5) 1,043 early and absentee ballots were cast by people who were registered at post office

box addresses; and

(6) 234 Georgians voted in multiple states.

17. There is no scientific basis for reaching any of these conclusions. Mr. Braynard prefaces each claim with the phrase “to a reasonable degree of scientific certainty,” a phrase that the National Institute of Standards and Technology concludes has no scientific meaning and which, in my experience as a journal editor, is not acceptable in the fields of survey research, data science, or political science. Mr. Braynard presents no standard errors or confidence intervals, which are necessary to gauge how informative estimates are.

18. The estimates in Claims (1), (2), and (3) are extrapolations to a population of 138,000

registered voters from a few hundred responses to surveys that have design flaws that make the

survey unrepresentative of the population that is being studied.

19. The basic information about these surveys is never disclosed by Mr. Braynard, in

violation of standards of transparency set by the American Association for Public Opinion Research. From what information I have found in the reports of Dr. William Briggs about one of the surveys, it is riddled with questionnaire design flaws and spreadsheet errors, indicative of quality control failures in the conduct of the survey, which render unreliable the calculation of any estimates using it.


20. The surveys have unacceptably high rates of non-response. In the state of Georgia the response rate to this survey was only 0.4 percent, meaning that, of the entire set of people that Mr. Braynard set out to study, 99.6 percent could not be reached or would not answer the survey.

19. An error in the branching of the survey questionnaire allows people who were not the person that the survey targeted to answer Question 2 (did you request an absentee ballot?). More people were improperly asked Question 2 (255) than responded that they did not request an absentee ballot (128).

20. Claims (3), (4) and (6) are based on list matching and record linkage. There is no

disclosure of the methods used, especially which fields are used. Recent studies have found

millions of errors in list matching methodologies using first name, last name, and date of birth.

21. The design of the survey and the resulting claims fail to account for features of absentee

voting and registration in Georgia. The surveys do not account for the fact that Georgia has

“rollover” absentees, which allow people to sign up to have ballots sent to them without requesting

them. According to estimates of the Georgia Office of the Secretary of State that were reported in

the media, there were approximately 580,000 rollover ballots in 2020. That figure far exceeds the

number of “unrequested” absentees in Mr. Braynard’s report. The surveys do not separate rollover

voters from other absentee voters. Moreover, many absentee ballots arrive late or are rejected for

various reasons (e.g., lack of signatures). None of the Claims made by Mr. Braynard, then, are

supported by the data or analyses or meet standards of scientific inference.

A. This report is not up to scientific standards of evidence.

i. The report offers no conclusions based on scientifically accepted standards of evidence.

22. Scientific standards in survey research, statistics and data science, and political science require that when researchers present statistics and estimates, as Mr. Braynard does in each
of his claims, the estimates be accompanied by statistical measures of the researcher’s confidence

or uncertainty about the estimates. Most frequently, researchers present a standard error,

confidence interval, or margin of error. Such quantities are necessary for gauging how informative

estimates are, and what inferences and conclusions may be drawn. Survey research is not accepted

for publication without such information.

23. Mr. Braynard’s report offers no measures of statistical precision or uncertainty in

association with any of the estimates presented in Claims 1, 2, 3, 4, 5, or 6. Without such quantities

it is impossible to draw statistical inferences from data. And, without such measures of the amount

of information in or uncertainty about estimates, the estimates are not accepted in scientific

research journals and publications as scientific evidence.
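For illustration only, the following short Python sketch shows the kind of uncertainty measure that is standard practice and that the report omits: a normal-approximation 95 percent confidence interval for a survey proportion. The figures used (128 “No” answers out of 736 responses to Question 2) are taken from the toplines discussed later in this report; the interval itself is meaningful only under random sampling, an assumption the report never establishes.

    import math

    def proportion_ci(successes, n, z=1.96):
        # Normal-approximation 95% confidence interval for a proportion
        p = successes / n
        se = math.sqrt(p * (1 - p) / n)  # standard error of the estimated proportion
        return p, max(0.0, p - z * se), min(1.0, p + z * se)

    p, lo, hi = proportion_ci(128, 736)
    print(f"estimate = {p:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")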

ii. The report couches its conclusions as having “Reasonable Scientific Certainty,” which is meaningless in scientific research.

24. The only expression of a foundation for the conclusion for each of the six factual claims

made in Braynard’s report is the following assertion: “it is my opinion, to a reasonable degree of

scientific certainty.”

25. The expression “a reasonable degree of scientific certainty” is not a standard by which

scientific inferences and conclusions are made. It is not used in any of the journals in which I have published, which include the top journals in the fields of statistics, political science, and

economics, or journals on whose editorial boards I have served or have served as an editor,

including the Harvard Data Science Review and Public Opinion Quarterly.

26. The standard-setting bodies that provide guidance to researchers have concluded that “a reasonable degree of scientific certainty” should not be used to characterize scientifically drawn conclusions or inferences in a judicial setting. Researchers across all fields follow the guidance on the use of terminology from their own professions and from standard-setting institutions, such as the National Institute of Standards and Technology of the Department of Commerce. The National Commission on Forensic Science of the National Institute of Standards and Technology, in its report “Testimony Using the Term ‘Reasonable Degree of Scientific Certainty,’” acknowledges that “The legal community should recognize that medical professionals and other scientists do not routinely use ‘to a reasonable degree of scientific certainty’ when expressing conclusions outside of the courts. Such terms have no scientific meaning and may mislead factfinders [jurors or judges] when deciding whether guilt has been proved beyond a reasonable doubt.” The NIST report concludes, “the term ‘reasonable degree of scientific [or discipline] certainty’ has no place in the judicial process.” 2

iii. There is no disclosure of the methodologies and data used in this report.

27. Mr. Braynard does not disclose sampling methodologies, sample sizes, questionnaires,

or response and breakoff rates. Mr. Braynard states that he conducted “randomized” surveys, but

the topline tables appended to Dr. Briggs’ report indicate, in my professional assessment, that at

the outset of the studies all people in the target population could have been included in the study

and that no randomization in fact occurred. Mr. Braynard does not disclose the number of correctly

matched phone numbers and the number of wrong numbers, though the toplines appended to Dr. Briggs’ report reveal some statistics related to wrong numbers and records for which no phone number was available.

28. Mr. Braynard does not disclose list matching methodologies used for matching the registration records to NCOA lists and voter files. It is my professional experience, based on my own research and that of other scholars in my fields of study, that many of the algorithms commonly used for list matching are highly susceptible to errors of omission from the lists (false negatives) and errors of inclusion of people who should not have been considered matches (false positives). It is standard for scientific research using list matching and record linkage to provide detailed information about the matching algorithms and to include measures of the accuracy of the algorithms used. 3 No indicators of the accuracy of matching methods, such as false positives and false negatives, are included in Mr. Braynard’s report.

2 National Commission on Forensic Science of the National Institute of Standards and Technology, U.S. Department of Commerce, “Testimony Using the Term ‘Reasonable Degree of Scientific Certainty,’” https://www.justice.gov/archives/ncfs/file/795336/download.

29. The lack of transparency in Mr. Braynard’s report violates basic standards for scientific

evidence. The report does not disclose the basic features of the survey, including the survey

selection and contact procedures, the questionnaire design, and contact, response, and breakoff

rates. This violates accepted rules of scientific evidence in academic survey research and the Code

of Professional Ethics and Practices of the American Association for Public Opinion Research

(AAPOR). Journals such as Public Opinion Quarterly, which is the flagship journal of AAPOR,

require reporting of such information as a condition for publication of scientifically sound survey

research. 4 Mr. Braynard’s description of the research conducted is not up to the scientific

standards of fields in which I have published or serve in an editorial capacity.

B. Errors in record keeping can readily explain all six claims made in this study.

30. Past academic research on the accuracy of information on voter files nationwide has found small rates of errors, on the order of 1 to 4 percent, in various fields on voter files, including whether someone voted and how they voted. Specifically, past research that I have conducted has found that nationwide the record of whether an individual voted is incorrect 2 percent of the time. These errors are omissions (neglecting to record that someone voted) and typos. 5 The state of Georgia has over 7.2 million registration records. An error rate of 2 percent would correspond to 144,000 incorrect recordings of whether an individual voted. That number far exceeds the magnitudes of the estimates that Mr. Braynard offers.

3 W. E. Winkler, “Matching and Record Linkage,” Statistical Research Division, U.S. Bureau of the Census (1993), https://www.census.gov/srd/papers/pdf/rr93-8.pdf.

4 American Association for Public Opinion Research, Disclosure Standards, https://www.aapor.org/Standards-Ethics/AAPOR-Code-of-Ethics/Disclosure-Standards.aspx.
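The comparison in the preceding paragraph is simple to verify; a two-line Python sketch, using the figures stated above:

    registrations = 7_200_000  # Georgia registration records, per the text above
    error_rate = 0.02          # 2 percent vote-history error rate found in past research
    print(registrations * error_rate)  # 144000.0 potentially incorrect vote-history entries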

31. Clerical errors in voter files make it difficult to conduct surveys based on these files to

determine whether or how an individual registrant or survey respondent voted. Research by

Matthew Berent and Jon Krosnick finds that such record keeping errors create errors in surveys

that are linked to voter files, such as the surveys conducted by Mr. Braynard, and make problematic

any attempts to draw inferences about whether a particular individual did or did not in fact vote. 6

These clerical errors result in discrepancies between votes counted according to the voter

registration rolls and votes counted in the official certification. 7 Record keeping errors and

inconsistencies are sufficient to account for Claims (1), (2), and (3) in Mr. Braynard’s report.

32. Clerical errors and inconsistencies in fields such as name, address, and date of birth can create errors in attempts to link records across different lists, such as a voter file to NCOA or across different states’ voter files. Specifically, typographical errors, variations in names, and omitted information can lead to incorrect matches of voter registration records to commercial phone lists, National Change of Address lists, and official government lists (including other states’ registration lists). Both false positives (matches that should not have occurred) and false negatives (matches that did not occur but should have) arise. The quality of such matches is highly dependent on the algorithms used. 8 Based on past research on the accuracy of voter files, the number of clerical errors on statewide voter files across the nation is sufficiently high as to plausibly be larger than any of the numbers presented in Mr. Braynard’s report.

5 Stephen Ansolabehere and Eitan Hersh, “The Quality of Voter Registration Records: A State-by-State Analysis,” Caltech-MIT Voting Technology Project Report Number 6 (July 14, 2010), http://vote.caltech.edu/reports/6.

6 Matthew Berent, Jon Krosnick, and Arthur Lupia, “Measuring Voter Registration and Turnout in Surveys: Do Government Records Yield More Accurate Assessments?” Public Opinion Quarterly 80 (2016): DOI: 10.1093/poq/nfw021.

7 Ansolabehere & Hersh, op. cit., page 16.

33. It is unclear from Mr. Braynard’s report what efforts he made, if any, to verify the correctness of the information on the voter registration lists. It is also unclear what effort was made to make sure that the algorithms used had very low rates of false positive and false negative matches and were robust to the sorts of errors and inconsistencies encountered on registration and commercial lists. None of the algorithms for matching phone numbers to registration records or for matching registered voters in Georgia to NCOA or other states’ registration lists are disclosed.

C. The survey reported in the study is not of sufficient quality to support the claims made.

34. Mr. Braynard relies on a phone survey of people linked to registration records to assert Claim (1) and Claim (2). The survey has a very high non-response rate, which makes inferences suspect. Claim (3) is evidently based on a second survey. Sample design problems, such as errors in linking commercial lists of phone numbers to voter registration lists, high rates of non-response, and flaws in the questionnaires used, can easily account for the observed results.

35. Some information about the survey used to support Claim (1) and Claim (2) is

disclosed in the report of Dr. William Briggs. Matthew Braynard did not disclose this information

in this case. Examination of those data revealed fatal flaws in the design of the survey that render it useless for reaching conclusions about Claim (1) and Claim (2).

i. The surveys used to support Claims 1, 2, and 3 have high non-response rates.

8 Stephen Ansolabehere and Eitan Hersh, “ADGN: An Algorithm for Record Linkage Using Address, Date of Birth, Gender, and Name,” Statistics and Public Policy 4 (2017): 1-10.
36. The Braynard report does not present a response rate, which violates accepted rules of

scientific evidence in academic survey research. The American Association for Public Opinion Research (AAPOR) sets standards for reporting of response rates for surveys. Journals such as

Public Opinion Quarterly and the American Journal of Political Science require reporting of

response rates to surveys for all published papers. 9 Surveys with very low response rates, below

5 percent, are never accepted in scientific journals.

37. According to the information in the Briggs Report, the response rate to Mr. Braynard’s Georgia survey is approximately one half of one percent, roughly one quarter of the response rate of the survey rejected by the court in Texas v. Holder. That is, 99.5 percent of all people that Matthew Braynard’s firm sought to contact either could not be contacted, did not respond to the survey calls, or refused to participate in the survey. The survey originally targeted the entire set of 138,029 absentee ballots that had not been returned. The appendix to Dr. William Briggs’ report shows that the firm was able to obtain potentially correct phone numbers for 34,355 people. Attempts to contact these people winnowed the set of respondents to 1,175 people (those who answered Question 1 of the survey, which ascertains who the person is). Just 964 people were asked Question 2 of the survey, which is whether the person requested an absentee ballot. Of these people, 128 hung up or refused to answer, reducing the number of respondents to 736. That is, only 736 people responded to the survey out of the original 138,029 that the firm sought to interview. Table 1 summarizes the number of people sought in the survey, the number of matched phone numbers, the number of Completes, and the number of people responding to Questions 1, 2, and 5 (the final question of the survey).

9 American Association for Public Opinion Research, Response Rates, https://www.aapor.org/Education-Resources/For-Researchers/Poll-Survey-FAQ/Response-Rates-An-Overview.aspx.
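The winnowing described in the preceding paragraph can be reproduced directly; a short Python sketch using the figures reported there and in Table 1:

    targets = 138_029   # unreturned absentee ballots the survey sought to cover
    loaded = 34_355     # records with potentially correct phone numbers
    asked_q1 = 1_175    # people who answered Question 1
    asked_q2 = 964      # people who were asked Question 2
    responded = 736     # people who offered a response to Question 2

    print(f"response rate = {responded / targets:.4%}")  # about 0.53 percent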
38. The response rate is not reported for the second survey, used in making Claim 3.

Based on figures in Mr. Braynard’s report, I calculate the response rate to be 1.7 percent, which is

again unacceptably low.

39. In my work as an expert witness for the Department of Justice, it is my experience that

surveys similar to this one with response rates of 2 percent (higher than the surveys here) are not

acceptable as evidence because of potential biases due to the unrepresentativeness of the

respondents who do answer the surveys. Specifically, in Texas v. Holder, Professor Daron Shaw

offered evidence based on phone surveys of registration lists. These surveys had very low response

rates of 2 percent, and the court rejected the data because of serious questions about accuracy and

reliability of surveys with very low response rates. See Texas v. Holder, 888 F. Supp. 2d 113, 131

(D.D.C. 2012), vacated and remanded, 570 U.S. 928, 133 S. Ct. 2886, 186 L. Ed. 2d 930 (2013).

40. In my experience as a researcher and journal editor, survey data with such a low

response rate are generally not accepted in academic research, as the potential non-response bias

errors are substantial. Researchers sometimes do use data with very low response rates, but only

upon affirmatively demonstrating that the data are representative of the population being studied

or upon correcting for potential non-response bias. Mr. Braynard’s report does neither: it makes no attempt to show that the 736 people in Georgia whom the survey ultimately asked whether they returned their absentee ballots are representative of the 138,029 that the researchers originally sought to interview, and it makes no attempt to correct for potential non-response bias.
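For concreteness, a minimal sketch of one standard correction of the kind the report never attempts: cell weighting, in which respondents are reweighted so that a known group’s share of the sample matches its share of the target population. The shares below are hypothetical and for illustration only; a real adjustment would use known characteristics of the 138,029 targeted voters.

    pop_share_over_65 = 0.30     # hypothetical share of the target population over 65
    sample_share_over_65 = 0.55  # hypothetical share of respondents over 65

    weight_over_65 = pop_share_over_65 / sample_share_over_65               # about 0.55
    weight_under_65 = (1 - pop_share_over_65) / (1 - sample_share_over_65)  # about 1.56
    print(weight_over_65, weight_under_65)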

41. Mr. Braynard presents survey data with unacceptably high rates of non-response. He

offers no accounting of or explanation for this very low response rate, but instead without

explanation treats the one half of one percent who did respond as if they were representative of the

99.5 percent of people who did not respond. This fails to meet standards of scientific research.
ii. Registration-based samples typically have many incorrect matches to phone numbers, and these can explain the findings.

42. The surveys used in this report to support Claims 1, 2, and 3 are based on an attempt to match phone numbers to records on the voter files.

43. There is no information reported on the methodology for matching phone numbers to voter files. Specifically, there is no information on the specific algorithm used for matching phone numbers to voter records and its accuracy. There is no report of the rate of successful matches, erroneous matches (both false positives and false negatives), and non-matches, or of the rate of obsolete and wrong numbers on the voter file. It is standard in academic research to report such information in connection with registration-based sample surveys. 10 Transparency in reporting algorithms is part of scientific practice because some algorithms are known to be more accurate than others, and because reporting such information allows for replication of research. I have published on list matching and voter validation in academic journals, and journals expect information on rates of successful and erroneous matches when publishing scientific research on this topic. 11 I served as an expert for the Department of Justice in two cases (Texas v. Holder and United States v. Texas) involving matching voter registration lists to other records, and information on correct and incorrect matches was expected as part of the disclosure in those cases.

44. Prior research has documented that there are substantial errors in matching phone numbers to voter files. Professors Donald Green and Alan Gerber have documented that a third of records on voter files have no phone number and that approximately 10 percent of the numbers on voter files are incorrect. 12 Studies conducted by the Pew Research Center on Methodology have found that, in surveys in which phone numbers are matched to voter registration lists, 40 percent of cell phone respondents and 70 percent of landline respondents are not the correct person. 13

10 Donald P. Green & Alan S. Gerber, “Can Registration-Based Sampling Improve the Accuracy of Midterm Election Forecasts?” Public Opinion Quarterly 70 (2006): 197-223.

11 Stephen Ansolabehere & Eitan D. Hersh, “ADGN: An Algorithm for Record Linkage Using Address, Date of Birth, Gender, and Name,” Statistics and Public Policy 4 (2017): 1-10.

45. Table 1 provides evidence of a high level of incorrect phone numbers in the survey relied on by Mr. Braynard. First, of the 15,179 Completes, 4,902 (32.3 percent) are flagged for wrong numbers and language. That indicates a high rate of mismatches even among the phone numbers that were matched. Second, of the 1,175 responses to Question 1, 255 (21.7 percent) could not be verified to be the Target of the survey.

46. Errors of that magnitude in matching phone numbers to voter files and in existing phone numbers on voter files can easily explain the estimates provided in the report. For example, using the figures from the Pew study cited in paragraph 44, if 40 percent of the 1,175 people who actually answered Mr. Braynard’s survey were the wrong person, then the study would have started with 470 wrong people interviewed. That number far exceeds the number who answered No to Question 2 or Yes to Question 3. The magnitudes of other potential errors, such as wrong phone numbers on registration lists or list matching errors, are also sufficient to account for Claims (1), (2), and (3).
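That extrapolation is straightforward to check; a short Python sketch applying the Pew cell phone figure (itself an estimate from a different survey context) to the respondent counts above:

    respondents = 1_175       # people who answered Question 1 of the survey
    wrong_person_rate = 0.40  # Pew estimate for cell phone respondents
    print(round(respondents * wrong_person_rate))  # 470 likely wrong-person interviews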

iii. The data for Claims 1 and 2 include people who should not have been interviewed.

47. Mr. Braynard states that his staff first determined that the person contacted was the correct person, and then asked that person whether they requested an absentee ballot. (See page 6 of his report.) The toplines reported in the appendix to the report of Dr. William Briggs reveal that this was not the protocol of the survey.

12 Green and Gerber, op. cit., page 202.

13 Pew Research Center, “Comparing Survey Sampling Strategies: Random-Digit Dialing vs. Voter Files,” Pew Research Center: Methods at 24-25 (October 9, 2018), https://www.pewresearch.org/methods/2018/10/09/performance-of-the-samples/.

48. According to the toplines, Question 1 asks “May I please speak to <lead on the

screen>?” Lead on the screen is the name of the registered voter. 767 cases were recorded as

“Reached Target [Go to Q2].” 255 cases were recorded as blank, but the instructions also state

“[Go to Q2]”. Responses to the same survey conducted in other states indicate that these 255

people were of “uncertain” status. They may or may not have been the correct person.

Nonetheless, they were kept in the pool. As a consequence, 25 percent of the people

interviewed in the survey did not affirmatively state that they were in fact the person that the

interviewer wished to speak to.

49. Question 2 asks “Did you request an absentee ballot?” 591 people said yes. 128 people said no. In his analysis, Mr. Braynard also counts as yes the 39 people listed as “member confirmed ‘Yes’” and as no the 14 respondents listed as “member confirmed ‘No.’” It is my understanding from the topline reports for other states appended to Dr. Briggs’ report that these are family members who were interviewed, and not the actual registrant. The number of people (255) who were not the “Reached Target” according to Q1 is larger than the number of people who said they did not request an absentee ballot on Q2 (128) or the number of people and their family members who indicated that they did not request a ballot (142). As a result, the entire result of this survey can be explained by the improper inclusion in the pool of respondents to Question 2 and Question 3 of 255 people who were not the Target of the survey.

50. Responses to Question 2 indicate that family members are interviewed and treated as

valid and reliable responses for a given voter. That contradicts the description of the survey as

interviews of the specific people on the voter files who requested absentee ballots.
51. In addition to the branching error from Question 1 to Question 2, there is also a branching error from Question 2 to Question 3. Question 3 includes people who said that they were Unsure as to whether they requested an absentee ballot.

52. This branching error is a fatal error in the design of the survey. It means that the pool of respondents includes people who were not in fact part of the target population. The survey allowed people who were not supposed to be asked Question 2 to nonetheless be asked Question 2. These are critical errors in survey design. They mean that the set of people who responded to the survey and were asked Questions 2 and 3 included people who should not have been asked Questions 2 and 3. On its face, the set of respondents who answered Questions 2 and 3 is not an accurate representation even of the small set of people who responded to the survey and should have been interviewed.

iv. There are inconsistencies in the accounting of the number of cases across Questions in the survey used for Claims 1 and 2.

53. There are unexplained missing cases running throughout the topline tables for this survey. Table 2 presents the toplines for the questions as reported in Dr. Briggs’ report. From Table 1 we can calculate the number of people eligible for Question 1: these are “Complete” cases that are coded as Q5 = 1 or 2 or Early Hangup/Refused. There are 1,700 cases. Table 2 shows that 1,175 respondents made it to Question 1 in the survey. Hence, 525 respondents are not included in the total number of responses to Question 1.

54. The total number of responses to Question 1 that are assigned to Question 2 is 1,022. That is the number of people listed as “1. Reached Target [Go to Q2]” or as “[Go to Q2].” Of these 1,022 cases, only 964 appear in the Question 2 toplines; 58 (5.7 percent of those assigned to Q2) are unaccounted for.

54. The total number of responses to Question 2 that are assigned to Question 3 is 670. That is the number of people listed as “Yes [Go to Q3],” as “Member confirmed ‘Yes’ [Go to Q3],” or as “5. Unsure [Go to Q3].” Of the 670 respondents assigned to Q3, only 623 are accounted for, a slippage of 47 cases (or 7 percent).

55. In my professional judgment as a survey researcher, such discrepancies in the accounting of cases are flags for failures in quality control. A total of 105 cases are not accounted for in the jumps from Question 1 to Question 2 and from Question 2 to Question 3. Another 525 are not accounted for in the launching of Question 1. Combined, 630 cases are lost in the toplines. These unaccounted-for cases are on top of the people who refused or hung up. That is a considerable number of unaccounted-for cases, given that Claims 1 and 2 are based on only 142 and 257 survey respondents, respectively.
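The case accounting in the three preceding paragraphs can be verified directly from the topline figures in Tables 1 and 2; a short Python sketch:

    eligible_q1 = 1_516 + 184    # Early Hangup/Refused plus Q5 = 1 or 2, i.e., 1,700
    answered_q1 = 1_175
    assigned_q2 = 767 + 255      # Reached Target plus "[Go to Q2]", i.e., 1,022
    answered_q2 = 964
    assigned_q3 = 591 + 39 + 40  # Yes, member-confirmed Yes, and Unsure, i.e., 670
    answered_q3 = 623

    print(eligible_q1 - answered_q1)  # 525 cases lost before Question 1
    print(assigned_q2 - answered_q2)  # 58 cases lost between Q1 and Q2
    print(assigned_q3 - answered_q3)  # 47 cases lost between Q2 and Q3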

56. I checked the topline tables for the survey data that Dr. Briggs appended to his report and that Dr. Briggs attributes to Mr. Braynard. Other states show inconsistencies and data errors. For example, for the state of Wisconsin, the Sum of Respondents for Question 1 is less than the sum of cases. In some states there are more cases assigned to Question 2 than answers to Question 2; in other states there are fewer cases assigned to Question 2 than answers to Question 2. The integrity checks in these other states lead me to believe that the inconsistencies in the Georgia data are systematic and widespread in these data.

57. In my experience, when such discrepancies arise during routine integrity checks, they

are either spreadsheet errors or programming logic errors in the survey system (i.e., the logic that

assigns individuals to questions). That these errors appear in the toplines indicates that there was

not a high level of scrutiny into the quality or integrity of the survey data produced in this study.

58. Based on this assessment, I have no confidence that the data in the survey used to

study Claims 1 and 2 are correct. Basic integrity checks for the data evidently were not performed

or reported. This creates doubt about the survey data for Claim 3, as well.
v. The survey does not ascertain Rollover Absentee Voters or disambiguate Rollover Absentee Voters from Other Absentee Voters.

59. Question 2 is not sufficiently clear and specific regarding the meaning of “request an

absentee ballot.” The survey does not ascertain whether respondents are rollover absentee voters

or have a designated person who may request a ballot on their behalf. Georgia allows voters who

are over 65 or incapacitated to receive an absentee ballot automatically without requesting one, so

long as they sign up for that service each year. These are called rollover absentee voters because

their absentee status rolls over from one election to the next in a given year, such as from the

primary to the general election. They do not have to make a request for an absentee ballot to be sent to them for a specific election.

60. There were approximately 582,000 “rollover” ballots in Georgia in the November 3,

2020, general election. 14

61. The substantial number of rollover absentee voters in Georgia creates ambiguity in the interpretation of Question 2 of the survey and the meaning of Claim 1. Some permanent absentee voters may answer “yes” because they registered for permanent absentee status, while others may say “no” because they do not need to request a ballot before each election to receive one. The ambiguity of Question 2, and the failure to disambiguate permanent absentee voters from other absentee voters in the responses, introduces measurement error into the survey. Additional survey questions would be required to distinguish different types of absentee voters. Without disambiguating the voters, the survey data cannot be used to draw the conclusion that some survey respondents received an absentee ballot in error, or that they received an absentee ballot without requesting one because that is their absentee status.

14 Stephen Fowler, “Nearly 800,000 Georgians Have Already Requested Absentee Ballots for November,” Ga. Today (September 2, 2020), https://www.gpb.org/news/2020/09/02/nearly-800000-georgians-have-already-requested-absentee-ballots-for-november.
vi. The survey cannot determine whether the respondent properly mailed a ballot to the election office.

62. Claim 2 holds that 33.29 percent of the 138,029 people who were sent absentee ballots that were not recorded as returned stated that they mailed their ballots back to the election office.

63. This is based on an extrapolation from 257 responses to Question 3. As already

described, the survey has an unacceptably low response rate, a flawed questionnaire design, and

accounting inconsistencies. Moreover, Question 3 is inadequate to measure whether the election

office should have recorded a mailed ballot as received.

64. It is my experience working with election administrators and researching election

administration as part of the Caltech/MIT Voting Technology Project that many absentee ballots

are not recorded or counted because they are not received on time or are not properly prepared and

submitted. Late absentees are not accepted, and they are usually not recorded in the tally of ballots

received. Ballots that are spoiled, unsigned, in the incorrect envelopes, or rejected for some other reason are not counted. The fact that there is no record of a vote or of a received absentee ballot is not necessarily evidence of an error in the handling of the ballot. Instead, it may be evidence of correct treatment of ballots by the election officials in accordance with state laws.

65. According to figures reported by the county election offices in the State of Georgia to

the Election Assistance Commission, there were 3,525 late absentee ballots and 36,255

unaccounted absentee ballots in Georgia in 2018. 15 In addition, there were 7,512 rejected

absentees and 2,322 undeliverable absentees in the State in 2018. These figures far exceed the

total number of survey responses.

15 I compiled these figures from the spreadsheets published by the Election Assistance Commission for the 2018 Election Administration and Voting Comprehensive Survey (EAVS), last accessed December 2, 2020, https://www.eac.gov/research-and-data/studies-and-reports.
66. Question 3 does not ascertain when the ballot was sent, whether it was signed, and

other factors that would affect whether it was received on time (and thus recorded) or was in fact

a valid ballot. Without accounting for those variables, the conclusion based on the data from

Question 3 is unreliable.

vii. Question 3, which asks whether the respondent mailed the ballot, is subject to social desirability bias and memory errors.

67. Question 3 asks people whether they voted. Specifically, it asks people who said that

they requested an absentee ballot whether they returned an absentee ballot, that is, whether they

voted that ballot.

68. It has long been understood in political science that respondents to surveys overreport

voting in elections. The most commonly identified sorts of biases are memory errors and social

desirability bias in questions asking people whether they voted. 16 In the context of this survey

such biases would lead to overstatement of Yes responses to Question 3. Mr. Braynard’s report

gives no indication that he attempted to account or correct for these biases.

D. The list matching methodology that links registration records to NCOA and to other states’ voter databases likely has sufficient errors to account for Claims 3, 4, and 6.

69. Claims 3, 4, and 6 rely on data derived from matching voter registration records to NCOA files or to other states’ voter files.

70. The exact methods used for matching the state’s voter files to the NCOA list and to other states’ voter files are not described. The lack of transparency in reporting the specific fields used for matching and the algorithms used violates academic standards in this field. Exhibit 2 of Matthew Braynard’s report does mention use of complete date of birth, but no other fields are mentioned for list matching. It is standard scientific practice to report the algorithms used, match rates, non-match rates, rates of false positives and false negatives, and sensitivity analyses in scientific reports and articles using matching and record linkage. 17 Without such information it is impossible to evaluate the reliability of the methods used. No such information is reported here.

16 See, e.g., Allyson L. Holbrook and Jon A. Krosnick, “Social Desirability Bias in Voter Turnout Reports: Tests Using the Item Count Technique,” Public Opinion Quarterly 74 (2010): 37-67. See also Stephen Ansolabehere and Eitan Hersh, “Validation: What Big Data Reveal About Survey Misreporting and the Real Electorate,” Political Analysis 20 (2012): 437-459.

71. Recent academic research on attempts to match voter registration records to other states’ voter files or to national lists, such as NCOA, has shown that this task can be prone to high rates of error. Crosscheck, a collaboration of 28 states, matches people across states based on first name, last name, and date of birth. This approach has been determined to be unreliable because it yields a very high number of incorrect matches. One study found that Crosscheck’s methodology identified almost 3 million “matching individuals who voted twice nationwide.” All but 600 of these records were deemed to be false positives, in which the method says two people are the same but in fact they are not. For the other 600 cases, it could not be determined whether they were or were not the same individual. 18 The Crosscheck experience suggests that it is quite easy to link records incorrectly when matching voter files to national lists (such as NCOA) or other states’ registration databases. This example underscores the need to disclose algorithms and provide evidence that there are not large numbers of false positives and false negatives. Matching on name and date of birth, as was done using Crosscheck, will likely produce huge numbers of false positives.
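A stylized calculation, not Crosscheck’s actual procedure, illustrates why matching on name and date of birth alone produces coincidental matches by the thousands. The name-space and birth-date-space figures in this Python sketch are rough assumptions for illustration, not measured quantities.

    list_a = 7_200_000    # records in a Georgia-sized voter file (figure from the text)
    list_b = 7_200_000    # records in a second, similarly sized list
    name_space = 150_000  # assumed number of distinct first-last name combinations
    dob_space = 80 * 365  # assumed span of plausible birth dates, in days

    # Expected pairs that agree on name and date of birth purely by chance
    expected_false = (list_a * list_b) / (name_space * dob_space)
    print(round(expected_false))  # on the order of 10,000 spurious pairs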

E. Claim 5 argues that 1,043 individuals disguised their addresses.

17 W. E. Winkler, “Matching and Record Linkage,” Statistical Research Division, U.S. Bureau of the Census (1993), https://www.census.gov/srd/papers/pdf/rr93-8.pdf.
18 United States Commission on Civil Rights, An Assessment of Minority Voting Rights Access in the United States: 2018 Statutory Enforcement Report, transmitted to the President September 12, 2018, https://www.usccr.gov/pubs/2018/Minority_Voting_Access_2018.pdf, at pages 112-113.
72. Voter registration forms for the State of Georgia allow for separate residential and

mailing addresses. The form provides a space for apartment numbers but not PO Boxes in mailing

addresses. In my experience working with state databases and performing record linkage, it is

entirely plausible that individuals put PO Box numbers in this blank because the form does not

provide a specific space for that information in mailing addresses.

73. The list of records appended to Mr. Braynard’s report in Exhibit 3 does not specify

whether the address listed is the residential address or the mailing address of the individual.

74. It is unclear who these individuals are and why they might use a PO Box address.

These may, for example, be homeless sex offenders or domestic abuse victims. In my experience

working with election administrators through the Caltech/MIT Voting Technology Project I have

learned that many jurisdictions across the United States are not allowed to enforce address rules

for voter registration in special circumstances. I do not know the degree to which Georgia election

administration procedures are flexible about address fields, but certainly the information provided

in Mr. Braynard’s report does not determine whether these might be such circumstances.

75. There is no description of the procedures used in making this list, especially what fields are used. No program or code was appended to or included in the report, so it is impossible to verify whether the analysis was done correctly.

F. Summary

76. None of the six claims made in Matthew Braynard’s report reaches acceptable standards of scientific research. There is a lack of transparency in reporting the survey, matching, and coding methods, and errors in matching could completely account for any reported numbers or claims. There are demonstrable and fatal flaws in the survey research, especially unacceptable response rates, branching errors, and data inconsistencies.


77. The design of the studies does not test for the obvious explanations of any findings. The ballots that were received and not requested could be the result of nothing more than the more than 500,000 rollover absentee voters in the state, who receive ballots without requesting them. The surveys did not explore this very likely explanation. Many or all of the “unreturned” ballots are likely late ballots or respondents saying they voted when in fact they had not.

78. None of the estimates offered in support of the six claims is presented with appropriate measures of statistical certainty or inference. Instead, Mr. Braynard prefaces each claim with the phrase “to a reasonable degree of scientific certainty,” a phrase that has no scientific meaning and that the National Institute of Standards and Technology and the Attorney General of the United States have warned experts not to use. 19 There is no scientific basis offered for the conclusions reached.

19 Office of the Attorney General, Recommendations of the National Commission on Forensic Science; Announcement for NCFS Meeting Eleven, https://www.justice.gov/opa/file/891366/download.
APPENDIX A

TABLES

Table 1. Phone Survey Targets, Attempts, and Completes

Process                                              Number of Cases      Percent of Targets
                                                     Remaining in Survey  for Survey

People the Survey Sought to Reach                    138,029              100%
(all Unreturned Ballots) [Targets for Survey]
List Penetration                                     No number reported   58.45%
Data Loads (Phone Numbers Loaded                     34,355               24.89%
into the Survey System)
“Completes”
  Wrong Numbers/Language                             4,902
  Answering Machines                                 13,479
  Early Hang Up/Refused                              1,516
  Q5 = 01 or 02                                      184
  Subtotal: “Completes”                              15,179               11.00%
Completes Eligible for Survey                        1,700                1.23%
(Q5 or Early Hang Up/Refused)
Asked Q1                                             1,175                0.85%
Completed Q1 (not Refused or Hangup to Q1)           1,022                0.74%
Offered a Response to Q2                             736                  0.53%
(without hanging up or refusing)
Completed Entire Survey (Q5)                         185                  0.13%

Source: William Briggs Report; Briggs states that Matthew Braynard provided him these data.

Table 2. Toplines for the Georgia Survey conducted by Mr. Matthew Braynard as reported in the report of Dr. William Briggs

Q1 – May I Please Speak to <lead on screen>?
  1. Reached Target [Go to Q2]               767
  [Go to Q2]*                                255
  X = Refused <Go to CLOSE A>                153
  Q = Hangup <Go to CLOSE A>                 385
  Sum of All Responses                       1,175
  * Note: Toplines for other states in Briggs’ report list the second response category as “Uncertain.”

Q2 – Did you request an absentee ballot?
  1. Yes [Go to Q3]                          591
  2. No [Go to Q4]                           128
  Member confirmed “Yes” [Go to Q3]*         39
  Member confirmed “No” [Go to Q4]*          14
  5. Unsure [Go to Q3]                       40
  Moment. [Go to Close A]                    82
  X = Refused <Go to CLOSE A>                70
  Q = Hangup <Go to CLOSE A>                 58
  Sum of All Responses                       964
  * Note: Toplines for Wisconsin in Briggs’ report describe these as “per Spouse/family Member.”

Q3 – Did you mail back that ballot?
  1. Yes [Go to Q4]                          240
  2. No [Go to Close A]                      317
  Member confirmed “Yes” [Go to Q4]*         17
  Member confirmed “No” [Go to Close A]*     9
  5. Unsure [Go to Close A]                  24
  Moment. [Go to Close A]                    11
  X = Refused <Go to CLOSE A>                5
  Q = Hangup <Go to CLOSE A>                 7
  Sum of All Responses                       623
  * Note: Toplines for Wisconsin in Briggs’ report describe these as “per Spouse/family Member.”
Signed at Boston, Massachusetts, on the date below.
Date: December 4, 2020

_________________________________
Stephen Ansolabehere
STEPHEN DANIEL ANSOLABEHERE

Department of Government
Harvard University
1737 Cambridge Street
Cambridge, MA 02138
sda@gov.harvard.edu

EDUCATION

Harvard University Ph.D., Political Science 1989


University of Minnesota B.A., Political Science 1984
B.S., Economics

PROFESSIONAL EXPERIENCE

ACADEMIC POSITIONS

2016-present Frank G. Thompson Professor of Government, Harvard University


2008-present Professor, Department of Government, Harvard University
2015-present Director, Center for American Politics, Harvard University
1998-2009 Elting Morison Professor, Department of Political Science, MIT
(Associate Head, 2001-2005)
1995-1998 Associate Professor, Department of Political Science, MIT
1993-1994 National Fellow, The Hoover Institution
1989-1993 Assistant Professor, Department of Political Science,
University of California, Los Angeles

FELLOWSHIPS AND HONORS

American Academy of Arts and Sciences 2007


Carnegie Scholar 2000-02
National Fellow, The Hoover Institution 1993-94
Harry S. Truman Fellowship 1982-86

PUBLICATIONS

Books

2019 American Government, 15th edition. With Ted Lowi, Benjamin Ginsberg
and Kenneth Shepsle. W.W. Norton.

2014 Cheap and Clean: How Americans Think About Energy in the Age of Global
Warming. With David Konisky. MIT Press.
Recipient of the Donald K. Price book award.

2008 The End of Inequality: One Person, One Vote and the Transformation of
American Politics. With James M. Snyder, Jr., W. W. Norton.

1996 Going Negative: How Political Advertising Divides and Shrinks the American
Electorate. With Shanto Iyengar. The Free Press. Recipient of the Goldsmith
book award.

1993 Media Game: American Politics in the Television Age. With Roy Behr and
Shanto Iyengar. Macmillan.

Journal Articles

2021 “The CPS Voting and Registration Supplement Overstates Turnout” Journal of
Politics Forthcoming (with Bernard Fraga and Brian Schaffner)

2021 "Congressional Representation: Accountability from the Constituent's Perspective,"


American Journal of Political Science forthcoming (with Shiro Kuriwaki)

2020 “Proximity, NIMBYism, and Public Support for Energy Infrastructure”


Public Opinion Quarterly (with David Konisky and Sanya Carley)
https://doi.org/10.1093/poq/nfaa025

2020 “Understanding Exponential Growth Amid a Pandemic: An Internal Perspective,”


Harvard Data Science Review 2 (October) (with Ray Duch, Kevin DeLuca,
Alexander Podkul, Liberty Vittert)

2020 “Unilateral Action and Presidential Accountability,” Presidential Studies Quarterly


50 (March): 129-145. (with Jon Rogowski)

2019 “Backyard Voices: How Sense of Place Shapes Views of Large-Scale Energy
Transmission Infrastructure” Energy Research & Social Science
forthcoming (with Parrish Bergquist, Sanya Carley, and David Konisky)

2019 “Are All Electrons the Same? Evaluating support for local transmission lines
through an experiment” PLOS ONE 14 (7): e0219066
(with Sanya Carley and David Konisky)
https://doi.org/10.1371/journal.pone.0219066

2018 “Learning from Recounts” Election Law Journal 17: 100-116 (with Barry C. Burden,
Kenneth R. Mayer, and Charles Stewart III)
https://doi.org/10.1089/elj.2017.0440

2018 “Policy, Politics, and Public Attitudes Toward the Supreme Court” American
Politics Research (with Ariel White and Nathaniel Persily).
https://doi.org/10.1177/1532673X18765189

2018 “Measuring Issue-Salience in Voters’ Preferences” Electoral Studies (with Maria


Socorro Puy) 51 (February): 103-114.

2018 “Divided Government and Significant Legislation: A History of Congress,” Social
Science History 42 (1) (with Maxwell Palmer and Benjamin Schneer).

2017 “ADGN: An Algorithm for Record Linkage Using Address, Date of Birth,
Gender, and Name,” Statistics and Public Policy (with Eitan Hersh)

2017 “Identity Politics” (with Socorro Puy) Public Choice. 168: 1-19.
DOI 10.1007/s11127-016-0371-2

2016 “A 200-Year Statistical History of the Gerrymander” (with Maxwell Palmer) The
Ohio State University Law Journal

2016 “Do Americans Prefer Co-Ethnic Representation? The Impact of Race on House
Incumbent Evaluations” (with Bernard Fraga) Stanford Law Review
68: 1553-1594

2016 “Revisiting Public Opinion on Voter Identification and Voter Fraud in an Era of
Increasing Partisan Polarization” (with Nathaniel Persily) Stanford Law Review
68: 1455-1489

2015 “The Perils of Cherry Picking Low Frequency Events in Large Sample Surveys”
(with Brian Schaffner and Samantha Luks) Electoral Studies 40 (December):
409-410.

2015 “Testing Shaw v. Reno: Do Majority-Minority Districts Cause Expressive
Harms?” (with Nathaniel Persily) New York University Law Review 90

2015 “A Brief Yet Practical Guide to Reforming U.S. Voter Registration,” Election Law
Journal (with Daron Shaw and Charles Stewart) 14: 26-31.

2015 “Waiting to Vote,” Election Law Journal, (with Charles Stewart) 14: 47-53.

2014 “Mecro-economic Voting: Local Information and Micro-Perceptions of the


Macro-Economy” (With Marc Meredith and Erik Snowberg), Economics and
Politics 26 (November): 380-410.

2014 “Does Survey Mode Still Matter?” Political Analysis (with Brian Schaffner) 22:
285-303

2013 “Race, Gender, Age, and Voting” Politics and Governance, vol. 1, issue 2.
(with Eitan Hersh)
http://www.librelloph.com/politicsandgovernance/article/view/PaG-1.2.132

2013 “Regional Differences in Racially Polarized Voting: Implications for the


Constitutionality of Section 5 of the Voting Rights Act” (with Nathaniel Persily
and Charles Stewart) 126 Harvard Law Review F 205 (2013)
http://www.harvardlawreview.org/issues/126/april13/forum_1005.php

2013 “Cooperative Survey Research” Annual Review of Political Science (with


Douglas Rivers)

2013 “Social Sciences and the Alternative Energy Future” Daedalus (with Bob Fri)

2013 “The Effects of Redistricting on Incumbents,” Election Law Journal


(with James Snyder)

2012 “Asking About Numbers: How and Why” Political Analysis (with Erik
Snowberg and Marc Meredith). doi:10.1093/pan/mps031

2012 “Movers, Stayers, and Registration” Quarterly Journal of Political Science


(with Eitan Hersh and Ken Shepsle)

2012 “Validation: What Big Data Reveals About Survey Misreporting and the Real
Electorate” Political Analysis (with Eitan Hersh)

2012 “Arizona Free Enterprise v. Bennett and the Problem of Campaign Finance”
Supreme Court Review 2011(1):39-79

2012 “The American Public’s Energy Choice” Daedalus (with David Konisky)

2012 “Challenges for Technology Change” Daedalus (with Robert Fri)

2011 “When Parties Are Not Teams: Party positions in single-member district and
proportional representation systems” Economic Theory 49 (March)
DOI: 10.1007/s00199-011-0610-1 (with James M. Snyder Jr. and William
Leblanc)

2011 “Profiling Originalism” Columbia Law Review (with Jamal Greene and Nathaniel
Persily).

2010 “Partisanship, Public Opinion, and Redistricting” Election Law Journal (with
Joshua Fougere and Nathaniel Persily).

2010 “Primary Elections and Party Polarization” Quarterly Journal of Political Science
(with Shigeo Hirano, James Snyder, and Mark Hansen)

2010 “Constituents’ Responses to Congressional Roll Call Voting,” American


Journal of Political Science (with Phil Jones)

2010 “Race, Region, and Vote Choice in the 2008 Election: Implications for
the Future of the Voting Rights Act” Harvard Law Review April, 2010. (with
Nathaniel Persily, and Charles H. Stewart III)

2010 “Residential Mobility and the Cell Only Population,” Public Opinion Quarterly
(with Brian Schaffner)

2009 “Explaining Attitudes Toward Power Plant Location,” Public Opinion Quarterly
(with David Konisky)

2009 “Public risk perspectives on the geologic storage of carbon dioxide,”


International Journal of Greenhouse Gas Control (with Gregory Singleton and
Howard Herzog) 3(1): 100-107.

2008 “A Spatial Model of the Relationship Between Seats and Votes” (with William
Leblanc) Mathematical and Computer Modeling (November).

2008 “The Strength of Issues: Using Multiple Measures to Gauge Preference Stability,
Ideological Constraint, and Issue Voting” (with Jonathan Rodden and James M.
Snyder, Jr.) American Political Science Review (May).

2008 “Access versus Integrity in Voter Identification Requirements.” New York
University Annual Survey of American Law, vol. 63.

2008 “Voter Fraud in the Eye of the Beholder” (with Nathaniel Persily) Harvard Law
Review (May)

2007 “Incumbency Advantages in U. S. Primary Elections,” (with John Mark Hansen,
Shigeo Hirano, and James M. Snyder, Jr.) Electoral Studies (September)

2007 “Television and the Incumbency Advantage” (with Erik C. Snowberg and
James M. Snyder, Jr). Legislative Studies Quarterly.

2006 “The Political Orientation of Newspaper Endorsements” (with Rebecca
Lessem and James M. Snyder, Jr.). Quarterly Journal of Political Science vol. 1,
issue 3.

2006 “Voting Cues and the Incumbency Advantage: A Critical Test” (with Shigeo
Hirano, James M. Snyder, Jr., and Michiko Ueda) Quarterly Journal of
Political Science vol. 1, issue 2.

2006 “American Exceptionalism? Similarities and Differences in National Attitudes
Toward Energy Policies and Global Warming” (with David Reiner, Howard
Herzog, K. Itaoka, M. Odenberger, and Fillip Johanssen) Environmental Science
and Technology (February 22, 2006),
http://pubs3.acs.org/acs/journals/doilookup?in_doi=10.1021/es052010b

2006 “Purple America” (with Jonathan Rodden and James M. Snyder, Jr.) Journal
of Economic Perspectives (Winter).

2005 “Did the Introduction of Voter Registration Decrease Turnout?” (with David
Konisky). Political Analysis.

2005 “Statistical Bias in Newspaper Reporting: The Case of Campaign Finance”
Public Opinion Quarterly (with James M. Snyder, Jr., and Erik Snowberg).

2005 “Studying Elections” Policy Studies Journal (with Charles H. Stewart III and R.
Michael Alvarez).

2005 “Legislative Bargaining under Weighted Voting” American Economic Review
(with James M. Snyder, Jr., and Michael Ting)

2005 “Voting Weights and Formateur Advantages in Coalition Formation: Evidence
from Parliamentary Coalitions, 1946 to 2002” (with James M. Snyder, Jr., Aaron
B. Strauss, and Michael M. Ting) American Journal of Political Science.

2005 “Reapportionment and Party Realignment in the American States” Pennsylvania
Law Review (with James M. Snyder, Jr.)

2004 “Residual Votes Attributable to Voting Technologies” (with Charles Stewart)
Journal of Politics

2004 “Using Term Limits to Estimate Incumbency Advantages When Office Holders
Retire Strategically” (with James M. Snyder, Jr.). Legislative Studies Quarterly
vol. 29, November 2004, pages 487-516.

2004 “Did Firms Profit From Soft Money?” (with James M. Snyder, Jr., and Michiko
Ueda) Election Law Journal vol. 3, April 2004.

2003 “Bargaining in Bicameral Legislatures” (with James M. Snyder, Jr. and Mike
Ting) American Political Science Review, August, 2003.

2003 “Why Is There So Little Money in U.S. Politics?” (with James M. Snyder, Jr.)
Journal of Economic Perspectives, Winter, 2003.

2002 “Equal Votes, Equal Money: Court-Ordered Redistricting and the Public
Spending in the American States” (with Alan Gerber and James M. Snyder, Jr.)
American Political Science Review, December, 2002.
Paper awarded the Heinz Eulau award for the best paper in the American Political
Science Review.

2002 “Are PAC Contributions and Lobbying Linked?” (with James M. Snyder, Jr. and
Micky Tripathi) Business and Politics 4, no. 2.

2002 “The Incumbency Advantage in U.S. Elections: An Analysis of State and Federal
Offices, 1942-2000” (with James Snyder) Election Law Journal, 1, no. 3.

2001 “Voting Machines, Race, and Equal Protection.” Election Law Journal, vol. 1,
no. 1

2001 “Models, assumptions, and model checking in ecological regressions” (with
Andrew Gelman, David Park, Phillip Price, and Larraine Minnite) Journal of
the Royal Statistical Society, series A, 164: 101-118.

2001 “The Effects of Party and Preferences on Congressional Roll Call Voting.”
(with James Snyder and Charles Stewart) Legislative Studies Quarterly
(forthcoming).
Paper awarded the Jewell-Lowenberg Award for the best paper published on
legislative politics in 2001. Paper awarded the Jack Walker Award for the best
paper published on party politics in 2001.

2001 “Candidate Positions in Congressional Elections,” (with James Snyder and
Charles Stewart). American Journal of Political Science 45 (November).

2000 “Old Voters, New Voters, and the Personal Vote,” (with James Snyder and
Charles Stewart) American Journal of Political Science 44 (February).

2000 “Soft Money, Hard Money, Strong Parties,” (with James Snyder) Columbia Law
Review 100 (April):598 - 619.

2000 “Campaign War Chests and Congressional Elections,” (with James Snyder)
Business and Politics. 2 (April): 9-34.

1999 “Replicating Experiments Using Surveys and Aggregate Data: The Case of
Negative Advertising.” (with Shanto Iyengar and Adam Simon) American
Political Science Review 93 (December).

1999 “Valence Politics and Equilibrium in Spatial Models,” (with James Snyder),
Public Choice.

1999 “Money and Institutional Power,” (with James Snyder), Texas Law Review 77
(June, 1999): 1673-1704.

1997 “Incumbency Advantage and the Persistence of Legislative Majorities,” (with Alan
Gerber), Legislative Studies Quarterly 22 (May 1997).

1996 “The Effects of Ballot Access Rules on U.S. House Elections,” (with Alan
Gerber), Legislative Studies Quarterly 21 (May 1996).

1994 “Riding the Wave and Issue Ownership: The Importance of Issues in Political
Advertising and News,” (with Shanto Iyengar) Public Opinion Quarterly 58:
335-357.

1994 “Horseshoes and Horseraces: Experimental Evidence of the Effects of Polls on
Campaigns,” (with Shanto Iyengar) Political Communications 11/4 (October-
December): 413-429.

1994 “Does Attack Advertising Demobilize the Electorate?” (with Shanto Iyengar),
American Political Science Review 89 (December).

1994 “The Mismeasure of Campaign Spending: Evidence from the 1990 U.S. House
Elections,” (with Alan Gerber) Journal of Politics 56 (September).

1993 “Poll Faulting,” (with Thomas R. Belin) Chance 6 (Winter): 22-28.

1991 “The Vanishing Marginals and Electoral Responsiveness,” (with David Brady and
Morris Fiorina) British Journal of Political Science 22 (November): 21-38.

1991 “Mass Media and Elections: An Overview,” (with Roy Behr and Shanto Iyengar)
American Politics Quarterly 19/1 (January): 109-139.

1990 “The Limits of Unraveling in Interest Groups,” Rationality and Society 2:
394-400.

1990 “Measuring the Consequences of Delegate Selection Rules in Presidential
Nominations,” (with Gary King) Journal of Politics 52: 609-621.

1989 “The Nature of Utility Functions in Mass Publics,” (with Henry Brady) American
Political Science Review 83: 143-164.

Special Reports and Policy Studies

2010 The Future of Nuclear Power, Revised.

2006 The Future of Coal. MIT Press. Continued reliance on coal as a primary power
source will lead to very high concentrations of carbon dioxide in the atmosphere,
resulting in global warming. This cross-disciplinary study – drawing on faculty
from Physics, Economics, Chemistry, Nuclear Engineering, and Political Science
– develops a road map for technology research and development policy in order to
address the challenges of carbon emissions from expanding use of coal for
electricity and heating throughout the world.

2003 The Future of Nuclear Power. MIT Press. This cross-disciplinary study –
drawing on faculty from Physics, Economics, Chemistry, Nuclear Engineering,
and Political Science – examines what contribution nuclear power can make to
meeting growing electricity demand, especially in a world with increasing carbon
dioxide emissions from fossil fuel power plants.

2002 “Election Day Registration.” A report prepared for DEMOS. This report analyzes
the possible effects of Proposition 52 in California based on the experiences of 6
states with election day registration.

2001 Voting: What Is, What Could Be. A report of the Caltech/MIT Voting
Technology Project. This report examines the voting system, especially
technologies for casting and counting votes, registration systems, and polling place
operations, in the United States. It was widely used by state and national
governments in formulating election reforms following the 2000 election.

2001 “An Assessment of the Reliability of Voting Technologies.” A report of the
Caltech/MIT Voting Technology Project. This report provided the first
nationwide assessment of voting equipment performance in the United States. It
was prepared for the Governor’s Select Task Force on Election Reform in Florida.

Chapters in Edited Volumes

2016 “Taking the Study of Public Opinion Online” (with Brian Schaffner) Oxford
Handbook of Public Opinion, R. Michael Alvarez, ed. Oxford University Press:
New York, NY.

2014 “Voter Registration: The Process and Quality of Lists” The Measure of
American Elections, Barry Burden, ed.

2012 “Using Recounts to Measure the Accuracy of Vote Tabulations: Evidence from
New Hampshire Elections, 1946-2002” in Confirming Elections, R. Michael
Alvarez, Lonna Atkeson, and Thad Hall, eds. New York: Palgrave, Macmillan.

2010 “Dyadic Representation” in Oxford Handbook on Congress, Eric Schickler, ed.,
Oxford University Press.

2008 “Voting Technology and Election Law” in America Votes!, Benjamin Griffith,
editor, Washington, DC: American Bar Association.

2007 “What Did the Direct Primary Do to Party Loyalty in Congress” (with
Shigeo Hirano and James M. Snyder Jr.) in Process, Party and Policy
Making: Further New Perspectives on the History of Congress, David
Brady and Matthew D. McCubbins (eds.), Stanford University Press, 2007.

2007 “Election Administration and Voting Rights” in Renewal of the Voting
Rights Act, David Epstein and Sharyn O’Hallaran, eds. Russell Sage Foundation.

2006 “The Decline of Competition in Primary Elections,” (with John Mark Hansen,
Shigeo Hirano, and James M. Snyder, Jr.) The Marketplace of Democracy,
Michael P. McDonald and John Samples, eds. Washington, DC: Brookings.

2005 “Voters, Candidates and Parties” in Handbook of Political Economy, Barry
Weingast and Donald Wittman, eds. New York: Oxford University Press.

2003 “Baker v. Carr in Context, 1946 – 1964” (with Samuel Isaacharoff) in
Constitutional Cases in Context, Michael Dorf, editor. New York: Foundation
Press.

2002 “Corruption and the Growth of Campaign Spending” (with Alan Gerber and James
Snyder). A User’s Guide to Campaign Finance, Jerry Lubenow, editor. Rowman
and Littlefield.

2001 “The Paradox of Minimal Effects,” in Henry Brady and Richard Johnston, eds.,
Do Campaigns Matter? University of Michigan Press.

2001 “Campaigns as Experiments,” in Henry Brady and Richard Johnston, eds., Do
Campaigns Matter? University of Michigan Press.

2000 “Money and Office,” (with James Snyder) in David Brady and John Cogan, eds.,
Congressional Elections: Continuity and Change. Stanford University Press.

1996 “The Science of Political Advertising,” (with Shanto Iyengar) in Political
Persuasion and Attitude Change, Richard Brody, Diana Mutz, and Paul
Sniderman, eds. Ann Arbor, MI: University of Michigan Press.

1995 “Evolving Perspectives on the Effects of Campaign Communication,” in Philo
Warburn, ed., Research in Political Sociology, vol. 7, JAI.

1995 “The Effectiveness of Campaign Advertising: It’s All in the Context,” (with
Shanto Iyengar) in Campaigns and Elections American Style, Candice Nelson and
James A. Thurber, eds. Westview Press.

1993 “Information and Electoral Attitudes: A Case of Judgment Under Uncertainty,”
(with Shanto Iyengar), in Explorations in Political Psychology, Shanto Iyengar
and William McGuire, eds. Durham: Duke University Press.

Working Papers

2009 “Sociotropic Voting and the Media” (with Marc Meredith and Erik Snowberg),
American National Election Study Pilot Study Reports, John Aldrich editor.

2007 “Public Attitudes Toward America’s Energy Options: Report of the 2007 MIT
Energy Survey” CEEPR Working Paper 07-002 and CANES working paper.

2006 "Constituents' Policy Perceptions and Approval of Members' of Congress" CCES


Working Paper 06-01 (with Phil Jones).

2004 “Using Recounts to Measure the Accuracy of Vote Tabulations: Evidence from
New Hampshire Elections, 1946 to 2002” (with Andrew Reeves).

2002 “Evidence of Virtual Representation: Reapportionment in California,” (with
Ruimin He and James M. Snyder).

1999 “Why did a majority of Californians vote to lower their own power?” (with James
Snyder and Jonathan Woon). Paper presented at the annual meeting of the
American Political Science Association, Atlanta, GA, September, 1999.
Paper received the award for the best paper on Representation at the 1999 Annual
Meeting of the APSA.

1999 “Has Television Increased the Cost of Campaigns?” (with Alan Gerber and James
Snyder).

1996 “Money, Elections, and Candidate Quality,” (with James Snyder).

1996 “Party Platform Choice - Single-Member District and Party-List Systems,” (with
James Snyder).

1995 “Messages Forgotten” (with Shanto Iyengar).

1994 “Consumer Contributors and the Returns to Fundraising: A Microeconomic
Analysis,” (with Alan Gerber), presented at the Annual Meeting of the American
Political Science Association, September.

1992 “Biases in Ecological Regression,” (with R. Douglas Rivers) August, (revised
February 1994). Presented at the Midwest Political Science Association Meetings,
April 1994, Chicago, IL.

1992 “Using Aggregate Data to Correct Nonresponse and Misreporting in Surveys”
(with R. Douglas Rivers). Presented at the annual meeting of the Political
Methodology Group, Cambridge, Massachusetts, July.

1991 “The Electoral Effects of Issues and Attacks in Campaign Advertising” (with
Shanto Iyengar). Presented at the Annual Meeting of the American Political
Science Association, Washington, DC.

1991 “Television Advertising as Campaign Strategy: Some Experimental Evidence”
(with Shanto Iyengar). Presented at the Annual Meeting of the American
Association for Public Opinion Research, Phoenix.

1991 “Why Candidates Attack: Effects of Televised Advertising in the 1990 California
Gubernatorial Campaign,” (with Shanto Iyengar). Presented at the Annual
Meeting of the Western Political Science Association, Seattle, March.

1990 “Winning is Easy, But It Sure Ain’t Cheap.” Working Paper #90-4, Center for the
American Politics and Public Policy, UCLA. Presented at the Political Science
Departments at Rochester University and the University of Chicago.

Research Grants

1989-1990 Markle Foundation. “A Study of the Effects of Advertising in the 1990
California Gubernatorial Campaign.” Amount: $50,000

1991-1993 Markle Foundation. “An Experimental Study of the Effects of Campaign
Advertising.” Amount: $150,000

1991-1993 NSF. “An Experimental Study of the Effects of Advertising in the 1992
California Senate Electoral.” Amount: $100,000

1994-1995 MIT Provost Fund. “Money in Elections: A Study of the Effects of Money on
Electoral Competition.” Amount: $40,000

1996-1997 National Science Foundation. “Campaign Finance and Political Representation.”
Amount: $50,000

1997 National Science Foundation. “Party Platforms: A Theoretical Investigation of
Party Competition Through Platform Choice.” Amount: $40,000

1997-1998 National Science Foundation. “The Legislative Connection in Congressional
Campaign Finance.” Amount: $150,000

1999-2000 MIT Provost Fund. “Districting and Representation.” Amount: $20,000.

1999-2002 Sloan Foundation. “Congressional Staff Seminar.” Amount: $156,000.

2000-2001 Carnegie Corporation. “The Caltech/MIT Voting Technology Project.”
Amount: $253,000.

2001-2002 Carnegie Corporation. “Dissemination of Voting Technology Information.”
Amount: $200,000.

2003-2005 National Science Foundation. “State Elections Data Project.” Amount: $256,000.

2003-2004 Carnegie Corporation. “Internet Voting.” Amount: $279,000.

2003-2005 Knight Foundation. “Accessibility and Security of Voting Systems.” Amount:
$450,000.

2006-2008 National Science Foundation, “Primary Election Data Project,” $186,000

2008-2009 Pew/JEHT. “Measuring Voting Problems in Primary Elections, A National
Survey.” Amount: $300,000

2008-2009 Pew/JEHT. “Comprehensive Assessment of the Quality of Voter Registration
Lists in the United States: A pilot study proposal” (with Alan Gerber).
Amount: $100,000.

2010-2011 National Science Foundation, “Cooperative Congressional Election Study,”
$360,000

2010-2012 Sloan Foundation, “Precinct-Level U. S. Election Data,” $240,000.

2012-2014 National Science Foundation, “Cooperative Congressional Election Study, 2010-
2012 Panel Study” $425,000

2012-2014 National Science Foundation, “2012 Cooperative Congressional Election
Study,” $475,000

2014-2016 National Science Foundation, “Cooperative Congressional Election Study, 2010-
2014 Panel Study” $510,000

2014-2016 National Science Foundation, “2014 Cooperative Congressional Election
Study,” $400,000

2016-2018 National Science Foundation, “2016 Cooperative Congressional Election
Study,” $485,000

2018-2020 National Science Foundation, “2018 Cooperative Congressional Election
Study,” $844,784.

2019-2022 National Science Foundation, RIDIR: “Collaborative Research: Analytic Tool
for Poststratification and small-area estimation for survey data.” $942,607

Professional Boards

Editor, Cambridge University Press Book Series, Political Economy of Institutions and
Decisions, 2006-2016

Member, Board of the Reuters International School of Journalism, Oxford University, 2007 to
present.

Member, Academic Advisory Board, Electoral Integrity Project, 2012 to present.

Contributing Editor, Boston Review, The State of the Nation.

Member, Board of Overseers, American National Election Studies, 1999 - 2013.

Associate Editor, Public Opinion Quarterly, 2012 to 2013.

Editorial Board of Harvard Data Science Review, 2018 to present.


Editorial Board of American Journal of Political Science, 2005 to 2009.
Editorial Board of Legislative Studies Quarterly, 2005 to 2010.
Editorial Board of Public Opinion Quarterly, 2006 to present.
Editorial Board of the Election Law Journal, 2002 to present.
Editorial Board of the Harvard International Journal of Press/Politics, 1996 to 2008.
Editorial Board of Business and Politics, 2002 to 2008.
Scientific Advisory Board, Polimetrix, 2004 to 2006.

Special Projects and Task Forces

Principal Investigator, Cooperative Congressional Election Study, 2005 – present.

CBS News Election Decision Desk, 2006-present

Co-Director, Caltech/MIT Voting Technology Project, 2000-2004.

Co-Organizer, MIT Seminar for Senior Congressional and Executive Staff, 1996-2007.

MIT Energy Innovation Study, 2009-2010.


MIT Energy Initiative, Steering Council, 2007-2008
MIT Coal Study, 2004-2006.
MIT Energy Research Council, 2005-2006.
MIT Nuclear Study, 2002-2004.
Harvard University Center on the Environment, Council, 2009-present

Expert Witness, Consultation, and Testimony

2001 Testimony on Election Administration, U. S. Senate Committee on Commerce.


2001 Testimony on Voting Equipment, U.S. House Committee on Science, Space,
and Technology
2001 Testimony on Voting Equipment, U.S. House Committee on House
Administration
2001 Testimony on Voting Equipment, Congressional Black Caucus
2002-2003 McConnell v. FEC, 540 U.S. 93 (2003), consultant to the Brennan Center.
2009 Amicus curiae brief with Professors Nathaniel Persily and Charles Stewart on
behalf of neither party to the U.S. Supreme Court in the case of Northwest
Austin Municipal Utility District Number One v. Holder, 557 U.S. 193 (2009).
2009 Testimony on Voter Registration, U. S. Senate Committee on Rules.
2011-2015 Perez v. Perry, U. S. District Court in the Western District of Texas (No. 5:11-
cv-00360). Expert witness on behalf of Rodriguez intervenors.
2011-2013 State of Texas v. United States, the U.S. District Court in the District of
Columbia (No. 1:11-cv-01303), expert witness on behalf of the Gonzales
intervenors.
2012-2013 State of Texas v. Holder, U.S. District Court in the District of Columbia (No.
1:12-cv-00128), expert witness on behalf of the United States.
2011-2012 Guy v. Miller in U.S. District Court for Nevada (No. 11-OC-00042-1B), expert
witness on behalf of the Guy plaintiffs.
2012 In re Senate Joint Resolution of Legislative Apportionment, Florida Supreme
Court (Nos. 2012-CA-412, 2012-CA-490), consultant for the Florida
Democratic Party.
2012-2014 Romo v. Detzner, Circuit Court of the Second Judicial Circuit in Florida (No.
2012 CA 412), expert witness on behalf of Romo plaintiffs.
2013-2014 LULAC v. Edwards Aquifer Authority, U.S. District Court for the Western
District of Texas, San Antonio Division (No. 5:12cv620-OLG), consultant and
expert witness on behalf of the City of San Antonio and San Antonio Water
District.
2013-2014 Veasey v. Perry, U. S. District Court for the Southern District of Texas, Corpus
Christi Division (No. 2:13-cv-00193), consultant and expert witness on behalf of
the United States Department of Justice.
2013-2015 Harris v. McCrory, U. S. District Court for the Middle District of North
Carolina (No. 1:2013cv00949), consultant and expert witness on behalf of the
Harris plaintiffs. (later named Cooper v. Harris)
2014 Amicus curiae brief, on behalf of neither party, Supreme Court of the United
States, Alabama Democratic Conference v. State of Alabama.
2014- 2016 Bethune-Hill v. Virginia State Board of Elections, U. S. District Court for the
Eastern District of Virginia (No. 3:2014cv00852), consultant and expert on
behalf of the Bethune-Hill plaintiffs.
2015 Amicus curiae brief in support of Appellees, Supreme Court of the United
States, Evenwel v. Abbott.
2016-2017 Perez v. Abbott, U. S. District Court in the Western District of Texas (No. 5:11-
cv-00360). Expert witness on behalf of Rodriguez intervenors.
2017-2018 Fish v. Kobach, U. S. District Court in the District of Kansas (No. 2:16-cv-
02105-JAR). Expert witness on behalf of the Fish plaintiffs.

Response to Report of William Briggs

Stephen Ansolabehere

December 4, 2020
I. Statement of Inquiry

1. I have been asked by counsel for the Democratic Party of Georgia, the DSCC, and the DCCC to evaluate the report of Dr. William Briggs. I am compensated at the rate of $550 an hour.

2. A brief summary of my high-level opinions and conclusions is below. Overall, however, based on my review, I find the estimates and analyses in Dr. Briggs’ report to be unreliable, and the analysis is not up to scientific standards of survey research, statistics and data science, or election analysis. There are substantial errors in the design of the survey, and errors and inconsistencies in the data used in the analysis that are sufficient to invalidate any calculations or estimates based on these data. The survey design and implementation fail to meet basic scientific standards of survey research and statistical analysis of surveys. And the interpretation of the data does not account for obvious and important features of absentee voting, including permanent absentee voters, who do not need to request ballots to receive them, and late, rejected, invalid, and spoiled absentee ballots. The errors in design, analysis, and interpretation of the data are of sufficient magnitude that there is no foundation for drawing any conclusions or inferences based on Dr. Briggs’ report.

II. Summary Assessment

3. In his report, Dr. Briggs evaluates survey data that was provided to him by a third party and assumes that “the respondents [to the survey] are representative and the data are accurate.”1 There is no indication in his report that any analysis was conducted by him, or by those who provided the data to him, to verify the correctness or integrity of the data provided, the quality of the survey, or the representativeness of the sample on which Dr. Briggs based his analysis. It is standard scientific practice in the field of survey research to give careful scrutiny to data before conducting any statistical analysis, including understanding the structure and wording of the survey questions, the sampling method and response rate, and the characteristics of the sample, such as demographic and behavioral indicators. It is never the practice to assume that survey data are representative and correct.

1 William M. Briggs, “An Analysis of Surveys Regarding Absentee Ballots Across Several States,” November 23, 2020, page 1.

4. In his report, Dr. Briggs defines two types of errors. People who received absentee ballots even though the survey indicates they did not request an absentee ballot are designated “Error #1.” People who returned absentee ballots even though the election office did not record an absentee vote from them are designated “Error #2.” Combined, Dr. Briggs calls these two errors “troublesome ballots.” Based on the information in Dr. Briggs’ report, it is my conclusion that neither assumed “error” is justified. The estimates of Error #1 and Error #2 he presents reflect defects in the design of the survey, fatal data errors evident in the survey toplines, calculation errors, and errors in the interpretation of the data. It is my professional judgment that none of the estimates and projections in his report are valid.

5. The design of the survey contaminates the data and any estimates, rendering them invalid. Specifically, in Question 1 of the survey, the surveyor asks to speak to a specific person. Some of the respondents are flagged as “Reached Target,” while others are flagged as “Uncertain” or “What is this about?” Both groups of people (Reached Target and Uncertain) are then asked Question 2, Did you request an absentee ballot? This is a serious survey design error because some or perhaps all of the people flagged as “Uncertain” are not the Target of the interview. As a result, the structure of the very beginning of the survey allows non-Target people to be treated as if they were the Target in the remaining questions. This flaw leads to the contamination of all results. It also means that, on its face, the sample is not representative of the population being studied because the set of people who responded to the survey includes a large number of respondents who were not supposed to be interviewed. This fact is evident in the tables that characterize the survey responses, called Topline Tables or “toplines,” that were attached to Dr. Briggs’ report.

6. The survey suffers from ambiguously worded questions, which introduce measurement error into any estimates it makes. Question 2 asks respondents whether they requested an absentee ballot. The question does not follow up and clarify different ways that people obtain absentee ballots or whether the ballot was actually received. Perhaps the largest category of voters for whom this question is vexing are those who are registered to receive ballots without requesting them, called permanent absentee and early voters or rollover absentee voters (PEVs). A PEV is sent an absentee ballot automatically, without needing to request a ballot for a particular election. Four of the states in question (Arizona, Michigan, Pennsylvania, and Wisconsin) allow permanent absentee voting for all voters, and Georgia allows rollover absentee voting each election cycle for those 65 years of age or older, military voters, and incapacitated voters. For these voters, both “yes” and “no” may be viewed as correct answers to the question of whether they requested an absentee ballot. A respondent who is a PEV might respond yes because they did sign up for that status, or they might as correctly respond no because they did not have to request a ballot in order to have one sent to them. The questionnaire provides no way to clarify such cases; there is no follow-up question to disambiguate permanent absentee voters from others. This is just one example of the substantial problems with the wording and structure of Question 2.

7. The wording of Question 3 is also problematic. First, it does not ascertain whether the ballot was mailed back in a timely manner so as to be included in the record of ballots cast. Some or possibly all of the ballots at issue are late ballots and thus may not be included in the absentee vote record. Second, Question 3 asks whether someone voted. As is well known among political scientists and survey researchers, survey questions asking whether someone voted are notoriously subject to social desirability biases that lead to inflation in the estimated number of voters.

8. There are also errors and inconsistencies in the survey data, as is evident in the summary of the survey appended to Dr. Briggs’ report. The appended summary includes a series of tables, called Topline Tables (“toplines”), for each state. The toplines provide basic statistics about the survey reported for each question, as well as the questions themselves and the response categories for each. There are errors in the spreadsheet of toplines that indicate data inconsistencies. For example, responses to Question 1 for the state of Wisconsin sum to more than the reported total number of cases. In the tables for Arizona, Michigan, Pennsylvania, and Wisconsin, the number of respondents to Question 1 who are supposed to be asked Question 2 does not sum to the number of respondents to Question 2. In two cases, there are too many respondents to Question 2 (Arizona and Michigan). And in two cases, there are too few respondents to Question 2 (Pennsylvania and Wisconsin). These errors infect and bias responses to Q2 and Q3. Generally, such errors indicate fundamental problems with the management of the survey and the databases generated by the survey. In standard survey practice, the presence of discrepancies in these Topline Tables indicates fatal flaws in the data that prompt researchers to clarify the problems and possibly discard the data altogether. Dr. Briggs’ report makes no mention of these inconsistencies and errors and assumes that the underlying data are correct. These errors and inconsistencies reveal that the data are not correct.
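
For illustration, the consistency checks described in this paragraph are mechanical and easy to automate. The following is a minimal Python sketch of such a check; all counts and field names are hypothetical, not taken from the actual topline spreadsheets:

    # Minimal sketch of the topline consistency checks described above.
    # All counts and field names are hypothetical, not from the actual toplines.
    def check_topline(total_cases, q1_counts, q1_routed_to_q2, q2_total):
        problems = []
        if sum(q1_counts.values()) > total_cases:
            problems.append("Q1 responses exceed the reported total number of cases")
        if q1_routed_to_q2 != q2_total:
            problems.append(f"{q1_routed_to_q2} routed to Q2, but {q2_total} answered Q2")
        return problems

    # Hypothetical example: Q1 responses oversum and Q2 has extra respondents,
    # the two kinds of discrepancy described in this paragraph.
    print(check_topline(
        total_cases=1000,
        q1_counts={"Reached Target": 600, "Uncertain": 100,
                   "Refused": 200, "Hang up": 150},
        q1_routed_to_q2=700,
        q2_total=745,
    ))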

9. The survey has extremely low response rates. The highest response rate is 1.5 percent (in Pennsylvania). The other four states have response rates of fractions of one percent, meaning that over 99 percent of people whom the firm surveyed in the target group could not be contacted, refused to participate, or were not in fact the correct person. High non-response rates generally create biases in survey results because the samples are rarely representative of the population under study. Surveys with as low a response rate as here are not accepted in scientific publications, except on rare occasions and with proper analyses that ensure the respondents are in fact representative. When researchers have low response rates, they must offer affirmative proof of representativeness or attempt to correct for biases. Neither has been done here.
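
For illustration, one standard form of such affirmative proof is a comparison of respondent characteristics against known benchmarks for the target population. A minimal Python sketch, using entirely hypothetical shares; real benchmarks would come from the voter file or Census data:

    # Minimal sketch of a representativeness check; all shares are hypothetical.
    sample_share = {"age 65+": 0.46, "female": 0.58}       # survey respondents
    population_share = {"age 65+": 0.22, "female": 0.52}   # target population
    for group, s in sample_share.items():
        p = population_share[group]
        status = "flag: possible non-response bias" if abs(s - p) > 0.05 else "ok"
        print(f"{group}: sample {s:.0%} vs. population {p:.0%} ({status})")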

10. In performing his analysis, Dr. Briggs extrapolates from the poorly designed survey with an extraordinarily high non-response rate and evident data errors and inconsistencies. The high non-response rate, data errors, and survey design flaws are all evident in the Topline Tables that Dr. Briggs appended to his report. These data should not have been relied on for this analysis given that they are not correct and that the respondents to the survey are highly unlikely to represent the population in question. The data, and Dr. Briggs’ interpretation of it, are not up to scientific standards.

11. Dr. Briggs’ interpretation that the data evince voting “errors” and “troublesome ballots” fails to account for the rules and realities of absentee voting. First, Dr. Briggs designates as Error #1 absentee ballots that were received by voters but were not “requested.” This interpretation fails to consider permanent absentee voters, who receive ballots without requesting them. All five states in the report allow for permanent absentee voting for some or all registrants. Second, Dr. Briggs designates as Error #2 ballots that were sent by voters but not recorded. This interpretation fails to account for late, undeliverable, rejected, and spoiled ballots. Most jurisdictions, for example, do not record late ballots in the tally of returned absentee ballots. The results in his analysis, if they are real, are likely the consequence of the normal practice of absentee voting.
III. Qualifications

12. I am the Frank G. Thompson Professor of Government in the Department of Government at Harvard University in Cambridge, MA. Formerly, I was an Assistant Professor at the University of California, Los Angeles, and I was Professor of Political Science at the Massachusetts Institute of Technology, where I held the Elting R. Morison Chair and served as Associate Head of the Department of Political Science. I am the Principal Investigator of the Cooperative Congressional Election Study (CCES), a survey research consortium of over 250 faculty and student researchers at more than 50 universities; directed the Caltech/MIT Voting Technology Project from its inception in 2000 through 2004; and served on the Board of Overseers of the American National Election Study from 1999 to 2013. I am an election analyst for and consultant to CBS News’ Election Night Decision Desk. I am a member of the American Academy of Arts and Sciences (inducted in 2007). My curriculum vitae is attached to this report as Appendix B.

13. I have worked as a consultant to the Brennan Center in the case of McConnell v. FEC, 540 U.S. 93 (2003). I have testified before the U.S. Senate Committee on Rules, the U.S. Senate Committee on Commerce, the U.S. House Committee on Science, Space, and Technology, the U.S. House Committee on House Administration, and the Congressional Black Caucus on matters of election administration in the United States. I filed an amicus brief with Professors Nathaniel Persily and Charles Stewart on behalf of neither party to the U.S. Supreme Court in the case of Northwest Austin Municipal Utility District Number One v. Holder, 557 U.S. 193 (2009), and an amicus brief with Professor Nathaniel Persily and others in the case of Evenwel v. Abbott, 136 S. Ct. 1120 (2016). I have served as a testifying expert for the Gonzales intervenors in State of Texas v. United States before the U.S. District Court in the District of Columbia (No. 1:11-cv-01303); the Rodriguez plaintiffs in Perez v. Perry before the U. S. District Court in the Western District of Texas (No. 5:11-cv-00360); the San Antonio Water District intervenor in LULAC v. Edwards Aquifer Authority in the U.S. District Court for the Western District of Texas, San Antonio Division (No. 5:12cv620-OLG); the Department of Justice in State of Texas v. Holder before the U.S. District Court in the District of Columbia (No. 1:12-cv-00128); the Guy plaintiffs in Guy v. Miller in U.S. District Court for Nevada (No. 11-OC-00042-1B); the Florida Democratic Party in In re Senate Joint Resolution of Legislative Apportionment in the Florida Supreme Court (Nos. 2012-CA-412, 2012-CA-490); the Romo plaintiffs in Romo v. Detzner in the Circuit Court of the Second Judicial Circuit in Florida (No. 2012 CA 412); the Department of Justice in Veasey v. Perry before the U.S. District Court for the Southern District of Texas, Corpus Christi Division (No. 2:13cv00193); the Harris plaintiffs in Harris v. McCrory in the U. S. District Court for the Middle District of North Carolina (No. 1:2013cv00949); the Bethune-Hill plaintiffs in Bethune-Hill v. Virginia State Board of Elections in the U.S. District Court for the Eastern District of Virginia (No. 3:2014cv00852); the Fish plaintiffs in Fish v. Kobach in the U.S. District Court for the District of Kansas (No. 2:16-cv-02105-JAR); and intervenors in Voto Latino, et al. v. Hobbs in the U.S. District Court for the District of Arizona (No. 2:19-cv-05685-DWL). I served as an expert witness and filed an Affidavit in the North Carolina State Board of Elections hearings regarding absentee ballot fraud in the 2018 election for Congressional District 9 in North Carolina.

14. My areas of expertise include American government—with particular expertise in electoral politics, representation, and public opinion—as well as statistical methods in social sciences and survey research methods. I have authored numerous scholarly works on voting behavior and elections, the application of statistical methods in social sciences, legislative politics and representation, and distributive politics. This scholarship includes articles in such academic journals as the Journal of the Royal Statistical Society, American Political Science Review, American Economic Review, the American Journal of Political Science, Legislative Studies Quarterly, Quarterly Journal of Political Science, Electoral Studies, and Political Analysis. I have published articles on election law issues in the Harvard Law Review, Texas Law Review, Columbia Law Review, New York University Annual Survey of American Law, and Election Law Journal, for which I am a member of the editorial board. I am associate editor of the Harvard Data Science Review and have served as associate editor of the Public Opinion Quarterly. I have coauthored three scholarly books on electoral politics in the United States: The End of Inequality: Baker v. Carr and the Transformation of American Politics; Going Negative: How Political Advertising Shrinks and Polarizes the Electorate; and The Media Game: American Politics in the Media Age. I am coauthor, with Benjamin Ginsberg and Ken Shepsle, of American Government: Power and Purpose.

IV. Sources

15. I have relied on the report of Dr. William Briggs, especially the appended Topline Tables.

16. I have relied on the Election Assistance Commission’s “Election Administration and Voting Survey (EAVS)” for 2018: https://www.eac.gov/research-and-data/studies-and-reports. I present data from 2018 because it is the most recent federal election for which data on absentee and permanent absentee voting are available. The 2018 data are instructive about the magnitude of permanent absentee voters and of the magnitude of unreturned, late, rejected, and spoiled absentee ballots. The 2020 data are not yet reported.
V. Findings

18. In my professional judgment there are fundamental flaws in the survey design and survey data that Dr. Briggs relied on, as well as in his interpretation of answers to the survey questions. These flaws create biases in his estimates and analyses of the survey results. The survey is likely highly unrepresentative because it has a response rate of less than 1 percent. The survey data are contaminated by respondents who should not have been included in the survey. The basic data in the Topline summaries of the data do not add up, indicating fatal flaws in the implementation of the survey. These flaws in the survey design, implementation, and data mean that the respondents to the survey cannot be assumed to be representative of the population being studied, and the survey data cannot be assumed to be accurate.

A. Critique of Interpretation

i. The survey data and their interpretation do not account for Permanent Absentee and Early Voters (PEV).

19. The analysis of Question 2 is used to estimate the number of people who received but did not request an absentee ballot. Briggs calls this Error #1.

20. The interpretation of these data as an Error in balloting does not account for the presence of a large number of Permanent Absentee and Early Voters (PEVs) in Arizona, Michigan, Pennsylvania, and Wisconsin. Georgia automatically mails ballots to voters who qualify for “rollover” ballots—people who are over 65, disabled, or in the military, and who sign up annually to have ballots automatically sent to them. I consider rollover ballots to be a form of PEV, but the voter does need to sign up each year.
21. PEVs are automatically sent their absentee ballots. They do not need to request that a ballot be sent for a particular election.

22. There are a sizable number of PEVs in the five states under study. Table 1 presents data on the number of absentee ballots sent in 2018 and the number of permanent absentee ballots sent to voters in Arizona, Georgia (rollover ballots), Michigan, Pennsylvania, and Wisconsin. The number of permanent absentee ballots sent in Arizona, Michigan, and Wisconsin far exceeds the estimated Error #1 in the first table in Briggs’ report. The EAC reports no data on permanent absentee ballots for Georgia in 2018. Those data cover 2018 and are presented to indicate the likely magnitude of PEVs in the states in 2020.

23. There were at least 582,000 “rollover” ballots in Georgia in 2020.2 This figure far exceeds the total number of absentee ballots that Dr. Briggs classifies as Error #1—those who received ballots without requesting them.

24. The survey makes no effort to distinguish PEVs from other sorts of absentee voters. Not accounting for PEVs is a serious error in survey design and interpretation of the survey numbers.

Table 1. Permanent Absentee Voters in Arizona, Georgia, Michigan, Pennsylvania, and Wisconsin in 2018

State          Total Absentee   Permanent Absentee Ballots Sent       Permanent Absentee
               Ballots Sent     (i.e., ballots sent automatically     Ballots as a Percent
                                without a specific ballot request)    of Total

Arizona        2,672,384        2,545,198                             95.2%
Georgia        281,490          *                                     *
Michigan       1,123,415        549,894                               48.9%
Pennsylvania   216,575          6,340                                 2.9%
Wisconsin      168,788          54,113                                32.1%

Source: EAC, EAVS 2018.
Note: * means no data reported.

2 Stephen Fowler, “Nearly 800,000 Georgians Have Already Requested Absentee Ballots for November,” GA Today, gpb.org, September 2, 2020. https://www.gpb.org/news/2020/09/02/nearly-800000-georgians-have-already-requested-absentee-ballots-for-november
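
The final column of Table 1 is a simple ratio of the two count columns. A minimal Python sketch reproducing it from the EAVS 2018 figures above, for the four states with reported data:

    # Permanent absentee ballots as a share of all absentee ballots sent
    # (EAVS 2018 figures from Table 1; Georgia omitted, no data reported).
    ballots_sent = {
        "Arizona": (2_672_384, 2_545_198),
        "Michigan": (1_123_415, 549_894),
        "Pennsylvania": (216_575, 6_340),
        "Wisconsin": (168_788, 54_113),
    }
    for state, (total, permanent) in ballots_sent.items():
        print(f"{state}: {100 * permanent / total:.1f}% sent automatically")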

ii. The interpretation of Question 3 fails to account for the proper handling of late, invalid, and spoiled absentee ballots by Local Election Offices.

25. The analysis of Question 3 is used to estimate the number of people who stated they returned an absentee ballot, but for whom no vote was recorded. Dr. Briggs calls this Error #2.

26. His interpretation does not account for absentee ballots that are in fact not received or counted by election officers because the ballots are not returned by the postal system, are spoiled, are returned late, or are rejected. Such ballots are the obvious explanation for the data observed. No effort in the survey or the analysis is made to ascertain the likelihood that a voter cast a late or invalid absentee ballot. As noted below, there are other problems with this question that make it impossible to take the Error #2 estimates at face value.
27. It is my experience researching elections over the past two decades that “uncounted” absentees are a normal part of the election process. Table 2 presents counts of rejected, late, undelivered, and voided absentee ballots in Arizona, Georgia, Michigan, Pennsylvania, and Wisconsin for 2018, the most recent federal election for which systematic data on absentee voting are available. An undeliverable absentee ballot is one that was returned to the election office as not being deliverable to the address on the voter registration lists. The final column presents the number of sent absentee ballots not received by voters and for which the status of the ballot is unknown. It is likely these ballots were simply not returned by voters or were lost or delayed in the US Postal System. Delays in the postal system were a particular concern in 2020, as there were widespread reports of staffing problems during COVID for USPS, delays in mail delivery, and declines in the rate of on-time delivery.3

28. The magnitude of ballots that are returned to the office but are rejected, spoiled, or late is quite large. The sum of the columns reflecting these numbers is comparable in magnitude to that of “Error #2” in Dr. Briggs’ report. These figures are not definitive of the numbers in 2020, which have not yet been reported. Rather, they demonstrate the fact that there are sound, documented administrative reasons that returned absentee ballots are not recorded as having been voted, especially tardiness, spoilage, and rejection for lack of signatures, valid envelopes, and the like. These are ballots that are not allowed to be counted under law, and they are comparable in magnitude to the estimates of Error #2 reported by Dr. Briggs for each state.

Table 2. Rejected, Undelivered, Voided, and Late Absentees in Arizona, Georgia, Michigan, Pennsylvania, and Wisconsin in 2018

State          Rejected   Undeliverable   Spoiled/Voided   Late       Status
               Absentee   Absentee        Absentee         Absentee   Unknown
               Ballots    Ballots         Ballots          Ballots

Arizona        8,567      102,896         27,804           2,515      642,210
Georgia        7,512      2,322           252              3,525      36,255
Michigan       6,013      791             19,679           2,207      41,120
Pennsylvania   8,714      *               *                8,162      20,622
Wisconsin      2,517      1,718           2,794            1,445      12,407

Source: EAC, EAVS 2018.
Note: * means no data reported.

3 Hailey Fuchs, “Some Regions Still Experience Slow Delivery of Mail Ballots,” New York Times, November 3, 2020, Section A, Page 23. https://www.nytimes.com/2020/11/02/us/politics/mail-ballot-usps.html
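
The comparison in paragraph 28 is again simple arithmetic: summing, for each state, the returned ballots that cannot lawfully be counted. A minimal Python sketch over the Table 2 figures; the comparison against each state's Error #2 value would use the estimates in Dr. Briggs' report, which are not reproduced here:

    # Returned-but-not-countable ballots per state from Table 2 (EAVS 2018):
    # rejected + spoiled/voided + late. Pennsylvania's spoiled/voided count
    # is unreported ("*") and is treated as zero here.
    uncountable = {
        "Arizona": (8_567, 27_804, 2_515),
        "Georgia": (7_512, 252, 3_525),
        "Michigan": (6_013, 19_679, 2_207),
        "Pennsylvania": (8_714, 0, 8_162),
        "Wisconsin": (2_517, 2_794, 1_445),
    }
    for state, (rejected, spoiled, late) in uncountable.items():
        print(f"{state}: {rejected + spoiled + late:,} returned ballots not countable")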

B. Critique of Survey Design

29. Dr. Briggs offers no assessment of the design of the survey that generated the data that he presents. Rather, he assumes that the data are accurate. Also, there is no report of the survey design, beyond the information embedded in the Topline Tables. It would be standard for any scientifically sound report of survey data to describe fully the survey instrument used in the study and to make it publicly available.

30. It is my understanding that Matthew Braynard designed and conducted these surveys. The methodology he used is described in his expert report, submitted December 4, 2020.

i. The surveys have unacceptably high non-response rates.

31. The response rate to the survey is measured as the number of people who answered the first substantive question (Q2) in the survey divided by the number of people whom the surveyor sought to contact. The response rate is less than 1 percent in Arizona, Georgia, Michigan, and Wisconsin, and it is 1.5 percent in Pennsylvania. These response rates are extremely low and a critical threat to any inferences one might draw from the data.

32. In his report, Mr. Braynard explains that the survey attempts to interview all registered voters who were recorded as requesting but not returning an absentee ballot. Mr. Braynard’s firm attempted to match phone numbers to records of registered voters in each of the states and then attempted to interview all the people associated with each registration record of interest.

33. The appendix to Dr. Briggs’ report presents a set of tables, the first of which is for the state of Georgia and is titled: Unreturned_Absentee Live ID Topline Tables. Each of the five states that Dr. Briggs studies has similar Topline Tables. It is evident from the toplines that there are significant shortcomings in the ability of the survey firm to match phone numbers to registration records. The field called “Data Loads” corresponds to the number of matched phone numbers that were loaded into the survey system to be called. They are only a fraction of the population of all Unreturned Absentees.

34. The toplines also list Completes. These are phone numbers for which an interview commenced, an answering machine was reached, or a returned call was requested. For example, in the topline table for Georgia, the first three rows of the first table (QA5, Answering Machines, and up/RC) sum to 15,179, which is the number of Completes listed on the top of the table.

35. There is no description in Dr. Briggs’ report of the generation of Data Loads or the methodology used for determining matches of phone numbers to registration records. Matching is a difficult process. Mismatches, either false positives or false negatives, will generate errors in surveys. Incorrectly matched phone numbers will lead the surveyor to interview the wrong person (a false positive), and errors in matching may lead the researcher to exclude the person from the survey when in fact a valid number could have been found (a false negative).4

36. The percent of registered voters with Unreturned Absentees who were recorded as “Completes” in the Toplines is 11 percent or less in every state. The Completes as a percent of Unreturned Absentee Ballots is the middle column of Table 3. The rate of Completes is as low as 1 percent (in Arizona) and as high as 11 percent (Georgia). Thus, about 90 percent of the potential respondents to the survey were lost even before the survey began. There is no analysis as to why the survey failed to identify a higher number of valid phone numbers for the people the researchers sought to interview, and there is no attempt to ensure that the people for whom a valid phone number could be found are similar to those for whom a valid phone number could not be found.

37. Once the survey commences, there is first a screener question to determine whether the person interviewed should continue with the interview. That is Question 1. Question 2 is the first question of interest in Dr. Briggs’ analysis. It asks, “Did you request an Absentee Ballot in the State of <state name>?” Respondents could answer Yes, No, some other answer, Refuse to answer, or Hang up.

38. The response rate to the survey is the number of valid responses to Question 2, i.e., the total number of responses to the question less the number of people who refused to answer or hung up, divided by the number of people the researchers sought to interview. The second column of Table 3 is the percent of people the researchers sought to interview (all Unreturned Absentee Ballots) who ultimately gave a valid response to Question 2.

39. The response rates to this survey are perilously low. Pennsylvania has the highest response rate of 1.5 percent. Michigan comes next at .8 percent (eight-tenths of one percent); Arizona has a response rate of .6 percent (six-tenths of one percent); and Georgia and Wisconsin each have response rates of .4 percent (four-tenths of one percent).

4 Alan S. Gerber and Donald P. Green, “Can Registration-Based Sampling Improve the Accuracy of Midterm Election Forecasts?” Public Opinion Quarterly 70 (2006): 197-223, esp. page 202.

40. Once the entire survey process had been completed, over 99 percent of people whom the researcher sought to interview were not interviewed in Arizona, Georgia, Michigan, and Wisconsin. In Pennsylvania, 98.5 percent of those the researchers set out to study were ultimately not included in the study for one reason or another.

41. This is an extremely high non-response rate. In most disciplines of study that I am familiar with, these response rates would indicate that the underlying sample on which a survey relied is not scientifically acceptable or reliable. For example, I am an associate editor of the Harvard Data Science Review, which broadly covers fields of statistics and data science, and specialty fields—such as political science, public opinion, survey methodology, and economics—in which I have published. Papers with non-response rates as high as those in Dr. Briggs’ report are rejected on their face as not plausibly valid studies.

42. Dr. Briggs’ assumption that those who responded to the question are representative of the relevant population under study (i.e., the other 99 percent of people who could not or would not participate in the survey) is heroic. When surveys have high non-response rates, it is standard practice to analyze information about the sample and the target population, such as demographic characteristics or behavioral and attitudinal statistics, to confirm that the assumption of representativeness of a sample can be maintained. In fact, this is done even when response rates are quite high. When the response rates are very low, however, such an analysis is necessary in order to determine whether there is any scientific value to the survey. No such analysis is offered here.
Table 3. Response Rates to Surveys Reported by Dr. William Briggs

State          “Completes” /                  Valid Q2 Responses /
               Unreturned Absentee Ballots    Unreturned Absentee Ballots

Arizona        .011                           .006
Georgia        .110                           .004
Michigan       .027                           .008
Pennsylvania   .109                           .015
Wisconsin      .048                           .004

Note: Unreturned Absentee Ballots is the number of registered voters the survey sought to reach; see Table 1 of Briggs’ report. “Completes” is the number of “complete” contacts in the first part of each state’s topline report. Valid Q2 Responses is the number of respondents who answered Question 2 and did not Refuse or Hang up.
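
The ratios in Table 3 follow directly from the definitions in paragraphs 31 and 38. A minimal Python sketch of the calculation; the counts in the example are hypothetical placeholders, since the underlying topline counts are not reproduced in this report:

    # Response rate per paragraph 38: valid Q2 answers (total answers less
    # refusals and hang-ups) divided by all records the survey sought to reach.
    # The example counts are hypothetical placeholders.
    def response_rate(q2_answers, q2_refusals, q2_hangups, records_sought):
        valid = q2_answers - q2_refusals - q2_hangups
        return valid / records_sought

    print(f"{response_rate(2_100, 600, 400, 275_000):.3f}")  # -> 0.004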

ii. The survey has an unacceptably high interview breakoff rate.

43. The breakoff rate in surveys is the rate at which people who start the survey break off, for whatever reason. The interview may be stopped by the respondent or by the surveyor. In the toplines, these are indicated as refusals and hang ups. The breakoff rate is measured as one minus the number of people answering the last question in the survey divided by the number of Completes. The opposite of the breakoff rate is the survey completion rate.
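
A minimal Python sketch of this calculation. The Completes figure is Georgia’s, from paragraph 34; the count answering the last question is an approximate figure implied by the 98.8 percent Georgia breakoff rate reported in paragraph 44, used here only for illustration:

    # Breakoff rate: the share of Completes who never reach the last question.
    def breakoff_rate(completes, answered_last_question):
        return 1 - answered_last_question / completes

    # 15,179 Completes (Georgia, paragraph 34); ~182 reaching the last
    # question is an implied, approximate figure, not a reported count.
    print(f"{breakoff_rate(15_179, 182):.1%}")  # -> 98.8%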

44. The breakoff rates are extremely high in these surveys. The breakoff rates are 87.8 percent in Arizona, 98.8 percent in Georgia, 93.5 percent in Michigan, 95.4 percent in Pennsylvania, and 90.6 percent in Wisconsin. In Georgia the breakoff rate of 98.8 percent means that once the survey began, only 1.2 percent of respondents made it to the end.

45. The breakoff rate is a quality control indicator. Very high breakoff rates, such as those observed here, are signs of quality control problems with the survey itself, such as hostile or poorly trained interviewers or poorly worded questions. Any experienced survey researcher uses high breakoff rates to catch quality control failures. The surveys here have extremely high rates of survey failures, which indicates the data produced are of very poor quality.

iii. The screening question improperly allows people to take the survey who should not.

46. A second substantial flaw in the survey is that the instructions allow people who are not affirmatively determined to be the correct person to take the survey.

47. Past research has documented that phone surveys using registered voter lists are often answered by someone other than the person who was listed on the registered voter file. The two most common problems are that the wrong number was matched to the voter list and that someone other than the person the researcher sought to speak with answered the phone. The latter occurs most often with landlines.5

48. Question 1 (Q1) of the survey asks, “May I please speak to <lead on screen>?” “Lead on screen” is the name from the voter registration list that is linked to the phone number the survey has dialed. Responses to Q1 are listed as reached target, other/uncertain, refused, and hang up. For example, in the first table (Georgia), the responses are “Reached Target [Go to Q2]” and “[Go to Q2],” without further explanation. For other states, the toplines describe this second response category as “Uncertain” or “What’s this about?” Importantly, cases classified as “Reached Target” and as “Uncertain” are both instructed to “Go to Q2.”

5 Pew Research Center, “Comparing Survey Sampling Strategies: Random-Digit Dialing vs. Voter Files,” 2018. https://www.pewresearch.org/methods/2018/10/09/comparing-survey-sampling-strategies-random-digit-dial-vs-voter-files/, see pages 25-26.

49. This is an error in the branching design of the survey. People who are not affirmatively identified as the correct person for the interview are allowed to answer the remaining questions in the survey. For example, responses to Questions 2 and 3 evince that spouses and other family members were asked Questions 2 and 3 even though they were not the people whose absentee voting records were in question.

50. A significant percent and number of respondents who are listed as not giving an affirmative answer to Question 1 are in fact kept in the survey and asked Question 2. Table 4 shows the percent and number of respondents who were asked Questions 2 and 3 inappropriately because they were not affirmatively identified as “the target.” This error in the survey design affects 13 percent of cases in Arizona and Michigan, 16 percent of cases in Pennsylvania, and 25 percent of cases in Georgia. It is not possible to calculate the percent in Wisconsin because the topline report pools the “Reached Target” and “Uncertain” in a single response category.

51. This survey branching error contaminates all the results and is of sufficient magnitude to alter the results significantly, perhaps explaining away all the survey findings entirely. The number of respondents in Georgia who were improperly asked Question 2 is larger than the number of respondents who said that they did not request an absentee ballot. In Pennsylvania, it explains most of the people who did not request an absentee ballot. In Arizona and Michigan, it can explain half of those who did not request an absentee ballot.

52. As shown in Part D, this branching error in the survey design can completely account

for the number of people who answered that they did not request an absentee ballot in the State of

Georgia. In the survey data for Georgia, there were 255 people who were classified as “Uncertain”

in Question 1 and 142 respondents who answered that they did not request a ballot.

53. These figures and aspects of the survey design show that the data for Q2 and Q3 were

contaminated by improper branching from Q1. This information was available to, and even

reported by, Dr. Briggs, but he did not take it into account in calculating or interpreting his Error

#1 and Error #2.

Table 4. Respondents Who Were Not the Target of the Survey Were Allowed to Answer the Survey

State            Percent and number of Q1 respondents who were NOT
                 the target registrant, but who were asked Q2

Arizona          12.6% [N=335]
Georgia          25.0% [N=255]
Michigan         12.9% [N=142]
Pennsylvania     15.7% [N=422]
Wisconsin        *

* The Topline Table for Wisconsin pools respondents who were coded as “Reached Target,” “Uncertain,” and “What is this about?” It is not possible to identify how many Wisconsin respondents were inappropriately asked Question 2.
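As an illustrative cross-check of Table 4 (a sketch using only the Georgia counts reported in Paragraph 66 below; the Q1 denominators for the other states are not fully recoverable from the toplines), the share of respondents who were not affirmatively identified but were still routed to Q2 can be recomputed directly:

    # Sketch: share of Georgia Q1 respondents routed to Q2 without being
    # affirmatively identified as the target registrant (counts from
    # paragraph 66 below).
    reached_target = 767
    uncertain = 255

    routed_to_q2 = reached_target + uncertain      # 1,022
    share_not_target = uncertain / routed_to_q2
    print(f"{uncertain} of {routed_to_q2} ({share_not_target:.1%})")
    # prints: 255 of 1022 (25.0%), matching the Georgia row of Table 4.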

iv. Question 2 (did you request an absentee ballot) does not ascertain

Permanent Absentee Voters or disambiguate Permanent Absentee Voters

from Other Voters.

54. Question 2 is not sufficiently clear and specific to answer the question the researcher

wants to answer. The survey does not ascertain whether respondents are permanent absentee voters

or have a designated person who may request a ballot on their behalf, even though Arizona,

Georgia, Michigan, Pennsylvania, and Wisconsin allow for some or all voters to be permanent

absentee voters. Permanent absentee voters do not need to request a ballot in order for one to be

sent to them for a specific election.

55. The presence of permanent absentee voters in the registration system creates ambiguity

in the interpretation of the question. Some permanent absentee voters may answer yes because

they registered for permanent absentee status, while others may say no because they do not need

to request a ballot to receive one. The ambiguity of Question 2, and the failure to disambiguate

permanent absentee voters from other absentee voters in the responses, introduces measurement

error in the survey. Additional survey questions are required to distinguish different types of

absentee voters.

56. The measurement error will create errors in the survey that are of the form of Error #1

described by Dr. Briggs. These “errors” reflect cases that would be wrongly identified as people

who were erroneously sent a ballot, even though they did not request one. In fact, they did not need

to request one. The survey data cannot be used to draw the conclusion that some survey

respondents received an absentee ballot in error.

v. The survey cannot determine whether there was an error in handling of

the ballot.

57. Dr. Briggs describes a second sort of error in absentee balloting that arises because

people say they returned a ballot, but no absentee ballot is received or recorded by the election

office.

58. It is my experience working with election administrators and researching election

administration as part of the Caltech/MIT Voting Technology Project that many absentee ballots

are not recorded or counted because they are not received on time or are not properly prepared and

submitted. Late absentees are not accepted, and they are usually not recorded in the tally of ballots

received. Ballots that are spoiled, unsigned, returned in the incorrect envelopes, or rejected for some other

reason are not counted. The fact that there is no record of a vote or of a received absentee ballot is

not necessarily evidence of an error in the handling of the ballot. Instead, it may be evidence of

correct treatment of ballots by the election officials in accordance with state laws.

59. Question 3 does not ascertain when the ballot was mailed back or how it was mailed.

There is no follow-up question asking when the ballot was sent, whether it was signed, whether it

was witnessed (in states where that is a requirement), and in what envelope it was sent. In short,

the question does not allow one to determine whether or not the ballot was returned in compliance

with state laws, and thus whether there was or was not an error in handling the ballot. It is incorrect

for Dr. Briggs to conclude that ballots that were not received or recorded are in fact errors.

vi. Question 3 is subject to memory errors and social desirability bias.

60. Question 3 asks people whether they voted. Specifically, it asks people who said they

requested an absentee ballot whether they returned an absentee ballot—that is, whether they voted

that ballot.

61. It has long been understood in political science that respondents to surveys over-report voting in elections. Typically, the overstatement is approximately 10 to 20 percentage points. That is, if 65 percent of people in a sample actually voted, the reported vote rates in surveys are usually around 75 to 85 percent. The most commonly identified sorts of biases are memory errors and social desirability bias in questions asking people whether they voted. 6 In the context of this survey, such biases would lead to overstatement of Yes responses to Question 3.

6 See, for example, Allyson L. Holbrook and Jon A. Krosnick, “Social Desirability Bias in Voter Turnout Reports: Tests Using the Item Count Technique,” Public Opinion Quarterly 74 (2010): 37-67. See also Stephen Ansolabehere and Eitan Hersh, “Validation: What Big Data Reveal About Survey Misreporting and the Real Electorate,” Political Analysis 20 (2012): 437-459.

C. Critique of the Survey Databases and Data Analyses

62. There are obvious data errors and inconsistencies revealed in the toplines that are

appended to Dr. Briggs’ report. As I understand his report, the toplines are based on the data and

reports that he relied on in making his estimates and projections. Dr. Briggs states that he assumes

“the data is accurate.” I have examined the accounting in the Topline Tables and discovered that

the data do not add up. A routine analysis to check the consistency and integrity of data reported

in the toplines is standard practice in the survey research field. I have performed such a check, and

it reveals that the data lack integrity and are not correct. They should not be assumed to be accurate.

i. The figures on responses to Q1 simply do not add up for the State of

Wisconsin.

63. The Topline table for Wisconsin reports that 2,261 people were coded as either “A-

Reached Target” or “B-What Is This About?/Uncertain.” An additional 1,677 respondents were

coded as “X=Refused.” No other response categories are reported. The sum of 1,677 and 2,261 is

3,938. The bottom of the table reports the “Sum of All Responses” is 3,495. The rows clearly do

not total to the reported bottom line.
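A minimal version of this integrity check (a sketch using the Wisconsin figures just quoted) is:

    # Sketch: integrity check on the Wisconsin topline for Question 1,
    # using the figures quoted in paragraph 63.
    reached_or_uncertain = 2_261   # "A-Reached Target" + "B-...Uncertain"
    refused = 1_677                # "X=Refused"
    reported_total = 3_495         # the table's "Sum of All Responses"

    row_total = reached_or_uncertain + refused   # 3,938
    print(f"rows total {row_total}, reported {reported_total}, "
          f"discrepancy {row_total - reported_total}")
    # prints: rows total 3938, reported 3495, discrepancy 443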

64. All other survey questions and calculations for this table branch off of Question 1. Therefore, errors in this question infect responses to Questions 2 and 3 and make it unacceptable for anyone to rely on the table to form conclusions. The branching error is a red flag for survey researchers indicating lack of data integrity. It should have signaled to the analyst, in this instance Dr. Briggs, that there is a problem with the programs that generated the data for this and other states. This red flag was the first one indicating to me that the data cannot be assumed to be accurate.

ii. The survey data for Questions 1 and 2 cannot be reconciled.

65. I have examined the accounting across questions to make sure the number of cases that

are indicated as passing from Question 1 to Question 2 is the same as the number of cases reported

for Question 2. For Georgia, the data across questions are consistent, but for Arizona, Michigan,

Pennsylvania, and Wisconsin there are substantial and idiosyncratic discrepancies. The accounting

for Q1 and Q2 is shown in Table 5.

66. First, consider Georgia. Question 1 has two categories: Reached Target and Uncertain.

There are 767 Reached Target and 255 Uncertain. Those sum to 1,022. Those two groups are then

asked Question 2. Question 2 has several response categories. There are 591 Yes responses, 128

No responses, 175 “other” responses across various options (e.g., “member [Go to Q3]”), 70

Refused, and 58 Hang ups. These sum to 1,022. For Georgia, the total number of responses to Q2

equals the total number of respondents coded for Q2, and the data appear to be okay. But, looking

at the other states reveals inconsistencies that lead me to doubt the integrity and veracity of any of

the data presented here, including Georgia.

67. Second, consider Arizona. The topline table for Q1 has 2,147 respondents who are

either “Reached Target” or “Uncertain” and are instructed to Go to Q2. Applying the same

accounting used for Georgia in Arizona, there are 2,489 respondents listed in Q2. That is, there are

more than 300 respondents who answered Q2 but were not indicated in the accounting for Q1 as

directed to that question. There is no other way indicated in the survey data to get to Q2 without

going through Q1.

68. Third, consider Michigan. The topline for Q1 has 1,100 respondents who are either

“Reached Target” or “Uncertain.” However, there are 1,515 respondents to Q2. Thus, 415 people

were asked Q2 who were not permitted to reach it under the branching rules of the survey.

69. Fourth, consider Pennsylvania. The topline table for Q1 has 2,684 respondents who

are either “Reached Target” or “Uncertain.” However, there are 2,537 respondents to Q2. That is,

147 fewer people were asked Q2 than were supposed to have been asked.

70. Fifth, consider Wisconsin. The topline for Q1 has 3,938 respondents who are either

“Reached Target” or “Uncertain.” However, there are 2,723 respondents to Q2. That is, 1,215

fewer people were asked Q2 than were supposed to have been asked.

Table 5. Accounting Discrepancies in the Number of Cases Reported in Toplines for Question 1 and Question 2 by State

State            Question 1:               Question 2:          Difference
                 “Reached Target” or       “Sum of All
                 “Uncertain/Other”         Responses”

Arizona          2,147                     2,489                -342
Georgia          1,022                     1,022                0
Michigan         1,100                     1,515                -415
Pennsylvania     2,684                     2,537                +147
Wisconsin        3,938                     2,723                +1,215

Source: Toplines appended with Dr. William Briggs’ report.

Note: Negative values mean there are fewer “Reached Target” or “Uncertain” responses to Question 1 than there are responses to Question 2. Positive values mean there are more “Reached Target” or “Uncertain” responses to Question 1 than there are responses to Question 2.
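The accounting in Table 5 can be reproduced mechanically. The following sketch uses only the topline counts reported in paragraphs 66 through 70:

    # Sketch: reconciling the cases routed from Q1 with the cases reported
    # for Q2, using the topline counts in paragraphs 66-70.
    counts = {  # state: (Q1 routed to Q2, Q2 "Sum of All Responses")
        "Arizona":      (2_147, 2_489),
        "Georgia":      (1_022, 1_022),
        "Michigan":     (1_100, 1_515),
        "Pennsylvania": (2_684, 2_537),
        "Wisconsin":    (3_938, 2_723),
    }
    for state, (q1, q2) in counts.items():
        # Negative differences mean more Q2 responses than Q1 routed there.
        print(f"{state}: {q1 - q2:+d}")
    # Arizona: -342, Georgia: +0, Michigan: -415,
    # Pennsylvania: +147, Wisconsin: +1215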

71. I attempted to resolve these discrepancies by removing refusals and hang ups, but

different discrepancies arose. The discrepancies in the accounting in Arizona or Michigan were

not resolved by removing the hang ups or refusals. And, doing so created accounting discrepancies

elsewhere. Georgia developed a deficit of cases, and the deficits in Pennsylvania and Wisconsin

worsened.

72. These errors in the spreadsheets will also contaminate the data in Q3, as the

classification of respondents according to Q1 and Q2 determines whether the individual is asked

Q3.

73. In my experience running, designing, and analyzing large-scale surveys through the Cooperative Congressional Election Study and serving on the board of the American National Election Study, errors such as these usually have two sources: (i) errors in the program that assigns questions to people, or (ii) errors in the program that generates the spreadsheet. Either sort of error is catastrophic for this analysis, rendering the estimates, projections, and inferences in Dr. Briggs’ report entirely unreliable.

74. In sum, the Topline Tables indicate that the survey data fail the most rudimentary data integrity checks. There are inconsistencies throughout the data that Dr. Briggs relied on. This leads me to conclude that the programs used to generate the survey spreadsheets, or the underlying surveys themselves, are not reliable or correct. Dr. Briggs assumed that the data are accurate. The inconsistencies in the spreadsheets and failures in the integrity checks lead me to conclude that the data, on their face, cannot be assumed to be correct or accurate.

iii. There are inconsistencies in calculations.

75. I performed a sensitivity analysis of Dr. Briggs’ calculations of the estimated ranges

of Error #1 and Error #2. Specifically, I sought to explore how various discrepancies in the

accounting might affect the estimates presented in Dr. Briggs’ report. The figures he presents are

extrapolations from a few hundred survey responses to tens of thousands of absentee requests.

Thus, errors in a few dozen cases out of the few hundred survey responses that he identifies as

errors would be highly consequential.

76. In performing the sensitivity analysis, I discovered that there were substantial

inconsistencies in the way that Dr. Briggs calculated the rates of Error #1 and Error #2 using the

survey data.

77. Consider, first, the calculation of Error #1. I converted the first table in Dr. Briggs’ report from counts to percentages. I did this by dividing his lower and upper bound estimates for Error #1 by the total number of ballots. These are reported in the second column of Table 6. Second, I counted the people who responded No, or No on behalf of their spouse, to Question 2 and divided by the number of responses to Question 2. Third, I report two different Numbers of Cases used in making the calculations: the number of cases reported as “Sum of All Responses” in the Topline Tables, and that number less respondents who refused to answer. Finally, I calculated the percent of respondents who answered No to Q2, or whose spouse answered No to Q2, using the two different numbers of cases in column 4. The number that was used by Dr. Briggs to estimate Error #1 for each state is marked with a dagger (†). These calculations are shown in the fifth column of Table 6.

Table 6. Calculation Inconsistencies in the Estimates for Error #1

State            Range of Error #1      Question 2 “No”      Number of Cases               Percent Answering
                 in Percentages         Responses                                          No to Q2

Arizona          40.2 to 44.3%          885 No               2,489 (sum of all responses)  36.4%
                                        21 Spouse - No       2,126 (less refusals)         42.6% †
Georgia          12.3 to 16.5%          128 No               964 (sum of all responses)    14.7% †
                                        14 Spouse - No       894 (less refusals)           15.9%
Michigan         21.3 to 26.2%          239 No               1,515 (sum of all responses)  16.9%
                                        17 Spouse - No       1,106 (less refusals)         23.1% †
Pennsylvania     19.6 to 22.6%          531 No               2,537 (sum of all responses)  21.9% †
                                        25 Spouse - No       2,430 (less refusals)         22.9%
Wisconsin        16.9 to 19.9%          379 No               2,723 (sum of all responses)  14.1%
                                        4 Spouse - No        2,162 (less refusals)         17.7% †

† The number used by Dr. Briggs to estimate Error #1 for that state.

Source: Toplines appended with Dr. William Briggs’ report.

78. Dr. Briggs is inconsistent in his calculations. In Georgia and Pennsylvania, the denominator is the sum of all responses (that is, all cases who reach Q2). But in Arizona, Michigan, and Wisconsin, he excludes some respondents from the total number of cases. The effect of excluding those cases is to inflate the estimates by 6.2 percentage points for Arizona, by 6.2 percentage points for Michigan, and by 3.6 percentage points for Wisconsin. In Arizona and Wisconsin, the estimate using all cases in the denominator lies outside of the range of possible rates of Error #1 provided by Dr. Briggs. The estimates he offers are highly sensitive to which denominator he chooses to use in making his calculations. This inconsistency shows a lack of rigor in performing the analysis that was presented.
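The denominator sensitivity described in Paragraph 78 can be verified directly from the counts in Table 6. A sketch, using the Arizona row:

    # Sketch: sensitivity of the Error #1 rate to the denominator chosen,
    # using the Arizona counts from Table 6.
    no_responses = 885 + 21        # "No" plus "Spouse - No"
    all_responses = 2_489          # every case reaching Q2
    less_refusals = 2_126          # the same cases, excluding refusals

    print(f"all cases:     {no_responses / all_responses:.1%}")   # 36.4%
    print(f"less refusals: {no_responses / less_refusals:.1%}")   # 42.6%
    # The 6.2-point gap straddles the lower bound (40.2%) of the range
    # reported by Dr. Briggs for Arizona.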

79. Similar inconsistencies arise in the analysis of Question 3 for the estimation of the rate

of Error #2. Table 7 parallels Table 6, but for Question 3. The second column shows the ranges of

Error #2 expressed in Percentages. The third column shows the Number of respondents who

answered Yes or Yes on behalf of their spouse. The fourth column is the number of respondents

to Q2 and to Q3. The fifth column is the Percent of Survey Respondents who Answered Yes to

Question 3.

80. Different denominators are used for the calculation of Error #2 in different states. In

two instances (Georgia and Pennsylvania), Dr. Briggs uses the number of responses to Q2 as the

denominator. In three instances (Arizona, Michigan, and Wisconsin), Dr. Briggs uses the number

of responses to Q3 and does not adjust for refusals, as was done in Table 6. He offers no

explanation of his calculations or why he chose different denominators in different instances. It is

highly unusual to see different statistical formulas used for the computation of what is supposed

to be the same quantity for different cases (in this instance the states) in the same report. The basic

statistical methods deployed here lack rigor.

81. Dr. Briggs’ estimates fail the sensitivity analysis suggested by his own calculations. The ranges presented in his report are not robust to variations in the formulas that he himself uses. In his report, he reports a range of possible values for Error #1 and Error #2. Values outside of those ranges are highly unlikely to occur. The sensitivity analysis I have conducted reveals that simply using the different formulas he deploys yields values that fall outside the ranges that he presents. He uses the Number of Cases for Q2 in calculating Error #2 for Georgia and Pennsylvania, and the Number of Cases for Q3 in calculating Error #2 for Arizona, Michigan, and Wisconsin. Consistently using the Number of Cases for Q2 produces estimated values of Error #2 that are below the lower bound estimates for Arizona (14.3 versus 15.2), for Michigan (16.0 versus 20.6), and for Wisconsin (11.9 versus 14.4). Hence, the estimated range of Error #2 presented in Dr. Briggs’ report is not robust even to variations in the way he calculates that rate from the survey data. 7

7 By robust, I mean that variations in the numbers used yield values falling outside of the ranges of likely values predicted by the analysis. In this particular instance, the conclusions are not robust to the variation in the formula used.

Table 7. Calculation Inconsistencies in the Estimates for Error #2

State            Range of Error #2      Question 3 “Yes”     Number of Cases     Percent Answering
                 in Percentages         Responses                                Yes to Q3

Arizona          15.2 to 18.3%          344 Yes              Q2: 2,489           14.3%
                                        11 Spouse - Yes      Q3: 2,129           16.7%
Georgia          22.9 to 28.2%          240 Yes              Q2: 964             26.4%
                                        17 Spouse - Yes      Q3: 623             41.3%
Michigan         20.6 to 24.9%          232 Yes              Q2: 1,515           16.0%
                                        10 Spouse - Yes      Q3: 1,090           22.2%
Pennsylvania     16.3 to 19.1%          452 Yes              Q2: 2,537           18.2%
                                        11 Spouse - Yes      Q3: 1,137           40.7%
Wisconsin        14.4 to 17.3%          316 Yes              Q2: 2,723           11.9%
                                        9 Spouse - Yes       Q3: 2,154           15.1%

Source: Toplines appended with Dr. William Briggs’ report.
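The same check applies to Error #2. A sketch using the Arizona row of Table 7:

    # Sketch: the Error #2 rate under the two denominators in Table 7,
    # using the Arizona counts.
    yes_responses = 344 + 11   # "Yes" plus "Spouse - Yes"
    q2_cases = 2_489           # respondents reported for Q2
    q3_cases = 2_129           # respondents reported for Q3

    print(f"using Q2 cases: {yes_responses / q2_cases:.1%}")  # 14.3%
    print(f"using Q3 cases: {yes_responses / q3_cases:.1%}")  # 16.7%
    # 14.3% lies below the lower bound (15.2%) of the range reported for
    # Arizona, consistent with paragraph 81.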

D. Sensitivity

82. A further exercise in sensitivity analysis is to measure the effect on the analysis of Q2 of the inclusion of people who should not have been included. To see the potential effect of the inclusion of these people in the analysis, assume that all of the people who answered “Uncertain” to Q1 in fact answered No to Question 2. That is an assumption made for the sake of the sensitivity analysis.

83. What is the potential effect of this branching error alone (excluding all other issues)

on the survey estimates? Table 8 entertains that possibility. The Adjusted Percent who Responded

No to Q2 subtracts the Number of Uncertain Cases from the Numerator and the Denominator. The

rate of Error #1 cases is substantially reduced in every one of the states by the exclusion of these

cases. In every case, the adjusted rate is far below the estimate provided in Dr. Briggs’ report. In

Georgia, that rate falls entirely to 0. That is, the branching error can completely account for his

Error #1 results in Georgia. The data and estimates are highly sensitive to the problems of survey

design and computational formulas used.

Table 8. Adjusted Estimates for Error #1 After Excluding “Uncertain” Respondents

State            Range of Error #1      Question 2 “No”      Number of “Uncertain”    Adjusted Percent Answering
                 in Percentages         Responses            Responses to Q1          No to Q2 (without
                                                                                      “Uncertain” cases)

Arizona          40.2 to 44.3%          885 No               335                      26.7%
                                        21 Spouse - No
Georgia          12.3 to 16.5%          128 No               255                      0%
                                        14 Spouse - No
Michigan         21.3 to 26.2%          239 No               142                      13.9%
                                        17 Spouse - No
Pennsylvania     19.6 to 22.6%          531 No               422                      5.3%
                                        25 Spouse - No
Wisconsin        16.9 to 19.9%          379 No               unknown                  No calculation possible
                                        4 Spouse - No

Source: Toplines appended with Dr. William Briggs’ report.
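The adjustment underlying Table 8 can be expressed compactly. A sketch for Georgia, the state where the exercise eliminates Error #1 entirely (the adjusted denominators for the other states are not fully recoverable from the toplines, so only Georgia is shown):

    # Sketch: adjusted Error #1 rate for Georgia, assuming every
    # "Uncertain" Q1 respondent answered No to Q2 (the Table 8 exercise).
    no_responses = 128 + 14    # "No" plus "Spouse - No"
    uncertain_q1 = 255         # improperly branched respondents
    q2_cases = 964             # Georgia "Sum of All Responses" for Q2

    adjusted_no = max(0, no_responses - uncertain_q1)  # floored at zero
    adjusted_rate = adjusted_no / (q2_cases - uncertain_q1)
    print(f"{adjusted_rate:.1%}")  # prints 0.0%
    # The 255 improperly branched cases more than cover all 142 "No"
    # answers, so the adjusted rate falls to zero, as in Table 8.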

E. Conclusion

84. The estimates and projections presented by Dr. Briggs are based on survey data

collected in Arizona, Georgia, Michigan, Pennsylvania, and Wisconsin. My overall assessment of

these data is that they are unreliable and riddled with accounting and survey design errors. These

errors are of sufficient magnitude and severity as to make the estimates completely uninformative.

85. The data are not accurate. The Topline summaries of the survey data appended to Dr.

Briggs’ report reveal fatal accounting errors in the data. No sound estimates or inferences can be

drawn based on these data.

86. Each of these problems would create significant biases in the estimates and projections

offered in Dr. Briggs’ report, and no valid estimates and conclusions can be made based on these

data. Dr. Briggs assumed at the outset that the respondents to the surveys are representative and

the data are accurate. Neither assumption is correct. Indeed, the information contained in and

appended to Dr. Briggs’ report makes that evident. Even the most basic review of the

information about the survey reveals deep flaws in the design and errors and inconsistencies in the

accounting of the survey data. These data, and the analyses based on them, do not meet the

standards for scientifically acceptable research and should not be relied on at all.

Signed at Boston, Massachusetts, on the date below.
Date: December 4, 2020

_________________________________
Stephen Ansolabehere
STEPHEN DANIEL ANSOLABEHERE

Department of Government
Harvard University
1737 Cambridge Street
Cambridge, MA 02138
sda@gov.harvard.edu

EDUCATION

Harvard University Ph.D., Political Science 1989


University of Minnesota B.A., Political Science 1984
B.S., Economics

PROFESSIONAL EXPERIENCE

ACADEMIC POSITIONS

2016-present Frank G. Thompson Professor of Government, Harvard University


2008-present Professor, Department of Government, Harvard University
2015-present Director, Center for American Politics, Harvard University
1998-2009 Elting Morison Professor, Department of Political Science, MIT
(Associate Head, 2001-2005)
1995-1998 Associate Professor, Department of Political Science, MIT
1993-1994 National Fellow, The Hoover Institution
1989-1993 Assistant Professor, Department of Political Science,
University of California, Los Angeles

FELLOWSHIPS AND HONORS

American Academy of Arts and Sciences 2007


Carnegie Scholar 2000-02
National Fellow, The Hoover Institution 1993-94
Harry S. Truman Fellowship 1982-86

PUBLICATIONS

Books

2019 American Government, 15th edition. With Ted Lowi, Benjamin Ginsberg
and Kenneth Shepsle. W.W. Norton.

2014 Cheap and Clean: How Americans Think About Energy in the Age of Global
Warming. With David Konisky. MIT Press.
Recipient of the Donald K. Price book award.

2008 The End of Inequality: One Person, One Vote and the Transformation of
American Politics. With James M. Snyder, Jr., W. W. Norton.

1996 Going Negative: How Political Advertising Divides and Shrinks the American
Electorate. With Shanto Iyengar. The Free Press. Recipient of the Goldsmith
book award.

1993 Media Game: American Politics in the Television Age. With Roy Behr and
Shanto Iyengar. Macmillan.

Journal Articles

2021 “The CPS Voting and Registration Supplement Overstates Turnout” Journal of
Politics Forthcoming (with Bernard Fraga and Brian Schaffner)

2021 "Congressional Representation: Accountability from the Constituent's Perspective,"


American Journal of Political Science forthcoming (with Shiro Kuriwaki)

2020 “Proximity, NIMBYism, and Public Support for Energy Infrastructure”


Public Opinion Quarterly (with David Konisky and Sanya Carley)
https://doi.org/10.1093/poq/nfaa025

2020 “Understanding Exponential Growth Amid a Pandemic: An Internal Perspective,”


Harvard Data Science Review 2 (October) (with Ray Duch, Kevin DeLuca,
Alexander Podkul, Liberty Vittert)

2020 “Unilateral Action and Presidential Accountability,” Presidential Studies Quarterly


50 (March): 129-145. (with Jon Rogowski)

2019 “Backyard Voices: How Sense of Place Shapes Views of Large-Scale Energy
Transmission Infrastructure” Energy Research & Social Science
forthcoming (with Parrish Bergquist, Sanya Carley, and David Konisky)

2019 “Are All Electrons the Same? Evaluating support for local transmission lines
through an experiment” PLOS ONE 14 (7): e0219066
(with Sanya Carley and David Konisky)
https://doi.org/10.1371/journal.pone.0219066

2018 “Learning from Recounts” Election Law Journal 17: 100-116 (with Barry C. Burden,
Kenneth R. Mayer, and Charles Stewart III)
https://doi.org/10.1089/elj.2017.0440

2018 “Policy, Politics, and Public Attitudes Toward the Supreme Court” American
Politics Research (with Ariel White and Nathaniel Persily).
https://doi.org/10.1177/1532673X18765189

2018 “Measuring Issue-Salience in Voters’ Preferences” Electoral Studies (with Maria


Socorro Puy) 51 (February): 103-114.

2018 “Divided Government and Significant Legislation: A History of Congress,” Social
Science History 42 (1) (with Maxwell Palmer and Benjamin Schneer).

2017 “ADGN: An Algorithm for Record Linkage Using Address, Date of Birth,
Gender, and Name,” Statistics and Public Policy (with Eitan Hersh)

2017 “Identity Politics” (with Socorro Puy) Public Choice. 168: 1-19.
DOI 10.1007/s11127-016-0371-2

2016 “A 200-Year Statistical History of the Gerrymander” (with Maxwell Palmer) The
Ohio State University Law Journal

2016 “Do Americans Prefer Co-Ethnic Representation? The Impact of Race on House
Incumbent Evaluations” (with Bernard Fraga) Stanford University Law Review
68: 1553-1594

2016 “Revisiting Public Opinion on Voter Identification and Voter Fraud in an Era of
Increasing Partisan Polarization” (with Nathaniel Persily) Stanford Law Review
68: 1455-1489

2015 “The Perils of Cherry Picking Low Frequency Events in Large Sample Surveys”
(with Brian Schaffner and Samantha Luks) Electoral Studies 40 (December):
409-410.

2015 “Testing Shaw v. Reno: Do Majority-Minority Districts Cause Expressive
Harms?” (with Nathaniel Persily) New York University Law Review 90

2015 “A Brief Yet Practical Guide to Reforming U.S. Voter Registration,” Election Law
Journal (with Daron Shaw and Charles Stewart) 14: 26-31.

2015 “Waiting to Vote,” Election Law Journal, (with Charles Stewart) 14: 47-53.

2014 “Mecro-economic Voting: Local Information and Micro-Perceptions of the


Macro-Economy” (With Marc Meredith and Erik Snowberg), Economics and
Politics 26 (November): 380-410.

2014 “Does Survey Mode Still Matter?” Political Analysis (with Brian Schaffner) 22:
285-303

2013 “Race, Gender, Age, and Voting” Politics and Governance, vol. 1, issue 2.
(with Eitan Hersh)
http://www.librelloph.com/politicsandgovernance/article/view/PaG-1.2.132

2013 “Regional Differences in Racially Polarized Voting: Implications for the


Constitutionality of Section 5 of the Voting Rights Act” (with Nathaniel Persily
and Charles Stewart) 126 Harvard Law Review F 205 (2013)
http://www.harvardlawreview.org/issues/126/april13/forum_1005.php

2013 “Cooperative Survey Research” Annual Review of Political Science (with


Douglas Rivers)

2013 “Social Sciences and the Alternative Energy Future” Daedalus (with Bob Fri)

2013 “The Effects of Redistricting on Incumbents,” Election Law Journal


(with James Snyder)

2012 “Asking About Numbers: How and Why” Political Analysis (with Erik
Snowberg and Marc Meredith). doi:10.1093/pan/mps031

2012 “Movers, Stayers, and Registration” Quarterly Journal of Political Science


(with Eitan Hersh and Ken Shepsle)

2012 “Validation: What Big Data Reveals About Survey Misreporting and the Real
Electorate” Political Analysis (with Eitan Hersh)

2012 “Arizona Free Enterprise v. Bennett and the Problem of Campaign Finance”
Supreme Court Review 2011(1):39-79

2012 “The American Public’s Energy Choice” Daedalus (with David Konisky)

2012 “Challenges for Technology Change” Daedalus (with Robert Fri)

2011 “When Parties Are Not Teams: Party positions in single-member district and
proportional representation systems” Economic Theory 49 (March)
DOI: 10.1007/s00199-011-0610-1 (with James M. Snyder Jr. and William
Leblanc)

2011 “Profiling Originalism” Columbia Law Review (with Jamal Greene and Nathaniel
Persily).

2010 “Partisanship, Public Opinion, and Redistricting” Election Law Journal (with
Joshua Fougere and Nathaniel Persily).

2010 “Primary Elections and Party Polarization” Quarterly Journal of Political Science
(with Shigeo Hirano, James Snyder, and Mark Hansen)

2010 “Constituents’ Responses to Congressional Roll Call Voting,” American


Journal of Political Science (with Phil Jones)

2010 “Race, Region, and Vote Choice in the 2008 Election: Implications for
the Future of the Voting Rights Act” Harvard Law Review April, 2010. (with
Nathaniel Persily, and Charles H. Stewart III)

2010 “Residential Mobility and the Cell Only Population,” Public Opinion Quarterly
(with Brian Schaffner)

2009 “Explaining Attitudes Toward Power Plant Location,” Public Opinion Quarterly
(with David Konisky)

2009 “Public risk perspectives on the geologic storage of carbon dioxide,”


International Journal of Greenhouse Gas Control (with Gregory Singleton and
Howard Herzog) 3(1): 100-107.

2008 “A Spatial Model of the Relationship Between Seats and Votes” (with William
Leblanc) Mathematical and Computer Modeling (November).

2008 “The Strength of Issues: Using Multiple Measures to Gauge Preference Stability,
Ideological Constraint, and Issue Voting” (with Jonathan Rodden and James M.
Snyder, Jr.) American Political Science Review (May).

2008 “Access versus Integrity in Voter Identification Requirements.” New York

University Annual Survey of American Law, vol 63.

2008 “Voter Fraud in the Eye of the Beholder” (with Nathaniel Persily) Harvard Law
Review (May)

2007 “Incumbency Advantages in U. S. Primary Elections,” (with John Mark Hansen,


Shigeo Hirano, and James M. Snyder, Jr.) Electoral Studies (September)

2007 “Television and the Incumbency Advantage” (with Erik C. Snowberg and
James M. Snyder, Jr). Legislative Studies Quarterly.

2006 “The Political Orientation of Newspaper Endorsements” (with Rebecca


Lessem and James M. Snyder, Jr.). Quarterly Journal of Political Science vol. 1,
issue 3.

2006 “Voting Cues and the Incumbency Advantage: A Critical Test” (with Shigeo
Hirano, James M. Snyder, Jr., and Michiko Ueda) Quarterly Journal of
Political Science vol. 1, issue 2.

2006 “American Exceptionalism? Similarities and Differences in National Attitudes


Toward Energy Policies and Global Warming” (with David Reiner, Howard
Herzog, K. Itaoka, M. Odenberger, and Fillip Johanssen) Environmental Science
and Technology (February 22, 2006),
http://pubs3.acs.org/acs/journals/doilookup?in_doi=10.1021/es052010b

2006 “Purple America” (with Jonathan Rodden and James M. Snyder, Jr.) Journal
of Economic Perspectives (Winter).

2005 “Did the Introduction of Voter Registration Decrease Turnout?” (with David
Konisky). Political Analysis.

2005 “Statistical Bias in Newspaper Reporting: The Case of Campaign Finance”


Public Opinion Quarterly (with James M. Snyder, Jr., and Erik Snowberg).

2005 “Studying Elections” Policy Studies Journal (with Charles H. Stewart III and R.
Michael Alvarez).

2005 “Legislative Bargaining under Weighted Voting” American Economic Review


(with James M. Snyder, Jr., and Michael Ting)

2005 “Voting Weights and Formateur Advantages in Coalition Formation: Evidence


from Parliamentary Coalitions, 1946 to 2002” (with James M. Snyder, Jr., Aaron
B. Strauss, and Michael M. Ting) American Journal of Political Science.

2005 “Reapportionment and Party Realignment in the American States” Pennsylvania
Law Review (with James M. Snyder, Jr.)

2004 “Residual Votes Attributable to Voting Technologies” (with Charles Stewart)


Journal of Politics

2004 “Using Term Limits to Estimate Incumbency Advantages When Office Holders
Retire Strategically” (with James M. Snyder, Jr.). Legislative Studies Quarterly
vol. 29, November 2004, pages 487-516.

2004 “Did Firms Profit From Soft Money?” (with James M. Snyder, Jr., and Michiko
Ueda) Election Law Journal vol. 3, April 2004.

2003 “Bargaining in Bicameral Legislatures” (with James M. Snyder, Jr. and Mike
Ting) American Political Science Review, August, 2003.

2003 “Why Is There So Little Money in U.S. Politics?” (with James M. Snyder, Jr.)
Journal of Economic Perspectives, Winter, 2003.

2002 “Equal Votes, Equal Money: Court-Ordered Redistricting and the Public
Spending in the American States” (with Alan Gerber and James M. Snyder, Jr.)
American Political Science Review, December, 2002.
Paper awarded the Heinz Eulau award for the best paper in the American Political
Science Review.

2002 “Are PAC Contributions and Lobbying Linked?” (with James M. Snyder, Jr. and
Micky Tripathi) Business and Politics 4, no. 2.

2002 “The Incumbency Advantage in U.S. Elections: An Analysis of State and Federal
Offices, 1942-2000” (with James Snyder) Election Law Journal, 1, no. 3.

2001 “Voting Machines, Race, and Equal Protection.” Election Law Journal, vol. 1,
no. 1

2001 “Models, assumptions, and model checking in ecological regressions” (with


Andrew Gelman, David Park, Phillip Price, and Larraine Minnite) Journal of
the Royal Statistical Society, series A, 164: 101-118.

2001 “The Effects of Party and Preferences on Congressional Roll Call Voting.”
(with James Snyder and Charles Stewart) Legislative Studies Quarterly
(forthcoming).
Paper awarded the Jewell-Lowenberg Award for the best paper published on
legislative politics in 2001. Paper awarded the Jack Walker Award for the best
paper published on party politics in 2001.

2001 “Candidate Positions in Congressional Elections,” (with James Snyder and
Charles Stewart). American Journal of Political Science 45 (November).

2000 “Old Voters, New Voters, and the Personal Vote,” (with James Snyder and
Charles Stewart) American Journal of Political Science 44 (February).

2000 “Soft Money, Hard Money, Strong Parties,” (with James Snyder) Columbia Law
Review 100 (April):598 - 619.

2000 “Campaign War Chests and Congressional Elections,” (with James Snyder)
Business and Politics. 2 (April): 9-34.

1999 “Replicating Experiments Using Surveys and Aggregate Data: The Case of
Negative Advertising.” (with Shanto Iyengar and Adam Simon) American
Political Science Review 93 (December).

1999 “Valence Politics and Equilibrium in Spatial Models,” (with James Snyder),
Public Choice.

1999 “Money and Institutional Power,” (with James Snyder), Texas Law Review 77
(June, 1999): 1673-1704.

1997 “Incumbency Advantage and the Persistence of Legislative Majorities,” (with Alan
Gerber), Legislative Studies Quarterly 22 (May 1997).

1996 “The Effects of Ballot Access Rules on U.S. House Elections,” (with Alan
Gerber), Legislative Studies Quarterly 21 (May 1996).

1994 “Riding the Wave and Issue Ownership: The Importance of Issues in Political
Advertising and News,” (with Shanto Iyengar) Public Opinion Quarterly 58:
335-357.

1994 “Horseshoes and Horseraces: Experimental Evidence of the Effects of Polls on


Campaigns,” (with Shanto Iyengar) Political Communications 11/4 (October-
December): 413-429.

1994 “Does Attack Advertising Demobilize the Electorate?” (with Shanto Iyengar),
American Political Science Review 89 (December).

1994 “The Mismeasure of Campaign Spending: Evidence from the 1990 U.S. House
Elections,” (with Alan Gerber) Journal of Politics 56 (September).

1993 “Poll Faulting,” (with Thomas R. Belin) Chance 6 (Winter): 22-28.

1991 “The Vanishing Marginals and Electoral Responsiveness,” (with David Brady and
Morris Fiorina) British Journal of Political Science 22 (November): 21-38.

1991 “Mass Media and Elections: An Overview,” (with Roy Behr and Shanto Iyengar)
American Politics Quarterly 19/1 (January): 109-139.

1990 “The Limits of Unraveling in Interest Groups,” Rationality and Society 2:


394-400.

1990 “Measuring the Consequences of Delegate Selection Rules in Presidential


Nominations,” (with Gary King) Journal of Politics 52: 609-621.

1989 “The Nature of Utility Functions in Mass Publics,” (with Henry Brady) American
Political Science Review 83: 143-164.

Special Reports and Policy Studies

2010 The Future of Nuclear Power, Revised.

2006 The Future of Coal. MIT Press. Continued reliance on coal as a primary power
source will lead to very high concentrations of carbon dioxide in the atmosphere,
resulting in global warming. This cross-disciplinary study – drawing on faculty
from Physics, Economics, Chemistry, Nuclear Engineering, and Political Science
– develops a road map for technology research and development policy in order to
address the challenges of carbon emissions from expanding use of coal for
electricity and heating throughout the world.

2003 The Future of Nuclear Power. MIT Press. This cross-disciplinary study –
drawing on faculty from Physics, Economics, Chemistry, Nuclear Engineering,
and Political Science – examines what contribution nuclear power can make to
meeting growing electricity demand, especially in a world with increasing carbon
dioxide emissions from fossil fuel power plants.

2002 “Election Day Registration.” A report prepared for DEMOS. This report analyzes
the possible effects of Proposition 52 in California based on the experiences of 6
states with election day registration.

2001 Voting: What Is, What Could Be. A report of the Caltech/MIT Voting
Technology Project. This report examines the voting system, especially
technologies for casting and counting votes, registration systems, and polling place
operations, in the United States. It was widely used by state and national
governments in formulating election reforms following the 2000 election.

2001 “An Assessment of the Reliability of Voting Technologies.” A report of the
Caltech/MIT Voting Technology Project. This report provided the first
nationwide assessment of voting equipment performance in the United States. It
was prepared for the Governor’s Select Task Force on Election Reform in Florida.

Chapters in Edited Volumes

2016 “Taking the Study of Public Opinion Online” (with Brian Schaffner) Oxford
Handbook of Public Opinion, R. Michael Alvarez, ed. Oxford University Press:
New York, NY.

2014 “Voter Registration: The Process and Quality of Lists” The Measure of
American Elections, Barry Burden, ed.

2012 “Using Recounts to Measure the Accuracy of Vote Tabulations: Evidence from
New Hampshire Elections, 1946-2002” in Confirming Elections, R. Michael
Alvarez, Lonna Atkeson, and Thad Hall, eds. New York: Palgrave, Macmillan.

2010 “Dyadic Representation” in Oxford Handbook on Congress, Eric Schickler, ed.,


Oxford University Press.

2008 “Voting Technology and Election Law” in America Votes!, Benjamin Griffith,
editor, Washington, DC: American Bar Association.

2007 “What Did the Direct Primary Do to Party Loyalty in Congress” (with
Shigeo Hirano and James M. Snyder Jr.) in Process, Party and Policy
Making: Further New Perspectives on the History of Congress, David
Brady and Matthew D. McCubbins (eds.), Stanford University Press, 2007.

2007 “Election Administration and Voting Rights” in Renewal of the Voting


Rights Act, David Epstein and Sharyn O’Hallaran, eds. Russell Sage Foundation.

2006 “The Decline of Competition in Primary Elections,” (with John Mark Hansen,
Shigeo Hirano, and James M. Snyder, Jr.) The Marketplace of Democracy,
Michael P. McDonald and John Samples, eds. Washington, DC: Brookings.

2005 “Voters, Candidates and Parties” in Handbook of Political Economy, Barry


Weingast and Donald Wittman, eds. New York: Oxford University Press.

2003 “Baker v. Carr in Context, 1946 – 1964” (with Samuel Isaacharoff) in


Constitutional Cases in Context, Michael Dorf, editor. New York: Foundation
Press.

2002 “Corruption and the Growth of Campaign Spending”(with Alan Gerber and James
Snyder). A User’s Guide to Campaign Finance, Jerry Lubenow, editor. Rowman
and Littlefield.

2001 “The Paradox of Minimal Effects,” in Henry Brady and Richard Johnston, eds.,
Do Campaigns Matter? University of Michigan Press.

2001 “Campaigns as Experiments,” in Henry Brady and Richard Johnson, eds., Do


Campaigns Matter? University of Michigan Press.

2000 “Money and Office,” (with James Snyder) in David Brady and John Cogan, eds.,
Congressional Elections: Continuity and Change. Stanford University Press.

1996 “The Science of Political Advertising,” (with Shanto Iyengar) in Political


Persuasion and Attitude Change, Richard Brody, Diana Mutz, and Paul
Sniderman, eds. Ann Arbor, MI: University of Michigan Press.

1995 “Evolving Perspectives on the Effects of Campaign Communication,” in Philo


Warburn, ed., Research in Political Sociology, vol. 7, JAI.

1995 “The Effectiveness of Campaign Advertising: It’s All in the Context,” (with
Shanto Iyengar) in Campaigns and Elections American Style, Candice Nelson and
James A. Thurber, eds. Westview Press.

1993 “Information and Electoral Attitudes: A Case of Judgment Under Uncertainty,”


(with Shanto Iyengar), in Explorations in Political Psychology, Shanto Iyengar
and William McGuire, eds. Durham: Duke University Press.

Working Papers

2009 “Sociotropic Voting and the Media” (with Marc Meredith and Erik Snowberg),
American National Election Study Pilot Study Reports, John Aldrich editor.

2007 “Public Attitudes Toward America’s Energy Options: Report of the 2007 MIT
Energy Survey” CEEPR Working Paper 07-002 and CANES working paper.

2006 "Constituents' Policy Perceptions and Approval of Members' of Congress" CCES


Working Paper 06-01 (with Phil Jones).

2004 “Using Recounts to Measure the Accuracy of Vote Tabulations: Evidence from
New Hampshire Elections, 1946 to 2002” (with Andrew Reeves).

2002 “Evidence of Virtual Representation: Reapportionment in California,” (with

Ruimin He and James M. Snyder).

1999 “Why did a majority of Californians vote to lower their own power?” (with James
Snyder and Jonathan Woon). Paper presented at the annual meeting of the
American Political Science Association, Atlanta, GA, September, 1999.
Paper received the award for the best paper on Representation at the 1999 Annual
Meeting of the APSA.

1999 “Has Television Increased the Cost of Campaigns?” (with Alan Gerber and James
Snyder).

1996 “Money, Elections, and Candidate Quality,” (with James Snyder).

1996 “Party Platform Choice - Single- Member District and Party-List Systems,”(with
James Snyder).

1995 “Messages Forgotten” (with Shanto Iyengar).

1994 “Consumer Contributors and the Returns to Fundraising: A Microeconomic


Analysis,” (with Alan Gerber), presented at the Annual Meeting of the American
Political Science Association, September.

1992 “Biases in Ecological Regression,” (with R. Douglas Rivers) August, (revised


February 1994). Presented at the Midwest Political Science Association Meetings,
April 1994, Chicago, IL.

1992 “Using Aggregate Data to Correct Nonresponse and Misreporting in Surveys”


(with R. Douglas Rivers). Presented at the annual meeting of the Political
Methodology Group, Cambridge, Massachusetts, July.

1991 “The Electoral Effects of Issues and Attacks in Campaign Advertising” (with
Shanto Iyengar). Presented at the Annual Meeting of the American Political
Science Association, Washington, DC.

1991 “Television Advertising as Campaign Strategy: Some Experimental Evidence”


(with Shanto Iyengar). Presented at the Annual Meeting of the American
Association for Public Opinion Research, Phoenix.

1991 “Why Candidates Attack: Effects of Televised Advertising in the 1990 California
Gubernatorial Campaign,” (with Shanto Iyengar). Presented at the Annual
Meeting of the Western Political Science Association, Seattle, March.

1990 “Winning is Easy, But It Sure Ain’t Cheap.” Working Paper #90-4, Center for the
American Politics and Public Policy, UCLA. Presented at the Political Science
Departments at Rochester University and the University of Chicago.

Research Grants

1989-1990 Markle Foundation. “A Study of the Effects of Advertising in the 1990


California Gubernatorial Campaign.” Amount: $50,000

1991-1993 Markle Foundation. “An Experimental Study of the Effects of Campaign


Advertising.” Amount: $150,000

1991-1993 NSF. “An Experimental Study of the Effects of Advertising in the 1992
California Senate Election.” Amount: $100,000

1994-1995 MIT Provost Fund. “Money in Elections: A Study of the Effects of Money on
Electoral Competition.” Amount: $40,000

1996-1997 National Science Foundation. “Campaign Finance and Political Representation.”


Amount: $50,000

1997 National Science Foundation. “Party Platforms: A Theoretical Investigation of


Party Competition Through Platform Choice.” Amount: $40,000

1997-1998 National Science Foundation. “The Legislative Connection in Congressional
Campaign Finance.” Amount: $150,000

1999-2000 MIT Provost Fund. “Districting and Representation.” Amount: $20,000.

1999-2002 Sloan Foundation. “Congressional Staff Seminar.” Amount: $156,000.

2000-2001 Carnegie Corporation. “The Caltech/MIT Voting Technology Project.”


Amount: $253,000.

2001-2002 Carnegie Corporation. “Dissemination of Voting Technology Information.”


Amount: $200,000.

2003-2005 National Science Foundation. “State Elections Data Project.” Amount:


$256,000.

2003-2004 Carnegie Corporation. “Internet Voting.” Amount: $279,000.

2003-2005 Knight Foundation. “Accessibility and Security of Voting Systems.” Amount:


$450,000.

2006-2008 National Science Foundation, “Primary Election Data Project,” $186,000

2008-2009 Pew/JEHT. “Measuring Voting Problems in Primary Elections, A National
Survey.” Amount: $300,000

2008-2009 Pew/JEHT. “Comprehensive Assessment of the Quality of Voter Registration


Lists in the United States: A pilot study proposal” (with Alan Gerber).
Amount: $100,000.

2010-2011 National Science Foundation, “Cooperative Congressional Election Study,”


$360,000

2010-2012 Sloan Foundation, “Precinct-Level U. S. Election Data,” $240,000.

2012-2014 National Science Foundation, “Cooperative Congressional Election Study, 2010-


2012 Panel Study” $425,000

2012-2014 National Science Foundation, “2012 Cooperative Congressional Election


Study,” $475,000

2014-2016 National Science Foundation, “Cooperative Congressional Election Study, 2010-


2014 Panel Study” $510,000

2014-2016 National Science Foundation, “2014 Cooperative Congressional Election


Study,” $400,000

2016-2018 National Science Foundation, “2016 Cooperative Congressional Election


Study,” $485,000

2018-2020 National Science Foundation, “2018 Cooperative Congressional Election


Study,” $844,784.

2019-2022 National Science Foundation, RIDIR: “Collaborative Research: Analytic Tool


for Poststratification and small-area estimation for survey data.” $942,607

Professional Boards

Editor, Cambridge University Press Book Series, Political Economy of Institutions and
Decisions, 2006-2016

Member, Board of the Reuters International School of Journalism, Oxford University, 2007 to
present.

Member, Academic Advisory Board, Electoral Integrity Project, 2012 to present.

Contributing Editor, Boston Review, The State of the Nation.

Member, Board of Overseers, American National Election Studies, 1999 - 2013.

Associate Editor, Public Opinion Quarterly, 2012 to 2013.

Editorial Board of Harvard Data Science Review, 2018 to present.


Editorial Board of American Journal of Political Science, 2005 to 2009.
Editorial Board of Legislative Studies Quarterly, 2005 to 2010.
Editorial Board of Public Opinion Quarterly, 2006 to present.
Editorial Board of the Election Law Journal, 2002 to present.
Editorial Board of the Harvard International Journal of Press/Politics, 1996 to 2008.
Editorial Board of Business and Politics, 2002 to 2008.
Scientific Advisory Board, Polimetrix, 2004 to 2006.

Special Projects and Task Forces

Principal Investigator, Cooperative Congressional Election Study, 2005 – present.

CBS News Election Decision Desk, 2006-present

Co-Director, Caltech/MIT Voting Technology Project, 2000-2004.

Co-Organizer, MIT Seminar for Senior Congressional and Executive Staff, 1996-2007.

MIT Energy Innovation Study, 2009-2010.


MIT Energy Initiative, Steering Council, 2007-2008
MIT Coal Study, 2004-2006.
MIT Energy Research Council, 2005-2006.
MIT Nuclear Study, 2002-2004.
Harvard University Center on the Environment, Council, 2009-present

Expert Witness, Consultation, and Testimony

2001 Testimony on Election Administration, U. S. Senate Committee on Commerce.


2001 Testimony on Voting Equipment, U.S. House Committee on Science, Space,
and Technology
2001 Testimony on Voting Equipment, U.S. House Committee on House
Administration
2001 Testimony on Voting Equipment, Congressional Black Caucus
2002-2003 McConnell v. FEC, 540 U.S. 93 (2003), consultant to the Brennan Center.
2009 Amicus curiae brief with Professors Nathaniel Persily and Charles Stewart on
behalf of neither party to the U.S. Supreme Court in the case of Northwest

Austin Municipal Utility District Number One v. Holder, 557 U.S. 193 (2009).
2009 Testimony on Voter Registration, U. S. Senate Committee on Rules.
2011-2015 Perez v. Perry, U. S. District Court in the Western District of Texas (No. 5:11-
cv-00360). Expert witness on behalf of Rodriguez intervenors.
2011-2013 State of Texas v. United States, the U.S. District Court in the District of
Columbia (No. 1:11-cv-01303), expert witness on behalf of the Gonzales
intervenors.
2012-2013 State of Texas v. Holder, U.S. District Court in the District of Columbia (No.
1:12-cv-00128), expert witness on behalf of the United States.
2011-2012 Guy v. Miller in U.S. District Court for Nevada (No. 11-OC-00042-1B), expert
witness on behalf of the Guy plaintiffs.
2012 In re Senate Joint Resolution of Legislative Apportionment, Florida Supreme
Court (Nos. 2012-CA-412, 2012-CA-490), consultant for the Florida
Democratic Party.
2012-2014 Romo v. Detzner, Circuit Court of the Second Judicial Circuit in Florida (No.
2012 CA 412), expert witness on behalf of Romo plaintiffs.
2013-2014 LULAC v. Edwards Aquifer Authority, U.S. District Court for the Western
District of Texas, San Antonio Division (No. 5:12cv620-OLG,), consultant and
expert witness on behalf of the City of San Antonio and San Antonio Water
District
2013-2014 Veasey v. Perry, U. S. District Court for the Southern District of Texas, Corpus
Christi Division (No. 2:13-cv-00193), consultant and expert witness on behalf of
the United States Department of Justice.
2013-2015 Harris v. McCrory, U. S. District Court for the Middle District of North
Carolina (No. 1:2013cv00949), consultant and expert witness on behalf of the
Harris plaintiffs. (later named Cooper v. Harris)
2014 Amicus curiae brief, on behalf of neither party, Supreme Court of the United
States, Alabama Democratic Conference v. State of Alabama.
2014- 2016 Bethune-Hill v. Virginia State Board of Elections, U. S. District Court for the
Eastern District of Virginia (No. 3:2014cv00852), consultant and expert on
behalf of the Bethune-Hill plaintiffs.
2015 Amicus curiae brief in support of Appellees, Supreme Court of the United
States, Evenwell v. Abbott
2016-2017 Perez v. Abbott, U. S. District Court in the Western District of Texas (No. 5:11-
cv-00360). Expert witness on behalf of Rodriguez intervenors.
2017-2018 Fish v. Kobach, U. S. District Court in the District of Kansas (No. 2:16-cv-
02105-JAR). Expert witness on behalf of the Fish plaintiffs.

December 5, 2020

Pearson v. Kemp, Case No. 1:20-cv-4809-TCB

United States District Court for Northern District of Georgia

Expert Report of Jonathan Rodden, PhD

737 Mayfield Avenue


Stanford, CA 94305

__________________________
Jonathan Rodden, PhD

I. INTRODUCTION AND SUMMARY

On Saturday, November 28, 2020, I received declarations from Dr. Eric

Quinnell, Dr. Shiva Ayyadurai, and Mr. James Ramsland, Jr. Each of these

declarations makes rather strong claims to have demonstrated “anomalies” or

“irregularities” in the results of the presidential election in Georgia on November 3,

2020. I have been asked by Counsel to assess the validity of their claims.

Unfortunately, these reports do not meet basic standards for scientific inquiry. For

the most part, they are not based on discernible logical arguments. Without any

citations to relevant scientific literature about statistics or elections, the authors

identify common and easily explained patterns in the 2020 election results, and

without explanation, assert that they are somehow “anomalous.” Each of these

reports lacks even a basic level of clarity or transparency about research methods

that would be expected in a scientific communication. As detailed below, each of

these reports is based on puzzling but serious mistakes and misunderstandings about

how to analyze election data.

Dr. Quinnell’s report amounts to an odd claim that there is something

“anomalous” about the fact that Joseph Biden achieved sizable increases in votes

over Hillary Clinton’s totals in the fast-growing suburban precincts of Fulton

County. Dr. Ayyadurai’s report amounts to a claim that there is something

“anomalous” about the fact that in a set of suburban counties that he chose to study,

Biden made gains in relatively white, Republican-leaning precincts. He does not

explain why split-ticket voting or deviations from strict ethnic voting are indicative

of fraud. Finally, Mr. Ramsland’s report identifies a cross-state correlation between

voting equipment and election outcomes, but the fact that Democratic and

Republican regions of the country have adopted different types of voting equipment

cannot possibly be taken as evidence of fraud.

II. QUALIFICATIONS

I am currently a tenured Professor of Political Science at Stanford University

and the founder and director of the Stanford Spatial Social Science Lab (“the

Lab”)—a center for research and teaching with a focus on the analysis of geo-spatial

data in the social sciences. In my affiliation with the Lab, I am engaged in a variety

of research projects involving large, fine-grained geo-spatial data sets including

ballots and election results at the level of polling places, individual records of

registered voters, census data, and survey responses. I am also a senior fellow at the

Stanford Institute for Economic Policy Research and the Hoover Institution. Prior to

my employment at Stanford, I was the Ford Professor of Political Science at the

Massachusetts Institute of Technology. I received my Ph.D. from Yale University

and my B.A. from the University of Michigan, Ann Arbor, both in political science.

A copy of my current C.V. is included as an Appendix to this report.

In my current academic work, I conduct research on the relationship between

the patterns of political representation, geographic location of demographic and

partisan groups, and the drawing of electoral districts. I have published papers using

statistical methods to assess political geography, balloting, and representation in a

variety of academic journals including Statistics and Public Policy, Proceedings of

the National Academy of Science, American Economic Review Papers and

Proceedings, the Journal of Economic Perspectives, the Virginia Law Review, the

American Journal of Political Science, the British Journal of Political Science, the

Annual Review of Political Science, and the Journal of Politics. One of these papers

was recently selected by the American Political Science Association as the winner

of the Michael Wallerstein Award for the best paper on political economy published

in the last year, and another received an award from the American Political Science

Association section on social networks.

I have recently written a series of papers, along with my co-authors, using

automated redistricting algorithms to assess partisan gerrymandering. This work has

been published in the Quarterly Journal of Political Science, Election Law Journal,

and Political Analysis, and it has been featured in more popular publications like the

Wall Street Journal, the New York Times, and Boston Review. I have recently

completed a book, published by Basic Books in June of 2019, on the relationship

between political districts, the residential geography of social groups, and their

political representation in the United States and other countries that use winner-take-

all electoral districts. The book was reviewed in the New York Times, New York

Review of Books, Wall Street Journal, The Economist, and The Atlantic, among

others.

I have expertise in the use of large data sets and geographic information

systems (GIS), and conduct research and teaching in the area of applied statistics

related to elections. My PhD students frequently take academic and private sector

jobs as statisticians and data scientists. I frequently work with geo-coded voter files

and other large administrative data sets, including in recent papers published in the

Annals of Internal Medicine and The New England Journal of Medicine. I have

developed a national data set of geo-coded precinct-level election results that has

been used extensively in policy-oriented research related to redistricting and

representation.1

I have been accepted and testified as an expert witness in six recent election

law cases: Romo v. Detzner, No. 2012-CA-000412 (Fla. Cir. Ct. 2012); Missouri

State Conference of the NAACP v. Ferguson-Florissant School District, No. 4:2014-

CV-02077 (E.D. Mo. 2014); Lee v. Virginia State Board of Elections, No. 3:15-CV-

00357 (E.D. Va. 2015); Democratic National Committee et al. v. Hobbs et al., No.

16-1065-PHX-DLR (D. Ariz. 2016); Bethune-Hill v. Virginia State Board of

1 The dataset can be downloaded at http://projects.iq.harvard.edu/eda/home.

Elections, No. 3:14-cv-00852-REP-AWA-BMK (E.D. Va. 2014); and Jacobson et

al. v. Lee, No. 4:18-cv-00262 (N.D. Fla. 2018). I also worked with a coalition of

academics to file amicus briefs in the Supreme Court in Gill v. Whitford, No. 16-

1161, and Rucho v. Common Cause, No. 18-422. Much of the testimony in these

cases had to do with geography, voting, ballots, and election administration. I am

being compensated at the rate of $500/hour for my work in this case. My

compensation is not dependent upon my conclusions in any way.

III. DATA SOURCES

I have collected county-level data on presidential elections for each year from

1988 to 2020 from the Georgia Secretary of State. I have also collected 2016

precinct-level data on Georgia from the Metric Geometry and Gerrymandering

Group at Tufts University. I obtained digitized 2020 Georgia precinct boundary files

from the Voting and Election Science Team at the University of Florida and Wichita

State University. I also obtained geo-spatial boundaries from the county GIS

departments of DeKalb, Chatham, and Fulton Counties. I obtained precinct-level

data on race among registered voters from the Georgia Secretary of State, as well as

2020 and 2016 precinct-level election results. I created a national county-level

dataset on election results using information assembled from county election

administrators by the New York Times and Associated Press, along with

demographic data from the 2014-2018 American Community Survey (ACS), as well

as the September 2020 county-level unemployment rate from the Bureau of Labor

Statistics, and as described in detail below, data on voting technologies used in each

U.S. jurisdiction collected by Verified Voting. I have also collected yearly county-

level population estimates for Georgia from the U.S. Census Bureau.

IV. QUINNELL REPORT

At the heart of Dr. Quinnell’s analysis is a claim that, in my 25 years of

election data analysis, I have never heard before. He claims that if one has a set of

results from an election, the distribution of votes for candidates should approximate

a normal, bell-shaped statistical distribution, and any departure from a normal

distribution is unnatural and somehow suspicious: “As we often expect our data to

be close to a normal distribution, significant deviations from these values can

indicate an event that is statistically anomalous” (paragraph 18). Specifically, Dr.

Quinnell claims that if the votes for one of the candidates have a long tail—that is to say,

he or she has a concentration of support in a small number of districts where the vote

share is much greater than the average district—this is “anomalous” and indicative

of fraud. He then goes on to analyze a highly flawed precinct-level data set from

Fulton County, about which he makes a set of puzzling claims.

First, Dr. Quinnell’s basic claims about the distribution of election data across

geographic units are nonsensical and should be rejected out of hand. Second, his data

analysis is fatally flawed and essentially meaningless. The skewed distribution of

Biden vote gains pointed out in his report is merely a reflection of Biden’s success

in rapidly-growing suburban areas.

The Geographic Distribution of Election Results

Dr. Quinnell begins with a tangential anecdote about Henri Poincaré’s baker,

who was caught dropping a set of values from a data set that fell below a certain

threshold. In that case, the left side of the distribution—all of the low values—had

been simply discarded. He also mentions the sub-prime mortgage crisis, but the

relevance to his report is unclear. Neither of these anecdotes provides even the

slightest intuition for his claim that election results from a set of geographic units

should display a normal distribution, or why departures from the normal distribution

are indicative of fraud.

He cites no academic literature. Nor does he attempt to articulate a theory of

vote distributions and fraud. The reader is left to imagine how such a theory might

work. If a nefarious election administrator or computer programmer were able to

take votes from candidate B and give them to candidate A in some county, it is not

clear why this action would affect the distribution of votes across precincts. The

entire distribution would simply shift in the direction of candidate A. Perhaps Dr.

Quinnell wishes to imply that such nefarious actors are only able to operate in a

small minority of precincts. Perhaps this is why he believes it is suspicious if

candidate A experiences a cross-precinct distribution with a long right tail—that is

to say, a distribution in which candidate A performs especially well in a set of

precincts, without a corresponding set of precincts where candidate B does

exceptionally well.

However, there are many far more plausible explanations for non-normal

distributions of votes across precincts, counties, or districts. There is nothing even

slightly unusual about skewed distributions of votes, vote shares, or changes over

time in votes or vote shares, across geographic units. A very large literature dating

back to the earliest mathematical analyses of elections has explained, and

demonstrated using high-quality data analysis, that these distributions are very

frequently non-normal. In their classic 1979 book, Graham Gudgin and Peter Taylor

argue that if the partisan divide in a country with two political parties is correlated

with some social characteristic (for instance race or social class) that is not uniformly

distributed in space but is rather concentrated in certain districts, the distribution of

vote shares will be skewed. They presented evidence that because working-class

voters were concentrated in neighborhoods near factories, the distribution of support

across electoral districts for Labor parties in Britain and Australia was highly skewed

for much of the 20th century. 2 More recently, I have demonstrated that support for

the Democratic Party in the United States typically has a pronounced right skew

2 See Graham Gudgin and Peter Taylor, 1979, Seats, Votes, and the Spatial Organisation of Elections. London: Pion. For a literature review, see Jonathan Rodden, 2010, “The Geographic Distribution of Political Preferences,” Annual Review of Political Science 13, 55.

across districts, counties, and often precincts. 3 The fact that the Labour Party

consistently wins by extremely large margins in urban districts in London, or that

the Democrats win by extremely large margins in urban Atlanta or Austin, has

nothing to do with fraud.

In Figure 1 below, I provide a histogram of Joe Biden’s vote share across

counties in 2020. Like the precinct-level histograms from Fulton County in Dr.

Quinnell’s report, the distribution is clearly right-skewed, but it is very difficult to

imagine what this might have to do with fraud.

Figure 1: Distribution of Biden Vote Share Across U.S. Counties, 2020

3 Jonathan Rodden. 2019. Why Cities Lose: The Deep Roots of the Urban-Rural Divide. New York: Basic Books.

In short, there is no natural law suggesting that election results across

geographic units should be normally distributed around the mean, especially if those

units are asymmetric in their size. To the contrary, when relevant social groups are

clustered in space, it is more typical to see a skewed distribution.

Dr. Quinnell’s underlying theory of fraud, however, apparently relates to the

change in vote share. Perhaps he means to argue that the distribution of the change

from one election to the next in votes or vote shares across geographic units should

always have a normal distribution. But this argument would make no more sense

than an argument about voting levels. Members of politically relevant groups—for

instance young people, racial minorities, or college graduates—are typically not

uniformly or randomly distributed across geographic units, especially in the United

States. If an incumbent candidate pursues policies and rhetoric that attract or repel a

geographically clustered group, we can expect to see a non-normal distribution of

changes in vote shares.

For instance, it appears that Donald Trump’s appeals in the 2020 election

resonated with Cuban and Venezuelan Americans in South Florida, and with Tejano

voters in Texas. As a result, Trump experienced surprisingly large increases in vote

shares in counties where those groups made up a large share of the population. This

translated into a right-skewed distribution of changes in the Republican vote share

from 2016 to 2020. We can see this in the top panel of Figure 2, which focuses on

Texas. I take the 2020 Trump margin of victory (or loss) in each county and subtract

the 2016 margin, so that higher numbers mean Trump improved his margin over

2016, while lower numbers mean that he lost support relative to 2016.

Figure 2: County Histograms of Increase in Trump Margin, 2016-2020

In Texas, the distribution of Trump’s gains across counties has a pronounced

right skew—just as in Dr. Quinnell’s graphs. On the left side of the graph, there are

a large number of suburban counties in which Trump lost support, but some counties

in the tail of the distribution experienced rather extraordinary increases in

Republican vote share. Yet, according to Dr. Quinnell’s rule, we must conclude that

some nefarious actor committed fraud on behalf of President Trump in Texas. This

is simply not a credible argument. The counties in the tail of the distribution are

majority-Hispanic counties along the border. A far more likely story is that President

Trump experienced a non-fraudulent increase in support among this population of

Hispanic voters.

The next panel in Figure 2 repeats this histogram for the counties of Georgia.

In Georgia, there is a slight left skew, indicating that there are a handful of counties where Biden’s gains were a bit further from the average county than were those in the rural

counties on the right side of the histogram, where Trump was gaining. Note that the

left side of the distribution in Georgia looks similar to that in Texas. As in Texas,

there are some suburban counties, like Cobb, Forsyth, and Henry, where the

Democratic margin increased substantially. Just as it makes little sense to blame the

very long right tail of the Texas distribution on fraud, it makes little sense to blame

the modest left tail of the Georgia distribution on fraud.

A much better explanation is that Georgia is similar to almost every other state

in the country, in that Biden made especially large gains relative to Clinton in

diverse, educated, and growing suburbs. Prior to 2020, many of these suburban

counties had Republican majorities. This fact is relevant for conspiracy theories

about nefarious actors, since in many of these counties in Georgia and around the

country, election administrators were appointed by Republicans. It is difficult to

comprehend why Republican election administrators would participate in a plot to

help the Democratic presidential candidate.

In other words, just as with Republicans in Texas—where the story has to do

with a shift among Hispanic voters—in Georgia there is an obvious reason why the

distribution of changes in votes for the Democratic presidential candidate would be

skewed relative to those of the Republican candidate. In Georgia, as in many other

states, population growth is an important part of the story. Perhaps the most striking

feature of the Georgia counties where Biden made the largest gains relative to

Clinton is that they have been experiencing high population growth, above all due

to in-migration from other places.

Figure 3: Population Change and Change in Democratic Vote Share, Georgia Counties

Figure 3 uses population estimates from the Census Bureau to calculate county-

level population change over the last decade on the horizontal axis. On the vertical

axis, it displays the change in county-level Democratic vote share from 2016 to 2020,

so that higher numbers correspond to higher Democratic vote share in 2020 than in

2016. The size of the data marker corresponds to the size of the county in 2019. We

can see that throughout the state, Trump’s support increased primarily in small, rural

counties where the population has been declining over the last decade (the lower

left-hand part of the graph). Relative to Clinton, Biden’s support increased the most

in counties where the population grew the most (the upper right-hand part of the

graph). In fact, this is true in almost every U.S. state, and this trend was already quite

strong prior to 2020.4 Thus, there is nothing anomalous or nefarious about the fact

that Biden added far more votes than Trump in the rapidly-growing suburban

counties of Georgia.

Precinct-level analysis of Fulton County

Perhaps for good reason, Dr. Quinnell did not test his “departure from

normality” theory on county-level data. For reasons he does not explain, he

examined only precinct-level data from Fulton County. His choice of Fulton County

for a case study is rather odd. He seems to want to argue that the shift toward the

Democrats in Fulton County was suspiciously high and anomalous. In order to

examine whether this claim is plausible, Figure 4 displays the evolution of the

Democratic vote share over time in Fulton County and several other Georgia

counties. While Fulton County is indeed one of the most Democratic counties in the

4 Rodden, Why Cities Lose, op. cit., chapter 9.

state, there is no way to interpret the Fulton County time series as displaying a

deviation from trend in 2020. In fact, the increase in Democratic vote share over the

previous election was far lower in 2020 than in 2016. As described above, the

Democratic vote share has been growing far more rapidly in suburban counties

surrounding Fulton County, like Cobb, Douglas, Henry, and Gwinnett.

Figure 4: Democratic Presidential Vote, 1988 to 2020, Selected Georgia Counties

Even though there is little evidence that Fulton County’s overall results are

anomalous in any sense, let us examine Dr. Quinnell’s claims about the distribution

of votes across Fulton County’s precincts. Dr. Quinnell’s analysis focuses on the

distribution of changes in raw vote totals for the two parties from 2016 to 2020

across precincts in Fulton County. Evidently, Dr. Quinnell downloaded precinct-

level results from 2016 and 2020 and attempted to merge the two datasets together

based on their precinct identifiers. Unfortunately, constructing a meaningful time-

series precinct-level data set is not so simple. County-level election administrators

frequently combine or split precincts or change their boundaries. Sometimes only

two or three precincts in an area are affected; other times, officials re-precinct a wide

swath of territory. In order to draw inferences about changes in votes over time

within precincts, one must be absolutely certain that the boundaries are identical in

the two time periods. This was most certainly not the case in Fulton County between

2016 and 2020. In November of 2016, votes were recorded in 342 precincts in Fulton

County, whereas in November of 2020, votes were recorded in 384 precincts—an

increase of 42 precincts. This is a problem for Dr. Quinnell’s analysis because he is

comparing votes cast in two different systems of precincts. In many cases, precincts

with the same name in 2016 and 2020 are quite different in the two years, especially

in suburban areas.

I have obtained digital boundary files for the precincts used in 2016 and 2020.

Using geo-spatial software, I mapped the two boundary systems, and inspected each

of the 384 precincts used in 2020 to ascertain which precincts used the same

boundaries in 2016 and again in 2020. I discovered that only 260 of the precincts

used the same boundaries in both years. It is not clear what Dr. Quinnell has done

with the other 124 precincts. Some of them are completely new precincts that have

been carved out since 2016, such that there was no precinct by the same name in

2016. For many others, I discovered a mix of splits, combinations, and swaths of

geography where the boundaries have been completely redrawn. It is often the case

that a precinct still exists with the same name, but it has different boundaries and

includes a different set of voters. For each of these precincts, it is completely

meaningless to subtract the precinct-level vote total of one of the candidates in 2016

from the total in 2020 for the precinct with the same name. Many of the precincts

that experienced boundary changes were in the rapidly-growing, suburban sections

of South Fulton County, such as Chattahoochee Hills and Fairburn, where new real

estate developments are bringing significant change to the built environment each

year.

It seems that Dr. Quinnell was at least somewhat aware of this problem,

because in his report, he placed asterisks by the precincts that he claims were

redistricted. He does not explain, however, how he ascertained which precincts were

redistricted. And something went wrong, because Dr. Quinnell’s list is far from

complete. For instance, just to take one example, in the table on page 15, he does not

place an asterisk next to precinct RW11A (in Roswell). In Figure 5, I provide a map

of the boundaries of precincts in that part of Fulton County in 2016, in solid red, and

in 2020, with a dashed black line. We can see that the old precinct RW11A was

subdivided into RW11A and RW11B. A comparison of vote totals in the old and

new versions of RW11A based on a simple name merge would not be meaningful.

Figure 5: Selected Precinct Boundaries in Fulton County, Georgia

In fact, in much of Fulton County, the problem of matching precincts is far

more complex than simple splits like RW11A and RW11B. For instance, consider

the case of precinct SC30B, in the middle of Figure 6. The old boundary is in red.

The new boundaries (marked with black dashes) carve out parts of SC30B and place

fragments in 11C, 11M, and 10B. Meaningful over-time comparisons cannot be

made in any of these precincts. Note that there are similar issues throughout Figure

6. For instance, fragments of the old SC14 have been placed in 10A, FC02, and

SC14A. Similar examples, where the red and dashed block lines are not directly on

top of one another, can be found throughout Fulton County.

Figure 6: Selected Precinct Boundaries in Fulton County, Georgia

Perhaps in anticipation of this type of critique, Dr. Quinnell conducted some

analysis in which he aggregated the data to the level of units he refers to as

“counties.” If I understand correctly, he aggregates 2016 and 2020 votes by clusters

of precincts according to the first two letters of the precinct name (10, 11, EP, SC,

FC, and so forth). Those beginning with numbers are based in the city of Atlanta.

The others correspond loosely to names of other cities of Fulton County, e.g. EP =

East Point, CP = College Park, and so on. This clustering, however, does not solve

the problem at all because these units are not stable over time. That is to say, precinct

splits, combinations, and complete redraws frequently cross over from one of these

clusters to another, as demonstrated in Figure 6. This problem is especially severe

in suburban parts of Southern Fulton County.

In sum, I am skeptical that any inferences can be drawn from Dr. Quinnell’s

data set at all—even the observations without asterisks. Fulton County’s precinct

structure has experienced far too much change for his data set to be useful. He wishes

to characterize certain precinct-level vote changes as “anomalous,” even though

many of his so-called anomalies are likely completely meaningless because they

compare different geographic units, and hence different voters, over time.

Let us turn to the 260 precincts for which I have verified that the precinct

geography is common over time, and examine whether there is evidence of

something odd about the data in these precincts for which valid over-time

comparisons can be made. As explained above, Dr. Quinnell’s main concern is that

there are a number of precincts with very large increases in Democratic votes relative

to the increases in Republican votes. Indeed, in my data set, which includes most of

central and Northern Fulton County, there are 28 precincts in which Biden’s total

number of votes exceeded Clinton’s by more than 500, and there is not a single

precinct where Donald Trump’s vote total increased by more than 500 votes since

2016.

Figure 7: Trump 2016 Vote Share and Increases in Votes for Both
Candidates in 2020, Fulton County Precincts

What is going on with these precincts where votes for Biden increased by a

great deal and votes for Trump did not? First of all, these precincts are not the

extremely Democratic precincts of the Atlanta urban core. Figure 7 presents a scatter

plot, where Donald Trump’s 2016 vote share is displayed on the horizontal axis. On

the vertical axis is, for each precinct, a red dot for the increase in raw votes for Trump

vis-à-vis 2016, and a blue dot for the increase in votes for Biden over Clinton’s votes

in 2016. It shows that there is not a strong relationship between precinct partisanship

and the relative increase in Biden votes. If anything, Biden’s gains were somewhat

larger in more Republican precincts—a pattern that was also noted by Dr. Ayyadurai

(see below).

Figure 8: Increase in Registered Voters and Increases in Votes for Both
Candidates in 2020, Fulton County Precincts

Figure 8 resolves any mystery about the precincts that experienced large

asymmetric increases in Democratic votes in Fulton County. It once again plots the

raw vote changes for the candidates on the vertical axis, but on the horizontal axis

it plots the increase in the number of registered voters from 2016 to 2020. On the

left side of the graph are precincts that did not experience much population gain

over the last four years. Many of these are in the urban core of Atlanta. As we

move to the right on the graph, we move into rapidly-growing precincts in more

suburban parts of Fulton County, where new housing developments, and in some

cases entirely new neighborhoods, have been built since 2016. In other words, the

precinct-level results in Fulton County are entirely consistent with the county-level

relationship discussed above, and indeed with the relationship that has been seen in

metro areas around the country: Biden’s gains were modest in the stagnant urban

core and largest in the most rapidly-growing suburban areas. There is nothing

anomalous about Fulton County and nothing that would indicate fraud. Just as

Trump’s large gains in certain Hispanic neighborhoods do not indicate fraud,

Biden’s large gains in growing suburban neighborhoods do not indicate fraud.

V. AYYADURAI REPORT

Dr. Ayyadurai claims to have discovered “massive anomalies in Republican

voting patterns and ethnic distribution of votes.” First, he uses data from several

counties to establish a pattern that he repeatedly calls “High Republican, But Low

Trump.” He provides no indication of his data sources and does not explain how

he measures his variables. Yet he appears to claim, in essence, that split-ticket voting

among white Republicans is evidence of fraud. His claims about race and ethnicity

are, frankly, inscrutable, and thus difficult to evaluate with data analysis.

Nevertheless, I have assembled precinct-level data in order to search for any possible

anomalies that might be linked with the most reasonable possible interpretations of

what Dr. Ayyadurai appears to be claiming.

Let us begin, as does Dr. Ayyadurai, in Chatham County—home to Savannah.

On page four of his report, Dr. Ayyadurai presents a graph that purports to show that

“as the percentage of Republicans in precincts increases, President Trump gets fewer

votes.” He does not explain why this is problematic or what these graphs even mean.

If one takes some quantity of interest and then subtracts some number from it, it is

quite likely to be negatively correlated with that number. He also does not explain

how he determines “the percentage of Republicans in precincts.” Partisanship is not

an immutable characteristic, and in Georgia, one does not register with election

administrators as a member of one party or the other. When participating in

primaries, voters can request the ballot of any party they choose. Perhaps Dr.

Ayyadurai has obtained precinct-level results of the most recent primary and

determined that “the percentage of Republicans in a precinct” is simply the number

of Republican ballots cast as a share of all ballots cast in the primary.

This would be a very poor measure of precinct-level partisanship, however, because

relatively few voters participate in primaries, and their participation is likely to be

driven by the competitiveness of the races for each party. For instance, President

Trump was not being challenged in the June primary, while there was a competitive

Democratic primary. In any case, in an effort to reverse engineer Dr. Ayyadurai’s

analysis, I have calculated the ballots cast for President Trump in the 2020

primary as a share of all ballots cast in either party’s presidential primary. In Figure

9, I plot Trump’s share of all primary ballots cast—my best guess of Ayyadurai’s

measure of Republican partisanship—on the horizontal axis, and Trump’s share of

the vote in the 2020 general election on the vertical axis. I also include a 45-degree

line, so that any observation above the line indicates that Trump over-performed in

the general election vis-à-vis the primary.

Figure 9: Trump Share of 2020 Total Primary Ballots Cast and Trump
Share of 2020 General Election Vote, Precincts, Chatham County

Given that there was considerable excitement about the primary among

Democrats, and there was only a single uncontested candidate for the Republicans,

it is not surprising that most of the dots are above the line. It appears that there was

a participation gap in favor of Democrats in the primary, but this gap faded by

election day. Only in the very Republican precincts were the observations clustered

around the 45-degree line or slightly below.

Let us now transform this graph into the one presented by Dr. Ayyadurai. We

can measure Trump’s over-performance in the general election relative to the

primary by subtracting the primary vote share from the election-day vote share. We

can then plot that quantity on the vertical axis, and the primary vote share—

presumably Ayyadurai’s measure of “the share of Republicans in a precinct”—on

the horizontal axis.

Figure 10: Reverse Engineering of Ayyadurai Plot

Figure 10 looks very similar to Dr. Ayyadurai’s plot (page 5). Due to the

weak primary turnout among Republicans relative to Democrats, it is not

at all surprising that Trump received a higher vote share in the General Election than

in the primary in most precincts. It is also not surprising that this effect would fade

in precincts with relatively few Democrats. What is surprising is that this could

possibly be viewed as somehow indicative of fraud.

Figure 11: Down-Ballot Republican Vote Share and Trump Vote Share,
2020 General Election, Precincts of Chatham County

Figure 12: Trump Over-Performance Relative to Down-Ballot Republicans, 2020 General Election, Precincts of Chatham County

Let us take another approach to the measurement of precinct-level

partisanship by looking at other races that occurred on the same ballot on November

3, 2020. In addition to the first round of the Senate election, there were two relatively

low-profile races for the Georgia Public Service Commission. One might argue that

such races are more likely to be based on underlying partisan attachments rather than

personalities. I have added up the Republican vote share in these down-ballot races

and plot this against the Trump vote share in Figure 11, again including a 45-degree

line. And in Figure 12, I present the data using Dr. Ayyadurai’s approach.

In Figure 11, we see that in the majority-Democratic precincts on the left,

down-ballot vote shares and presidential vote shares are almost exactly the same.

However, as we move to the right—into more Republican precincts—we see that

Trump begins to under-perform relative to the down-ballot Republicans. And in

Figure 12, we see once again the pattern that Dr. Ayyadurai refers to as “high

Republican but low Trump.” Trump under-performed relative to other Republican

candidates throughout Chatham County, but that under-performance was most

pronounced in the most Republican districts—many of which are overwhelmingly

white, educated, and high-income. Figure 13 helps us visualize this. I have obtained

geographic boundary files of Chatham county’s 2020 precincts and combined them

with data on race and election results. On the left is a map of race, and on the right

is a map of split ticket voting expressed as Trump’s over-performance relative to

down-ballot Republicans. The darkest orange color captures the precincts where

Trump very slightly over-performed relative to down-ballot Republicans. Many of

these are precincts with relatively large African-American populations. As the colors

get lighter and move toward yellow, Trump under-performs relative to down-ballot

Republicans by larger amounts. We can see that his greatest under-performance was

in white, traditionally Republican neighborhoods, many of which are relatively

educated and affluent.

Figure 13: Race and Split-Ticket Voting in Chatham County, GA, November 2020

Dr. Ayyadurai’s phrase—“high Republican but low Trump”—describes

something we saw not only in Savannah but in metro areas around Georgia and the

United States: white metro-area voters who typically vote for Republican candidates

continued to do so in down-ballot races, but a number of them voted for the

Democratic candidate in the presidential race. It is quite unclear what this pattern of

split-ticket voting could possibly have to do with election fraud.

In addition to his curious claims about partisanship, Dr. Ayyadurai also makes

some statements about race that are difficult to comprehend. He presents graphs that

he says are “cumulative vote totals.” He does not explain what he means by this or

what is happening as one moves from left to right on these graphs. It is unclear

whether they are supposed to represent an array of precincts, arranged from small to

large or from Republican to Democratic, vote counts as they unfold over time on

election night, or something else. He then introduces a line on the graph that he says

“plots the number of votes for President Trump based on the same ethnic

demographic distribution to match the pattern of actual votes reported by the

Secretary of State” (p.7). I simply have no idea what this means. Perhaps he has

estimated some sort of model using precinct-level data, where he tries to predict vote

shares from precinct-level racial data. He does not tell the reader what he has done

with racial data, what assumptions he has made, or why race is even relevant for his

analysis. Without any corresponding analysis or data, he then makes a truly

incomprehensible claim: “the only way to explain the results, reported by the

Secretary of State, is if President Trump did not receive one single Black vote” (p.

8). Because this claim is not supported by any data or even a description of the logic

that gave rise to it, I am not sure how to evaluate it. Dr. Ayyadurai seems to have

made some unusual assumptions about how ethnic identity should, in his view,

translate into votes in Georgia. The ballot is secret, and individual-level data on race

and voting are unavailable. It is possible to conduct ecological inference analysis

using precinct-level data in order to estimate the voting behavior of racial groups,

but Dr. Ayyadurai makes no mention of having conducted this type of analysis, and

even if he had, it is simply not possible to use aggregate data to make a claim like

the one about Trump “not receiving a single black vote.” One cannot draw any such

conclusion from the data at hand.

In the remainder of his report, Dr. Ayyadurai repeats the same analysis for

several additional counties. For each of the counties, Dr. Ayyadurai merely points

out that Donald Trump under-performed in relatively white, Republican suburban

areas. At no point does he explain what President Trump’s difficulties in suburban

Georgia have to do with election fraud.

Finally, Dr. Ayyadurai makes an additional claim. On page 26, he claims to

find “unequivocal evidence of an algorithm that has been put in place such that when

a precinct nears approximately ten-percent (“10%”) in White voters, a linearly

increasing percentage of total votes is transferred from President Trump to Mr.

Biden.” Dr. Ayyadurai does not provide any evidence of any such phenomenon.

Once again, it is quite difficult to piece together the logic behind this claim, or to

make sense of the data that Dr. Ayyadurai believes might support it. His analysis

appears to involve some estimate of “the difference between Mr. Biden’s votes as

reported by the Secretary of State of Georgia and what he should have received based

on the ethnic distribution of DeKalb County” (p. 27). Dr. Ayyadurai does not help

the reader by explaining what Biden “should have received.” Evidently, he believes

that Biden should have received only votes from African Americans, and zero votes

from whites, such that any Biden vote share above 40 percent, for instance, in a 60

percent white precinct is viewed as somehow anomalous or excessive. For reasons

that are unclear, he seems to then claim that it is especially suspicious if Biden’s

over-performance relative to an “ethnic headcount” model is larger in whiter

precincts. This view of voting as a simple ethnic headcount in a diverse suburban

environment like DeKalb County is unusual to say the least. Moreover, it is unclear

why a strong performance for Biden in majority-white suburban precincts would

constitute evidence of fraud.

Once again, it is helpful to visualize the data in question. From the Secretary

of State, I have obtained precinct-level racial data along with 2020 election results

for DeKalb County. In Figure 14, I plot whites as a share of registered voters on the

horizontal axis, and Biden’s vote share on the vertical axis. There is a negative

relationship between whites as a share of registered voters and Biden’s vote share,

but DeKalb County elections cannot be characterized as an ethnic headcount. Note

that in DeKalb County, even the precincts that are over 80 percent white are still, on

average, strongly Democratic. And in the upper right-hand section of the graph, there

are a large number of overwhelmingly white precincts where Biden received a very

large share of votes. It is not possible to identify anything resembling a mechanical,

machine-like increase in Democratic vote share as one moves from left to right in

Figure 14. Rather, there is a cloud of majority-white districts where Biden performs

especially well.

Figure 14: Whites as Share of Registered Voters and November 2020 Biden Vote Share, Precincts of DeKalb County, GA

Figure 15: Map of Race and November 2020 Biden Vote Share,
Precincts of DeKalb County, GA

It is also useful to visualize DeKalb County election results on a map. For instance,

many of the white precincts with relatively high Biden vote shares are contiguous

neighbors on the west side of the county, closer to Atlanta. There is nothing about

the data displayed in Figures 14 or 15 that would seem to indicate any kind of fraud.

Support for Democrats among suburban whites in racially heterogeneous areas is

common around the United States and does not constitute evidence of fraud.

VI. RAMSLAND REPORT

Mr. Ramsland presents empirical analysis that demonstrates, in his telling,

that Joseph Biden received higher vote shares in counties that use voting machines

made by the manufacturers Dominion and Hart, and that Biden “overperforms” in a

larger share of counties using those machines than in counties using other machines.

Mr. Ramsland makes vague allusions to rogue foreign actors, and concludes with

the statement that the use of certain voting machines “affected 2020 election results”

(page 11), indicating that he believes he has uncovered a causal relationship,

whereby certain types of machines are responsible for boosting the Democratic vote

share. Mr. Ramsland’s research design is flawed in several crucial respects. First, he

relies on idiosyncratic, non-standard statistical techniques that are not suited for the

analysis he wishes to accomplish, and more importantly, he relies on a correlation

that is driven primarily by cross-state variation and makes no effort to address a

serious causal inference problem.

To demonstrate these problems and conduct a more appropriate analysis, I

have created my own dataset of county-level votes from 2008 to 2020, merged with

county demographic data from the 2014-2018 American Community Survey

(ACS),5 the September 2020 county-level unemployment rate from the Bureau of Labor

5 Demographic variables from the ACS include: the age distribution, sex distribution, percent Black, percent Latino, the percent of renters, median household income, percent of the county with a college degree, and percent under the poverty line.

Statistics, and data on voting technologies used in each jurisdiction collected by

Verified Voting. 6 Verified Voting is a “non-partisan organization focused

exclusively on the critical role technology plays in election administration” that has

developed “the most comprehensive publicly-accessible database of voting systems

used around the country.” 7 I accessed a dataset showing the various voting systems

that were in place for each jurisdiction in 2012, 2016, and 2020.

Mr. Ramsland’s report says he uses data from the Election Assistance

Commission (EAC). I have been unable to locate a dataset from the EAC that

contains data on voting systems used across the country in 2020. The most recent

data available from the EAC is from 2018. 8 Its 2020 survey of election

administrators—which appears to be the source of the data on voting systems—has

yet to be released. As the complaint notes, Georgia had not adopted Dominion voting

equipment in 2018.

Mr. Ramsland describes a two-step procedure that is not a standard method of

data analysis. Instead of generating predictions using a model that does not include

data on voting systems, a more appropriate analysis should include both voting-

6 In preparing this data set and conducting the analysis set forth in this section of the report, I received assistance from William Marble, an advanced PhD candidate in political science at Stanford University. Mr. Marble has worked with me in a similar capacity in the past, and it is standard to utilize such assistants in my field of expertise.
7 https://verifiedvoting.org/about/
8 https://www.eac.gov/research-and-data/datasets-codebooks-and-surveys

systems data and demographic data in one unified model.9 I conduct such an

analysis below. Additionally, Mr. Ramsland makes some incorrect statements when

describing his analysis. The report states that “[i]n normal circumstances any

candidate should perform above expectations roughly 50% of the time and under-

perform roughly 50% of the time” (par. 11). This statement is incorrect. In fact, the

statistical procedure used in Mr. Ramsland’s report guarantees that the average

difference between the actual vote share and the predicted vote share is 0. It does not

guarantee, however, that the proportion of observations in which the vote share is

over- or under-predicted is roughly 50%.10

Though Mr. Ramsland’s two-step procedure is not especially useful, let us

take very seriously his claim that the introduction of certain types of voting

technology, via some unspecified form of fraud, actually has a causal impact on vote

shares. In other words, we would like to answer the following question: if there are

two counties that are otherwise identical in every respect, including their initial type

of voting technology, and one switches from some other voting technology to

Dominion and the other stays the same, does the switching county exhibit a change

in voting behavior relative to the “control” county that stayed the same? In the ideal

9 Additionally, Mr. Ramsland’s report is light on methodological details. For example, it does not describe which Census variables are included in his model.
10 This is a well-known result. Technically, linear regression finds a set of coefficients so that the sum of squared deviations between the predicted and actual values is minimized, along with the constraint that the average deviation is 0. This procedure can produce results where there are many small positive deviations, offset by a few large negative deviations (or vice versa).

world, we would conduct an experiment, much like a drug trial, randomly assigning each county to either the “treatment condition”—the use of Dominion software—or the “control condition”—the maintenance of the existing

system. By randomizing a sufficiently large number of counties to the treatment and

control condition, a researcher would be able to anticipate that there are no

systematic differences between the treatment and control counties. Above all, we

would hope that this randomization would achieve a balance between the two

groups, such that prior Democratic or Republican voting would be similar in the two

groups, as would other correlates of voting behavior, such as income, race, and

education. We would then be able to isolate any possible impact of voting

equipment.

Unfortunately, this type of experiment is unavailable to us. Counties and states

have adopted voting technology in a way that is far from random. Counties that

adopted Dominion systems between 2016 and 2020 are quite different from those

that did not. Counties that switched to Dominion systems between 2016 and 2020

have larger shares of female residents, Latino residents, college-educated residents,

and have lower median incomes. All of these variables are correlated with political

attitudes. Moreover, they are likely correlated with unobservable variables that also

correlate with political attitudes and partisanship.

Even worse, it is clearly the case that Democratic counties have been more

likely to adopt Dominion machines than Republican counties. This is demonstrated

in Figure 16. The left-hand panel considers all counties in the country and shows

that counties won by Clinton in 2016 were far more likely than counties won by

Trump to make use of Dominion technology in 2020. The right-hand panel focuses

on counties that were not yet using Dominion technology in 2016 and shows that

counties won by Clinton were significantly more likely than counties won by Trump

to adopt Dominion technology.

Figure 16: Voting Technology Use in 2020 by County Partisanship

Seven states have adopted Dominion technology across all of their counties,

and 20 states have not adopted Dominion technology in any of their counties. The

former states are predominantly Democratic, and the latter lean Republican. This

can be seen in Figure 17, which plots Hillary Clinton’s 2016 statewide vote share on

the horizontal axis and the share of counties using Dominion software in 2020 on

the vertical axis. It shows that Dominion software was most prominently in use in

2020 in states that were already relatively Democratic in 2016.

Figure 17: Clinton 2016 Vote Share and 2020 Voting Technology

By now it should be clear why Mr. Ramsland’s empirical analysis suffers from

a vexing causal inference problem. If extremely Democratic counties in New England states adopted a certain software in the past, and one examined a

contemporary correlation between voting behavior and the use of that technology,

that correlation could not plausibly be interpreted as evidence that the technology

caused the voting outcomes, even if one attempted to control for potential observable

confounders like race and income. It is simply not plausible that Connecticut is more

Democratic than Wyoming because of its voting technology.

State Fixed Effects Model

Mr. Ramsland sweeps these complexities under the rug. Unfortunately, there

is no easy solution to this causal inference problem. At a minimum, we can try to

draw inferences from within the states where there is variation across counties in

voting technology, attempting to control for observable county-level confounders.

This can be achieved by estimating a model with “fixed effects” for states. Inclusion

of state-level fixed effects allows us to control for common state-level factors that induce correlation among the outcomes of counties within the same state.

This does not “solve” the causal inference problem, but at least it allows for more

valid comparisons. For this reason, inclusion of fixed effects is standard practice in

social science research for this type of study. 11

I estimate a county-level model in which the dependent variable is the 2020

Democratic vote share and the main independent variable of interest is a binary

11 For example, see Angrist, J. and Pischke, J.-S. 2009. Mostly Harmless Econometrics. Princeton, NJ: Princeton University Press.

variable indicating whether the county used Dominion technology in 2020. The model

includes a set of demographic control variables, past election results, and state-level

fixed effects. The full results are presented in Appendix Table A1. The coefficient

capturing the impact of the use of Dominion technology is statistically

indistinguishable from zero. The same is true for the use of Hart technology.

Placebo Test Using Bordering Counties

In sum, when we rely on comparisons of counties within states, there is no

evidence that election technology has an impact on vote shares. Mr. Ramsland

provides no regression output or details about his analysis, but he seems to have

estimated some sort of regression model. He makes no mention of having included

fixed effects. As one can see in Figure 17 above, it is clear that a naïve empirical

model without fixed effects for states would generate the illusion of a relationship

between voting technology and election outcomes simply because Democratic states

have been somewhat more likely to purchase Dominion equipment.

A good way to observe this phenomenon is to conduct a “placebo” test in

which we examine Biden’s vote share in counties that did not use Dominion systems

but border a county that did use Dominion. If there is an impact of voting software

on election outcomes via fraud, it should certainly not be detected in counties that

border the Dominion counties but use some other election technology system. If we

see that those counties have elevated Democratic vote shares mimicking the

supposed “effect” of Dominion software—what is known as a “placebo” effect—we

should be very skeptical about claims that use of the software is associated with

increased Democratic voting. Rather, we would understand that the correlation

reported by Mr. Ramsland is driven by some features of the types of regions where

Dominion software has been adopted—not the software itself.

The results of this analysis are shown in Appendix Table A2, which reports a linear regression of Biden vote share on an indicator variable for whether a county borders

a Dominion (or Hart) county. This regression is estimated among counties that used

neither Dominion nor Hart systems, and it includes a set of demographic control

variables. It shows that Biden received a higher vote share, about .86 of a percentage

point, in counties that border a Dominion county than in those that do not. It would

be implausible to claim that voting technology in bordering counties has a causal

impact on Biden’s vote share. A more plausible interpretation is that there are some

common features of politics in the regions that have adopted the software, and the

research design that Mr. Ramsland appears to have used in his report is likely to turn

up spurious results.

Placebo Test Using Prior Election Results

A research strategy designed to estimate the effect of one variable on another

variable can be evaluated by its tendency to detect an effect when an

effect does exist, and its tendency not to detect an effect when an effect does

not exist. When a research design detects an effect where none exists, we say it

returns a false positive. Designs with a high false positive rate are not very

informative: an effect could be detected by the research design due to the existence

of a real effect, or it could be a false positive.

We can make a further evaluation of the propensity of the research design that

Mr. Ramsland appears to have used in his report to return false positives by seeing

whether it detects that future events have an “effect” on past outcomes. Of course,

this is logically impossible—we know that events happening in the future cannot

affect past outcomes. Thus, any effect detected on past outcomes is necessarily a

false positive.

In Appendix Table A3, I replicate the basic research design that I believe was

used in the Ramsland report. It uses linear regression models, without state fixed

effects, to predict Democratic vote share as a function of whether a county used

Dominion voting technology in 2020, along with county-level demographic and

economic control variables. However, instead of predicting the 2020 vote share, I predict the 2012 and 2016 vote shares. I exclude counties that used Dominion systems at the time

of the election being analyzed.

The results indicate that in 2012, in counties that did not use Dominion in

2012 but did use them in 2020, Barack Obama received a vote share about 5 to 7 percentage points higher, compared to counties that did not use Dominion machines in either

2012 or 2020. The next column shows a similar pattern for 2016. Future use of

Dominion predicts higher Clinton vote share in 2016, even in counties that did not

use Dominion in 2016.

These results are false positives: there is no logical way that future use of

Dominion voting machines could have affected past outcomes. Instead, these results

are due to the fact that counties that used Dominion voting systems in 2020 are

politically different than counties that did not, even after controlling for demographic

and economic variables. This test shows that the research design used in the

Ramsland report is ill-equipped to detect differences in vote shares that

are caused by use of particular voting systems. As such, the statistical analysis in the

Ramsland report provides no evidence of fraud due to use of Dominion or Hart

voting machines.

Ranked Choice Voting

Mr. Ramsland also makes a confusing claim that election results may have

been altered in Michigan because voting machines were set to perform ranked choice

voting, which Mr. Ramsland refers to as a “feature enhancement.” From this

discussion, it seems likely that Mr. Ramsland is not familiar with ranked choice

voting. It involves a different type of ballot, in which voters rank their preferences

among candidates. This type of ballot was not used in Michigan. Even if all of the

ballots in Michigan were somehow counted or processed using ranked choice voting,

but using ballots that only allowed voters to select one candidate, the result would

be the same. Ranked choice voting is a system in which, if one candidate has a majority in the first round of counting, the process is over and no votes are redistributed. Only if no candidate wins a majority is there a second round, in which the candidate with the fewest first-choice votes is eliminated and the ballots ranking that candidate first have their second-choice votes

tallied. Clearly, nothing of the sort happened in Georgia. Jo Jorgensen, the

Libertarian candidate, was credited with 62,138 votes in Georgia. Significant numbers of votes

were also recorded throughout the state for additional parties as well as write-in

candidates.

Mr. Ramsland also seems to believe that ranked choice voting would

somehow produce non-integer vote totals. This is simply not the case. Ranked-

choice voting is no more capable of producing non-integer vote totals than is the

winner-take-all plurality system. I have examined precinct-level vote totals from

county election officials around Georgia and have seen no non-integer vote totals. It

appears that Mr. Ramsland may have been thrown off by election-night reporting by

Edison Research that contained Biden and Trump vote totals that were not always

whole numbers. One obvious possibility is that, when producing a field for total vote numbers in their election-night data feed, workers at Edison Research multiplied total votes cast by vote shares that had already been rounded.

VII. CONCLUSION

None of these authors offers a specific theory about how they believe fraud

was actually carried out. They veer from insinuations that foreign actors changed

votes via malicious software, to more traditional efforts to blame nefarious election

administrators in specific counties or precincts. Dr. Quinnell does not specify

whether he believes that some unspecified fraud took place among administrators in

particular suburban Fulton County precincts, or that a malicious actor at the county

level or beyond somehow selected these suburban precincts to manipulate. For

reasons that are unclear, Dr. Ayyadurai seems to suggest that malicious coders

decided to add Democratic votes to precisely the white, suburban, traditionally

Republican precincts in Georgia that have been trending away from the Republican

Party in the Trump era. Mr. Ramsland seems to have a broader conspiracy in mind,

where malicious coders are subverting the will of voters in every state, including

extremely Democratic states of the Northeast.

The visions of fraud and conspiracy that motivate these reports are difficult to

pin down and seem to conflict with one another. The data presented in these reports

have nothing to do with fraud, and the authors do not even attempt to link their so-

called “anomalies” to theories about how fraud might be carried out. Though these

reports offer some insight into the production process for conspiracy theories, they

provide no evidence whatsoever of anomalies or irregularities in Georgia’s 2020

general election results.

Appendix

Table A1: Fixed Effects Model, County-Level Democratic Vote Share in 2020

Dem vote
share, 2020
Dominion 2020 0.031
(0.25)
Hart 2020 -0.014
(0.08)
female -0.003
(0.18)
Black 0.022
(2.57)*
Latino -0.078
(9.43)**
College 0.086
(7.31)**
Age 25-34 0.014
(0.52)
Age 35-44 0.074
(2.56)*
Age 45-54 -0.028
(0.85)
Age 55-64 0.123
(4.16)**
Age 65 and over -0.030
(1.63)
Median income -0.016
(1.79)
Poverty rate -0.003
(0.16)
Unemployment rate -0.140
(3.73)**
Renter share -0.011
(0.88)
Share urban 0.019
(7.81)**
Log population density 0.240
(3.54)**
Dem. vote share 2016 1.047
(51.38)**
Dem. vote share 2012 -0.093
(3.76)**
Dem. vote share 2008 -0.026
(1.43)
Constant 0.465
(0.26)
R2 0.99
N 3,110
* p<0.05; ** p<0.01

Table A2: Border Placebo Analysis

Dem vote
share, 2020
Dominion 2020 0.855*
(1.96)
Hart 2020 -3.860
(6.97)**
female 0.067
(0.60)
Black 0.389
(16.44)**
Latino 0.148
(5.00)**
College 0.746
(13.81)**
Age 25-34 -0.238
(1.53)
Age 35-44 -0.504
(3.03)**
Age 45-54 0.060
(0.33)
Age 55-64 0.738
(3.70)**
Age 65 and over -0.231
(2.43)*
Median income 0.156
(3.05)**
Poverty rate 0.564
(5.58)**
Unemployment rate 0.901
(6.10)**
Renter share 0.274
(4.56)**
Share urban 0.014
(1.04)
Log population density 1.812
(7.04)**
Constant -25.082
(2.43)*
R2 0.68
N 1,846
* p<0.05; ** p<0.01

Table A3: Previous Election Placebo Analysis
2012 Dem 2016 Dem
vote share vote share
2020 Dominion 5.605 3.310
(1.241)** (1.358)*
female 0.400 0.198
(0.131)** (0.113)
Black 0.352 0.466
(0.024)** (0.021)**
Latino 0.143 0.258
(0.034)** (0.031)**
College 0.331 0.660
(0.061)** (0.054)**
Age 25-34 -0.411 -0.254
(0.177)* (0.153)
Age 35-44 -0.799 -0.576
(0.194)** (0.168)**
Age 45-54 0.272 0.269
(0.225) (0.198)
Age 55-64 0.842 0.850
(0.235)** (0.206)**
Age 65 and over -0.117 -0.033
(0.120) (0.100)
Median income 0.152 0.150
(0.061)* (0.050)**
Poverty rate 0.656 0.671
(0.108)** (0.098)**
Renter share 0.325 0.337
(0.077)** (0.068)**
Share urban 0.008 0.006
(0.016) (0.013)
Log population density 2.444 2.387
(0.276)** (0.246)**
Constant -29.495 -41.937
(12.358)* (10.381)**
R2 0.39 0.61
N 1,946 2,097
* p<0.05; ** p<0.01

Jonathan Rodden
Stanford University
Department of Political Science Phone: (650) 723-5219
Encina Hall Central Fax: (650) 723-1808
616 Serra Street Email: jrodden@stanford.edu
Stanford, CA 94305

Personal
Born on August 18, 1971, St. Louis, MO.
United States Citizen.

Education
Ph.D. Political Science, Yale University, 2000.
Fulbright Scholar, University of Leipzig, Germany, 1993–1994.
B.A., Political Science, University of Michigan, 1993.

Academic Positions
Professor, Department of Political Science, Stanford University, 2012–present.

Senior Fellow, Hoover Institution, Stanford University, 2012–present.


Senior Fellow, Stanford Institute for Economic Policy Research, 2020–present.
Director, Spatial Social Science Lab, Stanford University, 2012–present.
W. Glenn Campbell and Rita Ricardo-Campbell National Fellow, Hoover Institution, Stanford
University, 2010–2012.
Associate Professor, Department of Political Science, Stanford University, 2007–2012.
Fellow, Center for Advanced Study in the Behavioral Sciences, Palo Alto, CA, 2006–2007.
Ford Career Development Associate Professor of Political Science, MIT, 2003–2006.

Visiting Scholar, Center for Basic Research in the Social Sciences, Harvard University, 2004.
Assistant Professor of Political Science, MIT, 1999–2003.
Instructor, Department of Political Science and School of Management, Yale University, 1997–1999.

Publications
Books
Why Cities Lose: The Deep Roots of the Urban-Rural Divide. Basic Books, 2019.
Decentralized Governance and Accountability: Academic Research and the Future of Donor Programming. Co-
edited with Erik Wibbels, Cambridge University Press, 2019.
Hamilton's Paradox: The Promise and Peril of Fiscal Federalism, Cambridge University Press, 2006. Winner,
Gregory Luebbert Award for Best Book in Comparative Politics, 2007.
Fiscal Decentralization and the Challenge of Hard Budget Constraints, MIT Press, 2003. Co-edited with
Gunnar Eskeland and Jennie Litvack.

Peer Reviewed Journal Articles


Partisan Dislocation: A Precinct-Level Measure of Representation and Gerrymandering, 2020, Political
Analysis forthcoming (with Daryl DeFord and Nick Eubank).

Who is my Neighbor? The Spatial Efficiency of Partisanship, 2020, Statistics and Public Policy (with
Nick Eubank).
Handgun Ownership and Suicide in California, 2020, New England Journal of Medicine 382:2220-2229
(with David M. Studdert, Yifan Zhang, Sonja A. Swanson, Lea Prince, Erin E. Holsinger, Matthew J.
Spittal, Garen J. Wintemute, and Matthew Miller).

Viral Voting: Social Networks and Political Participation, 2020, Quarterly Journal of Political Science (with
Nick Eubank, Guy Grossman, and Melina Platas).
It Takes a Village: Peer Effects and Externalities in Technology Adoption, 2020, American Journal of
Political Science (with Romain Ferrali, Guy Grossman, and Melina Platas). Winner, 2020 Best Conference
Paper Award, American Political Science Association Network Section.

Assembly of the LongSHOT Cohort: Public Record Linkage on a Grand Scale, 2019, Injury Prevention
(with Yifan Zhang, Erin Holsinger, Lea Prince, Sonja Swanson, Matthew Miller, Garen Wintemute, and
David Studdert).
Crowdsourcing Accountability: ICT for Service Delivery, 2018, World Development 112: 74-87 (with Guy
Grossman and Melina Platas).

Geography, Uncertainty, and Polarization, 2018, Political Science Research and Methods
doi:10.1017/psrm.2018.12 (with Nolan McCarty, Boris Shor, Chris Tausanovitch, and Chris Warshaw).
Handgun Acquisitions in California after Two Mass Shootings, 2017, Annals of Internal Medicine 166(10):698-
706. (with David Studdert, Yifan Zhang, Rob Hyndman, and Garen Wintemute).

Cutting Through the Thicket: Redistricting Simulations and the Detection of Partisan Gerrymanders,
2015, Election Law Journal 14,4:1-15 (with Jowei Chen).
The Achilles Heel of Plurality Systems: Geography and Representation in Multi-Party Democracies,
2015, American Journal of Political Science 59,4: 789-805 (with Ernesto Calvo). Winner, Michael
Wallerstein Award for best paper in political economy, American Political Science Association.

Why has U.S. Policy Uncertainty Risen Since 1960?, 2014, American Economic Review: Papers and
Proceedings May 2014 (with Nicholas Bloom, Brandice Canes-Wrone, Scott Baker, and Steven Davis).

Unintentional Gerrymandering: Political Geography and Electoral Bias in Legislatures, 2013, Quarterly
Journal of Political Science 8: 239-269 (with Jowei Chen).
How Should We Measure District-Level Public Opinion on Individual Issues?, 2012, Journal of Politics
74, 1: 203-219 (with Chris Warshaw).

Representation and Redistribution in Federations, 2011, Proceedings of the National Academy of Sciences
108, 21:8601-8604 (with Tiberiu Dragu).
Dual Accountability and the Nationalization of Party Competition: Evidence from Four Federations,
2011, Party Politics 17, 5: 629-653 (with Erik Wibbels).
The Geographic Distribution of Political Preferences, 2010, Annual Review of Political Science 13: 297–340.

Fiscal Decentralization and the Business Cycle: An Empirical Study of Seven Federations, 2009,
Economics and Politics 22,1: 37–67 (with Erik Wibbels).
Getting into the Game: Legislative Bargaining, Distributive Politics, and EU Enlargement, 2009, Public
Finance and Management 9, 4 (with Deniz Aksoy).

The Strength of Issues: Using Multiple Measures to Gauge Preference Stability, Ideological Constraint,
and Issue Voting, 2008. American Political Science Review 102, 2: 215–232 (with Stephen Ansolabehere
and James Snyder).
Does Religion Distract the Poor? Income and Issue Voting Around the World, 2008, Comparative Political
Studies 41, 4: 437–476 (with Ana Lorena De La O).

Purple America, 2006, Journal of Economic Perspectives 20,2 (Spring): 97–118 (with Stephen Ansolabehere
and James Snyder).
Economic Geography and Economic Voting: Evidence from the U.S. States, 2006, British Journal of
Political Science 36, 3: 527–47 (with Michael Ebeid).

Distributive Politics in a Federation: Electoral Strategies, Legislative Bargaining, and Government


Coalitions, 2004, Dados 47, 3 (with Marta Arretche, in Portuguese).
Comparative Federalism and Decentralization: On Meaning and Measurement, 2004, Comparative
Politics 36, 4: 481-500. (Portuguese version, 2005, in Revista de Sociologia e Politica 25).

Reviving Leviathan: Fiscal Federalism and the Growth of Government, 2003, International Organization
57 (Fall), 695–729.
Beyond the Fiction of Federalism: Macroeconomic Management in Multi-tiered Systems, 2003, World
Politics 54, 4 (July): 494–531 (with Erik Wibbels).
The Dilemma of Fiscal Federalism: Grants and Fiscal Performance around the World, 2002, American
Journal of Political Science 46(3): 670–687.
Strength in Numbers: Representation and Redistribution in the European Union, 2002, European Union
Politics 3, 2: 151–175.
Does Federalism Preserve Markets? Virginia Law Review 83, 7 (with Susan Rose-Ackerman). Spanish
version, 1999, in Quorum 68.

Working Papers
Federalism and Inter-regional Redistribution, Working Paper 2009/3, Institut d’Economia de Barcelona.
Representation and Regional Redistribution in Federations, Working Paper 2010/16, Institut d’Economia
de Barcelona (with Tiberiu Dragu).

Chapters in Books
Political Geography and Representation: A Case Study of Districting in Pennsylvania (with Thomas
Weighill), forthcoming 2021.
Decentralized Rule and Revenue, 2019, in Jonathan Rodden and Erik Wibbels, eds., Decentralized
Governance and Accountability, Cambridge University Press.
Geography and Gridlock in the United States, 2014, in Nathaniel Persily, ed. Solutions to Political
Polarization in America, Cambridge University Press.
Can Market Discipline Survive in the U.S. Federation?, 2013, in Daniel Nadler and Paul Peterson, eds,
The Global Debt Crisis: Haunting U.S. and European Federalism, Brookings Press.
Market Discipline and U.S. Federalism, 2012, in Peter Conti-Brown and David A. Skeel, Jr., eds, When
States Go Broke: The Origins, Context, and Solutions for the American States in Fiscal Crisis, Cambridge
University Press.
Federalism and Inter-Regional Redistribution, 2010, in Nuria Bosch, Marta Espasa, and Albert Sole
Olle, eds., The Political Economy of Inter-Regional Fiscal Flows, Edward Elgar.
Back to the Future: Endogenous Institutions and Comparative Politics, 2009, in Mark Lichbach and
Alan Zuckerman, eds., Comparative Politics: Rationality, Culture, and Structure (Second Edition),
Cambridge University Press.
The Political Economy of Federalism, 2006, in Barry Weingast and Donald Wittman, eds., Oxford
Handbook of Political Economy, Oxford University Press.
Fiscal Discipline in Federations: Germany and the EMU, 2006, in Peter Wierts, Servaas Deroose, Elena
Flores and Alessandro Turrini, eds., Fiscal Policy Surveillance in Europe, Palgrave MacMillan.
The Political Economy of Pro-cyclical Decentralised Finance (with Erik Wibbels), 2006, in Peter Wierts,
Servaas Deroose, Elena Flores and Alessandro Turrini, eds., Fiscal Policy Surveillance in Europe, Palgrave
MacMillan.
Globalization and Fiscal Decentralization, (with Geoffrey Garrett), 2003, in Miles Kahler and David
Lake, eds., Governance in a Global Economy: Political Authority in Transition, Princeton University Press:
87-109. (Updated version, 2007, in David Cameron, Gustav Ranis, and Annalisa Zinn, eds., Globalization
and Self-Determination: Is the Nation-State under Siege? Routledge.)
Introduction and Overview (Chapter 1), 2003, in Rodden et al., Fiscal Decentralization and the Challenge
of Hard Budget Constraints (see above).
Soft Budget Constraints and German Federalism (Chapter 5), 2003, in Rodden, et al., Fiscal
Decentralization and the Challenge of Hard Budget Constraints (see above).
Federalism and Bailouts in Brazil (Chapter 7), 2003, in Rodden, et al., Fiscal Decentralization and the
Challenge of Hard Budget Constraints (see above).
Lessons and Conclusions (Chapter 13), 2003, in Rodden, et al., Fiscal Decentralization and the Challenge
of Hard Budget Constraints (see above).

Online Interactive Visualization
Stanford Election Atlas, 2012 (collaboration with Stephen Ansolabehere at Harvard and Jim Herries at
ESRI)

Other Publications
How America’s Urban-Rural Divide has Shaped the Pandemic, 2020, Foreign Affairs, April 20, 2020.
An Evolutionary Path for the European Monetary Fund? A Comparative Perspective, 2017, Briefing
paper for the Economic and Financial Affairs Committee of the European Parliament.
Representation and Regional Redistribution in Federations: A Research Report, 2009, in World Report
on Fiscal Federalism, Institut d’Economia de Barcelona.
On the Migration of Fiscal Sovereignty, 2004, PS: Political Science and Politics July, 2004: 427–431.
Decentralization and the Challenge of Hard Budget Constraints, PREM Note 41, Poverty Reduction and
Economic Management Unit, World Bank, Washington, D.C. (July).
Decentralization and Hard Budget Constraints, APSA-CP (Newsletter of the Organized Section in
Comparative Politics, American Political Science Association) 11:1 (with Jennie Litvack).
Book Review of The Government of Money by Peter Johnson, Comparative Political Studies 32,7: 897-900.

Fellowships and Honors


Fund for a Safer Future, Longitudinal Study of Handgun Ownership and Transfer (LongSHOT),
GA004696, 2017-2018.
Stanford Institute for Innovation in Developing Economies, Innovation and Entrepreneurship research
grant, 2015.
Michael Wallerstein Award for best paper in political economy, American Political Science Association,
2016.
Common Cause Gerrymandering Standard Writing Competition, 2015.
General support grant from the Hewlett Foundation for Spatial Social Science Lab, 2014.
Fellow, Institute for Research in the Social Sciences, Stanford University, 2012.
Sloan Foundation, grant for assembly of geo-referenced precinct-level electoral data set (with Stephen
Ansolabehere and James Snyder), 2009-2011.
Hoagland Award Fund for Innovations in Undergraduate Teaching, Stanford University, 2009.
W. Glenn Campbell and Rita Ricardo-Campbell National Fellow, Hoover Institution, Stanford
University, beginning Fall 2010.
Research Grant on Fiscal Federalism, Institut d'Economia de Barcelona, 2009.
Fellow, Institute for Research in the Social Sciences, Stanford University, 2008.
United Postal Service Foundation grant for study of the spatial distribution of income in cities, 2008.
Gregory Luebbert Award for Best Book in Comparative Politics, 2007.

Fellow, Center for Advanced Study in the Behavioral Sciences, 2006-2007.
National Science Foundation grant for assembly of cross-national provincial-level dataset on elections,
public finance, and government composition, 2003-2004 (with Erik Wibbels).
MIT Dean's Fund and School of Humanities, Arts, and Social Sciences Research Funds.
Funding from DAAD (German Academic Exchange Service), MIT, and Harvard EU Center to organize
the conference, "European Fiscal Federalism in Comparative Perspective," held at Harvard University,
November 4, 2000.
Canadian Studies Fellowship (Canadian Federal Government), 1996-1997.
Prize Teaching Fellowship, Yale University, 1998-1999.
Fulbright Grant, University of Leipzig, Germany, 1993-1994.
Michigan Association of Governing Boards Award, one of two top graduating students at the
University of Michigan, 1993.
W. J. Bryan Prize, top graduating senior in political science department at the University of Michigan,
1993.

Other Professional Activities


International Advisory Committee, Center for Metropolitan Studies, Sao Paulo, Brazil, 2006–2010.
Selection committee, Mancur Olson Prize awarded by the American Political Science Association
Political Economy Section for the best dissertation in the field of political economy.
Selection committee, Gregory Luebbert Best Book Award.
Selection committee, William Anderson Prize, awarded by the American Political Science Association
for the best dissertation in the field of federalism and intergovernmental relations.

Courses
Undergraduate
Politics, Economics, and Democracy
Introduction to Comparative Politics
Introduction to Political Science
Political Science Scope and Methods
Institutional Economics
Spatial Approaches to Social Science

Graduate
Political Economy of Institutions
Federalism and Fiscal Decentralization
Politics and Geography

Consulting
2017. Economic and Financial Affairs Committee of the European Parliament.
2016. Briefing paper for the World Bank on fiscal federalism in Brazil.
2013-2018: Principal Investigator, SMS for Better Governance (a collaborative project involving USAID,
Social Impact, and UNICEF in Arua, Uganda).
2019: Written expert testimony in McLemore, Holmes, Robinson, and Woullard v. Hosemann, United States
District Court, Mississippi.
2019: Expert witness in Nancy Carola Jacobson v. Detzner, United States District Court, Florida.
2018: Written expert testimony in League of Women Voters of Florida v. Detzner No. 4:18-cv-002510,
United States District Court, Florida.
2018: Written expert testimony in College Democrats of the University of Michigan, et al. v. Johnson, et al.,
United States District Court for the Eastern District of Michigan.
2017: Expert witness in Bethune-Hill v. Virginia Board of Elections, No. 3:14-CV-00852, United States
District Court for the Eastern District of Virginia.
2017: Expert witness in Arizona Democratic Party, et al. v. Reagan, et al., No. 2:16-CV-01065, United
States District Court for Arizona.
2016: Expert witness in Lee v. Virginia Board of Elections, 3:15-cv-357, United States District Court for
the Eastern District of Virginia, Richmond Division.
2016: Expert witness in Missouri NAACP v. Ferguson-Florissant School District, United States District
Court for the Eastern District of Missouri, Eastern Division.
2014-2015: Written expert testimony in League of Women Voters of Florida et al. v. Detzner, et al., 2012-CA-
002842 in Florida Circuit Court, Leon County (Florida Senate redistricting case).
2013-2014: Expert witness in Romo v. Detzner, 2012-CA-000412 in Florida Circuit Court, Leon County
(Florida Congressional redistricting case).
2011-2014: Consultation with investment groups and hedge funds on European debt crisis.
2011-2014: Lead Outcome Expert, Democracy and Governance, USAID and Social Impact.
2010: USAID, Review of USAID analysis of decentralization in Africa.
2006–2009: World Bank, Independent Evaluations Group. Undertook evaluations of World Bank
decentralization and safety net programs.
2008–2011: International Monetary Fund Institute. Designed and taught course on fiscal federalism.
1998–2003: World Bank, Poverty Reduction and Economic Management Unit. Consultant for World
Development Report, lecturer for training courses, participant in working group for assembly of
decentralization data, director of multi-country study of fiscal discipline in decentralized countries,
collaborator on review of subnational adjustment lending.

Last updated: October 19, 2020

Report of Kenneth R. Mayer, Ph.D.
December 5, 2020

I. Introduction and Summary of Conclusions

I have been asked by counsel for the Democratic Party of Georgia, the DSCC, and the
DCCC to evaluate claims made by Russell James Ramsland, Jr. in his affidavit of November 25,
2020, and by Dr. Benjamin A. Overholt in his affidavit of November 29, 2020. 1

Ramsland asserts that "red flags" in mail absentee data show that 96,600 mail absentee
ballots were voted but not recorded as received by counties, and that 5,990 ballots had “impossible
mail out and received back complete dates” (Ramsland Affidavit, paragraph 15). Based on these
findings, Ramsland concludes “to a reasonable degree of professional certainty that at least 96,600
votes were illegally counted in the Georgia general election” (Ramsland Affidavit, paragraph 15).
I show that this is a fundamental mistake in interpreting the data, as there are 96,600 cancelled mail
absentee ballots with no return date, denoted by a “C” value in the ballot status field that Ramsland
mistakenly thinks means “counted” instead of cancelled.

Overholt claims generally that "anomalies" or "discrepancies" exist in Georgia's
2020 general election absentee files, which he defines as differences in the rates of mail absentee
ballots spoiled, rejected, or cancelled in the 2020 general election when compared to rates in
previous elections (Overholt Affidavit, paragraphs 5 and 14). The result, he asserts, is that
somewhere between 1,600 and 17,500 ballots counted in the November 2020 election should have
been rejected. Overholt also claims there are issues with how the Secretary of State calculated
rejection rates (Overholt Affidavit, paragraph 15). These conclusions reflect a fundamental
misunderstanding of what the data actually show, and do not in any sense suggest that these ballots
should have been rejected.

As discussed further below, I have significant expertise working with voter files, absentee
files, and other large election- and voting-related data sets, including in the state of Georgia. Based
on that expertise, it is my conclusion that the claims made by both Ramsland and Overholt are
unsupported and incorrect. Ramsland’s and Overholt’s reports do not comport with scientifically
acceptable data standards or methodology in my field of expertise. It is clear that neither knows
even the basics of the data they purport to examine, of election administration, of how elections are
actually conducted in Georgia, or of how election practices changed in 2020. Both reports use
inaccurate definitions of crucial terms, make completely unsubstantiated claims based on pure
speculation and personal opinion, and reach unsupported and incorrect inferences about what the
data show.

Even on things as basic as describing what files they are examining and the methodologies
they use in arriving at their conclusions, their reports do not meet the most fundamental
requirements of conducting a reliable and replicable analysis.

1 It is actually not clear when Overholt submitted his report, as he does not show a date. The
report was notarized on November 29, 2020.
To more specifically summarize the issues with these reports:

1. Ramsland falsely insinuates that absentee ballots that were sent to voters but not returned,
or that were cancelled by voters, suggest fraud.
2. Ramsland erroneously conflates routine administrative recordkeeping anomalies with
fraud, and presents wildly inaccurate figures regarding the number of absentee ballots
accepted but not recorded as being returned. These errors would be immediately obvious
to anyone familiar with election administration or the details of Georgia’s absentee and
voter history files, and no reputable expert would make such mistakes.
3. Overholt, similarly, insinuates that so-called “anomalies” indicate fraudulent ballots were
accepted in the 2020 presidential election. Yet the “anomalies” he claims to have found
actually reflect normal variations that regularly occur from one election to another.
4. Overholt does not take into account that Georgia law and election practices eliminated the
address and birthdate section of the absentee ballot return envelope in 2020. He also does
not take into account that the methods used to conduct signature matching changed before
the 2020 primary election.
5. Overholt seizes on what he insists is a “misleading” and “flawed” calculation of ballot
rejection percentages, again insinuating that a trivial difference in how one percentage was
calculated on what amounts to a press release on the Secretary of State’s web-site suggests
some impropriety. This analysis ultimately demonstrates only that the Secretary of State
web-site reported a mail ballot signature rejection rate of 0.26% in the 2020 presidential
primary and 0.15% in the 2018 general election when, according to his calculations, the
rates should have been 0.28% and 0.20%, respectively.

II. Qualifications and Expertise

I have a Ph.D. in political science from Yale University, where my graduate training
included courses in econometrics and statistics. My undergraduate degree is from the University
of California, San Diego, where I majored in political science and minored in applied
mathematics. I have been on the faculty of the political science department at the University of
Wisconsin-Madison since August 1989. My curriculum vitae is attached to this report as
Appendix A.

All publications that I have authored and published in the past ten years appear in my
curriculum vitae. Those publications include the following peer-reviewed journals: Journal of
Politics, American Journal of Political Science, Election Law Journal, Legislative Studies
Quarterly, Presidential Studies Quarterly, American Politics Research, Congress and the
Presidency, Public Administration Review, Political Research Quarterly, and PS: Political
Science and Politics. I have also published in law reviews, including the Richmond Law Review,
the UCLA Pacific Basin Law Journal, and the University of Utah Law Review. My work on
campaign finance has been published in Legislative Studies Quarterly, Regulation, PS: Political
Science and Politics, Richmond Law Review, the Democratic Audit of Australia, and in an edited
volume on electoral competitiveness published by the Brookings Institution Press. My research on
campaign finance has been cited by the U.S. Government Accountability Office and by legislative
research offices in Connecticut and Wisconsin.

My work on election administration has been published in the Election Law Journal,
American Journal of Political Science, Public Administration Review, Political Research
Quarterly, and American Politics Research. I was part of a research group retained by the
Wisconsin Government Accountability Board to review their compliance with federal mandates
and reporting systems under the Help America Vote Act and to survey local election officials
throughout the state. I serve on the Steering Committee of the Wisconsin Elections Research
Center, a unit within the UW-Madison College of Letters and Science. In 2012, I was retained
by the U.S. Department of Justice to analyze data and methods regarding Florida’s efforts to
identify and remove claimed ineligible noncitizens from the statewide file of registered voters.

In the past nine years, I have testified as an expert witness in trial or deposition or
submitted a report in the following cases:

Federal: The New Georgia Project et al. v. Raffensperger et al. No. 1:20-CV-01986-EL0052 (N.D.
Ga.); Fair Fight Action v. Raffensperger, No. 1:18-cv-05391-SCJ (N.D. Ga. 2019); Kumar
v. Frisco Independent School District, No. 4:19-cv-00284 (E.D. Tex. 2019); Vaughan v.
Lewisville Independent School District, No. 4:19-cv-00109 (E.D. Tex. 2019); Dwight, et
al. v Raffensperger, No: 1:18-cv-2869-RWS (N.D. Ga. 2018); League of Women Voters of
Michigan, et al. v. Johnson, No. 2:17-cv-14148-DPH-SDD (E.D. Mich. 2018); One Wis.
Institute, Inc. v. Thomsen 198 F. Supp. 3d 896 (W.D. Wis. 2016); Whitford v. Gill, 218 F.
Supp. 3d 837 (W.D. Wis. 2016); Baldus v. Members of Wis. Gov’t Accountability Bd., 849
F. Supp. 2d 840 (E.D. Wis. 2012).

State: North Carolina Alliance for Retired Americans et al. v. North Carolina State Board of
Elections (Wake Cty., NC); LaRose et al. v. Simon, No. 62-CV-20-3149 (2d Jud. Dist. Ct.,
Ramsey Cty., MN); Michigan Alliance for Retired Americans et al. v Benson et al. No
2020-000108-MM (Mich. Court of Claims); Driscoll v. Stapleton, No. DV 20 0408 (13th
Judicial Ct. Yellowstone Cty., Mont. 2020); Priorities U.S.A., et al. v. Missouri, et al., No.
19AC-CC00226 (Cir. Ct. of Cole Cty., Mo. 2018); Milwaukee Branch of the NAACP v.
Walker, 851 N.W. 2d 262 (Wis. 2014); Kenosha Cty. v. City of Kenosha, No. 11-CV-
1813 (Wis. Cir. Ct., Kenosha Cty., Wis. 2011).

Courts consistently have accepted my expert opinions, and the basis for those opinions.
No court has ever excluded my expert opinion under Daubert or any other standard. Courts
have cited my expert opinions in their decisions, finding my opinions reliable and persuasive.
See Driscoll v. Stapleton, No. DV 20 0408 (13th Judicial Ct. Yellowstone Cty., Mont., 2020);
Priorities U.S.A., et al. v. Missouri, et al., No. 19AC-CC00226 (Cir. Ct. Cole Cty., Mo. 2018);
Whitford v. Gill, 218 F. Supp. 3d 837 (W.D. Wis. 2016); One Wis. Inst., Inc. v. Thomsen, 198 F.
Supp. 3d 896 (W.D. Wis. 2016); Baldus v. Members of Wis. Gov’t Accountability Bd., 849 F. Supp.
2d 840 (E.D. Wis. 2012); Milwaukee Branch of the NAACP v. Walker, 851 N.W. 2d 262 (Wis.
2014); Baumgart v. Wendelberger, No. 01-C-0121, 2002 WL 34127471 (E.D. Wis. May 30, 2002).

III. Ramsland Affidavit

A. The Claim That 96,600 Mail Absentee Ballots With No Return Record Were
Counted

Ramsland claims that county voter records show that 96,600 mail absentee ballots were
counted but never recorded as received. He does not explain how he derived this number, and
does not disclose which data files or methodologies he used to reach this conclusion (whether
county-level absentee files, the statewide absentee voter file, or the voter history file), which fields
in these files he relied on to conclude that a ballot was counted but not recorded as received, the
dates on which the voter files or absentee request files were generated, or, in fact, any other
information about how he generated this number. More importantly, he does not
explain why a blank return date field indicates an illegal ballot rather than an administrative error.
These failures, by themselves, would warrant rejection of his conclusions as completely unreliable.

But an even greater – indeed fatal – flaw exists in Ramsland’s analysis, which is that his
numbers are entirely incorrect. As I show below, he appears to arrive at this number through a
basic error in interpreting the absentee request file.

On December 1, 2020, I downloaded voter history files and absentee request files available
on the Georgia Secretary of State website. The statewide absentee request file includes all 159
county-level absentee request files. After merging the two files using the unique voter registration
number, I identified the following figures for the November 2020 general election.
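
As a rough sketch of this merge (the file and column names below are hypothetical placeholders, not the actual field names in the Secretary of State downloads):

# Merge the voter history file with the absentee request file on the
# unique voter registration number. Names are hypothetical placeholders.
import pandas as pd

history = pd.read_csv("voter_history_2020.csv")       # one row per vote cast
requests = pd.read_csv("absentee_requests_2020.csv")  # one row per absentee request

merged = requests.merge(
    history,
    on="registration_number",
    how="outer",
    indicator=True,  # flags rows that appear in only one of the two files
)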

The most important data point from these files is the number of mail absentee ballots
recorded as accepted and counted, but which do not have a return date recorded in the absentee
ballot request file (what Ramsland claims is a ballot counted but never received by election
officials). Ramsland claims that there are 96,600 such ballots. He does not explain how he
generated this quantity, but I believe I have identified how he derived this figure.

In the absentee ballot request file, there are 96,600 cancelled mailed absentee ballots that
do not have a date of return recorded, matching exactly the total of mailed ballots that Ramsland
claims were counted but never submitted. This figure almost certainly represents the ballots
Ramsland is referring to, as no other aggregation of ballots could plausibly lead to this precise
match. These ballots are recorded in the ballot status field as “C” (cancelled). I suspect that
Ramsland mistakenly thinks that “C” means counted, rather than cancelled, and does not realize
that counted ballots are noted as “A” (accepted) in the ballot status field. This is an egregious error
that no qualified expert familiar with Georgia’s voter files would make. 2

2 Ramsland incorrectly claims that 134,588 mail ballots have no return date and were cancelled
(Ramsland Affidavit, paragraph 15). He does not explain how he arrived at this figure, but as
explained, the number of cancelled mail ballots with no return date is not 134,588, but rather
96,600. I suspect Ramsland added together mailed ballots with no return date and a ballot status
of either A (accepted) (4 ballots), R (rejected) (468 ballots), S (spoiled) (235 ballots) or blank
(133,880 ballots), to incorrectly generate the 134,588 number. The one ballot difference is
almost certainly due to the fact that the underlying data files were generated on two different
dates.

The statewide absentee ballot file shows that the actual number of mail ballots accepted
with no date of receipt recorded is not 96,600, but rather 4. 3 This is almost certainly a
recordkeeping issue that affected a trivially small number (0.0003%) of mail absentee ballots.
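
Continuing the sketch above, the tabulation that exposes this error is a simple count of mailed ballots with blank return dates, broken out by ballot status code (column names and codes remain hypothetical placeholders):

# Mailed ballots with no recorded return date, by ballot status code,
# where "C" denotes cancelled and "A" denotes accepted.
mail = merged[merged["ballot_style"] == "MAILED"]
no_return = mail["ballot_return_date"].isna()

print((no_return & (mail["ballot_status"] == "C")).sum())  # cancelled, never returned
print((no_return & (mail["ballot_status"] == "A")).sum())  # accepted, no return date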

Further, the merged absentee request and voter history file show the following for the
November 2020 general election:

1. The absentee request file shows that 4,018,064 absentee voters requested and submitted
an accepted absentee ballot. 1,308,440 were mail absentee, 2,695,547 were in person
absentee, and 14,077 were electronic absentee.
2. The voter history file shows that 4,018,800 voters cast absentee ballots. The voter file
does not record whether an absentee ballot was mail, in person or electronic.
3. The two files do not match exactly, but the difference between the absentee ballot file
and the voter history file is 736 votes, not 96,600 votes. 736 votes is 0.018% of the
number of absentee ballots recorded in the voter history file.
4. This difference – 736 – is the kind of administrative error ubiquitous in voter
registration files, and is the result of recordkeeping errors, recording mistakes, or other
anomalies that have occurred in every statewide voter file I have examined over more
than 20 years of studying election administration.
5. In the merged file, 86 voters are shown as casting an accepted absentee ballot but not
recorded as voting absentee in the voter history file. This is 0.007% of all mail absentee
ballots recorded as accepted, and is again almost certainly a recordkeeping issue.

Ramsland’s numbers are wildly incorrect and reflect an astounding lack of understanding
of how the data are organized and the meaning of the ballot status field in the absentee request file.
His conclusion – that at least 96,600 ballots were counted illegally – is ludicrous.

B. Administrative Discrepancies in Sent and Return Dates

Ramsland asserts that the sent and returned dates recorded in the absentee voter file –
reflecting the date an absentee ballot was sent, and the date an absentee ballot was received in a
clerk’s office – also raise “red flags.” He claims that 1,887 mail ballots were received the same
day they were sent out; 1,786 ballots were received one day after being mailed out; 2,275 ballots
were received two days after being mailed out; and 42 ballots were received the day before they were
sent out. He concludes that this is “impossible.”

This conclusion is based entirely on Ramsland’s personal opinion that such delivery and
return times are impossible. As I show below, some of these send and return dates are likely
correct, and the remainder are recordkeeping issues.

3 I calculated this number by identifying accepted mail absentee ballots with a blank entry in the
ballot return date field in the absentee ballot request file.
Again, Ramsland does not disclose what methodologies he used to generate his estimates
or reach his conclusion. And once again, his numbers are incorrect. The absentee ballot request
file shows the following results (the computation is sketched after the list):

1. 89 mail ballots are recorded as received before the date sent out (not 42)
2. 467 ballots are recorded as received the same day they were sent out (not 1,887)
3. 374 ballots are recorded as received 1 day after being sent out (not 1,786)
4. 963 ballots are recorded as received 2 days after being sent out (not 2,275)
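
Continuing the earlier sketch, this tabulation amounts to differencing two date fields (column names remain hypothetical placeholders):

# Days elapsed between the recorded mailing date and the recorded return date.
mail = merged[merged["ballot_style"] == "MAILED"].copy()
mail["days_out"] = (
    pd.to_datetime(mail["ballot_return_date"])
    - pd.to_datetime(mail["ballot_issued_date"])
).dt.days

print((mail["days_out"] < 0).sum())   # recorded as received before being sent
print((mail["days_out"] == 0).sum())  # received the same day they were sent
print((mail["days_out"] == 1).sum())  # received one day after being sent
print((mail["days_out"] == 2).sum())  # received two days after being sent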

Many of these sent and return dates are in fact plausible. A mailed absentee ballot returned
by a voter in person at a clerk’s office will be recorded as a received mail ballot on that date,
because what is recorded in the absentee file is the type of ballot requested, not the manner in
which it is returned (whether by mail or in person). Many of these ballots were, likely, accurately
recorded on the date received because they were returned in person rather than by mail. Any
remaining anomalies are almost certainly recordkeeping mistakes.

It is true that it is not possible for a mailed ballot to arrive before it was sent out. But this
is clearly a recording error affecting a very small number of mailed ballots (89 out of 1,308,440
ballots, or 0.0068%).

Moreover, the numbers are not material. The total number of ballots recorded as received
within 2 days of being sent out is 1,893, or 0.14% of all accepted mail ballots, not 5,990 as Ramsland
claims. As I note above, some of this information is most likely correct, and any expert familiar
with statewide voter files would immediately recognize the remaining anomalies as a
recordkeeping issue, not an indication of fraud.

C. Conclusion

Ramsland’s conclusions about mail absentee ballots are meritless, and show a complete
lack of understanding of statewide absentee voter and voter history files. An analysis of the correct
absentee request and voter history files from the November 2020 general election shows clearly
that Ramsland’s numbers are wildly wrong, and his conclusions are based on faulty data, errors in
how he interprets the data, unsupported personal opinions, and completely unwarranted inferences.

The absentee request file and voter history file from 2020 show minor discrepancies that
are entirely consistent with administrative errors in prior years and other states, and do not, by any
stretch, indicate fraud.

IV. Overholt Affidavit

Overholt’s main conclusions consist of assertions that (a) there are “discrepancies in the
number of mail ballots that were ‘rejected’ and ‘spoiled’ when comparing previous elections to
the 2020 General Election” (Overholt Affidavit, paragraph 5); (b) the Secretary of State web-site
uses “misleading” and inconsistent methods when calculating signature rejection rates between
2018 and 2020 (Overholt Affidavit, paragraphs 15-19); (c) that 500,000 votes are missing in the
2020 data when compared to the “official” election results (Overholt Affidavit, paragraph 20);

6
and (d) that other unspecified “anomalies in the reported data. . . many (sic) raise significant
questions” about the 2020 election results.

Overholt concludes, based on these results, that between 1,600 and 17,500 ballots “should
have been rejected” in the 2020 general election (Overholt Affidavit, paragraphs 11 and 13).

Overholt is correct about only one minor, and ultimately irrelevant, detail in this cavalcade
of unsupported and inaccurate claims: calculations on the Georgia Secretary of State web-site do,
in fact, use different denominators in calculations of mail absentee ballot signature rejection rates
in the 2018 general, the 2020 primary, and the 2020 general elections. As I show below, this is a
trivial result that has no substantive significance.

Moreover, Overholt completely misunderstands the data that he is using, and fails to
account for changes that occurred before the 2020 elections, including a 2019 state law that
changed required information on absentee ballot return envelopes, as well as changes to the
methodologies for conducting signature matching. As explained further below, he also confuses
the number of absentee ballot requests with the number of votes cast. This error is so elementary
that it calls into question the entirety of his opinion.

A. Alleged “Discrepancies” in Spoiled and Rejected Mail Absentee Rates

Overholt alleges that the number of rejected mail ballots and the mail ballot rejection rates in the
2020 general election differed from the numbers and rates in the 2020 primary, the 2018 general, and
the 2016 general election. He calculates that the rejection rate for signature reasons was 0.15% in
the 2020 general, compared to 0.28% in the 2016 general and 2020 primary elections. This, he
asserts, “would suggest somewhere around 1,600 additional ballots should have been rejected for
signature issues.”

This conclusion is entirely wrong. He makes two fundamental errors. First, he incorrectly
assumes that the 2016 general and 2020 primary rejection rates should be viewed as the “true” or
expected rejection rate for all other elections. There is no basis for such a conclusion. One could
just as easily assert that the 2020 general election rejection rate (0.15%) is the “true” rejection rate,
and that excess rejections occurred in 2016 and 2018 (he also conveniently ignores the rejection
rate in the 2018 general election, which at 0.20% is closer to the 2020 general rate than either the
2016 general or 2020 primary rejection rates).

Second, he ignores (or is unaware of) the fact that the signature matching and oath
requirements changed between 2018 and 2020. In March 2020, the Secretary of State entered into
a settlement agreement that required 2 of 3 election judges to agree that a signature does not match,
and required clerks to notify voters that their ballots were rejected. 4 403 mail absentee voters
whose initial absentee ballots were rejected for signature reasons were able to either cure their
ballot or submit another absentee ballot that was accepted. 5
4 Democratic Party of Georgia v. Raffensperger, Joint Notice of Settlement as to State
Defendants, No. 1:19-cv-5028-WMR (N.D. Ga. March 6, 2020).
5 These data are in the absentee ballot request file.

In addition, the oath requirements changed in April 2019 to eliminate the requirement that
voters include their address and date of birth on the oath (errors or omissions on either would result
in a rejected ballot). 6

Consequently, Overholt’s application of the oath-related rejection rates in 2016 and 2018
to the 2020 election and his resulting claim that “an additional 7,900 or 17,500 ballots should have
been rejected” (Overholt Affidavit, paragraph 13) are simply wrong, because the oath requirements
changed, and a defect that would result in a rejected ballot in 2018 could not have resulted in a
rejection in 2020.

Next, Overholt claims that discrepancies existed with respect to spoiled ballots (Overholt
Affidavit, paragraph 14). It is not clear what point Overholt is making here, because the spoiled
ballot rate was higher in 2020 than it was in 2016 and 2018. This entire section of his report
amounts only to an observation that the spoiled ballot rate in 2020 was higher than in previous
elections, which, Overholt insinuates without explanation, indicates some unspecified irregularity.

B. Differences in Signature Rejection Rate Calculations

Overholt devotes considerable time to a claim that a single page on the Georgia Secretary
of State’s web-site (which he inaccurately describes as “an article”) calculates the rejected ballot
rate in the 2020 primary and 2018 general elections incorrectly (Overholt Affidavit, paragraphs
15-19). He exaggerates the scope of this error to assert that the “[Secretary of State] Analysis is
flawed” (Overholt Affidavit, paragraph 15) and that the calculation was “generated improperly
and inconsistently and is misleading” (Overholt Affidavit, paragraph 19).

This is a tremendous amount of weight to place on a trivial error. On the web page in
question, a different denominator is used in a calculation of the 2020 primary election signature
rejection rate (accepted mail ballots) and the 2018 general (issued absentee ballots) than in the
calculation of rejection rates in November 2020 (accepted, rejected, and spoiled absentee ballots).
But the amount of attention Overholt devotes to this issue is vastly disproportionate to the
significance of the error itself, and he fails to explain why these differences matter (they do not).
He merely insinuates that these minor errors constitute an intentional misrepresentation of what
the data indicate.

The signature rejection rate on the web page is 0.26% in the 2020 primary election and
0.15% in the 2018 general, using what Overholt claims are the wrong denominators. Correcting
this, and using the same denominator in all three calculations, produces a rejection rate of 0.20%
in the 2018 general and 0.28% in the 2020 primary. This is an entirely immaterial difference that
has no substantive relevance.
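
The effect of the denominator choice can be illustrated with invented counts; these are not the actual Georgia totals.

# A fixed number of signature rejections divided by three candidate
# denominators; the resulting rates differ by small fractions of a
# percentage point. All counts are invented.
rejected_sig = 280
accepted = 100_000
spoiled = 1_500
issued = 140_000

print(f"{rejected_sig / accepted:.2%}")                             # accepted ballots only
print(f"{rejected_sig / (accepted + rejected_sig + spoiled):.2%}")  # all returned ballots
print(f"{rejected_sig / issued:.2%}")                               # all issued ballots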

C. “Further Anomalies”

6 House Bill 316 (effective April 2, 2019), http://www.legis.ga.gov/legislation/en-US/Display/20192020/HB/316.
At the end of his report, Overholt asserts that several additional anomalies raise “significant
questions” about the 2020 election. The relevance of these claims is unclear, and they demonstrate
Overholt’s complete lack of understanding of the data he claims to analyze in his report, casting
further doubt on the credibility of his analysis and conclusions.

His first claim is that “the dataset for the 2020 General Election . . . contains records for
4,505,778 ballots, while Georgia’s official election totals currently show a total of 4,998,482 votes
cast” (Overholt Affidavit, paragraph 20). The difference in these two numbers, he asserts, suggest
something amiss, particularly because the datafile he uses “is missing around 500,000 votes.” 7
“The effect of the difference in ballot totals on this analysis,” he concludes, “is unknown and
cannot be calculated without better understanding of the underlying conduct of the election
throughout Georgia" (Overholt Affidavit, paragraph 21).

Here, Overholt is mistaking each record in the absentee ballot request file for a counted
vote, unaware of the difference between the absentee ballot request file and the voter history file.
He does not seem to know that the absentee ballot request file is not a record of everyone who
voted in the 2020 presidential election, but a record of voters who requested absentee ballots.

The absentee ballot file indeed contains 4,505,778 records, but each record in this file is an
absentee ballot request, not a record of all votes cast in November 2020. This file cannot be compared
to the number of votes cast, because the latter total includes those who voted in person on election
day (982,630), who do not appear in the absentee ballot request file. Overholt is comparing
proverbial apples and oranges (or, perhaps more accurately, raisins and pumpkins).

There is no discrepancy. There are no "missing" 500,000 votes. There is nothing
"surprising" about any of this, except, perhaps, that no expert who had any understanding of
Georgia's voter files would make such a glaring and basic error.

Finally, at the end of his report, Overholt asserts that “other anomalies in the reported data”
raise questions about the conduct of the 2020 election. Overholt never identifies what these alleged
anomalies are, what “reported data” he is using, or what “questions” he thinks these unspecified
and unsupported anomalies raise. These unspecified and unsupported claims require no response.

D. Conclusion

Overholt's report is a string of errors and unfounded assertions that reflects a lack of
knowledge about Georgia’s election practices and how to properly analyze statewide voter files.
He does not account for changes in absentee ballot requirements between 2018 and 2020, and
confuses absentee ballot requests with actual vote counts. He erroneously concludes that variation
in ballot rejection rates in different elections constitutes "anomalies" that suggest fraud.

His opinions, to put it mildly, should be regarded as uninformative.

7 Presumably the "dataset" in question is the absentee ballot request file, though Overholt does
not specify as much.
December 5, 2020

Appendix A – CV

Kenneth R. Mayer

Department of Political Science Phone: 608-263-2286


Affiliate, La Follette School of Public Affairs Email: krmayer@wisc.edu
110 North Hall / 1050 Bascom Mall
University of Wisconsin – Madison
Madison, WI 53706

Education
Yale University, Department of Political Science, Ph.D., 1988.
Yale University, Department of Political Science, M.A., M.Phil., 1987.
University of California, San Diego, Department of Political Science, B.A., 1982.

Positions Held
University of Wisconsin, Madison. Department of Political Science.
Professor, July 2000-present.
Associate Professor, June 1996-June 2000.
Assistant Professor, August 1989-May 1996.
Fulbright-ANU Distinguished Chair in Political Science, Australian National University (Canberra,
ACT), July-December 2006.
Director, Data and Computation Center, College of Letters and Science, University of Wisconsin-
Madison, June 1996-September 2003
Consultant, The RAND Corporation, Washington DC, 1988-1994. Conducted study of acquisition
reform, and the effects of acquisition policy on the defense industrial base. Performed computer
simulations of U.S. strategic force posture and capabilities.
Contract Specialist, Naval Air Systems Command, Washington D.C., 1985-1986. Responsible for cost
and price analysis, contract negotiation, and contract administration for aerial target missile
programs in the $5 million - $100 million range.

Awards
American Political Science Association, State Politics and Policy Section. Award for best Journal Article
Published in the American Journal of Political Science in 2014. Awarded for Burden, Canon,
Mayer, and Moynihan, “Election Laws, Mobilization, and Turnout.”
Robert H. Durr Award, from the Midwest Political Science Association, for Best Paper Applying
Quantitative Methods to a Substantive Problem Presented at the 2013 Meeting. Awarded for
Burden, Canon, Mayer, and Moynihan, “Election Laws and Partisan Gains.”
Leon Epstein Faculty Fellow, College of Letters and Science, 2012-2015
UW Housing Honored Instructor Award, 2012, 2014, 2017, 2018
Recipient, Jerry J. and Mary M. Cotter Award, College of Letters and Science, 2011-2012
Alliant Underkofler Excellence in Teaching Award, University of Wisconsin System, 2006
Pi Sigma Alpha Teaching Award, Fall 2006
Vilas Associate, 2003-2004, University of Wisconsin-Madison Graduate School.
2002 Neustadt Award. Awarded by the Presidency Research Group of the American Political Science
Association, for the best book published on the American presidency in 2001. Awarded for
With the Stroke of a Pen: Executive Orders and Presidential Power.
Lilly Teaching Fellow, University of Wisconsin-Madison, 1993-1994.
Interfraternity Council award for Outstanding Teaching, University of Wisconsin-Madison, 1993.
Selected as one of the 100 best professors at University of Wisconsin-Madison, Wisconsin Student
Association, March 1992.
Olin Dissertation Fellow, Center for International Affairs, Harvard University, 1987-1988

Service as an Expert Witness


1. North Carolina Alliance for Retired Americans et al. v. North Carolina State Board of Elections
(Wake Cty., NC), absentee ballots (2020).
2. LaRose et al. v. Simon, No. 62-CV-20-3149 (2d Jud. Dist. Ct., Ramsey Cty., MN), absentee
ballots (2020).
3. Michigan Alliance for Retired Americans et al. v Benson et al. No 2020-000108-MM (Mich.
Court of Claims), absentee ballots (2020).
4. The New Georgia Project et al. v. Raffensperger et al. No. 1:20-CV-01986-EL0052 (N.D. Ga.),
absentee ballots (2020).
5. Driscoll v. Stapleton, No. DV 20 0408 (13th Judicial Ct. Yellowstone Cty., MT), absentee ballots
(2020)
6. The Andrew Goodman Foundation v. Bostelmann, No. 19-cv-955 (W.D. Wisc.), voter ID (2020).
7. Kumar v. Frisco Independent School District et al., No. 4:19-cv-00284 (E.D. Tex.), voting rights
(2019).
8. Fair Fight Action v. Raffensperger No. 1:18-cv-05391-SCJ (N.D. Ga.), voting rights (2019)
9. Vaughan v. Lewisville Independent School District, No. 4:19-cv-00109 (E.D. Texas), voting
rights (2019).
10. Dwight et al. v Raffensperger, No: 1:18-cv-2869-RWS (N.D. Ga.), redistricting, voting rights
(2018).
11. Priorities U.S.A.et al. v. Missouri et al., No. 19AC-CC00226 (Cir. Ct. of Cole Cty., MO), voter
ID (2018).
12. Tyson v. Richardson Independent School District, No. 3:18-cv-00212 (N.D. Texas), voting rights
(2018).
13. League of Women Voters of Michigan, et al. v. Johnson, No. 2:17-cv-14148-DPH-SDD (E.D.
Mich.), redistricting (2018).
14. One Wisconsin Institute, Inc., et al. v. Nichol, et al., 198 F. Supp. 3d 896 (W.D. Wis.), voting
rights (2016).
15. Whitford et al. v. Gill et al, 218 F. Supp. 3d 837, (W.D. Wis.), redistricting (2016).
16. Milwaukee NAACP et al. v. Scott Walker et al., 851 N.W.2d 262 (Wis. 2014), voter ID (2012).
17. Baldus et al. v. Brennan et al., 849 F. Supp. 2d 840 (E.D. Wis.), redistricting, voting rights
(2012).
18. County of Kenosha v. City of Kenosha, No. 22-CV-1813 (Wis. Cir. Ct., Kenosha Cty.)
municipal redistricting (2011).
19. McComish et al. v. Brewer et al., 2010 WL 2292213 (D. Ariz.), campaign finance (2009).
20. Baumgart et al. v. Wendelberger et al., 2002 WL 34127471 (E.D. Wis.), redistricting (2002).

Grants
“A Multidisciplinary Approach for Redistricting Knowledge.” Principal Investigator. Co-PIs Adeline Lo
(UW Madison, Department of Political Science), Song Gao (UW Madison, Department of
Geography), and Barton Miller and Jin-Yi Cai (UW Madison, Department of Computer
Sciences). University of Wisconsin Alumni Research Foundation (WARF), and UW Madison
Office of the Vice Chancellor for Research and Graduate Education. July 1, 2020-June 30, 2022.
$410,711.
“Analyzing Nonvoting and the Student Voting Experience in Wisconsin.” Dane County (WI) Clerk,
$44,157. November 2016-December 2017. Additional support ($30,000) provided by the Office
of the Chancellor, UW-Madison.
Campaign Finance Task Force, Stanford University and New York University, $36,585. September 2016-
August 2017.

Participant and Board Member, 2016 White House Transition Project, PIs Martha Joynt Kumar (Towson
State University) and Terry Sullivan (University of North Carolina-Chapel Hill).
“How do You Know? The Structure of Presidential Advising and Error Correction in the White House.”
Graduate School Research Committee, University of Wisconsin, $18,941. July 1, 2015-June 30,
2016.
“Study and Recommendations for the Government Accountability Board Chief Inspectors’ Statements
and Election Incident Report Logs.” $43,234. Co-PI. With Barry C. Burden (PI), David T. Canon
(co-PI), and Donald Moynihan (co-PI). October 2011-May 2012.
“Public Funding in Connecticut Legislative Elections.” Open Society Institute. September 2009-
December 2010. $55,000.
“Early Voting and Same Day Registration in Wisconsin and Beyond.” Co-PI. October 2008- September
2009. Pew Charitable Trusts. $49,400. With Barry C. Burden (PI), David T. Canon (Co-PI),
Kevin J. Kennedy (Co-PI), and Donald P. Moynihan (Co-PI).
City of Madison, Blue Ribbon Commission on Clean Elections. Joyce Foundation, Chicago, IL. $16,188.
January-July 2008.
“Wisconsin Campaign Finance Project: Public Funding in Connecticut State Legislative Elections.” JEHT
Foundation, New York, NY. $84,735. November 2006-November 2007.
“Does Public Election Funding Change Public Policy? Evaluating the State of Knowledge.” JEHT
Foundation, New York, NY. $42,291. October 2005-April 2006.
“Wisconsin Campaign Finance Project: Disseminating Data to the Academic, Reform, and Policy
Communities.” Joyce Foundation, Chicago, IL. $20,900. September 2005- August 2006.
“Enhancing Electoral Competition: Do Public Funding Programs for State and Local Elections Work?”
Smith Richardson Foundation, Westport, CT. $129,611. December 2002-June 2005
WebWorks Grant (implementation of web-based instructional technologies), Division of Information
Technology, UW-Madison, $1,000. November 1999.
“Issue Advocacy in Wisconsin during the 1998 Election.” Joyce Foundation, Chicago, IL. $15,499. April
1999.
Instructional Technology in the Multimedia Environment (IN-TIME) grant, Learning Support Services,
University of Wisconsin. $5,000. March 1997.
“Public Financing and Electoral Competitiveness in the Minnesota State Legislature.” Citizens’ Research
Foundation, Los Angeles, CA, $2,000. May-November 1996.
“The Reach of Presidential Power: Policy Making Through Executive Orders." National Science
Foundation (SBR-9511444), $60,004. September 1, 1995-August 31, 1998. Graduate School
Research Committee, University of Wisconsin, $21,965. Additional support provided by the
Gerald R. Ford Library Foundation, the Eisenhower World Affairs Institute, and the Harry S.
Truman Library Foundation.
"The Future of the Combat Aircraft Industrial Base." Changing Security Environment Project, John M.
Olin Institute for Strategic Studies, Harvard University (with Ethan B. Kapstein). June 1993-
January 1995. $15,000.
Hilldale Student Faculty Research Grant, College of Letters and Sciences, University of Wisconsin (with
John M. Wood). 1992. $1,000 ($3,000 award to student).
“Electoral Cycles in Federal Government Prime Contract Awards.” March 1992 – February 1995.
National Science Foundation (SES-9121931), $74,216. Graduate School Research Committee at
the University of Wisconsin, $2,600. MacArthur Foundation, $2,500.
C-SPAN In the Classroom Faculty Development Grant, 1991. $500.

Professional and Public Service


Education and Social and Behavioral Sciences Institutional Review Board, 2008-2014. Acting Chair,
Summer 2011. Chair, May 2012- June 2014.
Participant, U.S. Public Speaker Grant Program. United States Department of State (nationwide
speaking tour in Australia, May 11-June 2, 2012).

Expert Consultant, Voces de la Frontera. Milwaukee Aldermanic redistricting (2011).
Expert Consultant, Prosser for Supreme Court. Wisconsin Supreme Court election recount (2011).
Chair, Blue Ribbon Commission on Clean Elections (Madison, WI), August 2007-April 2011.
Consultant, Consulate of the Government of Japan (Chicago) on state politics in Illinois, Indiana,
Minnesota, and Wisconsin, 2006-2011.
Section Head, Presidency Studies, 2006 Annual Meeting of the American Political Science Association.
Co-Chair, Committee on Redistricting, Supreme Court of Wisconsin, November 2003-December 2009.
Section Head, Presidency and Executive Politics, 2004 Annual Meeting of the Midwest Political Science
Association, Chicago, IL.
Presidency Research Group (organized section of the American Political Science Association) Board,
September 2002-present.
Book Review Editor, Congress and the Presidency, 2001-2006.
Editorial Board, American Political Science Review, September 2004-September 2007.
Consultant, Governor’s Blue Ribbon Commission on Campaign Finance Reform (Wisconsin), 1997.

PUBLICATIONS
Books
Presidential Leadership: Politics and Policymaking, 11th edition. Lanham, MD: Rowman and Littlefield,
forthcoming 2019. With George C. Edwards, III and Steven J. Wayne. Previous editions 10th
(2018).
The 2016 Presidential Elections: The Causes and Consequences of an Electoral Earthquake. Lanham,
MD: Lexington Press, 2017. Co-edited with Amnon Cavari and Richard J. Powell.
The Enduring Debate: Classic and Contemporary Readings in American Government. 8th ed. New York:
W.W. Norton & Co. 2017. Co-edited with David T. Canon and John Coleman. Previous editions
1st (1997), 2nd (2000), 3rd (2002), 4th (2006), 5th (2009), 6th (2011), 7th (2013).
Faultlines: Readings in American Government, 5th ed. New York: W.W. Norton & Co. 2017. Co-edited
with David T. Canon and John Coleman. Previous editions 1st (2004), 2nd (2007), 3rd (2011), 4th
(2013).
The 2012 Presidential Election: Forecasts, Outcomes, and Consequences. Lanham, MD: Rowman and
Littlefield, 2014. Co-edited with Amnon Cavari and Richard J. Powell.
Readings in American Government, 7th edition. New York: W.W. Norton & Co. 2002. Co-edited with
Theodore J. Lowi, Benjamin Ginsberg, David T. Canon, and John Coleman. Previous editions
4th (1996), 5th (1998), 6th (2000).
With the Stroke of a Pen: Executive Orders and Presidential Power. Princeton, NJ: Princeton
University Press. 2001. Winner of the 2002 Neustadt Award from the Presidency Studies
Group of the American Political Science Association, for the Best Book on the Presidency
Published in 2001.
The Dysfunctional Congress? The Individual Roots of an Institutional Dilemma. Boulder, CO: Westview
Press. 1999. With David T. Canon.
The Political Economy of Defense Contracting. New Haven: Yale University Press. 1991.

Monographs
2008 Election Data Collection Grant Program: Wisconsin Evaluation Report. Report to the Wisconsin
Government Accountability Board, September 2009. With Barry C. Burden, David T. Canon,
Stéphane Lavertu, and Donald P. Moynihan.
Issue Advocacy in Wisconsin: Analysis of the 1998 Elections and A Proposal for Enhanced Disclosure.
September 1999.
Public Financing and Electoral Competition in Minnesota and Wisconsin. Citizens’ Research
Foundation, April 1998.

Campaign Finance Reform in the States. Report prepared for the Governor’s Blue Ribbon
Commission on Campaign Finance Reform (State of Wisconsin). February 1998. Portions
reprinted in Anthony Corrado, Thomas E. Mann, Daniel Ortiz, Trevor Potter, and Frank J.
Sorauf, ed., Campaign Finance Reform: A Sourcebook. Washington, D.C.: Brookings
Institution, 1997.
“Does Public Financing of Campaigns Work?” Trends in Campaign Financing. Occasional Paper Series,
Citizens' Research Foundation, Los Angeles, CA. 1996. With John M. Wood.
The Development of the Advanced Medium Range Air-to-Air Missile: A Case Study of Risk and Reward
in Weapon System Acquisition. N-3620-AF. Santa Monica: RAND Corporation. 1993.
Barriers to Managing Risk in Large Scale Weapons System Development Programs. N-4624-AF. Santa
Monica: RAND Corporation. 1993. With Thomas K. Glennan, Jr., Susan J. Bodilly, Frank
Camm, and Timothy J. Webb.

Articles
“Voter Identification and Nonvoting in Wisconsin - Evidence from the 2016 Election.” Election Law
Journal 18:342-359 (2019). With Michael DeCrescenzo.
“Waiting to Vote in the 2016 Presidential Election: Evidence from a Multi-county Study.” Political
Research Quarterly 71 (2019). With Robert M. Stein, Christopher Mann, Charles Stewart III, et
al.
“Learning from Recounts.” Election Law Journal 17:100-116 (No. 2, 2018). With Stephen Ansolabehere,
Barry C. Burden, and Charles Stewart, III.
“The Complicated Partisan Effects of State Election Laws.” Political Research Quarterly 70:549-563
(No. 3, September 2017). With Barry C. Burden, David T. Canon, and Donald P. Moynihan.
“What Happens at the Polling Place: Using Administrative Data to Look Inside Elections.” Public
Administration Review 77:354-364 (No. 3, May/June 2017). With Barry C. Burden, David T.
Canon, Donald P. Moynihan, and Jacob R. Neiheisel.
“Alien Abduction and Voter Impersonation in the 2012 U.S. General Election: Evidence from a Survey
List Experiment.” Election Law Journal 13:460-475 (No. 4, December 2014). With John S.
Ahlquist and Simon Jackman.
“Election Laws, Mobilization, and Turnout: The Unanticipated Consequences of Election Reform.”
American Journal of Political Science, 58:95-109 (No. 1, January 2014). With Barry C. Burden,
David T. Canon, and Donald P. Moynihan. Winner of the State Politics and Policy Section of the
American Political Science Association Award for the best article published in the AJPS in 2014.
“Executive Power in the Obama Administration and the Decision to Seek Congressional Authorization
for a Military Attack Against Syria: Implications for Theories of Unilateral Action.” Utah Law
Review 2014:821-841 (No. 4, 2014).
“Public Election Funding: An Assessment of What We Would Like to Know.” The Forum 11:365-485
(No. 3, 2013).
“Selection Method, Partisanship, and the Administration of Elections.” American Politics Research
41:903-936 (No. 6, November 2013). With Barry C. Burden, David T. Canon, Stéphane Lavertu,
and Donald Moynihan.
“The Effect of Administrative Burden on Bureaucratic Perception of Policies: Evidence from Election
Administration.” Public Administration Review 72:741-751 (No. 5, September/October 2012).
With Barry C. Burden, David T. Canon, and Donald Moynihan.
“Early Voting and Election Day Registration in the Trenches: Local Officials’ Perceptions of Election
Reform.” Election Law Journal 10:89-102 (No. 2, 2011). With Barry C. Burden, David T.
Canon, and Donald Moynihan.
“Is Political Science Relevant? Ask an Expert Witness," The Forum: Vol. 8, No. 3, Article 6 (2010).
“Thoughts on the Revolution in Presidency Studies,” Presidential Studies Quarterly 39 (no. 4, December
2009).
“Does Australia Have a Constitution? Part I – Powers: A Constitution Without Constitutionalism.”

UCLA Pacific Basin Law Journal 25:228-264 (No. 2, Spring 2008). With Howard Schweber.
“Does Australia Have a Constitution? Part II: The Rights Constitution.” UCLA Pacific Basin Law
Journal 25:265-355 (No. 2, Spring 2008). With Howard Schweber.
“Public Election Funding, Competition, and Candidate Gender.” PS: Political Science and Politics
XL:661-667 (No. 4, October 2007). With Timothy Werner.
“Do Public Funding Programs Enhance Electoral Competition?” In Michael P. McDonald and John
Samples, eds., The Marketplace of Democracy: Electoral Competition and American Politics
(Washington, DC: Brookings Institution Press, 2006). With Timothy Werner and Amanda
Williams. Excerpted in Daniel H. Lowenstein, Richard L. Hasen, and Daniel P. Tokaji, Election
Law: Cases and Materials. Durham, NC: Carolina Academic Press, 2008.
“The Last 100 Days.” Presidential Studies Quarterly 35:533-553 (No. 3, September 2005). With William
Howell.
“Political Reality and Unforeseen Consequences: Why Campaign Finance Reform is Too Important To
Be Left To The Lawyers,” University of Richmond Law Review 37:1069-1110 (No. 4, May
2003).
“Unilateral Presidential Powers: Significant Executive Orders, 1949-1999.” Presidential Studies
Quarterly 32:367-386 (No. 2, June 2002). With Kevin Price.
“Answering Ayres: Requiring Campaign Contributors to Remain Anonymous Would Not Resolve
Corruption Concerns.” Regulation 24:24-29 (No. 4, Winter 2001).
“Student Attitudes Toward Instructional Technology in the Large Introductory US Government
Course.” PS: Political Science and Politics 33:597-604 (No. 3 September 2000). With John
Coleman.
“The Limits of Delegation – the Rise and Fall of BRAC.” Regulation 22:32-38 (No. 3, October 1999).
“Executive Orders and Presidential Power.” The Journal of Politics 61:445-466 (No.2, May 1999).
“Bringing Politics Back In: Defense Policy and the Theoretical Study of Institutions and Processes."
Public Administration Review 56:180-190 (1996). With Anne Khademian.
“Closing Military Bases (Finally): Solving Collective Dilemmas Through Delegation.” Legislative
Studies Quarterly, 20:393-414 (No. 3, August 1995).
“Electoral Cycles in Federal Government Prime Contract Awards: State-Level Evidence from the 1988
and 1992 Presidential Elections.” American Journal of Political Science 40:162-185 (No. 1,
February 1995).
“The Impact of Public Financing on Electoral Competitiveness: Evidence from Wisconsin, 1964-1990.”
Legislative Studies Quarterly 20:69-88 (No. 1, February 1995). With John M. Wood.
“Policy Disputes as a Source of Administrative Controls: Congressional Micromanagement of the
Department of Defense.” Public Administration Review 53:293-302 (No. 4, July-August 1993).
“Combat Aircraft Production in the United States, 1950-2000: Maintaining Industry Capability in an Era
of Shrinking Budgets.” Defense Analysis 9:159-169 (No. 2, 1993).

Book Chapters
“Is President Trump Conventionally Disruptive, or Unconventionally Destructive?” In The 2016
Presidential Elections: The Causes and Consequences of an Electoral Earthquake. Lanham, MD:
Lexington Press, 2017. Co-edited with Amnon Cavari and Richard J. Powell.
“Lessons of Defeat: Republican Party Responses to the 2012 Presidential Election.” In Amnon Cavari,
Richard J. Powell, and Kenneth R. Mayer, eds. The 2012 Presidential Election: Forecasts,
Outcomes, and Consequences. Lanham, MD: Rowman and Littlefield. 2014.
“Unilateral Action.” George C. Edwards, III, and William G. Howell, Oxford Handbook of the
American Presidency (New York: Oxford University Press, 2009).
“Executive Orders,” in Joseph Bessette and Jeffrey Tulis, The Constitutional Presidency. Baltimore:
Johns Hopkins University Press, 2009.
“Hey, Wait a Minute: The Assumptions Behind the Case for Campaign Finance Reform.” In Gerald C.
Lubenow, ed., A User’s Guide to Campaign Finance Reform. Lanham, MD: Rowman &

Littlefield, 2001.
“Everything You Thought You Knew About Impeachment Was Wrong.” In Leonard V. Kaplan and
Beverly I. Moran, ed., Aftermath: The Clinton Impeachment and the Presidency in the Age of
Political Spectacle. New York: New York University Press. 2001. With David T. Canon.
“The Institutionalization of Power.” In Robert Y. Shapiro, Martha Joynt Kumar, and Lawrence R.
Jacobs, eds. Presidential Power: Forging the Presidency for the 21st Century. New York:
Columbia University Press, 2000. With Thomas J. Weko.
“Congressional-DoD Relations After the Cold War: The Politics of Uncertainty.” In Downsizing
Defense, Ethan Kapstein ed. Washington DC: Congressional Quarterly Press. 1993.
“Elections, Business Cycles, and the Timing of Defense Contract Awards in the United States.” In Alex
Mintz, ed. The Political Economy of Military Spending. London: Routledge. 1991.
“Patterns of Congressional Influence In Defense Contracting.” In Robert Higgs, ed., Arms, Politics, and
the Economy: Contemporary and Historical Perspectives. New York: Holmes and Meier. 1990.

Other
“Campaign Finance: Some Basics.” Bauer-Ginsberg Campaign Finance Task Force, Stanford University.
September 2017. With Elizabeth M. Sawyer.
“The Wisconsin Recount May Have a Surprise in Store after All.” The Monkey Cage (Washington Post),
December 5, 2016. With Stephen Ansolabehere, Barry C. Burden, and Charles Stewart, III.
Review of Jason K. Dempsey, Our Army: Soldiers, Politicians, and American Civil-Military Relations.
The Forum 9 (No. 3, 2011).
“Voting Early, but Not Often.” New York Times, October 25, 2010. With Barry C. Burden.
Review of John Samples, The Fallacy of Campaign Finance Reform and Raymond J. La Raja, Small
Change: Money, Political Parties, and Campaign Finance Reform. The Forum 6 (No. 1, 2008).
Review Essay, Executing the Constitution: Putting the President Back Into the Constitution, Christopher
S. Kelley, ed.; Presidents in Culture: The Meaning of Presidential Communication, David
Michael Ryfe; Executive Orders and the Modern Presidency: Legislating from the Oval Office,
Adam L. Warber. In Perspectives on Politics 5:635-637 (No. 3, September 2007).
“The Base Realignment and Closure Process: Is It Possible to Make Rational Policy?” Brademas Center
for the Study of Congress, New York University. 2007.
“Controlling Executive Authority in a Constitutional System” (comparative analysis of executive power
in the U.S. and Australia), manuscript, February 2007.
“Campaigns, Elections, and Campaign Finance Reform.” Focus on Law Studies, XXI, No. 2 (Spring
2006). American Bar Association, Division for Public Education.
“Review Essay: Assessing The 2000 Presidential Election – Judicial and Social Science Perspectives.”
Congress and the Presidency 29: 91-98 (No. 1, Spring 2002).
Issue Briefs (Midterm Elections, Homeland Security; Foreign Affairs and Defense Policy; Education;
Budget and Economy; Entitlement Reform) 2006 Reporter’s Source Book. Project Vote Smart.
2006. With Meghan Condon.
“Sunlight as the Best Disinfectant: Campaign Finance in Australia.” Democratic Audit of Australia,
Australian National University. October 2006.
“Return to the Norm,” Brisbane Courier-Mail, November 10, 2006.
“The Return of the King? Presidential Power and the Law,” PRG Report XXVI, No. 2 (Spring 2004).
Issue Briefs (Campaign Finance Reform, Homeland Security; Foreign Affairs and Defense Policy;
Education; Budget and Economy; Entitlement Reform), 2004 Reporter’s Source Book. Project
Vote Smart. 2004. With Patricia Strach and Arnold Shober.
“Where's That Crystal Ball When You Need It? Finicky Voters and Creaky Campaigns Made for a
Surprise Electoral Season. And the Fun's Just Begun.” Madison Magazine. April 2002.
“Capitol Overkill.” Madison Magazine, July 2002.
Issue Briefs (Homeland Security; Foreign Affairs and Defense Policy; Education; Economy, Budget and
Taxes; Social Welfare Policy), 2002 Reporter’s Source Book. Project Vote Smart. 2002. With

Patricia Strach and Paul Manna.
“Presidential Emergency Powers.” Oxford Analytica Daily Brief. December 18, 2001.
“An Analysis of the Issue of Issue Ads.” Wisconsin State Journal, November 7, 1999.
“Background of Issue Ad Controversy.” Wisconsin State Journal, November 7, 1999.
“Eliminating Public Funding Reduces Election Competition." Wisconsin State Journal, June 27, 1999.
Review of Executive Privilege: The Dilemma of Secrecy and Democratic Accountability, by Mark J.
Rozell. Congress and the Presidency 24 (No. 1, 1997).
“Like Marriage, New Presidency Starts In Hope.” Wisconsin State Journal. March 31, 1996.
Review of The Tyranny of the Majority: Fundamental Fairness in Representative Democracy, by Lani
Guinier. Congress and the Presidency 21: 149-151 (No. 2, 1994).
Review of The Best Defense: Policy Alternatives for U.S. Nuclear Security From the 1950s to the 1990s,
by David Goldfischer. Science, Technology, and Environmental Politics Newsletter 6 (1994).
Review of The Strategic Defense Initiative, by Edward Reiss. American Political Science Review
87:1061-1062 (No. 4, December 1993).
Review of The Political Economy of Defense: Issues and Perspectives, Andrew L. Ross ed. Armed
Forces and Society 19:460-462 (No. 3, April 1993)
Review of Space Weapons and the Strategic Defense Initiative, by Crockett Grabbe. Annals of the
American Academy of Political and Social Science 527: 193-194 (May 1993).
“Limits Wouldn't Solve the Problem.” Wisconsin State Journal, November 5, 1992. With David T.
Canon.
“Convention Ceded Middle Ground.” Wisconsin State Journal, August 23, 1992.
“CBS Economy Poll Meaningless.” Wisconsin State Journal, February 3, 1992.
“It's a Matter of Character: Pentagon Doesn't Need New Laws, it Needs Good People.” Los Angeles
Times, July 8, 1988.

Conference Papers
“Voter Identification and Nonvoting in Wisconsin – Evidence from the 2016 Election.” Presented at the
2018 Annual Meeting of the Midwest Political Science Association, Chicago, IL April 5-8, 2018.
With Michael G. DeCrescenzo.
“Learning from Recounts.” Presented at the Workshop on Electoral Integrity, San Francisco, CA, August
30, 2017, and at the 2017 Annual Meeting of the American Political Science Association,
San Francisco, CA, August 31-September 3, 2017. With Stephen Ansolabehere, Barry C. Burden,
and Charles Stewart, III.
“What Happens at the Polling Place: Using Administrative Data to Understand Irregularities at the Polls.”
Conference on New Research on Election Administration and Reform, Massachusetts Institute of
Technology, Cambridge, MA, June 8, 2015. With Barry C. Burden, David T. Canon, Donald P.
Moynihan, and Jake R. Neiheisel.
“Election Laws and Partisan Gains: What are the Effects of Early Voting and Same Day Registration on
the Parties' Vote Shares.” 2013 Annual Meeting of the Midwest Political Science Association,
Chicago, IL, April 11-14, 2013. Winner of the Robert H. Durr Award.
“The Effect of Public Funding on Electoral Competition: Evidence from the 2008 and 2010 Cycles.”
Annual Meeting of the American Political Science Association, Seattle, WA, September 1-4,
2011. With Amnon Cavari.
“What Happens at the Polling Place: A Preliminary Analysis in the November 2008 General Election.”
Annual Meeting of the American Political Science Association, Seattle, WA, September 1-4,
2011. With Barry C. Burden, David T. Canon, Donald P. Moynihan, and Jake R. Neiheisel.
“Election Laws, Mobilization, and Turnout: The Unanticipated Consequences of Election Reform.” 2010
Annual Meeting of the American Political Science Association, Washington, DC, September 2-5,
2010. With Barry C. Burden, David T. Canon, Stéphane Lavertu and Donald P. Moynihan.
“Selection Methods, Partisanship, and the Administration of Elections.” Annual Meeting of the Midwest
Political Science Association, Chicago, IL, April 22-25, 2010. Revised version presented at the

Annual Meeting of the European Political Science Association, June 16-19, 2011, Dublin,
Ireland. With Barry C. Burden, David T. Canon, Stéphane Lavertu and Donald P. Moynihan.
“The Effects and Costs of Early Voting, Election Day Registration, and Same Day Registration in the
2008 Elections.” Annual Meeting of the American Political Science Association, Toronto,
Canada, September 3-5, 2009. With Barry C. Burden, David T. Canon, and Donald P. Moynihan.
“Comparative Election Administration: Can We Learn Anything From the Australian Electoral
Commission?” Annual Meeting of the American Political Science Association, Chicago, IL,
August 29-September 1, 2007.
“Electoral Transitions in Connecticut: Implementation of Public Funding for State Legislative Elections.”
Annual Meeting of the American Political Science Association, Chicago, IL, August 29-
September 1, 2007. With Timothy Werner.
“Candidate Gender and Participation in Public Campaign Finance Programs.” Annual Meeting of the
Midwest Political Science Association, Chicago IL, April 7-10, 2005. With Timothy Werner.
“Do Public Funding Programs Enhance Electoral Competition?” 4th Annual State Politics and Policy
Conference, Akron, OH, April 30-May 1, 2004. With Timothy Werner and Amanda Williams.
“The Last 100 Days.” Annual Meeting of the American Political Science Association, Philadelphia, PA,
August 28-31, 2003. With William Howell.
“Hey, Wait a Minute: The Assumptions Behind the Case for Campaign Finance Reform.” Citizens’
Research Foundation Forum on Campaign Finance Reform, Institute for Governmental Studies,
University of California Berkeley. August 2000.
“The Importance of Moving First: Presidential Initiative and Executive Orders.” Annual Meeting of the
American Political Science Association, San Francisco, CA, August 28-September 1, 1996.
“Informational vs. Distributive Theories of Legislative Organization: Committee Membership and
Defense Policy in the House.” Annual Meeting of the American Political Science Association,
Washington, DC, September 2-5, 1993.
“Department of Defense Contracts, Presidential Elections, and the Political-Business Cycle.” Annual
Meeting of the American Political Science Association, Washington, DC, September 2-5, 1993.
“Problem? What Problem? Congressional Micromanagement of the Department of Defense.” Annual
Meeting of the American Political Science Association, Washington DC, August 29 - September
2, 1991.

Talks and Presentations


“Turnout Effects of Voter ID Laws.” Rice University, March 23, 2018; Wisconsin Alumni Association,
October 13, 2017. With Michael DeCrescenzo.
“Informational and Turnout Effects of Voter ID Laws.” Wisconsin State Elections Commission,
December 12, 2017; Dane County Board of Supervisors, October 26, 2017. With Michael
DeCrescenzo.
“Voter Identification and Nonvoting in Wisconsin, Election 2016.” American Politics Workshop,
University of Wisconsin, Madison, November 24, 2017.
“Gerrymandering: Is There A Way Out?” Marquette University. October 24, 2017.
“What Happens in the Districting Room and What Happens in the Courtroom.” Geometry of Redistricting
Conference, University of Wisconsin-Madison, October 12, 2017.
“How Do You Know? The Epistemology of White House Knowledge.” Clemson University, February
23, 2016.
Roundtable Discussant, Separation of Powers Conference, School of Public and International Affairs,
University of Georgia, February 19-20, 2016.
Campaign Finance Task Force Meeting, Stanford University, February 4, 2016.
Discussant, “The Use of Unilateral Powers.” American Political Science Association Annual Meeting,
August 28-31, 2014, Washington, DC.
Presenter, “Roundtable on Money and Politics: What do Scholars Know and What Do We Need to
Know?” American Political Science Association Annual Meeting, August 28-September 1, 2013,

Chicago, IL.
Presenter, “Roundtable: Evaluating the Obama Presidency.” Midwest Political Science Association
Annual Meeting, April 11-14, 2012, Chicago, IL.
Panel Participant, “Redistricting in the 2010 Cycle,” Midwest Democracy Network.
Speaker, “Redistricting and Election Administration,” Dane County League of Women Voters, March 4,
2010.
Keynote Speaker, “Engaging the Electorate: The Dynamics of Politics and Participation in 2008.”
Foreign Fulbright Enrichment Seminar, Chicago, IL, March 2008.
Participant, Election Visitor Program, Australian Electoral Commission, Canberra, ACT, Australia.
November 2007.
Invited Talk, “Public Funding in State and Local Elections.” Reed College Public Policy Lecture Series.
Portland, Oregon, March 19, 2007.
Fulbright Distinguished Chair Lecture Tour, 2006. Public lectures on election administration and
executive power. University of Tasmania, Hobart (TAS); Flinders University and University of
South Australia, Adelaide (SA); University of Melbourne, Melbourne (VIC); University of
Western Australia, Perth (WA); Griffith University and University of Queensland, Brisbane
(QLD); Institute for Public Affairs, Sydney (NSW); The Australian National University,
Canberra (ACT).
Discussant, “Both Ends of the Avenue: Congress and the President Revisited,” American Political
Science Association Meeting, September 2-5, 2004, Chicago, IL.
Presenter, “Researching the Presidency,” Short Course, American Political Science Association Meeting,
September 2-5, 2004, Chicago, IL.
Discussant, Conference on Presidential Rhetoric, Texas A&M University, College Station, TX. February
2004.
Presenter, “Author Meets Author: New Research on the Presidency,” 2004 Southern Political Science
Association Meeting, January 8-11, New Orleans, LA.
Chair, “Presidential Secrecy,” American Political Science Association Meeting, August 28-31, 2003,
Philadelphia, PA.
Discussant, “New Looks at Public Approval of Presidents.” Midwest Political Science Association
Meeting, April 3-6, 2003, Chicago, IL.
Discussant, “Presidential Use of Strategic Tools.” American Political Science Association Meeting,
August 28-September 1, 2002, Boston, MA.
Chair and Discussant, “Branching Out: Congress and the President.” Midwest Political Science
Association Meeting, April 19-22, 2001, Chicago, IL.
Invited witness, Committee on the Judiciary, Subcommittee on Commercial and Administrative Law,
U.S. House of Representatives. Hearing on Executive Order and Presidential Power,
Washington, DC. March 22, 2001.
“The History of the Executive Order,” Miller Center for Public Affairs, University of Virginia (with
Griffin Bell and William Howell), January 26, 2001.
Presenter and Discussant, Future Voting Technologies Symposium, Madison, WI, May 2, 2000.
Moderator, Panel on Electric Utility Reliability. Assembly Staff Leadership Development Seminar,
Madison, WI. August 11, 1999.
Chair, Panel on “Legal Aspects of the Presidency: Clinton and Beyond.” Midwest Political Science
Association Meeting, April 15-17, 1999, Chicago, IL.
Session Moderator, National Performance Review Acquisition Working Summit, Milwaukee, WI. June
1995.
American Politics Seminar, The George Washington University, Washington D.C., April 1995.
Invited speaker, Defense and Arms Control Studies Program, Massachusetts Institute of Technology,
Cambridge, MA, March 1994.
Discussant, International Studies Association (Midwest Chapter) Annual Meeting, Chicago IL, October
29-30, 1993.

Seminar on American Politics, Princeton University, January 16-17, 1992.
Conference on Defense Downsizing and Economic Conversion, October 4, 1991, Harvard University.
Conference on Congress and New Foreign and Defense Policy Challenges, The Ohio State University,
Columbus, OH, September 21-22, 1990, and September 19-21, 1991.
Presenter, "A New Look at Short Term Change in Party Identification," 1990 Meeting of the American
Political Science Association, San Francisco, CA.

University and Department Service


Cross-Campus Human Research Protection Program (HRPP) Advisory Committee, 2019-present.
UW Athletic Board, 2014-present.
General Education Requirements Committee (Letters and Science), 1997-1998.
Communications-B Implementation Committee (Letters and Science), 1997-1999.
Verbal Assessment Committee (University) 1997-1998.
College of Letters & Science Faculty Appeals Committee (for students dismissed for academic reasons).
Committee on Information Technology, Distance Education and Outreach, 1997-98.
Hilldale Faculty-Student Research Grants, Evaluation Committee, 1997, 1998.
Department Computer Committee, 1996-1997; 1997-1998, 2005-2006. Chair, 2013-present.
Faculty Senate, 2000-2002, 2002-2005. Alternate, 1994-1995; 1996-1999; 2015-2016.
Preliminary Exam Appeals Committee, Department of Political Science, 1994-1995.
Faculty Advisor, Pi Sigma Alpha (Political Science Honors Society), 1993-1994.
Department Honors Advisor, 1991-1993.
Brown-bag Seminar Series on Job Talks (for graduate students), 1992.
Keynote speaker, Undergraduate Honors Symposium, April 13, 1991.
Undergraduate Curriculum Committee, Department of Political Science, 1990-1992; 1993-1994.
Individual Majors Committee, College of Letters and Sciences, 1990-1991.
Dean Reading Room Committee, Department of Political Science, 1989-1990; 1994-1995.

Teaching
Undergraduate
Introduction to American Government (regular and honors)
The American Presidency
Campaign Finance
Election Law
Classics of American Politics
Presidential Debates
Comparative Electoral Systems
Legislative Process
Theories of Legislative Organization
Senior Honors Thesis Seminar

Graduate
Contemporary Presidency
American National Institutions
Classics of American Politics
Legislative Process

December 5, 2020

Pearson v. Kemp, Case No. 1:20-cv-4809-TCB

United States District Court for Northern District of Georgia

Expert Report of Jonathan Rodden, PhD and William Marble

__________________________
Jonathan Rodden, PhD

__

William Marble

I. Introduction

Dr. Eric Quinnell and Dr. S. Stanley Young (hereafter QY) present several analyses of data

from Fulton County, Georgia that they claim show “unexplainable statistical anomalies” and vote

patterns that “fail basic sanity and mathematical fidelity checks.” Their analyses are based on data

from Edison Research, a market research firm that gathers vote data and distributes it to news

outlets. QY make a number of unfounded assumptions about the data, which render their

conclusions suspect. Moreover, even granting their assumptions, none of their analyses show

“unexplainable statistical anomalies.” In fact, they closely mirror patterns that we would expect to

see in a fair election. None of their analyses provide any evidence whatsoever of fraudulent

activity.

II. Qualifications

Jonathan Rodden is Professor of Political Science and Senior Fellow at the Center for

Economic Policy Research and the Hoover Institution at Stanford University. For a full description

of his qualifications, see Section II of the other report by Dr. Rodden filed in this case and the

curriculum vitae attached thereto. Mr. Marble is a PhD candidate in the political science

department at Stanford University. He received a B.A. in political science and economics from the

University of Pennsylvania with a minor in mathematics. He has published papers in top political

science journals, including Journal of Politics, Political Science Research and Methods, and the

statistics journal Political Analysis. He has been awarded a number of grants from Stanford

University and the Stanford Institute for Economic Research, as well as a computational social

science grant from the Russell Sage Foundation. His research uses statistical tools to study voting

behavior, public opinion, political geography, and campaigns. For his work teaching statistical

methods to PhD students, he won a Stanford teaching award. During the 2014 midterm elections,

he worked with the NBC News Decision Desk as part of the University of Pennsylvania’s Program

on Opinion Research and Election Studies.

III. Data from Edison Research is Not Official Data

QY present analysis of the Edison Research data feed. Edison Research is a consumer

research firm that conducts exit polls and collects vote return data for the National Election Pool

consortium of news networks. 1 On Election Night and in the days that follow, Edison provides periodic

updates of vote counts to news organizations. QY’s analyses all revolve around the timing of

updates provided by Edison, and specifically the cumulative share of each candidate’s absentee

votes that were counted by different times on Election Day and thereafter in Fulton County,

Georgia.

While Edison’s data feed facilitates disseminating information about election results, it

does not represent official election results. It is unclear the extent to which individual batches of

Edison updates reflect the actual running total of votes counted by election officials at different

points in time. QY present essentially no description of the Edison data, how it is collected, how

it is distributed, or why “anomalies” in the Edison data should be used to infer anything about the

integrity of official vote return data, nor have they provided the Edison data.

Moreover, innocent human errors in Edison’s Election Night reporting occasionally occur.

For example, in the 2018 Wisconsin Senate race, one of Edison’s batch updates included an error

in which a large batch of votes was assigned to the wrong candidate. 2 The error was quickly caught

and corrected by Edison and news networks, but the fact that such an error can occur in the raw

1. https://web.archive.org/web/20201201125532/https://www.edisonresearch.com/election-polling/, accessed Dec. 4, 2020.
2. Stephen Pettigrew and Charles Stewart III. 2020. “Protecting the Perilous Path of Election Returns: From the Precinct to the News.” The Ohio State Technology Law Journal 16(2): 587-638.

feed from Edison casts doubt on whether anomalies in the Edison live updates reflect actual

anomalies, let alone outright fraud, in the official vote tabulation.

IV. Quinnell and Young Make Faulty Assumptions

Edison’s data feed may not reflect official results. Nonetheless, for the sake of argument,

we will maintain QY’s assumption that Edison’s live updates reflect the actual timing of counting

votes within Fulton County. QY’s report centers around the timing of when each candidate’s

absentee votes were counted within each precinct — as proxied by the times at which Edison

reported batches of ballots. 3

A statistic QY return to repeatedly is the share of a precinct’s absentee ballots that are

included in Edison’s first batch of results — which were reported on November 4 at 12:59 AM.4

They assume that this first batch of results reflects all of the absentee votes that were returned in

the weeks prior to Election Day. This assumption is not supported by any evidence about how

Fulton County officials count ballots. Instead, QY make this assumption on two bases: first,

because election officials in Georgia are allowed to process absentee ballots before Election Day;

and second, because the next several Edison updates do not contain any absentee ballots.

This assumption is certainly incorrect. Figure 1 plots the cumulative share of absentee

ballots in Fulton County that were received by date, according to records kept by Georgia’s

3. Somewhat confusingly, clusters of what are typically referred to as “precincts” are occasionally called “counties” in QY’s report and elsewhere. We adopt the term “precinct” to avoid confusion.
4. QY do not indicate what time zone Edison uses for its timestamps. In Edison’s county-level time series data, which we have obtained, the timestamp is followed by a “Z,” indicating “Zulu” or Greenwich Mean Time. Assuming Edison’s precinct-level time series data follows the same standard, the first batch of results was actually reported at 8:59 PM Eastern Daylight Time on November 3 (Election Day). This interpretation makes more sense, as news outlets began reporting results — which are based on Edison’s data — on Election Night well before 1:00 AM the next morning.

Secretary of State. 5 Nearly all absentee ballots in Fulton County — nearly 99% — were returned

before November 3. Edison’s first batch of election results, in contrast, contained only about 50%

of the eventual total number of absentee votes. QY’s presumption that this batch contains all of

the absentee votes received before Election Day is obviously incorrect. Without a doubt, there

were many ballots received before Election Day that were not counted until after that first batch.

It is important to note that we have no information, and evidently QY also have no

information, about the specific procedure by which absentee ballots are counted in Fulton County,

which matters a great deal for the story that QY are trying to tell. They are not explicit about their

assumptions, but their discussion seems to indicate that they believe all of the Trump and Biden

absentee ballots were in one, large, mixed-up pile, so that the probability of a particular ballot

being counted at a particular time should be equal for Democrats and Republicans. This

assumption evidently drives their claim that the Democratic and Republican ballots should have

exactly the same likelihood of being counted before or after specific points in time.

There are several obvious reasons to doubt this assumption. We know that election officials

are required to attribute each absentee ballot to a precinct. One possibility is that as the ballots

come in, they are pre-sorted by precinct—or by groups of precincts—so that during the counting

process, it would be likely that many of a precinct’s voters would be counted in clumps.

5. These data are drawn from the Georgia Secretary of State’s website, which provides a version of the state’s voter file that includes a column indicating when a voter’s absentee ballot was returned: https://elections.sos.ga.gov/Elections/voterabsenteefile.do

Figure 1: Share of Fulton County absentee ballots returned by each date.

Another possibility is that there is no pre-processing into piles by precinct, but something

like this happens as a matter of course due to the way ballots are collected and delivered to election

administrators, either by U.S. Postal Service, or from the process of bringing in ballots from the

various ballot dropbox locations in Fulton County. It is extremely likely that ballots are arriving in a

way that is geographically “lumpy.” That is to say, a ballot from Chattahoochee Hills is likely to

be close in the pile with other ballots from Chattahoochee Hills. A ballot from the urban core of

Atlanta is likely to be in the pile near other ballots from urban Atlanta. This is very likely to be

true of ballots sent in the mail, since they are retrieved from specific neighborhoods by letter

carriers, or taken in bunches by postal workers from “blue boxes” or post office drop points. And

it is almost certainly true of ballots retrieved from ballot drop-boxes, which are scattered in

locations throughout the county.

Unless, through some strange process, the ballots are shuffled like a deck of cards as they

come in, we can probably assume that there is some geographic correlation in the time at which

ballots are counted. Given that partisans are not randomly or uniformly distributed geographically,

this geographic “lumpiness” in ballot counting matters a great deal. Consider some important facts

about Fulton County: 1) There were far more Democratic than Republican absentee ballots overall,

because Fulton County is largely Democratic, and because Republicans were strongly discouraged

by their leaders from voting absentee, 2) of 384 precincts, there are only 6 precincts where Trump

received a majority of absentee ballots cast, and 3) there are 165 precincts where over 90 percent

of the absentee ballots were for Biden.

To see this more clearly, we include in Figure 2 a histogram of Biden’s share of absentee

ballots cast in Fulton County precincts. It is clear that there are not very many Trump absentee

ballots to count in the first place, and they are relatively clustered in a handful of precincts where

Biden still receives a majority. Because of this, if there is any geographic “lumpiness” to the

counting of votes over time, we would anticipate large spikes of Biden votes showing up whenever

a clump of voters drawn from the overwhelmingly Democratic precincts happened to be counted.

We would not expect corresponding spikes for Trump.

Figure 2: Distribution of Biden Share of Absentee Votes Cast, Fulton County Precincts

V. The Share of Absentee Ballots Counted Before November 3 Is Not Suspicious

QY make the faulty assumption that absentee ballots reported in Edison’s first update

reflect ballots received and counted before November 3, and absentee ballots reported in

subsequent Edison updates reflect ballots received after November 3 (para. 8). 6 This interpretation

is faulty for the reasons pointed out above: the vast majority of absentee ballots were returned prior

to November 4. The reason is that, by Georgia law, absentee ballots need to be returned by 7:00

PM on Election Day (November 3) in order to be counted. 7 It is our understanding that the only

6. Actually, QY refer to these ballots as being received before and after November 4, which was the day after Election Day. Due to the time zone issue noted in footnote 4, we instead refer to November 3 in summarizing their arguments.
7. https://www.acluga.org/en/take-action/2020-election-dates-and-deadlines

ballots received after November 3 that might be counted are military voters’ ballots, which must

be received by November 6.

This simple fact undermines QY’s contention that the number of ballots received before

November 3 is “curiously close” to the number of absentee ballots received after November 3. It

simply cannot be the case that all ballots received prior to November 3 were reported in Edison’s

first batch of updates. Instead, this first batch of results, which contained roughly 50% of the total

absentee ballots, must have included roughly half of the absentee votes received before November

3 — because, again, nearly all ballots were returned before that date. A more likely explanation is

that election officials had the capacity to process about half of the absentee ballots received before

Election Day in time to report those votes to Edison Research by the time Edison issued its first

batch of results.

However, suppose we grant QY’s assumption that the first batch of results contained all of

the absentee votes received before November 3. Even if roughly equal numbers of absentee ballots

had been returned before Election Day as on Election Day, there is absolutely no reason why this

would be indicative of fraud. QY provide no comparison data — for example, from other states or

counties — to suggest that this pattern is anomalous.

VI. Small Precincts Are Likely to Have 0% or 100% of Trump Absentee Votes Counted
by Election Day

There is no reason to think that the absentee votes reported by Edison in its first batch

update reflect all of the absentee votes received before Election Day. Instead, it more likely reflects

constraints on the speed with which election administrators can count votes.

Nonetheless, QY make a series of claims about statistical anomalies evident in data on the

share of a candidate’s absentee votes within each precinct that was included in Edison’s first batch

of results. We show that their claims of data irregularities are baseless: there is absolutely nothing

anomalous about the distribution of votes that were counted before Election Day. Using simple

arguments from probability theory as well as a simple numerical simulation of a fair election, we

show that the patterns documented in QY’s report are similar to what we would expect in a fair

election.

QY present analysis showing that there is a relatively large number of precincts where

nearly 100% of Trump’s absentee votes were counted in Edison’s first batch of results. Meanwhile,

there were no precincts in which over 71% of Biden’s absentee votes were reported in Edison’s

first batch of results. QY suggest that this pattern is unusual and indicative of data irregularities,

claiming that there is less than 0.01% probability of observing a precinct in which all of Trump’s

absentee votes are received before Election Day. In fact, this pattern is not surprising in the least

and their probability calculation is based on fundamentally flawed assumptions. What QY fail to

consider is that there are very many precincts in Fulton County that received very few absentee

votes for Trump — making it very probable that, in some precincts, close to 0% or 100% of

Trump’s absentee votes would arrive before Election Day.

QY note — and we corroborate, using official vote data from Fulton County 8 — that there

were 23 precincts out of 384 total in which Trump received no absentee votes at all. There were

an additional 13 precincts in which Trump received only a single absentee vote, and 115 precincts

— nearly a third of all precincts — in which he received fewer than 10 absentee votes. In contrast,

there are very few precincts in which Biden received only a small number of absentee votes —

8. https://results.enr.clarityelections.com/GA/Fulton/105430/web.264614/#/summary

only 10 precincts in which he received no absentee votes, and only 21 in which he received fewer

than 10 absentee votes. 9
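
Tallies of this kind are straightforward to reproduce from the official precinct-level returns. The Python sketch below illustrates the computation; the file name and column names are hypothetical placeholders, since the official Fulton County export may use a different layout:

```python
import pandas as pd

# Hypothetical file and column names -- the official Fulton County
# export may be structured differently.
df = pd.read_csv("fulton_absentee_by_precinct.csv")

for cand in ["trump_absentee", "biden_absentee"]:
    print(cand,
          "zero votes:", (df[cand] == 0).sum(),
          "exactly one vote:", (df[cand] == 1).sum(),
          "fewer than 10 votes:", (df[cand] < 10).sum())
```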

With such a large number of precincts where Trump received few absentee votes, it is very

probable that in some of them, 0 or 100% of the votes were counted before Election Day. As an

analogy, consider a series of coin flips — akin to a voter’s decision about whether to cast their

absentee vote early enough for it to be counted before Election Day or not. If we flip the coin only

3 times, it is relatively probable that we end up with all heads or all tails — specifically, there is a

12.5% chance of each. Now, imagine that 15 people each flip a coin 3 times. The probability that

at least one of the 15 comes up with all heads or all tails is very high: over 98%. However, now

imagine each person flips 10 coins. The probability of getting all heads or all tails is now very

small: about 0.2%. The probability that at least one of 15 people, each flipping 10 coins, gets

all heads or all tails is only about 3%.
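
Each of these figures can be verified directly. The short Python sketch below is our own illustration of the arithmetic, not part of the formal analysis:

```python
# Probabilities for the coin-flip analogy (an illustrative sketch).
def p_all_heads_or_tails(n_flips: int) -> float:
    """Probability that a single person's n flips are all heads or all tails."""
    return 2 * 0.5 ** n_flips

def p_at_least_one(n_people: int, n_flips: int) -> float:
    """Probability that at least one of n_people gets all heads or all tails."""
    p = p_all_heads_or_tails(n_flips)
    return 1 - (1 - p) ** n_people

print(p_all_heads_or_tails(3))   # 0.25  (12.5% chance of each extreme)
print(p_at_least_one(15, 3))     # ~0.987 -> "over 98%"
print(p_all_heads_or_tails(10))  # ~0.00195 -> "about 0.2%"
print(p_at_least_one(15, 10))    # ~0.029 -> "about 3%"
```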

The precincts with few Trump absentee voters are analogous to 15 people

flipping 3 coins each. Not only should we expect to observe some precincts where close to 0 or

100% of Trump’s absentee votes were counted before Election Day, it would be surprising if we

did not. In contrast, there are not many districts with a small number of Biden absentee votes.

Therefore, we should see many fewer precincts where close to 0 or 100% of Biden’s absentee

votes were counted before Election Day — just as it is much less likely for someone flipping 10

coins to get all heads or all tails. Simply put, the histograms that are presented in QY’s report are

roughly what we should expect based on elementary probability theory. 10

9. These patterns are to be expected. Fulton is a heavily Democratic county, and there are many small, urban precincts throughout the county in which Trump received no votes.
10. Technically, the probability of observing 0 heads from n coin flips is given by the equation p = 0.5^n. For 2 coin flips, there is a 25% probability of getting 0 heads. For 6 coin flips, there is roughly a 1.6% chance of getting 0 heads.

VII. A Simple Simulation Matches the Patterns in QY’s Report, Undermining Their
Claim to Have Discovered Anomalies

To further probe the ability of this argument to explain the pattern of results in QY’s report,

we conduct a simple numerical simulation that extends the coin-flip analogy used above. This

simulation retains the same intuition but is designed to closely mirror the actual precinct-level data

in Fulton County. For the reasons explained above, although we are skeptical about it, in this

exercise we adopt QY’s assumption that there is no geographic lumpiness to the vote counting.

That is to say, we assume that all of the ballots have been shuffled like a deck of cards, and a

Chattahoochee Hills voter is mixed in a pile of ballots such that he or she is no more likely to be

counted right after another neighboring voter from Southern Fulton County than right after a

Buckhead voter.

We start with the total number of absentee votes for Trump in each precinct, derived from

official Fulton County vote return data. Then, we assume each Trump absentee voter flips a coin

to decide whether or not to cast their ballot early enough for it to be counted before Election Day.11

We then calculate the proportion of total simulated absentee votes within each precinct that arrive

before Election Day.
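
The core of this simulation can be expressed in a few lines of Python. The sketch below uses a small made-up vector of precinct totals for illustration; the actual simulation uses the official Fulton County precinct-level totals, with the 47% weight described in footnote 11:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative placeholder: Trump absentee votes per precinct. The actual
# simulation uses official Fulton County precinct-level totals instead.
trump_votes = np.array([0, 1, 3, 8, 25, 120, 400])

# Each voter "flips a weighted coin": counted before Election Day with
# probability 0.47 (Trump's share in Edison's first batch; see footnote 11).
early_counts = rng.binomial(trump_votes, 0.47)

# Share of each precinct's Trump absentee votes counted before Election Day,
# skipping precincts with no Trump absentee votes.
mask = trump_votes > 0
share_early = early_counts[mask] / trump_votes[mask]
print(share_early)  # small precincts often land at exactly 0.0 or 1.0
```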

Figure 3 shows a histogram, using the simulated dataset, of the percent of Trump’s absentee

votes within a precinct that are counted before Election Day. This graph looks strikingly similar

to the one presented as evidence of “statistical anomalies” in QY’s report (para. 30). There are

spikes in the histogram around 0% and 100%, just as in QY’s report. Far from being anomalous,

the general pattern presented in QY’s report is just what we would expect to observe in a fair

election.

11. The table after para. 27 in QY’s report indicates that about 47% of Trump’s total absentee votes were counted in the first batch of Edison data. Therefore, instead of flipping a fair coin, we assume Trump voters flip a weighted coin that comes up heads 47% of the time.

Figure 3: Simulated share of precincts’ absentee votes for Trump that are counted before
Election Day.

As expected from our discussion, the precincts with very high or very low share of

absentee ballots counted before Election Day are precincts where there are very few Trump

absentee voters. To see this, consider Figure 4 below. In this scatterplot, each point represents a

precinct. On the horizontal axis is the actual number of absentee votes for Trump in that precinct. 12

On the vertical axis is the simulated percentage of Trump votes that are counted before Election

Day. The points are moved slightly from their true x-y values to make it easier to see points that

overlap.

At the left-hand side of the plot, we have precincts that have very few Trump absentee

voters. These precincts are highly variable in the proportion of Trump absentee votes that are

counted before Election Day in our simulation: some precincts have 0% counted and some have

12. Because there is a large asymmetry in precinct sizes in Fulton County, the horizontal axis is plotted on a logarithmic scale.

100%. As we move to the right — as the precincts have more Trump absentee voters — the

dispersion of the points decreases substantially. This pattern exactly mimics the coin flip analogy

above: when we flip a coin only a few times, it’s fairly likely that we’ll end up with close to all

heads or all tails. But if we increase the number of flips, we increase the odds of getting close to a

50-50 distribution of heads and tails.

Figure 4: Simulated percentage of a precinct's Trump absentee votes that are counted
before Election Day versus the total number of Trump absentee votes. Points are moved from
their true values slightly to make it easier to see overlapping points.

This simple simulation surely leaves out many details about the way that voters decide

when to cast a ballot and when election officials count those ballots. It also ignores the likely

geographic “lumpiness” in the timing of ballot counting. Nonetheless, it demonstrates that the

patterns presented in QY’s report are entirely expected and provide no evidence of fraud or

manipulation whatsoever.

VIII. Skewness and Kurtosis Are Uninformative About Statistical Anomalies

QY make additional claims about the cross-precinct distributions of the share of a

candidate’s absentee votes that were reported in the first Edison update. Specifically, they calculate

the skewness and kurtosis of these distributions. Skewness and kurtosis are statistics that indicate,

respectively, how symmetric a distribution is around its average value and how “fat” its tails are

— i.e., how common it is for observations to fall very far from the average value of the distribution.

QY refer to the skewness statistic for the Biden distribution as a “meaningless nonsense

calculation.” On this point, we agree: this statistic is meaningless for the purpose of detecting

statistical anomalies. QY provide no explanation of why we should expect any particular skewness

or kurtosis values in the data they present. Absent such an explanation of what statistical

regularities we should expect in datasets like the one they present, there is no reason to think that

any skewness or kurtosis value is indicative of statistical irregularities.

Perhaps QY expect that this dataset should follow a normal, bell-shaped distribution.

Normal distributions have a skewness value of 0 and a kurtosis value of 3. However, as we show

above through our probability argument and numerical simulation, there is absolutely no reason to

expect that this dataset should follow a normal distribution, and in fact it would be surprising if it

did. The fact that the skewness statistic was not 0 and the kurtosis statistic was not 3 is totally

uninformative about the integrity of Fulton County’s vote counting.

QY also misinterpret their own statistics. They write that an observed skewness of -153.5%

implies that most outcomes lie below 0. This is incorrect. A negative skewness indicates that the

left-hand tail of the distribution is longer than the right-hand tail — informally, that there are more

observations to the right of the average value than there are to the left. A negative skewness statistic

does not imply that most observations are below 0. In fact, the data they present — which, by their

nature, cannot be below 0 — shows quite clearly that a distribution can have negative skewness

without any observations below 0.
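
The point is easy to demonstrate with a toy dataset. The Python sketch below, a constructed example rather than QY’s data, produces a strictly positive dataset with clearly negative skewness:

```python
from scipy.stats import skew

# Constructed all-positive data: most values near 1.0, a long tail toward 0.
data = [0.05, 0.6, 0.9, 0.95, 0.97, 0.98, 0.99, 1.0, 1.0, 1.0]

print(skew(data))  # negative: the left-hand tail is longer
print(min(data))   # 0.05 -- yet no observation lies below 0
```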

In sum, QY’s discussion of skewness and kurtosis is totally meaningless for the

determination of statistical anomalies in Fulton County’s vote counting. They present no argument

for why any particular values would be anomalous and they misinterpret their own data analysis.

IX. Over-Time Correlations in Vote Counting Are Inevitable

QY make additional claims about the correlations between vote totals for each candidate

across precincts over time. For example, in para. 25 they point to graphs that plot the cumulative

share of each candidate’s eventual absentee votes that had been counted at different points in time

within a set of precincts. They write that “all gains track nearly perfectly,” implying that this

“synchronous result” is evidence that “absentee votes of all precincts [are] centralized and

coordinated.” QY appear to insinuate that such coordination would be nefarious. No data analysis

is required to reach the conclusion that vote counting is likely coordinated across precincts. While

absentee vote totals in Fulton County are eventually apportioned into voters’ precincts, our

understanding is that actual counting is done in a centralized manner by election administrators.

Centralization in processing and counting of absentee ballots by county election administrators is

a common practice. It would seem most impractical to send absentee ballots out to individual

precincts for counting. Some level of centralization in ballot-counting is not evidence of anything

nefarious, but rather a run-of-the-mill feature of election administration. In any case, their data

analysis tells us nothing about whether counting is centralized or not. As time goes on, a higher

proportion of the total absentee ballots are counted. It is impossible for these time series not to be

highly correlated across precincts.
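
This inevitability is easy to see in simulated data. In the Python sketch below, which uses made-up arrival orders rather than the Edison feed, two precincts have their ballots counted in completely unrelated random orders, yet their cumulative count shares are still almost perfectly correlated:

```python
import numpy as np

rng = np.random.default_rng(1)

# Cumulative share of ballots counted at 20 successive update times, for two
# precincts whose ballots are counted in independent random orders.
share_a = np.sort(rng.uniform(size=20))
share_b = np.sort(rng.uniform(size=20))

# Both series rise monotonically from near 0 to near 1, so they are highly
# correlated even though the underlying counting processes share nothing.
print(np.corrcoef(share_a, share_b)[0, 1])  # typically above 0.95
```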

Jonathan Rodden
Stanford University
Department of Political Science Phone: (650) 723-5219
Encina Hall Central Fax: (650) 723-1808
616 Serra Street Email: jrodden@stanford.edu
Stanford, CA 94305

Personal
Born August 18, 1971, St. Louis, MO.
United States Citizen.

Education
Ph.D. Political Science, Yale University, 2000.
Fulbright Scholar, University of Leipzig, Germany, 1993–1994.
B.A., Political Science, University of Michigan, 1993.

Academic Positions
Professor, Department of Political Science, Stanford University, 2012–present.

Senior Fellow, Hoover Institution, Stanford University, 2012–present.


Senior Fellow, Stanford Institute for Economic Policy Research, 2020–present.
Director, Spatial Social Science Lab, Stanford University, 2012–present.
W. Glenn Campbell and Rita Ricardo-Campbell National Fellow, Hoover Institution, Stanford University, 2010–2012.
Associate Professor, Department of Political Science, Stanford University, 2007–2012.
Fellow, Center for Advanced Study in the Behavioral Sciences, Palo Alto, CA, 2006–2007.
Ford Career Development Associate Professor of Political Science, MIT, 2003–2006.

Visiting Scholar, Center for Basic Research in the Social Sciences, Harvard University, 2004.
Assistant Professor of Political Science, MIT, 1999–2003.
Instructor, Department of Political Science and School of Management, Yale University, 1997–1999.

Publications
Books
Why Cities Lose: The Deep Roots of the Urban-Rural Divide. Basic Books, 2019.
Decentralized Governance and Accountability: Academic Research and the Future of Donor Programming. Co-
edited with Erik Wibbels, Cambridge University Press, 2019.
Hamilton’s Paradox: The Promise and Peril of Fiscal Federalism, Cambridge University Press, 2006. Winner,
Gregory Luebbert Award for Best Book in Comparative Politics, 2007.
Fiscal Decentralization and the Challenge of Hard Budget Constraints, MIT Press, 2003. Co-edited with
Gunnar Eskeland and Jennie Litvack.

Peer Reviewed Journal Articles


Partisan Dislocation: A Precinct-Level Measure of Representation and Gerrymandering, 2020, Political
Analysis forthcoming (with Daryl DeFord and Nick Eubank).

Who is my Neighbor? The Spatial Efficiency of Partisanship, 2020, Statistics and Public Policy (with
Nick Eubank).
Handgun Ownership and Suicide in California, 2020, New England Journal of Medicine 382:2220-2229
(with David M. Studdert, Yifan Zhang, Sonja A. Swanson, Lea Prince, Erin E. Holsinger, Matthew J.
Spittal, Garen J. Wintemute, and Matthew Miller).

Viral Voting: Social Networks and Political Participation, 2020, Quarterly Journal of Political Science (with
Nick Eubank, Guy Grossman, and Melina Platas).
It Takes a Village: Peer Effects and Externalities in Technology Adoption, 2020, American Journal of
Political Science (with Romain Ferrali, Guy Grossman, and Melina Platas). Winner, 2020 Best Conference
Paper Award, American Political Science Association Network Section.

Assembly of the LongSHOT Cohort: Public Record Linkage on a Grand Scale, 2019, Injury Prevention
(with Yifan Zhang, Erin Holsinger, Lea Prince, Sonja Swanson, Matthew Miller, Garen Wintemute, and
David Studdert).
Crowdsourcing Accountability: ICT for Service Delivery, 2018, World Development 112: 74-87 (with Guy
Grossman and Melina Platas).

Geography, Uncertainty, and Polarization, 2018, Political Science Research and Methods doi:10.1017/psrm.2018.12 (with Nolan McCarty, Boris Shor, Chris Tausanovitch, and Chris Warshaw).
Handgun Acquisitions in California after Two Mass Shootings, 2017, Annals of Internal Medicine 166(10):698-706. (with David Studdert, Yifan Zhang, Rob Hyndman, and Garen Wintemute).

Cutting Through the Thicket: Redistricting Simulations and the Detection of Partisan Gerrymanders,
2015, Election Law Journal 14,4:1-15 (with Jowei Chen).
The Achilles Heel of Plurality Systems: Geography and Representation in Multi-Party Democracies, 2015, American Journal of Political Science 59,4: 789-805 (with Ernesto Calvo). Winner, Michael Wallerstein Award for best paper in political economy, American Political Science Association.

Why has U.S. Policy Uncertainty Risen Since 1960?, 2014, American Economic Review: Papers and Proceedings May 2014 (with Nicholas Bloom, Brandice Canes-Wrone, Scott Baker, and Steven Davis).

Unintentional Gerrymandering: Political Geography and Electoral Bias in Legislatures, 2013, Quarterly
Journal of Political Science 8: 239-269 (with Jowei Chen).
How Should We Measure District-Level Public Opinion on Individual Issues?, 2012, Journal of Politics
74, 1: 203-219 (with Chris Warshaw).

Representation and Redistribution in Federations, 2011, Proceedings of the National Academy of Sciences
108, 21:8601-8604 (with Tiberiu Dragu).
Dual Accountability and the Nationalization of Party Competition: Evidence from Four Federations,
2011, Party Politics 17, 5: 629-653 (with Erik Wibbels).
The Geographic Distribution of Political Preferences, 2010, Annual Review of Political Science 13: 297–340.

Fiscal Decentralization and the Business Cycle: An Empirical Study of Seven Federations, 2009, Economics and Politics 22,1: 37–67 (with Erik Wibbels).
Getting into the Game: Legislative Bargaining, Distributive Politics, and EU Enlargement, 2009, Public
Finance and Management 9, 4 (with Deniz Aksoy).

The Strength of Issues: Using Multiple Measures to Gauge Preference Stability, Ideological Constraint,
and Issue Voting, 2008. American Political Science Review 102, 2: 215–232 (with Stephen Ansolabehere
and James Snyder).
Does Religion Distract the Poor? Income and Issue Voting Around the World, 2008, Comparative Political
Studies 41, 4: 437–476 (with Ana Lorena De La O).

Purple America, 2006, Journal of Economic Perspectives 20,2 (Spring): 97–118 (with Stephen Ansolabehere
and James Snyder).
Economic Geography and Economic Voting: Evidence from the U.S. States, 2006, British Journal of
Political Science 36, 3: 527–47 (with Michael Ebeid).

Distributive Politics in a Federation: Electoral Strategies, Legislative Bargaining, and Government Coalitions, 2004, Dados 47, 3 (with Marta Arretche, in Portuguese).
Comparative Federalism and Decentralization: On Meaning and Measurement, 2004, Comparative Politics 36, 4: 481-500. (Portuguese version, 2005, in Revista de Sociologia e Politica 25).

Reviving Leviathan: Fiscal Federalism and the Growth of Government, 2003, International Organization
57 (Fall), 695–729.
Beyond the Fiction of Federalism: Macroeconomic Management in Multi-tiered Systems, 2003, World
Politics 54, 4 (July): 494–531 (with Erik Wibbels).
The Dilemma of Fiscal Federalism: Grants and Fiscal Performance around the World, 2002, American
Journal of Political Science 46(3): 670–687.
Strength in Numbers: Representation and Redistribution in the European Union, 2002, European Union
Politics 3, 2: 151–175.
Does Federalism Preserve Markets? Virginia Law Review 83, 7 (with Susan Rose-Ackerman). Spanish
version, 1999, in Quorum 68.

Working Papers
Federalism and Inter-regional Redistribution, Working Paper 2009/3, Institut d’Economia de Barcelona.
Representation and Regional Redistribution in Federations, Working Paper 2010/16, Institut d’Economia
de Barcelona (with Tiberiu Dragu).

Chapters in Books
Political Geography and Representation: A Case Study of Districting in Pennsylvania (with Thomas
Weighill), forthcoming 2021.
Decentralized Rule and Revenue, 2019, in Jonathan Rodden and Erik Wibbels, eds., Decentralized Governance and Accountability, Cambridge University Press.
Geography and Gridlock in the United States, 2014, in Nathaniel Persily, ed. Solutions to Political
Polarization in America, Cambridge University Press.
Can Market Discipline Survive in the U.S. Federation?, 2013, in Daniel Nadler and Paul Peterson, eds,
The Global Debt Crisis: Haunting U.S. and European Federalism, Brookings Press.
Market Discipline and U.S. Federalism, 2012, in Peter Conti-Brown and David A. Skeel, Jr., eds, When
States Go Broke: The Origins, Context, and Solutions for the American States in Fiscal Crisis, Cambridge
University Press.
Federalism and Inter-Regional Redistribution, 2010, in Nuria Bosch, Marta Espasa, and Albert Sole
Olle, eds., The Political Economy of Inter-Regional Fiscal Flows, Edward Elgar.
Back to the Future: Endogenous Institutions and Comparative Politics, 2009, in Mark Lichbach and Alan Zuckerman, eds., Comparative Politics: Rationality, Culture, and Structure (Second Edition), Cambridge University Press.
The Political Economy of Federalism, 2006, in Barry Weingast and Donald Wittman, eds., Oxford Handbook of Political Economy, Oxford University Press.
Fiscal Discipline in Federations: Germany and the EMU, 2006, in Peter Wierts, Servaas Deroose, Elena
Flores and Alessandro Turrini, eds., Fiscal Policy Surveillance in Europe, Palgrave MacMillan.
The Political Economy of Pro-cyclical Decentralised Finance (with Erik Wibbels), 2006, in Peter Wierts,
Servaas Deroose, Elena Flores and Alessandro Turrini, eds., Fiscal Policy Surveillance in Europe, Palgrave
MacMillan.
Globalization and Fiscal Decentralization, (with Geoffrey Garrett), 2003, in Miles Kahler and David
Lake, eds., Governance in a Global Economy: Political Authority in Transition, Princeton University Press:
87-109. (Updated version, 2007, in David Cameron, Gustav Ranis, and Annalisa Zinn, eds., Globalization
and Self-Determination: Is the Nation-State under Siege? Routledge.)
Introduction and Overview (Chapter 1), 2003, in Rodden et al., Fiscal Decentralization and the Challenge
of Hard Budget Constraints (see above).
Soft Budget Constraints and German Federalism (Chapter 5), 2003, in Rodden, et al., Fiscal Decentralization and the Challenge of Hard Budget Constraints (see above).
Federalism and Bailouts in Brazil (Chapter 7), 2003, in Rodden, et al., Fiscal Decentralization and the
Challenge of Hard Budget Constraints (see above).
Lessons and Conclusions (Chapter 13), 2003, in Rodden, et al., Fiscal Decentralization and the Challenge
of Hard Budget Constraints (see above).

Online Interactive Visualization
Stanford Election Atlas, 2012 (collaboration with Stephen Ansolabehere at Harvard and Jim Herries at
ESRI)

Other Publications
How America’s Urban-Rural Divide has Shaped the Pandemic, 2020, Foreign Affairs, April 20, 2020.
An Evolutionary Path for the European Monetary Fund? A Comparative Perspective, 2017, Briefing
paper for the Economic and Financial Affairs Committee of the European Parliament.
Representation and Regional Redistribution in Federations: A Research Report, 2009, in World Report
on Fiscal Federalism, Institut d’Economia de Barcelona.
On the Migration of Fiscal Sovereignty, 2004, PS: Political Science and Politics July, 2004: 427–431.
Decentralization and the Challenge of Hard Budget Constraints, PREM Note 41, Poverty Reduction and
Economic Management Unit, World Bank, Washington, D.C. (July).
Decentralization and Hard Budget Constraints, APSA-CP (Newsletter of the Organized Section in
Comparative Politics, American Political Science Association) 11:1 (with Jennie Litvack).
Book Review of The Government of Money by Peter Johnson, Comparative Political Studies 32,7: 897-900.

Fellowships and Honors


Fund for a Safer Future, Longitudinal Study of Handgun Ownership and Transfer (LongSHOT),
GA004696, 2017-2018.
Stanford Institute for Innovation in Developing Economies, Innovation and Entrepreneurship research
grant, 2015.
Michael Wallerstein Award for best paper in political economy, American Political Science Association,
2016.
Common Cause Gerrymandering Standard Writing Competition, 2015.
General support grant from the Hewlett Foundation for Spatial Social Science Lab, 2014.
Fellow, Institute for Research in the Social Sciences, Stanford University, 2012.
Sloan Foundation, grant for assembly of geo-referenced precinct-level electoral data set (with Stephen
Ansolabehere and James Snyder), 2009-2011.
Hoagland Award Fund for Innovations in Undergraduate Teaching, Stanford University, 2009.
W. Glenn Campbell and Rita Ricardo-Campbell National Fellow, Hoover Institution, Stanford University, beginning Fall 2010.
Research Grant on Fiscal Federalism, Institut d’Economia de Barcelona, 2009.
Fellow, Institute for Research in the Social Sciences, Stanford University, 2008.
United Postal Service Foundation grant for study of the spatial distribution of income in cities, 2008.
Gregory Luebbert Award for Best Book in Comparative Politics, 2007.

Fellow, Center for Advanced Study in the Behavioral Sciences, 2006-2007.
National Science Foundation grant for assembly of cross-national provincial-level dataset on elections,
public finance, and government composition, 2003-2004 (with Erik Wibbels).
MIT Dean’s Fund and School of Humanities, Arts, and Social Sciences Research Funds.
Funding from DAAD (German Academic Exchange Service), MIT, and Harvard EU Center to organize the conference, “European Fiscal Federalism in Comparative Perspective,” held at Harvard University, November 4, 2000.
Canadian Studies Fellowship (Canadian Federal Government), 1996-1997.
Prize Teaching Fellowship, Yale University, 1998-1999.
Fulbright Grant, University of Leipzig, Germany, 1993-1994.
Michigan Association of Governing Boards Award, one of two top graduating students at the University of Michigan, 1993.
W. J. Bryan Prize, top graduating senior in political science department at the University of Michigan,
1993.

Other Professional Activities


International Advisory Committee, Center for Metropolitan Studies, Sao Paulo, Brazil, 2006–2010.
Selection committee, Mancur Olson Prize awarded by the American Political Science Association Political Economy Section for the best dissertation in the field of political economy.
Selection committee, Gregory Luebbert Best Book Award.
Selection committee, William Anderson Prize, awarded by the American Political Science Association for the best dissertation in the field of federalism and intergovernmental relations.

Courses
Undergraduate
Politics, Economics, and Democracy
Introduction to Comparative Politics
Introduction to Political Science
Political Science Scope and Methods
Institutional Economics
Spatial Approaches to Social Science

Graduate
Political Economy of Institutions
Federalism and Fiscal Decentralization
Politics and Geography

Consulting
2017. Economic and Financial Affairs Committee of the European Parliament.
2016. Briefing paper for the World Bank on fiscal federalism in Brazil.
2013-2018: Principal Investigator, SMS for Better Governance (a collaborative project involving USAID,
Social Impact, and UNICEF in Arua, Uganda).
2019: Written expert testimony in McLemore, Holmes, Robinson, and Woullard v. Hosemann, United States
District Court, Mississippi.
2019: Expert witness in Nancy Carola Jacobson v. Detzner, United States District Court, Florida.
2018: Written expert testimony in League of Women Voters of Florida v. Detzner No. 4:18-cv-002510,
United States District Court, Florida.
2018: Written expert testimony in College Democrats of the University of Michigan, et al. v. Johnson, et al.,
United States District Court for the Eastern District of Michigan.
2017: Expert witness in Bethune-Hill v. Virginia Board of Elections, No. 3:14-CV-00852, United States
District Court for the Eastern District of Virginia.
2017: Expert witness in Arizona Democratic Party, et al. v. Reagan, et al., No. 2:16-CV-01065, United
States District Court for Arizona.
2016: Expert witness in Lee v. Virginia Board of Elections, 3:15-cv-357, United States District Court for
the Eastern District of Virginia, Richmond Division.
2016: Expert witness in Missouri NAACP v. Ferguson-Florissant School District, United States District
Court for the Eastern District of Missouri, Eastern Division.
2014-2015: Written expert testimony in League of Women Voters of Florida et al. v. Detzner, et al., 2012-CA-
002842 in Florida Circuit Court, Leon County (Florida Senate redistricting case).
2013-2014: Expert witness in Romo v. Detzner, 2012-CA-000412 in Florida Circuit Court, Leon County (Florida Congressional redistricting case).
2011-2014: Consultation with investment groups and hedge funds on European debt crisis.
2011-2014: Lead Outcome Expert, Democracy and Governance, USAID and Social Impact.
2010: USAID, Review of USAID analysis of decentralization in Africa.
2006–2009: World Bank, Independent Evaluations Group. Undertook evaluations of World Bank decentralization and safety net programs.
2008–2011: International Monetary Fund Institute. Designed and taught course on fiscal federalism.
1998–2003: World Bank, Poverty Reduction and Economic Management Unit. Consultant for World Development Report, lecturer for training courses, participant in working group for assembly of decentralization data, director of multi-country study of fiscal discipline in decentralized countries, collaborator on review of subnational adjustment lending.

Last updated: October 19, 2020

William Marble
Encina Hall West, 616 Jane Stanford Way, Stanford, CA 94305
wpmarble@stanford.edu | williammarble.co | (610) 389-9708

Education
Stanford University, 2015-Present
Ph.D. Candidate in Political Science
Fields: American Politics, Political Methodology, Comparative Politics (minor)
Dissertation: “Political Responses to Economic Decline”
Committee: Kenneth Scheve, Jonathan Rodden, Justin Grimmer, Clayton Nall

University of Pennsylvania, 2011-2015
B.A. in Political Science and Economics, minor in Mathematics

Publications
William Marble and Clayton Nall. “Where Interests Trump Ideology: Liberal Homeowners and Local Opposition to Housing Development.” Journal of Politics (Forthcoming). [link]
Amalie Jensen, William Marble, Kenneth Scheve, and Matthew J. Slaughter. “City Limits to Partisan Polarization in the American Public.” Forthcoming, Political Science Research and Methods. [link]
William Marble and Matthew Tyler. “The Structure of Political Choices: Distinguishing Between Constraint and Multidimensionality.” Conditionally accepted, Political Analysis. [link]

Working Papers
Ala’ Alrababa’h, William Marble, Salma Mousa, and Alexandra Siegel. “Can Exposure to Celebrities Reduce Prejudice? The Effect of Mohamed Salah on Islamophobic Behaviors and Attitudes.” Revised and resubmitted, American Political Science Review. [link]
William Marble. “Responsiveness in a Polarized Era: How Local Economic Conditions Structure Campaign Rhetoric.” (Job Market Paper) [link]
Justin Grimmer and William Marble. “Who Put Trump in the White House? Explaining the Contribution of Voting Blocs to Trump’s Victory.” [link]
William Marble and Nathan Lee. “Why Not Run? How the Demands of Fundraising Undermine Ambition for Higher Office.” [link]
Kaiping Chen, Nathan Lee, and William Marble. “How Policymakers Evaluate Online versus Offline Constituent Messages.” [link]
William Marble. “All-Mail Voting Can Decrease Ballot Roll-Off.” [link]

In Progress
“Estimating Issue Weights in American Federal Elections, 1980-2018”
“Social Ties, Labor Mobility, and Support for the Welfare State”
“Attitude Activation and the Study of Political Campaigns” (with Cole Tanigawa-Lau and Justin Grimmer)
“How Much Do Social Connections Matter for Political Success?” (with Ari Ray)
“Creating the American Gentry: Political Consequences of Property Tax Reform in California” (with Clayton Nall)
“Where’s the Party in Foreign Policy?” (with Rachel Myrick and Carl Gustafson)
Grants and Awards
Dissertation Fellowship, Stanford Institute for Research in the Social Sciences, 2020-2021 ($5,500)
Collaborative Research Fellowship, Stanford Impact Labs, 2020 ($7,000)
Schultz Graduate Student Fellowship in Economic Policy, Stanford Institute for Economic Policy Research, 2019 ($9,000)
Computational Social Science Grant, Russell Sage Foundation, 2019 (with Ari Ray, $9,835)
Ric Weiland Graduate Fellowship in the Humanities and Sciences, 2018-2020
Stanford Centennial Teaching Assistant Award, 2018
Small Grants for Survey Experiments in Political Science, Stanford Institute for Research in the Social Sciences, 2018 (with Ala’ Alrababa’h and Salma Mousa, $1,000)
Conference travel grant, Penn College of Arts and Sciences, 2015
Undergrad research grant, Penn Democracy, Citizenship, and Constitutionalism Program, 2014
Research fellow, Penn Program on Opinion Research and Election Studies, 2013 and 2014

Invited Presentations
2019: UC Santa Barbara

Conferences
American Political Science Association (2017, 2019)
Midwest Political Science Association (2015, 2018)
Stanford-Berkeley Political Economy Working Group (2018)
American Association for Public Opinion Research (2016)

Teaching
Teaching Assistant
Graduate Political Methodology I, 2016 and 2017 (Stanford)
Graduate Political Methodology II, 2017 and 2018 (Stanford)
Math Camp for incoming Ph.D. students, 2016 and 2017 (Stanford)
Undergraduate Political Methodology, 2015 (Penn)
Thinking Strategically: Introduction to Game Theory, 2017 (Stanford)
What’s Wrong with American Politics? An Institutional Approach, 2019 and 2020 (Stanford)
International Negotiation and Decision-Making, 2018 (Stanford, short course)

Instructional Workshops
Introduction to Data Science, workshop for high school students visiting Stanford, 2018
Introduction to Webscraping, Stanford Summer Research College, 2016. Links to materials: slides, tutorial (pdf), GitHub.
Data Visualization Using ggplot2, presentation to Stanford political science graduate students, 2016. Links to materials: slides, GitHub.
Introduction to Stata, workshop for summer fellows at the Penn Program for Opinion Research and Election Studies, 2015
Service
Reviewer for American Political Science Review, American Journal of Political Science, Journal of Politics
TA Mentor for the Stanford Political Science Department, 2017-2020
Co-Chair, Stanford Political Science Graduate Student Association, 2018
Co-Organizer, Stanford Political Science Very Applied Methods Workshop, 2017-2018
Social Chair, Stanford Political Science Graduate Student Association, 2016

Policy Writing
“The Evidence and Tradeoffs for a Stay-at-Home Pandemic Response: A Multidisciplinary Review of Stay-at-Home Implementation in America.” Policy brief reviewing early research on covid-19, April 2020. (with Alexis A. Doyle, Mollie S.H. Friedlander, Grace D. Li, Courtney J. Smith, et al.) [link]
Non-testifying expert witness research in League of Women Voters of Florida v. Detzner, United States District Court, Northern District of Florida, 2018. (with Jonathan Rodden)
Co-author of expert report on the 2015 vote-by-mail election in San Mateo County, California. Commissioned by the San Mateo County Election Office. (with Melissa Michelson)

Other Experience
Co-founder, CivicPulse
Election analyst, NBC News Decision Desk, New York, 2014 midterm elections
Debate coach, La Salle College High School, Wyndmoor, PA, 2011-2015

References
Kenneth Scheve
Professor of Political Science and Global Affairs
Yale University
Email: kenneth.scheve@yale.edu

Jonathan Rodden
Professor of Political Science
Stanford University
Email: jrodden@stanford.edu

Clayton Nall
Assistant Professor of Political Science
UC Santa Barbara
Email: nall@ucsb.edu

Justin Grimmer
Professor of Political Science
Stanford University
Email: jgrimmer@stanford.edu
