
Discovery and the Evidentiary Foundations of Implicit Bias


By Allan G. King, Gregory Mitchell, Richard W. Black,
Catherine A. Conway, and Julie A. Totten
The authors of this article document the extent to which expert opinions regarding implicit bias rely on research that evades careful scrutiny by either the academic journals or the courts that admit the experts' testimony, discuss the arguments that shield the data underlying research from discovery, argue for discovery of secondary data notwithstanding the arguments against disclosure, and argue for excluding expert testimony that relies on data beyond the reach of the opposing party.

ALLAN G. KING is a shareholder in Littler Mendelson's Dallas office; GREGORY MITCHELL is the Joseph Weintraub-Bank of America Distinguished Professor of Law at the University of Virginia; RICHARD W. BLACK is a shareholder in Littler Mendelson's Washington, D.C. office; CATHERINE A. CONWAY is a partner in Gibson, Dunn & Crutcher's Los Angeles office; JULIE A. TOTTEN is a partner in the Sacramento and San Francisco offices of Orrick, Herrington & Sutcliffe LLP. The authors may be contacted at agking@littler.com, pgm6u@virginia.edu, rblack@littler.com, cconway@gibsondunn.com, and jatotten@orrick.com, respectively.

© 2015 by Allan G. King, Gregory Mitchell, Richard W. Black, Catherine A. Conway, and Julie A. Totten.

Experts who opine on implicit, or unconscious, bias as a source of discrimination pose unusual difficulties for attorneys who must challenge their testimony. 1 Implicit bias is difficult to measure using even sensitive instruments in controlled testing environments, and to date courts have not ordered employees of defendant companies to submit to testing for implicit bias. 2 Thus, unlike other experts who commonly appear in employment cases, experts who testify regarding implicit bias base their testimony on general social science research that has no demonstrated connection to the present case but which the experts contend is helpful in understanding the claims or defenses at issue in the case. 3 For example, Dr. William Bielby's report in Dukes v. Wal-Mart Stores, Inc. contains over 100 citations to the social-psychological literature, but fails to analyze any decisions by Wal-Mart that are alleged to be discriminatory. In contrast, the report in that same case by the plaintiffs' statistician, Dr. Richard Drogin, contains no citations to academic research but describes his analysis of Wal-Mart's data on pay and promotions. 4 The attack on Dr. Drogin's testimony focused on his own statistical analysis, which was rebutted with the statistical studies of Wal-Mart's own expert. On the other hand, the attack on Dr. Bielby focused primarily on the attenuated connection between the research he cited, which was not specific to Wal-Mart, and the employment decisions at issue in the case; the numerous studies on which he based his testimony went largely unexamined.

Dr. Drogin's testimony was based on Wal-Mart's own data. During the course of discovery, Wal-Mart was able to replicate his statistical work and expose what it alleged to be its weaknesses and shortcomings. As a result, not only was Dr. Drogin's credibility as a witness in play, but so were the many judgments he made as a statistician. His decision to include or exclude various segments of Wal-Mart's workforce in his data, which promotions he deemed worthy of study, how he defined a promotion, any distinctions he drew between full- and part-time employment, the estimation techniques he used to obtain his statistical results, and his
interpretation of those results all were potential areas of inquiry. Obviously, the same opportunity was afforded the plaintiffs, who took aim at Wal-Mart's statistical expert.

In contrast, because Dr. Bielby conducted no empirical study of Wal-Mart, but based his opinion largely on the research of social scientists who studied other businesses or laboratory subjects, the data underlying this substantial body of research were beyond the scope of the litigation.
This article advocates discovery of data underlying academic research relied on by testifying experts. We refer to such data as "secondary data." 5 Absent discovery, the data underlying the testifying expert's opinion will be unexamined. Although a party's own expert may critique the research literature cited by an opposing expert, the evidentiary foundation of studies crucial to expert opinions will evade scrutiny when the secondary data is not available for review. The research studies may have been peer reviewed, but reviewers typically recommend studies for publication without reviewing the underlying data. Confronting the research relied on by an expert solely by considering published details and competing published results is unsatisfactory because key details will remain unknown and flaws in collecting or analyzing data will not be exposed.

We argue that parties should request secondary data, that courts should not only permit but encourage discovery of secondary data to ensure the reliability of expert opinions, and that courts should exclude testimony premised on data that remain secret. First, we document the extent to which expert opinions regarding implicit bias rely on research that evades careful scrutiny by either the academic journals or the courts that admit the experts' testimony. Next, we discuss the arguments that shield the data underlying research from discovery. Finally, we argue for discovery of secondary data notwithstanding the arguments against disclosure, and argue for excluding expert testimony that relies on data beyond the reach of the opposing party.

Social Science-Based Testimony Regarding Implicit Bias Is Typically Based On Secondary Data

Secondary data, as we use the term, means data gathered and analyzed outside the context of the current case but relied on by a testifying expert as the foundation for his or her opinion. 6 The testifying expert must vouch for the reliability and validity of the research published in academic journals, although he or she may have no firsthand knowledge of how the data were gathered, how the data were prepared for analysis ("data cleaning"), what assumptions and choices were made in the data analysis, whether the researcher selectively reported favorable results, or the scrutiny the article received during the journal's review process. Nevertheless, experts who opine on the implications of implicit bias research for workplaces may assert that their opinions are supported by an externally valid body of research, even though that claim may be unproven. 7 And these experts may assert that their claims are based on robust research even though few, if any, of the studies cited have been replicated and even though data from many of these studies will not be available for sharing with other researchers for validation purposes. 8
One example is provided by Dr. Barbara Reskin, a professor of sociology at the University of Washington, who submitted an expert report in an employment discrimination class action alleging gender discrimination by Allstate Insurance Company. The reference section of Dr. Reskin's report includes over 80 citations to academic publications, and her distillation of that research provides the basis for her opinion that Allstate's "paternalistic culture and the discretion it requires and permits decisionmakers have [sic] contributed to the systemic disadvantage of female managers...." Yet, as the district court observed, "[m]ost of Dr. Reskin's opinion relies on laboratory studies, experimental research, and various historical trends and ideas which lead her to the conclusion that subjective decisionmaking is inherently flawed...." 9 Dr. Reskin conducted no experimental or statistical studies of data collected from Allstate managers and employees, conducted no survey of employees, and conducted no observational study of conditions at Allstate. She simply reviewed some case materials and then asserted that findings from the academic studies she cited applied to the Allstate case.
Dr. Richard F. Martell similarly relied on secondary data in EEOC v. Wal-Mart Stores, Inc., to support his conclusion regarding the prevalence of gender bias at Wal-Mart: "An extensive body of social science research demonstrates that women are often perceived as lacking many of the attributes, skills, and abilities deemed necessary to succeed in male sex-typed occupations. This causes women to be disadvantaged, especially in occupations dominated by men." 10 In support of his opinion, Dr. Martell discusses eight published social science studies in the text of his report and cites an additional 40 studies in footnotes. He asserts that these studies represent an established body of scientific research that is well-accepted within the scientific community and that has often been cited in employment litigation contexts. 11 Relying on social science studies from outside the context of the suit (and which often involve laboratory simulations rather than field studies from real workplaces), Dr. Martell contends that subjective selection criteria and measurement practices that accord hiring managers undue personal discretion serve to promote gender bias in traditionally male-dominated occupations, 12 and he contends that the practices of Wal-Mart are subject to such bias without having conducted any study of those practices. 13
Dr. Bielby's testimony in Dukes is in the same vein. 14 The scholarly literature provides the framework for his report and he injects into that framework anecdotal testimony and excerpts from Wal-Mart documents. His report includes more than 100 citations to the social-psychological literature, which consists chiefly of studies by other researchers, often with college student participants in settings far removed from realistic work conditions.

What is missing from all three reports, by Drs. Reskin, Martell, and Bielby, is any sense of the variation in behavior within the samples and across the conditions studied within the cited studies, the degree to which outliers may have driven the summary results presented, the degree to which small changes in the research design lead to large changes in results, and the degree to which data were selectively analyzed or presented. For this kind of information, more detail about each cited study would be needed and often examination of the underlying data would be necessary.
Federal Rule of Evidence 703 governs the admissibility of expert testimony based upon secondary data. It provides, in pertinent part, that this testimony is admissible if

experts in the field would reasonably rely on those kinds of facts or data in forming an opinion on the subject. But if the facts or data would otherwise be inadmissible, the proponent of the opinion may disclose them to the jury only if their probative value in helping the jury evaluate the opinion substantially outweighs their prejudicial effect.

This begs the question of the type of secondary data on which a testifying expert may reasonably rely. If an expert would not rely on secondary data for the kind of opinion offered in the case, then the reliance is not reasonable. Even if the expert would ordinarily use the secondary data to make claims outside the court that are similar to the claims made in court, the secondary data must itself still satisfy the Daubert test. 15
We now make clear that it is the judge who makes the determination of reasonable reliance, and that for the judge to make the factual determination under Rule 104(a) that an expert is basing his or her opinion on a type of data reasonably relied upon by experts, the judge must conduct an independent evaluation into reasonableness. The judge can of course take into account the particular expert's opinion that experts reasonably rely on that type of data, as well as the opinions of other experts as to its reliability, but the judge can also take into account other factors he or she deems relevant.

We think that the standard is equivalent to Rule 702's reliability requirement: there must be good grounds on which to find the data reliable. It makes sense that the standards are the same, because there will often be times when both Rule 702 and Rule 703 apply.... Testimony that is admissible as reliable under one Rule should not be inadmissible under another when the policy considerations underlying the two rules are the same. 16
Further,

When a testifying expert relies upon another's conclusions, facts, or data, the expert must have a familiarity with the methods or reasoning used. A scientist, however well-credentialed he may be, is not permitted to be a mouthpiece of a scientist in a different specialty. [E.g.,] in Dura Automotive, the court excluded a witness's testimony based on another's conclusions because the witness lacked the necessary expertise to determine whether appropriate methods were used in a complex groundwater model that "require[d] the exercise of sound technical judgment in evaluating all available geotechnical data to determine what input values should be used with respect to each parameter utilized." 17
Rule 703 encourages close scrutiny of secondary evidence. For example, the trial court in In re Orthopedic Bone Screw Products Liability Litigation 18 considered expert testimony regarding the reliability of a report underlying a testifying expert's opinion. The court thoroughly examined the report's methodology and concluded that it satisfied both Rule 702 and Rule 703, and thus could reasonably be relied on by the testifying expert. In contrast, the mere fact that a publication cited by an expert has been peer reviewed does not satisfy Rule 703. In Daubert, the Supreme Court recognized that:

[p]ublication (which is but one element of peer review) is not a sine qua non of admissibility; it does not necessarily correlate with reliability and in some instances well-grounded but innovative theories will not have been published. Some propositions, moreover, are too particular, too new, or of too limited interest to be published. But submission to the scrutiny of the scientific community is a component of good science, in part because it increases the likelihood that substantive flaws in methodology will be detected. The fact of publication (or lack thereof) in a peer reviewed journal thus will be a relevant, though not dispositive, consideration in assessing the scientific validity of a particular technique or methodology on which an opinion is premised. 19
Despite this burden of proof, experts on implicit bias routinely base their testimony on the published literature without providing any foundational evidence regarding the reliability of the dozens of studies they cite. Rather, their testimony invokes the fact of publication and peer review to establish this secondary evidence as reliable. For example, Dr. Martell writes in a report for the plaintiff in a gender discrimination case against AT&T:

My analysis and opinion in this case draws upon my expertise in research, teaching, and consulting on sex stereotyping and discrimination in the workplace, a topic that has been the subject of scientific research for decades, and which has resulted in the publication of literally thousands of articles in scientific, peer-reviewed journals. Much of the research I have relied on is published in social psychology, industrial-organizational psychology and management studies. 20

Dr. Reskin's report in Puffer v. Allstate likewise notes that she reviewed the depositions and exhibits in the case in the context of a large body of scientific research in "peer reviewed journals or edited volumes of the highest scholarly caliber." 21 Neither report demonstrates that the cited studies employed proper methods, reached defensible conclusions, and contained sufficient information for replication purposes; the fact of peer review alone serves as the proxy for reliability. 22
Implicit bias experts may also seek to bolster the credibility of their opinions by asserting that the social science studies on which they rely have been cited dozens or even hundreds of times by other academicians. However, frequency of citation is a poor proxy for reliability, especially where the citations occur in the context of additional basic research and do not involve extensions of the research to applied domains. In fact, science progresses by establishing the falsity of propositions that were once widely accepted. 23 Thus, while it may be relevant that a referenced publication has been cited frequently, that cannot and should not be the last word on whether the secondary data are reliable and helpful for the case at hand. Indeed, those who cite an article typically will have less information about the underlying research than the referees who initially reviewed the paper for publication, and, as explained in the following section, the peer-review process does not ensure reliability. 24

The Limits Of Peer Review

The peer review process may often be less searching than courts, parties, and their counsel assume. 25 Most relevant for our purposes, peer reviewers (journal referees) rarely receive, much less carefully examine, the primary data that are the foundation of the manuscripts submitted for review. Indeed, from the inception of the peer review process, there has been tension between the need to vet submissions to scientific journals and the goal of publishing meritorious articles in a timely manner, without exhausting the scarce resources of the scientific community.
Peer review came into being hundreds of years ago. As early as 1731, the Royal Society of Edinburgh consulted individuals "most versed in these matters" before publishing a collection of medical articles entitled Medical Essays and Observations. The method was not, however, universally embraced. In the nineteenth century, a proliferation of science journals hungry for content sprang up, and there was no effort to discourage authors by subjecting their work to lengthy and rigorous evaluation. In addition to the desire for content, many of these journals were managed by editors who worried less about objectivity and more about openly advocating and promoting their own individual views of the world. All this began to change during the early and mid-twentieth century, when individual journal editors found themselves no longer able to assess every submission received, due not only to the complexity of the works but also to the volume. These editors invited experts to participate in the selection of papers for publication by serving on editorial boards affiliated with the publication. Eventually, even this was not sufficient. The proliferation of scientific output following the Second World War made it impossible for affiliated boards to continue to vet all the work submitted. As a result, editors began, as a desperate last resort, calling on individuals with expertise in the specific and related fields of which the paper under review was a part. In this way, the peer review system as we know it today was born. 26
The modern review process generally is not a replication process in which reviewers selected by the editors of a scholarly publication vet the submitted article to verify the accuracy of its methods or conclusions. 27 Although a reviewer may request the underlying data, there is no expectation that when a reviewer recommends an article for publication he or she is vouching for the accuracy of the results it reports. Indeed, the anonymity of peer review frees the referee from any concern that his or her reputation will suffer by failing to detect flawed research. 28

For example, Science recently published the results of a study designed to assess the ease with which flawed research could survive the peer review process. The author littered a pseudo-scientific paper with tell-tale signs of a poor research design and inconsistent reports of experimental findings and submitted this fictitious article for publication to 304 open-access Internet journals under assumed names. Of the 255 journals that rendered decisions, 157 accepted the paper, most with no discernible sign of having actually carried out peer review. Included among them were journals published by several prestigious publishers that are well-respected in the scientific community. 29
Professor Michael Eisen, a professor of molecular and cell biology at the University of California-Berkeley, cautions against viewing these results as representative only of online publishing:

But the real story is that a fair number of journals who actually carried out peer review still accepted the paper, and the lesson people should take home from this story [is] not that open access is bad, but that peer review is a joke. If a nakedly bogus paper is able to get through journals that actually peer reviewed it, think about how many legitimate, but deeply flawed, papers must also get through. Any scientist can quickly point to dozens of papers (including, and perhaps especially, in high impact journals) that are deeply, deeply flawed.... This all adds up to showing that peer review simply doesn't work. 30
Apart from flaws in research that are apparent from the manuscript, even the most diligent referee cannot evaluate research findings that present an incomplete picture of the entire research design. 31 For example, imagine a researcher who submits an article reporting an experiment in which otherwise identical resumes, one set bearing traditional first names and the other bearing first names identified with African-Americans, yielded different rejection rates when submitted to an identical group of employers. But suppose that same research revealed that these hypothetical African-Americans were more likely to be rejected when the reviewing HR manager was also African-American. 32 Although the two findings taken together may suggest a very different conclusion than if the first is reported in isolation, the researcher may elect to report only the version that confirms a preferred hypothesis. 33 Because the journal referee is unaware of this potentially important omission, the article may be published and subsequently cited in support of a conclusion that the experimental data, were it fully described, would show to be far more nuanced.
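To make the omission concrete, the following sketch shows how the same audit data can support two different narratives depending on which findings are reported. Every figure and group label is invented for illustration and comes from no actual study.

# Hypothetical resume-audit data; all numbers are invented for illustration.
# Each cell is (applications submitted, applications rejected).
results = {
    "White HR managers": {
        "traditional names": (200, 60),
        "African-American names": (200, 70),
    },
    "Black HR managers": {
        "traditional names": (100, 30),
        "African-American names": (100, 45),
    },
}

def rejection_rate(applications, rejections):
    return rejections / applications

# The aggregate finding a selective report might publish on its own:
for name_type in ("traditional names", "African-American names"):
    apps = sum(results[g][name_type][0] for g in results)
    rejections = sum(results[g][name_type][1] for g in results)
    print(f"Overall rejection rate, {name_type}: {rejection_rate(apps, rejections):.1%}")

# The subgroup finding that complicates the story if disclosed:
for group, cells in results.items():
    gap = (rejection_rate(*cells["African-American names"])
           - rejection_rate(*cells["traditional names"]))
    print(f"{group}: rejection-rate gap = {gap:+.1%}")

Reporting only the first loop's output (30.0% versus 38.3%) conceals that, in this invented data, the gap is three times larger when the reviewing manager is African-American, exactly the kind of nuance a referee cannot detect without the underlying data.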
If we cannot count on journal referees to ensure that research findings are replicable and reliable, then perhaps the collective efforts of the scientific community will result in confirmation via replication. Unfortunately, that is not the case within the social sciences, where replication research is relatively rare. 34 And when social scientists do have an interest in verifying prior research, often published reports contain too little information for replication purposes or the primary data underlying published research is not available for re-analysis. 35

Compounding these problems is what is generally known as publication bias, which refers to publication of a manuscript being conditional on the reported results, in particular, a bias in favor of statistically significant results that reject the null hypothesis. Publication bias may manifest itself at two levels: journal editors may prefer to publish results that confirm a research hypothesis, as opposed to null findings, and authors may expect such a bias in editors, which leads authors not to submit null findings in the first place.
The typical social science publication begins with a theoretical formulation of a problem, from which the author derives one or more testable hypotheses. If the theory has merit, one would expect to find that the data reject the null hypothesis of no relationship in favor of the hypothesis advanced in the paper. Theories that are unsupported in that manner by any evidence typically are much less interesting to editors and readers. Accordingly, academicians who are pressured to publish or perish understand that statistically significant results are essential to publication. As a result, academic journals abound with statistically significant results, and null findings rarely see the light of day, except in rare instances, akin to "man bites dog," when a researcher finds strong evidence against a bedrock theory of the profession. The result is that papers reporting negative findings are less apt to be published, authors are more likely to excise negative findings from the papers they submit, and the published literature misrepresents the distribution of effects and true nature of causal relations. 36
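The mechanics of that distortion are easy to demonstrate. The following simulation is our own illustration, with arbitrary parameters, assuming the numpy and scipy packages are available: every simulated study tests a true null hypothesis, yet the subset that survives a p < 0.05 filter reports only spuriously large effects.

# Simulation of publication bias; all parameters are arbitrary illustrations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.0                 # the null hypothesis is true in every study
n_studies, n_per_group = 1000, 30

published_effects = []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    t_stat, p_value = stats.ttest_ind(treated, control)
    if p_value < 0.05:            # only "significant" results get published
        published_effects.append(treated.mean() - control.mean())

print(f"{len(published_effects)} of {n_studies} null studies were 'publishable'")
print(f"mean absolute published effect: {np.mean(np.abs(published_effects)):.2f} "
      f"(true effect: {true_effect})")

Roughly five percent of the null studies clear the filter by chance, and every one of them reports a nonzero effect; a reader of the "published" subset alone would conclude the effect is real.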
These facts of academic life are acknowledged by the nascent efforts to promote publication on the Internet of negative results, to require authors to share all of the data obtained in the course of a research project with the profession at large, and to report all the statistical results they obtained, not just those they choose to publish. For example, the Web site Retraction Watch 37 urges the creation of a Reproducibility Index, in which scientific journals are tracked and rated regarding the frequency with which the results reported in their pages are reproduced by other researchers. The impetus for these proposals is findings such as the following (highlighted on the Retraction Watch Web site): "In a survey of 4,600 studies from across the sciences, Daniele Fanelli, a social scientist at the University of Edinburgh, UK, found that the proportion of positive results rose by more than 22% between 1990 and 2007.... Theodore Sterling found that 97% of the studies in four major psychology journals had reported statistically significant positive results. When he repeated the analysis in 1995, nothing had changed." 38
This problem is not unique to the social sciences:

Over the past decade, before pursuing a particular line of research, scientists in the haematology and oncology department at the biotechnology firm Amgen in Thousand Oaks, California, tried to confirm published findings related to that work. Fifty-three papers were deemed "landmark" studies.... It was acknowledged from the outset that some of the data might not hold up, because papers were deliberately selected that described something completely new, such as fresh approaches to targeting cancers or alternative clinical uses for existing therapeutics. Nevertheless, scientific findings were confirmed in only 6 (11%) cases. Even knowing the limitations of preclinical research, this was a shocking result. 39
In response, groups of scientists have banded together to make reproducibility more than an ideal for scientific research:

Science Exchange, PLOS ONE, figshare, and Mendeley have launched the Reproducibility Initiative to address this problem.

It's time to start rewarding the people who take the extra time to do the most careful and reproducible work. Current academic incentives place an emphasis on novelty, which comes at the expense of rigor. Studies submitted to the Initiative join a pool of research, which will be selectively replicated as funding becomes available. The Initiative operates on an opt-in basis because we believe that the scientific consensus on the most robust, as opposed to simply the most cited, work is a valuable signal to help identify high quality reproducible findings that can be reliably built upon to advance scientific understanding. 40

Similar replication efforts are occurring within the social sciences. 41
The lesson drawn from these criticisms and initiatives is that peer review provides no guarantee that any published research finding is reliable, or that it would survive a Daubert motion were the research and its underlying data used as a primary report proffered in any particular case. Yet, courts regularly admit research of unproven reliability when it is embedded in the report of a testifying expert.

Gaining access to the data underlying a published study can shed light on important, undisclosed limitations in the reported results. For instance, data underlying two studies in which implicit racial bias was supposedly linked to acts of discrimination against minorities revealed that the results were sensitive to outliers (i.e., the reported results depended on the inclusion of extreme responses). 42 As another example, when concerns were raised about a RAND study on the impact of medical marijuana dispensaries on crime, an examination of the underlying data revealed serious problems that led to retraction of the study. 43 Obtaining data can also lead to the discovery of advertent or inadvertent data manipulation. 44 In general, an unwillingness to make data publicly available or to share data with other researchers should be taken as a sign of possible undisclosed problems or limitations in the data set, absent an explanation for such non-disclosure based on legal restrictions. 45
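The outlier problem is simple to illustrate. In the toy data below (our own construction, not drawn from the studies cited), one extreme respondent produces a strong, "significant" correlation between a bias measure and a discrimination measure that vanishes when that respondent is excluded; numpy and scipy are assumed.

# Toy illustration of outlier sensitivity; the data are fabricated.
import numpy as np
from scipy import stats

bias_score = np.array([0.10, 0.20, 0.15, 0.05, 0.25, 0.30, 0.10, 0.20, 0.15, 1.50])
discrimination = np.array([1, 0, 1, 0, 0, 1, 0, 1, 0, 8])  # final pair is the outlier

r_all, p_all = stats.pearsonr(bias_score, discrimination)
r_trimmed, p_trimmed = stats.pearsonr(bias_score[:-1], discrimination[:-1])

print(f"with outlier:    r = {r_all:.2f}, p = {p_all:.4f}")
print(f"without outlier: r = {r_trimmed:.2f}, p = {p_trimmed:.4f}")

With the outlier included, the correlation approaches 1 and is highly significant; without it, the association is weak and nonsignificant. Only access to the raw responses reveals which result the data actually support.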

Impediments To Scrutinizing Secondary Data

Two important points frame our discussion of discovery of secondary data: (1) although often ignored in practice, a strong norm exists within scientific communities that the data behind research publications should be available for review by others to the greatest extent possible without jeopardizing the safety or privacy interests of those studied; 46 and (2) an expert who relies on secondary data places the reliability of that data in question. 47 Thus, analysis of an objection by a researcher to sharing data with the expert who relied on the data or with a party in litigation seeking to scrutinize that data should proceed from the premise that at least de-identified data ordinarily should be made public absent some strong reason against disclosure. 48 And in the event good reasons are provided for not disclosing secondary data, the expert and party who placed that data in issue, not the opposing party, should bear the consequences of that non-disclosure, either in the form of disallowing reliance on that secondary data or measures aimed at reducing the weight given to opinions relying on non-public data.

Rule 45(c)(3)(B)(ii)

When a testifying expert bases his or her report on an empirical study performed solely for a pending lawsuit, it is customary to obtain the expert's entire file that pertains to the case, including the input file containing the data on which the analysis was performed, the programs that operated on that data, and the output file containing the results of the statistical analysis performed by the expert. 49 Rule 703 authorizes the same searching inquiry regarding secondary data relied on by testifying experts, including inspection of the data and code files underlying the reported results. Unless a party is permitted access to secondary data, crucial limitations and flaws may never be discovered, and the party may not be able to demonstrate why the foundation of the opposing expert's opinion is unreliable.
The chief obstacle to discovery of secondary data is Rule 45(c)(3)(B)(ii) of the Federal Rules of Civil Procedure. That Rule provides in pertinent part: "To protect a person subject to or affected by a subpoena, the issuing court may, on motion, quash or modify the subpoena if it requires: ... disclosing an unretained expert's opinion or information that does not describe specific occurrences in dispute and results from the expert's study that was not requested by a party." Because the literature relied upon by implicit bias experts may be authored by dozens of authors who have not been retained in the particular case, and because the data will rarely be drawn from the parties to the litigation, this discovery rule seemingly precludes the inquiry mandated by Rule 703.

The Advisory Committee Notes to the 1991 amendments to Rule 45 indicate that this provision was intended to provide "appropriate protection for the intellectual property of the non-party witness.... A growing problem has been the use of subpoenas to compel the giving of evidence and information by unretained experts." 50 According to the Advisory Committee, compulsion to give evidence may threaten the intellectual property of experts denied the opportunity to bargain for the value of their services. 51
This rule vests discretion in the district courts to permit discovery of secondary experts, and courts appear to have settled on the following test for its application:

In determining whether a court should exercise its discretion to allow compelled testimony of an unretained expert, this court has examined the following factors: (1) the degree to which the expert is being called because of his knowledge of facts relevant to the case rather than in order to give opinion testimony; (2) the difference between testifying to a previously formed or expressed opinion and forming a new one; (3) the possibility that, for other reasons, the witness is a unique expert; (4) the extent to which the calling party is able to show the unlikelihood that any comparable witness will willingly testify; and (5) the degree to which the witness is able to show that he has been oppressed by having to continually testify. 52
Relatively few reported decisions discuss whether research by secondary experts should be protected from discovery under Rule 45(c)(3)(B)(ii), but courts apply this test to varying effect depending on the specifics of each case. Thus, in one case, the court compelled a secondary expert to testify regarding his current valuation of real estate, which he provided to a third party, because it was relevant to a case in which the property's previous value was contested and not particularly burdensome under the circumstances:

Defendants assert that they seek primarily factual testimony or testimony related to opinions already formed; the witnesses have unique knowledge of the circumstances and assumptions related to the valuations; there is no burden associated with their appearance because Defendants are willing to compensate them to explain their previously formed opinions; and there is no indication in the record that the witnesses will be deprived of their intellectual property without compensation. 53
In another case, in which college athletes sought compensation for their likenesses used in commercial endeavors, the court protected secondary data from disclosure until the researcher had published the data in an academic outlet. In this case, a threshold question was whether the general public recognized the images in particular video games to be those of specific college athletes, and that question had been studied by Dr. Anastasios Kaburakis, a professor at St. Louis University and an attorney admitted to practice law in Greece. Dr. Kaburakis produced two publications based on the data in question but objected to producing the data itself as it was the basis for a manuscript presently under consideration at a journal. Dr. Kaburakis contended that production of the data and draft article were protected from discovery by a scholar's privilege, but the district court noted that a majority of courts have declined to recognize such a privilege and noted that the circumstances of the case differed from those in which the privilege had been recognized. 54 However, the court placed considerable weight on the fact that Dr. Kaburakis had declined participation in the litigation as an expert for either side and that a manuscript relating to the requested data was in the peer review process, stating that "peer review is a critical step in finalizing and adding credibility to a research study. Only after peer review approval is an academic piece considered in press and substantively complete." 55

Professor Kaburakis contended that premature disclosure could reveal analyses and conclusions that will differ from the ultimate published article, thereby undermining the credibility of his research and threatening its publication, which in turn could impact his pursuit of tenure. 56 Taking these factors together, the court ruled that the requested manuscript and data did not have to be produced until the peer review process was completed and ordered that Dr. Kaburakis be paid reasonable compensation for preparing the raw data for production. 57

And in yet another case, the court extended the journalist's privilege to research materials assembled by two academics and protected the materials from disclosure. In particular, Microsoft sought, for use in its defense against the civil antitrust case filed against it by the U.S. government, the notes, recordings, transcripts, and correspondence between two business school professors and employees of Netscape in connection with a book on Netscape and its competition with Microsoft. 58 After a detailed analysis, the appellate court affirmed the district court's ruling quashing the subpoena, on grounds that ordering production would hamstring not only [these researchers'] future research efforts but also those of other similarly situated scholars. 59 The court also stated that it was noteworthy that the respondents [were] strangers to the antitrust litigation; insofar as the record reflects, they have no dog in the fight. "[C]oncern for the unwanted burden thrust upon non-parties is a factor entitled to special weight in evaluating the balance of competing needs." 60 However, the district court did leave open the possibility of revisiting the need for the requested materials if the testimony in the case conflicts with quotations contained in the book. 61
The following tables summarize additional pertinent cases, with the first table summarizing cases in which the secondary expert was required to testify and/or produce documents:

Case: Erdman Co. v. Phoenix Land and Acquisition, No. 4:12MC00050AGF, 2012 U.S. Dist. LEXIS 164741 (D. Mo. Nov. 19, 2012)
Facts: Defendants sought valuation reports prepared on behalf of a subsequent purchaser for purposes of establishing the diminution in market value resulting from substandard work.
Holding: Discovery permitted because court determined that expert evidence was unavailable from another source.

Case: Frazier v. Stryker Corp., No. MC-10-0059-PHX-FJM, 2010 U.S. Dist. LEXIS 91550 (D. Ariz. Aug. 12, 2010)
Facts: Defendant, a manufacturer of infusion pain pumps, moved to compel the expert's deposition because the expert co-authored a study in a medical journal that suggested a connection between the use of infusion pumps and the development of chondrolysis. The study was relied upon by numerous plaintiffs to support their claims in product liability suits against Defendant. The subpoena sought information relating to the peer review process, methods, reliability of methods, identification of participants, and participant selection processes.
Holding: The court enforced the expert subpoena, noting that the information sought was unavailable from another source. Defendant agreed to compensate the expert for his time and to schedule the deposition at his convenience.

Case: Walker v. Blitz USA, Inc., No. 5:08MC15, 2008 U.S. Dist. LEXIS 100592 (N.D. W.Va. Dec. 12, 2008)
Facts: The plaintiff subpoenaed an engineer because of his unique knowledge about the technology involved in the underlying lawsuit.
Holding: The court held that plaintiff had substantial need of the evidence, which was not available from any other expert, and it was appropriate to permit the deposition, but under specified terms, citing Rule 45(c)(3)(C).

Case: Wright v. Jeep Corp., 547 F. Supp. 871 (E.D. Mich. 1982)
Facts: A university professor published articles regarding vehicle crash-worthiness. Defendant anticipated the research would be relied upon by plaintiff and subpoenaed documents and the deposition of the professor.
Holding: Research is directly relevant to disputed issues and discovery is permitted. Defendant ordered to pay a reasonable fee to the expert.

In the following cases, the court protected secondary experts from discovery:

Case: Friedland v. TIC-The Indus. Co., No. 04-cv-01263-PSF-MEH, 2006 U.S. Dist. LEXIS 66613 (D. Colo. Sept. 5, 2006)
Facts: The defendant sought an expert's data and analysis in a prior lawsuit. The defendant subpoenaed the CPA who analyzed data for the plaintiff in previous lawsuits.
Holding: The court held that the CPA's data was protected by FRCP 45(c)(3)(B)(ii), specifically because some of the information requested was the intellectual property of the CPA and the court did not find that the defendant presented evidence of undue hardship sufficient to compel production of the information.

Case: Glaxosmithkline Consumer Healthcare, L.P. v. Merix Pharm. Corp., No. 2:05-mc-436-TS-DN, 2007 U.S. Dist. LEXIS 24969 (D. Utah Apr. 2, 2007)
Facts: The expert had conducted studies on cold sore remedies, the medications manufactured by the plaintiff and defendant. The expert had written in published letters and other publications that the plaintiff's drug was ineffective. The defendant moved to depose the expert about statements he made concerning the lack of efficacy of the plaintiff's drug, and about clinical and lab studies concerning the drug's efficacy.
Holding: The court denied the defendant's motion to compel, finding the defendant did not want to depose him as a fact witness, but as an expert on the cold sore medication industry.

Case: Intervet, Inc. v. Merial, Ltd., No. 8:07CV194, 2007 U.S. Dist. LEXIS 44970 (D. Neb. Jun. 20, 2007)
Facts: The plaintiff alleged that the defendant's vaccine infringed on one of its patents. The defendant subpoenaed an expert who consulted with the plaintiff in another lawsuit. The defendant argued that this consulting expert's opinion in a prior lawsuit differed from the position of the plaintiff's current expert.
Holding: The court quashed the subpoena because the rule was meant to protect the intellectual property of non-party witnesses, and also the court reasoned that it would be unnecessarily time-consuming and costly for the expert to comply with the subpoena.

Case: United States ex rel. Hill v. Univ. of Med. & Dentistry, No. 03-4837, 2008 U.S. Dist. LEXIS 74330 (D.N.J. Sept. 26, 2008)
Facts: The plaintiff alleged that challenged research was false because of a deeply flawed methodology. The plaintiff sought to depose a secondary expert for purposes of establishing an appropriate benchmark of sound methodology to which the challenged research could be compared. The expert moved to quash the subpoena.
Holding: The court quashed the subpoena, finding that because the expert's research was published, plaintiff did not have a substantial need for the testimony.

Case: In re Bextra & Celebrex Mktg. Sales Practices & Prod. Liab. Litig., 249 F.R.D. 8 (D. Mass. 2008)
Facts: Pfizer subpoenaed the New England Journal of Medicine to obtain its correspondence with the authors of articles published regarding two of Pfizer's products.
Holding: The court held that the documents were protected by FRCP 45, reasoning that peer reviewers' comments on the articles could not be compelled and that the journal was entitled to a level of protection commensurate with that afforded to journalists.

Case: Dow Chem. Co. v. Allen, 672 F.2d 1262 (7th Cir. 1982)
Facts: Plaintiff subpoenaed all notes, reports, and raw data regarding research in progress by university researchers.
Holding: Court refused to enforce the subpoena because premature disclosure of research in progress would undermine the peer review process and jeopardize the incomplete research.

Although not entirely consistent, these cases permit a few generalizations. Courts are most protective when a party seeks to depose an expert with respect to opinions regarding matters that pertain only generally to a disputed issue in the case. For example, in United States ex rel. Hill, the plaintiff sought to depose a scientist about general principles of research methodology, as opposed to the specific methodology at issue in the case. Under these circumstances, courts are apt to protect the secondary expert's ownership interest in his or her opinion and guard against attempts to appropriate these opinions without providing fair compensation to that expert. However, if the secondary expert's opinion was proffered in a previous case, suggesting that the expert already was appropriately compensated, courts generally are less protective.
The less abstract the opinion and the more directly it bears on a disputed issue, the more likely courts are to compel the secondary expert to testify or provide the requested data. For example, a secondary expert who has performed extensive research on a particular automobile engine is more likely to be compelled to testify in a case disputing the design of that engine than in a dispute regarding the properties of automobile engines generally.

Courts tend to distinguish between the opinion of the secondary expert and the facts on which that expert's opinions are based, and courts are more protective of secondary experts who are subpoenaed to testify than those who are asked merely to produce data. Courts generally find that if data have been relied upon in previous publications, then the only protection the expert requires is compensation for the time involved in producing the requested data.

Proprietary Issues

Research takes time and money. Scholars therefore may object to producing the data on which they base their research on the grounds that they have yet to exhaust the fruits of the data they compiled (as Dr. Kaburakis did in the case discussed above). If their data were disclosed, they contend, they could lose what essentially are their monopoly rights in that data. For example, in Holloway v. Best Buy, Inc., the University of California resisted a subpoena served on one of its researchers by arguing that disclosure of the data she compiled would appropriate her intellectual property and preempt her opportunity to publish based upon the data she collected. 62

However, courts routinely protect trade secrets and intellectual property by means of protective orders. Thus, a protective order would ensure that the secondary expert's data is disseminated solely to the parties to the litigation, and only scholars retained as experts by each party would have access. The retained experts would be subject to the court's contempt powers and would be precluded from commenting or publishing articles that reference the data in question. With such protection in place, there is little likelihood that bootleg research based upon that data would preempt the research of the scholar who initially created that database.
The university's objection in the Holloway case also raises an important issue about the public nature of scholarship. First, much of the research relied on by implicit bias experts has been conducted with the aid of federal grant funds. Accordingly, the notion that a private researcher has an exclusive property interest in data compiled with public funds seems questionable. Apart from that is the larger question of whether a researcher who, for the sake of argument, financed her research with private funds retains a proprietary interest once she publishes an article based upon that data. Once the cat is out of the bag, the interest asserted by this scholar is the right, in effect, to generate a series of additional publications without sharing the data until their value to her has been exhausted. At issue is whether there is a right recognized in academia to monopolize data and personally appropriate its benefits that survives the first publication based upon that same data.

This issue is important to litigation because if scholarly journals refuse to recognize this monopoly interest, then the objection that subpoenaing this data infringes upon a scholar's proprietary interest has no merit. That is, if the scholarly journals require disclosure of the data underlying published research, then publication in effect waives the claimed property right. This is distinct from whether journal referees themselves vet statistical findings reported in the papers they review. The question at hand is whether data generally is made available to the profession at large for this purpose once an article is published.
In general, academic journals and ethics codes within research fields provide that the quid pro quo for publication is sharing the data upon which the published research is based. For instance, the Journal of Social Psychology prescribes as follows:

Consistent with the recommendations of a growing number of scholarly societies, funding agencies, and publisher associations, the editors of the Journal of Social Psychology believe that raw research data should be made freely available, unless there are compelling reasons why it cannot be shared. We recommend that at some point between the time an article is accepted and the time of its publication, authors archive the data from the studies they present in their paper in an open data repository, such as the system developed by the Social Psychology Network.

Likewise, the Journal of Experimental Psychology provides:

Authors may be asked to provide the raw data in connection with a paper for editorial review, and should be prepared to provide public access to such data (consistent with the ALPSP-STM Statement on Data and Databases), if practicable, and should in any event be prepared to retain such data for a reasonable time after publication. This policy is intended to be the equivalent of APA ethics ruling 8.14 Sharing Research Data for Verification.
The American Sociological Review, the journal of the American Sociological Association, has similar requirements:

All persons who publish in ASA journals are required to abide by ASA guidelines and ethics codes regarding plagiarism and other ethical issues. This requirement includes adhering to ASA's stated policy on data-sharing: "Sociologists make their data available after completion of the project or its major publications, except where proprietary agreements with employers, contractors, or clients preclude such accessibility or when it is impossible to share data and protect the confidentiality of the data or the anonymity of research participants (e.g., raw field notes or detailed information from ethnographic interviews)" (ASA Code of Ethics, 1997).

The American Economic Review, the publication of the American Economic Association, requires the following:

It is the policy of the American Economic Review to publish papers only if the data used in the analysis are clearly and precisely documented and are readily available to any researcher for purposes of replication. Authors of accepted papers that contain empirical work, simulations, or experimental work must provide to the Review, prior to publication, the data, programs, and other details of the computations sufficient to permit replication. These will be posted on the AER Web site. The Editor should be notified at the time of submission if the data used in a paper are proprietary or if, for some other reason, the requirements above cannot be met.

Moreover, according to the American Economic Review, the burden is on the submitting author to explain why data are proprietary and how her results may be replicated:

If a request for an exemption based on proprietary data is made, authors should inform the editors if the data can be accessed or obtained in some other way by independent researchers for purposes of replication.
As a general rule of scientific publishing, the right to monopolize data is waived on publication of the first article based on that data. Therefore, the objection that producing data in litigation would forfeit the researcher's valuable right has no merit, at least where the data has served as the basis for a scientific publication. 63

Confidentiality Issues

Most data in the social sciences are collected from individual respondents or subjects, and often those participants are anonymous or assured confidentiality. For example, the Census forms completed by households remain confidential for a designated span of years. EEO-1 data that employers provide to the EEOC is confidential, as is information reported on tax returns filed with the IRS. Yet, research based on confidential data is published routinely, and the data underlying these studies can be made available without compromising their confidential nature. 64

Most often confidentiality is maintained by redacting identifying information or else by aggregating data so that the identity of any one respondent is merged with all other members of the group. For example, a telephone interview with survey participants can yield responses that can be shared with third parties without revealing the identity of the person who gave a particular response. Under these circumstances, the responses of the person interviewed may be recorded or noted by the researcher who speaks to the subject, and these can be purged as well of any identifying information.

Courts routinely inspect confidential documents in camera and assess whether the identities of the survey respondents can be redacted. 65 Occasionally, redaction may not suffice, because a respondent may be so singularly identifiable that just the slightest information may reveal the respondent's identity. For example, an employer in Bentonville, Arkansas, with hundreds of thousands of employees worldwide, is easily recognized as Wal-Mart, no matter what else is redacted. Therefore, courts may order aggregations of data to be produced, such as the average income of every 10 respondents, to protect the privacy of any one respondent. However, this determination is best made case by case, and blanket representations that promises of confidentiality preclude production of data should be deemed insufficient. Moreover, any argument that de-identification is not possible should be viewed in light of the court's power to impose a protective order and require that documents referencing the data be filed under seal.
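For concreteness, here is a minimal sketch of the two safeguards just described, redaction of identifying fields and aggregation into groups of 10; the field names and values are hypothetical.

# Sketch of de-identification: redaction and aggregation (hypothetical data).
import statistics

respondents = [
    {"name": f"Respondent {i}", "employer": "Acme Co.", "income": 40000 + 500 * i}
    for i in range(30)
]

# 1. Redaction: strip identifying fields before producing the data.
redacted = [{"income": r["income"]} for r in respondents]

# 2. Aggregation: produce only the average income of every 10 respondents,
#    so no figure can be traced back to any one person.
group_means = [
    statistics.mean(r["income"] for r in respondents[i:i + 10])
    for i in range(0, len(respondents), 10)
]
print(group_means)  # e.g., [42250, 47250, 52250]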

Timing

Even when data are obtained without objection, re-analyzing published research is time-consuming. What's more, the relevant literature will often consist of more than a single study, making it impossible both to replicate these studies and incorporate any criticisms into a report that meets a court's scheduling order. Yet, it may not be until the opposing expert report is in hand that one knows the research literature on which that expert is relying. The question arises, therefore, how to obtain data from a secondary expert in time to analyze it in a pending case.

One possibility is to anticipate the opposing expert's testimony and incorporate a critique into one's own expert's report. For example, when testimony regarding implicit bias is reasonably certain, the defense expert may offer an affirmative opinion that pervasive discrimination is unlikely given the employer's anti-discrimination policies, notwithstanding published studies to the contrary. That is, the defense expert may deem the published literature on implicit bias to be relevant not because it is cited by the plaintiff's expert but because it has such a hold on the popular press, the EEOC, 66 and the public at large that the expert should be given the chance to demonstrate why the prevailing view is incorrect. When a prevailing view has a strong hold on the beliefs of lay people, it should be permissible to present criticisms of that view in order to prove the expert's alternative theories correct.
This strategy enables a party to move early in the case to obtain data from secondary experts and make that information available to its own experts, who then will have sufficient time to dissect the data and identify its shortcomings. By subpoenaing data underlying secondary studies rather than testimony, at least initially, a party enhances the likelihood that a court will enforce the subpoena and begins to lay the groundwork for a deposition, in the event the data do not speak for themselves.

In many cases involving implicit bias, however, it may be necessary to target only a few key studies for secondary scrutiny because so few implicit bias studies have been conducted in real-world settings that even approximate the kinds of potentially discriminatory behaviors at issue in the case. For instance, presently no published studies have been conducted in real workplaces with measures of implicit racial bias of managers and observations of their treatment of Black and White employees. Instead, implicit bias experts typically rely on a few laboratory simulations of work settings, along with studies that do not even seek to approximate a work setting. Thus, the time demands may be reduced by focusing scrutiny on the few studies that are likely to figure most prominently in an implicit bias expert's report.

When Underlying Data Are


Unavailable, Is Secondary Evidence
Admissible Nonetheless?
A court that is unwilling to compel the production of
secondary evidence from nontestifying experts must next
determine whether to admit expert testimony that relies
on evidence that is unavailable for cross-examination. That
is, may the court admit the passages of the expert report,
or permit the expert to testify, based upon the research
of others, which is beyond the reach of the party opposing
that testimony?
In our view, the answer must be no, with few exceptions. First, the testifying expert may establish that
experts in the particular field routinely rely on the cited
publications, but that does not establish that this reliance
is reasonable. Courts on several occasions have refused
to admit expert testimony after carefully considering the
soundness of the studies on which the expert relies or after
determining that basic research cannot fit the case at hand.
For example, in Kelley v. American Heyer-Schulte Corp., 67
the district court carefully considered detailed aspects
of the secondary studies underlying the testifying expert's
opinion. The court examined the methodology employed
in those studies and the statistical tests that were used, and
it faulted the testifying expert, who had not seen the data
collected by the underlying study and therefore could not
opine whether a one-tailed or two-tailed statistical test was appropriate. 68
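The stakes of that distinction are easy to demonstrate. The short sketch below, using Python's scipy library and numbers invented purely for illustration, shows that the same data yield a one-tailed p-value half the size of the two-tailed one, so the choice of test can determine whether a result appears statistically significant.

    from scipy import stats

    # Outcome scores for two hypothetical groups, invented for illustration.
    group_a = [4.1, 3.8, 4.5, 4.0, 4.3, 3.9, 4.2, 4.4]
    group_b = [3.7, 3.9, 3.5, 4.0, 3.6, 3.8, 3.4, 3.9]

    # Two-tailed test: do the group means differ in either direction?
    two_tailed = stats.ttest_ind(group_a, group_b, alternative="two-sided")

    # One-tailed test: does group_a's mean exceed group_b's? With a
    # positive t statistic, this halves the two-tailed p-value.
    one_tailed = stats.ttest_ind(group_a, group_b, alternative="greater")

    print(two_tailed.pvalue, one_tailed.pvalue)

Which test is appropriate depends on the hypothesis the study was designed to test, which is why an expert who has never seen the underlying data is poorly positioned to say.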
Similarly, in In re Agent Orange Products Liability Litigation, Judge Weinstein rejected expert testimony based
on epidemiological studies that were not in evidence: "If
the underlying data is so lacking in probative force and
reliability that no reasonable expert could base an opinion on it, an opinion which rests entirely upon it must
be excluded. The jury will not be permitted to be misled
by the glitter of an expert's accomplishments outside the
courtroom." 69 Accordingly, Judge Weinstein reviewed the
studies regarding causation relied upon by the testifying
expert and concluded: "The court has reviewed these and
other like studies dealing with animal laboratory studies,
industrial accidents, and other products. They suggest
that dioxin may cause diseases in animals, including man.
They are not correlated to those exposed to Agent Orange
in Vietnam. At most, they collectively have the probative
force of a scintilla of evidence." 70 These cases illustrate
the extent to which courts will scrutinize secondary
evidence, when the record permits, to determine whether,
notwithstanding the assurances of the testifying expert, the
secondary studies provide a reasonable foundation for the
expert's testimony.
Because the secondary data were available to these courts,
they did not reach the question of whether an expert may
testify if secondary data are inaccessible. A line of cases that
deals with that issue concerns the destructive testing of
evidence, which arises in products liability cases when the
defects alleged require tests that irreversibly change the
nature of the product. Because destroying that evidence
makes it unavailable to an opposing party and precludes
its use at trial, courts proceed cautiously before permitting
such testing. Courts apply the following four-part test
with some frequency:
The court should determine:
1. Whether the proposed testing is reasonable, necessary,
and relevant to proving the movant's case;
2. Whether the non-movant's ability to present evidence
at trial will be hindered, or whether the non-movant
will be prejudiced in some other way;
3. Whether there are any less prejudicial alternative
methods of obtaining the evidence sought; and
4. Whether there are adequate safeguards to minimize
the prejudice to the non-movant, particularly the
non-movant's ability to present evidence at trial. 71
In determining whether destroying the evidence is
reasonable, necessary, and relevant, courts recognize that
a party may not use destructive testing merely to bolster
an expert opinion or to gain other potentially intriguing,
albeit irrelevant, information. The evidence sought must
be integral to proving the movant's case and do more than
strengthen an already established claim or defense. 72 Applying this factor to the studies cited in the expert reports
regarding implicit bias, it is apparent that social science
studies are cited precisely to bolster the expert's opinion,
the implicit message being that the testifying expert's views
are shared by numerous others.
The prejudice to the party opposing that evidence
may be substantial. Because the expert's report consists
principally of conclusions ostensibly supported by the
research of others, if that research is beyond the reach of
the opposing party, the weaknesses in that research will go
largely unchallenged. Although the published articles may
provide considerable detail regarding the methodology
and the specific findings of those researchers, nothing is
known about the actual data obtained from the reported
experiments or whether alternative analyses of the same
data may support different conclusions.
A less prejudicial means of eliciting testimony from
implicit bias experts is to limit their testimony to studies
by researchers who make their data publicly available. This
would place the burden on these experts, as the proponents of the evidence, to first ascertain whether secondary
experts are willing and able to produce their data before
premising their opinions on that research, and it would encourage data-sharing by those whose research is at the core of
their testimony. As a result, testifying experts would be led
to base their testimony, to the greatest extent possible, on
data that can be scrutinized by opposing experts and the
courts. In addition, it would incentivize testifying experts
to gather and analyze these data themselves before testifying, to ensure that their reliance on these studies is not
misplaced. This would provide another layer of scrutiny
to this evidence. Moreover, these testifying experts would
become, in effect, data repositories, able to produce to
opposing counsel the databases that support the research
on which they rely.
This procedure need not prejudice the proponent of
the evidence because, presumably, the case for implicit bias
does not rest entirely on secret data. It protects the party
opposing this testimony because that party will have access
to all the data underlying the opinion of the opposing
expert. The result would be a level playing field in which
the relevant evidence would be available equally to both
sides. Thus, by taking a page from the destructive testing
cases, courts can fairly resolve the problems presented by
implicit bias testimony or any other type of expert testimony
based on secondary data. 73

Conclusion
Discovery from secondary experts looms so large in challenging experts on implicit bias because the standards for
the admissibility of scientific evidence in federal court
are more stringent than the requirements for publication
in academic journals. A testifying expert will survive a
Daubert challenge under Rule 702 only if their data and
methods pass the rigorous scrutiny of the courts, with the aid
of experts retained by the parties. Academic journals, in
contrast, may publish studies based upon data that face
no scrutiny whatsoever.
Consequently, courts must be cautious about permitting
testifying experts to launder the research of others; that
is, allowing experts to base their testimony on research
so lacking in transparency that it would not be admitted if it had been collected for the case at hand. This
article has emphasized the importance of subjecting this
secondary data to the same scrutiny that is usually given
to case-specific data presented by testifying experts, challenging the foundational studies on which social science
testimony is based, and excluding primary evidence
that relies upon inaccessible secondary research. A rule
permitting testifying experts to rely only on data that
can be shared with the opposing party would ensure
these data are made available to the maximum extent and
would subject the foundation of an expert's opinion to the
appropriate scrutiny.

ENDNOTES
1. As we use the term here, implicit bias refers to evaluative or trait associations that may exist with respect to various social groups, such as Whites and Blacks and males and females. The term thus encompasses both prejudicial attitudes toward and stereotypes about groups. Experts in many cases now contend that prejudice and stereotyping occur at subconscious, or implicit, levels of processing, often automatically, without awareness and beyond conscious control. Bias is not synonymous with discrimination; discrimination refers to acts directed at others on the basis of group membership, and it may be motivated by group bias or some other factor. We do not address here the full range of social scientific concerns presented by expert opinions on implicit bias, including the lack of reliability of measures of implicit bias as predictors of discriminatory behavior. See Frederick L. Oswald, Gregory Mitchell, Hart Blanton, James Jaccard & Philip E. Tetlock, Predicting Ethnic and Racial Discrimination: A Meta-analysis of IAT Criterion Studies, 105 J. PERSONALITY & SOCIAL PSYCHOL. 171 (2013). We focus instead on the social science studies and data that form the foundation for opinions about implicit bias and whether that data should be available for closer review in the context of a case. For discussion of additional problems with opinions offered by social science experts, see, for example, Allan G. King, Jeffrey S. Klein & Gregory Mitchell, Effective Use and Presentation of Social Science Evidence, 37 EMP. RELATIONS L.J. 3 (2012); Gregory Mitchell, Good Causes and Bad Science, 63 VAND. L. REV. EN BANC 133 (2010).
2. See Allan G. King, Jeffrey S. Klein & Gregory Mitchell, Effective Use and Presentation of Social Science Evidence, 37 EMP. RELATIONS L.J. 3 (2012).
3. Opinions regarding implicit bias are just one example of the topics on which many experts offer opinions that are based on extrapolations from general social science research as opposed to case-specific research (such as when a labor economist studies employment data or an industrial-organizational psychologist validates the selection criteria used by a company). Our analysis extends to any expert opinions founded on research conducted outside the context of the case.
4. Experts who rely on general social science research for opinions that are offered to provide helpful background information are providing what Monahan and Walker labeled "social framework evidence," and experts who analyze case-specific data to offer opinions about what may have transpired in the case before the court are providing what Monahan and Walker labeled "social fact evidence." See generally Gregory Mitchell, Laurens Walker & John Monahan, Beyond Context: Social Facts as Case-Specific Evidence, 60 EMORY L.J. 1109 (2011); John Monahan, Laurens Walker & Gregory Mitchell, The Limits of Social Framework Evidence, 8 LAW, PROBABILITY & RISK 307 (2009); John Monahan, Laurens Walker & Gregory Mitchell, Contextual Evidence of Gender Discrimination: The Ascendance of Social Frameworks, 94 VA. L. REV. 1705 (2008).
5. To distinguish the social scientists who conducted the research from retained consulting experts, who are subject to the protections of Fed. R. Civ. P. 26(b)(4)(D), we may refer to the former as "secondary experts."
6. A separate objection may arise under Federal Rule of Evidence 703 if an expert unreasonably relies on social science data for her opinions. For instance, if an expert makes a case-specific claim about causation based on general social science research that has no connection to the case at hand, that would be an unreasonable use of the research, given that general social science research cannot support specific causation claims in particular cases. See David L. Faigman, John Monahan & Christopher Slobogin, Group to Individual (G2i) Inference in Scientific Expert Testimony, U. CHI. L. REV. (forthcoming).
7. External validity refers to whether results from one setting with one sample will generalize to another setting or sample. For a discussion of the limited external validity of much psychological research, see Gregory Mitchell, Revisiting Truth or Triviality: The External Validity of Research in the Psychological Laboratory, 7 PERSPECTIVES ON PSYCHOL. SCI. 109 (2012).
8. See Hart Blanton, James Jaccard, Jonathan Klick, Barbara A. Mellers, Gregory Mitchell & Philip E. Tetlock, Strong Claims & Weak Evidence: Reassessing the Predictive Validity of the IAT, 94 J. APPLIED PSYCHOL. 567 (2009); Gregory Mitchell, What Is Wrong With Social Psychology?, 26 DIALOGUE 12 (Fall 2012).
9. Puffer v. Allstate Ins. Co., 255 F.R.D. 450, 468 (N.D. Ill. 2009), aff'd, 2012 U.S. App. LEXIS 6213 (7th Cir. Mar. 27, 2012).
10. Dr. R. F. Martell, Expert Rebuttal Report Responsive to Report of Wal-Mart Expert Nancy Combs, EEOC v. Wal-Mart Stores, Inc., No. 6:01-cv-00339-KKC (E.D. Ky. Aug. 15, 2008) (citations omitted).
11. Id. at 26 n.30.
12. Id. at 26 & n.30.
13. The summaries of social science research offered by Dr. Martell and other experts often do not accurately portray the body of research. For instance, meta-analytic studies show that subjective personnel criteria are not a good predictor of bias. We do not agree with or approve of any of the opinions we offer as examples.
14. Expert Report of William T. Bielby, Ph.D., Betty Dukes et al. v. Wal-Mart Stores, Inc., February 3, 2003, available at http://www2.law.columbia.edu/fagan/courses/law_socialscience/documents/Spring_2006/Class%2017-Gender%20Discrimination/BD_v_Walmart.pdf.
15. The Daubert factors are (1) whether the theory or technique has been tested, (2) whether the theory or technique has been subjected to peer review and publication, (3) whether the technique has a high known or potential rate of error, and (4) whether the theory or technique is generally accepted within a relevant scientific community. Daubert, 509 U.S. at 593-594. The Daubert Court pointed out that Rule 703 applies solely to expert opinions based on otherwise inadmissible hearsay. Allison v. McGhan Med. Corp., 184 F.3d 1300, 1313 (11th Cir. 1999); see also Fed. R. Evid. 702 cmt. ("the question [under 703] whether the expert is relying on a sufficient basis of information--whether admissible information or not--is governed by the requirements of Rule 702.") (original emphasis).
16. In re Paoli Railroad Yard PCB Litigation, 35 F.3d 717, 748 (3d Cir. 1994) ("Paoli II"), cert. denied sub nom., 115 S. Ct. 1253 (1995).
17. Petersen v. DaimlerChrysler Corp., 2011 U.S. Dist. LEXIS 67305, *16 (D. Utah 2011) (citations omitted).
18. 1997 U.S. Dist. LEXIS 6441 (E.D. Pa. May 5, 1997).
19. Daubert, 509 U.S. at 593 (citations omitted).
20. Expert Report of Richard F. Martell, Ph.D., Conroy v. Vilsack, No. 06-cv-00867, 2006 WL 6355859 (D. Utah June 1, 2006).
21. Expert Report of Barbara F. Reskin, Ph.D., in Puffer v. Allstate, Rev. March 5, 2008, p. 3.
22. These studies are hearsay and, to be admissible in their own right, would have to satisfy Federal Rule of Evidence 803(18) and be admitted as statements in learned treatises, periodicals, or pamphlets. However, a predicate for admissibility under that exception is that the publication (1) must be relied upon by the expert on direct examination, and (2) must be established as reliable by the expert's testimony. But the reliability of these studies is precisely the issue that arises under Rule 703, so this exception to the hearsay rule does not permit this evidence to be admitted through the backdoor of Rule 803.
23. See the following quote from Karl Popper, Science as Falsification, available at http://www.stephenjaygould.org/ctrl/popper_falsification.html:
These considerations led me in the winter of 1919-20 to conclusions which I may now reformulate as follows.
It is easy to obtain confirmations, or verifications, for nearly every theory--if we look for confirmations.
Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory--an event which would have refuted the theory.
Every good scientific theory is a prohibition: it forbids certain things to happen. The more a theory forbids, the better it is.
A theory which is not refutable by any conceivable event is non-scientific. Irrefutability is not a virtue of a theory (as people often think) but a vice.
Every genuine test of a theory is an attempt to falsify it, or to refute it. Testability is falsifiability, but there are degrees of testability: some theories are more testable, more exposed to refutation, than others; they take, as it were, greater risks.
Confirming evidence should not count except when it is the result of a genuine test of the theory; and this means that it can be presented as a serious but unsuccessful attempt to falsify the theory. (I now speak in such cases of "corroborating evidence.")
Some genuinely testable theories, when found to be false, are still upheld by their admirers--for example by introducing ad hoc some auxiliary assumption, or by reinterpreting the theory ad hoc in such a way that it escapes refutation. Such a procedure is always possible, but it rescues the theory from refutation only at the price of destroying, or at least lowering, its scientific status. (I later described such a rescuing operation as a "conventionalist twist" or a "conventionalist stratagem.")
One can sum up all this by saying that the criterion of the scientific status of a theory is its falsifiability, or refutability, or testability.
24. We recognize, as the Fifth Circuit has observed, that an expert need not affirmatively demonstrate the reliability of every research finding cited by the expert:
Daubert should not be interpreted so as to permit an advocate to put his or her opponent to the burden of establishing hard scientific reliability-validity upon demand. For example, it would be ludicrous to require the proponent of a doctor's testimony to introduce evidence that every test the doctor conducted or reasonably relied upon under Rule 703 is scientifically reliable-valid. . . . [T]hat the scientific evidence has received general acceptance should be sufficient alone to support admissibility of scientific evidence unless the opponent presents evidence creating a genuine issue as to the reliability-validity of the scientific evidence . . . .
Moore v. Ashland Chem., 126 F.3d 679, 703 (5th Cir. 1997), cert. denied, 526 U.S. 1064 (1999), quoting 2 Graham, Handbook of Federal Evidence 702.5, at 79 (4th ed. 1996). But parties should be allowed access to the secondary data relied on by the testifying expert to scrutinize and challenge the reliability of that data. Where secondary data is requested, the retort that the study in question survived peer review should be an insufficient response.
25. Whether there is any such thing as a paper so bad that it cannot be published in any peer-reviewed journal is debatable. Nevertheless, scientists understand that peer review per se provides only a minimal assurance of quality, and that the public conception of peer review as a stamp of authentication is far from the truth. Charles G. Jennings, Quality and Value: The True Purpose of Peer Review?, Nature Communications, http://blogs.nature.com/peer-to-peer/2006/06/quality_and_value_the_true_pur.html.
26. N. McCormack, Peer Review and Legal Publishing: What Law Librarians Need to Know about Open, Single-Blind, and Double-Blind Reviewing, 101 LAW LIBR. J. 59, 60-61 (2009).
27. "As well as not spotting things they ought to spot, there is a lot that peer reviewers do not even try to check. They do not typically re-analyse the data presented from scratch, contenting themselves with a sense that the authors' analysis is properly conceived." Trouble at the lab: Scientists like to think of science as self-correcting. To an alarming degree, it is not, The Economist (October 13, 2013), available at http://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble. Our point is not that peer review has no benefits but rather that its scope may be less far-reaching and its benefits less clear than courts and lawyers may assume. For a fuller discussion of peer review and disclosure benefits, see Gregory Mitchell, Empirical Legal Scholarship as Scientific Dialogue, 83 N.C. L. REV. 167 (2004).
28. For a discussion of the ways in which researchers can manipulate data and results to make findings fit preferred theories or to seem more compelling than they really are, see Arthur G. Bedeian, Shannon G. Taylor & Alan N. Miller, Management Science on the Credibility Bubble: Cardinal Sins and Various Misdemeanors, 9 ACAD. MGMT. LEARNING & EDUC. 715 (2010); Daniele Fanelli, How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-analysis of Survey Data, 4 PLOS ONE e5738 (2009); Joseph P. Simmons, Leif D. Nelson & Uri Simonsohn, False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant, 22 PSYCHOL. SCI. 1359 (2011).
29. J. Bohannon, Who's Afraid of Peer Review?, 342 SCIENCE 60-65 (October 2013).
30. Michael Eisen, I confess, I wrote the Arsenic DNA paper to expose flaws in peer-review at subscription based journals, it is not junk: a blog about genomes, DNA, evolution, open science, baseball and other important things, October 3, 2013, available at http://www.michaeleisen.org/blog/?p=1439.
31. And peer review may not catch even basic flaws apparent in the manuscript, such as errors in presenting statistical results. See Marjan Bakker & Jelte M. Wicherts, The (Mis)Reporting of Statistical Results in Psychology Journals, 43 BEHAVIOR RESEARCH METHODS 666 (2011).
32. An analogous problem would arise if the researcher had data on the race of managers but chose not to include this variable in reported regression results. See generally Torsten Biemann, What If We Were Texas Sharpshooters? Predictor Reporting Bias in Regression Analysis, 16 ORG'L RESEARCH METHODS 335 (2013).
33. For presentation of an example of selective reporting of data occurring in implicit bias research, see Hart Blanton & Gregory Mitchell, Reassessing the Predictive Validity of the IAT: II. Reanalysis of Heider & Skowronski (2007), 13 N. AM. J. PSYCHOL. 99 (2011). This paper also documents the discovery of the intentional alteration of data to support falsely the hypothesis that implicit bias predicted behavior toward minorities.
34. See Matthew C. Makel, Jonathan A. Plucker & Boyd Hegarty, Replications in Psychology Research: How Often Do They Really Occur?, 7 PERSPECTIVES PSYCHOL. SCI. 537, 540 (2012) (reporting a replication rate under two percent and noting that this rate is not dissimilar to replication rates reported in other fields).

35. See, e.g., Joshua Carp, The Secret Lives of Experiments: Methods Reporting in the fMRI Literature, 63 NEUROIMAGE 289 (2012); Deborah Kashy, M. Brent Donnellan, Robert A. Ackerman & Daniel W. Russell, Reporting and Interpreting Research in PSPB: Practices, Principles, and Pragmatics, 35 PERSONALITY & SOCIAL PSYCHOL. BULL. 1131 (2009); Jelte M. Wicherts, Denny Borsboom, Judith Kats & Dylan Molenaar, The Poor Availability of Psychological Research Data for Reanalysis, 61 AM. PSYCHOL. 726 (2006).
36. See Marjan Bakker, Annette van Dijk & Jelte M. Wicherts, The Rules of the Game Called Psychological Science, 7 PERSPECTIVES PSYCHOL. SCI. 543 (2012); Daniele Fanelli, Negative Results Are Disappearing From Most Disciplines and Countries, 90 SCIENTOMETRICS 891 (2012); Daniele Fanelli, Positive Results Increase Down the Hierarchy of the Sciences, 5 PLOS ONE e10068 (2010); John P. Ioannidis, Why Most Published Research Findings Are False, 2 PLOS MED. e124 (2005); Jeffrey D. Scargle, Publication Bias: The File-Drawer Problem in Scientific Inference, 14 J. SCI. EXPLORATION 91 (2000).
37. http://retractionwatch.wordpress.com/2013/07/08/time-for-a-scientific-journal-reproducibility-index/.
38. http://www.nature.com/news/replication-studies-bad-copy-1.10634#/b2.
39. http://www.nature.com/nature/journal/v483/n7391/full/483531a.html.
40. https://www.scienceexchange.com/reproducibility; see also Carl Zimmer, A Sharp Rise in Retractions Prompts Calls for Reform, The New York Times, April 16, 2012, http://www.nytimes.com/2012/04/17/science/rise-in-scientific-journal-retractions-prompts-calls-for-reform.html?_r=0.
41. See, e.g., Jens B. Asendorpf et al., Recommendations for Increasing Replicability in Psychology, 27 EUROPEAN J. PERSONALITY 108 (2013); Etienne P. LeBel et al., PsychDisclosure.org: Grassroots Support for Reforming Reporting Standards in Psychology, 8 PERSPECTIVES PSYCHOL. SCI. 424 (2013); Open Science Collaboration, An Open, Large-Scale, Collaborative Effort to Estimate the Reproducibility of Psychological Science, 7 PERSPECTIVES PSYCHOL. SCI. 657 (2012).
42. See Hart Blanton et al., Strong Claims and Weak Evidence: Reassessing the Predictive Validity of the IAT, 94 J. APP. PSYCHOL. 567 (2009).
43. RAND Retracts Report about Medical Marijuana Dispensaries and Crime, available at http://www.rand.org/news/press/2011/10/24.html (last visited November 5, 2013).
44. See, e.g., Hart Blanton & Gregory Mitchell, Reassessing the Predictive Validity of the IAT: II. Reanalysis of Heider & Skowronski (2007), 13 N. AM. J. PSYCHOL. 99 (2011); Uri Simonsohn, Just Post It: The Lesson From Two Cases of Fabricated Data Detected by Statistics Alone, 24 PSYCHOL. SCI. 1875 (2013); Uri Simonsohn, Just Posting It Works, Leads to New Retraction in Psychology, available at http://datacolada.org/2013/09/17/just_posting_it_works/ (last visited November 5, 2013).

45. See Jelte Wicherts, Marjan Bakker & Dylan Molenaar, Willingness to Share Data Is Related to the Strength of the Evidence and the Quality of Reporting of Statistical Results, 6 PLOS ONE e26828 (2011).
46. See, e.g., Richard A. Anderson, William H. Greene, B.D. McCullough & H.D. Vinod, The Role of Data/Code Archives in the Future of Economic Research, 15 J. ECON. METHODOLOGY 99 (2008); Stephen J. Ceci & Elaine Walker, Private Archives and Public Needs, 38 AM. PSYCHOL. 414 (1983); Virginia A. de Wolf, Joan E. Sieber, Philip M. Steel & Alvan O. Zarate, What Is the Requirement for Data Sharing?, 27 IRB: ETHICS & HUMAN RESEARCH 12 (Nov.-Dec. 2005).
47. This fact differentiates our focal case from cases in which a researcher has collected data that may be relevant to a pending case and one of the parties seeks to use the data to prove facts at issue in the case without retaining that researcher as an expert in the case.
48. Given that secondary data, by definition, does not involve the parties to the case, there should be no need for identified data. Thus, production of de-identified data will maintain any anonymity or confidentiality requirements imposed by law or university institutional review boards. We discuss this issue in more detail in Section IV.C.
49. E.g., Randall v. Rolls-Royce Corp., 2010 U.S. Dist. LEXIS 23421, *5 (S.D. Ind. March 12, 2010) ("Dr. Harnett, who began his evaluation of the analysis contained in the report, soon concluded that he needed the underlying studies and statistical programs created or used by Dr. Drogin. In response to the Defendants' request for such materials, Plaintiffs produced four discs containing more than 1,000 separate electronic files.").
50. FED. R. CIV. P. 45, 1991 amend. note.
51. Id.
52. Bio-Technology Gen. Corp. v. Novo Nordisk A/S, 2003 U.S. Dist. LEXIS 7911, *9 (D. Del. May 7, 2003) (quoting Kaufman v. Edelstein, 539 F.2d 811, 822 (2d Cir. 1976)).
53. Erdman Co. v. Phoenix Land & Acquisition, 2012 U.S. Dist. LEXIS 164741, *8 (E.D. Mo. Nov. 19, 2012).
54. In re NCAA Student-Athlete Name & Likeness Licensing Litigation, 2012 WL 4856968, *1-*2 (E.D. Mo. Oct. 12, 2012).
55. Id. at *3.
56. Id.
57. Id. at *4.
58. Cusumano v. Microsoft Corp., 162 F.3d 708, 711 (1st Cir. 1998).

59. Id. at 717.
60. Id.
61. Id.
62. See Letter Brief to the Honorable Maria-Elena James, July 24, 2007, Holloway v. Best Buy Co., Inc., No. 3:05-cv-05056-PJH (N.D. Cal. 2005).
63. If journals and their editors enforced requirements that data be publicly archived or made available to other researchers, then the need for data subpoenas would be lessened. Unfortunately, many social science journals do not require public archiving as a condition of publication or police data-sharing requirements, and, based on the experiences of the second author and others who have made data requests, psychologists share their data at disappointingly low rates. See Hart Blanton, James Jaccard, Jonathan Klick, Barbara A. Mellers, Gregory Mitchell & Philip E. Tetlock, Strong Claims & Weak Evidence: Reassessing the Predictive Validity of the IAT, 94 J. APP. PSYCHOL. 567 (2009); Gregory Mitchell, What Is Wrong With Social Psychology?, 26 DIALOGUE 12 (Fall 2012). The National Science Foundation has recently strengthened its data-sharing requirement as a condition of the receipt of grant funds, a move that may improve the data-sharing practices of social scientists. Recently, The Economist observed:
A study published last month in PeerJ by Melissa Haendel, of the Oregon Health and Science University, and colleagues found that more than half of 238 biomedical papers published in 84 journals failed to identify all the resources (such as chemical reagents) necessary to reproduce the results. On data, Christine Laine, the editor of the Annals of Internal Medicine, told the peer-review congress in Chicago that five years ago about 60% of researchers said they would share their raw data if asked; now just 45% do. Journals' growing insistence that at least some raw data be made available seems to count for little: a recent review by Dr. Ioannidis showed that only 143 of 351 randomly selected papers published in the world's 50 leading journals and covered by some data-sharing policy actually complied.
Trouble at the lab: Scientists like to think of science as self-correcting. To an alarming degree, it is not, The Economist (October 13, 2013), supra.
64. Indeed, on occasion researchers have taken it upon themselves to testify based upon confidential data without obtaining permission from the government agency providing that data. For example, in the high-profile case of Dukes v. Wal-Mart, Dr. Marc Bendick based his testimony on confidential EEO-1 reports, which he was provided by the EEOC with the understanding that they would be used for research and not for litigation. Dukes v. Wal-Mart Stores, Inc., 222 F.R.D. 189, 193 (N.D. Cal. 2004), aff'd, 2010 U.S. App. LEXIS 8576 (9th Cir. Apr. 26, 2010) (en banc), rev'd, 131 S. Ct. 2554 (2011). The district court rejected a motion to strike Dr. Bendick's declaration based on this data and did not require production of the full data set on which Dr. Bendick relied, in part on the grounds that the full data set was not requested in a timely manner. Dukes, 222 F.R.D. at 194.
65. Craig v. Rite Aid Corp., 2012 U.S. Dist. LEXIS 46274, *36 (M.D. Pa. March 30, 2012) (redacting portion of document after in camera inspection).
66. For instance: "However, biased treatment is not always conscious. The EEO laws prohibit not only decisions driven by animosity, but also decisions infected by stereotyped thinking. Because viewing a video may trigger unconscious bias, especially if opportunities for face-to-face conversation are absent, covered entities should implement proactive measures, or best practices, to minimize this risk." Letter from Assistant General Counsel of EEOC to member of the public, dated September 21, 2010, available at http://www.eeoc.gov/eeoc/foia/letters/2010/ada_gina_titlevii_video_resumes.html.
67. 957 F. Supp. 873, 877-880 (S.D. Tex. 1997).
68. Id.
69. 611 F. Supp. 1223, 1245 (E.D.N.Y. 1985).
70. Id. at 1238.
71. Heckman v. Ryder Truck Rental, Inc., 2013 U.S. Dist. LEXIS 21035, *2 (D. Md. Feb. 11, 2013), citing Mirchandani v. Home Depot, U.S.A., Inc., 235 F.R.D. 611, 614 (2006).
72. Mirchandani v. Home Depot, U.S.A., Inc., 235 F.R.D. at 615.
73. The destructive testing cases provide a helpful analogy when data, like the allegedly defective product, potentially are available. However, there are commonly cited studies in every field for which data may no longer be available simply due to the age of the studies. Oftentimes these studies will have spawned research that builds on their initial insights or that conceptually replicates the prior results. Consequently, an expert who relies on past studies for which data are not reasonably available should be able to point to other studies that confirm or elaborate on the earlier findings, and for which data are available. If the earlier studies have not been replicated directly or indirectly, then parties and courts should inquire into why that is the case.
