
Risk Analysis, Vol. 28, No. 5, 2008 DOI: 10.1111/j.1539-6924.2008.01086.

An Integrated Approach to Oversight Assessment for Emerging Technologies

Jennifer Kuzma,∗ Jordan Paradise, Gurumurthy Ramachandran, Jee-Ae Kim, Adam Kokotovich, and Susan M. Wolf

∗ Address correspondence to Jennifer Kuzma, Center for Science, Technology, and Public Policy, Hubert H. Humphrey Institute, University of Minnesota, 301-19th Ave. S., Minneapolis, MN 55455, USA; tel: 612-625-6337; fax: 612-625-3513; kuzma007@umn.edu.

Analysis of oversight systems is often conducted from a single disciplinary perspective and by using a limited set of criteria for evaluation. In this article, we develop an approach that blends risk analysis, social science, public administration, legal, public policy, and ethical perspectives to develop a broad set of criteria for assessing oversight systems. Multiple methods, including historical analysis, expert elicitation, and behavioral consensus, were employed to develop multidisciplinary criteria for evaluating oversight of emerging technologies. Sixty-six initial criteria were identified from extensive literature reviews and input from our Working Group. Criteria were placed in four categories reflecting the development, attributes, evolution, and outcomes of oversight systems. Expert elicitation, consensus methods, and multidisciplinary review of the literature were used to refine a condensed, operative set of criteria. Twenty-eight criteria resulted spanning four categories: seven development criteria, 15 attribute criteria, five outcome criteria, and one evolution criterion. These criteria illuminate how oversight systems develop, operate, change, and affect society. We term our approach "integrated oversight assessment" and propose its use as a tool for analyzing relationships among features, outcomes, and tradeoffs of oversight systems. Comparisons among historical case studies of oversight using a consistent set of criteria should result in defensible and evidence-supported lessons to guide the development of oversight systems for emerging technologies, such as nanotechnology.

KEY WORDS: Expert elicitation; multicriteria decision analysis; multidisciplinary; nanotechnology; oversight assessment; risk

1. INTRODUCTION

U.S. approaches to oversight of research and technology have developed over time in an effort to ensure safety for humans, animals, and the environment; to control use in a social context; and, on occasion, to promote innovation. In modern times, regulatory and oversight tools have evolved to include diverse approaches such as performance standards, tradable allowances, consultations between government and industry, and premarket safety and efficacy reviews (Wiener, 2004; Davies, 2007). The decision whether to impose an oversight system, the oversight elements, the level of oversight (e.g., federal, state, local), the choice of approach (e.g., mandatory or voluntary), and its execution can profoundly affect technological development, individual and collective interests, and public trust and attitudes toward technological products (Rabino, 1994; Zechendorf, 1994; Siegrist, 2000; Cobb & Macoubrie, 2004; Macoubrie, 2005, 2006). Oversight is conducted by a range of institutions with various capabilities, cultures, and
motives (e.g., Abraham, 2002). Avenues for disputing oversight decisions are also important, and some argue that the United States operates in an adversarial regulatory culture in which Congress, the media, and stakeholders regularly contest the decisions of federal agencies (Jasanoff, 1990).

Oversight for a new and emerging technology, nanotechnology, has recently been the subject of debate and analysis. Nanotechnology involves an enabling set of products, methods, and tools to conduct science, perform tasks or actions, and generate products at very small scales. Scholars and organizations are currently debating how nanotechnology and its applications should be overseen, even as products are rapidly entering the marketplace (Davies, 2006, 2007; Kuzma, 2006; Taylor, 2006; PEN, 2007). Nanotechnology has been defined as the "understanding and control of matter at dimensions of roughly 1 to 100 nanometers, where unique phenomena enable novel applications" (NNI, 2007). It has the potential to advance medicine, agriculture, health, and environmental science and provide great benefits to society (Lane & Kalil, 2005). However, there may be safety concerns related to the special properties of nanoparticles, such as their greater abilities to penetrate and translocate (i.e., move across membranes, organs, and cells) in biological systems (reviewed in Maynard, 2006). To the extent that there is governmental oversight as yet, the U.S. oversight system for nanotechnology currently relies on agencies and regulations that have provided oversight for related technologies, products, or functions, but may not be equipped to adequately handle the novel properties of and unique challenges that may be associated with certain nanoproducts (Davies, 2006, 2007; Kuzma, 2006; Taylor, 2006). Recent studies indicate that while the public is excited about nanotechnology and its potential benefits, there is concern about who is developing and promoting the technology, who will assess and manage the potential risks, who will be responsible for monitoring products after they hit the marketplace, and who will be liable for potential problems (Cobb & Macoubrie, 2004; Macoubrie, 2005, 2006; Pidgeon, 2006). The public is also concerned about long-term health and environmental effects and has cited negative experiences with past technologies as a reason to be cautious with nanotechnology (Macoubrie, 2005). It has been suggested that proper oversight systems can lead to greater public confidence, as well as increased technological innovation and success (Porter, 1991; Porter & van der Linde, 1995; Jaffe & Palmer, 1997).

Public confidence in and perceptions of risk from technological products are affected by more than just the quantitative level of risk. Peoples' attitudes are influenced by several factors, such as whether the risk is voluntary or involuntary, natural or man-made, controllable or uncontrollable, or familiar or unfamiliar (Rasmussen, 1981; Slovic, 1987). Emerging technologies that are unfamiliar to people and to which they are involuntarily exposed often fall into a category of high "dread" in risk perception studies (Slovic, 1987; Siegrist et al., 2007a). Public perception of nanotechnology in particular—including trust, perceived benefits, and general attitudes—is dependent on the specific application or product (Siegrist et al., 2007b). Current and potential products of nanotechnology are extremely diverse (PEN, 2007), and oversight systems will need to respond to this diversity. Such systems will need not only to review scientific information about human health and environmental risks of diverse products while promoting innovation and research, but also to be seen as legitimate, trustworthy, and able to handle many types of applications.

In order to systematically analyze oversight systems, make comparisons, and glean lessons for oversight of new and emerging technologies such as nanobiotechnology, we developed a broad set of multidisciplinary criteria that could be applied to characterize and evaluate any oversight system for technological products and applications. We divided these criteria into categories to explore the development, attributes, outcomes, and evolution of oversight systems. This article reviews the literature on which the criteria were based, describes the methodology for developing the criteria, and discusses the features of the set. It also describes the role of criteria-based oversight assessment within a broader approach, which we term "integrated oversight assessment" (IOA). Future work and publications will utilize the criteria and IOA to compare historical case studies of oversight, so that defensible and evidence-supported lessons to guide the development of oversight systems for emerging technologies, such as nanotechnology, can be identified.

2. OVERSIGHT ASSESSMENT APPROACHES

2.1. Previous Approaches

Analysis of oversight systems has historically been conducted through one or a few perspectives using a small set of criteria often focused on a
particular discipline (e.g., U.S. EPA, 1983; OTA, 1995; Davies, 2007). Yet, oversight affects multiple stakeholders with various viewpoints, values, and concerns and should pass muster from policy, legal, economic, ethical, and scientific perspectives. Some stakeholders are most concerned about economic impacts or job opportunities that result or are lost with technological adoption. Others care primarily about the health and environmental impacts of new products. Most consumers value parameters that affect their daily life, such as improved health, lower costs, better local environments, convenience, and quality. Government regulators often focus on health risks, costs, and benefits (U.S. EPA, 1983; White House, 1993, as amended, 2007). From a global perspective, there are emerging concerns that risks and benefits of technological products be fairly distributed within and among nations (Singer et al., 2005). From an ethical perspective, evaluation of emerging technologies may raise issues of conflict with moral principles or values and questions of whether the oversight process respects them (Walters, 2004). Although not every group or individual viewpoint can be accommodated in an oversight system, in a democracy such as the United States, an oversight system should respond to a range of viewpoints, values, and concerns (Jasanoff, 1990; Wilsdon & Willis, 2004; MacNaghten et al., 2005).

Some groups are making progress in integrating criteria for analysis of oversight frameworks in systematic ways. For example, the Fast Environmental Regulatory Tool (FERET) was designed to evaluate regulatory options via a computerized template to "structure the basic integration of impacts and valuations, provide a core survey of the literature, incorporate uncertainty through simulation methods, and deliver a benefit-cost analysis that reports quantitative impacts, economics values, and qualitative elements" (Farrow et al., 2001, p. 430). FERET addresses distributional issues of who bears the costs and who receives the benefits of oversight, and uses sophisticated modeling techniques to incorporate uncertainty. However, it does not account for other important attributes of oversight systems, for example, those that affect public confidence, legitimacy, developer satisfaction, and technology development.

In contrast, a more qualitative oversight evaluation method is used in an article on the deliberations of the consortium to examine clinical research ethics (Emanuel et al., 2004). This diverse expert group used qualitative and normative approaches to identify 15 problems in the oversight of research involving human participants. It then sorted them into three categories—structural, procedural, and performance assessment problems—and evaluated whether proposed reforms would address those challenges. Identified problems in the oversight of human subjects research included the ability of the oversight system to be consistent, flexible, manage conflicts of interest, and provide for adequate education of participants (Emanuel et al., 2004). FERET's highly quantitative model at one extreme, and the Consortium's qualitative expert group consensus model at the other, show a range of approaches to evaluating oversight. However, the recent literature examining emerging technologies establishes that technology governance requires collaboration among scientists, government, and the public (Wiek et al., 2007). This suggests that oversight assessment should use a broad range of criteria that addresses the concerns of multiple stakeholders.

2.2. Basis of Our Approach

The goal of this study was to develop a multidisciplinary approach to more comprehensively evaluate oversight systems for emerging technologies. In our work, we define "oversight" broadly, as "watchful or responsible care" that can include regulatory supervision, or nonregulatory and voluntary approaches or systems (Kuzma, 2006). Our IOA approach is based in part upon multicriteria decision analysis (MCDA). MCDA relies on the notion that no single outcome metric can capture the appropriateness or effectiveness of a system, allows for integrating heterogeneous information, and enables incorporation of expert and stakeholder judgments (reviewed in Belton & Stewart, 2002). MCDA refers to a range of approaches in which multiple criteria are developed, ranked, and used to compare alternatives for decision making. General categories of criteria have been described, such as utility-based criteria (focusing on cost, risk-benefit comparison, and outcomes), rights-based criteria (focusing on whether people have consented to risk and their rights are being respected), and best available technology-based criteria (focusing on using the best technologies available to reduce risk to the extent possible) (Morgan & Henrion, 1990a). MCDA can be descriptively useful to better understand systems, stakeholder and expert views, and multiple perspectives on decisions. However, its normative utility is limited because its ability to predict or recommend
the best decision or approach is unclear (Morgan & Henrion, 1990a).

Fig. 1. Integrated oversight assessment (IOA) methodology. The IOA approach combines multicriteria decision analysis, quantitative and qualitative analysis, and historical literature analysis, as described in this article.

MCDA has been used recently to evaluate strategies for risk management (e.g., remediating environmental hazards such as oil spills) (Linkov et al., 2006, 2007a). An MCDA approach was recently used to evaluate oversight approaches to three hypothetical nanomaterials by eliciting criteria and weightings from scientists and managers (Linkov et al., 2007b). Criteria used were health and ecological effects, societal importance, and stakeholder preference, and these were weighted according to their importance. However, since the products were hypothetical, the criteria were broad and few, and the authors of the study ranked the criteria themselves, the results were limited to demonstrating how MCDA could be applied to decisions about the environmental health and safety of nanomaterials. To our knowledge, MCDA has neither been applied to broader oversight policy questions for emerging technologies, nor has it incorporated comprehensive sets of criteria that address intrinsic values, rights, and fairness, as well as utilitarian outcomes.
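The weighted-criteria logic behind such an MCDA exercise can be sketched in a few lines. The criteria names, weights, and scores below are hypothetical placeholders chosen for illustration only; they are not values from Linkov et al. (2007b) or from our own elicitation:

```python
# Minimal weighted-sum MCDA sketch. All criteria, weights, and scores here
# are hypothetical placeholders, not values from any cited study.

def mcda_score(scores, weights):
    """Weighted-sum score for one alternative (higher is better)."""
    return sum(weights[c] * scores[c] for c in weights)

# Hypothetical importance weights for three broad criteria (sum to 1.0).
weights = {
    "health_and_ecological_effects": 0.5,
    "societal_importance": 0.3,
    "stakeholder_preference": 0.2,
}

# Hypothetical performance scores (0-1 scale) for two oversight options.
options = {
    "mandatory premarket review": {
        "health_and_ecological_effects": 0.9,
        "societal_importance": 0.7,
        "stakeholder_preference": 0.6,
    },
    "voluntary reporting scheme": {
        "health_and_ecological_effects": 0.5,
        "societal_importance": 0.6,
        "stakeholder_preference": 0.8,
    },
}

# Rank the options by their weighted-sum scores.
for name in sorted(options, key=lambda n: mcda_score(options[n], weights), reverse=True):
    print(f"{name}: {mcda_score(options[name], weights):.2f}")
```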
Expanding on the general framework of MCDA, we employ several methods to devise criteria for evaluating and describing oversight systems, including review of the relevant literature, historical analysis, group consensus, and quantitative expert and stakeholder elicitation (Fig. 1). Fields of public policy (including social science and risk policy), social science, law, and ethics were considered to develop several types of criteria, including those relating to economics, social science, safety, values, and impacts on technology research and development. In our IOA approach, we consider viewpoints of different actors and stakeholders; the role of various programs, policies, and decisions; and diverse types of impacts. The breadth of our approach resulting from the integration of multiple disciplines, literatures, and methodologies makes it challenging. It is a more comprehensive approach than many others, but it is not exhaustive in nature. We acknowledge that it has limitations that stem in part from its reliance on literature analysis and the views of experts and stakeholders. Broad citizen surveys or public engagement exercises were not directly included.
However, the public concerns and attitudes are represented in the literature we used to develop and refine the criteria. Despite the limitations, the multiple disciplines and methods employed in IOA make it a unique approach that is well equipped to understand many dimensions of oversight systems, depict the complexity of oversight, and aid in the design and implementation of more viable and robust systems. Our overall methodology is depicted in Fig. 1 and described in more detail in the following sections.

3. DEVELOPING AND CATEGORIZING CRITERIA FOR OVERSIGHT ASSESSMENT

We developed our criteria through a multistage process by drawing upon the literature, conducting historical analysis, and using stakeholder and expert elicitation and consensus. In order to represent multiple disciplines in our assessment of oversight systems, we were inclusive in choosing criteria for initial consideration. Criteria characterized how the oversight system developed, its attributes, evolution, and outcomes. At this stage, we use the term "criteria" broadly as descriptive or evaluative and do not assume which criteria are able to predict good oversight or outcomes that a majority would believe to be positive (e.g., positive environmental impacts). However, because we were guided in our choice of criteria by what experts, stakeholders, and citizens value, many are likely to prove to be normatively important for assessing whether oversight is effective or appropriate. Future work and publications comparing across six historical case studies (gene therapy, genetically engineered organisms in the food supply, human drugs, medical devices, chemicals in the workplace, and chemicals in the environment) should allow us to identify what criteria are predictive of successful and appropriate oversight (Fig. 1). Judging which criteria fall into which category is not straightforward at this point, as we do not know which independent or descriptive variables will impact the dependent or evaluative ones (e.g., outcome criteria) most positively for oversight until the criteria are deployed across the case studies (see Section 5).

Initially, 66 criteria were identified from supporting literature (Table I and Appendix A).¹ Searches were conducted using a variety of databases and resources² to rigorously research the legal, ethics, and public policy literature regarding oversight, including materials on criteria utilized in oversight analysis. We strove to set up a methodology that would be conducive to generating and testing hypotheses about oversight systems. In order to probe relationships between features of oversight systems and important outcomes of them in future work (i.e., in application of the criteria to historical case studies, see Fig. 1), we categorized criteria into four groups—those associated with the initial development of the system (e.g., establishment of policies, procedures, or regulations); the attributes of the system (e.g., how the system operates for particular processes or decisions); the outcomes of the system (e.g., social, economic, cultural, health, environmental, and consumer impacts); and evolution of the system (e.g., changes to the development, attributes, or outcomes over time). We suspect that criteria within and among categories interrelate and affect each other. For example, we have hypothesized that the way in which the oversight mechanism develops and its attributes are related to outcomes such as public confidence, health, and environmental impacts, and economic effects on industry and stakeholders. Outcomes then spur further development and change attributes over time as the system evolves. Below, we discuss some of the supporting literature and resulting criteria also listed in Table I.

¹ Appendix A is available in the online version of the article.
² Resources relevant to legal analysis of oversight include statutes, legal cases, the Congressional Record, the Federal Register, agency rulings, bill history, and law review articles. Databases commonly utilized for legal analysis include both electronic (e.g., Lexis/Nexis, Westlaw, Thomas U.S. Government, FindLaw, Legal Research Network, LegalTrac) and hard-copy resources. Databases commonly used for ethics analysis, especially bioethics, include the electronic websites Medline, MedBioWorld, ISI Web of Science, Academic Search Premiere, Lexis/Nexis, Westlaw, as well as bioethics-specific journals such as the American Journal of Bioethics; Journal of Law, Medicine & Ethics; American Journal of Law & Medicine; Hastings Center Report; Health Matrix; Kennedy Institute of Ethics Journal; and Yale Journal of Health Policy, Law & Ethics. Databases for public policy research are diverse, including Ingenta, Google Scholar, JSTOR, Current Contents, PAIS (Public Affairs Information Service), Pub Med, Agricola, and PolicyFile, as well as specific journals such as Risk Analysis, Issues in Science and Technology, Science, Nature, Nature Biotechnology, Environmental Health and Safety, Journal of Applied Economics, Science and Public Policy, Technology in Society, International Journal of Technology Policy and Law, and Journal of Policy Analysis and Management.
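For readers following the letter-number identifiers used throughout this section (d for development, a for attributes, e for evolution, o for outcomes), the categorized criteria can be held as a simple data structure. The sketch below is illustrative only and includes just a handful of the 66 initial criteria listed in Table I:

```python
# Minimal sketch of the four-category criteria structure used in this section
# (d = development, a = attributes, e = evolution, o = outcomes).
# Only a few example entries from Table I are shown, not all 66 criteria.

criteria = {
    "development": {"d1": "Impetus", "d10": "Transparency"},
    "attributes": {"a4": "Stringency of system", "a22": "Transparency"},
    "evolution": {"e1": "Extent of change in attributes"},
    "outcomes": {"o5": "Public confidence", "o12": "Environmental impacts"},
}

def lookup(criterion_id):
    """Return (category, name) for an identifier such as 'a22'."""
    for category, entries in criteria.items():
        if criterion_id in entries:
            return category, entries[criterion_id]
    raise KeyError(criterion_id)

print(lookup("a22"))  # ('attributes', 'Transparency')
```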

Table I. List of Initial Criteria for Oversight Assessment and Supporting Literature

Criterion | Guiding Question | Supporting Literature

Development
d1 Impetus | What were the driving forces? | Davies (2007)
d2 Clarity of technological subject matter | Are the technologies, processes, and products to be overseen well defined? | Davies (2007)
d3 Legal grounding | How explicit are the statutes or rules on which the oversight framework is based? Is it clear that the decisionmakers in the framework have legal authority for the actions they proposed at the time? Is there grounding in existing laws? | Fischoff et al. (1981); Slovic (1987); Jasanoff (1990); Porter (1991); OTA (1995); Frewer et al. (1996); NRC (1996); Jaffe & Palmer (1997); Siegrist (2000); Ogus (2002); Cobb and Macoubrie (2004); Frewer et al. (2004); Jasanoff (2005); Macoubrie (2005); Davies (2007) & Siegrist et al. (2007a, 2007b)
d4 Federal authority | How strong was the authority for federal actors? | Fischoff et al., 1981; Slovic, 1987; Jasanoff, 1990, 2005; Frewer et al., 1996, 2004; NRC, 1996; Siegrist, 2000; Cobb & Macoubrie, 2004; Macoubrie, 2005; Davies, 2007; Siegrist et al., 2007a, 2007b
d5 Industry authority | How strong was the authority for industry actors? | Einsiedel & Goldenberg, 2004; Stewart & McLean, 2004; Thompson, 2007
d6 Loci | How many loci of authority (e.g., industry, government, nonprofit, developers, scientists, clinicians) for oversight were included in the development stages? | Davies, 2007
d7 Stakeholder input | Was there a process or opportunities for stakeholder contribution to discussions or decisions about on what the system is based, how it operates, or how it is structured? | Jasanoff, 1990; Einsiedel & Goldenberg, 2004; Stewart & McLean, 2004; Thompson, 2007
d8 Breadth of input | To what extent were groups of citizens and stakeholders from all sectors of society encouraged to provide input to decisionmakers who devised the oversight framework? Were some groups missing? | Fischoff et al., 1981; Slovic, 1987; Jasanoff, 1990, 2005; Frewer et al., 1996, 2004; NRC, 1996; Siegrist, 2000; Cobb & Macoubrie, 2004; Einsiedel & Goldenberg, 2004; Stewart & McLean, 2004; Macoubrie, 2005; Siegrist et al., 2007a, 2007b; Thompson, 2007
d9 Opportunity for value discussions | How many and what kinds of opportunities did stakeholders and citizens have to bring up concerns about values or nontechnical impacts? | Fischoff et al., 1981; Slovic, 1987; Jasanoff, 1990, 2005; Frewer et al., 1996, 2004; NRC, 1996; Beauchamp & Walters, 1999; Siegrist, 2000; Cobb & Macoubrie, 2004; Einsiedel & Goldenberg, 2004; Emanuel et al., 2004; Stewart & McLean, 2004; Macoubrie, 2005; Siegrist et al., 2007b; Thompson, 2007
d10 Transparency | Were options that the agencies or other decision-making bodies were considering known to the public? Were studies about the pros and cons of these options available? | Fischoff et al., 1981; U.S. EPA, 1983; Slovic, 1987; Jasanoff, 1990, 2005; White House, 1993, amended 2007; Frewer et al., 1996; NRC, 1996; Siegrist, 2000; Cobb & Macoubrie, 2004; Frewer et al., 2004; Macoubrie, 2005; Siegrist et al., 2007a, 2007b
d11 Financial resources | How sufficient were the funds provided to the developers of the framework? | Emanuel et al., 2004; OTA, 1995; Davies, 2007
d12 Personal education and training | How trained or educated were actors during the development stage of oversight? | Emanuel et al., 2004; Davies, 2007
d13 Empirical basis | To what extent was scientific or other objective evidence used in designing the review or oversight process central to the framework? | Davies, 2007

Attributes
a1 Legal grounding | How explicit are the statutes or rules on which specific decisions within the oversight framework are based? Is it clear that the decisionmakers in the framework have legal authority for the actions they propose? | Fischoff et al., 1981; Slovic, 1987; Greene, 1990; Jasanoff, 1990, 2005; Frewer et al., 1996, 2004; NRC, 1996; Cole & Grossman, 1999; Siegrist, 2000; Cobb & Macoubrie, 2004, 2005; Davies, 2007; Siegrist et al., 2007a, 2007b
a2 Data requirement | How comprehensive are the safety and other studies required for submittal to authorities? If the system is voluntary, how comprehensive are the data that are generated and available for review prior to decisions about release or approval? | Fischoff et al., 1981; U.S. EPA, 1983; Slovic, 1987; Greene, 1990; Jasanoff, 1990, 2005; Frewer et al., 1996, 2004; NRC, 1996; Cole & Grossman, 1999; Siegrist, 2000; Cobb & Macoubrie, 2004; Macoubrie, 2005; Davies, 2007; Siegrist et al., 2007a, 2007b
a3 Treatment of uncertainty | Is uncertainty accounted for qualitatively or quantitatively in data and study submissions? | Fischoff et al., 1981; Slovic, 1987; Jasanoff, 1990, 2005; Frewer et al., 1996, 2004; NRC, 1996; Siegrist, 2000; Cobb & Macoubrie, 2004; Macoubrie, 2005; Davies, 2007; Siegrist et al., 2007a, 2007b
a4 Stringency of system | Is the system mandatory or voluntary? | Fischoff et al., 1981; Slovic, 1987; Greene, 1990; Jasanoff, 1990, 2005; Porter, 1991; OTA, 1995; Frewer et al., 1996, 2004; NRC, 1996; Jaffe & Palmer, 1997; Cole & Grossman, 1999; Siegrist, 2000; Ogus, 2002; Cobb & Macoubrie, 2004, 2005; Davies, 2007; Siegrist et al., 2007a, 2007b
a5 Empirical basis | To what extent is scientific or other objective evidence used in making decisions about specific products, processes, or trials? | Fischoff et al., 1981; Slovic, 1987; Jasanoff, 1990, 2005; Frewer et al., 1996, 2004; NRC, 1996; Siegrist, 2000; Cobb & Macoubrie, 2004; Macoubrie, 2005; Davies, 2007; Siegrist et al., 2007a, 2007b
a6 Compliance and enforcement | To what extent does the system ensure compliance with legal and other requirements and to what extent can it prosecute or penalize noncompliers? | Fischoff et al., 1981; Slovic, 1987; Greene, 1990; Jasanoff, 1990, 2005; OTA, 1995; Cole & Grossman, 1999; Frewer et al., 1996, 2004; NRC, 1996; Siegrist, 2000; Cobb & Macoubrie, 2004; Macoubrie, 2005; Davies, 2007; Siegrist et al., 2007a, 2007b
a7 Incentives | Are the stakeholders in the system encouraged to abide by the requirements of the system? | Davies, 2007
a8 Treatment of intellectual property and proprietary information | How does confidential business information, trade secrets, or intellectual property get treated in applications for approval? | NRC, 2000; PIFB, 2003a; Einsiedel & Goldenberg, 2004; Stewart & McLean, 2004; Davies, 2007; Thompson, 2007
a9 Institutional structure | How many agencies or entities with legal authority are involved in the process of decision making within the framework? | Porter, 1991; OTA, 1995; Jaffe & Palmer, 1997; Ogus, 2002; Davies, 2007
a10 Feedback loop | Can something discovered in later phases of product review be used to improve early review stages for other, the same, or modified products/trials in the future? | OTA, 1995; Emanuel et al., 2004; Davies, 2007
a11 Formal assessment | Is there a regular mechanism for internal or contracted studies on the quality of the system? | Emanuel et al., 2004
a12 Postmarket monitoring | Is there a science-based and systematic process for detecting risks and benefits after commercial release, field trials, or clinical trials? | Emanuel et al., 2004; Davies, 2007
a13 Industry navigation | How easily can small, medium, and large companies navigate the oversight system? | U.S. EPA, 1983
a14 Actors involved | Is there a wide range of perspectives and expertise involved in decision making for projects, trials, or processes? | Davies, 2007
a15 Flexibility | Can products or trials undergo expedited review when appropriate? Can products or trials be easily stopped when information on potential risks is presented? | OTA, 1995
a16 Capacity | Is the system well prepared and equipped to deal with the approvals of trials, products, or processes? | OTA, 1995; Davies, 2007
a17 Relationship among actors | Are the various actors confrontational to each other in interactions? Are there efforts to work together to understand differences? | Davies, 2007
a18 Stakeholder input | Is there a process or opportunities for stakeholders to contribute to discussions or decisions about whether certain products, processes, or trials should be approved? | Fischoff et al., 1981; Slovic, 1987; Jasanoff, 1990, 2005; Frewer et al., 1996, 2004; NRC, 1996; Siegrist, 2000; Cobb & Macoubrie, 2004; Einsiedel & Goldenberg, 2004; Stewart & McLean, 2004; Macoubrie, 2005; Siegrist et al., 2007a, 2007b; Thompson, 2007
a19 Breadth of input | To what extent are groups of citizens and stakeholders from all sectors of society encouraged to provide input to decisionmakers about specific actions or approvals? Are some groups "heard" more than others? | Fischoff et al., 1981; Slovic, 1987; Jasanoff, 1990, 2005; Frewer et al., 1996, 2004; NRC, 1996; Siegrist, 2000; Cobb & Macoubrie, 2004; Einsiedel & Goldenberg, 2004; Stewart & McLean, 2004; Macoubrie, 2005; Thompson, 2007; Siegrist et al., 2007a, 2007b
a20 Opportunities for value discussion | How many and what kinds of opportunities do stakeholders and citizens have to bring up value concerns? | Fischoff et al., 1981; Slovic, 1987; Jasanoff, 1990, 2005; Frewer et al., 1996, 2004; NRC, 1996; Beauchamp & Walters, 1999; Siegrist, 2000; Cobb & Macoubrie, 2004; Einsiedel & Goldenberg, 2004; Emanuel et al., 2004; Stewart & McLean, 2004; Macoubrie, 2005; Thompson, 2007; Siegrist et al., 2007a, 2007b
a21 Consideration of fairness | Is the system attentive to the distribution of costs/risks and benefits? Is the system attentive to equal opportunities for all to benefit from its decisions? | Jasanoff, 1990, 2005; OTA, 1995; Beauchamp & Walters, 1999
a22 Transparency | Are options that agencies or other decision-making bodies are considering known to the public? Are studies about the pros and cons of these options available? Is the process for how decisions are made clearly articulated to interested parties? | Fischoff et al., 1981; U.S. EPA, 1983; Slovic, 1987; Jasanoff, 1990, 2005; White House, 1993, as amended, 2007; Frewer et al., 1996, 2004; NRC, 1996, 2000; Beauchamp & Walters, 1999; Siegrist, 2000; PIFB, 2003a; Cobb & Macoubrie, 2004; Macoubrie, 2005; Davies, 2007; Siegrist et al., 2007a, 2007b
a23 Conflict of interest | Do independent experts conduct or review safety studies? Are conflicts of interest disclosed routinely? | Einsiedel & Goldenberg, 2004; Emanuel et al., 2004; Stewart & McLean, 2004; Davies, 2007; Thompson, 2007
a24 Conflict of views | How are conflicting views handled in the review of products, processes, and trials? |
a25 Economic costs and benefit considered | What role does cost-benefit analysis play in approvals? | U.S. EPA, 1983; OTA, 1995
a26 Accountability and liability | Is there a fair and just system for addressing product or trial failures with appropriate compensation to affected parties and/or environmental remediation? | Davies, 2007
a27 Education of decisionmakers, stakeholders | To what extent does the system make efforts to educate the interested and affected parties, as well as the decisionmakers? | Emanuel et al., 2004; Davies, 2007
a28 Informed consent | To what extent does the system supply the amount and type of information so that people can make informed decisions about what they will accept? | Fischoff et al., 1981; Slovic, 1987; Jasanoff, 1990, 2005; Frewer et al., 1996, 2004; NRC, 1996; Beauchamp & Walters, 1999; Siegrist, 2000; Cobb & Macoubrie, 2004, 2005; Siegrist et al., 2007a, 2007b
a29 International harmonization | How well does the system match up with other systems around the world? | Newell, 2003

Evolution
e1 Extent of change in attributes | To what extent has the system changed over time? | OTA, 1995
e2 Distinguishable periods of change | Can separable periods in the oversight history be distinguished? |
e3 Extent of change in attributes | To what extent have the system's attributes changed over time? | OTA, 1995
e4 Change in stakeholder satisfaction | To what extent have stakeholder opinions changed during the evolution of the system? |
e5 Public confidence | To what extent have public opinions changed during the evolution of the system? | OTA, 1995

Outcomes
o1 Product safety | What is the number of adverse reports compared to the number of approvals? | OTA, 1995
o2 Time and costs for market approval | How long does it take and how much does it cost for approval? | U.S. EPA, 1983; OTA, 1995; Emanuel et al., 2004
o3 Recalls | What is the number of recalls compared to the number of approvals? | OTA, 1995; Davies, 2007
o4 Stakeholder satisfaction | How well do stakeholders and experts regard the system? | Beauchamp & Walters, 1999
o5 Public confidence | What do the public or citizens think about the system? How about disadvantaged, special, or susceptible populations? | Porter, 1991; Rabino, 1994; OTA, 1995; Frewer et al., 1996; Siegrist, 2000; Macoubrie, 2005, 2006
o6 Effects on social groups | Are the net effects of approvals positively affecting the vast majority of social groups? | Jasanoff, 2005
o7 Cultural effects | Are the net effects of approvals positively affecting people and their cultures? | OTA, 1995; Beauchamp & Walters, 1999; Jasanoff, 2005
o8 Research impacts | Has the system enhanced and supported research either on environmental health and safety or in the development of products? | Porter, 1991; OTA, 1995; Jaffe & Palmer, 1997; Ogus, 2002
o9 Innovation | Has the system led to more innovation in the field or stifled it? | U.S. EPA, 1983; Greene, 1990; Porter, 1991; OTA, 1995; Cole & Grossman, 1999; Rabino, 1994; Jaffe & Palmer, 1997; Siegrist, 2000; Ogus, 2002; Macoubrie, 2005, 2006
o10 Health | Does the oversight system impact health in positive ways? | U.S. EPA, 1983; White House, 1993, as amended, 2007; Beauchamp & Walters, 1999
o11 Distributional health impacts | Are the health impacts equitably distributed? Is there an inequitable impact on specific social or disadvantaged groups? | U.S. EPA, 1983; White House, 1993; OTA, 1995; Beauchamp & Walters, 1999
o12 Environmental impacts | Does the oversight system impact the environment in positive ways? | U.S. EPA, 1983; Greene, 1990; White House, 1993; OTA, 1995; Cole & Grossman, 1999
o13 Nonindustry economic impacts | How does the system impact nonindustry stakeholder groups economically? | U.S. EPA, 1983; OTA, 1995; Beauchamp & Walters, 1999; Jasanoff, 2005
o14 Effects on big corporations | How are big companies doing, financially and otherwise, as a result of the system? | U.S. EPA, 1983; Porter, 1991; Rabino, 1994; OTA, 1995; Jaffe & Palmer, 1997; Siegrist, 2000; Ogus, 2002; Newell, 2003; Macoubrie, 2005, 2006
o15 Effects on small- to medium-sized enterprises (SMEs) | Are SMEs disadvantaged as a result of the oversight system? Are they suffering? | U.S. EPA, 1983; Porter, 1991; Rabino, 1994; OTA, 1995; Jaffe & Palmer, 1997; Siegrist, 2000; Ogus, 2002; Newell, 2003; Macoubrie, 2005, 2006
o16 Economic development | Does the approval of the products or trials improve the overall economic situation of the nation? | U.S. EPA, 1983; Porter, 1991; Rabino, 1994; OTA, 1995; Jaffe & Palmer, 1997; Siegrist, 2000; Ogus, 2002; Macoubrie, 2005, 2006
o17 Global competitiveness for the United States | Does the oversight system disadvantage the United States in the global marketplace? | U.S. EPA, 1983; OTA, 1995; Newell, 2003
o18 Distributional economic impacts | Does the approval of the products or trials improve the economic situation of rural or developing world citizens? | U.S. EPA, 1983; Porter, 1991; OTA, 1995; Jaffe & Palmer, 1997; Ogus, 2002
o19 Proposals for change | Have there been proposals for change resulting from the oversight system? | OTA, 1995

Note: Examples of supporting literature are listed for most of the initial 66 criteria. References to authors as supporting literature reflect our interpretation of that literature. Although the authors of the literature referenced may not have explicitly stated that particular criterion for evaluation of an oversight system, their work supports the importance of that criterion.

Economic criteria for evaluating oversight systems are prominent in the literature. For example, in the United States, the federal government often formally evaluates oversight systems based on costs and benefits through regulatory impact assessment (RIA) and economic analyses (U.S. EPA, 1983, 2000). Oversight systems often originate with statutory systems that are then detailed and implemented by regulatory agencies by formal notice and comment rule-making. RIA and economic analyses focus on the benefits and costs of proposed rules and decisions made under those regulatory systems (U.S. EPA, 1983, 2000). Proposed rules often are key aspects or implementations of oversight systems. Thus, cost effectiveness is embedded in several of our criteria, particularly those related to the attributes and outcomes of the system (Table I and Appendix A,³ e.g., a25, a2, a13, o2, o9, o13, o14, o15, o16, o17, o18). Executive Order 12,866 suggests somewhat broader criteria, requiring that every new regulation be subjected not only to a cost-benefit test, but also that analysis include: (1) evaluation of the adverse side effects of regulations on health and the environment, (2) qualitative assessment of distributional impacts, and (3) assurances of transparency (White House, 1993, as amended, 2007). Based on these government documents, we included transparency in both the development and execution of oversight systems (d10, a22), the consideration of distributional health impacts (o11), and health and environmental impacts (o10, o12).

³ The letter-number combinations in parentheses refer to criteria in Table I and Appendix A throughout this section.

Other criteria for the analysis of oversight systems have been described as part of MCDA (Morgan
& Henrion, 1990a; Linkov et al., 2006, 2007a, 2007b), but deal more broadly with the system as a whole as opposed to particular decisions (e.g., OTA, 1995; Davies, 2007). The Office of Technology Assessment used seven criteria to assess regulatory and nonregulatory environmental policy tools and these appear in similar forms in our criteria list (OTA, 1995): cost-effectiveness and fairness (a21, a25, o2, o11, o12, o13, o14, o15, o18), minimal demands on government (d11, a9, a16, a25, o18), assurance to the public that environmental goals will be met (e5, o1, o5, o5, o12), prevention of hazards and exposure when possible (a4, a6, d3), consideration of environmental equity and justice issues (a21, o7, o11, o12), adaptation to change (a15, a10, e1, e3, o19), and encouragement of technology innovation and diffusion (o1, o3, o8, o9, o14, o15, o16, o17, o18).

The criteria we chose also overlap with many of the criteria that Davies (2007) suggests for oversight of nanotechnology by the EPA: incentives for industry to do long-term testing (a1, a3, a4, a5, a7, a12, a26), monitoring capabilities (a10, a12), legal authority (d3, a1), empirical basis (d13, a5), resources for agencies (d4, d11, d12, a16, a27), clarity of materials to be regulated (d2, a1), recall authority (a4, a10, a12, o3), burden of proof on manufacturers (a2, a6, a26), data requirements (a2), prohibition of marketing (a1, a6), timely adverse events reporting (a10, a12), transparency in safety review (a8, a22), incentives for risk research (a2, a7, o8), proper institutional structures (d6, a9, a14), power relationships and impacts on oversight (a14, a17, a23), and political will (d1).

Our oversight criteria also address the fact that oversight systems can affect the competitiveness of the nation, particularly in the context of trade and World Trade Organization (WTO) agreements (Newell, 2003). Trade can be affected in positive or negative ways due to different standards or testing requirements. For example, U.S. grain exporters have lost hundreds of millions of dollars in trade with the European Union (EU) because U.S. varieties of genetically engineered food crops not approved in the EU are not segregated from other varieties in the United States (Paarlberg, 2002). These broader economic considerations and international harmonization of oversight were also included in our initial list of 66 criteria (a29, o14, o15, o17). Some analysts have hypothesized that mandatory regulatory systems with clear standards can foster innovation, ultimately improving the economic performance of firms (Porter, 1991; Jaffe & Palmer, 1997; Ogus, 2002). Yet, other studies indicate that regulations can decrease research productivity, particularly for smaller firms (Thomas, 1990). In our criteria, the legal grounding (d3), stringency of the system (a4), institutional structure (a9), economic impacts (o14, o15, o16, o18), and effects on research and innovation (o8, o9) were included to explore these relationships in our historical case studies. There is also evidence that mandatory systems lead to better attributes and outcomes as far as compliance, innovation, and environmental impacts (e.g., Greene, 1990; Cole & Grossman, 1999). Criteria (a1, a2, a4, a6, o9, o12) were included to explore these relationships as well.

Several criteria relating to what oversight features citizens believe to be important were derived from the public engagement and risk perception literature. That literature shows that citizens may appreciate transparency (d10, a22), exercising rights to know and choose (a22, a28), opportunities for meaningful input not limited to the quantitative risk (d8, d9, a18, a19, a20), and mandatory requirements for safety testing and regulation (d3, d4, a1, a2, a3, a4, a5, a6) (Fischoff et al., 1981; Slovic, 1987; Jasanoff, 1990, 2005; Frewer et al., 1996; NRC, 1996; Siegrist, 2000; Cobb & Macoubrie, 2004; Frewer et al., 2004; Macoubrie, 2005; Siegrist et al., 2007a, 2007b). Rigorous oversight can foster consumer or public confidence and trust (o5) and ultimately the success of beneficial technologies (o9, o14, o15, o16) (Porter, 1991; Rabino, 1994; Siegrist, 2000; Macoubrie, 2005, 2006).

Other criteria were based on the social science, ethics, and science and technology studies literature. For example, impacts from oversight systems for genetically engineered organisms have been documented to include changes in industry structure, farmer relationships, and cultural systems (o6, o7, o13) (Jasanoff, 2005). Power relationships and trust are influenced by the treatment of intellectual property (a8), the involvement of industry in decision-making and safety testing (d5, a23), and whether there are opportunities for wider public input (d8, d7, d9, a18, a19, a20). These factors may affect the public legitimacy of decisions made about new technological products (e.g., Einsiedel & Goldenberg, 2004; Stewart & McLean, 2004; Thompson, 2007). Confidential business information (CBI) and treatment of intellectual property (a8) have affected the ability of stakeholders and scientists outside of industry to
access information about technological products before, during, and after regulatory review; thus, we include transparency among the criteria (a22) (PIFB, 2003a; NRC, 2000). Transparency has been proposed as a precondition to public trust and confidence (o5), although it is not itself sufficient for trust (Frewer et al., 1996). Also, transparency and public consultation (d7–d9; a18–a20) enhance the credibility of oversight systems if input is considered carefully by decisionmakers and not ignored (e.g., a21) (Jasanoff, 1990). Cash et al. (2003) suggest that the salience, credibility, and legitimacy of information produced from systems for managing boundaries between knowledge creation and action (like oversight systems) are enhanced by communication, mediation, and translation among decisionmakers, the public, and stakeholders during decision making. Several of our criteria relate to their ideas, such as the inclusion of diverse stakeholders and opportunities for public input at key junctures in oversight systems (d7–d10, a14, a17–a22, a27–a28).

The ethics literature is reflected in principles of equity, justice, rights to know and choose, and beneficence or the minimization of harm (d9, a20, a21, a22, a28, o4, o7, o11, o13, o10, o13) (Beauchamp & Walters, 1999). For clinical trial oversight systems, the ability to address major ethical issues (d9, a20), do so in a timely manner (o2), manage conflicts of interest (a23), educate decisionmakers (d12, a27), provide sufficient resources (d11), report adverse events in a timely fashion (a12), and conduct formal assessments (a10) have been identified as key attributes (Emanuel et al., 2004).

Criteria within a category and among the four categories (i.e., development, attributes, evolution, and outcome) are not mutually exclusive. Given our approach to capture the evolution, operation, and adaptation of systems, there is some overlap in our list (Table I). For example, economic development outcomes (o16) cannot be separated from effects on large corporations (o14). Similarly, health (o10) and environmental impacts (o12) often cannot be fully distinguished (i.e., environment affects human health). A given criterion may be reflected in more than one category. For example, transparency appears both in the development and attributes category, reflecting its importance both in establishing oversight systems and in making particular decisions about products or applications of technologies (d10, a22).

4. EXPERT AND STAKEHOLDER ELICITATION FOR IDENTIFYING KEY CRITERIA

The 66 criteria described in the previous section were too numerous to be analytically tractable for future work on historical analysis of oversight for emerging technologies. Thus, we assembled a panel of experts and stakeholders as a Working Group to seek their input and consensus on criteria to be used in the six historical case studies (Fig. 1). The 12 Working Group members, by disciplinary background, expertise, and type of affiliation, respectively, included: cell biology, nanobiotechnology, academe; health policy, law, academe; medicine, biochemistry, small industry; business, food, large industry; applied economics, regulation, academe; environmental law, academe; regulatory policy, law, consumer organization; toxicology, public policy, nongovernmental organization (NGO); environmental policy, sociology, academe; science communication, sociology, academe; mechanical engineering, nanoparticles, academe; and engineering and public policy, environmental policy, academe. The Working Group agreed that it was necessary to refine the number of criteria to a manageable set.

The members of the Working Group all met several well-established conditions to qualify as "experts." These conditions include substantive contributions to the scientific literature (Wolff et al., 1990), status in the scientific community, membership on editorial committees of key journals (Siegel et al., 1990; Evans et al., 1994), membership on advisory boards, and peer nomination (Hawkins & Evans, 1989). The Working Group provided a variety and balance of institutional perspectives. Some members represent stakeholder groups that are interested in or affected by historical models of oversight or nanotechnology oversight. Most members have had extensive experience with oversight systems and federal regulatory frameworks in one or more of the six areas and all have had some experience with them.

The Working Group was assembled, presented with the criteria list derived from the literature (Table I; Appendix A available online), and asked to arrive at consensus on what criteria were important for oversight assessment. The derivation of consensus among the panel members was approached from two complementary angles: behavioral and mathematical. Behavioral approaches
generally rely on psychological factors and interactions among experts. Behavioral approaches are the dominant means of achieving consensus in bioethics, law, and public policy groups. In bioethics, for example, multidisciplinary dialogical consensus-building is standard, and federal and state committees and professional societies have long used this method to generate consensus. Moreno (2004) describes a consensus process that has worked successfully in addressing bioethical and policy problems in which members of the group approach the issues with openness, analyze the problem from a range of perspectives (usually ethical, legal, policy, scientific, and medical), articulate the arguments in favor of alternative positions, and work toward agreement. During the course of a two-day meeting with our Working Group, this process was used and aided substantially by our prior analysis of the literature and synthesis of candidate criteria.

Mathematical schemes designate a functional aggregation rule that accepts inputs from each expert and returns an arbitrated consensus (Winkler, 1968, 1986; Genest & Zidek, 1986). Expert elicitation has been typically used to estimate uncertain quantities (Morgan & Henrion, 1990b). There is not one "best" way to conduct an expert elicitation; however, attributes of good protocols include flexibility in approach, introduction of the expert to the general task of elicitation, focus on the subject matter to be judged, and good definition of the quantity (or in this case, the oversight criteria) that is to be elicited (Morgan & Henrion, 1990b). For our quantitative approach, we used a version of expert elicitation with the goal of gaining empirical information about what criteria are important for oversight assessment. We followed these principles by remaining flexible in incorporating feedback from the Working Group members up to the elicitation; spending a day prior to the elicitation to give background on the subject matter (e.g., reviewing the six historical case studies of oversight and emerging issues in nanotechnology oversight); providing a primer on expert elicitation before the exercise; and defining each criterion with not only a description of what it is, but also an example interpretation of that criterion and guiding question to help with the ranking of it (Appendix A).

Behaviorally derived agreements often suffer from problems of personality and group dynamics. Mathematical approaches avoid these problems but introduce their own set, as numerically dictated compromises may be universally unsatisfactory. We chose to use both types of approaches to strengthen the quality of the input from the Working Group. The behavioral approach was used to make adjustments to criteria, add or reword criteria, and glean general principles of good oversight. The mathematical approach involved a quantitative expert elicitation process whereby the Working Group members were asked to assign values, or probabilities, indicating how important each criterion was for the evaluation of oversight models (Appendix A).

For the elicitation, we asked each member to assess the importance of each criterion for oversight assessment. The question of "How important is it to consider this criterion in our oversight case studies?" was posed. The members were asked to rank the importance of each criterion for oversight assessment, based on their experience and knowledge, on a scale from 0 to 100 with the option of referring to qualitative descriptions of different probability levels. These levels included: Certain (100); Near Certain (80–99); Probable, Likely, We Believe (60–80); Even Chance (40–60); Less than an Even Chance (20–40); Improbable, Probably Not, Unlikely, Near Impossibility (1–20); Impossible (0). Twelve members of the Working Group participated in the exercise. STATA and Excel software were used to analyze the results from the elicitation. A subsequent data report included summaries of responses as histograms as well as means and median values and standard deviations for each criterion (Table II).

Following the elicitation exercise, we used both the quantitative results from it and behavioral consensus approaches with the authors and Working Group to derive a streamlined set of key criteria for oversight assessment. Project staff and the Working Group had initially agreed upon a target number of approximately 20 criteria for evaluations of the six case studies. The Working Group believed that this number would reduce the list of criteria to a manageable level for analysis while retaining a good degree of breadth and coverage. Thus, we chose a cut-off score from the expert elicitation that would reduce the number of criteria to approximately 20. We selected criteria for which over eight of the members (>70%) gave a score of at least 70 (out of 100). This dropped 42 criteria from the list, with 24 remaining (Table II).
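A minimal sketch of the summary and cut-off calculation described above is shown below: per-criterion means, medians, and standard deviations, plus the retention rule that more than eight of the 12 members must score a criterion at 70 or higher. The scores in the sketch are fabricated placeholders, not the Working Group's actual responses reported in Table II.

```python
# Minimal sketch of the elicitation summary and consensus cut-off described in
# the text. Scores are fabricated placeholders (12 ratings per criterion, 0-100),
# not the Working Group's actual responses.
from statistics import mean, median, stdev

elicitation = {
    "d1 Impetus": [80, 85, 90, 70, 75, 95, 80, 85, 60, 90, 75, 80],
    "a23 Conflict of interest": [90, 85, 95, 80, 90, 85, 75, 90, 85, 95, 80, 88],
    "o14 Effects on big corporations": [40, 55, 70, 30, 60, 45, 80, 50, 65, 35, 60, 55],
}

def summarize(scores):
    """Mean, median, and standard deviation for one criterion's ratings."""
    return {"mean": round(mean(scores)), "median": round(median(scores)), "sd": round(stdev(scores))}

def retained(scores, min_raters=9, min_score=70):
    # Cut-off from the text: more than eight of the 12 members (i.e., at least
    # nine, >70%) must rate the criterion 70 or higher for it to be retained.
    return sum(s >= min_score for s in scores) >= min_raters

for criterion, scores in elicitation.items():
    stats = summarize(scores)
    status = "retained" if retained(scores) else "dropped"
    print(f"{criterion}: mean={stats['mean']} median={stats['median']} sd={stats['sd']} -> {status}")
```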

Table II. Criteria Analysis: Mean and Median Values and Table II. Continued.
Standard Deviation for Each Criterion after Expert Ranking
Mean Median SD
Development Mean Median SD
Outcomes
Development o1 Product safety 71 78 26
d1 Impetus 80 83 12 o2 Time and costs for market 69 75 25
d2 Clarity of technological subject 76 80 18 approval
matter o3 Recalls 57 62 28
d3 Legal grounding 69 70 16 o4 Stakeholder satisfaction 70 70 15
d4 Federal authority 70 77 19 o5 Public confidence 76 80 18
d5 Industry authority 65 75 26 o6 Effects on social groups 57 60 18
d6 Loci 62 60 27 o7 Social, ethical, and cultural effects 63 60 21
d7 Stakeholder input 81 88 16 o8 Research impacts 86 90 13
d8 Breadth of input 71 67 16 o9 Innovation 74 85 28
d9 Opportunity for value 59 57 17 o10 Health 85 88 12
discussions o11 Distributional health impacts 80 82 14
d10 Transparency 80 85 18 o12 Environmental impacts 82 84 15
d11 Financial resources 69 75 20 o13 Nonindustry economic impacts 75 78 18
d12 Personnel education and training 66 73 23 o14 Effects on big corporations 56 62 28
d13 Empirical basis 69 78 23 o15 Effects on small- to medium-sized 58 68 29
Attributes enterprises
a1 Legal grounding 74 78 17 o16 Economic development 61 68 25
a2 Data requirement 82 84 10 o17 Global competitiveness for the 64 65 20
a3 Treatment of uncertainty 81 72 16 United States
a4 Stringency of system 78 83 15 o18 Distributional economic impacts 61 62 20
a5 Empirical basis 82 84 11 o19 Proposals for change 72 70 16
a6 Compliance and enforcement 83 90 13
a7 Incentives 73 84 21 Note: Gray boxes refer to criteria that were eliminated according
a8 Treatment of intellectual 74 80 22 to consensus cut-off scores: over eight experts (>70%) rating the
property and proprietary criterion as 70 or higher.
information
a9 Institutional structure 62 68 23
a10 Feedback loop 69 75 22 confidence in oversight was rated highly by the
a11 Formal assessment 63 70 24 Working Group as an important outcome of over-
a12 Postmarket monitoring 76 82 19 sight systems. However, criteria associated with eco-
a13 Industry navigation 71 78 22 nomic impacts on industry ranked lower than we ex-
a14 Actors involved 70 75 17
a15 Flexibility 76 81 15
pected from our expert and stakeholder group, which
a16 Capacity 79 81 13 contained members from corporations and academic
a17 Relationship among actors 63 65 24 researchers who work with industry to develop new
a18 Stakeholder input 70 77 23 technological products or applications. While the lit-
a19 Breadth of input 59 60 28 erature reflects an emphasis on the need for over-
a20 Opportunities for value discussion 53 50 23
sight systems to reduce burdens on developers of
a21 Consideration of fairness 74 78 17
a22 Transparency 82 88 17 products or applications (e.g., OTA, 1995; IRGC,
a23 Conflict of interest 86 90 9 2006), this group rated economic outcomes of over-
a24 Conflict of views 70 72 20 sight lower than most other criteria (Table II). This
a25 Economic costs and benefits 66 74 26 result could reflect different industry viewpoints in
considered
our Working Group with respect to the community at
a26 Accountability and liability 72 72 13
a27 Education of decisionmakers, 65 70 21 large.
stakeholders To derive a final list of criteria, we used a be-
a28 Informed consent 82 88 14 havioral consensus approach to combine the quan-
a29 International harmonization 66 68 23 titative elicitation results with our knowledge of the
Evolution
literature and qualitative Working Group input. We
e1 Extent of change in attributes 59 77 30
e2 Distinguishable periods of change 48 50 30 examined the quantitative results for each criterion
e3 Extent of change in attributes 58 62 21 carefully. Recognizing the imperfections of expert
e4 Change in stakeholder satisfaction 60 62 22 elicitation and mathematical approaches to consen-
e5 Public confidence 68 70 15 sus, we reinstated and combined a few criteria, and
(Continued) revised the description of several based on feedback
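To make the retention rule concrete, the sketch below applies a cut-off of this kind to raw elicitation scores and reports the same summary statistics as Table II. It is a minimal illustration in Python: the criterion labels and score lists are placeholders, and the helper name passes_cutoff is ours, not part of the study protocol.

```python
from statistics import mean, median, stdev

def passes_cutoff(member_scores, min_score=70, min_fraction=0.70):
    """Retain a criterion if more than min_fraction of members scored it at least min_score."""
    high = sum(1 for s in member_scores if s >= min_score)
    return high / len(member_scores) > min_fraction

# Placeholder scores (0-100), one per Working Group member; not the study's raw data.
elicitation_scores = {
    "a6 Compliance and enforcement": [90, 88, 95, 70, 75, 92, 85, 80, 72, 86, 60, 78],
    "o3 Recalls": [40, 62, 70, 55, 30, 75, 62, 48, 90, 35, 66, 50],
}

for criterion, scores in elicitation_scores.items():
    verdict = "retained" if passes_cutoff(scores) else "dropped"
    print(f"{criterion}: mean={mean(scores):.0f}, median={median(scores):.0f}, "
          f"SD={stdev(scores):.0f} -> {verdict}")
```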
Table III. Final Set of Criteria

Description and Guiding Question(s)

Development: 7 Criteria (D)
1. Impetus Historical context and driving forces behind the system of oversight or the reasons for developing
the basic framework for the oversight system. The guiding question here is: “What were the
driving forces?” Examples could include intense public or legal pressure, key technological
developments, or a response to an adverse event (reactive); or emerging concerns about the
potential risks and benefits prior to legal or public pressure, technological developments, or any
release or adverse event (proactive).
2. Clarity of technological subject matter  Clarity or definition of the technologies, processes, and products to be overseen. The guiding question here is: "Are the technologies, processes, and products to be overseen well-defined?"
Examples could include that the technologies, processes, and products as the subject of oversight
are unclear or ill defined (not clear) or that they are well defined and it is clear what falls into
oversight (clear).
3. Legal grounding Basis for development and the clarity of the statutes or rules for implementing the newly developed
framework and achieving its goals. The guiding questions are: “How explicit are the statutes or
rules on which the oversight framework is based? Is it clear that the decisionmakers in the
framework have legal authority for the actions they proposed? Is there grounding to existing
laws?” Examples could be that there is considerable ambiguity and room for interpretation in
executing the policies in the oversight framework (weak) or that there is little ambiguity about
what the agencies can or cannot do (strong).
4. Public input Inputs shaping the creation of the oversight system, and the extent of opportunities for engaged
stakeholders including nongovernmental organizations, trade associations, academics, industry,
and other affected groups, to provide input into the development of the initial framework. This
includes input on both scientific questions, as well as values questions (social, cultural, and
ethical). The guiding question is: “Was there a process or opportunities for stakeholders to
contribute to discussions or decisions about the basis of the system, how it operates, or how it is
structured?” Examples could be that the government or overseers (in the case of a voluntary
system) made decisions largely in the absence of stakeholder input (minimal) or that the
overseers made decisions based on a formal process above and beyond Federal Register notices for
soliciting input from stakeholders (significant).
5. Transparency Extent to which interested parties could obtain information about decisions during the development
of the framework. The guiding question is: “Were options that the agencies or other
decision-making bodies were considering known to the public? Were studies about the pros and
cons of these options available?" Examples include that decisionmakers laid out options and
studies that compared and contrasted them to the public prior to completing the framework (high)
or the framework was published as a draft in the Federal Register, but the process for arriving at
the framework remained unknown to interested and affected parties (low).
6. Financial resources Funding and resources and the amount of money allocated to the development of the oversight. The
guiding question is: “How sufficient were the funds provided to the developers of the
framework?” Examples include that no money was set aside for oversight development (not at all)
or ample funds were available for oversight development (sufficient).
7. Empirical basis Empirical basis for development of the oversight system, including the amount and quality of
evidence (scientific, risk, benefit, or social impact studies) used. The guiding question is: “To what
extent was scientific or other objective evidence used in designing the review or oversight process
central to the framework?” Examples include that there was a body of evidence used to assess the
important features of data submissions, clinical studies, etc., that would be required in the general
framework (strong basis) or that during the development of the framework there was little to no
information available on the nature or extent of the risks and benefits, or other impacts of
products or processes, such that qualitative speculation or predictions were used to generate the
framework for oversight (weak basis).
Attributes: 15 Criteria (A)
8. Legal basis Legal and policy structure, that is, the clarity of the statutes or rules for implementing the specific
decisions for processes, trials, research, or products within the oversight framework and achieving
its goals. The guiding questions are: “How explicit are the statutes or rules on which specific
decisions within the oversight framework are based? Is it clear that the decisionmakers in the
framework have legal authority for the actions they propose?” Examples include that there is little
ambiguity about what the agencies can or cannot do in the context of a specific application or
product (strong) or there is considerable ambiguity and room for interpretation in executing
specific decisions (weak).
9. Data requirements and stringency  Extent to which empirical studies are submitted prior to market approval, release, or clinical trials, and whether there is adequate legal authority to require data. The guiding questions are: "How
comprehensive are the safety and other studies required for submittal to authorities? If the system
is voluntary, how comprehensive are data generated and are they available for review prior to
decisions about release or approval? How much regulatory authority is there for requesting new
data?” Examples include that a letter is submitted that describes the composition of the product
and there is little authority to request more data and assure compliance (weak) or that a battery of
safety studies that extensively address environmental and human health risks are required and
backed up with adequate regulatory authority to assure compliance (strong).
10. Postmarket monitoring Systematic monitoring for adverse or beneficial events after the product is released or trials begin.
The guiding question is: “Is there a science-based and systematic process for detecting risks and
benefits after commercial release, or field or clinical trials?” Examples include that once the
product is released or trials begin, there is no monitoring for adverse events except anecdotally
(little) or that there is an extensive system for reporting potential adverse events and
consolidating and evaluating them after trials begin or a product is released (extensive).
11. Treatment of uncertainty Reporting ranges of possible values or studies in data that are submitted, whether subpopulations
are considered, acknowledgment of areas for which little scientific information is available, and
recognition that the hazards may not be well categorized. The guiding question is: “Is uncertainty
accounted for qualitatively or quantitatively in data and study submissions?” Examples include
that the risk analyses on which decisions are based use uncertainty modeling, account for
subpopulations, and qualitatively describe what is unknown (extensive) or point estimates are
used based on population averages and narratives of sources of uncertainty are omitted (limited).
12. Empirical basis Amount and quality of evidence (scientific, risk, benefit) used for particular decisions. The guiding
question is: “To what extent was scientific or other objective evidence used in making decisions
about specific products, processes, or trials?” Examples include that high-quality, extensive
evidence on safety is required for product submissions, clinical studies, or field trials (strong basis)
or low-quality minimal evidence is required for making decisions (weak basis).
13. Compliance and enforcement  Programs and procedures in place to ensure compliance with the oversight process and, in cases where there is a lack of compliance, to ensure that consequences and corrections will result. The guiding
question is: “To what extent does the system ensure compliance with legal and other requirements
and can prosecute or penalize noncompliance?” Examples include that there is little compliance
or enforcement of requirements (weak) or there is much compliance and enforcement with
requirements (strong).
14. Incentives Incentives, financial or otherwise, for compliance with system requirements. The guiding question is:
“Are the stakeholders in the system encouraged to abide by the requirements of the system?”
Examples include that there is no incentive structure for compliance (few) or there are many
incentives for compliance in the oversight system, beyond product or trial approval (many).
15. Treatment of intellectual property and proprietary information  Treatment of intellectual property and confidential business information. The guiding questions include: "How does confidential information get treated in applications for approval? How does intellectual property factor in?" Examples include that decisionmakers share with the public
business information and intellectual property is dealt with in an adequate way (high) or business
information is considered confidential and not shared with the public and intellectual property is
not dealt with in an adequate way (low).
16. Institutional structure Type of structure of the framework with regard to the number and/or complexity of the actors
involved, most notably federal agencies. The guiding question is: “How many agencies or entities
with legal authority are involved in the process of decision making within the framework?”
Examples include that there is a single authority with a simple and concentrated procedure
(simple) or that there are multiple authorities, with a complexity of overlap or potential gaps
(complex).
17. Flexibility Ability for the framework to be flexible in unique or urgent situations or when new information is
obtained. Guiding questions are: “Can products or trials undergo expedited review when
appropriate? Can products be withdrawn or trials easily stopped when information on potential
risks is presented?” Examples include that the system is rigid in the sense that there is only one
option or path and it is difficult to change this pattern with new information (low) or the system
provides for numerous ways to account for unique and emerging situations (high).
18. Capacity  Resources of the system, whether expertise, personnel, or financial, to appropriately handle decisions. The guiding question is: "Is the system well-prepared and equipped to deal with the approvals of trials, products, or processes?" Examples include that agency staff are stretched thin and do not have time to do a good job with specific decisions (inadequate) or that agency staff are provided with the resources, expertise, and time to give proper and high-quality attention to the process (adequate).
19. Public input Extent of opportunities for engaged stakeholders (nongovernmental organizations, trade
associations, academics, industry, citizen groups, and other affected groups) to provide input into
specific or categories of decisions before they are made or during the process. The guiding
question is: “Is there a process or opportunities for stakeholders to contribute to discussions or
decisions about whether certain products, processes, or trials should be approved?” Examples
include that the government or overseers (in the case of a voluntary system) make decisions
largely in the absence of stakeholders (minimal) or that the overseers make decisions based on a
formal process, above and beyond notice of and comments on rule-making, for soliciting input
from stakeholders (significant).
20. Transparency Extent to which interested parties can obtain information about decisions during particular decisions
that are being made within the oversight framework. The guiding questions are: “Are options that
agencies or other decision-making bodies are considering known to the public? Are studies about
the pros and cons of these options available? Is the process for how decisions are made clearly
articulated to interested parties?” Examples include that decisionmakers divulge the processes
and authorities for review, options for, and studies about particular products or events as they are
being considered and it is easy for citizens to track the process for and basis of decisions (high) or
decisions are published in the Federal Register, but it is difficult to figure out how, when, and by
what criteria products, processes, or trials are reviewed (low).
21. Conflicts of interest Ability of the system to ensure that conflicts of interest do not affect judgment. Guiding questions
are: “Do independent experts conduct or review safety studies? Are conflicts of interest disclosed
routinely?” Examples include that there is no disclosure of conflicts of interest and industry
largely conducts studies on its own products without external review except by agency staff (prominent), or every possible effort is made to avoid or disclose conflicts of interest (avoided).
22. Informed consent Stakeholders’, patients’, research participants’, or the public’s ability to know, understand, and
choose their exposure or the level of risk they accept. The guiding question is: “To what extent
does the system supply the amount and type of information so that people can make informed
decisions about what they will accept?” Examples include that the public has little information
about whether it is exposed, consuming the product, or subject to certain risks (little) or the public
is meaningfully informed about its exposure and the risks (extensive).
Evolution: 1 Criterion (E)
23. Extent of change in attributes Extent of change to the system over time. The guiding question is: “To what extent has the system
changed over time?" Examples include that there was no change at all (none) or that there were
significant structural changes (extensive). Change can indicate appropriate evolution of the system
based on new information or in response to adverse events.
Outcomes: 5 Criteria (O)
24. Public confidence Public confidence in the system, including views about product or trial safety and trust in actors.
Guiding question is: “What do diverse citizens and stakeholders think about the system, including
disadvantaged, special, or susceptible populations?” Examples include that there is widespread
fear and mistrust among the public (low) or that there is a general feeling that the oversight
systems and decisionmakers are doing a good job serving individual and multiple interests and
society at large (high).
25. Research & innovation Impacts on science, research, and innovation and whether the oversight system encourages research
and innovation. The guiding question is: “Has the system led to more research and innovation in
the field or stifled it?” Examples include that the oversight system does not stifle research and
innovation, and in fact increases it (positive) or the oversight system stifles research and
innovation in many ways, perhaps due to time delays or cost of approvals (negative).
26. Health and safety Health impacts and whether oversight of the products, processes, or trials leads to impacts on global,
national, or local health and safety. The guiding question is: “Does the oversight system impact
health and safety in positive ways?” Examples include that the oversight of products or processes
is leading to negative health and safety impacts either through delays in approvals (e.g., life-saving
drugs) or through approvals of unsafe products (negative) or that the oversight of products,
processes, or trials is leading to positive health impacts and increased safety (positive).
27. Distributional health impacts How the health risks and benefits resulting from the system are distributed. Guiding questions are:
“Are the health impacts equitably distributed? Is there an inequitable impact on specific social or
disadvantaged groups?” Examples include that health impacts are not justly and equitably
distributed (inequitable) or health impacts are justly and equitably distributed (equitable).
28. Environmental impacts  Whether the oversight of the products or processes leads to impacts on the environment. The guiding
question is: “Does the oversight system impact the environment in positive ways?” Examples
include that the oversight system has resulted in negative impacts on the environment (negative)
or that there have been beneficial impacts on the environment from oversight (positive).
Note: This final set of 28 criteria is being used to evaluate the historical oversight models.
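To indicate how a catalog like Table III might be carried into the case-study evaluations, the sketch below stores criteria with their category codes and guiding questions and attaches a qualitative rating for one historical case. The dataclass, the field names, and the two transcribed entries are our illustrative assumptions; they are not part of the published survey instrument.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Criterion:
    number: int        # 1-28, as listed in Table III
    category: str      # "D" development, "A" attributes, "E" evolution, "O" outcomes
    name: str
    guiding_question: str

# Two entries paraphrased from Table III; the remaining 26 would follow the same pattern.
CRITERIA = [
    Criterion(5, "D", "Transparency",
              "Were options under consideration, and studies about their pros and cons, "
              "known to the public during development of the framework?"),
    Criterion(24, "O", "Public confidence",
              "What do diverse citizens and stakeholders think about the system?"),
]

# Qualitative ratings for one case study, keyed by criterion number; the values here
# are placeholders ("tbd"), not findings from the article.
gene_therapy_ratings = {5: "tbd", 24: "tbd"}

for c in CRITERIA:
    print(f"[{c.category}{c.number}] {c.name}: {gene_therapy_ratings.get(c.number, 'not rated')}")
```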
To derive a final list of criteria, we used a behavioral consensus approach to combine the quantitative elicitation results with our knowledge of the literature and qualitative Working Group input. We examined the quantitative results for each criterion carefully. Recognizing the imperfections of expert elicitation and mathematical approaches to consensus, we reinstated and combined a few criteria, and revised the description of several based on feedback from the Working Group following the ranking exercise. Five criteria that did not make the consensus cutoff (70% of experts over 70), but that we felt were important and that had relatively high means and medians (Table II), were fully reinstated: legal grounding (d3), institutional structure (a9), postmarket monitoring (a12), stakeholder input (a18), and extent of change (e1) (Table II). Impacts on innovation (o9) did not meet the consensus (70% over 70) cutoff, but it also had a relatively high mean and median (74 and 85, respectively) and has been viewed in the literature as an important outcome affected by oversight (e.g., OTA, 1995; Jaffe & Palmer, 1997). Therefore, it was reinstated and combined with impacts on research (o8) to address the overlap between the two. We also merged data requirements (a2) with stringency of the system (a4) to address Working Group input about the similarity between these two.
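Viewed mechanically, this adjustment is a small post-processing step layered on the quantitative cut-off: named criteria are reinstated and overlapping pairs are merged. The sketch below is only our bookkeeping illustration of that step; the identifiers follow Table II, but the function and the placeholder retained set are hypothetical, not the authors' analysis code.

```python
def behavioral_consensus_adjust(retained, reinstated, merges):
    """Apply Working Group adjustments on top of the quantitative cut-off.

    retained   -- ids that met the cut-off (the 24 criteria of Table II)
    reinstated -- ids added back despite missing the cut-off
    merges     -- mapping of absorbed id -> surviving (combined) id
    """
    final = set(retained) | set(reinstated)
    for absorbed, survivor in merges.items():
        final.discard(absorbed)
        final.add(survivor)
    return final

reinstated = {"d3", "a9", "a12", "a18", "e1", "o9"}
merges = {"o9": "o8",  # innovation folded into research impacts
          "a4": "a2"}  # stringency of the system folded into data requirements
retained_after_cutoff = {"d1", "d7", "d10", "a2", "a6", "o8"}  # truncated placeholder; the full set has 24 ids
print(sorted(behavioral_consensus_adjust(retained_after_cutoff, reinstated, merges)))
```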
Twenty-eight criteria remained in our final set (Table III). Seven development criteria were retained that apply to the formal process of developing laws, rules, standards, guidance documents, programs, and policies that relate to the overall process for considering individual products, processes, or clinical trials: impetus, clarity of technological subject matter, legal grounding, public input, transparency, financial resources, and empirical basis. Fifteen attributes criteria were retained that apply to the process, whether formal or informal, of making decisions about specific products, subcategories of products, clinical trials, or other ways in which the framework is implemented: legal basis (formerly legal grounding), data requirements and stringency, postmarket monitoring, treatment of uncertainty, empirical basis, compliance and enforcement, incentives, treatment of intellectual property and proprietary information, institutional structure, flexibility, capacity, public input, transparency, conflicts of interest, and informed consent. One evolution criterion was retained to capture how the system has changed over time and why: extent of change in attributes. Five outcome criteria were retained that apply to the assessment of the impacts of decisions stemming from the oversight framework: public confidence, research and innovation, health and safety, distributional health impacts, and environmental impacts.

5. SYSTEMS APPROACH FOR RELATIONSHIPS AMONG CRITERIA

Oversight systems are complex, and relationships among their attributes, outcomes, and how they develop and change are intricate, dynamic, and involve feedback. As such, hypotheses about what criteria are important for good oversight will be formulated and tested across historical models using a systems approach (Fig. 2) and the final set of criteria (Table III). Systems approaches are useful in cases where mental models (people's understandings of systems) are crucial for analysis given high degrees of complexity, limited empirical information, and multiple types of parameters (Forrester, 1993). It has been suggested that effective methods for learning about complex, dynamic systems include elicitation of participants in the system for their perceptions, creation of maps of the feedback structure of a system from those perceptions, and stronger group
processes (Sterman, 1994). We are employing these strategies through our work to better understand oversight systems for emerging technologies (Fig. 1).

However, with our efforts to avoid oversimplifying oversight systems into linear models, we struggled with whether to place our criteria into categories of "evaluative" versus "descriptive," or "independent" versus "dependent" variables at the outset of our work. Initially, we will consider the outcomes that most people would agree upon as results of good oversight as key dependent variables and evaluative criteria (e.g., the five remaining outcome criteria of public confidence, positive and justly distributed health and environmental impacts, and increased research and innovation). A central question of our approach to assessing oversight systems is whether criteria in the attributes, evolution, and development categories (initially considered as independent variables) positively or negatively impact those key outcome criteria (initially the dependent variables) (Fig. 2, solid arrows). For example, transparency in development or operation of oversight systems (Table III, D5 or A20) is thought to promote public confidence (Table III, O24). In this case, transparency would be considered the independent or descriptive variable and public confidence the dependent or evaluative one.

Fig. 2. Systems approach: types of and relationships among criteria. Criteria were placed into categories of development, attributes, or outcomes of oversight systems, as well as how systems change over time. Relationships among criteria will be explored in future work through the cross-comparisons of historical oversight systems. A systems model with complex interactions among criteria and feedback is depicted. Solid arrows indicate relationships in which outcome criteria are the dependent variables and used for evaluating oversight systems. Dotted arrows indicate relationships between other categories of criteria, which may include independent or dependent variables and evaluative or descriptive criteria. Striped arrows indicate feedback from outcomes to features of oversight systems, and in these cases, outcomes impact dependent variables in other categories of criteria.

However, other relationships among criteria will be explored. Several attributes and development criteria are normatively considered good features of oversight, and these can be used on their own to judge an oversight system. Transparency is thought to be a good feature of oversight (D5, A20) in that it promotes ethical principles of autonomy and "rights to know" (Beauchamp & Walters, 1999). Regarded this way, transparency is an evaluative and independent criterion. Yet, other criteria in development or attributes categories, such as institutional structure (A16), can impact transparency, making transparency a dependent and evaluative variable (Fig. 2, dotted arrows). Furthermore, with feedback, transparency could become a dependent and evaluative variable based upon an outcome criterion (Fig. 2, striped arrows). Therefore, transparency can be placed into multiple categories depending on the relationship being explored.

Additionally, some criteria that seem purely descriptive at this point might turn out to be evaluative after historical cross-comparisons of oversight models (Fig. 1, future work). For example, institutional structure (A16) seems to be a description of an oversight system, and there currently is not sufficient evidence in the literature to determine what type of institutional structure is best for oversight of emerging technologies. However, this criterion might turn out to be correlated with positive outcomes or other evaluative criteria, such as transparency, after our cross-comparisons. If so, a hypothesis about institutional structure and its contributions to good oversight can be generated.
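One simple way a hypothesis of this kind could later be examined, once each of the six historical cases has been scored on the final criteria, is a rank correlation between a descriptive criterion and an evaluative one across the cases. The sketch below assumes ordinal ratings have already been converted to numbers; the 1-5 scale, the variable names, the placeholder values, and the choice of Spearman correlation are our assumptions, not a method specified in the article.

```python
from scipy.stats import spearmanr

# Hypothetical ordinal scores (1 = low/weak ... 5 = high/strong) for the six historical
# case studies, in a fixed order. The values are placeholders, not study results.
institutional_complexity = [2, 4, 3, 3, 5, 1]
transparency = [3, 2, 4, 3, 2, 4]

# With only six cases, this is exploratory: it can suggest a hypothesis about whether
# institutional structure tracks an evaluative criterion, not confirm one.
rho, p_value = spearmanr(institutional_complexity, transparency)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.2f}")
```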
As a result of these complexities and in consultation with the Working Group, we chose not to categorize our initial or final criteria with more resolution than the four categories of development, attributes, evolution, and outcomes at this point. There is precedent in the literature for blending multiple types of criteria in analysis of decisions (Morgan & Henrion, 1990a, p. 51). However, our methodology is unique in the application of this approach to oversight systems and consideration of complexities and feedback in oversight.

6. CONCLUSIONS AND FUTURE WORK

We have developed a broad set of criteria to describe and assess oversight systems for emerging technologies and their applications. We derived these criteria using multidisciplinary methods with careful attention to the multidimensional nature of oversight. As discussed, the criteria are both descriptive and evaluative, addressing principles and features of the system, the evolution and adaptability of the system over time, and outcomes of the system. Our work incorporates a diversity of perspectives on oversight and combines quantitative and qualitative methods. From qualitative analysis of the multidisciplinary literature on oversight, we incorporated what many groups, including experts, stakeholders, and citizens, believe to be important for good oversight. Through the use of quantitative elicitation and consensus methods with our Working Group, we have directly included what those familiar with oversight systems believe to be important for oversight assessment.

The resulting criteria reflect this broad consideration of perspectives and literature. In the final set, criteria range in subject matter from the importance of "sound science" in oversight to the extent of opportunities for public input into the design and execution of oversight. The current outcomes criteria (Table III) are heavily weighted toward health and environmental impacts, which reflects the importance that multiple experts and stakeholders place on the ethical principle of maximizing benefits and minimizing harm (Beauchamp & Walters, 1999). Impacts on research and innovation are also included, and these, in turn, are believed to have wider economic impacts on industry and society (NRC, 2006).

Our work is based on the idea that the design and implementation of oversight for nanotechnology should be schooled by the past successes and failures of oversight systems for related technologies. In our future work aimed at deriving lessons for the oversight of nanotechnology, quantitative expert elicitation and application of the final criteria will continue to complement other prongs of the IOA approach, which include literature reviews about the performances of historical systems, assessment of public opinion about oversight systems from the literature, semi-structured interviews with stakeholders and experts to evaluate oversight systems, and behavioral consensus methods to discuss and debate attributes and outcomes of systems (Fig. 1). We are now using IOA to actively analyze and evaluate six historical case studies that are related to the application of nanotechnology to biological systems: gene therapy, genetically engineered organisms in the food supply, human drugs, medical devices, chemicals in the environment, and chemicals in the workplace. Through cross-comparisons of these historical oversight cases, hypotheses about what oversight features affect certain outcomes will be generated and tested in order to derive principles and lessons for oversight of related nanotechnology applications.

We propose that comparisons across case studies using a consistent set of criteria will result in defensible and evidence-supported lessons for future oversight systems for nanotechnology products (Fig. 1). The final set of criteria embedded within a broader IOA approach will be used to compare relationships among the development, attributes, evolution, and outcomes of oversight systems across historical case studies. Then, several criteria will likely progress from their descriptive role to being useful indicators of the quality of oversight systems and predictors of positive outcomes that satisfy a majority of citizens and stakeholders. For example, we may find that outcomes such as improved human health or environmental quality (outcome criteria in Table III) are consistently correlated with increased public input (attribute criteria in Table III) across the historical case studies.

In summary, IOA blends theory, methods, and ideas from legal, bioethics, and public policy approaches with the practical goals of providing guidance to policymakers, decisionmakers, researchers, industry, patients, research subjects, consumers, and the public at large. Integrating multiple methods and criteria for oversight assessment will appeal to a wide range of stakeholders bringing a range of perspectives to bear. As we begin to apply the criteria to historical models of oversight, we will also be able to assess the degree of agreement and polarization of expert and stakeholder opinion on historical oversight systems, which will be instructive for diagnosing controversy and how it impacts features and outcomes of oversight.
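The degree of agreement or polarization mentioned here can be gauged directly from elicitation scores; a minimal sketch, assuming 0-100 scores per participant for a given criterion and case, is to report the interquartile range together with the share of respondents at the two extremes. The thresholds and the function name are ours, not a measure defined in the article.

```python
from statistics import quantiles

def agreement_profile(scores, low=30, high=70):
    """Summarize the spread of 0-100 scores: interquartile range plus the share of
    respondents at the low and high extremes (a crude indicator of polarization)."""
    q1, _, q3 = quantiles(scores, n=4)
    share_low = sum(1 for s in scores if s <= low) / len(scores)
    share_high = sum(1 for s in scores if s >= high) / len(scores)
    return {"iqr": q3 - q1, "share_low": share_low, "share_high": share_high}

# Placeholder scores for one criterion and one historical case, not actual responses.
print(agreement_profile([85, 80, 20, 90, 15, 75, 88, 25]))
```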
We expect that our multidisciplinary IOA approach could be widely applicable to other emerging technologies, facilitating assessment of current regulatory oversight systems, the identification of possible changes to existing systems, and the design of new ones. We anticipate that this approach will be a valuable tool for analyzing multiple perspectives, features, outcomes, and tradeoffs of oversight systems. Such an approach, incorporating the viewpoints of key disciplines and the perspectives of multiple stakeholders, could help to ameliorate controversy and conflict as new technologies emerge and oversight systems for them are considered and deployed.

ACKNOWLEDGMENTS

This work was supported in part by National Science Foundation NIRT Grant SES-0608791 (Wolf, PI; Kokkoli, Kuzma, Paradise, Ramachandran, Co-PIs). Any opinions, findings, and conclusions or recommendations expressed in this article are those of the authors and do not necessarily reflect the views of the National Science Foundation. The authors would like to thank the Working Group participants: Dan Burk, J.D., M.S.; Steve Ekker, Ph.D.; Susan Foote, J.D.; Robert Hall, J.D.; Robert Hoerr, M.D., Ph.D.; Susanna Hornig Priest, Ph.D.; Terrance Hurley, Ph.D.; Robbin Johnson; Bradley Karkkainen, J.D.; George Kimbrell, J.D.; Andrew Maynard, Ph.D.; Kristen Nelson, Ph.D.; David Norris, Ph.D.; David Y. H. Pui, Ph.D.; T. Andrew Taton, Ph.D.; and Elizabeth J. Wilson, Ph.D., as well as collaborators Efrosini Kokkoli, Ph.D.; Alison W. Tisdale; Rishi Gupta, M.S., J.D.; Pouya Najmaie, M.S.; Gail Mattey Diliberto, J.D.; Peter Kohlhepp; Jae Young Choi; and Joel Larson for their valuable input on the project. Additional contributors to refinement of project methodology include Dave Chittenden; Judy Crane, Ph.D.; Linda Hogle, Ph.D.; William D. Kay, Ph.D.; Maria Powell, Ph.D.; and Michael Tsapatsis, Ph.D. The authors would also like to thank Audrey Boyle for her project management.

APPENDIX A: ELICITATION SURVEY INSTRUMENT

Please see the online appendix.

REFERENCES

Abraham, J. (2002). Regulatory science as culture: Contested two-dimensional values at the US FDA. Science as Culture, 11, 309–335.
Beauchamp, T., & Walters, L. (1999). Ethical theory and bioethics. In T. Beauchamp & L. Walters (Eds.), Contemporary Issues in Bioethics, 5th ed. (pp. 1–32). Belmont, CA: Wadsworth Publishing Company.
Belton, V., & Stewart, T. J. (2002). Multiple Criteria Decision Analysis: An Integrated Approach. Boston: Kluwer Academic Publishers.
Cash, D. W., Clark, W. C., Alcock, F., Dickson, N. M., Eckley, N., Guston, D. H., et al. (2003). Knowledge systems for sustainable development. PNAS, 100(14), 8086–8091.
Cobb, M. D., & Macoubrie, J. (2004). Public perceptions about nanotechnology: Risks, benefits and trust. Journal of Nanoparticle Research, 6, 395–405.
Cole, D. H., & Grossman, P. Z. (1999). When is command and control efficient? Institutions, technology, and the comparative efficiency of alternative regulatory regimes for environmental protection. Wisconsin Law Review, 5, 887.
Davies, C. (2006). Managing the Effects of Nanotechnology. Project on Emerging Nanotechnologies. Washington, DC: PEN 2.
Davies, C. (2007). EPA and Nanotechnology: Oversight for the 21st Century. Project on Emerging Nanotechnologies. Washington, DC: PEN 2.
Einsiedel, E. F., & Goldenberg, L. (2004). Dwarfing the social? Nanotechnology lessons from the biotechnology front. Bulletin of Science, Technology, and Society, 24(1), 28–33.
Emanuel, E., Wood, A., Fleishman, A., Bowen, A., Getz, K. A., Grady, C., et al. (2004). Oversight of human participants research: Identifying problems to evaluate reform proposals. Annals of Internal Medicine, 141, 283–292.
Evans, J. S., Gray, G. M., Sielken, R. L., Smith, A. E., Valdez-Flores, C., & Graham, J. D. (1994). Use of probabilistic expert judgment in distributional analysis of carcinogenic potency. Risk Analysis, 20, 15–36.
Farrow, R. S., Wong, W., Ponce, R. A., Faustman, E. M., & Zerbe, R. O. (2001). Facilitating regulatory design and stakeholder participation: The FERET template with an application to the Clean Air Act. In P. Fichbeck & R. S. Farrow (Eds.), Improving Regulation: Cases in Environment, Health and Safety (Ch. 19). Washington, DC: Resources for the Future Press.
Fischoff, B., Lichtenstein, S., Slovic, P., Derby, S. L., & Keeney, R. L. (1981). Acceptable Risk. Cambridge: Cambridge University Press.
Forrester, J. (1993). System dynamics and the lessons of 35 years. In K. B. D. Greene (Ed.), Systems-Based Approach to Policymaking. Norwell, MA: Kluwer Academic Publishers.
Frewer, L. J., Howard, C., Hedderley, D., & Shepherd, R. (1996). What determines trust in information about food-related risks? Underlying psychological constructs. Risk Analysis, 16(4), 473–486.
Frewer, L., Lassen, J., Kettlitz, B., Scholderer, J., Beekman, V., & Berdal, K. G. (2004). Societal aspects of genetically modified foods. Food and Chemical Toxicology, 42, 1181–1193.
Genest, C., & Zidek, J. V. (1986). Combining probability distributions: A critique and an annotated bibliography. Statistical Science, 1, 114–148.
Greene, D. L. (1990). CAFE or price? An analysis of the effects of federal fuel economy regulations and gasoline price on new car MPG, 1978–1989. Energy Journal, 11, 37–58.
Hawkins, N. C., & Evans, J. S. (1989). Subjective estimation of toluene exposures: A calibration study of industrial hygienists. Applied Industrial Hygiene Journal, 4, 61–68.
International Risk Governance Council. (2006). Survey on Nanotechnology Governance: Volume B. The Role of Industry. Retrieved August 6, 2007 from http://www.irgc.org/spip/IMG/projects/Survey on Nanotechnology Governance - Part B The Role of Industry.pdf.
Jaffe, A. B., & Palmer, K. (1997). Environmental regulation and innovation: A panel data study. Review of Economics and Statistics, 79, 610–619.
Jasanoff, S. (1990). The Fifth Branch: Science Advisors as Policymakers. Cambridge, MA: Harvard University Press.
Jasanoff, S. (2005). Designs on Nature. Princeton: Princeton University Press.
King, A., & Lenox, M. (2000). Prospects for industry self-regulation without sanctions: A study of responsible care in the chemical industry. Academy of Management Journal, 43, 698–716.
Kuzma, J. (2006). Nanotechnology oversight: Just do it. Environmental Law Reporter, 36, 10913–10923.
Kuzma, J. (2007). Moving forward responsibly: Oversight for the nanotechnology-biology interface. Journal of Nanoparticle Research, 9, 165–182.
Lane, N., & Kalil, T. (2005). The national nanotechnology initiative: Present at the creation. Issues in Science and Technology, 21(4), 49–54.
Linkov, I., Satterstrom, F. K., Kiker, G., Seager, T. P., Bridges, T., Gardner, K. H., et al. (2006). Multi-criteria decision analysis: A comprehensive decision approach for management of contaminated sediments. Risk Analysis, 26(1), 61–78.
Linkov, I., Satterstrom, F. K., Steevens, J., Ferguson, E., & Pleus, R. C. (2007b). Multi-criteria decision analysis and environmental risk assessment for nanomaterials. Journal of Nanoparticle Research, 9(4), 543–554.
Linkov, I., Satterstrom, F. K., Tkachuk, A., Seager, T. P., Figueria, J. R., & Tervonen, T. (2007a). A multi-criteria decision analysis approach for priorization of performance metrics. In I. Linkov, G. Kinker, & R. Wenning (Eds.), Environmental Security in Harbors and Coastal Areas. Dordrecht, The Netherlands: Springer.
Macnaghten, P., Kearnes, M. B., & Wynne, B. (2005). Nanotechnology, governance, and public deliberation: What role for the social sciences? Science Communication, 27(2), 268–291.
Macoubrie, J. (2005, September). Informed Public Perceptions of Nanotechnology and Trust in Government. Projects on Emerging Nanotechnologies, Woodrow Wilson International Center for Scholars.
Macoubrie, J. (2006). Nanotechnology: Public concerns, reasoning, and trust in government. Public Understanding of Science, 15, 221–241.
Maynard, A. (2006). Nanotechnology: A Research Strategy for Addressing Risk. Project on Emerging Nanotechnologies.
Moreno, J. D. (2004). Consensus, role and authority of. In S. G. Post (Ed.), Encyclopedia of Bioethics, 3rd ed. (pp. 520–523). New York: Macmillan.
Morgan, G., Fischoff, B., Bostrom, A., & Atman, C. J. (2002). Creating an expert model of the risk. In G. Morgan, B. Fischoff, A. Bostrom, & C. J. Atman (Eds.), Risk Communication: A Mental Models Approach (pp. 34–61). Cambridge: Cambridge University Press.
Morgan, G., & Henrion, M. (1990a). Overview of Quantitative Policy Analysis and Nature and Sources of Uncertainty in Uncertainty (pp. 16–72). Cambridge: Cambridge University Press.
Morgan, G., & Henrion, M. (1990b). Performing Probability Assessment in Uncertainty (pp. 141–171). Cambridge: Cambridge University Press.
National Nanotechnology Initiative. (2007). What Is Nanotechnology? Retrieved on June 5, 2007, from http://www.nano.gov/html/facts/whatIsNano.html.
National Research Council. (1996). Understanding Risk. Washington, DC: National Academy Press.
National Research Council. (2000). Genetically Modified Pest-Protected Plants: Science and Regulation. Washington, DC: National Academy Press.
National Research Council. (2006). Rising Above the Gathering Storm: Energizing and Employing America for a Brighter Economic Future. Washington, DC: National Academy Press.
Newell, P. (2003). Globalization and the governance of biotechnology. Global Environmental Politics, 3(2), 56–71.
Ogus, A. (2002). Regulatory institutions and structures. Annals of Public Cooperative Economics, 73, 627–648.
OTA. (1995). Environmental Policy Tools. U.S. Office of Technology Assessment. Washington, DC: U.S. Government Printing Office.
Paarlberg, R. (2002). The contested governance of GM foods: Implications for US-EU trade and the developing world. Weatherhead Center for International Affairs, Harvard University, Paper No. 02-04.
PEN. (2007). Consumer products inventory. Project on Emerging Nanotechnologies. Retrieved on June 19, 2007, from http://www.nanotechproject.org/44.
Pidgeon, N. (2006). Opportunities and uncertainties: The British nanotechnologies report and the case for upstream societal dialogue. Conference paper: VALDOR, Stockholm, Sweden. Available at http://www.congrex.com/valdor2006/papers/53 Pidgeon.pdf.
PIFB. (2003a). University-Industry Relationships: Framing the Issues for Academic Research in Agricultural Biotechnology. Pew Initiative on Food and Biotechnology. Available at http://pewagbiotech.org/research/UIR.pdf.
PIFB. (2003b). Post-Market Monitoring of Biotech Foods: Is the System Prepared? Pew Initiative on Food and Biotechnology. Available at http://pewagbiotech.org/research/postmarket/PostMarketExecSum.pdf.
Porter, M. E. (1991). America's green strategy. Scientific American, 264, 168.
Porter, M. E., & Van Der Linde, C. (1995). Toward a new conception of the environment competitiveness relationship. Journal of Economic Perspectives, 9, 97–118.
Rabino, I. (1994). How European and U.S. genetic engineering scientists view the impact of public attention on their field: A comparison. Science, Technology & Human Values, 19, 23–46.
Rasmussen, N. (1981). The application of probabilistic risk assessment techniques to energy technologies. Annual Review of Energy, 6, 123–138.
Siegel, J. E., Graham, J. D., & Stoto, M. A. (1990). Allocating resources among AIDS research strategies. Policy Sciences, 23, 1–23.
Siegrist, M. (2000). The influence of trust and perceptions of risks and benefits on the acceptance of gene technology. Risk Analysis, 20, 195–204.
Siegrist, M., Cousin, M. E., Kastenholz, H., & Wiek, A. (2007b). Public acceptance of nanotechnology foods and food packaging: The influence of affect and trust. Appetite, 49(2), 459–466.
Siegrist, M., Keller, C., Kastenholz, H., Frey, S., & Wiek, A. (2007a). Laypeople's and experts' perception of nanotechnology hazards. Risk Analysis, 27, 59–69.
Singer, P., Salamanca-Buentello, F., & Daar, A. (2005). Harnessing nanotechnology to improve global equity. Issues in Science and Technology, 21(4), 57–64.
Slovic, P. (1987). Perception of risk. Science, 236, 280–285.
Sterman, J. D. (1994). Learning in and about complex systems. System Dynamics Review, 10(2–3), 291–330.
Stewart, P. A., & McLean, W. (2004). Fear and hope over the third generation of agricultural biotechnology: Analysis of public response in the Federal Register. AgBioForum, 7(3), 133–141.
Taylor, M. (2006). Regulating the Products of Nanotechnology: Does FDA Have the Tools it Needs? Project on Emerging Nanotechnologies. Washington, DC: PEN 5.
Thomas, L. G. (1990). Regulation and firm size: FDA impacts on innovation. RAND Journal of Economics, 21(4), 497–517.
Thompson, P. (2007). Food Biotechnology in Ethical Perspective, 2nd ed. Dordrecht, The Netherlands: Springer.
U.S. EPA. (1983). Guidelines for Performing Regulatory Impact Analysis. (EPA-230-01-84-003.) Washington, DC: U.S. Government Printing Office.
U.S. EPA. (2000). Guidelines for Preparing Economic Analyses. (EPA-240-R-00-003.) Washington, DC: U.S. Government Printing Office.
Walters, L. (2004). Human embryonic stem cell research: An intercultural perspective. Kennedy Institute of Ethics Journal, 14(1), 3–38.
White House. (1993, amended 2007). Regulatory Planning and Review. (Executive Order #12,866.) Washington, DC: White House, Office of the Press Secretary. 30 September. Available at http://govinfo.library.unt.edu/npr/library/direct/orders/2646.html. Amendments retrieved September 6, 2007 from http://www.whitehouse.gov/news/releases/2007/01/20070118.html.
Wiek, A., Zemp, S., Siegrist, M., & Walter, A. I. (2007). Sustainable governance of emerging technologies—Critical constellations in the agent network of nanotechnology. Technology in Society, 29, 388–406.
Wiener, J. B. (2004). The regulation of technology, and the technology of regulation. Technology in Society, 26, 483–500.
Wilsdon, J., & Willis, R. (2004). See-Through Science. London: Demos.
Winkler, R. L. (1968). The consensus of subjective probability distributions. Management Science, 15(2), B61–B75.
Winkler, R. L. (1986). Expert resolution. Management Science, 32, 298–306.
Wolff, S. K., Hawkins, N. C., Kennedy, S. M., & Graham, J. D. (1990). Selecting experimental data for use in qualitative risk assessment: An expert judgment approach. Toxicology & Industrial Health, 6, 275–291.
Zechendorf, B. (1994). What the public thinks about biotechnology: A survey of opinion polls. Bio/Technology, 12, 870–875.