
1AC KENTUCKY RR R5 VS. EMORY GS
1ac emerging tech adv
Current judicial deference doctrines are applied confusingly and inconsistently
Kagan 18 [Michael Kagan, no relation, B.A. Northwestern University, J.D. University of Michigan Law School, Professor of Law at the University
of Nevada, “LOUD AND SOFT ANTI-CHEVRON DECISIONS,” Spring, 2018, Wake Forest Law Review, 53 Wake Forest L. Rev. 37, lexis]

The Chevron doctrine, it seems, is in play. n1 There are now two Justices on the Supreme Court who have published opinions calling the central,
canonical doctrine in contemporary administrative law an unconstitutional transfer of judicial authority to the executive [*38] branch. n2 There
are at least three other Justices who have called for limitations on the doctrine's application to agency interpretations of their own jurisdiction.
n3 Another Justice has advocated a context-specific approach in which Chevron would apply with less force in some situations than in others.
n4 In King v. Burwell, n5 a majority of the Justices signed an opinion holding that Chevron's famous two-step analysis is merely something "we
often apply," and, in any case, is not appropriate for matters of "deep economic and political significance." n6 The Chevron, U.S.A., Inc. v.
Natural Resources Defense Council, Inc. n7 decision famously called for courts to defer to an executive branch agency when it interprets a
statute that it administers. n8 First, a court should ask if congressional intent is clear from the statute. n9 Second, "if the statute is silent or
ambiguous ... , the question for the court is whether the agency's answer is based on a permissible construction ... ." n10 The power of Chevron
deference, in theory at least, is that it calls on judges to affirm statutory interpretations against their own best judgment as to how statutes
should be understood. Once the Chevron doctrine coalesced in the 1980s, it seemed to enjoy consensus support on the Supreme Court. n11
But then, in 2015, Justice Thomas published a broadside against Chevron in Michigan v. EPA. n12 This only represented one vote out of nine, of
course, but it was a notable vote. A decade earlier, Justice Thomas had written the majority opinion in National Cable & Telecommunications
Ass'n v. Brand X Internet Services, n13 one of the Court's most robust articulations of the commandment for judges to defer to administrative
agencies. n14 But in 2015, Justice Thomas derided his own prior majority opinion. n15 Then, in 2017, Justice [*39] Gorsuch replaced a Justice
who had, for a long time, been Chevron's most outspoken supporter on the Court. n16 Just a few months before his elevation to the Supreme
Court, then-Judge Gorsuch launched a bold critique of Chevron, calling it "no less than a judge-made doctrine for the abdication of the judicial
duty." n17 This Article's purpose is to suggest a methodology for understanding the Supreme Court's approaches to Chevron now that
Chevron's future is more in doubt. To be clear, this Article does not predict Chevron's complete demise. There are still only two Justices on
record supporting its reversal. In fact, this Article is based on the assumption that the Chevron doctrine will continue but that the consensus
period of its history is finished. n18 Assuming that we are now entering a period in which there will be much less certainty about the doctrine's
reach, the Court may be more willing to explicitly refine the doctrine, to limit its application in certain ways, and to articulate new exceptions.
To a great extent, the current analytical challenge in administrative law is not new - it is just more out in the open. Since the early days of the
doctrine, the trouble with Chevron has been in understanding why the Court does one thing in one case but another thing in another case. The
problem is not just that the Court has sometimes explicitly indicated that there are exceptions to this doctrine - the so-called "Step Zero," for example. n19 Instead, the problem is that the Court far more frequently fails to follow Chevron's normal two-step analysis in cases to which it seems to apply and then does not explain why. n20 Explaining this persistent
inconsistency has long been a preoccupation of administrative law scholarship. But prior to 2015, no Justice had announced any desire to
formally abandon Chevron, and the dominant streams of administrative law scholarship were reluctant to draw doctrinal conclusions from the
Justices' failure to practice what they preached. [*40] At least one scholar has recently suggested that the Court's "failure to apply Chevron
where it would seem to apply" should be seen as a signal of reluctance about "a full-throated Chevron doctrine." n21 This theory has actually
been around for quite some time, as it was suggested in a pioneering empirical study of Chevron case law in 1992. n22 But it did not catch on
and was not developed or pursued consistently by most administrative law scholarship. Now that Justices are expressing doubts and criticisms
of Chevron more openly, it makes sense to see the Court's long-term inconsistency in its application in a
different light . This Article aims to expand this thesis into a more structured way of interpreting the many cases in which the Court does
not apply Chevron in the way that it likely should. Part II briefly traces the evolution and recent breakdown of the Supreme Court consensus
about Chevron deference and outlines alternative ways scholars have tried to explain the Justices' inconsistency in applying the doctrine. The
prevailing views have generally asserted that the Justices are committed to fundamental principles undergirding deference, even if they are
idiosyncratic (and quite possibly biased) in their willingness to defer to agencies in actual cases. However, the Court's inconsistency should also
be seen as a potential signal of lurking problems and doubts and thus can provide guidance about how the doctrine might be refined in the
future. The clearest expressions of doctrinal doubts are what can be called "loud" anti-Chevron decisions, when judges actually articulate a
limitation on or a critique of the doctrine. This type of decision is explained in Part III. The bigger difficulty concerns the many decisions where
the Supreme Court failed to apply Chevron when it ostensibly should have mattered or applied it in such a way as to render the doctrine
irrelevant. These can appropriately be called "soft" anti-Chevron cases. Part IV shows that these cases come in several varieties. The degree to
which they indicate doctrinal discomfort depends on several factors that can be discerned by close reading of the case law. When there are
patterns in these cases that can be explained by a convincing doctrinal theory, scholars and judges should use them to articulate refinements to
our understanding of the Chevron doctrine. II. Reexplaining Chevron's Inconsistency Chevron has long been the ultimate canonical decision. The
doctrinal meaning typically attributed to the case has been much more than anyone would have anticipated from reading the decision [*41]
itself. n23 The Chevron doctrine actually developed through interpretation by lower courts rather than from an immediate understanding that
the Supreme Court had issued a watershed decision. n24 In fact, it could be said that the Chevron doctrine would more appropriately be
termed the General Motors doctrine, in honor of the D.C. Circuit decision that seems to have been the first to cite and explain Chevron as a
major change in administrative law. n25 The Chevron doctrine is often expressed as a rigid algorithm - the two steps - which makes any
deviation by the Court quite noticeable. n26 Yet, despite all the fanfare, it is now well known that the Supreme Court itself
applies Chevron inconsistently at best . n27 Once this inconsistency became apparent, some leading scholars sought to reframe
Chevron as a looser set of jurisprudential principles rather than a rigid formula. n28 One influential illustration of these efforts was Peter
Strauss's conception of "Chevron space." n29 More than anything, the Court's inconsistency , mixed with its surface-level devotion to
the doctrine, turned Chevron into a kind of enigma . As Michael Herz summarized the situation in 2015, "Despite all the
attention, ... the "Chevron revolution' never quite happens. This decision, though seen as transformatively important, is honored in [*42] the
breach, in constant danger of being abandoned, and the subject of perpetual confusion and uncertainty ." n30 Even if just
for a time, all the Justices were committed to deference at a general level. Even if most remain so today, the details of the doctrine were not
fully thought out by the Court at the beginning. Chevron was originally just a case about air pollution. As Gary Lawson and Stephen Kam explain
in their history of how the Chevron decision became the Chevron doctrine, "The process by which Chevron became law - a series of lower court
decisions and then default acceptance in the Supreme Court - prevented ... ambiguities from being vented and resolved in an authoritative
forum; instead, they remain to this day largely submerged and unaddressed ." n31 This process made Chevron unusual
for a case of its stature. Typically, when the Court makes a blockbuster decision, such as in Citizens United v. FEC n32 or Obergefell v. Hodges,
n33 the big question that the Court has to decide is understood well in advance. The issue is fully briefed in the litigation and has likely been
hashed out in the lower courts. But that did not really happen with Chevron deference. n34 Instead, it might be said that the hashing out
has taken place in the three decades since the Supreme Court's decision .

That prevents coherent regulatory policy from emerging --- only SCOTUS solves
Kagan 18 [Michael Kagan, no relation, B.A. Northwestern University, J.D. University of Michigan Law School, Professor of Law at the University
of Nevada, “LOUD AND SOFT ANTI-CHEVRON DECISIONS,” Spring, 2018, Wake Forest Law Review, 53 Wake Forest L. Rev. 37, lexis]

VI. Conclusion When the Supreme Court explicitly announces an exception to Chevron doctrine, as it did in King v. Burwell, n127 it is obvious to
anyone who pays attention to administrative law that it has done something important. It is possible that a decade from now will prove the
major questions exception to have been a huge change in the application of deference to administrative agencies. It is also possible that, in the
long run, it will look like an anomaly that has little enduring impact, like Justice Stevens's forgotten alternative holding in INS v. Cardoza-Fonseca
in 1987. n128 But because the Court was explicit that it was not applying Chevron, one knows to pay attention. n129 It was loud. When the
Court speaks more softly about Chevron, by stripping deference of any real force or by simply ignoring it, it is harder to know what to think.
Such cases are numerous and have been extensively counted in empirical studies . But they have not been
parsed and analyzed for their doctrinal implications to quite the same extent. This is only natural. The Court in these cases does not give us
much to analyze. Since, until recently, the Court seemed superficially devoted to Chevron, it was perhaps sensible for administrative law
scholars to roll our collective eyes. Supreme Court Justices are unpredictable and maybe a little unprincipled, and perhaps that is all there is to
it. But now that the Chevron doctrine is entering a new phase of doubt, there needs to be a closer look at the fact that this is a doctrine that
emerged in somewhat odd fashion and that the Supreme Court [*56] never really applied as expected. Some of the Court's apparent
unpredictability in applying deference may actually follow patterns to which spectators have not been adequately sensitive. The Court's well-
documented inconsistency in applying the Chevron doctrine may be seen in retrospect as a means by which the Justices quietly have worked
through operational problems and doubts, which thus could form the foundation for refinements to the doctrine . There are
many cases where the Court has ignored or minimized Chevron. But to make too much out of any isolated instance of this phenomenon can be
dangerous. Sometimes Supreme Court inconsistency is just inconsistency. The key is finding patterns . When there is a strong
pattern that can be explained by a coherent and compelling normative or doctrinal theory, it may be time to urge the Justices to make a
louder statement. It may be, as administrative law scholarship has long documented, that the importance of deference doctrines at the
Supreme Court level can be easily overstated. n130 But it has also long been clear that the lower courts do seem to try more consistently to
follow the Supreme Court's instructions on deference. For this reason alone, it is important to refine the Chevron doctrine so that lawyers and judges understand if and how they are supposed to apply deference in different contexts. Key to this endeavor is
the realization that a rigid, one-size-fits-all version of deference defined by a rigid, two-step algorithm may never have been realistic or
appropriate for the myriad contexts in which courts review legal interpretations by the administrative state. That does not mean we
should throw up our hands. It means we need to look much more closely . Sometimes there may be a good deal of
wisdom hidden in the Court's apparent inconsistency. The large body of administrative cases in which the Court had the opportunity to apply
deference should be understood as the Court's testing ground for the Chevron doctrine. Scholars and practitioners should pay attention to the
test results.

Chevron enables sufficient flexibility for the regulation of emerging tech --- but Brand
X renders agency flexibility too excessive for predictability to emerge
Masur 7 [Jonathan Masur, Bigelow Fellow and Lecturer in Law, University of Chicago Law School, “Judicial Deference and the Credibility of
Agency Commitments,” Vanderbilt Law Review, May, 60 Vand. L. Rev. 1021, lexis]
Yet, in the two decades after Chevron, one significant obstacle remained to an agency's ability to re-
interpret ambiguous statutes and adapt to changing circumstances. Until 2005, the Supreme Court treated
its statutory interpretation precedents - no matter the context and regardless of whether they had involved an agency
interpretation and a judicial grant of Chevron deference - as absolute and decisive. Once a court had interpreted a statute, regardless
of whether the agency had already had the opportunity to proffer its own interpretation, stare decisis controlled. An agency could only re-
interpret if a court had never passed on the original interpretation, or if the agency could convince the court that the court had erred in its
original interpretation, without reference to Chevron. In
2005, the Supreme Court eliminated this final impediment .
While deciding an otherwise mundane issue of statutory [*1027] interpretation in National Cable & Telecommunications Ass'n v. Brand
X Internet Services, the Court announced that Chevron henceforth would trump stare decisis : an
interpretation of an ambiguous statute that ordinarily would be entitled to deference under Chevron
would still receive that deference - and an agency would be permitted to revise a prior statutory interpretation - irrespective
of anything that a court had ever said on the subject. Ambiguous statutes had become forever
ambiguous; no court could settle their meaning. n11 I. Administrative Flexibility: Temporal Adjustments and Judicial
Entrenchment Administrative agencies cannot function effectively if they do not possess substantial
discretion to set agency policy. Agencies exist in large degree as institutional mechanisms for solving
policy questions whose intricacies and difficulties exceeded the capacities of Congress itself. An agency
that lacked the freedom to choose between competing policy solutions or the flexibility to adjust its
regulations in the face of scientific or economic progress would be little more than a rigid executor of
Congress's will, stripped of the expertise that made it an attractive repository of policy-making authority
in the first instance. Consequently, a growing consensus of administrative law scholars has long favored
granting agencies ever-greater authority to enact policy changes in concert with developments in the
relevant markets and technologies . n8 Pursuant to this rationale, the Supreme Court has afforded
agencies broad authority to alter extant regulations or select new policy courses . Under well-established law, an
agency may discard a long-standing policy in favor of a novel one, provided that it offers a coherent rationale for its decision. n9 And if an
agency's current interpretation of its empowering statute is not sufficiently capacious to permit the agency to pursue this new policy, the
agency may adopt a reasonable new interpretation of an old statute without relinquishing the deference that it is due under Chevron 's
famous two-step formulation. n10 A. Temporal Flexibility Congress delegates power
to agencies for a wide variety of reasons. Congress may find it politically infeasible to make some necessary decision because of significant
negative political ramifications, and as a result it might seek to foist responsibility off on some other actor. Alternatively, there may be a faction
within Congress that hopes to accomplish via the executive branch what it cannot achieve legislatively. n12 But Congress may also
delegate power in order to harness the superior expertise of an agency actor and to bring to bear on a
problem a set of scientific and technological knowledge and a breadth of experience that Congress
does not possess. n13 In order for this delegation to be successful - indeed, in order for it to be meaningfully a
"delegation" - it must afford the recipient agency some degree of "substantive flexibility" : the agency must have
the freedom, when analyzing the subject matter at the heart of the delegation, to choose from among a range of
acceptable policies the one that it believes is best . Accordingly, the Supreme Court has granted administrative
agencies wide substantive leeway to select among competing statutory interpretations - and thus among competing policies - via
the familiar two-step process set forth in Chevron, pursuant to which courts must defer to reasonable agency interpretations of ambiguous
statutes. n14 Moreover, courts and commentators have long realized that agencies possess comparative institutional advantages over Congress
that surpass the mere application of expertise. By
shifting policymaking responsibility outside of the legislative
branch, Congress is also able to avail itself of the greater agility of administrative agencies in responding
to changed circumstances or adapting to new [*1028] policy concerns. Legislation is costly and time-consuming to enact,
and Congress cannot always rapidly change course when confronted with novel problems or the imminent obsolescence of old solutions. n15
Agencies are more willing and able than Congress to tweak their policy agendas. Especially in the high-
technology areas , this alacrity is invaluable to agencies' ability to act in the public interest. In order to act
effectively, then, agencies must possess flexibility not only in the substantive sense described above, but also in
the "temporal" sense: They must be free to alter policies over time and adapt to changes in relevant technologies and markets. n16
Much like substantive flexibility (deference, really), temporal flexibility (which I will refer to simply as "flexibility") is the lifeblood
of successful agency operation . Even minor changes in technology or markets can obsolete pre-existing
regulatory regimes, and it likely would be prohibitively costly for Congress to respond to every minor
circumstance by amending an agency's authorizing legislation . n17 Agencies need the authority to adjust
policies in order to maintain their currency and efficacy , n18 and unwise judicial doctrines that deny
agencies all significant policy flexibility would undoubtedly lead to regulatory stagnation . n19
In light of this obvious need, the Supreme Court has moved , over the past two decades, toward affording agencies ever
greater regulatory flexibility. In its 1983 decision in Motor Vehicles Manufacturers Association of the U.S., Inc. v. State Farm Mutual
Automobile Insurance Company n20 - a case better known as a source of agency constraint rather than empowerment - the Supreme Court
explained that an agency may switch policies as long as it explains and justifies the move, noting that agencies "must be given ample [*1029]
latitude to "adapt their rules and policies to the demands of changing circumstances.' " n21 As State Farm concerned the logic behind a policy
choice, not whether that policy was consistent with the delegatory statute, the type of policy change at issue was one that the agency could
legally make within the confines of a single statutory interpretation. A year later the Court extended this framework, holding in Chevron that it
would not deny deference to an agency's statutory interpretation merely because that interpretation conflicted with prior agency policy. n22
The Court thus sanctioned shifts between alternative, reasonable statutory interpretations. This move was a natural outgrowth of the principles
that underscored Chevron itself: Once statutes are conceived of as delegations of policymaking authority, rather than rigid textual commands, it
is logical to permit an agency to shift "statutory" policies, just as it was permitted under State Farm to shift policies within the confines of a
single statutory meaning. n23 For several decades after Chevron the Court vacillated on this pro-flexibility
stance , occasionally indicating that a novel agency interpretation deserves less deference. n24 On other occasions, however, the Court
reiterated (counterfactually) that it "had rejected the argument that an agency's interpretation "is not entitled to deference because it
represents a sharp break with prior interpretations.' " n25 [*1030] Nevertheless, by 2005 it appeared to be relatively settled law that
fluctuations in an agency's positions did not affect Chevron deference, and thus an agency could adjust regulations over time without sacrificing
its valuable entitlement to deference. n26 B. Chevron and Stare Decisis Despite the broad grants of temporal flexibility bestowed upon agencies
in State Farm and Chevron, as late as 2005 there remained one significant obstacle to agencies' ability to adjust and adapt policies over time.
The stare decisis effect of a judicial decision regarding even an ambiguous statute's meaning - whether the court was merely ratifying an
interpretation proffered by an agency (pursuant to Chevron) or undertaking its own de novo statutory interpretation - served to entrench that
statutory meaning. Once an agency had been taken to court, it would be effectively stripped of the ability to revisit its statutory interpretation.
It was this last barrier that the Supreme Court confronted in Brand X. 1. The Supremacy of Judicial Precedent As one might expect of any
decision of its magnitude, Chevron left many important questions in its wake. n27 These issues ranged from the self-evidently crucial n28 to the
apparently mundane, n29 and many of [*1031] them involved the interaction between Chevron deference and traditional judicial mechanisms
of statutory interpretation, such as statutory canons n30 and, of greatest relevance here, stare decisis. n31 These latter issues of statutory
interpretation would appear to concern only the level of deference (meaning substantive flexibility) that an agency will receive. They apply
most obviously to the question of when a court will find a statute unambiguous at Chevron's Step One, and thus to the issue of when an
agency's interpretation will stand. But the relationship between stare decisis and Chevron deference has importance far outstripping its impact
upon the interpretation of particular statutes as ambiguous or unambiguous. Stare decisis is, of course, a temporal phenomenon: A first case is
decided at some moment in time, and the result in that case binds subsequent outcomes. Consequently, if stare decisis were to function in
typical fashion, Chevron notwithstanding, every extant judicial decision would curtail an agency's temporal flexibility. A judicial interpretation of
a statute - whether or not that initial interpretation was the agency's to which the earlier court had deferred n32 - would effectively fix the
meaning of that statute in place, binding an agency until either Congress chose to amend the statute or the court agreed to overturn its prior
holding. Imagine, for instance, an agency that decides to initiate a rulemaking process pursuant to an empowering statute. The agency issues a
rule (that necessarily involves interpreting some portion of the governing statute) and third parties immediately challenge the rule. Assume that
a court holds that the statute is ambiguous, defers to the agency under Chevron, and upholds the agency's statutory interpretation. Under a
rigid stare decisis regime, this statutory interpretation would be cast in stone; the agency would not later be able to amend its interpretation
and pursue an alternative policy course without convincing the court to overturn its prior ruling. In view of the many questions of statutory
construction the Supreme Court has already addressed and decided, this would be no small constraint; stare decisis would place a significant
swath of [*1032] administrative law beyond the reach of agency alteration. n33 Such a rigid approach to stare decisis could result in an
"ossification" of regulatory policymaking on a scale rivaling that of doctrines (such as hard look review) that are traditionally blamed for
inducing such paralysis. n34 The impact of stare decisis upon agencies' temporal flexibility revolves crucially around courts' treatment of
ambiguous statutes - those statutes that would normally trigger agency deference at Chevron's Step One. Recall that if a court determines that
a statute is unambiguous, the agency is entitled to no deference and thus no flexibility. n35 Under those circumstances, the court determines
the meaning of the statute n36 and this judicial construction is subject to the stare decisis in the same manner as any other legal decision. n37
But if a statute is ambiguous, a court must afford deference to a valid agency interpretation and must allow the agency the flexibility to adjust
its interpretation over time n38 - unless stare decisis trumps Chevron, in which case a pre-existing judicial decision would lock a statute's
interpretation in place. In the aftermath of Chevron, the Supreme Court appeared to signal that judicial precedent - or at least its own judicial
precedent - would indeed override both an agency's right to Chevron deference and [*1033] its concomitant ability to shift statutory
interpretations over time. In a series of four cases, the Court consistently refused to permit an agency to alter its interpretation of a statute
where the Court had previously spoken to the question of statutory interpretation at hand. n39 Moreover, though the opinions are opaque in
important respects, in none of these cases did the result appear to hinge on the Court having previously held the statutes at issue to be
unambiguous. On the contrary, the Supreme Court extended the dominance of stare decisis to ambiguous statutes whose constructions would
be otherwise subject to Chevron deference. The first of these four cases, Golden State Transit Corporation v. Los Angeles, n40 is also the Court's
most explicit statement on the subject. In Golden State, the agency's statutory interpretation normally would have been entitled to Chevron
deference were it reaching the Court for the first time. n41 Nonetheless, the Supreme Court announced that it would adhere to its contrary
precedent, the relevant statute's admitted ambiguity notwithstanding: A rule of law that is the product of judicial interpretation of a vague,
ambiguous, or incomplete statutory provision is no less binding than a rule that is based on the plain meaning of a statute. The violation of a
federal right that has been found to be implicit in a statute's language and structure is as much a "direct violation" of a right as is the violation
of a right that is clearly set forth in the text of the statute. n42 Several years later, in Neal v. United States, n43 the Court confirmed this view: In
these circumstances, we need not decide what, if any, deference is owed the Commission in order to reject its alleged contrary interpretation.
Once we have determined a statute's meaning, we adhere to our ruling under the doctrine of stare decisis, and we assess an agency's later
interpretation of the statute against that settled law. n44 On two other occasions the Supreme Court's explanation of the operative rule has
been substantially less unequivocal. At first glance, Maislin Industries, U.S., Inc v. Primary Steel, Inc. n45 and Lechmere, Inc v. NLRB n46 support
a distinction between the precedential effect of a holding that rests on a statute's lack of ambiguity and a pre-Chevron judicial interpretation of
an ambiguous statute. In Maislin, the Supreme Court wrote, "once we have determined a statute's clear [*1034] meaning, we adhere to that
determination under the doctrine of stare decisis, and we judge an agency's later interpretation of the statute against our prior determination
of the statute's meaning." n47 The Court's explanation is heavy with the negative pregnant; if the statute is ambiguous, stare decisis may hold
no purchase on the agency. But the decisions that predated and constrained Maislin n48 and Lechmere n49 hardly involved unambiguous
statutes, n50 much less statutes that the Supreme Court had already held unambiguous (recall that stare decisis only operates meaningfully if
the prior decision has determined that the statutory meaning is clear). n51 For example, the statutory question at issue in Lechmere was
whether a store owner's decision to prohibit the distribution of union literature in the store's parking lot constituted an "unfair labor practice"
that "interfered with, restrained, or coerced employees." n52 As one commentator noted, the Court's decision "may be good public policy" but
surely does not represent the statute's only possible meaning. n53 As of 2005, then, the weight of authority appeared to favor the view that
stare decisis trumped an agency's entitlement to Chevron deference and to temporal flexibility, n54 though this point was not without
controversy. n55 The Court's position was one of immoderate anti-skepticism; it believed in its own power to settle a law's meaning,
irrespective of inherent ambiguity. In the words of Judge Kozinski, [*1035] Statutory meaning is not a matter of hopes or wishes; it is a fact. In
settling on a particular interpretation of a statute, the court is saying: "This is the meaning that was actually conferred upon this statute by
Congress." ... A change in the agency's view ... may motivate a reviewing court to reconsider the soundness of its prior interpretation. But a
change in an agency's position cannot automatically alter the meaning Congress gave the statute years earlier. n56 Crucially, this anti-skeptical
position seemed to apply regardless of whether, in the prior existing precedent, the court had offered its own interpretation of an ambiguous
statute or merely ratified an agency's interpretation (pursuant to Chevron deference). n57 Stare decisis served to entrench both judicial and
agency interpretations, ensconcing whichever institutional view happened to make its way into court first. It was this view of precedent that
the Court undertook to reconsider in Brand X. 2. Brand X and the Dominance of Agency Interpretation Brand X arose from a ruling by the FCC -
pursuant to delegated notice-and-comment rulemaking authority - that cable television companies that provided broadband internet service
(so-called "cable modem service") were not offering a "telecommunications service" and thus were not subject to mandatory regulation under
Title II of the Telecommunications Act. n58 This otherwise quotidian issue of statutory interpretation and Chevron deference was complicated
by the fact that the Ninth Circuit had in an earlier case, AT&T Corp. v. Portland, n59 interpreted the statutory phrase "telecommunications
service" to include cable modem internet service, contradicting the FCC's subsequent interpretation at issue in Brand X. Critically, the Ninth
Circuit in Portland had not found the key statutory language unambiguous, though neither had it arrived at its statutory interpretation after
granting Chevron deference to the agency's view. Rather, in Portland the FCC had "declined, both in its regulatory capacity and as amicus
curiae, to address the issue," forcing the Ninth Circuit to undertake a de novo interpretation of the statute. Pursuant to this pre-existing
interpretation, in Brand X the Ninth Circuit concluded that Portland's stare decisis effect trumped [*1036] the agency's right to Chevron
deference and controlled the case, and held that the court's prior interpretation of the statutory language must prevail, the agency's contrary
interpretation notwithstanding. The Supreme Court did not only reverse the Ninth Circuit's substantive ruling. More importantly, the Court
reversed the Ninth Circuit's holding as to the interaction between Chevron and precedent. n60 The Court explained: A court's prior judicial
construction of a statute trumps an agency construction otherwise entitled to Chevron deference only if the prior court decision holds that its
construction follows from the unambiguous terms of the statute and thus leaves no room for agency discretion. n61 In other words, if an
agency has been delegated the authority to interpret a statute per Chevron, the meaning of that statute can be fixed by a court's prior ruling
only if that court holds n62 that the statute is unambiguous - only, that is, if there is no delegation of interpretive authority to the agency in the
first place. n63 Because the Portland court had not held the terms of the statute unambiguous, its prior decision did not control the FCC, and
the agency's subsequent interpretation was entitled to Chevron deference. n64 Nowhere
in Brand X does the Supreme Court
state specifically whether Chevron's trump of stare decisis applies even if an agency has already once
construed a statute pursuant to Chevron - if the agency has already had "one bite at the apple" - and then subsequently
sought to amend its interpretation. n65 Indeed, in Brand X the FCC had never before had the opportunity to offer its own
definition in a judicial proceeding. n66 Nonetheless, it is readily apparent that Brand X and Chevron will apply regardless of how many prior
interpretations of a statute an agency has already offered, or how many times those interpretations have been litigated, afforded Chevron deference, and resulted in judicial validation of an agency's construction. The Brand X Court displayed full willingness to accept shifts in agency policy as [*1037] readily as it sanctioned agency overrides of judicial constructions;
n67 after all, "Chevron's premise is that it is for agencies, not courts, to fill statutory gaps. " n68 Brand X
replaced a legal regime in which a judicial decision regarding an ambiguous statute would always
entrench that statute's meaning (whether or not the agency had yet passed on the statute) into one in which such a
decision would never entrench statutory meaning . n69 Once a statute has been deemed ambiguous, an agency forever
possesses the freedom to select any "reasonable" statutory interpretation n70 and, within that interpretation, any non-arbitrary policy. n71 II.
An End to Agency Credibility: The Deleterious Effects of Diminished Legal Stability Brand
X represents the most recent step in a
nearly unblemished trend towards greater agency flexibility in policymaking. Undoubtedly, as described above, this shift
in many ways augurs improved regulatory consequences. Agencies will have greater ability to "revise unwise judicial constructions of
ambiguous statutes," n72 adjust policies and programs to keep pace with the technological and economic vanguard, and generally exercise
more effectively the expertise that served as the original raison d'etre for agency delegation. n73 By
eliminating the threat that
stare decisis may tie an agency to a statutory interpretation before the agency is able to exercise the
deference due under Chevron, Brand X will produce other ancillary benefits. For instance, agencies and courts will no longer be
engaged in a "race to interpret," with the level of deference determined by whether an agency has an opportunity to undertake the formal
procedures required by Mead before a court has an opportunity to pass [*1038] on the statutory language. n74 This would seem to be an
unalloyed good; it is difficult to imagine why deference should turn on the vagaries of litigation timing or the "anomaly" of whether a court or
an agency managed to reach the question first. n75 Among other things, such
a shift will allow an agency to continue to
develop statutory interpretations via a case-by-case or "evolutional approach," n76 rather than having to
fear that if it first undertakes a few, tentative adjudications, instead of a decisive act of notice-and-
comment rulemaking, a court will swoop in and decide the statutory question before the agency can
avail itself of Chevron. An agency can no longer run out of time to interpret a statute and gain deference
or "use up" its opportunity to do so. Much ink has been spilled in arguing the benefits of greater agency authority generally, n77
and not surprisingly a preponderance of voices has favored the Supreme Court's moves towards increased administrative flexibility. n78 Yet
scholars simultaneously have overlooked the deleterious effects that Brand X's extension of agency flexibility may have upon both outside
parties and the agencies' ability to accomplish their own regulatory objectives. Brand X adds an element of flexibility - and
therefore instability - to agency authority n79 that is qualitatively different than the discretion that
agencies enjoyed under Chevron and the other pre-existing doctrines. Before Brand X, a judicial interpretation of
a statute - whether or not that interpretation came pursuant to a grant of Chevron or Skidmore deference - would effectively fix
the meaning of that statute, binding it in place until either Congress chose to amend the statute or the
court agreed to overturn its prior holding. Certainly, as described above, this functioned in many respects as an unhealthy check
on an agency's ability to adjust policy to changed circumstances. At the same time, though, the judiciary's ability to settle
statutory meaning served as the last resort for outside [*1039] actors - Congress, regulated firms, and other interested
third parties - seeking some degree of predictability and stability within the law . Judicial interpretations
of statutes - whether pursuant to Chevron deference, Skidmore deference, n80 or no deference at all - operated as a very clumsy
sort of " safe harbor ." Once a question of statutory interpretation had been decided by a court, outside
parties could assume that the interpretation would remain relatively stable , alterable only by Congress or a
superseding judicial decision, not by unilateral executive action. n81 Agencies would still possess the authority to shift
policy. n82 But their regulatory options would be confined within the parameters of a single, enduring
statutory interpretation , eliminating the possibility of drastic or broad adjustments in course that
could only be accommodated within a re-interpreted statutory framework . n83 This "safe harbor" was undoubtedly
a very crude mechanism for generating regulatory stability. It relied upon the self-serving decision of a party with standing n84 to bring an
action in court, and upon the vagaries of the litigation process to bring forth a judicial decision (and in particular, an appellate decision) from
that action. The process was neither consistent nor predictable, but it did exist. n85 Crucially, it could have been adopted by either an
interested third party seeking regulatory stability or an agency that wishes to bind itself to a particular statutory interpretation. Indeed, as the
following sections will demonstrate, the value of this mechanism lay in the agency's power to initiate a legal action that it knew would result in
a constraining judicial decision, thus credibly committing itself to a given policy. After Brand X, such a safe harbor no longer exists. At no point can an agency's statutory interpretation become fixed; neither a court nor an agency
can render an ambiguous statute unambiguous or [*1040] permanently anchor its meaning. n86 As long as
an agency interpretation would be entitled to Chevron deference (per the rule set forth in Mead n87), the agency
will always have the ability to advance an entirely novel interpretation that would override any
previous construction , and to which courts would be required to defer . A statutory interpretation (and the
accompanying policy) will now
remain stable only until an agency undertakes a subsequent rulemaking to alter
them. This permanent instability - a feature heretofore unknown to the administrative landscape - will
powerfully impact the behavior of Congress and private parties under the post-Brand X legal regime .
n88 Brand X - and more generally, the absence of any type of agency safe harbor - is likely to impair the
functioning of agencies with respect to two outside groups: regulated parties (and other private interests) and
Congress. Because of the nature of the administrative malfunction at issue - agencies' inability to commit credibly to a particular position -
the dilemma sounds in a type of "contract theory" of administrative law. First, because neither agencies nor private parties can ever definitively
settle or anchor the law, agencies will have great difficulty persuading private parties to [*1041] rely on agency interpretations. It is
fundamental to contract law that the consumer of some good may be unable to induce the supplier to produce if the consumer will not (or
cannot) sign a contract agreeing to purchase the good. n89 So too might agencies encounter resistance when attempting to coerce regulated
parties to produce "regulatory goods" if the agency cannot reliably promise that those goods will retain value. Second, Congress will be chary of
delegating too much power to agencies that it knows can shift their statutory interpretations at any time in the future - especially after a
change in administration - and regulatory problems that could be profitably addressed through agency delegations may go unsolved. Moreover,
the elimination of any temporal constraint on agency revisions will transfer some of the value from any legislative agreement to future
executives, making every bargain less profitable - and less likely - for the parties who must negotiate it. A. Third Parties and Induced Reliance
Scholars and courts long have noted the damage that shifts in regulatory policy may exact upon reliance interests. n90 Any change in the
background regulatory rules governing an industry is likely to upset the settled expectations of the firms and interested groups working in the
affected field, leading to disruptions and increased costs as pre-existing programs become unworkable and new projects become necessary.
n91 More importantly, fluid agency interpretations and re-interpretations make it more costly for affected entities or other stakeholders to
adjust their conduct to conform to agency rules, and thus regulated actors may refrain from making costly investments or embarking upon new
projects that may be endorsed under one regulatory regime but prohibited under another one that could be soon forthcoming. [*1042] In
expanding the policy flexibility available to agencies and eliminating the possibility of a regulatory safe harbor, Brand
X exacerbates
this effect. Because an agency no longer can be bound by its (or a court's) prior pronouncements,
private parties now must confront a far less predictable regulatory landscape . After Brand X, there is
no passage of time and no formal judicial mechanism that can entrench a statute's meaning ; an agency
will always possess unilateral authority to re-interpret a statute and engage in a concomitantly
significant shift in policy. Without a regulatory safe harbor, a regulated party is forced into a permanent
state of uncertainty . The party will never enjoy settled expectations and cannot be certain that future
projects will not be frustrated by significant alterations in the regulatory landscape. This boundless
ambiguity is likely to compel risk-and uncertainty-averse industries to forego potentially productive
investments and lead to avoidable negative outcomes . n92

The plan’s optional two-step framework solves doctrinal unclarity and strikes an
effective balance between stability and flex
Re 14 [Richard M. Re, J.D., Yale Law School, A.B., Harvard University, “Article: Should Chevron Have Two Steps?,” Spring, 2014, Indiana Law
Journal, 89 Ind. L.J. 605, lexis]

F. The Three Distinct Versions of Chevron It is time to take stock. From what has been shown so far, we know that no single version of the
agency deference inquiry is logically compelled. Rather, there are several different analytic regimes traveling under the name Chevron. First, "traditional two-step" asks initially whether the statute is clear and then
whether the agency's interpretation is reasonable. To clarify and sharpen this approach, n83 step one can
be understood to ask, "Is the agency's reading mandatory?" Second, " optional two-step " asks whether the
agency's reading is reasonable and then gives courts discretion to ask whether the agency's
interpretation is not just reasonable, but mandatory. Finally, " one-step " asks only whether the agency's
interpretation is reasonable and never asks whether the agency's view is mandatory . These three versions of
Chevron are outlined in the table below. n84 Table 2. Three Versions of Chevron. So what we have is not a logically compelled choice, as
Stephenson and Vermeule would have it, but rather a normative decision. Chevron comes in three distinct varieties, and we have to ask: Which
option should we prefer? II. THE NORMATIVE STRUCTURE OF CHEVRON DEFERENCE Having now arrived at a clear understanding of how courts
might logically structure Chevron, it is time to ask the normative question of how Chevron should [*624] be structured. This analysis must
proceed over nearly uncharted ground. Despite all the articles on the proper structure of Chevron, normative considerations have played only a
peripheral role-even as descriptive and logical arguments abound. n85 As discussed below, each of the different versions of Chevron has its
own strengths and weaknesses. To focus the analysis, this Part develops a novel analogy to qualified-immunity doctrine, which has likewise
struggled with the question of whether unnecessary lawmaking should be impermissible, obligatory, or optional. On balance, the qualified-
immunity analogy cuts in favor of optional two-step. However, the normative case for optional two-step may depend on the development of
new rules capable of guiding and legitimizing courts' exercise of their previously unrecognized Chevron discretion. A. Beginning to Assess the
Options The obvious strength of traditional two-step Chevron is that it fosters rapid development of the
law . Courts that apply traditional two-step always ask, at step one, whether the statute is clear. Therefore, traditional two-step
always discloses when an agency's reading is mandatory. And discovering that an agency's position is not
just acceptable but necessary is a huge help-to litigants, to courts, and to the agency itself . How many
business decisions, lawsuits, executive-branch lobbying efforts, and agency deliberations could be
spared by finding out early whether an agency's interpretation is obligated by law? All other things being equal,
it is plainly much more efficient to clarify the law sooner rather than later . n86 But all other things might
not be equal . n87 Perhaps resolving issues of statutory meaning sooner rather than later will generate
inefficient decision making and even error . Restraint may be especially warranted in the Chevron
context, because the question whether an agency's view is mandatory may not be squarely addressed ,
either by the government or by its challengers. The outcome of any particular agency challenge, after all, will turn
only on whether the agency's reading is impermissible, regardless of whether it is mandatory . Thus, there
is normally no need to take up the potentially difficult and time-consuming question of mandatoriness .
It may therefore be preferable for courts to postpone ruling out potential agency constructions until
they are adopted by the government and squarely challenged as unreasonable . There are also
legitimacy problems associated with traditional two-step Chevron . A supporter of one-step Chevron might point out,
for example, that it is unnecessary in agency deference cases to find that the agency's reading is not just [*625]
reasonable but mandatory. n88 And, in the language of contemporary judicial restraint, when it is not
necessary to decide, it is necessary not to decide. n89 For the same reasons that courts typically eschew
unnecessarily broad holdings and refuse to afford precedential effect to dicta, n90 they might also
disfavor the unnecessary lawmaking that marks traditional two-step Chevron . Is optional two-step
Chevron the best of both worlds ? It certainly does allow courts to clarify the law by finding agency
interpretations to be mandatory, as opposed to reasonable . Whether that choice is viewed as a plus or a minus largely
depends on whether courts are likely to choose wisely. When a court opts to find an agency reading mandatory, it has either helpfully clarified
the law or rashly erred. And if a court simply finds the agency's reading to be reasonable, without reaching the question of whether the reading
is mandatory, then it has either exercised prudent restraint or ducked an important question.

Nanotech is inevitable internationally, but U.S. model is key to stability---checks gray goo, super-weapons, and eco-collapse
Dennis 6 [Lindsay V., JD Candidate – Temple University School of Law, “Nanotechnology: Unique Science Requires Unique Solutions”,
Temple Journal of Science, Technology & Environmental Law, Spring, 25 Temp. J. Sci. Tech. & Envtl. L. 87, lexis]

Nanotechnology, a newly developing field merging science and technology, promises a future of open-ended potential. 6 Its scientific limits are unknown, and its myriad uses cross the boundaries of the technical, mechanical and medical fields. 7 Substantial research 8 has led scientists, 9 politicians 10 and academicians 11 to believe that nanotechnology has the potential to profoundly change the economy and to improve the national standard of living. 12 In addition, nanotechnology may touch every facet of human life because its products cross the boundaries of the most important industries, including electronics, biomedical and pharmaceutical [*89] industries, and energy production. 13 In the future, nanotechnology could ensure longer, healthier lives with the reduction or elimination of life-threatening diseases, 14 a cleaner planet with pollution remediation and emission-free energy, 15 and the innumerable benefits of increased information technology. 16

However, certain uses, such as advanced drug delivery systems, 17 have given rise to an ethical debate similar to that surrounding cloning and stem cell research. 18 Moreover, some analysts have theorized that nanotechnology may endanger humankind with more dangerous warfare and weapons of terrorism, 19 and that nanotechnology may lead to artificial intelligence beyond human control. 20 The widespread use of nanotechnology far in the future threatens to alter the societal framework and create what has been called "gray goo." 21 Because nanotechnology has the potential to improve the products that most of us rely on in our daily lives, but also imperil society as we know it, we should research, monitor and regulate nanotechnology for the public good
with trustworthy systems, and set up pervasive controls over its research, development, and deployment. In addition, its substantial impacts on existing regulations should be ascertained,
and solutions incorporated into the regulatory framework. This paper addresses these concerns and provides potential solutions. Part I outlines the development of nanotechnology. Parts II and III explore the current and theoretical future applications of nanotechnology, and its potential
side-effects. Then, Part IV analyzes the government's current role in monitoring nanotechnology, and the regulatory mechanisms available to manage or eliminate the negative implications of nanotechnology. Part V considers the creation of an Emerging Technologies Department as a
possible solution to maximize the benefits and minimize the detrimental effects of nanotechnology. Lastly, Part VI examines certain environmental regulations to provide an example of nanotechnology's impact on existing regulatory schema.  [*90]  Part I: Nanotechnology Defined  
Nanoscience is the study of the fundamental principles of molecules and structures with at least one dimension roughly between 1 and 100 nanometers (one-billionth of a meter, or 10[su'-9']), otherwise known as the "nanoscale." 22 Called nanostructures, these are the smallest solid
things possible to make. 23 Nanofabrication, or nanoscale manufacturing, is the process by which nanostructures are built. 24 Top-down nanofabrication creates nanostructures by taking a large structure and making it smaller, whereas bottom-up nanofabrication starts with individual
atoms to build nanostructures. 25 Nanotechnology applies nanostructures into useful nanoscale devices. 26 The nanoscale is distinctive because it is the size scale where the properties of materials like conductivity, 27 hardness, 28 or melting point 29 are no longer similar to the
properties of these same materials at the macro level. 30 Atom interactions, averaged out of existence in bulk material, give rise to unique properties. 31 In  [*91]  nanotech research, scientists take advantage of these unique properties to develop products with applications that would
not otherwise be available. 32 Although some products using nanotechnology are currently on the market, 33 nanotechnology is primarily in the research and development stage. 34 Because nanoparticles are remarkably small, tools specific to nanotechnology have been created to
develop useful nanostructures and devices. 35 Two techniques exclusive to nanotechnology are self-assembly, and nanofabrication using nanotubes and nanorods. 36  [*92]  In self-assembly, particular atoms or molecules are put on a surface or preconstructed nanostructure, causing the
molecules to align themselves into particular positions. 37 Although self-assembly is "probably the most important of the nanoscale fabrication techniques because of its generality, its ability to produce structures at different length-scales, and its low cost," 38 most nanostructures are
built starting with larger molecules as components. 39 Nanotubes 40 and nanorods, 41 the first true nanomaterials engineered at the molecular level, are two examples of these building blocks. 42 They exhibit astounding physical and electrical properties. 43 Certain nanotubes have
tensile strength in excess of 60 times high-grade steel while remaining light and flexible. 44 Currently, nanotubes are used in tennis rackets and golf clubs to make them lighter and stronger. 45 Part II: Nanotechnology's Uses   Researching and manipulating the properties of
nanostructures are important for a number of reasons, including, most basically, to gain an understanding of how matter is constructed, and more practically, to use these unique properties to develop unique products. 46 Nanoproducts can be divided into four general categories: 47
smart materials, 48 sensors, 49 biomedical applications, 50 and optics and electronics. 51  [*93]  A "smart" material incorporates in its design a capability to perform several specific tasks. 52 In nanotechnology, that design is done at the molecular level. 53 Clothing, enhanced with
nanotechnology, is a useful application of a smart material at the nanoscale. Certain nano-enhanced clothing contains fibers that have tiny whiskers that repel liquids, reduce static and resist stains without affecting feel. 54 Nano-enhanced rubber represents another application of a
nanoscale smart material. 55 Tires using nanotech-components increase skid resistance by reducing friction, which reduces abrasion and makes the tires last longer. 56 The tires may be on the market "in the next few years" according to the National Nanotechnology Initiative (NNI). 57
Theoretically, this rubber could be used on a variety of products, ranging from tires to windshield wiper blades to athletic shoes. 58 A more complex nanotechnology smart material is a photorefractive polymer. 59 Acting as a nanoscale "barcode," these polymers could be used as
information storage devices with a storage density exceeding the best available magnetic storage structures. 60 Nano-sensors may "revolutionize much of the medical care and the food packaging industries," 61 as well as the environmental field because of their ability to detect toxins
and pollutants at fewer than ten molecules. 62 As the Environmental Protection Agency (EPA) recognizes: Protection of human health and ecosystems requires rapid, precise sensors capable of detecting pollutants at the molecular level. Major improvements in process control,
compliance monitoring, and environmental decision-making could  [*94]  be achieved if more accurate, less costly, more sensitive techniques were available. Nanotechnology offers the possibility of sensors enabled to be selective or specific, detect multiple analytes, and monitor their
presence in real time. 63 Examples of research in sensors include the development of nano-sensors for efficient and rapid biochemical detection of pollutants; sensors capable of continuous measurement over large areas; integration of nano-enabled sensors for real-time continuous
monitoring; and sensors that utilize "lab-on-a-chip" technology. 64 All fundamental life processes occur at the nanoscale, making it the ideal scale at which to fight diseases. 65 Two quintessential examples of biomedical applications of nanotechnology are advanced drug delivery systems
and nano-enhanced drugs. 66 The promise of advanced drug delivery systems lies in that they direct drug molecules only to where they are needed in the body. 67 One example is focusing chemotherapy on the site of the tumor, instead of the whole body, thereby improving the drug's
effectiveness while decreasing its unpleasant side-effects. 68 Other researchers are working to develop nanoparticles that target and trick cancer cells into absorbing certain nanoparticles. 69 These nanoparticles would then kill tumors from within, avoiding the destruction of healthy cells,
as opposed to the indiscriminate damage caused by traditional chemotherapy. 70 Nano-enhanced suicide inhibitors 71 limit enzymatic activity by forcing naturally occurring enzymes to form bonds with the nanostructured molecule. 72 This may treat conditions such as epilepsy and
depression because of the enzyme action component involved in these conditions. 73 Lastly, nanotechnology has the potential to revolutionize the electronics and optics fields. 74 For instance, nanotechnology has the potential to produce clean,  [*95]  renewable solar power. 75
Through a process called artificial photosynthesis, solar energy is produced by using nanostructures based on molecules which capture light and separate positive and negative charges. 76 Certain Swiss watches and bathroom scales are illuminated through a nanotech procedure that
transforms captured sunlight into an electrical current. 77 In the electronics field, nanostructures offer many different ways to increase memory storage by substantially reducing the size of memory bits and thereby increasing the density of magnetic memory, increasing efficiency, and
decreasing cost. 78 One example is storing memory bits as magnetic nanodots, which can be reduced in size until they reach the super-paramagnetic limit, the smallest possible magnetic memory structure. 79 Advances in electronics and computing brought on by nanotechnology could
allow reconfigurable, "thinking" spacecraft. 80 Some uses of nano-products already on the market include suntan lotions and skin creams, tennis balls that bounce longer, faster-burning rocket fuel additives, and new cancer treatments. 81 Solar cells in roofing tiles and siding that provide
electricity for homes and facilities, and the prototypic tires, supra, may be on the market in the next few years. 82 The industry expects advanced drug delivery systems with implantable devices that automatically administer drugs and sensor drug levels, and medical diagnostic tools such
as cancer-tagging mechanisms to be on the market in the next two to five years. 83 It is nearly impossible to foresee what developments to expect in nanotechnology in the decades to come. 84 Nonetheless, the book Engines of Creation presented one vision of the possibilities of
advanced nanotechnology. 85 Nano-machines could be designed to construct any product, from mundane items such as a chair, to exciting items such as a rocket engine. 86 These "assemblers" could also be programmed to build copies of themselves. 87 Known as "replicators," these
nano-machines could alter the world by producing an exponential quantity of themselves that are to be put to work as assemblers. 88 The development of assemblers could advance the space  [*96]  exploration program, 89 biomedical field, 90 and even repair the damage done to the

world's ecological systems. 91 Over time, production costs may sharply decrease because the assemblers will be able to construct all future products from an original blueprint at virtually no additional cost. 92 Part III: Nanotechnology's Side-Effects   With the good, however, comes the bad. The "gray goo problem," the most well-known unwanted potential consequence of the spread of nanotechnology, 93 arises when replicators and assemblers produce almost anything, and subsequently spread uncontrolled, obliterating natural organisms and replacing them with nano-enhanced organisms. 94 A more foreseeable issue is environmental contamination. 95 The EPA noted:   As nanotechnology progresses from research and development to commercialization and use, it is likely that manufactured nanomaterials and nanoproducts will be released into the environment... . The unique features of manufactured nanomaterials and a lack of experience with these materials hinder the risk evaluation that is needed to inform decisions about pollution prevention, environmental clean-up and other control measures, including regulation. Beyond the usual concerns for most toxic materials ... the adequacy of current toxicity tests for chemicals needs to be assessed ... . To the extent that nanoparticles  [*97]  ... elicit novel biological responses, these concerns need to be accounted for in toxicity testing to provide relevant information needed for risk assessment to inform decision making. 96   In addition, nanotechnology could change the face of global warfare and terrorism. 97 Assemblers could be used to duplicate existing weapons out of superior materials, and chemical and biological weapons could be created with nano-enhanced components. 98 Modern detection systems would be
inadequate to detect nano-enhanced weapons built with innocuous materials such as carbon. 99 Luckily, nanotechnology offers responses to these problems, and researchers are already tackling these issues. 100 "Labs-on-a-chip," a sensor system the size of a
microchip, could be woven into soldiers' uniforms to detect toxins immediately. 101 Adding smart materials could make soldiers' uniforms resistant to certain chemical and biological agents. 102 Nanotechnology also enhances threats against citizens. Drugs and bugs (electronic
surveillance devices) could be used by police states to monitor and control its citizenry. 103 Viruses could be created that target specific genetic characteristics. 104 Not only is the development of technologically advanced, devastating weaponry itself a hazardous effect of
nanotechnology, but also, millions of dollars have already been spent researching potential uses of nanotechnology in the military sphere, 105 thus diverting funds from more beneficial uses such as biomedical applications and clean energy. However, these negative effects are not

inevitable. By analyzing the scope of potential drawbacks accompanying these research investments, lawmakers can institute regulatory controls that could mitigate these problems.  [*98]  Part IV: Maximizing Benefits, Minimizing Catastrophe   To minimize or eliminate the problems associated with nanotechnology, while maximizing the beneficial effects, nanotechnology research and development should be monitored and regulated by "trustworthy systems." 106 Currently, the federal government oversees a massive funding and research program with the purpose of "ensuring United States global leadership in the development and application of nanotechnology." 107 Nonetheless, as nanotechnology becomes more prevalent, more thorough regulation may be necessary. 108 Nanotechnology may greatly impact some of the largest revenue producing industries in the United States, such as the pharmaceutical and medical fields, utilities and power generation, and computer electronics. 109 Thus, it is clear that nanotechnology will likely touch every facet of human life. In addition, these powerful industries have been known to promote profits over human safety, 110 one of the reasons for their stringent regulation.  [*99]  The federal government must regulate nanotechnology for the public good as it pertains to these industries. The form and scope of the trustworthy systems are being debated. 111 Each system has its advantages and disadvantages. 112 The system should be accountable to judicial review and public comment, as well as transparent, 113 while minimizing "the traditional laments of the bureaucratic agency: lack of efficiency, duplication of effort, and subjection to Congressional and judicial requirements in enacting regulations." 114 Certain proposals are outlined briefly in this article as examples of what can be done to regulate nanotechnology.

Rapid advancements in nanotech and AI are coming and risk global war – US
regulatory leadership is key
Tate 15 [Jitendra S. Tate, Associate Professor of Manufacturing Engineering at the Ingram School of Engineering, Texas State University, et
al., “Military And National Security Implications Of Nanotechnology”, The Journal of Technology Studies, Volume 41, Number 1, Spring,
https://scholar.lib.vt.edu/ejournals/JOTS/v41/v41n1/tate.html]

All branches of the U.S. military are currently conducting nanotechnology research, including the Defense Advanced
Research Projects Agency (DARPA), Office of Naval Research (ONR), Army Research Office (ARO), and Air Force Office of Scientific Research (AFOSR). The United States is currently the leader of the development of nanotechnology-based applications for military and national defense. Advancements in nanotechnology are intended to revolutionize modern warfare with the development of applications such as nano-sensors, artificial intelligence, nanomanufacturing, and nanorobotics. Capabilities of this technology include providing soldiers with stronger and lighter battle suits, using nano-enabled medicines for curing field wounds, and producing silver-packed foods with decreased spoiling rate (Tiwari, A., Military Nanotechnology, 2004). Although the improvements in nanotechnology hold great promise, this technology has the potential to pose some risks.
This article addresses a few of the more recent, rapidly evolving, and cutting edge developments for defense purposes. To prevent irreversible damages, regulatory
measures must be taken in the advancement of dangerous technological developments implementing
nanotechnology. The article introduces recent efforts in awareness of the societal implications of military and national security nanotechnology as well as recommendations for national leaders.
Keywords: Nanotechnology, Implications, modern warfare
INTRODUCTION

Advances in nano-science and nanotechnology promise to have major implications for advances in the scientific field as well as peace for
the upcoming decades. This will lead to dramatic changes in the way that material, medicine, surveillance, and sustainable energy technology are understood and created. Significant
breakthroughs are expected in human organ engineering, assembly of atoms and molecules, and the
emergence of a new era of physics and chemistry. Tomorrow’s soldiers will have many challenges such as carrying self-guided missiles, jumping over large obstacles,
monitoring vital signs, and working longer periods with sleep deprivation. ( Altmann & Gubrud, Anticipating military nanotechnology, 2004 ). This will be achieved by controlling matter at the nanoscale (1-100nm). A nanometer is
one-billionth of a meter. This article considers the social impact of nanotechnology (NT) from the point of view of the possible military applications and their implications for national defense and arms control. This technological
evolution may become disruptive; meaning that it will come out of mainstream. Ideas that are coming forth through nanotechnology are becoming very popular and the possibilities will in practice have profound implications for
military affairs as well as relations between nations and thinking about war and national security ( Altmann J. , Military Uses of Nanotechnology: Perspectives and Concerns, 2004 ). In this article some of the potential applicability

uses of recent nanotechnology driven applications within the military are introduced. This article also discusses how the impact of a rapid technological evolution in the
military will have implications on society.
POTENTIAL MILITARY TECHNOLOGIES
Magneto rheological Fluid (MR Fluid)
A magneto-rheological-fluid is a fluid where colloidal ferrofluids experience a body force on the entire material that is proportional to the magnetic field strength ( Ashour, Rogers, & Kordonsky, 1996 ). This allows the status of the
fluid to change reversibly from a liquid to solid state. Thus, the fluid becomes intelligently controllable using the magnetic field. MR fluid consists of a basic fluid, ferromagnetic particles, and stabilizing additives ( Olabi & Grunwald,
2007 ). The ferromagnetic particles are typically 20-50μm in diameter whereas in the presence of the magnetic field, the particles align and form linear chains parallel to the field ( Ahmadian & Norris, 2008 ). Response times that
require impressively low voltages are being developed. Recently, ( Ahmadian & Norris, 2008 ) has shown the ability of MR fluids to handle impulse loads and an adaptable fixing for blast resistant and structural membranes. For
military applications, the strength of the armor will depend on the composition of the fluid. Researchers propose wiring the armor with tiny circuits. While current is applied through the wires, the armor would stiffen, and while the
current is turned off, the armor would revert to its liquid, flexible state. Depending on the type of particles used, a variety of armor technology can be developed to adapt for soldiers in different types of battle conditions.
Nanotechnology could increase the agility of soldiers. This could be accomplished by increasing mechanical properties as well as the flexibility for battle suit technology.
Nano Robotics
Nanorobotics is a new emerging field in which machines and robotic components are created at a scale at or close to that of a nanometer. The term has been heavily publicized through science fiction movies, especially the film
industry, and has been growing in popularity. In the movie Spiderman , Peter Parker and Norman Osborn briefly talk about Norman’s research which involves nanotechnology that is later used in the Green Goblin suit. Nanorobotics
specifically refers to the nanotechnology engineering discipline or designing and building nano robots that are expected to be used in a military and space applications. The terms nanobots, nanoids, nanites, nanomachines or
nanomites have been used to describe these devices but do not accurately represent the discipline. Nanorobotics includes a system at or below the micrometer range and is made of assemblies of nanoscale components with
dimensions ranging from 1 to 100nm ( Weir, Sierra, & Jones, 2005 ). Nanorobotics can generally be divided into two fields. The first area deals with the overall design and control of the robots at the nanoscale. Much of the research
in this area is theoretical. The second area deals with the manipulation and/or assembly of nanoscale components with macroscale manipulators ( Weir, Sierra, & Jones, 2005 ). Nanomanipulation and nanoassembly may play a
critical role in the development and deployment of artificial robots that could be used for combat.
According to Mavroidis et al. ( 2013 ), nanorobots should have the following three characteristic abilities at the nano scale and in presence of a large number in a remote environment. First they should have swarm intelligence.
Second the ability to self-assemble and replicate at the nanoscale. Third is the ability to have a nano to macro world interface architecture enabling instant access to the nanorobots with control and maintenance. ( Mavroidis &
Ferreira, 2013 ) also states that collaborative efforts between a variety of educational backgrounds will need to work together to achieve this common objective. Autonomous nanorobots for the battlefield will be able to move in
all media such as water, air, and ground using propulsion principles known for larger systems. These systems include wheels, tracks, rotor blades, wings, and jets ( Altmann & Gubrud, Military, arms control, and security aspects of
nanotechnology, 2004 ). These robots will also be designed for specific military tasks such as reconnaissance, communication, target destination, and sensing capabilities. Self-assembling nanorobots could possibly act together in
high numbers, blocking windows, putting abrasives into motors and other machines, and other unique tasks.
Artificial Intelligence

Artificial intelligence (AI) is a vast emerging field that can be very thought provoking. AI has been seen recently in a number of movies and television shows that have predicted what the possibility of an advanced intelligence could do to our society. This intellect could possibly outperform human capabilities in practically every field from scientific research to social interactions. Aspirations to surpass human capabilities include tennis, baseball, and other daily tasks demanding motion and common sense reasoning (Kurzweil, 2005). Examples where AI could be seen include chess playing, theorem proving, face and speech recognition, and natural language understanding. AI has been an active and dynamic field of research and development since its establishment in 1956 at the Dartmouth Conference in the United States ( Cantu-Ortiz, 2014 ). In past decades, this has led to the development of smart systems, including phones, laptops,
medical instruments, and navigation software.
One problem with AI is that people are coming to a conclusion about its capabilities too soon. Thus, people are becoming afraid of the probability that an artificial intelligent system could possibly expand and turn on the human
race. True artificial intelligence is still very far from becoming “alive” due to our current technology. Nanotechnology might advance AI research and development. In nanotechnology, there is a combination of physics, chemistry

and engineering. AI relies most heavily on biological influence as seen [in] genetic algorithm mutations, rather than chemistry or engineering. Bringing together nanosciences and AI can boost a whole new generation of information and communication technologies that will impact our society. This could be accomplished by successful convergences between technology and biology ( Sacha & P., 2013 ). Computational power could be exponentially increased in current successful AI based military decision
behavior models as seen in the following examples.
Expert Systems
Artificial intelligence is currently being used and evolving in expert systems (ES). An ES is an “intelligent computer program that uses knowledge and interference procedures to solve problems that are difficult enough to require
significant human expertize to their solution” ( Mellit & Kalogirou, 2008 ). Results early on in its development have shown that this technology can play a significant impact in military applications. Weapon systems, surveillance, and
complex information have created numerous complications for military personnel. AI and ES can aid commanders in making decisions faster than before in spite of limitations on manpower and training. The field of expert systems
in the military is still a long way from solving the most persistent problems, but early on research demonstrated that this technology could offer great hope and promise ( Franklin, Carmody, Keller, Levitt, & Buteau, 1988 ). Mellit et
al. argues that an ES is not a program but a system. This is because the program contains a variety of different components such as a knowledge base, interference mechanisms, and explanation facilities. Therefore they have been
built to solve a range of problems that can be beneficial to military applications. This includes the prediction of a given situation, planning which can aid in devising a sequence of actions that will achieve a set goal, and debugging
and repair-prescribing remedies for malfunctions.
Genetic Algorithms
Artificial intelligence with genetic algorithms (GA) can tackle complex problems through the process of initialization, selection, crossover, and mutation. A GA repeatedly modifies a population of artificial structures in order to
adjust for a specific problem (Prelipcean et al., 2010). In this population, chromosomes evolve over a number of generations through the application of genetic operations. This evolution process of the GA allows for the most elite
chromosomes to survive and mate from one generation to the next. Generally, the GA will include three genetic operations of selection, crossover, and mutation. This is currently being applied to solving problems in military vehicle
scheduling at logistic distribution centers.
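
To make the loop described above concrete (initialization, selection, crossover, mutation), the short Python sketch below shows a toy genetic algorithm. This is an editorial illustration only: the bitstring genome, the one-bit-counting fitness function, and every parameter value are invented assumptions for demonstration, not anything drawn from the Tate article or from any actual military scheduling system.

import random  # minimal GA sketch; all values below are illustrative assumptions

GENOME_LEN = 20       # assumed toy genome length
POP_SIZE = 30         # assumed population size
GENERATIONS = 50
MUTATION_RATE = 0.02

def fitness(genome):
    # Toy fitness: count of 1-bits (stands in for, e.g., a scheduling score).
    return sum(genome)

def tournament_select(population, k=3):
    # Selection: keep the fittest of k randomly sampled individuals.
    return max(random.sample(population, k), key=fitness)

def crossover(parent_a, parent_b):
    # Single-point crossover producing one child.
    point = random.randint(1, GENOME_LEN - 1)
    return parent_a[:point] + parent_b[point:]

def mutate(genome):
    # Mutation: flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def run_ga():
    # Initialization: a random population of bitstrings.
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Next generation via selection, crossover, and mutation.
        population = [mutate(crossover(tournament_select(population),
                                       tournament_select(population)))
                      for _ in range(POP_SIZE)]
    return max(population, key=fitness)

if __name__ == "__main__":
    best = run_ga()
    print("best fitness:", fitness(best))

In the vehicle-scheduling application the card cites, the genome would instead encode a candidate schedule and the fitness function a logistics cost, but the select-crossover-mutate loop has the same structure.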
Nanomanufacturing
Nanomanufacturing is the production of materials and components with nanoscale features that can span a wide range of unique capabilities. At the nanoscale, matter is manufactured at lengthscales of 1-100nm with precise size
and control. The manufacturing of parts can be done with the “bottom up” from nano sized materials or “top down” process for high precision. Manufacturing at the nanoscale could produce new features, functional capabilities,
and multi-functional properties. Nanomanufacturing is distinguished from nanoprocessing, and nanofabrication, whereas nanomanufacturing must address scalability, reliability and cost effectiveness ( Cooper & Ralph, 2011 ).
Military applications will need to be very tough and sturdy but at the same time very reliable for use in harsh environments with the extreme temperatures, pressure, humidity, radiation, etc. The use of nano enabled materials and
components increase the military's in-mission success. Eventually, these new nanotechnologies will be transferred for commercial and public use. Cooper et al. makes known how nanomanufacturing is a multi-disciplinary effort that involves synthesis, processing and fabrication. There are, however, a great number of challenges as well as opportunities in nanomanufacturing R&D, such as:
Predictions from first principles of the progress and kinetics of nanosynthesis and nano-assembly processes.
Understand and control the nucleation and growth of nanomaterial and nanostructures and assess the effects of catalysts, crystal orientation, chemistry, etc. on growth rates and morphologies.
R&D IN THE USA

The USA is proving to have a lead in military research and development in nanotechnology . Research spans under
umbrella of applications related to defense capabilities. NNI has provided funds in which one quarter to one third goes to the department of defense – in 2003, $ 243 million of $774 million. This is far more

than any country and the US expenditure would be five times the sum of all the rest of the world ( Altmann
& Gubrud, Military, arms control, and security aspects of nanotechnology, 2004 ).
INITIATIVES
The National Nanotechnology Initiative
The National Nanotechnology Initiative (NNI) was unveiled by President Clinton in a speech that he gave on science and technology policy in January of 2000 where he called for an initiative with funding levels around 500 million
dollars ( Roco & Bainbridge, 2001 ). The initiative had five elements. The first was to increase support for fundamental research. The second was to pursue a set of grand challenges. The third was to support a series of centers of
excellence. The fourth was to increase support for research infrastructure. The fifth is to think about the ethical, economic, legal and social implications and to address the education and training of nanotechnology workforce
( Roco & Bainbridge, 2001 ). NNI brings together the expertise needed to advance the potential of nanotechnology across the nation.
ISN at MIT
The Institute for Soldier Nanotechnologies (ISN) initiated at the Massachusetts Institute of Technology in 2002 ( Bennet-Woods, 2008 ). The mission of ISN is to develop battlesuit technology that will increase soldier survivability,
protection, and create new methods of detecting toxic agents, enhancing situational awareness, while decreasing battle suit weight and increasing flexibility.
ISN research is organized into five strategic areas (SRA) designed to address broad strategic challenges facing soldiers. The first is developing lightweight, multifunctional nanostructured materials. Here nanotechnology is being
used to develop soldier protective capabilities such as sensing, night vision, communication, and visible management. Second is soldier medicine – prevention, diagnostics, and far-forward care. This SRA will focus on research that
would enable devices to aid casualty care for soldiers on the battle field. Devices would be activated by qualified personnel, the soldier, or autonomous. Eventually, these devices will find applications in medical hospitals as well.
Third is blast and ballistic threats – materials damage, injury mechanisms, and lightweight protection. This research will focus on the development of materials that will provide for better protection against many forms of
mechanical energy in the battle field. New protective material design will decrease the soldier’s risk of trauma, casualty, and other related injuries. The fourth SRA is hazardous substances sensing. This research will focus on
exploring advanced methods of molecularly complicated hazardous substances that could be dangerous to soldiers. This would include food-borne pathogens, explosives, viruses and bacteria. The fifth and final is nanosystems
integration –flexible capabilities in complex environments. This research focuses on the integration of nano-enabled materials and devices into systems that will give the soldier agility to operate in different environments. This will
be through capabilities to sense toxic chemicals, pressure, and temperature, and allow groups of soldiers to communicate undetected (Institute for Soldier Nanotechnologies).
SOCIAL IMPLICATIONS
The purpose of country’s armed forces is to provide protection from foreign threats and from internal conflict. On the other hand, they may also harm a society by engaging in counter- productive warfare or serving as an economic
burden. Expenditures on science and technology to develop weapons and systems sometimes produces side benefits, such as new medicines, technologies, or materials. Being ahead in military technology provides an important
advantage in armed conflict. Thus, all potential opponents have a strong motive for military research and development. From the perspective of international security and arms control it appears that in depth studies of the social
science of these implications has hardly begun. Warnings about this emerging technology have been sounded against excessive promises made too soon. The public may be too caught up with a “nanohype” ( Gubrud & Altmann,
2002 ). It is essential to address questions of possible dangers arising from military use of nanotechnology and its impacts on national security. Their consequences need to be analyzed.
NT and Preventative Arms Control
Background

The goal of preventive arms control is to limit how the development of future weapons could create
horrific situations , as seen in the past world wars . A qualitative method here is to design boundaries
which could limit the creation of new military technologies before they are ever deployed or even
thought of. One criterion regards arms control and how the development of military and surveillance technologies could go beyond the limits of international law warfare and control agreements. This could include
autonomous fighting war machines failing to define combatants of either side and Biological weapons could possibly give terrorist circumvention over existing treaties ( Altmann & Gubrud, Military, arms control, and security
aspects of nanotechnology, 2004 ). The second criterion is to prevent destabilization of the military situation which emerging technologies could make response times in battle much faster. Who will strike first? The third criterion,
according to Altman & Gubrud, is how to consider unintended hazards to humans, the environment, and society. Nanoscience is paving the way for smaller more efficient systems which could leak into civilian sectors that could
bring risks to human health and personal data. Concrete data on how this will affect humans or the environment is still uncertain.
Arms Control Agreements
The development of smaller chemical or biological weapons that may contain less to no metal could potentially violate existing international laws of warfare by becoming virtually undetectable. Smaller weapons could fall into
categories that would undermine peace treaties. The manipulation of these weapons by terrorist could give a better opportunity to select specific targets for assassination. Anti- satellite attacks by smaller more autonomous
satellites could potentially destabilize the space situation. Therefore a comprehensive ban on space weapons should be established ( Altmann & Gubrud, 2002 ). Autonomous robots with a degree of artificial intelligence will
potentially bring great problems. The ability to identify a soldier's current situation such as a plea for surrender, a call for medical attention, or illness is a very complicated task that to an extent requires human intelligence. This
could potentially violate humanitarian law.
Stability

New weapons could pressure the military to prevent attacks by pursuing the development of new
technologies faster. This could lead to an arms race with other nations trying to attain the same goal.
Destabilization may occur through faster action, and more available nano systems . Vehicles will become much lighter and will be
used for surveillance. This will significantly reduce time to acquire a targets location. Medical devices implanted in soldiers’ bodies will enable the release of drugs that influence mood and response times. For example, an implant

that attaches to the brain's nervous system could give the possibility to reduce reaction time by processing information much faster than usual ( Altmann & Gubrud, Anticipating military nanotechnology, 2004 ).

Artificial intelligence [AI] based genetic algorithms could make tactical decisions much faster through
computational power by adapting to a situations decision. Nano robots could eavesdrop, manipulate or even destroy targets while at the same time being
undetected ( Altmann J. , Military Uses of Nanotechnology: Perspectives and Concerns, 2004 ).
Environment Society & Humans
Human beings have always been exposed to naturally occurring nanomaterials in nature. These particles may enter the human body through respiration, and ingestion ( Bennet-Woods, 2008 ). Little has been known about how manufactured nanoscale materials will have an impact on the environment. Jerome (2005) argues that nanomaterials used for military uniforms could break off and enter the body and environment. New materials could destroy species of plants and animals. Fumes from fuel additives could be inhaled by military personnel. Contaminants due to weapon blasts could lead to diseases such as cancer or leukemia due to absorption through the skin or inhalation.
Improper disposal of batteries using nano particles could also affect a wide variety of species. An increase in nanoparticle release into the environment could be aided by waste streams from military research facilities. Advanced
nuclear weapons that are miniaturized may leave large areas of soil contaminated with radioactive materials. There is an increase in toxicity as the particle size decrease which could cause unknown environmental changes. Bennet-
woods ( 2008 ) argues that there is great uncertainty in which the way nano materials will degrade under natural conditions and interact with local organisms in the environment.

Danger to society could greatly be affected due to self-replicating , mutating, mechanical or biological plagues . In the event that these
intelligent nano systems were to be unleashed, they could potentially attack the physical world. There are a number of applications that will be developed with nanotechnology that could
potentially crossover from the military to national security that can harm the civilian sector ( Bennet-Woods, 2008 ). There is a heightened awareness that new technologies will allow for a more efficient access to personal privacy
and autonomy ( Roco & Bainbridge, 2005 ). Concerns regarding artificial intelligence acquiring a vast amount of personal data, voice recognition, and financial data will also arise. Implantable brain devices, intended for
communication, raise concerns for actually observing and manipulating thoughts. Some of the most feared risks due to nanotechnology in the society are the loss of privacy ( Flagg, 2005 ). Nano sensors developed for the battlefield
could be used for eavesdropping and tracking of citizens by state agencies. This could lead to improvised warfare or terrorism. Bennet-Woods ( 2008 ) argues that there should be an outright ban on nanoenabled tracking and
surveillance devices for any purpose.
Nanotechnology in combination with biotechnology and medicine raise concerns regarding human safety. This includes nanoscale drugs that may allow for improvements in terrorism alongside more efficient soldiers for combat.
Bioterrorism could greatly be improved through nano-engineered drugs and chemicals ( Milleson, 2013 ). Body implants could be used by soldiers to provide for better fighting efficiency but in the society, the extent in which the
availability of body manipulation will have to be debated at large ( Altmann J. , Nanotechnology and preventive arms control, 2005 ). Brain implanted stimulates could become addictive and lead to health defects. The availability of
body and brain implants could have negative effects during peace time. Milleson ( 2013 ) argues that there is fear that this technology could destabilize the human race, society, and family. Thus, the use in society should be
delayed for at least a decade.
CONCLUSIONS
Nanoscience will lead to a revolutionary development of new materials, medicine, surveillance, and sustainable energy. Many applications could arrive in the next decade. The US is currently in the lead in nanoscience research and
development. This equates to roughly five times the sum of all the rest of world. It is essential to address the potential risks that cutting edge military applications will have on warfare and civilian sector. There is a potential for
mistrust in areas where revolutionary changes are expected. There are many initiatives by federal agencies, industry, and academic institutions pertaining to nanotechnology applications in military and national security.

Preventive measures should be coordinated early on among national leaders. Scientists propose for national leaders to follow general
guidelines. There shall be no circumvention of existing treaties as well as a ban on space weapons. Autonomous robots should be greatly restricted. Due to rapidly advancing capabilities,

a technological arms race should be prevented at all costs. Nanomaterials could greatly harm humans and their environment therefore nations should work
together to address safety protocols. The national nanotechnology of different nations should build confidence in

addressing the social implications and preventive arms control from this technological revolution.
Optional two-step enables effective regulation --- especially for the EPA --- thru reining
in Brand X deference by allowing for durable mandatoriness findings
Re 14 [Richard M. Re, J.D., Yale Law School, A.B., Harvard University, “Article: Should Chevron Have Two Steps?,” Spring, 2014, Indiana Law
Journal, 89 Ind. L.J. 605, lexis]

What happens when Chevron is reduced to only one step? The one-step version of Chevron that Stephenson and Vermeule propose is
essentially the same verbal formulation as step two. n37 And the authors express their conclusion by saying that step one is unnecessary. So it
appears that the one-step version of Chevron proposed by Stephenson and Vermeule, like step two, has only two possible answers, either yes
or no. To repeat the key sentence from Stephenson and Vermeule's essay: "The single question is whether the agency's construction is [*613]
permissible as a matter of statutory interpretation . . . ." n38 A question that uses the word "whether" in this way normally has only "yes" or
"no" as possible answers. That leads to a serious problem, however. A single question with two possible answers cannot possibly capture the
full range of answers available in deference cases. Again, there are three potential answers in all agency deference cases: the agency's interpretation may be mandatory, it may be reasonable, or it may be impermissible.
Because one-step Chevron can be answered in only two ways (yes or no), it cannot capture one of the three possible
answers in agency deference cases. So Stephenson and Vermeule are faced with a choice. They can either give up on their claim
that two-step and one-step Chevron are analytically identical, or they must say that one-step Chevron has three possible answers. Given that
their entire critique hangs on the proposition that one-step and two-step Chevron are interchangeable, Stephenson and Vermeule would
presumably adopt the latter of these options. But if Stephenson and Vermeule take the view that one-step Chevron has three possible answers,
then they would not have simplified the traditional two-step approach. Again, a
"whether" question like one-step Chevron
invites two possible options, either yes or no. For a "whether" question to permit three options, it must
be accompanied by some other principle specifying the full range of possibilities . That is, Stephenson and
Vermeule cannot rest after having asked their purportedly solitary question: "whether the agency's construction is permissible as a matter of
statutory interpretation." n39 Rather, they must then ask a follow-up question: "Also, please consider whether the agency's construction is
mandatory." Ironically, "one-step" Chevron can function as intended only with the help of a second step. n40 In sum, those who hope to
capture the full range of options in deference cases do not face a choice between two redundant steps and a single elegant step, as Stephenson
and Vermeule would have it. The choice is instead between: (i) two questions, each with yes or no as options; and (ii) one question with yes or
no as options, accompanied by a separate direction to consider another yes or no question. A moment's reflection reveals that (i) and (ii) are
substantively identical. And both have two steps. [*614] B. The Additional Step Is Important We have already seen that the traditional two-step
approach to Chevron ensures consideration of all three possible answers to deference questions. By contrast, one-
step Chevron must
be complemented by an additional question in order to ensure consideration of the full range of
possible answers. To the extent that Stephenson and Vermeule have not accepted or made clear the need for this separate step, they
risk truncating, instead of simplifying, the traditional Chevron inquiry. n41 After outlining the distinction between reasonable and unreasonable
agency interpretations, Stephenson and Vermeule confront the exact position advocated here: "We might distinguish Step One and Step Two
by interpreting Step One to ask whether Congress has clearly specified one, and only one, permissible interpretation of the statute." n42 Quite
so. That is just another way of saying-as argued above- that step one asks whether the agency's interpretation is mandatory, apart from
whether it is reasonable or unreasonable. Stephenson and Vermeule should leap at their own suggestion. Instead, the authors reject that
straightforward conclusion-as they must in order to advance their thesis that having a second step does no additional work. How can they do
this? In short, by denying that the additional step matters. Stephenson and Vermeule first explain that "Congress' intention may be ambiguous
within a range, but not at all ambiguous as to interpretations outside that range, which are clearly forbidden"; and they further note that
statutes can be open to a "range of reasonable interpretations ," thereby giving rise to " 'policy space'
within which agencies may make reasoned choices." n43 Having reiterated those uncontroversial observations, Stephenson
and Vermeule conclude: "There is therefore no good reason why we should decide whether the statute has only one possible reading before
deciding simply whether the agency's interpretation falls [*615] into the range of permissible interpretations." n44 Taking the absolutist line
necessary to defend their essay's thesis, to say nothing of its pithy title, Stephenson and Vermeule assert that "nothing of consequence turns on
whether the set of permissible interpretations has one element or more than one element; the only question is whether the agency's
interpretation is in that set or not." n45 That last statement, read literally, is incorrect. What Stephenson and Vermeule presumably mean is
that whether the agency wins doesn't turn on "whether the set of permissible interpretations has one element or more," so long as "the
agency's interpretation is in that set." n46 That narrower statement would be true enough. But as the authors elsewhere recognize, n47 it is a
mistake to think that "nothing of consequence turns on whether the set of permissible interpretations has one element or more than one
element." n48 If a court says that the "set of permissible interpretations has one element" n49 while upholding an agency interpretation, then
it has made what is normally called a "step-one holding." n50 It has bound the agency to adhere to its reading henceforth, no matter what the
expert agency might later discover and come what may in the upcoming election cycle. By contrast, if the court says that "the set of permissible
interpretations has"-or may have-"more than one element," n51 then the agency remains free to seek out and adopt another element in the
set. Whether an agency is constrained by its own success marks the critical difference between a reading that is mandatory and one that is
reasonable. We can be more specific. Both step-one and step-two rulings in favor of agencies demonstrate that the agency's view is at least
reasonable. But step-one rulings mean something more-namely, that all other views of the relevant issue are
unreasonable . In other words, a step-one holding in favor of an agency consists of a reasonableness finding (as to the agency's view) plus
an unreasonableness [*616] finding (as to all other views). That additional, prohibitory conclusion does not arise when a court affirmatively
responds to the question, "Is the agency's view permissible?" When a court affirms an agency interpretation for being reasonable, it thereby
postpones the mandatoriness inquiry, perhaps indefinitely. Once again, a defining feature of traditional two-step Chevron is its insistence that
courts find agency interpretations to be mandatory whenever possible. In sharp contrast, one-step Chevron would forgo those findings by
asking only whether the agency's interpretation is reasonable. Besides having obvious practical importance for judicial and agency decision-
making, the difference between mandatory and reasonable readings also goes to one of Chevron's core purposes: fostering political accountability. n52 Under one-step Chevron, courts would hold agency interpretations to be reasonable
without clarifying whether they are mandatory. Those holdings would obscure whether responsibility for the agency policy lies
most immediately with the Executive or with Congress. Consider interpretations offered by non-independent, executive-branch agencies over
which the President has considerable influence, such as the Environmental Protection Agency ( EPA ). When
the agency interprets a
federal statute, interested parties will very much want to know whether that interpretation was
mandatory or reasonable. If it was mandatory, then interested groups must seek relief in the halls of Congress. But if the
agency's interpretation was only reasonable, then aggrieved parties might prefer to visit the White
House first. Mandatory readings are also integral to implementation of the Supreme Court's holding in Brand X n53 that judicial
interpretations of statutes subsequently bind agencies only if the reviewing court specifies that its interpretation was unambiguous. n54 In a
footnote, Stephenson and Vermeule argue that Brand X would be unaffected by one-step Chevron, but in making this claim they once again
overlook cases that involve a prior agency victory. n55 According to the authors, "if the prior court stated clearly that the agency's (current)
interpretation was outside the zone of the permissible, then the agency may not now adopt that interpretation." n56 Having thus narrowed
their gaze to cases involving invalidation of agency action, Stephenson and Vermeule conclude: "nothing in the logical structure of the inquiry
requires a distinction between cases in which the zone of the permissible reduces to a single point, and cases in which it does not-the [*617]
distinction at the heart of the current two-step framework." n57 But what if the prior court had held at step one that the agency's earlier
interpretation was mandatory-in other words, that "the zone of the permissible reduces to a single point"? n58 In that event, the agency would
have been limited as to future interpretations. In pointed contrast, the agency would not be so limited if the prior court had issued only a one-
step holding pertaining to reasonableness alone. For all these reasons, Stephenson and Vermeule are wrong to claim that "the only question is
whether the agency's interpretation is in that set," that is, the set of reasonable readings, "or not." n59 Perhaps that is the only question that
we should ask, but it is not the only available or important question in agency deference cases. Traditional two-step asks the additional, highly
significant question of whether the agency's reading is mandatory. C. How to Cure Traditional Chevron's Redundancy We saw earlier that
traditional two-step Chevron generates a limited redundancy. To summarize: asking the two successive questions that make up traditional two-
step Chevron, where each question is susceptible to two answers, yields four possible outcomes. Yet there are only three possible answers in
deference cases: mandatory, reasonable, and unreasonable. The redundancy arises when agency interpretations are held to be unreasonable-
an outcome that is equally available at either step one or step two. Fortunately, this limited redundancy can be cured. The simplest way to do
so is to tweak step one so that it focuses on the unique work made possible by that step- namely, finding agency interpretations to be
mandatory. n60 To implement that tweak, courts engaged in step one might ask "[w]hether Congress has directly spoken to the precise
question at issue" in a way that mandates the reading offered by the agency? n61 Or, even more simply: "Is the agency's reading mandatory?"
If no, then step two would follow without modification. Under this revision, there would be three possible outcomes: yes, no/yes, and no/no.
And each outcome would lead to a unique, non-duplicative answer. A yes outcome would mean that the agency's view is mandatory. A no/yes
outcome would mean that the agency's view is reasonable. And a no/no outcome would mean that the agency's view is unreasonable. This
revision is consistent with the Court's statement in Chevron that, "[i]f the intent of Congress is clear, that is the end of the matter." n62 And it
also accords with the common practice of referring to [*618] mandatoriness findings as "step-one" holdings. n63 Below, Figure 2 illustrates this
revised version of traditional two-step Chevron. Figure 2. Revised Traditional Two-Step. In order to focus attention on the unique work being
done at step one, the remainder of this Article will adopt the above tweak. Again, this revision calls for courts to ask at step one, "Is the
agency's reading mandatory?" Courts would then ask the same step-two question that doubles as one-step Chevron, "Is the agency's reading
permissible?" D. The Possibility of an Optional Two-Step Procedure So far, we have seen that traditional Chevron is defined in part by having
two steps, each of which does unique and important interpretive work. Asking whether an agency interpretation is both mandatory (step one)
and reasonable (step two) reveals more information than just asking about either mandatoriness or reasonableness alone. But traditional two-
step Chevron has another defining feature: it makes mandatoriness findings , well, mandatory. That is the effect of requiring
consideration of both steps in every case. Under traditional two-step Chevron, there is no way to reach the second step (on reasonableness)
without previously considering at the first step whether Congress has spoken directly to the interpretive question in a way that would
preclude later agency re-interpretation . One-step Chevron actually rules out the possibility of mandatoriness findings. When
asked, "Is the agency's view permissible," courts implementing one-step Chevron will answer "yes" and thereby terminate the case, even when
the real answer is, "Not only is it permissible, it's mandatory." n64 Put another way, two-step Chevron makes
mandatoriness findings obligatory, whereas one-step Chevron makes mandatoriness findings
impermissible . There is a third, intermediate option. Instead of being either obligatory or impermissible, mandatoriness findings could be
optional. The essential deference question , after all, is the question of reasonableness . If the agency is
reasonable, it wins . And if it is unreasonable, it loses . By contrast, the mandatoriness question is [*619]
expendable : it is important only because of the useful information it reveals for future decision
making by litigants, administrators, courts, and legislators . The third potential version of the Chevron
inquiry can be termed " optional two- step ." Importantly, this previously unidentified approach would
reverse the order of the traditional two steps. That is, optional two-step Chevron would first ask the
reasonableness question, and then it would give courts discretion to ask a second question regarding
mandatoriness. This reversed sequence helpfully prioritizes the indispensable and easier inquiry into
reasonableness, while postponing the optional, harder question of mandatoriness. The advantages and disadvantages of optional two-
step Chevron are discussed at length in Part II below.
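
Re's comparison reduces to three decision procedures over the same three possible outcomes (mandatory, reasonable, unreasonable). The short Python sketch below is an editorial illustration of that logic only; the function names, the booleans standing in for a court's findings, and the discretionary flag in the optional version are assumptions made for clarity, not anything proposed in the Re article.

def traditional_two_step(is_mandatory, is_reasonable):
    # Revised traditional two-step: the mandatoriness question is asked first, in every case.
    if is_mandatory:
        return "mandatory"       # step-one holding: the agency is bound to this reading
    return "reasonable" if is_reasonable else "unreasonable"

def one_step(is_reasonable):
    # Stephenson & Vermeule's one-step: only reasonableness is ever decided.
    return "reasonable" if is_reasonable else "unreasonable"   # mandatoriness never found

def optional_two_step(is_reasonable, is_mandatory, court_reaches_mandatoriness):
    # Re's optional two-step: reasonableness first, mandatoriness at the court's discretion.
    if not is_reasonable:
        return "unreasonable"
    if court_reaches_mandatoriness and is_mandatory:
        return "mandatory"
    return "reasonable"          # agency wins but remains free to reinterpret later

if __name__ == "__main__":
    # A reading that is in fact the only permissible one:
    print(traditional_two_step(True, True))        # -> mandatory
    print(one_step(True))                          # -> reasonable (mandatoriness obscured)
    print(optional_two_step(True, True, False))    # -> reasonable (finding postponed)
    print(optional_two_step(True, True, True))     # -> mandatory (durable finding preserved)

Run on a reading that is in fact mandatory, the one-step procedure reports only "reasonable," which is precisely the loss of information, and of a durable Brand X-style constraint, that the traditional and optional two-step procedures can preserve.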

Proactive yet flexible EPA regs prevent dangerous nano but capture its upsides
Reese 13 [Michelle Reese, J.D., 2013, Case Western Reserve University School of Law, “Nanotechnology: Using Co-regulation to Bring
Regulation of Modern Technologies into the 21st Century,” Health Matrix: Journal of Law Medicine, 23 Health Matrix 537, Fall, 2013, lexis]

Nevertheless, nanotechnology may also present new risks. Scientists are not sure whether nanotechnology poses any serious
health hazards to humans or the environment. Considering our wide exposure to nanotechnology, it
is critical that we identify
potential risks and impose regulations that strike a balance between accessing the benefits of nanotechnology
and limiting the foreseeable harm to the environment and public health . Nanotechnology is the manipulation of
matter on an atomic scale to create tiny, functional structures. n3 These structures are incredibly small: one nanometer is precisely one-
billionth of a meter. n4 Nanotechnology is defined as the production of materials that are between one and one-hundred nanometers in size.
n5 Although they cannot be seen with the naked eye, these microscopic structures called "nanoparticles" have been proven to benefit humans
in a variety of ways. For example, they can lead to new medical treatments. n6 They also can be used to develop [*539] building materials with
a very high strength-to-weight ratio. n7 Sunscreen and cosmetics that make use of nanoparticles apply more smoothly and evenly to human
skin. n8 Other examples of products that utilize nanoparticles include stain-resistant clothing, lightweight golf clubs, bicycles, car bumpers,
antimicrobial wound dressings, and synthetic bones. n9 While there are many benefits presented by nanotechnology, there are also potential
risks. Studies have indicated that nanoparticles called carbon nanotubes act like asbestos within the human body. n10 Cells that are exposed to
nanostructures called "buckyballs" n11 have been shown to undergo slowed or even halted cell division. n12 In general, the small size and high
surface-area-to-volume ratio of nanoparticles indicates a higher potential for toxicity. n13 The application of nanotechnology to drug
development has aided the treatment of common life-threatening diseases while concurrently posing toxic side effects. n14 For example,
carbon nanotubes n15 may be used to enhance cancer treatments, but there is also an indication that the nanotubes themselves might
ironically have a carcinogenic effect on the human body. n16 Certain nanoparticles can be used to enhance water filtration systems, but there
are concerns that the production of nanoscale products may lead to new types of water pollution. n17 Common [*540] to these examples is the
difficulty in determining whether the benefits of nanotechnology will outweigh the risks. One
place to turn for answers is the
regulatory agency tasked with investigating the risks posed by nanotechnology. The Environmental Protection
Agency (EPA) has the regulatory authority to assess the environmental and public health risks associated
with nanotechnology, and to prescribe regulations as needed to prevent or reduce those risks . n18
Unfortunately, authority to assess those risks does not mean the EPA has adequate tools to do so. n19 Nanotechnology is becoming ubiquitous
as the industry continues to expand, and new products are being created every day. n20 The
need for thorough risk assessment, followed
by appropriate risk management, is becoming more important as potential environmental and public exposure
to nanoparticles is becoming more common . n21 Nanotechnology is not categorically dangerous . n22 The
current danger is that it is unknown whether nanoparticles present any risks to the environment and
public health. As more common household products are created or enhanced with nanoparticles, public exposure to nanotechnology is
increasing rapidly. n23 This increasing public exposure indicates an urgent need for risk assessment. And as
exposure increases, it becomes more important that the EPA be able to determine what risks will
accompany that exposure, if any, so that it can properly balance the risks against the benefits and
promulgate the most effective rules . Generally speaking, the EPA is familiar with assessing risks and
regulating new products. The EPA has authority through the Toxic Substances Control Act (TSCA) to regulate chemical
manufacturing . n24 TSCA requires manufacturers to inform the EPA of the potential risks associated with a new product, or new uses for
an existing product, before production begins. n25 This gives the EPA an opportunity to prohibit or limit the
manufacturing of that substance. n26 While this seems [*541] to suggest that the EPA is well-equipped to manage
the potential risks of products containing nanoparticles, some say that TSCA is outdated and that it will be difficult to use this older
statute to regulate modern technology. n27
AI-nano combo causes extinction
Bostrom 14 [Nick, Professor in the Faculty of Philosophy at Oxford University. He is the founding Director of the Future of Humanity
Institute, Superintelligence: Paths, Dangers, Strategies, Oxford University Press, 2014]

An agent’s ability to shape humanity’s future depends not only on the absolute magnitude of the agent’s
own faculties and resources—how smart and energetic it is, how much capital it has, and so forth—but also on the relative
magnitude of its capabilities compared with those of other agents with conflicting goals . In a situation
where there are no competing agents, the absolute capability level of a superintelligence , so long as it exceeds
a certain minimal threshold, does not matter much, because a system starting out with some sufficient set of
capabilities could plot a course of development that will let it acquire any capabilities it initially lacks . We
alluded to this point earlier when we said that speed, quality, and collective superintelligence all have the same indirect reach. We alluded to it
again when we said that various subsets of superpowers, such as the intelligence amplification superpower or the strategizing and the social
manipulation superpowers, could be used to obtain the full complement. Consider a superintelligent agent with actuators
connected to a nanotech assembler. Such an agent is already powerful enough to overcome any natural
obstacles to its indefinite survival . Faced with no intelligent opposition, such an agent could plot a safe
course of development that would lead to its acquiring the complete inventory of technologies that
would be useful to the attainment of its goals. For example, it could develop the technology to build and
launch von Neumann probes, machines capable of interstellar travel that can use resources such as asteroids,
planets, and stars to make copies of themselves.13 By launching one von Neumann probe, the agent
could thus initiate an open-ended process of space colonization . The replicating probe’s descendants,
travelling at some significant fraction of the speed of light, would end up colonizing a substantial portion of the Hubble
volume, the part of the expanding universe that is theoretically accessible from where we are now. All this matter and free
energy could then be organized into whatever value structures maximize the originating agent’s utility
function integrated over cosmic time—a duration encompassing at least trillions of years before the aging universe becomes inhospitable to
information processing (see Box 7). The superintelligent agent could design the von Neumann probes to be
evolution-proof. This could be accomplished by careful quality control during the replication step. For example, the control software for a
daughter probe could be proofread multiple times before execution, and the software itself could use encryption and error-correcting code to
make it arbitrarily unlikely that any random mutation would be passed on to its descendants.14 The
proliferating population of
von Neumann probes would then securely preserve and transmit the originating agent’s values as they
go about settling the universe. When the colonization phase is completed, the original values would
determine the use made of all the accumulated resources, even though the great distances involved and
the accelerating speed of cosmic expansion would make it impossible for remote parts of the
infrastructure to communicate with one another . The upshot is that a large part of our future light cone would be formatted
in accordance with the preferences of the originating agent. This, then, is the measure of the indirect reach of any system
that faces no significant intelligent opposition and that starts out with a set of capabilities exceeding a
certain threshold. We can term the threshold the “wise-singleton sustainability threshold” (Figure 11)

Two-step Chevron is incoherent --- only the plan’s restriction can confine the scope of
deference to predictable bounds
Re 14 [Richard M. Re, J.D., Yale Law School, A.B., Harvard University, “Article: Should Chevron Have Two Steps?,” Spring, 2014, Indiana Law
Journal, 89 Ind. L.J. 605, lexis]

2. How the Data Inform Debates Over Chevron


As commentators have observed, the simple methodology employed above-though common in this area of scholarship-rests on a limited
sample as well as on a number of inevitably disputable judgments. n171 The resulting data thus provide only a rough indicator of actual
appellate practice. Nonetheless, the evidence suggests several conclusions. First, the data shed light on the picture of Chevron's logical
structure outlined above in Part I. For example, Tables 5 and 6 reflect that only two-step versions of Chevron generate mandatoriness findings.
Further, Table 7 suggests that step one of traditional two-step Chevron screens out agency interpretations that would separately fail as
unreasonable under step two, for only when courts view step two as arbitrariness review do steps one and two diverge. This result confirms, as
argued in Part I, that traditional two-step Chevron contains a redundancy in that invalidations under each step
are interchangeable. n172 The substantive equivalence of step-one and step-two invalidations finds
further support in the rare but remarkable practice of invalidating agency interpretations under both
steps, apparently for the same reasons. [*641] Second, the data clarify some of the practical stakes in choosing among the versions of
Chevron, particularly the choice whether to adopt one-step. As an initial matter, Table 6 suggests that whether courts apply one-step or two-
step has no significant effect on agencies' chances of victory. Instead, courts applying one-step and two-step Chevron both invalidated agency
interpretations just over a quarter of the time. This finding undermines claims that psychological burdens might make one- or two-step Chevron
more deferential. n173 Still, moving to one-step Chevron would have a significant effect on Chevron practice .
As Table 5 depicts, just over one-tenth of Chevron cases result in mandatoriness findings . Under one-step Chevron, those
results would transform into mere reasonableness determinations . Third, current appellate practice significantly
resembles, and may even implement, optional two-step Chevron. As noted in Part I, supporters of one- or two- step Chevron often argue that
their preferred proposals mirror actual Chevron practice and, therefore, that alternatives could be confusing or destabilizing. n174 The reality,
however, is that current practice is already highly heterogeneous-and has been that way for a long time. Indeed, Kerr's data from 1995 to 1996
show that over a quarter (28%) of all Chevron determinations turned on a one-step analysis. n175 If anything, courts' longstanding willingness
to alternate between the one- and two-step versions of Chevron-including in the Supreme Court n176 -suggests that
current Chevron practice broadly reflects a valid if unidentified approach: optional two-step . Finally, if
courts and commentators are to be faithful to existing practice, then Chevron discretion should be refined , not resisted. As the
data indicate, courts already exercise considerable discretion when implementing Chevron. Instead of downplaying that reality or
combating it by insisting on adherence to either one- or two-step across the board , commentators should aim
to perfect the decision-making discretion underlying current practice. The most pressing task for future research is therefore to determine how
judicial discretion in this area should be exercised. This Article has taken an initial step toward answering that question by mining qualified-
immunity doctrine for principles that might guide discretionary Chevron determinations. n177
CONCLUSION
Chevron is due for a redesign . Justice Scalia and prominent scholars have argued that Chevron's traditional two-step procedure is
redundant and should be reduced to a single step. But each of Chevron's two steps actually does unique work , as only
step one reveals whether an agency's interpretation is not just reasonable, but mandatory . Meanwhile, other
judges and scholars argue that Chevron does indeed have two steps- but only because the second step consists of arbitrariness review.
That approach compresses the distinct reasonableness and mandatoriness inquiries into an artificially
singular first step, while making step two redundant with arbitrariness under the APA . Moreover, the existing
debate has focused almost exclusively on descriptive or logical considerations, and so has overlooked that the structure of Chevron deference
raises important normative issues. For example, traditional two- step fosters the rapid development of precedent ,
whereas one-step enforces norms of judicial restraint . This Article has offered a circumspect defense of a new version of
Chevron called optional two-step , whereby courts have discretion to clarify the law by finding agency
interpretations to be mandatory . This hybrid approach seeks to balance the values of law-clarification ,
decision-making efficiency , and judicial restraint . And, though it has never before been identified, optional two-
step generally comports with recent practice in the Supreme Court and federal courts of appeals. n178
What is more, the normative case for optional two-step finds surprising support in the analogous domain of
qualified immunity, which has likewise given federal courts limited discretion to clarify the law, even
when doing so is unnecessary to resolve the case at hand. Yet the appeal of optional two-step depends on frankly
acknowledging that federal courts already exercise discretion in Chevron cases . By identifying new
doctrinal guideposts that might focus and legitimize their exercise of Chevron discretion, courts can
make progress toward redesigning Chevron .
1ac plan
The United States federal government should restrict judicial deference to agency interpretation through an optional two-step Chevron
framework.
1ac ftc adv
The plan overcomes legal processes that write blank checks for agency action ---
enabling courts to eschew mandatoriness findings when they cannot reliably assess them,
which checks judicial error
Re 14 [Richard M. Re, J.D., Yale Law School, A.B., Harvard University, “Article: Should Chevron Have Two Steps?,” Spring, 2014, Indiana Law
Journal, 89 Ind. L.J. 605, lexis]

D. The Psychological Burdens of Unnecessary Lawmaking In a provocative passage, Pearson offered an important insight into judicial psychology
that overlaps with an academic debate concerning Chevron deference. Because it is impossible to "specify the sequence in
which judges reach their conclusions in their own internal thought processes ," n125 Pearson recognized that
"there will be cases in which a court will rather quickly and easily decide that there was no violation of
clearly established law before turning to the more difficult question whether the relevant facts make out
a constitutional question at all." n126 [*632] Unfortunately, Pearson conflated this important point with the more general
observation that courts may not "devote as much care" when "uttering pronouncements that play no role in
their adjudication." n127 In other words, the Pearson Court felt that judges might give inadequate attention to constitutional issues
whose resolution (the judges anticipated) would not affect the disposition of the case at hand. Yet the implications of Pearson's psychological
insight are more significant than the Court let on. In short, the
psychological burdens associated with qualified immunity
potentially go to judicial bias, and not just judicial inattention . When a judge eyeballs a constitutional claim and quickly
ascertains that the officer didn't violate clearly established law, it is possible that the judge has implicitly adopted a negative view of the
plaintiff's underlying claim on the merits. The defendant, after all, has already been adjudged a reasonable officer, whereas the constitutional
claimant has become-quite literally-a loser. That
cognitive development may favor the government , perhaps in ways
that the judge does not consciously appreciate. Conversely, a judge who begins the analysis by assessing
the merits of a constitutional claim and finding a violation might then have difficulty stepping back to
determine whether, at the time of the violation, the officer's conduct was nonetheless reasonable. Analogous
psychological problems have garnered attention in the context of Chevron deference. As Justice Stephen Breyer
has written (and others have agreed n128 ): "It is difficult, after having examined a legal question in depth with the object of deciding it
correctly, to believe both that the agency's interpretation is legally wrong, and that its interpretation is reasonable." n129 In other words, step
one of traditional two-step Chevron forces judges to think about what Congress has unambiguously
tried to do in a particular provision, and undertaking that inquiry may prevent judges from impartially applying
step two . Stephenson and Vermeule count this concern as a point in favor of one-step Chevron. n130 By asking a singular question, the
argument goes, one-step frees courts from having to ask about "best" interpretations , n131 thereby allowing for
quick, unbiased conclusions as to whether agency interpretations are reasonable . n132 [*633] Optional two-
step replicates many of the benefits associated with one-step Chevron, while avoiding the psychological
difficulties suggested by Pearson. As an initial matter, optional two-step would allow courts to ask first , and often
exclusively , about the relatively easy question of reasonableness . And only after a court has assured
itself that the agency's reading is reasonable would it even consider whether to reach the more difficult
optional issue of mandatoriness. Thus, courts applying optional two-step would often both begin and
end their reasonableness analyses without asking about "best" interpretations at all . n133 And even when
courts applying optional two-step did reach the issue of mandatoriness, they would not encounter the psychological burdens suggested in
Pearson. As noted above, a court that immediately recognizes the existence of qualified immunity may have a hard time fairly contemplating
whether the admittedly "reasonable" defendant nonetheless acted unconstitutionally. But there is no similar tension in asking whether a
reasonable agency interpretation is also mandatory. n134 Knowing that an agency acted reasonably simply tees up the possibility that the
agency also acted in a way that was mandated by law. The first conclusion does not prejudice the second one. Given the above, traditional two-
step Chevron-like the "rigid order of battle" adopted in Saucier n135 -may pose special psychological burdens for judges. By contrast, one-step
avoids those problems, and optional two-step largely seems able to do so as well. E. The Incomplete Case for Optional Two-Step As promised,
each substantively distinct version of Chevron has its own strengths and weaknesses. Traditional two-step requires a relatively complex
analysis in every case and may impose significant psychological burdens on judges, thereby increasing
the risk of error; but it also ensures that courts seize every opportunity to find that agency
interpretations are mandatory . One-step is the simplest of the three options and accordingly minimizes the
possibility that psychological burdens might warp outcomes ; but its elegance comes at the steep price
of forgoing many opportunities to find statutory clarity . Optional two-step promises to avoid the
deficiencies of its stricter cousins. By embracing a discretionary decision-making process , optional two-
step would allow courts to reach the issue of mandatoriness when doing so is most beneficial and
least likely to result in error . But optional two-step can reliably achieve this goal only if courts identify
objective criteria to guide their Chevron discretion . As argued above, for example, courts might ask whether
many litigants have interpreted the statutory provision at issue. An affirmative answer would suggest
both that the provision is important enough to benefit from law-clarification, and that the court is
equipped to resolve the mandatoriness question . [*634] Yet the normative case for optional two-step remains incomplete
absent consideration of actual judicial practice. To address those important issues, the next Part turns to the empirical structure of Chevron
deference. III. THE EMPIRICAL STRUCTURE OF CHEVRON DEFERENCE Arguments for one or another version of Chevron invariably rest on
controversial claims about actual judicial practice. For example, commentators often assert that their own views accord with the weight of the
case law, that opposing views of Chevron threaten disruption of the status quo, or that various proposed reforms either would or would not
meaningfully change existing trends. n136 This Part tackles the foregoing empirical questions by examining a number of important recent
decisions, as well as by examining all published federal court-of-appeals decisions citing Chevron in 2011. Because (as we have seen) there are
three substantively distinct versions of Chevron, and each has its own unique set of advantages and disadvantages, it should be no surprise that
courts have at various times appeared to adopt different solutions . Still, overall practice in federal
appellate courts broadly supports the optional two-step approach -even though current doctrine has
not identified that approach or justified its sub silentio revision of the traditional two-step Chevron
framework. A. In the Supreme Court Recent Supreme Court cases frequently depict Chevron as a two-step inquiry. n137 In Mayo
Foundation for Medical Education and Research v. United States, for example, the Court discussed and applied each of Chevron's two steps at
length before expressly upholding an agency interpretation as "reasonable" at step two. n138 In a similar vein, Judulang v. Holder discussed
"the second step of the test we announced in Chevron," while comparing it with arbitrariness review under the Administrative Procedure Act.
n139 And in Roberts v. Sea-Land Services, Inc., a particular statute provided "unambiguous" support for the agency's proffered reading, thereby
allowing the Court to resolve the case without asking the step-two question whether the agency's view was "entitled to deference." n140 But
many recent decisions cast Chevron as an essentially unitary inquiry. In Holder v. Gutierrez, for example, the Court held that an agency's
interpretation "prevails if it is a reasonable construction of the statute, whether or not it is the only [*635] possible interpretation or even the
one a court might think best." n141 And, in upholding the agency's interpretation, the Court distinguished between the questions of
reasonableness and mandatoriness: "We think the BIA's view on imputation meets that standard, and so need not decide if the statute permits
any other construction." n142 Likewise, Astrue v. Capato said that "even if the [Social Security Administration's] longstanding interpretation is
not the only reasonable one, it is at least a permissible construction that garners the Court's respect under Chevron." n143 In both of these
cases, the Court expressly declined to determine whether the statute unambiguously favored the agency's reading, as would traditionally be
required under Chevron step one. On balance, the Supreme Court's recent cases appear to reflect an optional two-
step approach . In cases like Sea-Land Services, the Court finds agency interpretations to be not just reasonable, but mandatory. At the
same time, the Court often declines to reach the issue of mandatoriness. So the Court sometimes asks about
mandatoriness and sometimes chooses not to do so-just as optional two-step would recommend . Even
Justice Scalia's recent opinion citing Stephenson and Vermeule could be read to accommodate discretion in this area. To be sure, Justice Scalia
asserted- incorrectly n144 -that "[w]hether a particular statute is ambiguous makes no difference if the
interpretation adopted by the agency is clearly reasonable-and it would be a waste of time to conduct
that inquiry." n145 But Justice Scalia also said that "'Step 1' has never been an essential part of Chevron analysis" and is "hardly
mandatory." n146 In saying that "Step 1" should be viewed as unessential and non-mandatory, Justice Scalia may have left open the possibility
that the mandatoriness question is warranted in some cases. And that, as we have seen, is optional two-step. In effect, the Court has quietly
brought its agency-deference and qualified- immunity doctrines into alignment. Though unnecessary lawmaking was originally deemed
obligatory both in Chevron and in Saucier, the Court has gradually backed off those stringent demands in favor of a discretionary approach. But
while Pearson made that doctrinal transition explicit in the qualified-immunity context, the Court has so far failed to clarify that Chevron's two-step approach has likewise become optional. n147 It is time to do so.

Otherwise, the overly-aggressive deference from Brand X ensures FTC overreach on antitrust law
Hurwitz 14 [Justin (Gus) Hurwitz, Assistant Professor of Law, University of Nebraska College of Law, Winter, 2014, “CHEVRON AND THE LIMITS
OF ADMINISTRATIVE ANTITRUST,” University of Pittsburgh Law Review, 76 U. Pitt. L. Rev. 209, lexis]
Perhaps the most enduring explanation for why Chevron does not apply to FTC interpretations of Section 5 is that its authority overlaps with
that of the DOJ's antitrust authority, and Chevron does not apply to agencies with overlapping statutory authority. n201 As an initial matter, it is
simply not accurate to categorically say that Chevron does not apply in cases of overlapping statutory authority. n202 [*255] Modern Chevron
analysis treats this as a Step-Zero question, acknowledging that Congress may have intended such overlapping delegations of statutory
authority or may have intended one agency to retain interpretive authority at the expense of the other. n203 But there is a more fundamental
problem with this argument against the application of Chevron. Section 5 does not (necessarily) overlap with the antitrust laws. Section
5
was enacted precisely to extend to conduct outside the scope of the antitrust laws--and nothing in Section 5 references the antitrust
laws. It is surely the case that current understandings of Section 5 incorporate the antitrust laws. But
this is a judicial construction of the statute . n204 Under Brand X , that construction is not binding on the FTC ;
and even if the FTC has previously endorsed that understanding, this poses little, if any, obstacle to the FTC changing this interpretation under
Fox I. n205 Fundamentally, this argument is premised on a circular understanding that the FTC is not entitled to deference because the courts
have interpreted Section 5 in a way that precludes deference. It surely is the case that the FTC cannot interpret Section 5 in a way that
constrains other agencies' (or the courts') interpretations of the antitrust laws. But so long as it is interpreting Section 5, the mere existence of a
cognate area of law does not limit the applicability of Chevron. This must especially be the case where, as here, Congress delegated authority
precisely because it felt that that cognate area of law was too constricted n206 --a concern redoubled because Congress was also concerned
that part of this constriction was due to the courts. n207 [*256] 4. Does Chevron Not Apply if Courts Have Previously Acted? This case of
circularity highlights another misconception about the application of Chevron to Section 5: that "courts are most likely to defer to administrative agency judgments . . . about which the courts have not developed a deeply rooted body of precedent." n208 Brand X speaks directly to this argument: Existing judicial precedent is relevant to the interpretation of an agency's statute that is
"otherwise entitled to Chevron deference only if the prior court decision holds that its construction follows from the unambiguous terms of the
statute and thus leaves no room for agency discretion." n209 While some courts may have held historically to judicial precedent over agency
interpretation of ambiguous statutory meaning, this is not the modern approach. It may be the case that prior judicial interpretations present
concerns that must be addressed for a new statutory construction to be adopted, n210 but this concern applies the same to prior agency
constructions as to prior judicial constructions. 5. Does Chevron Not Apply Because Courts, Not Agencies, Develop Common Law? Possibly the
most fascinating argument against the applicability of Chevron to Section 5 is that interpretation of Section 5 is akin to common-law lawmaking,
not merely resolving statutory ambiguity--and that such power is beyond the scope of Chevron. This understanding is carried by--or pushes
against--deep currents in administrative law. As explained by Crane: [The FTCA is] not merely susceptible to two or more plausible readings but
[is] essentially [a] delegation[] to either courts or agencies--which one is the question--to create a federal antitrust common law within a
specified remedial structure. In the case of the [FTCA], at least, there is evidence that Congress intended to delegate to the FTC, not the courts,
the primary responsibility for developing a body of antitrust common law. But courts tend to be jealous about the creation of common law,
which they view as their distinct prerogative. [*257] Under at least one view of administrative law, the more an agency's decision veers in the
direction of policy making and away from interpretation of a statute, the more intrusive judicial review should be. Even if courts are otherwise
willing to give agencies policy-making breathing room, they may be reluctant to do so when the agency's norm creation is structured as a
common law process--a judicial archetype. n211 Of course, "jealousy" is not a jurisprudential theory. But, as Crane notes, this view is shared by
some administrative law scholars. For instance, David Zaring has offered a realist understanding of Chevron, arguing that courts defer to
agencies based upon their assessment of whether the agencies are acting reasonably, instead of based on Chevron's stated standard. n212 But
the trend of modern administrative law--as crafted by the Supreme Court--runs the opposite direction. As explained previously, agencies
have won the war of interpretive authority . n213 This was first strongly seen in Brand X , where the Court held that prior
judicial interpretations of ambiguous statutes are not binding on agencies. n214 The point was made even more directly in American Electric
Power, where the court held that congressional delegation of interpretative authority--based on the same reasoning used in Chevron--displaces
federal common law. n215 In other words, the Court has spoken to Crane's concern that the FTC Act is "[a] delegation[] to either courts or
agencies--which one [being] the question--to create a federal antitrust common law" in several other contexts, and it has come down on the side of the agencies. [*258] I do not mean to overstate this conclusion; this is still an evolving area of administrative law. But the trend
appears to be toward greater deference to agencies to develop federal common law. It is the path suggested by
Brand X . Mead and Chenery II make clear that agencies can receive deference when acting through common law-like processes; n216 it is
the path suggested by Fox I; n217 it is the path suggested by American Electric Power. n218 C. Indiana Federation of Dentists
Supports Application of Chevron Indiana Federation of Dentists bears special discussion. It is regularly cited for the proposition that courts
conduct de novo review of FTC legal determinations under Section 5, according some (limited) deference to the FTC. n219 It also bears special
note as it is the most recent Supreme Court opinion considering the application of Section 5's prohibition against unfair methods of
competition--the only opinion to do so since Chevron. n220 The most cited passage from Indiana Federation of Dentists explains that: The legal
issues presented--that is, the identification of governing legal standards and their application to the facts found--are, by contrast, for the courts
to resolve, although even in considering such issues the courts are to give some deference to the [FTC's] informed judgment that a particular
commercial practice is to be condemned as "unfair." n221 [*259] This language has been cited as requiring de novo review of all legal
questions, including the legal meaning of Section 5. n222 Dan Crane has called this an "odd standard," n223 noting that ordinarily "this is
technically a question of Chevron deference, although the courts have not articulated it that way in the antitrust space." n224 Indeed, it seems
remarkable that Indiana Federation of Dentists does not even mention Chevron--a fact that has led antitrust commentators to believe that
"[o]ne cannot explain judicial posture in the antitrust arena in Chevron terms." n225 But this is an over-reading of Indiana Federation of
Dentists. Indeed, the case can instead be read as entirely in line with Chevron. First, it is unsurprising that Indiana Federation of Dentists does
not cite Chevron. The Indiana Federation of Dentists petitioned for certiorari from a Seventh Circuit opinion that had been argued before
Chevron was decided, and the FTC was arguing for an uncontroversial interpretation of Section 5 as applying Section 1 of the Sherman Act.
n226 In other words, the FTC had never structured its case to seek deference, and it had no need to argue for any deference before the Court.
Given the case's history and posture, it would have been more surprising had the parties or the Court cited to Chevron. Moreover, it took
several years for the importance of Chevron to become understood and to filter its way into judicial review of agency statutory interpretation.
Over the next several years, the circuits regularly cited Indiana Federation of Dentists to explain the standard of review for an agency's
interpretation of its organic statutes. n227 Importantly, these cases recognized that there was some confusion as to the changing standard of
review, n228 framed their [*260] analysis in terms of Skidmore (the precursor to Chevron in this line of cases), n229 and largely reached
Chevron-like conclusions, despite Indiana Federation of Dentists's suggestion of a lower level of deference. n230 Perhaps most importantly,
today it is Chevron, not Indiana Federation of Dentists, that is recognized as the law of the land--at least, for every regulatory agency other than
the FTC. Indeed, a close reading of Indiana Federation of Dentists finds that it accords with Chevron. The continuation of the paragraph quoted
above goes on to explain that: The standard of "unfairness" under the [FTCA] is, by necessity, an elusive one, encompassing not only practices
that violate the Sherman Act and the other antitrust laws, but also practices that the [FTC] determines are against public policy for other
reasons. Once the [FTC] has chosen a particular legal rationale for holding a practice to be unfair, however, familiar principles of administrative
law dictate that its decision must stand or fall on that basis, and a reviewing court may not consider other reasons why the practice might be
deemed unfair. In the case now before us, the sole basis of the FTC's finding of an unfair method of competition was [its] conclusion that the
[alleged conduct] was an unreasonable and conspiratorial restraint of trade in violation of § 1 of the Sherman Act. Accordingly, the legal
question before us is whether the [FTC's] factual findings, if supported by evidence, make out a violation of Sherman Act § 1. n231 This
language alters the paragraph's initial proposition that the legal issues are for determination by the courts. Rather, the Court recognizes that
Section 5 is inherently ambiguous. It is, therefore, up to the FTC to choose the legal standard [*261] under which that conduct will be reviewed:
"[A] reviewing court may not consider other reasons why the practice might be deemed unfair." This is precisely the standard established by
Chevron: First, the courts determine whether the statute is ambiguous, and, if it is not, the court's reading of the statute is binding; but if it is
ambiguous, the court defers to the agency's construction. n232 Part of why Chevron is a difficult test is that both parts of this analysis do, in
fact, present legal questions for the court. The first step is purely legal, as the court determines on its own whether the statute is ambiguous.
Then, at step two, the legal question is whether the agency correctly applied the facts to its declared legal standard--as the Court recognized in
Indiana Federation of Dentists, "the legal question before us is whether the FTC's factual findings make out a violation of Sherman Act § 1."
n233 Thus, the opening, oft-quoted first sentence of the paragraph n234 is correct and in accord with Chevron: The legal issues presented are
for the courts to resolve--but according to the legal standard prescribed by the FTC. The most likely reason that Indiana Federation of Dentists is
viewed as the standard of review for the FTC's interpretation of Section 5 is because the FTC has not sought greater deference. This is in part
because, where the FTC couches enforcement of Section 5 in the antitrust laws, it can safely rely on judicially-crafted understandings of the
antitrust laws without any need to seek deference. Thus, in Schering-Plough, where the FTC's finding was based on Section 1 of the Sherman
Act, the FTC's brief recounted Indiana Federation of Dentists as requiring de novo review of its legal determinations--a standard that was then
used by the Eleventh Circuit in its opinion. n235 But it is also surely in part because the FTC has been reluctant to advance a more deferential
standard (shell-shocked as it [*262] is from pre-Chevron losses) n236 and has failed to recognize the current agency-deferential state of
administrative law. V. LIVING WITH A CHEVRON-SUPERCHARGED SECTION 5 Thus far, this Article has argued that Chevron applies to FTC
interpretations of Section 5. That Chevron applies does not necessarily mean that courts will uphold any given agency interpretation. That is,
FTC interpretations of Section 5 pass muster at Chevron step zero--but we still need to consider whether such interpretations are likely to pass
muster at steps one and two. This Article has so far taken no position on whether such deference is normatively desirable, should it be granted.
These questions are addressed below, starting with a brief discussion of how the FTC is likely to fare under Chevron steps one and two and then
turning to the normative question. The FTC has shown an alarming willingness in recent years to threaten litigation under Section 5 without
feeling the need to define its understanding of Section 5's contours. It has leveraged the uncertain bounds of Section 5 to demand extrajudicial
settlements from numerous firms, especially in high-tech industries. This has occurred even with the understanding that the FTC is not entitled
to Chevron deference. This article's argument that the agency will, by and large, receive deference may have the regrettable effect of
strengthening the agency's strong-arm settlement tactics. Sections V.B, V.C, and V.D argue that the FTC should not have such broad power.
These sections also consider possible challenges to its use of that power. Finally, these sections argue for possible administrative and statutory
changes to rein in the agency's power. A. Do FTC Constructions of Section 5 Pass Chevron Steps One and Two? That Chevron applies to FTC
constructions of Section 5 does not necessarily mean that the courts will defer to agency constructions of the statute. Actual deference in any
specific case will turn on the Chevron step one and two inquiries concerning whether the statute is ambiguous and, if so, whether the agency's
interpretation is a permissible construction. Given the inherently and deliberately ambiguous nature of Section 5, it seems very likely that any
agency action to [*263] regulate the conduct of a firm will satisfy Chevron's step one inquiry, provided that it is arguably related to competition.
n237 It is more difficult to consider whether an agency construction of Section 5 would pass Chevron step-two without knowing the specific
construction in question. At this stage, the question is whether the specific construction is permissible. Here too, however, it seems likely
that any agency construction would be deemed permissible . As discussed previously, there is substantial debate within
the antitrust literature on what constitutes anticompetitive conduct, n238 and it is a near certainty that a court would deem as permissible any
FTC construction of Section 5 arguably in line with non-fringe understandings of what constitutes anticompetitive conduct under the Sherman
or Clayton Acts. This conclusion would likely hold even where the FTC may disagree with judicial constructions of the Sherman and Clayton Acts.
That alone would greatly expand the scope of Section 5 vis-a-vis current understandings of antitrust law, but Section 5 is not constrained by the
Sherman and Clayton Acts, and there is no reason to think that FTC interpretations of "unfair" would be constrained by economic logic. While
the FTC's separate unfair acts and practices authority is expressly constrained by a consumer welfare test, its unfair method of competition
authority is not. The history of the FTCA offers a sufficient basis for courts to find almost any construction of an "unfair" method of competition
permissible--even if that construction is based in supremely uneconomic logic. This history, moreover, offers little, if anything to suggest that
such a construction is impermissible. n239 Thus, it is likely that the FTC could construe any form of conduct (i.e., a "method") that harms
anyone (i.e., "unfair") operating in the same product market as the entity engaging in that conduct (i.e., "competition") to be an unfair method
of competition. It must be emphasized that without a particular agency construction to consider, this discussion is speculative. Regardless, it
serves to make two points: first, that the breadth of constructions likely to be considered permissible is very [*264] large; and second, that the
proper forum in which to challenge such interpretations is not before the Article III courts. Given the breadth of the statute, once the matter
has reached that point, there is great weight in favor of the FTC's position receiving Chevron deference. Rather, challenges to the agency
interpretation must be made either before the agency (a challenging proposition) or by appeal to Congress for legislative change. B. The FTC
Should Not Have This Power n240 While Congress did give the FTC very broad power, it did not give the FTC unbounded
power . Unfortunately, the ambiguity in the agency's power, and the ways in which the agency uses that ambiguity, has yielded an agency
with near boundless power to regulate the economy largely unconstrained by judicial review . To understand
this, we must understand how the FTC has wielded its Section 5 authority in recent years. The scope of Section 5 is unclear. This is substantially
because the FTC has declined to explain what it believes the scope to be. Lacking such explanation, firms must live in constant fear
of the agency's potential vigilance . The possibility that the agency may challenge a firm's conduct is a daunting one, especially
because the FTC may elect to first challenge the conduct internally through an administrative hearing. n241 Should the defendant-firm lose,
that decision may be appealed only to the full FTC. Until recently, the FTC never failed to uphold a complaint under its review. n242 Effectively,
then ,
it is only after multiple [*265] years and two complete rounds of litigation that the matter can be
appealed to an Article III tribunal . n243 In other words, if the FTC challenges a firm's conduct, defending that
conduct is extremely expensive. n244 It is also probabilistic due to the ambiguity inherent in Section 5. The FTC has broad power to
challenge conduct that may not be an unfair method of competition with little concern that a firm will attempt to defend itself. Rather, firms do
a cost-benefit analysis and decide to settle with the agency, often agreeing to decades-long oversight of their business practices. n245 In this
way, the agency wields the uncertain boundaries of Section 5 as a weapon .
The possibility that the FTC would broadly
receive Chevron deference for its [*266] construction of these boundaries is a force multiplier, giving
firms even less incentive to defend their innocent conduct . n246 The most problematic aspect of the FTC's approach to
using the threat of litigation to extract consent decrees is that this approach yields little if any official statement of the FTC's interpretation of
Section 5 or any record of the FTC's reasoning. Such records are important. They
provide firms with notice of both the
agency's interpretation of Section 5 and the reasons for that interpretation . They may offer some constraints on the
agency's ability to subsequently change its interpretation. This is true even under Fox I, in which the Court gave agencies broad latitude to
adopt new understandings of an ambiguous statute even in the face of prior, contrary understandings. n247 If the agency has a longstanding
construction of a given statute, it may need to address why it has changed that construction. n248 Similarly, in explaining its basis for adopting
a given construction, the agency may need to address (and contradict) the
changed circumstances of prior justifications in
order to change its construction--particularly where the prior policy was based on factual assumptions
that have not changed. n249 Perhaps most importantly, it provides Congress with information about the agency's performance and
consistency--information that is necessary both for effective oversight and to indicate to Congress where statutory changes may be necessary.

Simons will interpret the FTC Act to aggressively break up tech companies
Winston & Strawn 18 [Winston & Strawn LLP, “The FTC’s Sharpened Focus on Antitrust Enforcement Against Big Tech, Internet, and Social
Media,” April 16, 2018, https://www.lexology.com/library/detail.aspx?g=9e6f479a-b3c6-494a-bdeb-7b4f1319560c]

A potentially significant shift in antitrust enforcement related to big tech, internet, and social media companies has been signaled by recent
testimony of the incoming nominees likely to lead the Federal Trade Commission ("FTC"). For example, Joseph Simons, President Trump's
nominee for chairman of the FTC, testified last month at his confirmation hearing before the U.S. Senate Commerce, Science, and
Transportation Committee, suggesting that the FTC may take a more vigorous approach to enforcement actions against tech companies,
particularly in the Internet and social media space. Simons, a veteran director of the FTC's Bureau of Competition under the Bush
Administration, is widely expected to be appointed chairman in the near future. Simons did not temper
his position by reference to the importance of incentivizing innovation in technology and Internet markets. Nor did Simons address the
complexities associated with analyzing market definition and power in these unique and nontraditional markets that are typically characterized
by low barriers to entry. Christine Wilson, another nominee, additionally noted that it made sense to "take another look" at concerns that have
been raised in the past, given the evolution of technology. This suggests that the Trump Administration may even reconsider the Obama
Administration's decision to not pursue some previous high-profile complaints in the tech sector. Simons was asked to respond to two
questions reflecting concerns that big tech and social media should be subject to more antitrust scrutiny. His first response noted that big is
sometimes good and sometimes bad, generally tracking monopolization law to say that the FTC should enforce the antitrust laws and attack
conduct by companies that are "big and influential" and that use inappropriate or anticompetitive means "to get big or stay big" (i.e., to gain or
maintain monopoly power). In response to Senator Cruz's concern about the unprecedented size, scope, and power of big tech companies,
however, Simons described enforcing the antitrust laws against such companies as being akin to the reason why Jesse James robbed banks, in
that it was "where the money is." Simons added that the "place most likely to have antitrust problems is places that have market power" so
"those are the places you want to look the most" and that "if some anticompetitive conduct is occurring there, that is where you get a big bang
for The FTC nominees also emphasized that data privacy and security issues will remain a key priority for antitrust enforcement going forward,
as the agencies attempt to grapple with the harmful exposure that consumers face, which will necessarily--and disproportionately--impact tech
companies that use, gather, and monetize data on their platforms or those provided by others. About a week after the confirmation hearing,
DOJ Deputy Assistant Attorney General Roger Alford implied that the Antitrust Division would continue to pursue a balanced approach to
enforcement surrounding digitalization, online platforms, and data applications, while taking into consideration the often procompetitive
benefits surrounding such innovation. DAAG Alford indicated that the Division would remain vigilant in examining the growth and evolution of
various innovative applications in the tech sector, spanning from pricing algorithms to standard essential patents and data and information
sharing. DAAG Alford, however, also pointedly argued that intensive innovation itself contributes to growth such that "the stakes could not be
greater in deciding whether or not we will enforce antitrust laws to promote innovation." The remarks by Simons and Wilson suggest that the
new leadership of the FTC may pursue tech, Internet, and social media companies more aggressively in the near future than the antitrust
agencies have historically. Prior speeches by Chairman Ohlhausen and Commissioner Ramirez expressed concern about the potential chilling
effects on innovation that antitrust enforcement may have. Similarly, the FTC's statement in 2013, during the Obama Administration, regarding
its decision to close the investigation into online search practices without taking any action, reflected a focus on consumer benefits relating to
the technology's product design and improvements. This potential enforcement shift magnifies the need for more robust and nuanced antitrust
counseling and compliance efforts on the front end as the agencies pursue uncharted territory with increased interest .
One thing to watch for will be to see whether the FTC seeks to breathe renewed life into the FTC Act, which could be used more expansively to
reach conduct that does not fall squarely within the Sherman Act's prohibitions on conspiracy and monopolization.

That ensures ad hoc rulemaking that decimates the tech sector
Hurwitz 14 [Justin (Gus) Hurwitz, Assistant Professor of Law, University of Nebraska College of Law, Winter, 2014, “CHEVRON AND THE LIMITS
OF ADMINISTRATIVE ANTITRUST,” University of Pittsburgh Law Review, 76 U. Pitt. L. Rev. 209, lexis]

CONCLUSION The FTC's authority under Section 5 to proscribe unfair methods of competition is a broad and untapped source of power.
Historically interpreted by the courts and treated as coterminous with judicially-defined antitrust laws, the FTC is just beginning to test the
limits of its administrative antitrust authority in the modern administrative state--and especially in the age of Chevron . This article
argues that the FTC is likely to receive great deference from the courts in its use of that power. The potential scope of the FTC's
newfound power is problematic, particularly given the FTC's recent consent decree-based approach to developing legal norms and its interest in
the high-tech and information sectors of the economy . One-hundred years ago, the FTC was given broad powers so that it
would have the agility and expertise to craft rules required to give stability to, and constrain the excesses of, complex industries. But today it
has taken that flexibility to the extreme, foregoing entirely the pretense of developing rules . Rather, it
is developing ad hoc rules in an effort to keep up with the economy's most dynamic, innovative, and
competitive industries . As in the 1970s, this approach has more ability to harm consumers than to protect
them .

Antitrust actions against tech destroy innovation, particularly in cloud computing ---
but antitrust enforcement must be retained to flush out obvious anticompetitive
behavior
Rosoff 17 [Matt Rosoff, editorial director of technology coverage at CNBC.com in San Francisco, most recently an executive editor at Business
Insider, “The idea of using antitrust to break up tech 'monopolies' is spectacularly wrong,” CNBC, April 23, 2017,
https://www.cnbc.com/2017/04/23/why-antitrust-should-not-be-used-against-tech-monopolies.html]

A pair of editorials in The New York Times and Business Insider exclaimed recently that the power in the tech industry is concentrated among
too few companies, with both publications calling for a new round of antitrust regulation akin to the Department of Justice action against
Microsoft in the 1990s. This argument is stunningly, spectacularly wrong. Yes, the five big tech companies—Alphabet (Google), Amazon, Apple,
Facebook, and Microsoft—are more powerful collectively than the tech industry has ever been. They're the five largest companies in the U.S.,
as measured by market cap, and have been driving most of the stock market's gains since January. It's also easy to argue, as Matt Stoller does in
the Business Insider piece, that innovation in the tech industry is in a lull. Silly venture capital-funded companies like Juicero, which sells a $400
juicer, are a highly visible example. (A recent Bloomberg investigation showed that the juice packets could actually be squeezed by hand to the
nearly same effect as the $400 juicer, which apparently irritated some of the start-up's early investors.) Yet what both of these facts actually
demonstrate is that the tech industry is one of the nation's most vibrant, subject to constant competition and disruption— precisely the
opposite of the market characteristics antitrust law was meant to stop . Consider: The Big Five are in constant
competition. The fact that there are five powerful companies at the top of this industry, rather than one (as was arguably the case with
Microsoft in the 1990s) should be a clear clue that the tech industry is exceptionally vibrant. In fact, it's not clear that any of these companies
has an actual monopoly, and it depends on how you define the market. Does Google have a monopoly in the search market? Probably. But it
makes its money from online advertising, where it faces clear competition from Facebook. Amazon arguably has a monopoly only if you define
e-commerce as a separate market from retail. Apple doesn't seem to have a monopoly anywhere. But more to the point, these five companies
are in constant battle, both at the margins and in their core areas of business. Consider the following: Apple invented the modern smartphone
business with the iPhone in 2007, but Google quickly rolled out a competing platform, Android, and licensed it broadly to the point where it
now has more than 80 percent of the global market; Amazon is constantly improving product search in an effort to undercut one of Google's
core sources of revenue—search ads that appear when the user seeks information on a particular product; Facebook is competing against
Google for every dollar available in online advertising, particularly in video; Apple has its own suite of mobile productivity apps that compete
with Microsoft's Office apps on its devices, while Google has a strong online version of these kinds of apps; Amazon, Microsoft, and Google are
in brutal competition for the cloud computing market, which itself is disrupting traditional software vendors like Oracle and SAP, with hundreds
of billions of dollars of corporate IT budgets at stake. And on and on. This isn't a case of five companies sitting comfortably on their piles of gold
and colluding to stay out of each other's core areas. It's all-out war, year after year . The evidence of fast disruption in the industry
is clear. Contrary to Stoller's argument, Google did not beat Microsoft because of antitrust litigation; the areas where Microsoft was restricted
from competing related to web browsers and forcing PC makers to accept and reject certain software as a condition for getting Windows.
Google became a threat to Microsoft because it solved an entirely different problem that Microsoft hadn't even been focused on—organizing
the burgeoning mass of information on the Internet in a way that made it easy for people to find what they were looking for. By the time
Microsoft woke up and tried to beat Google with its own search engine, MSN Search (later Bing) in 2005, it was already too late. There are
plenty of other examples. As recently as 2007, Microsoft had the only operating system that mattered—Windows. A decade later, Windows is
in third place behind Google's Android and Apple's iOS, which conquered mobile computing devices and caught Microsoft flat-footed. Facebook
swept into an online advertising market dominated by Google in 2012, and ended up capturing a huge portion of mobile online ads, catching
Google flat-footed. And on and on. A vibrant start-up market is a sign of competition . Yes, Juicero was a silly idea.
(Although as former Microsoft exec and current venture capitalist Steven Sinofsky pointed out to me on Twitter, the company that owns the
similar Keurig drink-pod system sold for $14 billion in 2015.) But the fact that these silly ideas are getting funded is a sign of vibrancy, not a sign
that innovation is being squashed by monopolists. Take for example Snap, which is losing hundreds of millions of dollars a year, but was funded
by venture capitalists to the tune of $2.65 billion before going public earlier this year at a valuation over $20 billion. It is now causing enough
panic at Facebook that the company is imitating Snapchat's core features as fast as its developers can code them. Or look at Jet.com, a venture-
funded competitor to Amazon that Wal-Mart snapped up for over $3 billion last year. Or on the enterprise side, Okta set out to solve a problem
in an area that Microsoft had dominated for most of the last decade—how to sign employees in to the apps they need to use, without making
them enter a username and password every time. Microsoft's solution was designed back when most companies used apps from a few vendors,
running on their own in-house computers; Okta saw that companies were moving toward using cloud-based apps from a wide variety of
vendors and exploited that niche. It went public earlier this month at a market cap of over $2 billion. Price competition benefiting customers.
U.S. antitrust law focuses on harm to consumers—it's not enough for a company to be dominant, it must be using that dominance to raise
prices or lower selection for consumers. (It's different in Europe, where the dominance itself can be cause for restriction.) The evidence is quite
to the contrary. Amazon is in brutal competition with physical retailers to offer consumers the lowest prices on almost anything they could
want to buy. Google and Facebook offer their services for free to consumers, and are fiercely competitive when it comes to their paying
customers—advertisers. Meanwhile, Amazon, Microsoft, and Google are locked in a price war for cloud
computing. Venture capital-subsidized start-ups like Uber give consumers more choices at lower prices than they've ever had. Antitrust is a
blunt instrument. No doubt, these five companies are powerful. There may indeed be cases where regulators need to step in and restrict these
companies from harming consumers. For instance, you could argue that Google and Facebook have too much information about people's web-
surfing and buying habits, and that they should be subject to strict privacy restrictions on how they use that information. You could argue that
Amazon's cutthroat negotiations with suppliers will have a long-term negative effect on price and selection by forcing smaller retailers and e-
tailers out of business, and look for ways to regulate that. Regulators should certainly be on the lookout for any evidence of collusion between
the big powers in tech—as happened with the class action suit over employee salaries at Apple, Google, and several other big companies that
was settled in 2015. But antitrust law is a blunt instrument meant to be used in cases of obvious market dominance
that's clearly hurting consumers. That's not at all what the tech industry looks like today .

Wrecks US tech, causes internet fragmentation in tech ecosystems


Atkinson 18 [Robert D. Atkinson, “Don’t Fear the Titans of Tech and Telecom,” Information Technology & Innovation Foundation, July 2, 2018,
https://itif.org/publications/2018/07/02/dont-fear-titans-tech-and-telecom]

We are bombarded on an almost daily basis with some new angst-ridden lament about the inexorable rise of “Big Tech” or “Big Broadband.”
The concern is that tech and Internet companies like Google, Apple, Facebook, and Amazon (or “GAFA,” as European policymakers brand
them), along with broadband companies like AT&T, Verizon, and Comcast, are just too big and therefore too powerful: They are present at
every turn in our digital lives, from the devices and apps we use to manage our daily affairs and navigate the world, to the platforms we
frequent, to the communications infrastructure connecting it all. What’s to stop them from using that power for self-interested reasons, and in
the process distorting the economy and threatening not only fundamental liberties such as free speech, but our very democracy? These fears,
while understandable, are largely unwarranted. After all, Americans have long distrusted centralized power, whether in our philosophy of
government or our attitudes toward business. The Progressive-era antitrust crusader and Supreme Court Justice Louis Brandeis once wrote,
“The doctrine of the separation of powers was adopted … not to promote efficiency but to preclude the exercise of arbitrary power.” The way
to “save the people from autocracy,” he said, is by building “friction” into the system. These days, when it comes to fears about Big Tech and
Big Broadband, there are two main proposals for building in that friction. One is wielding the antitrust cudgel to break up some of these big
companies. The other is to turn some of them into regulated public utilities. But either way, the cure would be worse than the disease.
Breaking up Big Tech or reconstituting Big Broadband as a public utility would grievously wound the U.S.
information technology and communications ecosystem , which is otherwise the envy of the world. Many of the attacks on Big
Tech and Big Broadband stem from the same fear: that they will use their power to limit our freedom or otherwise manipulate our lives. We are
regularly told that large Internet service providers (ISPs) will use their last-mile “pipe” to control the Internet traffic we see. Katrina Vanden
Heuvel, editor of The Nation, warns that “if net neutrality is eliminated, these media monopolists will restructure how the internet works,
creating information super-highways for corporate and political elites and digital dirt roads for those who can’t afford the corporate tolls.”
Rebecca Vallas, an analyst at the Center for American Progress, writes that absent strong net neutrality laws, ISPs would capriciously block
YouTube channels like “The Misadventures of Awkward Black Girl,” presumably out of some latent or even overt racism. Big Tech is even less
trusted by others. DC economic consultant Ev Ehrlich asks, “How do we assure ourselves that the ‘users’ they [Internet companies] connect us
to are human or that the search results they feed us are based on merit—not pay for play (or worse, algorithmic racism)?” Likewise, Google
critic Jonathan Taplin warns, “we’ve given Google enormous control over our lives and the lives of our children.” Notwithstanding all this
breathless fearmongering, there are almost no cases where any of the major ISPs or Internet companies have intentionally blocked or
manipulated legal Internet content for nefarious reasons. The one legitimate net neutrality case that has emerged to date involved a small,
independent DSL provider, Madison River, which blocked the voice-over-Internet provider Vonage, but then quickly backed down (and paid a
fine) after the Federal Communications Commission intervened (without net neutrality rules). For Internet firms, most criticism is about cases
where they block hate speech or prevent illegal access to copyrighted content. So they are often damned if they do and damned if they don't:
If they block hate speech, they are accused of being censors. If they don’t, they are accused of enabling hate. Nonetheless, critics warn that
corporations are poised to become Big Brother at any moment, so government must act now. Hence, the two main solutions being proffered.
Proponents of greater antitrust enforcement put their faith in more competition. The idea is that if there were more ISPs, search engines, and
social networks, then we would be less dependent on any of today’s market leaders, and their nefarious urges to manipulate networks for their
own purposes would be less damaging and easier to defeat. As Klint Finley writes in Wired, “ideally, if your internet service provider … decided
to charge you more to access Netflix than Hulu, you’d just switch to a different provider that offered better terms.” For big Internet companies
like Google, John Hawkins argues in National Review, “there is another option to deal with this situation that would reduce the power of these
companies and serve the public interest. That is breaking these monopolies up into smaller, more focused entities.” Besides the fact that there
is no evidence that the titans of tech and telecom are harming consumers, there are at least three problems with the “break ’em up” solution.
The first is that it would raise costs. As my colleague Mike Lind and I argue in our book Big Is Beautiful: Debunking the Myth of Small Business,
there are very good reasons why many tech and telecom firms are large: They tend to gain significant market share because of economies
of scale and scope, and what economists call “network effects.” For the latter, by providing platforms for users
around the world to connect, their very size generates enormous economic benefits for society and consumers. Nonetheless, some antitrust
advocates would chop these powerful titans down to what they consider to be a more manageable size by trying to split their core businesses
into multiple competitors. In broadband, this would have little effect on competition. If Comcast, for example, were broken up into 25
companies, all providing “triple-play” Internet, phone, and television services, consumers in any market now with Comcast would still have the
same amount of choice (usually several satellite providers, one telco broadband provider, and sometimes another cable or fiber provider).
Some advocates instead would try to add more competitors, such as through having the government subsidize the deployment of a third or
sometimes even a fourth broadband network in a particular jurisdiction. But trying to add more broadband ISPs to any particular geographic
market would be a waste of money for the simple reason that it would increase the total cost of providing broadband services in the United
States. Breaking up big Internet companies makes just as little sense. Imagine the government pressuring Google to somehow break up its
search business (e.g., mandating that it give its algorithm to competitors for free or at low cost) and that we now have 10 search engines. Given
the considerable economies of scale involved in building and improving software systems, having 10 competing search engines would be
wasteful. Moreover, none of the 10 would have pockets deep enough to invest as much in R&D as Google does every year to keep improving
search, so overall innovation would likely diminish . Such breakups would also be terrible for users. Were the government to
break Facebook into two companies (say, Facebook and “Headbook”) we’d all have to post twice every time we did something we want our
friends to know about. There is a very good reason why these applications have evolved to where there is one leading social network
(Facebook); one leading professional network (LinkedIn); one leading microblogging site (Twitter): It makes it easier for users. Finally, breaking
apart these companies would likely be temporary as the market would eventually choose the best service and over time it would gain dominant
market share. There’s a third problem with the “add competition and stir” panacea. If, for example, there were now 10 search engines, each
with 10 percent market share, one likely result would be that some search engines would cater to particular interests. Are you a conservative?
Use XYZ search engine and be sure to avoid those troubling liberal search results. Holocaust denier? Use ZYX search engine to be sure that the
denier sites feature prominently in search results. Surely, this kind of balkanization cannot be in the public interest .
More and smaller tech companies also have fewer resources and incentives to respond to challenges, including the fact that a freewheeling
Internet where anyone can post anything anonymously is not an unmitigated blessing. For example, it may well be easier for the Russians to
influence U.S. elections through social networks if there were five major social networks, because each network would have fewer resources
and less motivation to fight "fake news." Most proposals for breakups seek to separate various component services offered by tech and telecom
giants: This makes little sense either. In the broadband space, the market is already driving that phenomenon, with video and phone
alternatives that are now separate from the ISPs’ triple-play offerings. By next year, an estimated 19 million American households will have “cut
the cord” from pay TV, choosing “over-the-top” video offerings like Hulu, Netflix, and YouTube. In the Internet space, such unbundling of
services would either be inconsequential to competition or would harm innovation and consumers. For example, some have argued for
breaking up Amazon retail from Amazon Web Services (its cloud computing business). But if there is concern about market power in retail or
cloud computing—and there shouldn’t be—then splitting up Amazon wouldn’t diminish power in either business. It’s worth noting that such
a move would also send a powerful signal to innovators: Be careful when innovating . Don’t get too
successful because if you do the government will step in with a crow bar to yank it away into a separate business . It’s
easy to forget that it was Amazon that was the major original innovator in cloud computing because it had so much excess computing power
that it didn’t need except over the holidays, so CEO Jeff Bezos decided to build a business around it. For other separations, either consumers
would be hurt, or innovation would be, or both. The Boston Globe recently called for the government to break up Alphabet (the holding
company for Google), including separating Google Search from Google X, its “moonshot” R&D group. But many large technology companies,
including Google, use their “Schumpeterian” profits to invest in highly risky, expensive innovation ventures, which if successful would lead to
enormous societal progress . Indeed, one can make a compelling case that were it not for Google’s investments in autonomous
vehicle driving systems the major car companies would not be as far along as they are with their investments in the technology. Indeed, most
if not all of the Big Tech companies are investing exactly the way so many pundits think American companies should be investing, but all too
often are not: making big, risky bets on transformative innovation such as drones, AI, AVs, robotics, new kinds of Internet
access for remote places, and more. Indeed, as law professor Michael Petit points out, most big tech companies embody
patient capitalism , investing large amounts of R&D in future-oriented projects. In other cases, vertical integration provides
real benefits to consumers , as in the case of WhatsApp—a disruptive application that enabled much cheaper messaging than
traditional SMS—being part of Facebook. Prior to WhatsApp’s acquisition by Facebook, users had to pay a subscription fee to use it. Now there
is no fee. This is because of “economies of scope,” which in this case produce greater consumer welfare. Most breakups would reduce the
efficiency of providing a bundle of products and raise the overall costs of providing services.

The impact is global cloud computing


Gould 15 [Jeff, president of SafeGov.org and CEO and director of research at Peerstone Research, "US EFFORT TO GRAB DATA FROM MICROSOFT
IN IRELAND SHOULD FRIGHTEN ALL FIRMS USING THE CLOUD OVERSEAS," 8/20,
http://www.nextgov.com/technology-news/tech-insider/2015/08/us-effort-grab-data-microsoft-ireland-should-frighten-all-firms-using-cloud-
overseas/119264]

Such an inefficient and balkanized cloud scenario , if it came to pass, would be bad enough. But the actual
outcome will likely be worse. It often won’t be feasible for local cloud providers to step into the shoes of
the established global giants. The reality is that the cloud offerings of Amazon, Microsoft, Google and a
handful of other global providers have reached a scale and degree of technical sophistication that
simply cannot be duplicated by local champions . Why can’t small providers touch the global giants ? One
reason is money . The top dozen or so cloud providers are investing hundreds of billions of dollars in the
construction of vast global networks of linked data centers, with mostly football-field-sized facilities housing
hundreds of thousands of individual servers. These networked centers continuously shift data between themselves to optimize
service resilience, network latency and resource utilization. Another reason why the local champions will be left behind is that cloud
providers are increasingly shifting from commodity services to more differentiated offerings . Basic cloud
infrastructure is evolving from simple virtualized servers to something much more complex. The very
notion of “server” is dissolving into a more abstract notion of “ compute fabric ." Don’t worry about
configuring virtual machines, providers like Amazon now say, just give us your code and we’ll run it. Amazon’s new
Lambda service is an early example of this trend, sometimes known as the “serverless” cloud. Microsoft and Google are rapidly
heading in the same direction. Delving deeper, we find cloud applications that by definition cannot be
copied. If you want Google Apps, Office 365 or Salesforce CRM, you won’t be able to get it from your local cloud provider. In short, cloud
providers confined to single-country markets will not be able to compete on the global stage .
Rudimentary local services with little more to propose than remote virtual machines will not make the
cut. The real choice for customers will be between a global cloud or no cloud at all . The stakes in the
Microsoft case are thus very high indeed .
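To make the "serverless" model the card references concrete, the sketch below shows what "just give us your code and we'll run it" looks like in practice: the developer supplies only a handler function, and the provider owns the servers, scaling, and patching. This is a generic illustration in AWS Lambda's Python handler convention; the event field and return shape are assumptions typical of an HTTP-triggered function, not details drawn from the evidence.

import json

# Hypothetical serverless handler: no virtual machines or operating systems to
# configure -- the cloud provider invokes this function on demand and scales it.
def lambda_handler(event, context):
    # "event" is the JSON-like input the platform passes in; "name" is an assumed field.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }

The point of the example is the division of labor: everything below the handler (capacity, fault tolerance, geographic placement) is the provider's problem, which is exactly the scale advantage the card says single-country providers cannot replicate.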

Key to solve space debris, miscalc, and asteroids


Johnston et al. 9 [Steven, Assc Prof Architecture and Technical Management at Univ South Florida, “Cloud Computing for Planetary
Defense,” October 2009, http://www.mendeley.com/research/cloud-computing-for-planetary-defense/]

Abstract: In this paper we demonstrate how a cloud-based computing architecture can be used for planetary
defense and space situational awareness ( SSA ). We show how utility compute can facilitate both a financially
economical and highly scalable solution for space debris and near-earth object impact analysis. As we
improve our ability to track smaller space objects, and satellite collisions occur, the volume of objects
being tracked vastly increases , increasing computational demands. Propagating trajectories and
calculating conjunctions becomes increasingly time critical , thus requiring an architecture which can scale with demand.
The extension of this to tackle the problem of a future near-earth object impact is discussed, and how cloud computing can play a key role in
this civilisation-threatening scenario. Introduction Space situational awareness includes scientific and operational aspects of space weather,
near-earth objects and space debris. This project is part of an international effort to provide a global response strategy to the threat of a Near
Earth Object (NEO) impacting the earth, led by the United Nations Committee for the Peaceful Use of Space (UN-COPUOS). The impact of a
NEO – an asteroid or comet – is a severe natural hazard but is unique in that technology exists to
predict and to prevent it, given sufficient warning. As such, the International Spaceguard survey has identified nearly
1,000 potentially hazardous asteroids >1km in size although NEOs smaller than one kilometre remain
predominantly undetected , exist in far greater numbers and impact the Earth more frequently1. Impacts by objects larger
than 100 m (twice the size of the asteroid that caused the Barringer crater in Arizona) could occur with little or no warning ,
with the energy of hundreds of nuclear weapons, and are "devastating at potentially unimaginable
levels”2 (Figure 1). The tracking and prediction of potential NEO impacts is of international importance, particularly with regard to disaster
management. Space debris poses a serious risk to satellites and space missions. Currently Space Track3 publishes the locations of about 10,000
objects that are publicly available. These include satellites, operational and defunct, space debris from missions and space junk. It is believed
that there are about 19,000 objects with a diameter over 10cm. Even the smallest space junk travelling at about 17,000 miles per hour can
cause serious damage; the Space Shuttle has undergone 92 window changes due to debris impact, resulting in concerns that a more serious
accident is imminent4, and the International Space Station has to execute evasion manoeuvres to avoid debris. There
are over
300,000 objects over 1cm in diameter and there is a desire to track most , if not all of these. By
improving ground sensors and introducing sensors on satellites the Space Track database will increase
in size. By tracking and predicting space debris behaviour in more detail we can reduce collisions as the orbital
environment becomes ever more crowded . Cloud computing provides the ability to trade
computation time against costs . It also favours an architecture which inherently scales, providing burst capability. By treating
compute as a utility, compute cycles are only paid for when they are used. Here we present a cloud application framework to tackle space
debris tracking and analysis, that is being extended for NEO impact analysis. Notably, in this application propagation and conjunction analysis
results in peak compute loads for only 20% of the day, with burst capability required in the event of a collision when the number of objects
increases dramatically; the Iridium-33 Cosmos-2251 collision in 2009 resulted in an additional 1,131 trackable objects (Figure 2). (Footnotes: 1 Population
of NEOs larger than 100m is estimated at between 200,000 & 400,000, with an impact frequency of 2,000 to 4,000 years. 2 Testimony of Russell L.
Schweickart, Chairman, B612 Foundation, before the Space and Aeronautics Subcommittee of the House Committee on Science and Technology, 11 October 2007.
3 The Source for Space Surveillance Data. Space Track. [Online] http://www.space-track.org. 4 Hypervelocity Impact Technology Facility (Missions from
STS-50 through STS-114). [Online] http://hitf.jsc.nasa.gov.) Utility
computation can quickly adapt to these situations consuming more compute, incurring a monetary cost but keeping computation wall clock
time to a constant . In the event of a conjunction event being predicted, satellite operators would have to be quickly alerted so they could
decide what mitigating action to take. In this work we have migrated a series of discrete manual computing processes to the Azure cloud
platform to improve capability and scalability. It is the initial prototype for a broader space situational awareness platform. The workflow
involves the following steps: obtain satellite position data, validate data, run propagation simulation, store results, perform conjunction
analysis, query satellite object, and visualise. Satellite locations are published twice a day by Space Track, resulting in bi-daily high workloads.
Every time the locations are published, all previous propagation calculations are halted, and the propagator starts recalculating the expected
future orbits. Every
orbit can be different , albeit only slightly from a previous estimate, but this means that
all conjunction analysis has to be recomputed . The quicker this workflow is completed the quicker
possible conjunction alerts can be triggered , providing more time for mitigation . The concept project uses
Windows Azure as a cloud provider and is architected as a data-driven workflow consuming satellite locations and resulting in conjunction
alerts, as shown in Figure 3. Satellite locations are published in a standard format known as a Two-Line Element (TLE) that fully describes a
spacecraft and its orbit. Any TLE publisher can be consumed, in this case the Space Track website, but also ground observation station data. The
list of TLEs are first separated into individual TLE Objects, validated and inserted into a queue. TLE queue objects are consumed by comparator
workers which check to see if the TLE exists; new TLEs are added to an Azure Table and an update notification added to the Update Queue. TLEs
in the update notification queue are new and each requires propagation; this is an embarrassingly parallel computation that scales well across
the cloud. Any propagator can be used. We currently support NORAD SGP4 propagator and a custom Southampton simulation (C++) code. Each
propagated object has to be compared with all other propagations to see if there is a conjunction (predicted close approach). Any conjunction
source or code can be used, currently only SGP4 is implemented; plans are to incorporate more complicated filtering and conjunction analysis
routines as they become available. Conjunctions result in alerts which are visible in the Azure Satellite tracker client. The client uses Virtual
Earth to display the orbits. Ongoing work includes expanding the Virtual Earth client as well as adding support for custom clients by exposing
the data through a REST interface. This pluggable architecture ensures that additional propagators and conjunction codes can be incorporated,
and as part of ongoing work we intend to expand the available analysis codes. The framework demonstrated here is being extended as a
generic space situational service bus to include NEO impact predictions. This
will exploit the pluggable simulation code
architecture and the cloud’s burst computing capability in order to allow refinement of predictions for
disaster management simulations and potential emergency scenarios anywhere on the globe. Summary We have shown how a new
architecture can be applied to space situational awareness to provide a scalable robust data-driven architecture which can enhance the ability
of existing disparate analysis codes by integrating them together in a common framework. By automating the ability to alert satellite owners to
potential conjunction scenarios we reduce the potential of conjunction oversight and decrease the response time, thus making space safer. This
framework is being extended to NEO trajectory and impact analysis to help improve planetary defence capability for all.
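To make the card's workflow concrete, here is a minimal single-process Python sketch of the ingest, deduplicate, queue, propagate, and conjunction-screen loop it describes. Everything here is illustrative: the in-memory dictionary and deque stand in for the Azure Table and update queue, the propagate() stub stands in for the NORAD SGP4 propagator the authors use, and the feed entries and 5 km screening threshold are assumed values, not numbers from the evidence.

import itertools
import math
from collections import deque

catalog = {}            # stands in for the Azure Table of known TLEs
update_queue = deque()  # stands in for the Azure update-notification queue

def ingest_tles(tle_lines):
    # Split a published feed into (name, line1, line2) triples with crude validation;
    # the real system consumes Space Track's bi-daily TLE publications.
    for i in range(0, len(tle_lines) - 2, 3):
        name, l1, l2 = (s.strip() for s in tle_lines[i:i + 3])
        if l1.startswith("1 ") and l2.startswith("2 "):
            yield name, l1, l2

def enqueue_updates(tles):
    # Comparator-worker step: only new or changed TLEs trigger re-propagation.
    for name, l1, l2 in tles:
        if catalog.get(name) != (l1, l2):
            catalog[name] = (l1, l2)
            update_queue.append(name)

def propagate(name, minutes_ahead):
    # Placeholder propagator returning fake (x, y, z) positions in km, one per minute;
    # a real worker would run SGP4 on the stored TLE here.
    seed = sum(map(ord, name))
    angle = (seed % 360) * math.pi / 180.0
    radius_km = 7000.0 + seed % 400
    return [(radius_km * math.cos(angle + t / 100.0),
             radius_km * math.sin(angle + t / 100.0), 0.0)
            for t in range(minutes_ahead)]

def conjunction_alerts(tracks, threshold_km=5.0):
    # Embarrassingly parallel all-pairs screening: flag any pair that comes
    # within threshold_km at the same time step.
    for (a, pa), (b, pb) in itertools.combinations(tracks.items(), 2):
        if any(math.dist(p, q) < threshold_km for p, q in zip(pa, pb)):
            yield a, b

# Tiny fabricated feed; names and element lines are illustrative placeholders only.
feed = ["OBJ-A", "1 00001U ...", "2 00001 ...",
        "OBJ-B", "1 00002U ...", "2 00002 ..."]
enqueue_updates(ingest_tles(feed))
tracks = {name: propagate(name, minutes_ahead=90) for name in update_queue}
for a, b in conjunction_alerts(tracks):
    print(f"ALERT: possible conjunction between {a} and {b}")

In the paper's architecture each of these stages runs as a separately scalable cloud worker, which is what lets the pipeline absorb the bi-daily publication spikes and collision-driven bursts the card describes.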

Debris causes US-Russia war


Lewis 4 [Postdoctoral Fellow in the Advanced Methods of Cooperative Study Program, Jeffrey, Office of the Undersecretary of Defense for
Policy, Center for Defense Information, “What if Space Were Weaponized?,” July 2004, http://www.cdi.org/PDFs/scenarios.pdf]

This is the second of two scenarios that consider how U.S. space weapons might create incentives for America’s opponents to behave in dangerous ways. The
previous scenario looked at the systemic risk of accidents that could arise from keeping nuclear weapons on high alert to guard against a space weapons attack. This
section focuses on the risk that a
single accident in space, such as a piece of space debris striking a Russian early-warning satellite, might be the
catalyst for an accidental nuclear war. As we have noted in an earlier section, the United States canceled its own ASAT program in the 1980s
over concerns that the deployment of these weapons might be deeply destabilizing. For all the talk about a “new relationship” between the United States and
Russia, both sides retain thousands of nuclear forces on alert and configured to fight a nuclear war. When briefed about the size and status of U.S. nuclear forces,
President George W. Bush reportedly asked “What do we need all these weapons for?”43 The answer, as it was during the Cold War, is that the forces
remain on alert to conduct a number of possible contingencies, including a nuclear strike against Russia. This fact, of course, is not lost on the Russian leadership,
which has been increasing its reliance on nuclear weapons to compensate for the country’s declining military might. In the mid-1990s, Russia dropped its pledge to
refrain from the “first use” of nuclear weapons and conducted a series of exercises in which Russian nuclear forces prepared to use nuclear weapons to repel a
NATO invasion. In October 2003, Russian Defense Minister Sergei Ivanov reiterated that Moscow might use nuclear weapons "preemptively" in any number of
contingencies, including a NATO attack.44 So, it remains business as usual with U.S. and Russian nuclear forces. And business as usual includes the occasional false
alarm of a nuclear attack. There have been several of these incidents over the years. In September 1983, as a relatively new Soviet early-warning satellite moved
into position to monitor U.S. missile fields in North Dakota, the sun lined up in just such a way as to fool the Russian satellite into reporting that half a dozen U.S.
missiles had been launched at the Soviet Union. Perhaps mindful that a brand new satellite might malfunction, the officer in charge of the command center that
monitored data from the early-warning satellites refused to pass the alert to his superiors. He reportedly explained his caution by saying: “When people start a war,
they don’t start it with only five missiles. You can do little damage with just five missiles.”45 In January 1995, Norwegian scientists launched a sounding rocket on a
trajectory similar to one that a U.S. Trident missile might take if it were launched to blind Russian radars with a high-altitude
nuclear detonation. The incident was apparently serious enough that, the next day, Russian President Boris Yeltsin stated that he had activated his "nuclear
football” – a device that allows the Russian president to communicate with his military advisors and review his options for launching his arsenal. In this case, the
Russian early-warning satellites could clearly see that no attack was under way and the crisis passed without incident.46 In both cases, Russian observers were
confident that what appeared to be a "small" attack was not a fragmentary picture of a much larger one. In the case of the Norwegian sounding rocket, space-
based sensors played a crucial role in assuring the Russian leadership that it was not under attack. The Russian command system, however, is no longer able to
provide such reliable, early warning. The dissolution of the Soviet Union cost Moscow several radar stations in newly independent states, creating “attack corridors”
through which Moscow could not see an attack launched by U.S. nuclear submarines.47 Further, Russia's constellation of early-warning satellites has been allowed
to decline – only one or two of the six satellites remain operational, leaving Russia with early warning for only six hours a day. Russia is attempting to
reconstitute its constellation of early-warning satellites, with several launches planned in the next few years. But Russia
will still have limited warning and will depend heavily on its space-based systems to provide warning of an
American attack.48 As the previous section explained, the Pentagon is contemplating military missions in space that will improve U.S. ability to cripple Russian
nuclear forces in a crisis before they can execute an attack on the United States. Anti-satellite weapons, in this scenario, would blind Russian reconnaissance and
warning satellites and knock out communications satellites. Such strikes might be the prelude to a full-scale attack, or a limited effort, as attempted in a war game
at Schriever Air Force Base, to conduct “early deterrence strikes” to signal U.S. resolve and control escalation.49 By 2010, the United States may, in fact, have an
arsenal of ASATs (perhaps even on orbit 24/7) ready to conduct these kinds of missions – to coerce opponents and, if necessary, support preemptive attacks.
Moscow would certainly have to worry that these ASATs could be used in conjunction with other space-enabled systems – for example, long-range strike systems
that could attack targets in less than 90 minutes – to disable Russia's nuclear deterrent before the Russian leadership understood what was going on. What
would happen if a piece of space debris were to disable a Russian early-warning satellite under these conditions?
Could the Russian military distinguish between an accident in space and the first phase of a U.S. attack? Most Russian early-warning satellites are in elliptical
Molniya orbits (a few are in GEO) and thus difficult to attack from the ground or air. At a minimum, Moscow would probably have some tactical warning of such a
suspicious launch, but given the sorry state of Russia’s warning, optical imaging and signals intelligence satellites there is reason to ask the question. Further, the
advent of U.S. on-orbit ASATs, as now envisioned50 could make both the more difficult orbital plane and any warning systems moot. The unpleasant truth is that
the Russians likely would have to make a judgment call. No state has the ability to definitively determine the cause of the satellite's
failure. Even the United States does not maintain (nor is it likely to have in place by 2010) a sophisticated space surveillance system that would allow it to
distinguish between a satellite malfunction, a debris strike or a deliberate attack – and Russian space surveillance capabilities are much more limited by comparison. Even
the risk assessments for collision with debris are speculative, particularly for the unique orbits in which Russian early-warning satellites operate. During peacetime,
it is easy to imagine that the Russians would conclude that the loss of a satellite was either a malfunction or a debris strike. But how confident could U.S. planners
be that the Russians would be so calm if the accident in space occurred in tandem with a second false alarm, or occurred during the middle of a crisis? What
might happen if the debris strike occurred shortly after a false alarm showing a missile launch? False alarms are
appallingly common – according to information obtained under the Freedom of Information Act, the U.S.-Canadian North American Aerospace Defense
Command (NORAD) experienced 1,172 "moderately serious" false alarms between 1977 and 1983 – an
average of almost three false alarms per week. Comparable information is not available about the
Russian system, but there is no reason to believe that it is any more reliable .51 Assessing the likelihood of these sorts
of coincidences is difficult because Russia has never provided data about the frequency or duration of false alarms; nor indicated how seriously early-warning data
is taken by Russian leaders. Moreover, there is no reliable estimate of the debris risk for Russian satellites in highly elliptical orbits.52 The important point,
however, is that such a coincidence would only appear suspicious if the United States were in the business of disabling satellites – in other words, there is much less
risk if Washington does not develop ASATs. The loss of an early-warning satellite could look rather ominous if it occurred during a period of major tension in the
relationship. While NATO no longer sees Russia as much of a threat, the same cannot be said of the converse. Despite the warm talk, Russian leaders remain wary of
NATO expansion, particularly the effect expansion may have on the Baltic port of Kaliningrad. Although part of Russia, Kaliningrad is separated from the rest of
Russia by Lithuania and Poland. Russia has already complained about its decreasing lack of access to the port, particularly the uncooperative attitude of the
Lithuanian government.53 News reports suggest that an edgy Russia may have moved tactical nuclear weapons into the enclave.54 If the
Lithuanian government were to close access to Kaliningrad in a fit of pique, this would trigger a major crisis between NATO and Russia. Under these circumstances,
the loss of an early-warning satellite would be extremely suspicious . It is any military’s nature during a crisis to interpret
events in their worst-case light. For example, consider the coincidences that occurred in early September 1956, during the extraordinarily tense period in
international relations marked by the Suez Crisis and Hungarian uprising.55 On one evening the White House received messages indicating: 1. the Turkish Air Force
had gone on alert in response to unidentified aircraft penetrating its airspace; 2. one hundred Soviet MiG-15s were flying over Syria; 3. a British Canberra bomber
had been shot down over Syria, most likely by a MiG; and 4. The Russian fleet was moving through the Dardanelles. Gen. Andrew
Goodpaster was reported to have worried that the confluence of events "might trigger off ... the NATO
operations plan” that called for a nuclear strike on the Soviet Union. Yet, all of these reports were false. The “jets” over Turkey were a flock of swans; the Soviet
MiGs over Syria were a smaller, routine escort returning the president from a state visit to Moscow; the bomber crashed due to mechanical difficulties; and the
Soviet fleet was beginning long-scheduled exercises. In an important sense, these were not “coincidences” but rather different manifestations of a common failure –
human error resulting from extreme tension of an international crisis. As one author noted, “The detection and misinterpretation of these events, against the
context of world tensions from Hungary and Suez, was the first major example of how the size and complexity of worldwide electronic warning systems could, at
certain critical times, create momentum of its own.” Perhaps most worrisome, the
United States might be blithely unaware of the
degree to which the Russians were concerned about its actions and inadvertently escalate a crisis. During
the early 1980s, the Soviet Union suffered a major “war scare” during which time its leadership concluded that bilateral relations were rapidly declining. This war
scare was driven in part by the rhetoric of the Reagan administration, fortified by the selective reading of intelligence. During this period, NATO conducted a major
command post exercise, Able Archer, that caused some elements of the Soviet military to raise their alert status. American officials were stunned to learn, after the
fact, that the Kremlin had been acutely nervous about an American first strike during this period.56 All of these incidents have a common theme – that confidence is
often the difference between war and peace. In times of crisis, false alarms can have a momentum of their own . As in the
second scenario in this monograph, the lesson is that commanders rely on the steady flow of reliable information. When that information flow is
disrupted – whether by a deliberate attack or an accident – confidence collapses and the result is panic
and escalation. Introducing ASAT weapons into this mix is all the more dangerous, because such
weapons target the elements of the command system that keep leaders aware, informed and in control.
As a result, the mere presence of such weapons is corrosive to the confidence that allows national nuclear
forces to operate safely.

Asteroid strikes cause extinction


McGuire 2 [Bill, Professor of Geohazards at University College London and is one of Britain's leading volcanologists, A Guide to the End of the
World, 2002, p. 159-168]

The Tunguska events pale into insignificance when compared to what happened off the coast of Mexico's Yucatan Peninsula 65
million years earlier. Here a 10-kilometre asteroid or comet—its exact nature is uncertain—crashed into the sea and changed our
world forever. Within microseconds, an unimaginable explosion released as much energy as billions of
Hiroshima bombs detonated simultaneously, creating a titanic fireball hotter than the Sun that
vaporized the ocean and excavated a crater 180 kilometres across in the crust beneath. Shock waves blasted upwards, tearing the
atmosphere apart and expelling over a hundred trillion tonnes of molten rock into space, later to fall
across the globe. Almost immediately an area bigger than Europe would have been flattened and scoured of
virtually all life, while massive earthquakes rocked the planet. The atmosphere would have howled and screamed as hypercanes five
times more powerful than the strongest hurricane ripped the landscape apart , joining forces with huge tsunamis
to batter coastlines many thousands of kilometres distant. Even worse was to follow. As the rock blasted into space began to rain down across
the entire planet so the heat generated by its re-entry into the atmosphere irradiated
the surface, roasting animals alive as
effectively as an oven grill, and starting great conflagrations that laid waste the world's forests and
grasslands and turned fully a quarter of all living material to ashes . Even once the atmosphere and oceans had settled
down, the crust had stopped shuddering, and the bombardment of debris from space had ceased, more was to come. In the following weeks,
smoke and dust in the atmosphere blotted out the Sun and brought temperatures plunging by as much as 15
degrees Celsius. In the growing gloom and bitter cold the surviving plant life wilted and died while those herbivorous
dinosaurs that remained slowly starved. Global wildfires and acid rain from the huge quantities of sulphur injected into the
atmosphere from rocks at the site of the impact poured into the oceans, wiping out three-quarters of all marine life.
After years of freezing conditions the gloom following the so-called Chicxulub impact would eventually have lifted, only to
reveal a terrible Sun blazing through the tatters of an ozone layer torn apart by the chemical action of
nitrous oxides concocted in the impact fireball: an ultraviolet spring hard on the heels of the cosmic winter that fried many of the remaining species struggling precariously to hang on to
life. So enormously was the natural balance of the Earth upset that according to some it might have taken hundreds of thousands of years for
the post-Chicxulub Earth to return to what passes for normal. When it did the age of the great reptiles was finally over, leaving the field to the
primitive mammals—our distant ancestors—and opening an evolutionary trail that culminated in the rise and rise of the human race. But could
we go the same way1? To assess the chances, let me look a little more closely at the destructive power of an impact event. At Tunguska,
destruction of the forests resulted partly from the great heat generated by the explosion, but mainly from the blast wave that literally pushed
the trees over and flattened them against the ground. The strength of this blast wave depends upon what is called the peak overpressure, that
is the difference between ambient pressure and the pressure of the blast wave. In order to cause severe destruction this needs to exceed 4
pounds per square inch, an overpressure that results in wind speeds that are over twice the force of those found in a typical hurricane. Even
though tiny compared with, say, the land area of London, the enormous overpressures generated by a 50-metre object exploding low overhead
would cause damage comparable with the detonation of a very large nuclear device, obliterating almost everything within the city's orbital
motorway. Increase the size of the impactor and things get very much worse. An asteroid just 250 metres across would be sufficiently massive
to penetrate the atmosphere; blasting a crater 5 kilometres across and devastating an area of around 10,000 square kilometres— that is about
the size of the English county of Kent. Raise the size of the asteroid again, to 650 metres, and the area of devastation increases to 100,000
square kilometres—about the size of the US state of South Carolina. Terrible as this all sounds, however, even this would be insufficient to
affect the entire planet. In order to do this, an impactor has to be at least 1 kilometre across, if it is one of the speedier comets, or 1.5
kilometres in diameter if it is one of the slower asteroids. A collision with one of these objects would generate a blast
equivalent to 100,000 million tonnes of TNT, which would obliterate an area 500 kilometres across—say the size of England—
and kill perhaps tens of millions of people, depending upon the location of the impact. The real problems for the rest of the world would start
soon after as dust in the atmosphere began to darken the skies and reduce the level of sunlight reaching the Earth's surface. By comparison
with the huge Chicxulub impact it is certain that this would result in a dramatic lowering of global temperatures but there is no consensus on
just how bad this would be. The chances are, however, that an impact of this size would result in appalling weather conditions and crop failures
at least as severe as those of the 'Year Without a Summer', which followed the 1815 eruption of Indonesia's Tambora volcano. As mentioned in
the last chapter, with even developed countries holding sufficient food to feed their populations for only a
month or so, large-scale crop failures across the planet would undoubtedly have serious implications.
Rationing, at the very least, is likely to be the result, with a worst case scenario seeing widespread disruption of the social and
economic fabric of developed nations. In the developing world, where subsistence farming remains very much the norm,
widespread failure of the harvests could be expected to translate rapidly into famine on a biblical scale. Some
researchers forecast that as many as a quarter of the world's population could succumb to a deteriorating climate
following an impact in the 1—1.5 kilometre size range. Anything bigger and photosynthesis stops
completely. Once this happens the issue is not how many people will die but whether the human race
will survive. One estimate proposes that the impact of an object just 4 kilometres across will inject sufficient
quantities of dust and debris into the atmosphere to reduce light levels below those required for
photosynthesis. Because we still don't know how many threatening objects there are out there nor whether they come in bursts, it is
almost impossible to say when the Earth will be struck by an asteroid or comet that will bring to an end the world as we know it. Impact events
on the scale of the Chicxulub dinosaur-killer only occur every several tens of millions of years, so in any single year the chances of such an
impact are tiny. Any
optimism is, however, tempered by the fact that— should the Shiva hypothesis be true—the next
swarm of Oort Cloud comets could even now be speeding towards the inner solar system . Failing this, we may
have only another thousand years to wait until the return of the dense part of the Taurid Complex and another asteroidal assault. Even if it
turns out that there is no coherence in the timing of impact events, there
is statistically no reason why we cannot be hit next
year by an undiscovered Earth-Crossing Asteroid or by a long-period comet that has never before visited
the inner solar system. Small impactors on the Tunguska scale struck Brazil in 1931 and Greenland in 1997, and will continue to pound
the Earth every few decades. Because their destructive footprint is tiny compared to the surface area of the Earth, however, it would be very
bad luck if one of these hit an urban area, and most will fall in the sea. Although this might seem a good thing, a larger object striking the ocean
would be very bad news indeed. A 500-metre rock landing in the Pacific Basin, for example, would generate gigantic tsunamis that would
obliterate just about every coastal city in the hemisphere within 20 hours or so. The chances of this happening are actually quite high—about 1
per cent in the next 100 years—and the death toll could well top half a billion. Estimates of the frequencies of impacts in the 1 kilometre size
bracket range from 100,000 to 333,000 years, but the youngest impact crater produced by an object of this size is almost a million years old. Of
course, there could have been several large impacts since, which either occurred in the sea or have not yet been located on land. Fair enough
you might say, the threat is clearly out there, but is
there anything on the horizon? Actually, there is . Some 13 asteroids
—mostly quite small—could feasibly collide with the Earth before 2100. Realistically, however, this is not very likely as the
probabilities involved are not much greater than 1 in 10,000— although bear in mind that these are pretty good odds. If this was the
probability of winning the lottery then my local agent would be getting considerably more of my business.
There is another enigmatic object out there, however. Of the 40 or so Near Earth Asteroids spotted last year, one — designated 2000SG344—
looked at first as if it might actually hit us. The object is small, in the 100 metre size range, and its orbit is so similar to the earth that some have
suggested it may be a booster rocket that sped one of the Apollo spacecraft on its way to the Moon. Whether hunk of rock or lump of man-
made metal, it was originally estimated that 2000SG344 had a 1 in 500 chance of striking the Earth on 21 September 2030. Again, these may
sound very long odds, but they are actually only five times greater than those recently offered during summer 2001 for England beating
Germany 5-1 at football. We can all relax now anyway, as recent calculations have indicated that the object will not approach closer to the
Earth than around five million kilometres. A few years ago, scientists came up with an index to measure the impact threat, known as the Torino
Scale, and so far 2000SG344 is the first object to register a value greater than zero. The potential impactor originally scraped into category 1,
events meriting careful monitoring. Let's hope that many years elapse before we encounter the first category 10 event—defined as 'a certain
collision with global consequences'. Given sufficient warning we might be able to nudge an asteroid out of the Earth's way but due to its size,
high velocity, and sudden appearance, we could do little about a new comet heading in our direction.
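For a sense of where the card's "100,000 million tonnes of TNT" figure for a 1–1.5 kilometre impactor comes from, a back-of-envelope kinetic-energy calculation is sketched below. The density and impact speed are assumed typical values, not numbers from the evidence; with these assumptions the result lands within roughly a factor of two of the card's figure, which is about the precision such estimates support.

import math

# Rough kinetic-energy check for a ~1.5 km stony asteroid; density and speed are assumptions.
diameter_m = 1500.0
density_kg_m3 = 3000.0        # assumed stony-asteroid density
velocity_m_s = 17_000.0       # assumed typical asteroid impact speed

radius_m = diameter_m / 2.0
mass_kg = density_kg_m3 * (4.0 / 3.0) * math.pi * radius_m ** 3
energy_j = 0.5 * mass_kg * velocity_m_s ** 2
tonnes_tnt = energy_j / 4.184e9          # 1 tonne of TNT = 4.184e9 joules

print(f"~{tonnes_tnt / 1e6:,.0f} million tonnes of TNT")   # roughly 180,000 million tonnes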

The miscalc risk is high now – a new triggering event kills hundreds of millions
Kimball 16 – executive director of the Arms Control Association (Daryl G, “TAKING FIRST-USE OF NUKES OFF THE TABLE: GOOD FOR THE
UNITED STATES AND THE WORLD,” War on the Rocks, Jul 14 2016, https://warontherocks.com/2016/07/taking-first-use-of-nukes-off-the-table-
good-for-the-united-states-and-the-world/, jwg)
Once Nuclear Weapons Are Used, There Is No Way to Prevent Escalation. Were the United States to
exercise its contingency plans to use nuclear weapons first in a conflict against a nuclear-armed
adversary, it risks retaliation and escalation that could lead to an all-out nuclear exchange. The fog of
war is thick. The fog of nuclear war is even thicker . During a heated conflict or rapidly developing crisis,
political and military leaders are working with incomplete information. They have little time to think through highly
consequential decisions and often have difficulty communicating with the people commanding their forces — to say nothing about their
adversaries. Emotions are high, and the likelihood of miscalculation is increased. Given these realities, responsible
leaders understand that military options that can lead to mutual national suicide should not be on the table. As McGeorge Bundy, George
Kennan, Robert McNamara, and Gerard Smith wrote in Foreign Affairs in 1982 about nuclear weapons first-use contingency plans in Europe,
“No one has ever succeeded in advancing any persuasive reason to believe that any use of nuclear
weapons, even on the smallest scale, could reliably be expected to remain limited….” Today, the United States
and Russia still deploy thousands of nuclear warheads on hundreds of bombers, missiles, and submarines. U.S. land- and sea-based strategic
forces, armed with nearly 1,000 warheads, stand ready for immediate firing in peacetime. Many military targets are in or near
urban areas. It has been estimated that the use of even a fraction of U.S. and Russian nuclear forces could lead to the
death of tens of millions of people in each country. An all-out exchange would kill hundreds of millions and produce
catastrophic global consequences with adverse agricultural, economic, health, and environmental consequences for
billions of people. It is impossible to imagine any U.S. political objective worth this cost.

2AC
2ac extra-t

Executive power entails agency interpretation, CIC powers, and stewardship for the
public good --- covers agencies
Cash 63 [Robert B. Cash, “Presidential Power: Use and Enforcement of Executive Orders,” Notre Dame Law Review, volume 39, issue 1, 39
Notre Dame L. Rev. 44 (1963)]

The validity of certain executive action was questioned when the Constitutionality of Executive Order
10340 was attacked.4 This order directed the Secretary of Commerce to "seize" the steel mills. In 1953 the Supreme Court held the
order to be unconstitutional.44 Professor Corwin, a noted writer on this subject, feels that the case is of value to Constitutional law and practice
because it affirms: That
the president does possess "residual" or "resultant" powers over and above, or in
consequence of, his specifically granted powers to take temporary alleviative action in the presence of
serious emergency is a proposition to which all but Justices Black and Douglas would probably have
assented in the absence of the complicating issue that was created by the President's refusal to follow
the procedures laid down in the Taft-Hartley Act. 45 Thus, To Summarize: The President's power as Chief
Executive is multidimensional, and has expanded along almost every dimension. His role as interpreter
of the law has become, with the watering down in recent times of the Lockian maxim against the delegation of legislative power, a
power of quasi-legislation and, in times of emergency, power of legislation unqualified by "the softening word quasi." His power
as Commander-in-Chief to employ the armed forces to put down "combinations too powerful to be
dealt with by ordinary judicial processes" is, in the absence of definitely restrictive legislation, almost
plenary, as is also his power to employ preventive (as against punitive) martial law. Furthermore, the line that today
separates the "peace of the United States" from the domestic peace of the states severally has been since the First World War a tenuous one.
The third source of the President's power as Chief Executive is the theory that attributes to him the
responsibility of "stewardship" to act for the public good so far at least as the laws do not inhibit .46 The
following are illustrative of the sweep of the power exercisable through the executive order. He can declare martial law,47 enforce the laws of
the United States,48 and remove executive officers.49 In addition to the president's constitutionally inherent powers,
addition to the president's constitutionally inherent powers,
his powers may be augmented by Congress ; 50 and whenever an executive order or proclamation is
founded upon constitutional or delegated authority, it has the force of public law ,51 the violation of
which may be made punishable by Congress.52 Moreover, an executive order or proclamation which
exceeds presidential authority may be ratified by Congress, with the same result as if the order or
proclamation were issued after the statute was enacted.53 B. Limits Upon Executive Power. Whatever
the extent of presidential power may be, it is nevertheless limited by Congress . The legislative function
of the government has been entrusted to Congress, with the result that neither the president nor an
agency head,54 who acts at the president's direction, may contravene a statutory provision.55 The courts will
strike down such an order even when it emanates from a Constitutionally enumerated power like that of
Commander-in-Chief. 5 In issuing an executive order, the president must conform to the standards laid
down in a Congressional delegation of authority, and must also state the existence of the particular
circumstances and conditions which authorize such order .57 Presidential power is also limited by
Congressional declarations of public policy . The determination of public policy is within the province of
the legislative branch of the government, and the executive branch may only apply the policy so fixed
and determined, and may not itself determine matters of public policy or change the policy laid down by
the legislature.58 Mr. Justice Jackson, concurring in Youngstown Sheet & Tube Co. v. Sawyer stated that: When the President takes
measures incompatible with the express or implied will of Congress, his power is at its lowest ebb, for then he can rely upon his own
Constitutional powers minus any constitutional powers of Congress over the matter. Courts can sustain exclusive presidential control in such a
case only by disabling Congress from acting upon the subject.59 Justice Frankfurter entertained similar thoughts, but complicated the issue by
considering the duration of the seizure: We must . . . put to one side consideration of what powers the President would have had if there had
been no legislation whatever bearing on the authority asserted by the seizure, or if the seizure had been only for a short, explicitly temporary
period, to be terminated automatically unless Congressional approval were given.60 Whatever contribution Youngstown Sheet & Tube Co. has
made to constitutional law,61 it does reaffirm the principle that the president is subservient to Congressionally declared public policy.

Agency interpretations are the executive power


Vermeule 17
Cass R. Sunstein, Robert Walmsley University Professor-Harvard University, Adrian Vermeule, Ralph S. Tyler Jr Professor of Constitutional Law-
Harvard Law School, Symposium: The Unbearable Rightness of Auer, 84 U. Chi. L. Rev. 297, Winter 2017

But this critique of Auer is both unsound and too sweeping. There are four critical points. First, the traditional and mainstream understanding in
American public law is that when agencies - acting within a statutory grant of authority - make rules, interpret rules, and adjudicate violations,
they exercise executive power, not legislative or judicial power. Executive power itself includes the power to make and interpret rules, in the
course of carrying out statutory responsibilities. n53 n53. See United States v Grimaud, 220 US 506, 521 (1911) (noting that statutory authority
to make administrative rules is a grant of executive power, not legislative power).
Hence there is no commingling of functions within agencies in the first place; any talk to the contrary is loose and imprecise. The Court recently
and emphatically reiterated this point, through the pen of ... Scalia:
2ac – emory t- restrictions
We meet – the aff is a strong and substantial legal constraint
Kim ’15 - Assistant Professor of Law, University of North Carolina School of Law
Catherine. “Presidential Control Across Policymaking Tools” Florida State University Law Review. Volume 43:1.
https://ir.law.fsu.edu/cgi/viewcontent.cgi?article=2533&context=lr
The judiciary imposes strong legal constraints on presidential control over rulemaking by requiring
agencies to provide a contemporaneous, reasoned justification 35 and exercising “hard-look review”36
over rulemaking decisions. In Massachusetts v. Environmental Protection Agency, 37 the Supreme Court was even willing to
closely scrutinize an agency’s decision not to engage in rulemaking, ultimately rejecting what was widely
viewed as a presidentially directed decision not to regulate greenhouse gases .38

That ev describes the plan because it establishes a hard look step two --- but the
vagueness inherent in ascertaining what current deference actually is means that
excluding our aff would exclude the entire deference part of the topic
Bednar 17 [Nicholas R. Bednar, J.D., University of Minnesota Law School, B.A., University of Minnesota, “The Clear-Statement Chevron Canon,”
Volume 66, Issue 3, Spring 2017: Twenty-Sixth Annual DePaul Law Review Symposium]

Chevron is, and always has been, about framing. The seminal 1984 case, Chevron U.S.A. Inc. v. Natural Resources Defense Council, Inc., 1
created the namesake doctrine’s two steps: first whether Congress has directly spoken to the precise question at issue, and second whether the
agency’s interpretation of the statute is reasonable.2 Justice Stevens, who authored the Court’s opinion, never intended Chevron to depart
from prior precedent.3 But his equivocal opinion has generated much debate about Chevron’s proper application.4
[Footnote 4] 4. See generally Kenneth A. Bamberger & Peter L. Strauss, Chevron’s Two Steps, 95 VA. L. REV. 611 (2009) (arguing against One-
Step Chevron); Gary S. Lawson, Reconceptualizing Chevron and Discretion: A Comment on Levin and Rubin, 72 CHI.-KENT L. REV. 1377 (1997)
(endorsing Levin's conception of Step Two Chevron); Ronald M. Levin, The Anatomy of Chevron: Step Two Reconsidered, 72 CHI.-KENT L. REV.
1253 (1997) (proposing that Chevron has one primary step and an optional step); Richard M. Re, Should Chevron Have Two
Steps?, 89 IND. L.J. 605 (2014) (advocating for a hard look Step Two); Matthew C. Stephenson & Adrian Vermeule, Chevron
Has Only One Step, 95 VA. L. REV. 597 (2009) (proposing a One-Step Chevron). [End Footnote 4]
All standards of review, including Chevron, are malleable and become confused as reviewing judges respond differently to new situations.5 In
Universal Camera Corp. v. NLRB, 6 Justice Frankfurter described standards of review as a “mood” that “only serve as a standard of judgment
and not as a body of rigid rules assuring sameness of application.”7 Correspondingly, judges can disagree about the substance of Chevron’s
steps but reach the same outcome in any given case. Over time, scholars and jurists have produced a number of different substantive
frameworks of Chevron, all of which reflect different interpretations of Chevron’s vague directive.8

A restriction is a legal rule from an external source


Oxford Advanced Learner’s Dictionary, 13 (“restriction” http://oald8.oxfordlearnersdictionaries.com/dictionary/restriction)

restriction, constraint, restraint or limitation?


These are all things that limit what you can do. A restriction is a rule or law that is made by somebody in authority. A constraint is something that
exists rather than something that is made, although it may exist as a result of somebody's decision. A restraint is also something that exists: it
can exist outside yourself, as the result of somebody else's decision; but it can also exist inside you, as a fear of what other people may think or
as your own feeling about what is acceptable: moral/social/cultural restraints. A limitation is more general and can be a rule that somebody
makes or a fact or condition that exists.
2ac T Curtiss Wright
Aspec
2ac Adv cp
Retaining FTC authority crucial to data protection that enables cloud computing---
flexible policy framework is key to innovation
Watson 18 [Shaundra Watson, “As AI Grows, Privacy, Security, and Global Data Flows Are Key to Innovation, BSA Tells FTC,” BSA TechPost,
August 22, 2018, https://techpost.bsa.org/2018/08/22/as-ai-grows-privacy-security-and-global-data-flows-are-key-to-innovation-bsa-tells-ftc]

The FTC has recognized that a critical part of ensuring data-driven innovation continues to thrive is protecting the
privacy and security of consumers’ personal information. BSA agrees, and we highlighted in our comments on the upcoming hearings
that the FTC should continue its record of promoting the flexible , technology-neutral, risk-based privacy and
security frameworks that are best suited to protect consumer data in a global , dynamic marketplace . An
important part of the shifts in the marketplace is the growth of advanced techniques enabling data analysis and use, such as
AI and cloud computing . The increased use of data has provided immense economic and societal benefits across a wide
swath of industries that benefit consumers. The cloud not only enables these AI tools, but it also provides small and medium-sized
enterprises access to flexible, scalable computing resources. The reduced cost to small businesses of accessing the same infrastructure in use by
large enterprises, along with the enhanced capability to use advanced techniques to develop innovative products and services, contributes
significantly to more robust competition in the marketplace. These innovations have occurred under flexible policy
frameworks that spur data-driven innovation .

Innovation stagnation wrecks the industry


Linthicum 13 [David "Dave" S. Linthicum is SVP of Cloud Technology Partners and an internationally recognized cloud industry expert and
thought leader, author and co-author of 13 books on computing, including the best-selling Enterprise Application Integration, keynotes at many
leading technology conferences on cloud computing, SOA, enterprise application integration and enterprise architecture, “Three things that
could derail cloud computing success,” August 2013, https://searchcloudcomputing.techtarget.com/tip/Three-things-that-could-derail-cloud-
computing-success]

The hope is that enough innovative new companies replace those that are taken out of play to provide the creativity and innovation
the still-emerging cloud computing space requires. If the innovation is not there, the industry will stagnate
and we'll return to the same old patterns of traditional IT , which are not working very well right now. 2. The incorrect spinning of
the facts to sell your own technology. There is a saying I have: "Stupid or liar -- which is it?" This is typically running through my mind as I listen
to a technology vendor's pitch that begins with his or her company's own view of the cloud computing space and how those who consume the
technology must think so that their technology is seemingly the right fit. I'm told that nobody wants public cloud because "everyone knows they
are unsecure." Or because "everyone knows that it's really not cloud computing." Or my favorite: "We're providing our hardware and software
on a rental-basis now. Thus, it's on-demand, and thus it's a cloud." Really? The trouble is when those who are not in the know believe the spin
and the hype, and thus make poor
technical choices (see next issue). These choices typically lead to failure -- a failure to
understand the technology and its proper use, which ends up getting blamed on the new cloud
computing technology. It was never the cloud computing technology in the first place; instead it was a square piece of traditional
hardware or software that was forced into a round cloud computing hole. 3. Less-than-knowledgeable people making strategic calls around the
use of cloud computing. You know how this goes: Those in the organization who have the most political pull are selected as the ones to pick the
technology. However, they may not have the knowledge or the skills to evaluate and select the right technology. This has been going on for
years, and cloud computing is no exception, other than the fact that you can now make decisions that will truly kill your business. The
intentions are always good, but selecting
the right path to cloud computing requires a deep understanding of the
technology and emerging best practices . You need to be willing to understand your requirements in great detail, look at your existing
infrastructure and applications and mash all that up with the existing and future cloud computing technology.

Strong risk reduction key to prevent AI-driven extinction---it’s uniquely likely, but
success solves every impact
Pamlin, 15 – Dennis Pamlin, Executive Project Manager of the Global Risks Global Challenges Foundation, and Stuart Armstrong, James
Martin Research Fellow at the Future of Humanity Institute of the Oxford Martin School at University of Oxford, Global Challenges Foundation,
February, http://globalchallenges.org/wp-content/uploads/12-Risks-with-infinite-impact.pdf
Despite the uncertainty of when and how AI could be developed, there are reasons to suspect that an AI with human-comparable skills would
be a major risk factor . AIs would immediately benefit from improvements to computer speed and any computer research. They could be
trained in specific professions and copied at will, thus replacing most human capital in the world, causing potentially great economic
disruption . Through their advantages in speed and performance , and through their better integration with standard computer software,
they could quickly become extremely intelligent in one or more domains (research, planning, social skills...). If they became skilled at
computer research, the recursive self-improvement could generate what is sometime called a “singularity”, 482 but is perhaps better described
as an “intelligence explosion”, 483 with the AI’s intelligence increasing very rapidly. 484 Such extreme intelligences could not easily be
controlled (either by the groups creating them, or by some international regulatory regime),485 and would probably act in a way to boost their
own intelligence and acquire maximal resources for almost all initial AI motivations.486 And if these motivations do not detail 487 the survival
and value of humanity in exhaustive detail, the intelligence will be driven to construct a world without humans or without meaningful
features of human existence. This makes extremely intelligent AIs a unique risk ,488 in that extinction is more likely than lesser impacts . An AI
would only turn on humans if it foresaw a likely chance of winning; otherwise it would remain fully integrated into society. And if an AI had
been able to successfully engineer a civilisation collapse, for instance, then it could certainly drive the remaining humans to extinction . On a
more positive note, an intelligence of such power could easily combat most other risks in this report, making extremely intelligent AI into a
tool of great positive potential as well.489 Whether such an intelligence is developed safely depends on how much effort is invested in AI
safety (“Friendly AI”)490 as opposed to simply building an AI .49
2ac congress cp

Perm do the CP – Judicial restrictions can be imposed by legislature


Kansas vs Buser No Date
No. 105, 982. http://www.kscourts.org/Kansas-Courts/General-Information/105982BuserOrder070115.pdf

This is not to say, however, that all judicial restrictions imposed by the legislature are unconstitutional.
See, e.g., Kan. Const. art. 3, § 3 (supreme court shall have "such appellate jurisdiction as may be provided by law"); K.S.A. 2014 Supp. 26-
504 (eminent domain appeals to the supreme court "shall take precedence over other cases, except . . . other cases in which preference is
granted by statute"); In re N.A.C., 299 Kan. 1100, 1106, 329 P.3d 458 (2014) (expedited appeal of child in need of care case under K.S.A. 2012
Supp. 38-2273[d] ["Notwithstanding any other provision of law to the contrary, appeals under this section shall have priority over all other
cases."]).

Courts would interpret the counterplan narrowly such that any degree of deference
they wanted could be cited
Raso 17 [Connor Raso, attorney at the Securities and Exchange Commission, contributor to the Series on Regulatory Process and Perspective as
part of the Brookings Center on Regulation and Markets, Brookings, “Congress may tell courts to ignore regulatory agencies’ reasoning, but will
it matter?,” Jan 27, 2017, https://www.brookings.edu/research/congress-may-tell-courts-to-ignore-regulatory-agencies-reasoning-but-will-it-
matter]

The Separation of Powers Restoration Act of 2016 (SOPRA) would eliminate Chevron deference (as well as other forms of
deference to agencies, which are not addressed in this post) and require courts to review statutory provisions anew—that is, to give the
agency’s interpretation no consideration at all. Nobody doubts that Congress has the power to eliminate Chevron deference, which is premised
on the notion that Congress intends to delegate to agencies. But there are sharp disagreements as to whether Congress should exercise this
power, with differences resulting in large part from divergent beliefs about the efficacy and legitimacy of the administrative state relative to the
judiciary. Proponents of SOPRA (primarily Republicans) generally argue that unelected administrative agencies have gained too much power by
interpreting vague laws to unduly expand the reach of government, thereby usurping Congressional prerogatives. Opponents of SOPRA (largely
Democrats) respond that the bill would hamper the ability of expert administrative agencies to address pressing public policy problems. This
debate assumes that Chevron really matters, and eliminating it will have a meaningful impact on how courts review agency interpretations of
statutes. There are a number of good reasons to doubt that, though. This post offers three reasons why eliminating Chevron may not have
much impact and then explains why it may indeed have an impact – but one different from what SOPRA proponents expect. First, judges may
actually need to defer to agency interpretations in some cases. Generalist judges appreciate that better-staffed agencies
have more expertise over complicated subjects such as energy pipeline pricing or environmental
science. Looking at such issues anew requires significant time and resources that judges often lack in an era of growing court dockets and
judicial vacancies. These dynamics are very likely to lead judges to defer to agencies in all but name in some
cases, conducting the same analysis without citing the Chevron case. Second and relatedly, judges may
interpret SOPRA itself narrowly to reduce its impact . For instance, judges may decide that agency-specific law at issue (say
the Clean Air Act) in a case compels deference, notwithstanding SOPRA, because Congress clearly sought to delegate interpretive latitude to the
agency. Because specific law generally trumps a general law like SOPRA, the court would defer. Alternatively, judges may decide to
read SOPRA's scope, which applies only to "questions of law," narrowly, and deem many questions to
involve both “law and fact.” Challenges to agency interpretations of statutes often arise at the
intersection of law and policy. SOPRA would not prevent the court from emphasizing agencies’
competence as fact-finders and deferring in such cases .

CP fails – courts disregard interpretive guidance from Congress


Kiracofe 4 [Adam W., Boston University Law School, “The Codified Canons of Statutory Construction: A Response and Proposal to
Nicholas Rosenkranz's Federal Rules of Statutory Interpretation,” 2004, 84 B.U. L. Rev. 571, lexis]

Rosenkranz's proposal asks Congress to create a code of interpretive instructions, which are often defined as statutory directives passed by
legislatures to guide courts in interpreting particular statutes.42 There are multiple types of interpretive instructions, but one particular
distinction is especially important for the purposes of this Note: the distinction between specific interpretive instructions and general
interpretive instructions. Specific interpretive instructions are usually either statute-specific or act-specific and, therefore, only apply to the
statute or act in which they are located.43 The liberal construction clause contained in the Racketeer Influenced and Corrupt Organizations
("RICO") provisions, while arguably poorly drafted, is an example of a specific interpretive instruction because it pertains only to RICO. 4 4
Rosenkranz's proposal, on the other hand, would consist of general interpretive instructions. Under his proposal, the FRSI would apply to every
statute that Congress enacts and has enacted, thus making the rules "generally applicable" to all federal statutes.45 This is a major distinction
between Rosenkranz's proposal and the proposal offered by this Note. The following discussion will explain that a code of general interpretive
instructions-such as the FRSI-is not a realistic option for Congress. 1. Courts Too Often Ignore General Interpretive
Instructions General interpretive instructions are often simply ignored by the courts . They rarely have much of an effect
because courts are disinclined to respect such broad legislation . 46 Courts often consider such congressional efforts to be hasty and
poorly thought out , concluding that Congress likely did not consider the impact that the general interpretive instruction would have on
the specific statute applicable in a particular case. 47 Thus, if Congress passed the FRSI as proposed by Rosenkranz, it is unlikely that
the courts would accord them much respect. 2. An Immediately Applicable General Act Is Too Drastic a Change Of course,
even if the FRSI were given the full force of law by the courts, it certainly does not follow that such an act would be wise. If the interpretive
statutes were applied retrospectively to all past statutes, the whole statutory scheme that the general public relies upon
would be in complete disarray . One reason relied on by Congress and the courts for refusing to effect great changes in the law is that
the public has a great reliance interest in the law remaining in equilibrium. 48 A drastic change in the law would alter many of the
legal relationships that have been formed in reliance on various laws and their subsequent interpretations. This may be why most
courts do not apply such interpretive instructions retrospectively. 49 At most, then, courts typically apply general interpretive instructions only
prospectively. 50 3. Not All Statutes Should Be Interpreted the Same Way Another problem with the general interpretive scheme proposed by
Rosenkranz is that different statutes require different interpretations. That is why the common law developed inconsistent canons of
construction. 51 There are times when Congress desires a very liberal construction of one statute52 and yet wants a very strict construction of
another statute. A general interpretive regime applied across all federal statutes, such as that proposed by Rosenkranz, is simply too broad to
cover the spectrum of legislative needs and purposes. 4. Too Difficult to Enact Express Statements So as to Override Presumptions In response
to the argument that statutes require different interpretations, Rosenkranz would likely argue that Congress should create what he calls "safe
harbors." 53 These are predetermined clear statements that Congress can use to notify courts that it does not want one of the enacted
presumptions to apply to a particular statute. His humorous example is that Congress could require the words "Mother, may I?" to be inserted
into a statute when Congress does not want a certain enacted presumption to apply. 54 In a statute where Congress provides a list of
prohibited conduct, the phrase "Mother, may I?" could be inserted after one of the terms in the list to signal that the enacted interpretive
instruction of noscitur a sociis55 should not apply to that term. 56 The problem with any statutory default rule is that it requires a clear and
express statement to overcome it. Some commentators have argued that these clear statement rules , whether enacted by Congress
or imposed by the courts, impose a high burden on Congress when it attempts to pass legislation to trump the rule.57 Once a default
rule is intact, to override that rule, Congress must pass a statute with plain language, and then also pass the clear statement .
"[I]t is far easier to kill a bill than to pass a bill. Thus, in many circumstances at least, the use of a super-strong clear
statement rule may be . . . 'countermajoritarian.'" 58 Some commentators have even argued that the additional burden of passing a
clear statement implicates constitutional concerns. 59
2ac esr cp
It can’t bind future officials --- Brand X ensures infinite future executive flexibility and
unpredictable policy --- even if they fiat past that, nobody expects that fiat to be
durable
Nielson 18 [Aaron, Associate Professor, J. Reuben Clark Law School, Brigham Young University, “Sticky Regulations,” The University of
Chicago Law Review, 2018, Vol. 85]

C. Commitment Mechanisms in Administrative Law An agency’s ability to create incentives implicates the concept of commitment mechanisms
—that is, devices to make promises credible . There also is a great deal of scholarship on commitment mechanisms—much of which has
been developed in the context of game-theory economics.137 The basic question is how to make promises trustworthy enough that
recipients will believe that should the triggering event happen, the promised reaction will actually occur . A good example arises in
the context of international relations. One nation may threaten another nation with war if the nation does some disfavored act. But how to
make that threat credible so it is not dismissed as empty talk?138 The more confidence that a recipient of a promise has in the promise, the
greater likelihood that the recipient will act in response to the promise. This is true in administrative law, too. Unfortunately, “the commitment
of many regulatory bodies is fragile ; it can break easily under pressure .”139 This is especially so because it is now a black-
letter principle that so long as an agency has discretion over a subject, agency officials in time period one generally cannot bind
future agency officials in time period two . This is true in at least two respects . When it comes to questions of
policy , agencies are free to change course—a point that is especially potent after the Supreme Court’s decision in Fox.140 And when it
comes to questions of law, agencies are also free to change course so long as the statute is ambiguous , even if a court has
already interpreted the statute. This point was made clear in the Supreme Court’s decision in Brand X that agencies are not always bound by
prior judicial interpretations of the statutes they administer.141 Combined, the principles from Fox and Brand X mean agencies have a great
deal of power to change regulatory schemes . In Fox, the Supreme Court addressed a change in FCC policy regarding “indecent
expletives.”142 Initially, the FCC had required that the expletives be repeated, but the agency changed policy and concluded that repetition was
not always required.143 The Second Circuit concluded that this revised policy—which was applied to various instances of expletives on
television—was unlawful because agencies wishing to “reverse[ ] course” face a higher standard of justification than those
creating policy in the first instance.144 The Supreme Court disagreed : “We find no basis in the Administrative Procedure Act or in our
opinions for a requirement that all agency change be subjected to more searching review."145 Although reiterating the principle that "the
requirement that an agency provide reasoned explanation for its action would ordinarily demand that it display awareness that it is changing
position,” and stressing that “the agency must show that there are good reasons for the new policy,” the Fox Court nonetheless declared
that “the agency need not always provide a more detailed justification than what would suffice for a new policy created on a blank
slate.”146 Many believe that Fox makes it easier for agencies to change their minds.147 In any event, Fox shows that so long as agencies
acknowledge that they are changing policy, officials in a later time period are not bound by the decisions reached by their predecessors.

CP leads to low quality regs


Viscusi 15 [University Distinguished Professor of Law, Economics, and Management, Vanderbilt University.
W. Kip. Caroline Cecot, Postdoctoral Research Scholar in Law and Economics, Vanderbilt University. “Judicial Review of Agency Benefit-Cost
Analysis” GEO. MASON L. REV. Vol 22:3. HeinOnline]

Despite the internal guidance and OIRA review , agency compliance with the spirit and letter of the
executive order is not high . An analysis of seventy-four agency BCAs that span the Reagan, first Bush, and Clinton
administrations revealed that many did not provide information on net benefits and policy alternatives. 49
While all BCAs provided at least some information on costs, a much smaller proportion of BCAs in each administration
quantified, much less monetized, benefits.50 Although these statistics do not directly shed light on the quality of the BCAs, they
imply that, in practice, at least some BCAs are not as useful for determining whether regulation would
produce net benefits or whether the chosen policy is welfare maximizing given available alternatives . And
even those BCAs that provide estimates of benefits and costs may have deeper quality issues-they may
be missing important categories of costs or benefits, or the underlying assumptions may be faulty . The
usefulness of BCA in agency decision making hinges on the quality of the BCA; a poor-quality BCA is
unlikely to foster rational policy decisions.
2ac lower courts cp
The counterplan destroys policy uniformity --- lower courts want the aff, not the
counterplan
Barnett and Walker 17 [Kent Barnett, Associate Professor, University of Georgia School of Law, and Christopher J. Walker, Associate Professor,
The Ohio State University Moritz College of Law, “Chevron in the Circuit Courts,” October 2017, Michigan Law Review, 116 Mich. L. Rev. 1, lexis]

Conclusion Let us briefly return to where we began with our findings in Part III--the big picture. We have discussed particular findings and their
implications in each Part. But what broader insights about Chevron Regular and Chevron Supreme can we glean from stepping back and
considering our findings as a whole? We have demonstrated empirically that, contrary to how they fare in the Supreme Court, n286
agencies usually prevail more under Chevron than other standards of review in the circuit courts (at least
when those courts refer to Chevron). n287 This finding is meaningful for agencies and litigating parties because
circuit courts review far more agency statutory interpretations than the Supreme Court . Although we cannot
say in our discussion here how the deference standards affect judicial decisionmaking, we can say outcomes do vary. Because they do,
one leading scholar's call, based on findings from past empirical studies, for practitioners, teachers, courts, and scholars to deemphasize review
standards appears premature. n288 They seem to matter, even if [*71] no one, including us (based on methodological limitations), can yet say
exactly how. If Chevron matters, we should consider whether it is functioning properly. The Supreme Court indicated that Chevron exists
to provide agencies a congressionally delegated space to regulate , where courts keep agencies in their space
without imposing their own policy judgments. n289 The doctrine largely appears to fail at achieving these aims in the
Supreme Court based on its rare invocation n290 and failure to constrict the justices' perceived
preferences. n291 Prior studies of the circuit courts have also found that Chevron does not appear to meaningfully constrict judges from
deciding in accord with their perceived political preferences n292--at least when a judge on a panel with different political preferences isn't on
the panel. n293 Although we leave our ideology data and more sophisticated statistical modeling for future work, our initial, descriptive
findings suggest, based on a larger dataset than in prior studies, that Chevron has some kind of disciplining effect in the aggregate on circuit
courts because agency-win rates are so disparate between when Chevron applies and when it does not,
even when the agency statutory interpretations use the same formal interpretive methods. n294 More
specifically, our thirty-nine-percentage-point difference between agency-win rates under Chevron and de novo review suggests that courts
distinguish looking for the best answer from permitting a reasonable one . n295 If they are able and willing to do
so, then the Supreme Court's recently invoked "stabilizing purpose"--to render outcomes from thirteen
circuit courts more predictable n296 and thereby further the uniformity goals that Peter Strauss highlighted
decades ago n297-- becomes more compelling , regardless of the delegation theory's normative force. n298 Indeed, as federal
dockets have swelled, Chevron may be one more device that federal courts have used to avoid what
they perceive as low-value or low-interest cases. n299 But, at the same time, our data indicate that the Supreme Court
needs to provide better guidance to lower courts if it seeks to create a stabilizing doctrine . The circuit-by-
circuit disparity in the circuit courts' invocation of [*72] Chevron and agency-win rates reveals that Chevron may not be
operating uniformly among the circuits . n300 To ameliorate uniformity, the Court should provide
clearer guidance to numerous issues, which other scholars have noted: What are the "traditional tools of
statutory construction" n301 to which Chevron referred for step one that courts should use ? n302 Should the
long-standing nature of agency interpretations matter? n303 What role exactly should legislative history or a purposivist inquiry have? n304 Is
there an "order of battle" in which the circuit courts proceed through certain steps or interpretive canons to interpret statutes? n305 Is step
two different from arbitrary-and-capricious review and, if so, how? n306 And perhaps more prominently, what role do agency expertise,
formality, and the significance of the question have when determining when Congress has delegated authority to agencies? n307 If
Chevron
is a means of controlling the lower courts, the case for providing more guidance becomes urgent. And our
findings, albeit to a limited degree, suggest that lower courts will view more rule-based guidance as a comforting swaddling
blanket rather than handcuffs . Circuit courts rarely invoked various values--including those mentioned in Barnhart--that they could
have used to gain additional discretion in deciding whether to invoke Chevron or ultimately side with the agency. n308 And they appeared to
largely ignore troubling step-zero questions concerning sensitive matters, perhaps having difficulty discerning the Supreme Court's vague or
inconsistent signals as to these matters. n309 If [*73] Chevron can function as a welcomed supervisory doctrine, the differences between
Chevron Supreme--functioning as a malleable, discretionary canon of construction n310--and Chevron Regular--functioning as precedent--
become less troubling.

The impact is policy predictability and straight turns the rule of law and democracy
Caminker 94 [Evan H. Caminker, Acting Professor, U.C.L.A. School of Law, “Why Must Inferior Courts Obey Superior Court Precedents?,”
Stanford Law Review, April, 1994, 46 Stan. L. Rev. 817, lexis]

Uniformity of federal law interpretation across the nation ought to be considered equally important in preserving
courts' perceived legitimacy. If federal law means one thing to one court but something else to another ,
the public might think either or both courts unprincipled or incompetent , or that the process of
interpretation necessarily is indeterminate. Each of these alternatives subverts the courts' efforts to make their legal rulings
appear objective and principled. n152 Of course, perceived legitimacy is not measurable and is likely affected by a number of variables besides
divergent interpretations by autonomous courts. n153 But at the margin, respect for judicial authority would likely suffer
if persistent interpretive conflicts among the federal courts led the public [*854] to believe that
interpretation is inherently arbitrary and unprincipled. Put succinctly, internal consistency strengthens external
credibility. n154 Cultural desire for a single authoritative voice . Given the Supreme Court's plenary
jurisdiction over federal questions, the present hierarchical judiciary vests in a single court the opportunity to
provide a final, authoritative voice on the meaning of federal law. One result of this arrangement is
nationwide uniformity of interpretation . Yet the presence of a final arbiter may also serve psychological as well as instrumental
purposes. In an uncertain world of indeterminate and shifting norms , having a single oracle to provide us
answers is comforting. To a great degree, this is how today's public perceives the Supreme Court. n155 Arguably, we need the
Court to play this role to maintain a sense of community in our diverse society . n156 Thus the Court's
status as the final authority on the meaning of federal law, which assures a measure of uniformity, may
also reinforce our need to believe that we live under the rule of law . n157

Zero uniqueness for court clog


Walke 16 [John Walke, Clean Air Director and Senior Attorney for the Natural Resources Defense Council, HEARING ON H.R. 4768, THE
“SEPARATION OF POWERS RESTORATION ACT OF 2016”, U.S. House Testimony, 5-17, https://www.nrdc.org/sites/default/files/testimony-
separation-of-powers-restoration-act-20160517.pdf]

It is well-documented that the federal judiciary is overburdened handling current litigation dockets. Chief Justice John Roberts, in his annual report
on the state of the federal judiciary, notes that federal judges are “faced with crushing dockets .”24 Further, the Chief Justice notes that overburdened
court dockets are threatening the public’s interest in speedy, fair, and efficient justice.25 The American Bar Association affirms that the federal
judiciary is overtaxed, and that this problem is compounded by increasing numbers of vacancies on the federal bench.
Specifically, persistently high numbers of judicial vacancies deprive the nation of a federal court system that is equipped to serve the people .
This has real consequences for the financial well−being of businesses and the personal lives of litigants whose cases may only be heard by the federal courts−e.g.
cases involving challenges to the constitutionality of a law, unfair business practices under federal antitrust laws, patent infringement, police brutality, employment
discrimination, and bankruptcy.26
Currently, there are over 87 judicial vacancies on the federal bench.27 The ABA notes that these twin pressures of increased vacancies and overtaxed
dockets, if left unchecked, “inevitably will alter the delivery and quality of justice and erode public confidence in our federal judicial
system .”28
2ac Stare

Zero uniqueness --- they’ve overruled hundreds of cases and trampled on stare decisis
this summer
Struyk 18 [Ryan Struyk, “The Supreme Court has overturned more than 300 rulings. Is Roe next?,” September 5, 2018,
https://www.cnn.com/2018/09/05/app-politics-section/history-overruled-supreme-court-roe/index.html]

A CNN analysis of data from the Congressional Research Service shows overruling Roe would be unusual but far from unprecedented: The
Supreme Court has overruled more than 300 of its own cases throughout American history , including five
dozen that lasted longer than the landmark abortion rights case to this point. In the new document, made public Thursday after it had been
kept confidential by the Senate Judiciary Committee, Kavanaugh made the point that "settled law" is nothing permanent before the court. "I am
not sure that all legal scholars refer to Roe as the settled law of the land at the Supreme Court level since Court can always overrule its
precedent, and three current Justices on the Court would do so. The point there is in the inferior court point," Kavanaugh wrote, responding to
a draft op-ed that was circulated for edits between lawmakers and White House staff. At his confirmation hearing Thursday, Kavanaugh said he
was making a point about overstating the consensus of legal scholars, not about the abortion decision. But that phrase "settled law" has
emerged as a litmus test for judicial nominees to indicate their support for precedent, particularly on the complicated issue of abortion.
Observers say Kavanaugh could be a pivotal swing vote to overrule Roe if a case comes before the Supreme Court. To be sure, the vast majority
of cases decided by the Supreme Court are never overruled. Further, this CNN analysis shows that more than half of the cases that have been
overturned by the high court are overruled within two decades of the initial decisions. Still, 60 cases have been overturned after serving as
established precedent for at least 46 years. Roe, which was decided in January 1973, will reach its 46th anniversary in January 2019. Four out of
every five cases overruled by the Supreme Court were overturned before reaching that 46-year mark. The average case that was overruled by
the Supreme Court stood for 28.7 years before being overruled. Kavanaugh, 53, a former George W. Bush aide who has served for twelve years
on a powerful Washington-based federal appeals court, began confirmation hearings on Tuesday. The term "settled law" has emerged as a test
to indicate respect for established precedent. But, as CNN Supreme Court reporter Ariane de Vogue has explained, agreeing that Roe v. Wade is
"settled law" doesn't preclude overruling it. During the 2016 presidential campaign, both President Trump and Vice President Pence said their
nominees to the Supreme Court would overrule Roe v. Wade. Here's what Maine Sen. Susan Collins, who could be a crucial swing vote in the
Senate confirmation process, told Jake Tapper on CNN's "State of the Union" in July, just a week before Kavanaugh was nominated. COLLINS:
"Well, first of all, let me say that there's big difference between overturning some precedents, such as Plessy vs. Ferguson, which was
overturned in the school desegregation case of Brown vs. the Board of Education, vs. overturning a ruling that has been settled law for 46 years
-- 45 years. And it involves a constitutional right and has been reaffirmed by the court 26 years ago. Indeed, Justice Roberts has made very clear
that he considers Roe v. Wade to be settled law. I would not support a nominee who demonstrated hostility to Roe v. Wade, because that
would mean to me that their judicial philosophy did not include a respect for established decisions, established law." Still, just this summer, a
majority on the Supreme Court, including Chief Justice John Roberts and former Trump nominee Neil Gorsuch, overruled the 1977
precedent of Abood v. Detroit Board of Education in a case on public sector union fees called Janus v. AFSCME.
That precedent had been law of the land for more than four decades. Justice Elena Kagan unleashed a scathing
dissent in that case, writing: "Rarely if ever has the Court overruled a decision -- let alone one of this import -- with so little regard for
the usual principles of stare decisis . There are no special justifications for reversing Abood. It has proved workable. No recent
developments have eroded its underpinnings. And it is deeply entrenched, in both the law and the real world."

Individual decisions don’t affect the courts legitimacy – it’s a combination of 200 years
Erwin Chemerinsky 15: dean of the University of California, Irvine School of Law, an American lawyer and law professor, and prominent scholar
in United States constitutional law and federal civil procedure (Interview with Ronald Collins for Concurring Opinions), 02/02/15, “Unto the
Breach: An interview with the all too candid Dean Erwin Chemerinsky,” https://concurringopinions.com/archives/2015/02/unto-the-breach-an-
interview-with-the-all-too-candid-dean-erwin-chemerinsky.html
The Court’s legitimacy is the product of all that it has done over 200 years . Over this time, it has firmly established
its role. I agree with what John Hart Ely wrote in Democracy and Distrust (1980) that the Court’s legitimacy is robust. Some such as Felix Frankfurter and Alexander Bickel argued that the
Court must be restrained to preserve its fragile legitimacy. Brown v. Board of Education (1954) shows the
fallacy of that position. Nothing the Court has done has been more controversial or done more to
enhance its institutional legitimacy . There are virtually no instances in American history of people disobeying the Court and those that occurred, such as in defiance of desegregation
orders, only enhanced the Court's legitimacy. No single decision (or group of decisions) will seriously affect the Court's legitimacy. I remember after Bush v. Gore hearing people say that the decision would damage the Court's legitimacy. I was skeptical of such claims and I was right. The Court's
approval rating was the same in June 2001, six months after the decision , as it had been in September 2000, three months before the ruling. It
had gone down among Democrats and up among Republicans. It is why I strongly disagree with those who believe that Chief Justice John Roberts changed his vote to uphold the individual mandate in the Affordable Care Act case
so as to preserve the Court’s credibility. He knew that whatever the Court did would please about half the country and disappoint about half the country.
Roberts isn’t a swing vote
Epps 6/30 (Garrett Epps, Professor of constitutional law at the University of Baltimore. “The Post-Kennedy Supreme Court Is Already
Here.” JUN 30, 2018. https://www.theatlantic.com/ideas/archive/2018/06/the-post-kennedy-supreme-court-is-already-here/564176/)
But Kennedy was not the only changed member of the 2017-18 court. The 2018 model John Roberts may no longer be the
imperiously independent John Roberts of former years. Dahlia Lithwick has written persuasively of the
systematic campaign from Republican circles on Capitol Hill and the White House to intimidate Roberts .
Senate Judiciary Committee Chair Chuck Grassley warned Roberts not to utter a peep during the Merrick Garland
controversy; he complied. Donald Trump (who praised the maverick Kennedy) consistently reviled Roberts for his
Obamacare vote. “Congratulations to John Roberts for making Americans hate the Supreme Court because of his BS,” Trump tweeted in
2012; during the campaign, he called the chief justice “an absolute nightmare” and “a disaster.” That campaign of intimidation
may have worked. Roberts’s opinion in Trump v. Hawaii was not just deferential in tone, it was servile ,
almost a plea for mercy.

Kavanaugh controversy and a host of controversial cases thump


Mauro 9/24 [Tony Mauro, based in Washington, covers the U.S. Supreme Court. A lead writer for ALM's Supreme Court Brief, Tony focuses on
the court's history and traditions, appellate advocacy and the SCOTUS cases that matter most to business litigators. “Awaiting a Ninth Justice,
Supreme Court Tinkers With Its Docket,” https://www.law.com/nationallawjournal/2018/09/24/awaiting-a-ninth-justice-supreme-court-
tinkers-with-its-docket/] BJR
The uncertainty surrounding U.S. Supreme Court nominee Brett Kavanaugh’s confirmation may already be affecting the
court’s docket for the term that begins on Oct. 1. Last week, the court pulled several high-profile cases off the list that the
justices were scheduled to consider today at the court’s so-called long conference. That is when the justices evaluate hundreds of petitions filed over the
summer to decide whether to grant review in the coming term. Though the court does not explain why it reschedules or delays the consideration of pending petitions, it might be that the prospect of an eight-member court in the short or long term led
the justices to shelve cases that might result in 4-4 ties. Justices traditionally try to avoid ties because they have
the effect of allowing the lower court ruling to stand , without further resolution of the issue involved. In the past, according to Vinson & Elkins Supreme Court
specialist John Elwood, justices “definitely appear to have rescheduled cases to push off consideration of them, and I could see them rescheduling cases to await the arrival of a new justice.”
But, he added, “There could be other explanations. Rescheduling is about the murkiest Supreme Court practice.” Elwood, a former clerk to Justice Anthony Kennedy, said his understanding is
that any justice can have a case rescheduled. But, he said, “I suspect that the chief justice does most of the rescheduling, since I think he keeps the closest eye on the docket of all the justices.”

Among the cases that were scheduled to be discussed today but were recently rescheduled for a future unspecified date are: ➤➤
ConAgra Grocery Products v. California and The Sherwin-Williams Company v. California, key business cases challenging
California’s use of public nuisance law to exact damages from companies with long-ago involvement in
promoting the use of lead paint. They were taken off the conference list on Sept. 20. ➤➤ Apodaca v. Raemisch and Lowe v.
Raemisch, testing the Eighth Amendment constitutionality of severe solitary confinement for prisoners .
They were taken off the list and rescheduled on Sept. 18. ➤➤ Altitude Express v. Zarda and Bostock v. Clayton County Georgia, asking whether the federal ban on sex discrimination in the workplace includes sexual orientation bias. They
were rescheduled on Sept. 11, four days after Kavanaugh’s hearing ended. ➤➤ Kennedy v. Bremerton School District , a First Amendment

dispute over a public school coach in Washington state who was fired for kneeling in prayer at a football
game. The court rescheduled the case on Sept. 20. Some of the rescheduled cases were ones that court-watchers hoped would
spice up what was shaping up to be an otherwise lackluster term . Several death penalty cases also were delayed. Just last Friday, U.S.
Solicitor General Noel Francisco said at a Federalist Society event, "The docket thus far doesn't currently have the blockbuster cases before the court, but there are several big cases in the pipeline." Some hot-button cases remain untouched on the conference list for today, including Maryland-
National Capital Park and Planning Commission v. American Humanist Association and The American
Legion v. American Humanist Association, a dispute over whether a war memorial in the shape of the
cross on public land in Maryland violates the Establishment Clause of the First Amendment. The court’s
decisions on whether to grant review in the cases discussed today will likely be announced Thursday , the
same day Kavanaugh and his accuser Christine Blasey Ford are expected to testify before the Senate Judiciary Committee.
