
Niles Round 6

1NC
1NC – OFF
1NC – CP – Courts
The United States federal judiciary ought to rule that law enforcement
acquisition of information stored in another country that does not follow the
privacy standards of both countries is unconstitutional based on the First
Amendment.

Courts can rule on the First Amendment.


William Schwartz 20 serves as chair of Cooley’s white-collar defense and investigations group. Andrew Goldstein
and Daniel Grooms are partners in the group. "The New Data Wars: How the CLOUD Act Is Likely To Trigger Legal
Challenges," New York Law Journal, 3-30-2020, https://www.law.com/newyorklawjournal/2020/03/30/the-new-data-
wars-how-the-cloud-act-is-likely-to-trigger-legal-challenges/?slreturn=20200702142503; HS

Venue. Under the current MLAT process in the United States, a U.S. federal district court reviews the
foreign partner’s request not only for compliance with the relevant MLAT, but also for
compliance with U.S. statutory and constitutional law. The CLOUD Act and implementing agreement, in contrast,
purport to circumvent the courts of the country in which the provider is based. Judges in the
United States may not be so quick to agree that they have no role, particularly in cases where a
provider is raising constitutional challenges that English courts may not be as competent to
adjudicate. This raises the possibility that early orders issued under the agreement could face parallel challenges in both U.S. and English courts,
the ramifications of which could undermine both governments’ goals of streamlining data-sharing in criminal investigations. Privilege. Production
orders under the agreement also are likely to raise difficult issues of privilege. Both
the United States and the U.K. have laws
protecting certain categories of privileged information from disclosure, and the text of COPOA
itself provides a specific exception for confidential personal records and items subject to legal
privilege. But the agreement does not specify how decisions about privilege or confidentiality should be made or who should make them. The
issue is particularly tricky because production orders will be served on providers, not their customers, and the orders can be accompanied by non-
disclosure provisions prohibiting their disclosure to the customers whose data is at issue. As a result, providers will have to navigate their own legal
obligations under the agreement, which have the potential to clash with the privacy interests of their customers. There also are important differences
between U.S. and U.K. privilege rules, and the implementing agreement does not attempt to resolve them. For example, the protection of
communications with in-house counsel is broader in the United States than under English law, as is the definition of what constitutes a “client” when
dealing with a company’s employees. U.S. law also provides broader protections for notes of interviews conducted in the course of internal
investigations. Which rules apply, and who decides how to apply them, likely will need to be resolved through litigation. Constitutional
and
Domestic Law Challenges. Depending on the scope and language of a given production order, providers may be able to claim that the
order does not comply with the implementing agreement because the request is overbroad or seeks evidence in an investigation not satisfying the
criteria of the agreement. For example, providers may seek to challenge orders under COPOA on the basis that the order is not in the interest of justice
considering the benefit likely to accrue from the data’s use in the investigation or proceedings. Providers
also may argue that
production orders violate fundamental or constitutional rights. For example, U.S. recipients of
U.K. orders, particularly those with non-disclosure provisions, may wish to raise a constitutional
challenge in U.S. courts based on the argument that the order, the CLOUD Act, or the
implementing agreement violates the First Amendment or due process rights.
1NC – CP – HIA
The United States federal government ought to commission a binding Health
Impact Assessment under the National Environmental Policy Act to evaluate
whether the United States federal government ought to require that United
States law enforcement acquisition of communications data for criminal
investigations meets privacy standards of both the United States and the
country in which the data is stored. The United States federal government
ought to implement the recommendations.

HIAs on criminal justice are key – the counterplan codifies them into a norm
Hom et al 17 – *MPH from Department of Health Services, School of Public Health, University
of Washington. **Affiliate Professor, Env. and Occ. Health Sciences, and Affiliate Professor,
Urban Design and Planning, at the University of Washington. ***Clinical Professor, Health
Services, and Clinical Professor, Env. and Occ. Health Sciences, at the University of Washington.
****Public Health, Seattle & King County. (*Eva Hom, **Andrew L. Dannenberg, ***Stephanie
Farquhar & ****Lee Thornhill, 02/20/17, “A Systematic Review of Health Impact Assessments in
the Criminal Justice System,” https://link.springer.com/article/10.1007/s12103-017-9391-
9#:~:text=Health%20impact%20assessments%20have%20potential,increasing%20equity%20and
%20improving%20health.) np

Since the 1970s, our society has experienced a dramatic increase in criminal justice involvement and
mass incarceration (Alexander, 2011; Dumont, Brockmann, Dickman, Alexander, & Rich, 2012; Golembeski & Fullilove, 2005).
By 2014, the total incarcerated1 population in the United States reached 2,224,400; this does not include over 4.5 million individuals
under community supervision 2 (U.S. Department of Justice et al., 2015). Involvement
with the criminal justice
system,3 whether accused, arrested, or convicted of a criminal act, can have prolonged impacts
on many aspects of a person’s life – including health . For each of the over 6 million individuals who have
experienced the criminal justice system, the health and social wellbeing of their friends, family, children, and neighbors are impacted
as well (Table 1). As
an increasing proportion of the population experiences incarceration, the
ramifications for public health become more evident. Only recently have there been more collaborative efforts
between the public health and criminal justice systems to address health issues at various points along the criminal justice continuum
(Akers & Lanier, 2009; Carr, 2007; Hirshfield, 2004; Vaughn, DeLisi, Beaver, Perron, & Abdon, 2012; Vaughn, Salas-Wright, DeLisi, &
Piquero, 2014). There are shared root causes of poor health and high incarceration such as lack of job opportunities, low-quality
education, and residing in resource-deprived neighborhoods that need to be acknowledged and addressed by both fields (Braveman,
Egerter, & Williams, 2011; Cloud, 2014; Graves, 2015). While
burgeoning research reveals that historic and
current criminal justice policies and practices have contributed to population health problems,
the efforts to incorporate this knowledge into reformative solutions have lagged behind. The
associations between criminal justice policies or programs and deleterious health outcomes may
not always be apparent to decision makers or community members. Health impact assessments
offer one approach to elucidate and address these associations. Health Impact Assessments A health impact
assessment (HIA) is a systematic process or tool used by a wide range of groups such as governmental entities,
non-profit organizations, or community organizations. It uses a combination of data, analytical methods,
and stakeholder input to determine if a proposed policy, plan, program, or project will affect the
public’s health and to form recommendations directed at policy planners and decision makers to
monitor or mitigate those effects (Dannenberg & Wernham, 2012; Human Impact Partners, 2013b; National Research
Council, 2011). The evolution of HIAs started with the environmental impacts assessment (EIA), a tool
measuring anticipated effects on the environment of a proposed development or project. In the 1970s, EIAs became a
requirement for major federal environmental projects and policies with the passing of the 1969 National Environmental
Policy Act (NEPA) in the United States (Dannenberg & Wernham, 2012). Other countries followed with the
adoption of their own EIA regulations. The public health community felt that the EIAs neglected
to focus on human health impacts and independently worked on impact assessments that complemented EIAs on a
proposed project or policy plan. This was also a time when the public health field began to recognize social determinants of health
and apply an ecological view to understanding health challenges in other disciplines. HIAs have had a longer application in the
construction of dams, airports, and transportation, but the social systems theory provided a theoretical basis for expanding the
application to other settings like criminal justice (Morash, 1983). In criminal justice settings, impact assessments
began as an evaluative process to identify inter-system effects and to modify programs for
successful innovations (Morash & Anderson, 1977). The HIA process typically involves six steps: 1) screening
(i.e. identifying proposed plans, policies, or projects where a HIA would be beneficial), 2) scoping (i.e. prioritizing health impacts
to focus on), 3) assessing risks and benefits, 4) recommendations, 5) reporting results, and 6)
monitoring and evaluation of HIA’s impact on the decision (Bhatia et al., 2014; HIP, 2013b; NRC, 2011) (Fig. 1).
HIAs are most often conducted prospectively on a proposed policy or project where decision-
makers are open to considering recommendations regarding health effects (Dannenberg, 2016).
However, retrospective HIAs have also been conducted to revise or reevaluate an existing policy or project (Signal, Langford,
Quigley, & Ward, 2006). HIAs can be initiated by a variety of organizations or groups including local, state, or tribal health
departments, academic groups, and community-based organizations (Dannenberg et al., 2014). HIAs are well-established and
standardized in Europe, Canada, Australia, and Thailand; most HIAs conducted in the past two decades have occurred in those
countries. Most HIAs have been conducted in the transportation, housing or urban planning, and environmental sectors
(Dannenberg & Wernham, 2012; NRC, 2011). Using HIAs to provide health-focused recommendations to decision makers has
become more common in the United States, but HIAs are currently conducted on a voluntary basis with no
requirements or legislative mandates (Bhatia & Corburn, 2011; Dannenberg & Wernham, 2012).

Expanding reviews guarantees preparedness for disasters


PHCI 16. Planning Healthy Communities Initiative – a Comprehensive Health Impact
Assessment Initiative led by Rutgers University with support from the Health Impact Project, a
collaboration of the Robert Wood Johnson Foundation and The Pew Charitable Trusts.
“Advancing Integration of Health and Community Engagement in Pre-Disaster Recovery and
Resilience Planning: Opportunities for Health Impact Assessment.” July 2016.
http://phci.rutgers.edu/wp-content/uploads/2016/07/PHCI-report-Summary-Revised.pdf

In general, the two HIAs led by Rutgers point to the promise of HIA to improve pre-disaster resilience planning in
several ways. The HIAs: • Allowed stakeholders, leaders and decision-makers to consider health as a dedicated
and explicitly desired outcome, when it previously was not part of the decision process; •
Encouraged a structured and “democratic” process for engaging stakeholders (decision-makers,
residents, vulnerable populations alike) in sometimes difficult and contentious conversations about resilient community planning;
• Highlighted the potential inequities of resilience planning outcomes, especially with regard to health impacts
(including mental health and social determinants of health); • Provided a forum to introduce evidence-based, forward-looking
information and analysis into community dialogues about resilience planning. Insights from
this initiative point to the concept that HIA is a promising innovation with potentially large benefits, namely benefits related to improving health for
people in disaster impacted communities, and improving the process of pre-disaster resilience planning to ensure engaged communities and
sustainable solutions. Further, the results of the two HIAs suggest that no fundamental changes are needed to make the HIA process useful in support
of disaster recovery policies, plans and projects but it would greatly benefit from a combination of modifications to make it more appropriately fitted
for that purpose. Several specific recommendations are offered to improve the efficacy of HIA to inform
pre-disaster resilience planning: 1. Invest major effort to publicize HIAs among target user groups in
communities of hazard and health professionals that have responsibilities, skills and/or interests in disaster recovery. Lack of awareness of
the HIA process and weak communications between health improvement and hazard
management specialists are notable at present. There is evidence of commitment to closer
engagement of health and hazards agencies at the federal level but practical steps to bring
these communities – and especially their state and local counterparts - into fruitful interaction, are needed. 2. Insert
HIAs into the disaster recovery decision-making system at key points of application. Mainstreaming HIAs throughout the National Disaster
Recovery Process is highly desirable and certain federal initiatives are disproportionately important in
achieving successful recovery from disasters, especially through funding and regulatory
mechanisms. Given the number and variety of post-disaster recovery decisions taken by impacted communities, guidance about the optimal use
of HIAs is not a simple matter. Recovery decisions that are broad in scope and early in the process are likely to have the greatest long-term impact on
human health and wellbeing. HIA offers value in assessing health outcomes of strategic, or policy-related, outcomes (such as changes to the National
Flood Insurance Program) as well as local blueprint-like proposals like local zoning and rebuilding decisions. Federal Task Forces
might be
valuable conduits for recommending wider use of HIAs by those who are engaged in making
more detailed rebuilding decisions later in the recovery process. So might local and regional climate adaptation
plans, Hazard Mitigation Planning, or packages of proposals submitted by states to the CDBG process. 3. Conduct a systematic analysis of the health
impacts of the full range of disaster recovery alternatives, with a view to providing a databank of information about their comparative health outcomes
as well as other consequences. The value of any impact assessment, whether a NEPA-mandated Environmental Impact
Statement or a Health Impact Assessment or other, depends in part on the treatment of alternatives
to the action that is the subject of assessment. Without a comparative analysis of alternatives it
is difficult to confirm whether the contemplated course is the best possible or how far it falls
short of that standard. Ideally, a Health Impact Assessment of recovery alternatives would function as a single reference source for use in
subsequent disaster-recovery HIAs, thereby simplifying and speeding the screening stage of full scale HIAs, which has proven to be the most
problematic part of the process. Modified
“desktop” HIAs could be undertaken now on typical natural
disaster related alternative decision-points in order to inform more comprehensive stakeholder-driven
HIAs for specific strategic policy and/or local communities decision-making, as needed. 4.
Provide decision-makers and HIA users with better guidance for choosing among different kinds of HIAs in post-disaster contexts. Decisions about long-
term recovery that are taken in the wake of disasters pose stiff challenges for the HIA process. Post-disaster
environments are usually
marked by instability, uncertainty and urgency, factors that are not conducive to lengthy in-depth inquiries at a time when
clear thinking is called for. At present there is only limited guidance about which of several different types of HIA (i.e. desktop, rapid, comprehensive,
programmatic) might be best able to provide useful information to decision-makers, either alone or in combination, under different local
circumstances. Community
engagement that provides information inputs from disaster-affected
populations is at the core of the HIA process. Therefore it is imperative that such be facilitated
to the greatest extent possible, though managing community participation that is representative
and informed is perhaps the most time-consuming and potentially fraught part of the process. 5.
Provide appropriate technical support for local advocacy groups that seek to use HIAs for the
joint reduction of health inequities and disaster vulnerabilities. Perhaps one of HIAs central values is that it
uncovers and articulates grass roots perceptions and knowledge pertinent to proposed public choices. A challenge that arises in HIA is how to assure
the reliability of lay information inputs and how to reconcile data that is volunteered by laypersons with data that has been collected and compiled by
systematic scientific means. In non-disaster situations the accuracy and reliability of scientific information is
generally high relative to that of lay populations and there is less need to interrogate its validity.
But the same is not necessarily true in the wake of disasters, where there may be gaps in existing information banks and
insufficient time to gather the data necessary to plug them, especially at the scale where local
decisions about recovery are made. There is significant potential for cross-learning between
HIAs and other technical innovations that also rely on joint efforts by experts and laypersons (e.g
charrettes and visioning exercises; participant mapping using GPS and GIS technology; volunteered geographic information that employs social media
and cloud sources; and real time remotely sensed imagery of ongoing events). It
is recommended that research, education
and training efforts be mounted to achieve those ends.

Natural Disasters cause meltdowns – extinction


Hodges 14 Dave, an established award winning psychology, statistics and research professor
as he teaches college and university classes at both the undergraduate and graduate level, an
established author as his articles are published on many major websites, citing Judy Haar, a
recognized expert in nuclear plant failure analyses, "Nuclear Power Plants Will Become
America's Extinction Level Event", April 18 2014,
www.thelibertybeacon.com/2014/04/18/nuclear-power-plants-will-become-americas-
extinction-level-event/

Fukushima is often spoken of by many as a possible extinction level event because of the radiation threat. Fukushima continues to wreak
havoc upon the world and in the United States as we are being bathed in deadly radiation from this event. Because of Fukushima, fish
are becoming inedible and the ocean currents as well as the prevailing ocean winds are carrying
deadly radiation. Undoubtedly, by this time, the radioactivity has made its way into the transpiration
cycle which means that crops are being dowsed with deadly radiation. The radiation has
undoubtedly made its way into the water table in many areas and impacts every aspect of the
food supply. The health costs to human beings is incalculable. However, this article is not about the devastation at Fukushima; instead, this
article focuses on the fact that North America could have a total of 124 Fukushima events if the
necessary conditions were present. A Festering Problem. Long before Fukushima, American regulators knew
that a power failure lasting for days involving the power grid connected to a nuclear plant, regardless of the cause,
would most likely lead to a dangerous radioactive leak in at least several nuclear power
plants. A complete loss of electrical power poses a major problem for nuclear power plants
because the reactor core must be kept cool as well as the back-up cooling systems, all of which
require massive amounts of power to work. Heretofore, all the NERC drills which test the readiness of a
nuclear power plant are predicated on the notion that a blackout will only last 24 hours or less.
Amazingly, this is the sum total of a NERC litmus test. Although we have the technology needed to harden and protect our grid from an EMP event, whether natural or man-made, we have failed to do so. The cost for protecting the entire grid is placed at about the
cost for one B-1 Stealth Bomber. Yet, as a nation, we have done nothing. This is inexplicable and inexcusable. Our collective inaction against protecting the grid prompted Congressman Franks to write a scathing letter to the top officials of NERC. However, the good
Congressman failed to mention the most important aspect of this problem. The problem is entirely fixable and NERC and the US government are leaving the American people and its infrastructure totally unprotected from a total meltdown of nuclear power plants as
a result of a prolonged power failure. Critical Analyses According to Judy Haar, a recognized expert in nuclear plant failure analyses, when a nuclear power plant loses access to off-grid electricity, the event is referred to as a “station blackout”. Haar states that all 104
US nuclear power plants are built to withstand electrical outages without experiencing any core damage, through the activation of an automatic start up of emergency generators powered by diesel. Further, when emergency power kicks in, an automatic shutdown
of the nuclear power plant commences. The dangerous control rods are dropped into the core, while water is pumped by the diesel power generators into the reactor to reduce the heat and thus, prevent a meltdown. Here is the catch in this process: the spent fuel
rods are encased in both a primary and secondary containment structure which is designed to withstand a core meltdown.
However, should the pumps stop because either the generators fail or diesel fuel is not available, the fuel rods are subsequently uncovered and a
Fukushima type of core meltdown commences immediately. At this point, I took Judy Haar's comments to a source of mine at the Palo Verde Nuclear power
plant. My source informed me that as per NERC policy, nuclear power plants are required to have enough diesel fuel to run for a period of seven days. Some plants have thirty days of diesel. This is the good news, but it is all downhill from here. The Unresolved

Power Blackout Problem. A long-term loss of outside electrical power will most certainly interrupt the
circulation of cooling water to the pools. Another one of my Palo Verde nuclear power plant sources informed me that there is no long term solution to a power blackout and that all bets are
off if the blackout is due to an EMP attack. A more detailed analysis reveals that the spent fuel pools carry depleted fuel for the reactor. Normally, this spent fuel has had time to considerably decay and therefore, reducing radioactivity and heat. However, the newer
discharged fuel still produces heat and needs cooling. Housed in high density storage racks, contained in buildings that vent directly into the atmosphere, radiation containment is not accounted for with regard to the spent fuel racks. In other words, there is no
capture mechanism. In this scenario, accompanied by a lengthy electrical outage, and with the emergency power waning due to either generator failure or a lack of diesel needed to power the generators, the plant could lose the ability to provide cooling. The water
will subsequently heat up, boil away and uncover the spent fuel rods which required being covered in at least 25 feet of water to remain benign from any deleterious effects. Ultimately, this would lead to fires as well and the release of radioactivity into the
atmosphere. This would be the beginning of another Fukushima event right here on American soil. Both my source and Haar shared exactly the same scenario about how a meltdown would occur. Subsequently, I spoke with Roger Landry who worked for Raytheon in
various Department of Defense projects for 28 years, many of them in this arena and Roger also confirmed this information and that the above information is well known in the industry. When I examine Congressman Franks letter to NERC and I read between the
lines, it is clear that Franks knows of this risk as well, he just stops short of specifically mentioning it in his letter. Placing Odds On a Failure Is a Fools Errand An analysis of individual plant risks released in 2003 by the Nuclear Regulatory Commission shows that for 39
of the 104 nuclear reactors, the risk of core damage from a blackout was greater than 1 in 100,000. At 45 other plants the risk is greater than 1 in 1 million, the threshold NRC is using to determine which severe accidents should be evaluated in its latest analysis.
According to the Nuclear Regulatory Commission, the Beaver Valley Power Station, Unit 1, in Pennsylvania has the greatest risk of experiencing a core meltdown, 6.5 in 100,000, according to the analysis. These odds don’t sound like much until you consider that we
have 124 nuclear power generating plants in the US and Canada and when we consider each individual facility, the odds of failure climb. How many meltdowns would it take in this country before our citizens would be condemned to the hellish nightmare, or worse,
being experienced by the Japanese? The Question That’s Not Being Asked None of the NERC, or the Nuclear Regulatory tests of handling a prolonged blackout at a nuclear power plant has answered two critical questions, “What happens when these nuclear power
plants run out of diesel fuel needed to run the generators”, and “What happens when some of these generators fail”? In the event of an EMP attack, can tanker trucks with diesel fuel get to all of the nuclear power plants in the US in time to re-fuel them before they
stop running? Will tanker trucks even be running themselves in the aftermath of an EMP attack? And in the event of an EMP attack, it is not likely that any plant which runs low on fuel, or has a generator malfunctions, will ever get any help to mitigate the crisis prior

to a plethora of meltdowns occurring. Thus, every nuclear power plant in the country has the potential to cause a Chernobyl or Fukushima type accident if our country is hit by an EMP attack.
CAN YOU EVEN IMAGINE 124 FUKUSHIMA EVENTS IN NORTH AMERICA HAPPENING AT THE SAME TIME? THIS WOULD
CONSTITUTE THE ULTIMATE DEPOPULATION EVENT. …And There Is More… The ramifications raised in the
previous paragraphs are significant. What if the blackout lasts longer than 24 hours? What if the reason for the
blackout is an EMP burst caused by a high altitude nuclear blast and transportation comes to a standstill?
In this instance, the cavalry is not coming. Adding fuel to
the fire lies in the fact that the power transformers presently take at least one year to replace.
Today, there is a three year backlog on ordering because so many have been ordered by China. This makes one wonder what the Chinese are preparing for with these multiple
orders for both transformers and generators. In short, our unpreparedness is a prescription for disaster. As a byproduct of my investigation, I have discovered that
most, if not all, of the nuclear power plants are on known earthquake fault lines. All of California's nuclear power plants are located on an
earthquake fault line. Can anyone tell me why would anyone in their right mind build a nuclear power plant on a fault line? To see the depth of this threat you can visit an interactive, overlay map at this site. Conclusion. I have studied
this issue for almost nine months and this is the most elusive topic that I have ever investigated.
The more facts I gather about the threat
of a mass nuclear meltdown in this country, the more questions I realize that are going
unanswered. With regard to the nuclear power industry we have the proverbial tiger by the tail. Last August, Big Sis stated that it is not a matter of if we have a mass power grid take down, but it is a matter of when. I would echo her concerns
and apply the "not if, but when" admonition to the possibility of a mass meltdown in this country. It is only a matter of time until this scenario for disaster comes to fruition.
Our collective negligence and
high level of extreme depraved indifference on the part of NERC is criminal because this is
indeed an Extinction Level Event. At the end of the day, can anyone tell me why would any country be so negligent as to not provide its nuclear plants a fool proof method to cool the secondary processes of
its nuclear materials at all of its plants? Why would ANY nuclear power plant be built on an earthquake fault line? Why are we even using nuclear energy under these circumstances? And why are we allowing the Chinese to park right next door to so many nuclear
power plants?
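The card's closing odds claim — that per-plant core-damage risks around 1 in 100,000 "climb" once you consider 124 plants — can be made concrete with a quick sketch. This is illustrative only and not part of the Hodges evidence: the per-plant figure echoes the NRC number quoted in the card, and independence across plants is an assumption.

```python
# Illustrative sketch (assumption: plant failures are independent).
# Per-plant annual risk p and plant count n are taken from the numbers
# quoted in the card, not from any new data source.

def at_least_one_failure(p_per_plant: float, n_plants: int) -> float:
    """P(at least one failure) = 1 - (1 - p)^n under independence."""
    return 1.0 - (1.0 - p_per_plant) ** n_plants

# 124 plants, each at a 1-in-100,000 annual risk of core damage:
combined = at_least_one_failure(1e-5, 124)
print(f"{combined:.6f}")  # → 0.001239, i.e. roughly 1 in 800 per year
```

The point of the sketch is only that fleet-wide risk scales almost linearly with the number of plants when per-plant risk is small, which is the arithmetic behind "the odds of failure climb."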
1NC – CP – Con Con
Pursuant to Article V of the Constitution, at least two-thirds of the States
should call a limited constitutional convention and at least three-fourths of the
States, excluding California, New York, Texas, and Florida, should ratify a
constitutional amendment requiring that United States law enforcement
acquisition of communications data for criminal investigations meet privacy
standards of both the United States and the country in which the data is stored.

That solves
Vermeule 4 [Adrian, Professor of Law – Harvard Law School, “Constitutional Amendments and
the Constitutional Common Law,” Public Law and Legal Theory Working Paper No. 73, University
of Chicago Law School, September, http://www.law.uchicago.edu/files/files/73-av-
amendments.pdf]

Decision costs and benefits. We must account for the costs of decision making as well as the quality of decisions. A simple view would be
that the formal amendment process is too costly to serve as the principal means, or even as an important means, of
constitutional updating, just as periodic constitutional conventions are too costly to be practical. Dennis Mueller denies this view.

He suggests instead that the decision costs of the formal amendment process are decision benefits:
The U.S. Constitution contains broad definitions of rights, and the task of amending their definitions to reflect changes in the country’s economic, social
and political characteristics has been largely carried out by the
Supreme Court. While this method of updating the
Constitution’s definition of rights has helped to prevent them from becoming hopelessly out of date, it has failed to build the kind
of support for the new definitions of rights that would exist if they had arisen from a wider
consensual agreement in the society. The bitter debates and clashes among citizens over civil rights, criminal rights and abortion
illustrate this point. . . . Although [alternative procedures for constitutional amendment] may appear to involve greater decision-making costs, they
have the potential for building consensus over the newly formulated definitions of rights.82 On this view, it is an illusion that constitutional common
law incurs lower decision costs in the long run, even if a given change may be more easily implemented through adjudication in the short run.
Although at any given time it is less costly to persuade five Justices to adopt a proposed
constitutional change than to obtain a formal amendment to the same effect, the former mode
of change incurs higher decision costs over time, because common-law constitutionalism allows
greater conflict in subsequent periods. A benefit of formal amendments, then, is to more
effectively discourage subsequent efforts by constitutional losers to overturn adverse
constitutional change. Precisely because the formal amendment process is more costly to invoke,
formal amendments are more enduring than are judicial decisions that update constitutional
rules;83 so losers in the amendment process will less frequently attempt to overturn or
destabilize the new rules, in subsequent periods, than will losers in the process of common-law
constitutionalism. This point does not necessarily suppose that dissenters from a given amendment come to agree with the enacting
supermajority’s judgment, only that they accept the new equilibrium faute de mieux. Obviously more work might be done to specify these intuitions,
but it is at least plausible to think that the simplest view, on which formal amendments incur decisionmaking costs that exceed their other benefits, is
untenably crude. The overall picture, rather, is a tradeoff along the following lines. Relative to common-law constitutionalism, the Article V process
requires a higher initial investment to secure constitutional change. If Mueller is right, however, constitutional
settlements
produced by the Article V process will tend to be more enduring over time than is judicial
updating, which can be unsettled and refought at lower cost in subsequent periods.
1NC – DA – 2020
Biden wins now
Brewster 09/10 – covers national politics for Forbes. (Jack, 09/10/20, “Today’s 2020 Election
Polls: Biden Keeps Strong National Lead, But Pennsylvania, Florida Tighten,”
https://www.forbes.com/sites/jackbrewster/2020/09/10/todays-2020-election-polls-biden-
keeps-strong-national-lead-but-pennsylvania-florida-tighten/#5633f0365f1f) np

Polls out Thursday indicate the race between Joe Biden and President Trump appears unchanged
nationally but more volatile at the state level, as another national survey showed the Democratic nominee
ahead by almost double digits, but holding a narrow lead in the battleground states of Florida and
Pennsylvania. KEY FACTS National polls have barely moved in the last several weeks, and a Monmouth
University poll — conducted September 3-8 in a survey of 758 voters — out Thursday showed more of the same: Biden leads Trump 51%-42% among registered voters and 51%-44% among likely voters. In an average of all polls, Biden
leads Trump by 7.5 points. Biden’s polling margin versus Trump has shrunk in Pennsylvania and Florida in recent days, and a host of
state-by-state polls published Thursday reflects that: Biden leads Trump by just 3 points in Pennsylvania (49%-46%) and 2 points in Florida (48%-46%), according to polls conducted by Benenson Strategy Group/GS Strategy Group and published by AARP. In
Florida, Biden’s lead has shrunk over Trump by 2.6 points on average since mid-August, and now stands at just 2.8
points, according to FiveThirtyEight. Biden’s lead over Trump has dropped by 1.4 points in Pennsylvania during the same
period, and now sits at 5 points. Biden leads in Arizona by 1 point (48%-47%), Michigan by 7 points (50%-43%), Wisconsin by 5 points (50%-45%), and is tied with Trump in North Carolina (48%-48%) according to separate polls published by Benenson Strategy Group/GS Strategy Group. SURPRISING FACT Polls out Thursday continue to show Biden polling well with seniors as compared to Hillary Clinton four years ago. The Benenson Strategy Group/GS
Strategy Group Pennsylvania poll shows Biden leading Trump by 11 points (53%-42%). In 2016, Trump won seniors by 10 points, according to exit polls.

Voters will flip based on data privacy.


Chris Burt 20, the editor of Biometric Update "Americans worried about data privacy and election security,
biometric voting support grows," Biometric Update, 2-26-2020,
https://www.biometricupdate.com/202002/americans-worried-about-data-privacy-and-election-security-biometric-
voting-support-grows; HS

“The stakes are getting higher in cybersecurity and data privacy. The California Consumer Privacy Act went into effect in
January, giving some people a greater sense of control over their personal data. However, privacy and cybersecurity concerns related to the 2020 election are growing as the primaries begin and November inches closer,”
states Peter Galvin, chief strategy officer at nCipher Security. “Meanwhile, biometrics like fingerprints and facial recognition make data even more personal. These factors are sounding alarms for consumers, businesses and
government over the state of cybersecurity and data privacy.” While 46 percent of U.S. adults believe they
have the same amount of control over their digital personal data as they did a year ago, 40 percent
believe having the ability to delete data, which is a feature of new data privacy laws, increases their trust. When a company uses encryption, 49 percent
say they trust that their personal data is safe. The survey also confirms often-repeated observations about password resets being common, password
hygiene being weak, and managing traditional log-ins being frustrating. A majority of people surveyed are also willing to accept some degree of data
risk for the convenience of online shopping, online banking, and digital payments, though that acceptance dips below half for online tax payments (47
percent). A
majority are unwilling to accept personal data privacy risk for online voting (54
percent). Trust in government to safeguard personal data seems to have declined, however,
with 30 percent saying they have less trust in the government’s ability to protect their
information, while 17 percent said they have no faith in government data protection at all.
Trump re-election leads to great power nuclear war
Wright 20 – PhD @ Georgetown, Senior Fellow @ Brookings, previously executive director of
studies at the Chicago Council on Global Affairs and a lecturer at the University of Chicago's
Harris School for Public Policy (Thomas, “The Folly of Retrenchment: Why America Can't
Withdraw From the World,” Foreign Affairs, 99.2)//BB

For seven decades, U.S. grand strategy was characterized by a bipartisan consensus on the United
States' global role. Although successive administrations had major disagreements over the details, Democrats and
Republicans alike backed a system of alliances, the forward positioning of forces, a relatively open
international economy, and, albeit imperfectly, the principles of freedom, human rights, and
democracy. Today, that consensus has broken down . President Donald Trump has questioned the
utility of the United States' alliances and its forward military presence in Europe, Asia, and the
Middle East. He has displayed little regard for a shared community of free societies and is drawn
to authoritarian leaders. So far, Trump's views are not shared by the vast majority of leading
Republicans. Almost all leading Democrats, for their part, are committed to the United States'
traditional role in Europe and Asia, if not in the Middle East. Trump has struggled to convert his
worldview into policy, and in many respects, his administration has increased U.S. military
commitments. But if Trump wins reelection, that could change quickly, as he would feel more
empowered and Washington would need to adjust to the reality that Americans had reconfirmed their support
for a more inward-looking approach to world affairs . At a private speech in November, according to press reports,
John Bolton, Trump's former national security adviser, even predicted that Trump could pull out of NATO in a second
term. The receptiveness of the American people to Trump's "America first" rhetoric has revealed that there is a market for a
foreign policy in which the United States plays a smaller role in the world. Amid the shifting political winds, a growing chorus
of voices in the policy community, from the left and the right, is calling for a strategy of global retrenchment,
whereby the United States would withdraw its forces from around the world and reduce its security commitments. Leading scholars
and policy experts, such as Barry Posen and Ian Bremmer, have called on the United States to significantly reduce its role in Europe
and Asia, including withdrawing from NATO. In 2019, a new think tank, the Quincy Institute for Responsible Statecraft, set up shop,
with funding from the conservative Charles Koch Foundation and the liberal philanthropist George Soros. Its mission, in its own
words, is to advocate "a new foreign policy centered on diplomatic engagement and military restraint." Global retrenchment
is fast emerging as the most coherent and readymade alternative to the United States' postwar strategy. Yet pursuing it would be
a grave mistake. By dissolving U.S. alliances and ending the forward presence of U.S. forces, this
strategy would destabilize the regional security orders in Europe and Asia . It would also increase
the risk of nuclear proliferation, empower right-wing nationalists in Europe, and aggravate the
threat of major-power conflict. This is not to say that U.S. strategy should never change. The
United States has regularly increased and decreased its presence around the world as threats have risen and
ebbed. Even though Washington followed a strategy of containment throughout the Cold War, that took various forms, which meant
the difference between war and peace in Vietnam, between an arms race and arms control, and between détente and an all-out
attempt to defeat the Soviets. After the fall of the Soviet Union, the United States changed course again, expanding its alliances to
include many countries that had previously been part of the Warsaw Pact. Likewise, the United States will now have to do less in
some areas and more in others as it shifts its focus from counterterrorism and reform in the Middle East toward great-power
competition with China and Russia. But
advocates of global retrenchment are not so much proposing changes within
a strategy as they are calling
for the wholesale replacement of one that has been in place since World War II. What
the United States needs now is a careful pruning of its overseas commitments-not the
indiscriminate abandonment of a strategy that has served it well for decades . RETRENCHMENT REDUX
Support for retrenchment stems from the view that the United States has overextended itself in countries that have little bearing on
its national interest. According to this perspective, which is closely associated with the realist school of international relations, the
United States is fundamentally secure thanks to its geography, nuclear arsenal, and military advantage. Yet the country has
nonetheless chosen to pursue a strategy of "liberal hegemony," using force in an unwise attempt to perpetuate a liberal
international order (one that, as evidenced by U.S. support for authoritarian regimes, is not so liberal, after all). Washington, the
argument goes, has distracted itself with costly overseas commitments and interventions that breed resentment and encourage
free-riding abroad. Critics of the status quo argue that the United States must take two steps to change its ways. The first is
retrenchment itself: the action of withdrawing from many of the United States' existing commitments, such as the ongoing military
interventions in the Middle East and one-sided alliances in Europe and Asia. The second is restraint: the strategy of defining U.S.
interests narrowly, refusing to launch wars unless vital interests are directly threatened and Congress authorizes such action,
compelling other nations to take care of their own security, and relying more on diplomatic, economic, and political tools. In
practice, this approach means ending U.S. military operations in Afghanistan, withdrawing U.S. forces from the Middle East, relying
on an over-the-horizon force that can uphold U.S. national interests, and no longer taking on responsibility for the security of other
states. As for alliances, Posen has argued that the United States should abandon the mutual-defense provision of NATO, replace the
organization "with a new, more limited security cooperation agreement," and reduce U.S. commitments to Japan, South Korea, and
Taiwan. On the question of China, realists have split in recent years. Some, such as the scholar John Mearsheimer, contend that even
as the United States retrenches elsewhere, in Asia, it must contain the threat of China, whereas others, such as Posen, argue that
nations in the region are perfectly capable of doing the job themselves. Since Trump's election, some progressive foreign policy
thinkers have joined the retrenchment camp. They diverge from other progressives, who advocate maintaining the United States'
current role. Like the realists, progressive retrenchers hold the view that the United States is safe because of its geography and the
size of its military. Where these progressives break from the realists, however, is on the question of what will happen if the United
States pulls back. While the realists favoring retrenchment have few illusions about the sort of regional competition that will break
out in the absence of U.S. dominance, the progressives expect that the world will become more peaceful and cooperative, because
Washington can still manage tensions through diplomatic, economic, and political tools. The immediate focus of the progressives is
the so-called forever wars-U.S. military involvement in Afghanistan, Iraq, Syria, and the broader war on terrorism-as well as the
defense budget and overseas bases. Although the progressives have a less developed vision of how to implement retrenchment than
the realists, they do provide some guideposts. Stephen Wertheim, a co-founder of the Quincy Institute, has called for bringing home
many of the U.S. soldiers serving abroad, "leaving small forces to protect commercial sea lanes," as part of an effort to "deprive
presidents of the temptation to answer every problem with a violent solution." He argues that U.S. allies may believe that the United
States has been inflating regional threats and thus conclude that they do not need to increase their conventional or nuclear forces.
Another progressive thinker, Peter Beinart, has argued that the United States should accept Chinese and Russian spheres of
influence, a strategy that would include abandoning Taiwan. IS LESS REALLY MORE? The realists and the progressives
arguing for retrenchment differ in their assumptions, logic, and intentions. The realists tend to be more pessimistic about
the prospects for peace and frame their arguments in hardheaded terms, whereas the progressives downplay the consequences of
American withdrawal and make a moral case against the current grand strategy. But they share
a common claim: that
the United States would be better off if it dramatically reduced its global military footprint and
security commitments. This is a false promise, for a number of reasons . First, retrenchment
would worsen regional security competition in Europe and Asia. The realists recognize that the U.S. military
presence in Europe and Asia does dampen security competition, but they claim that it does so at too high a price-and one that, at
any rate, should be paid by U.S. allies in the regions themselves. Although pulling back would invite regional security competition,
realist retrenchers admit, the United States could be safer in a more dangerous world because regional rivals would check one
another. This
is a perilous gambit, however, because regional conflicts often end up implicating U.S.
interests. They might thus end up drawing the United States back in after it has left-resulting in a
much more dangerous venture than heading off the conflict in the first place by staying . Realist
retrenchment reveals a hubris that the United States can control consequences and prevent crises from erupting into war. The
progressives' view of regional security is similarly flawed. These retrenchers reject the idea that regional security competition
will intensify if the United States leaves. In fact, they argue, U.S. alliances often promote competition , as in the
Middle East, where U.S. support for Saudi Arabia and the United Arab Emirates has emboldened those countries in their cold war
with Iran. But this logic does not apply to Europe or Asia, where U.S. allies have behaved
responsibly. A U.S. pullback from those places is more likely to embolden the regional powers .
Since 2008, Russia has invaded two of its neighbors that are not members of NATO, and if the Baltic states were no
longer protected by a U.S. security guarantee , it is conceivable that Russia would test the boundaries
with gray-zone warfare. In East Asia, a U.S. withdrawal would force Japan to increase its defense
capabilities and change its constitution to enable it to compete with China on its own, straining
relations with South Korea. The second problem with retrenchment involves nuclear
proliferation. If the United States pulled out of NATO or ended its alliance with Japan, as many realist advocates of
retrenchment recommend, some of its allies, no longer protected by the U.S. nuclear umbrella, would be tempted to acquire nuclear
weapons of their own. Unlike the progressives for retrenchment, the realists are comfortable with that result, since they see
deterrence as a stabilizing force. Most Americans are not so sanguine, and rightly so. There
are good reasons to worry
about nuclear proliferation: nuclear materials could end up in the hands of terrorists, states with
less experience might be more prone to nuclear accidents, and nuclear powers in close
proximity have shorter response times and thus conflicts among them have a greater chance of
spiraling into escalation. Third, retrenchment would heighten nationalism and xenophobia . In
Europe, a U.S. withdrawal would send the message that every country must fend for itself. It
would therefore empower the far-right groups already making this claim-such as the Alternative for Germany, the
League in Italy, and the National Front in France-while undermining the centrist democratic leaders there who told their populations
that they could rely on the United States and NATO. As a result, Washington would lose leverage over the
domestic politics of individual allies, particularly younger and more fragile democracies such as
Poland. And since these nationalist populist groups are almost always protectionist, retrenchment would damage U.S. economic
interests, as well. Even more alarming, many of the right-wing nationalists that retrenchment would empower have called for
greater accommodation of China and Russia. A
fourth problem concerns regional stability after global
retrenchment. The most likely end state is a spheres-ofinfluence system, whereby China and Russia
dominate their neighbors, but such an order is inherently unstable. The lines of demarcation for such spheres tend to
be unclear, and there is no guarantee that China and Russia will not seek to move them outward over time. Moreover, the United
States cannot simply grant other major powers a sphere of influence-the countries that would fall into those realms have agency,
too. If the United States ceded Taiwan to China, for example, the Taiwanese people could say no. The current U.S. policy toward the
country is working and may be sustainable. Withdrawing support from Taiwan against its will would plunge cross-strait relations into
chaos. The entire idea of letting regional powers have their own spheres of influence has an imperial air that is at odds with modern
principles of sovereignty and international law. A fifth problem with retrenchment is that it lacks domestic support. The American
people may favor greater burden sharing, but there is no evidence that they are onboard with a withdrawal from Europe and Asia.
As a survey conducted in 2019 by the Chicago Council on Global Affairs found, seven out of ten Americans believe that maintaining
military superiority makes the United States safer, and almost three-quarters think that alliances contribute to U.S. security. A 2019
Eurasia Group Foundation poll found that over 60 percent of Americans want to maintain or increase defense spending. As it
became apparent that China and Russia would benefit from this shift toward retrenchment, and as the United States' democratic
allies objected to its withdrawal, the domestic political backlash would grow. One result could be a prolonged foreign policy debate
that would cause the United States to oscillate between retrenchment and reengagement, creating uncertainty about its
commitments and thus raising the risk of miscalculation by Washington, its allies, or its rivals. Realist and progressive retrenchers
like to argue that the architects of the United States' postwar foreign policy naively sought to remake the world in its image. But the
real revisionists are those who argue for retrenchment, a geopolitical experiment of unprecedented scale in modern history. If this
camp were to have its way, Europe
and Asia-two stable, peaceful, and prosperous regions that form the
two main pillars of the U.S.-led order-would be plunged into an era of uncertainty .
1NC – K – Set Col
Settler colonialism is a constant state of incarceration for Indigeneity. Criminal
justice reform is a veil of neutrality that enables more insidious forms of settler
elimination.
Anthony, 20—Senior Lecturer in Law at the University of Technology, Sydney (Thalia, “Settler-
Colonial Governmentality: The Carceral Webs Woven by Law and Politics,” Questioning
Indigenous-Settler Relations, Chapter 2, pp 33-53, SpringerLink, dml)
In invoking the concept of ‘settler colonial governmentality’, I rely on the works of settler colonial scholars such as Wolfe (2006), and
critical Indigenous scholars such as Coulthard (2014). Through centering settler colonial relations, both approaches demonstrate that
policy change is contingent on the logic and structures of colonisation—colonising land, affecting
primitive accumulation and eliminating the native. Accordingly, policy change driven by the state
inadequately materialises Indigenous rights and can often set them back, including where
touted as progressive such as gestures towards reconciliation and native title. State reforms remain built on a
‘logic of elimination’ that shapes the settler colonial response to cultures, languages, Country and
sovereignty (Wolfe, 2006, p. 387). Settler colonial theory is concerned with how colonialism lives and breathes in the present. The
continuity of colonial legacies produces discursive and non-discursive strategies , according to
Coulthard (2014, p. 7), to facilitate the ‘ongoing dispossession of Indigenous peoples of their lands and self-determining
authority’. The relegation of historical colonial wrongs to a ‘dark chapter’ in history disconnects
them from ‘continued child removals, mass incarceration and ongoing land dispossession’
(Woolford & Hounslow, 2018, p. 205). Equally, designating contemporary forms of systemic discrimination as
exceptional, annuls the entrenched and intergenerational impact of colonisation on
Indigenous peoples. Contemporary injustices—whether that is Indigenous deaths in custody in Australia, the
removal of Māori babies in Aotearoa or the construction of pipelines across North America— deepen existing scars rather
than create new wounds. Situating the various guises of state policy and legality within this historical and continuing
trajectory enlivens a relational analysis of state containment and control of Indigenous people. This approach shows that
incarceration and maltreatment are not one state policy or directive alone, but in fact are a
longstanding feature of the settler colonial-Indigenous relationship . In addition, however, examining
the lived experience of settler colonial policies demonstrates another form of relationality. It brings to the
fore Indigenous peoples experience as one of subordination, resilience and resistance. In
undertaking a review of the formal statements in the NT Royal Commission, this chapter exposes the ideologies of the state officials
as well as the perspective of the Indigenous young people who were detained in the criminal justice system and removed from their
families, as well as the standpoints of Indigenous Elders, respected persons and leaders. Through this analysis, it is evident that
Indigenous peoples’ resilient cultural and family ties offset the settler state’s logic of elimination. Indigenous identity extends beyond
their relationality to the non-Indigenous settler state and remains attached to their living culture, notwithstanding the hugely
traumatic impact of state violence on Indigenous people in the NT. 3.4 Colonial Carceralism and Its Multiple Guises While
mass
incarceration has become synonymous with contemporary penal policy, for Indigenous people,
incarceration is not an exceptional state of being. The penal phase of mass incarceration is yet
another iteration in Indigenous people’s long experience of the settler state’s impetus to
segregate and contain Indigenous people, whether that be for Christian, civilising, protectionist, welfare or penal
purposes (see Chartrand, 2019). Loïc Wacquant coined the term ‘hyperincarceration’ to describe the phenomenon of over-
representation in the criminal justice system and the broader role of the penal system as an ‘instrument for managing dispossessed
and dishonoured groups’ (Wacquant, 2001, p. 95). For
Indigenous people, management through mass
detention featured long before the war on drugs or neo-liberal class warfare. Declaration of
jurisdiction over Indigenous people by the first settler colonial courts in eastern Australia (New South
Wales) were made in response to Indigenous peoples’ challenges to the capacity of the colonial
administration to imprison them (see R v Bonjon, 1841; R v Murrell, 1836). Since then, Indigenous people
have been incarcerated by settler colonial authorities for administrative and penal ends.
Nonetheless, analogies can be drawn with Wacquant’s description of the ‘never-ending circulus’ between prison and the ghetto for
African Americans (Wacquant, 2001, p. 97). It can be likened to the
symbiosis between Indigenous incarceration
and a network of institutions designed to further Indigenous extinguishment. This racialised strategy
of institutionalisation has barely shifted since early colonisation; it has simply been veiled by the
state’s claims to neutrality. Concealing bias has become more insidious by enabling the state, as
demonstrated above, to blame the Indigenous person for being more criminal while exculpating any
bias on the part of law enforcers. For example, former Chief Minister Giles (2017, p. 3310) told the NT Royal Commission
that his government was not acting in a discriminatory manner towards Indigenous children when they were segregated in isolation
cells, gassed and tortured, it was simply dealing with a problem with children. Simply
following the law enables all
types of wrongs to be rationalised , and was relied on by detention staff to justify all manner of
torture against Indigenous children. The law removes the need for overt politics because law is
conceived by the settler state as a neutral instrument, while it operates as a coercive tool to
disproportionately regulate Indigenous people.

Settler invasion is not an event, but a structuring ontology of elimination that results in genocide
Rifkin 14 – Associate Professor of English & WGS @ UNC-Greensboro [Mark, ‘Settler Common
Sense: Queerness and Everyday Colonialism in the American Renaissance,’ pp. 7-10] mp
If nineteenth-century American literary studies tends to focus on the ways Indians enter the narrative frame and the kinds of
meanings and associations they bear, recent attempts to theorize settler colonialism have sought to shift
attempts to theorize settler colonialism have sought to shift
attention from its effects on Indigenous subjects to its implications for nonnative political
attachments, forms of inhabitance, and modes of being, illuminating and tracking the pervasive
operation of settlement as a system. In Settler Colonialism and the Transformation of Anthropology, Patrick Wolfe
argues, “Settler colonies were (are) premised on the elimination of native societies. The split tensing reflects a determinate feature
of settler colonization. The colonizers come to stay—invasion is a structure not an event” (2).6 He suggests that a
“logic of elimination” drives settler governance and sociality, describing “the settler-colonial will” as
“a historical force that ultimately derives from the primal drive to expansion that is generally glossed as
capitalism” (167), and in “Settler Colonialism and the Elimination of the Native,” he observes that “elimination is an
organizing principle of settler-colonial society rather than a one-off (and superceded)
occurrence” (388). Rather than being superseded after an initial moment/period of conquest, colonization persists since “the
logic of elimination marks a return whereby the native repressed continues to structure settler-
colonial society” (390). In Aileen Moreton-Robinson’s work, whiteness functions as the central way of understanding the domination and displacement of Indigenous peoples by nonnatives.7 In “Writing Off Indigenous Sovereignty,” she argues, “As a regime of power, patriarchal white sovereignty operates ideologically, materially and discursively to reproduce and maintain its
investment in the nation as a white possession” (88), and in “Writing Off Treaties,” she suggests, “At
an ontological level
the structure of subjective possession occurs through the imposition of one’s will-to-be on the
thing which is perceived to lack will, thus it is open to being possessed,” such that “possession . . .
forms part of the ontological structure of white subjectivity” (83–84). For Jodi Byrd, the deployment of Indianness as a
mobile figure works as the principal mode of U.S. settler colonialism. She observes that “colonization and
racialization . . . have often been conflated,” in ways that “tend to be sited along the axis of
inclusion/exclusion” and that “misdirect and cloud attention from the underlying structures of settler
colonialism” (xxiii, xvii). She argues that settlement works through the translation of indigeneity as
Indianness, casting place-based political collectivities as (racialized) populations subject to U.S. jurisdiction and management:
“the Indian is left nowhere and everywhere within the ontological premises through which U.S.
empire orients, imagines, and critiques itself”; “ideas of Indians and Indianness have served as the ontological ground through which U.S. settler colonialism enacts itself” (xix).

The alternative is a cartography of refusal that blockades settler knowledge production
Day 15 Iyko Day’s research focuses on Indigeneity and capitalism in North America. She has
publications that explore the settler biopolitics of landscape art; the settler colonial logics of
Japanese internment in Canada, the US, and Australia. Associate Professor of English at Mount
Holyoke College. Ph.D., M.A., University of California, Berkeley. M.A., Dalhousie University. B.A.,
University of Calgary. Fall 2015. “Being or Nothingness: Indigeneity, Antiblackness, and Settler
Colonial Critique.” Critical Ethnic Studies Journal.
https://www.academia.edu/12871939/Being_or_Nothingness_Indigeneity_Antiblackness_and_
Settler_Colonial_Critique || OES-SW

Beyond this inconsistency, the liberal multiculturalist agenda that Wilderson and Sexton project into
Indigenous sovereignty willfully evacuates any Indigenous refusal of a colonial politics of
recognition. Among other broad strokes, Sexton states, “as a rule, Native Studies reproduces the dominant liberal political
narrative of emancipation and enfranchisement.”46 This provides a basis for Wilderson’s assertion that Indigenous sovereignty
engages in a liberal politics of state legitimation through recognition because “treaties are forms of articulation” that buttress “the
interlocutory life of America as a coherent (albeit genocidal) idea.”47 But such a depoliticized liberal project is frankly incompatible
with Indigenous activism and scholarship that emerges from Native studies in North America. The main argument in Glen Sean
Coulthard’s book Red Skin, White Masks is to categorically reject “the liberal recognition-based approach to Indigenous
self-determination.”48 This is not a politics of legitimizing Indigenous nations through state
recognition but rather one of refusal, a refusal to be recognized and thus interpellated by the
settler colonial nation-state. Drawing on Fanon, Coulthard describes the “necessity on the part of the
oppressed to ‘turn away’ from their other-oriented master-dependency, and to instead
struggle for freedom on their own terms and in accordance with their own values .”49 It is also
difficult to reconcile the depoliticized narrative of “resurgence and recovery” that Wilderson and
Sexton attribute to Indigenous sovereignty in the face of Idle No More, the anticapitalist
Indigenous sovereignty movement in Canada whose national railway and highway blockades
have seriously destabilized the expropriation of natural resources for the global market. These are
examples that Coulthard describes as “direct action” rather than negotiation—in other words,
antagonism, not conflict resolution: They [blockades] are a crucial act of negation insofar as
they seek to impede or block the flow of resources currently being transported to international markets from oil
and gas fields, refineries, lumber mills, mining operations, and hydroelectric facilities located on the dispossessed lands of
Indigenous nations. These modes of direct action . . . seek to have a negative impact on the economic
infrastructure that is core to the colonial accumulation of capital in settler-political economies
like Canada’s.50 These tactics are part of what Audra Simpson calls a “cartography of refusal” that “negates the
authority of the other’s gaze.”51 It is impossible to frame the blockade movement, which has become the greatest threat
to Canada’s resource agenda,52 as a struggle for “enfranchisement.” Idle No More is not in “conflict” with the
Canadian nation-state; it is in a struggle against the very premise of settler colonial capitalism
that requires the elimination of Indigenous peoples. As Coulthard states unambiguously, “For Indigenous
nations to live, capitalism must die.”53
1NC – ADV – Localization
1NC – TURN – Dedev
Digital protectionism causes economic decline
Aaronson 18 (SUSAN ARIEL AARONSON, Elliott School of International Affairs, George
Washington University.)(“What Are We Talking about When We Talk about Digital
Protectionism?”, World Trade Review, 06 August 2018,
doi:10.1017/S1474745618000198)//ASMITH

Digital protectionism may be self-defeating . As noted above, while there is no consensus regarding how to define,
measure, let alone remedy, digital protectionism, a growing number of researchers find costly spillover effects (USITC, 2013, 2014;
OECD, 2017). ECIPE estimated that data
localization regulations cost EU citizens about $193 billion per
year, in part due to higher domestic prices (Bauer et al., 2014). However, the costs of digital protectionism
are not always economic. They can also affect the stability of the internet as a whole (Bildt, 2012).
In 2011, the OECD reported that Egypt’s shutdown of the internet for five days led to ‘direct costs of at minimum USD 90 million’
(OECD, 2011). A 2016 Brookings study estimated the economic impact of internet censorship,
filtering, and blocking was $2.4 billion , which was noted as an understatement of the actual economic damage of lost
tax revenues, the negative impact of worker productivity, among other costs (West, 2016). The OECD’s Sarah Box argues
that such reductions on internet openness can affect global value chains and reduce technology
diffusion, thereby undermining development and trade (Box, 2016: 2). Governments that adopt digital
protectionist strategies could hurt their own consumers and place their firms at a competitive disadvantage since such measures
may increase costs to business (Elms, 2017; Cory, 2017). Digital
protectionism may not only increase costs to
firms, but legal disputes could escalate while individuals and firms could have fewer incentives
to innovate (Hill and Noyes, 2018; de la Chapelle and Fehlinger, 2016). In short, digital protectionist strategies can backfire.
Analysts recognize that there is no easy way to measure internet openness or closure, or the effects of digital protectionism upon
the internet. Nevertheless, they agree that ‘the
dynamism of the internet depends in large part upon its
openness’ and that variants of protectionism, like censorship or data localization, can reduce
that openness (Bildt, 2012; Box, 2016; OECD, 2016). As an example, some Chinese officials admit that the Great Firewall is not
only costly to maintain (with staff and constant vigilance), but also that it may deter foreign investment and innovation. On 4 March
2017, Luo Fuhe, the vice-chairman of the Chinese People’s Political Consultative Conference, the top advisory body to China’s
parliament, stated that China’s sprawling internet censorship regime is harming the country’s economic and scientific progress and
discouraging foreign investment. Fuhe and a few other Chinese leaders acknowledged the Great Firewall may make it harder for
China to become an innovation-driven economy (Gao, 2017; Chu, 2017; Haas, 2017). Some scholars also assert that digital
protectionism undermines internet stability and interoperability . Data localization policies,
filtering, or censorship can alter the architecture of the internet, which has long favoured
technical efficiency over state politics. When officials place limitations on which firms can participate in the network,
they may reduce the overall size of the network and once again potentially raise costs (Hill, 2014: 32; Daigle, 2015; Drake et al.,
2016). Finally, digital
protectionism can undermine access to information, reducing innovation and
the ability of citizens to monitor and hold their governments to account (OECD, 2016; Aaronson, 2016a,
2016b).

Growth is unsustainable AND causes extinction.


Milena Büchs & Max Koch 17. Milena Büchs is Associate Professor in Sustainability, Economics
and Low Carbon Transitions at the University of Leeds, UK. Max Koch is Professor of Social Policy
at Lund University (School of Social Work), Sweden. 2017. Postgrowth and Wellbeing. Springer
International Publishing. CrossRef, doi:10.1007/978-3-319-59903-8.
As the previous chapters have shown, economic
growth is regarded as a prime policy aim by policy makers and
economists because it is thought to be essential for reducing poverty and generating rising living
standards and stable levels of employment (Ben-Ami 2010: 19–20). More generally, support for economic
growth is usually intertwined with advocating social progress based on scientific rationality and reason and
hence with an optimistic view of humans’ ingenuity to solve problems (ibid.: 17, 20, Chap. 5). Growth
criticism thus tends to be portrayed as anti-progress and inherently conservative (ibid.: Chap. 8). While it is important to
acknowledge and discuss this view, it needs to be emphasised that growth criticism is formulated with long-term human welfare in
mind which advocates alternative types of social progress (Barry 1998). This chapter first outlines ecological and social strands of
growth critiques and then introduces relevant concepts of and positions within the postgrowth debate. Ecological Critiques of Growth: Generally speaking, two types of growth criticism can be distinguished: the first focuses on limitations of GDP as a measure of
economic performance; the second goes beyond this by highlighting the inappropriateness of growth as the ultimate goal of
economic activity and its negative implications for environment and society. Since
GDP measures the monetary value
of all final goods and services in an economy, it excludes the environmental costs generated by production. For instance,
as long as there is no cost associated with emitting greenhouse gases , the cost for the
environmental and social damage following from this is not reflected in GDP figures. Worse even,
GDP increases as a consequence of some types of environmental damage : if deforestation and
timber trade increase or if natural disasters or industrial accidents require expenditures for
clean-up and reconstruction, GDP figures will rise (Douthwaite 1999: 18; Leipert 1986). Several critics of GDP as a
measure of progress have proposed alternative indicators of welfare such as the Genuine Progress Indicator, Green GDPs or other
approaches which factor in environmental costs (see Chap. 5 for more details), but they do not necessarily object to economic
growth being the primary goal of economic activity (van den Bergh 2011). In contrast, the idea of ecological limits to growth goes
beyond the critique of GDP as a measure of economic performance. Instead, it maintains that economic
growth should not,
and probably cannot, be the main goal of economic activity because it requires increasing resource
inputs, some of which are non-renewable, and generates wastes, including greenhouse gases,
that disturb various ecosystems, severely threatening human and planetary functioning in the
short and long term. Resources are regarded as non-renewable if they cannot be naturally
replaced at the rate of consumption (Daly and Farley 2011: 75–76). Examples include fossil fuels, earth minerals and metals, and
some nuclear materials like uranium (Daly and Farley 2011: 77; Meadows et al. 2004: 87–107). Based on work by Georgescu-Roegen
(1971), many ecological economists also assume that non-renewable resources cannot be fully recycled because they become
degraded in the process of economic activity. Historically speaking, economic growth is a fairly recent
phenomenon (Fig. 2.1). Since its onset in the late seventeenth century in Europe and mid-eighteenth century in the US (Gordon
2012), it has gone hand in hand with an exponentially increasing use of non-renewable resources
such as fossil fuels (Fig. 4.1). While we are not yet close to running out of non-renewable
resources, over time they will become more difficult and hence more expensive to recover. This
idea is captured by the concept of “ energy returned on energy invested” (EROEI). In relation to
oil for instance, it has been shown that the easily recoverable fields have been targeted first and
that therefore greater energy (and hence financial) inputs will be required to produce more oil.
Over time, the ratio of energy returned on energy invested will decrease, reducing the financial
incentive to invest further in the recovery of these non-renewable resources (Dale et al. 2011; Brandt et
al. 2015: 2). Relevant to this is also the debate about peak oil—a concept coined by Shell Oil geologist Marion King
Hubbert in the 1950s—the point at which the rate of global conventional oil production reaches its
maximum which is expected to take place roughly once half of global oil reserves have been
produced. There is still controversy about whether global peak oil will occur, and if so when, as it is difficult to predict, or get
reliable data on, the rate at which alternative types of energy will replace oil (if this was to happen fast enough, peak oil might not
be reached, if it has not yet occurred), the size of remaining oil reserves and the future efficiency of oil extraction technologies
(Chapman 2014). However, it is plausible to assume that oil
prices will rise in the long term if conventional oil
availability diminishes, while global demand for oil increases with continuing economic and
population growth. Since economic growth in the second half of the twentieth century required
increasing inputs of conventional oil, higher oil prices would have a negative impact on growth unless
alternative technologies are developed that can generate equivalent liquid fuels at lower prices (Murphy and Hall 2011). Some
scholars have criticised the focus on physical/energy resource limitations as initially highlighted in the “limits to growth” debate
(Meadows et al. 1972) and state that instead catastrophic climate change is likely to be a more serious and immanent threat to
humanity (Schwartzman 2012). The main arguments here are first that much uncertainty remains about the potential and timing of
peak oil, future availability of other fossil fuels and development of alternative low energy resources, while the
impacts of
climate change are already immanent and may accelerate within the very near future. Second, even if
peaks in fossil fuel production occurred in the near future, remaining resources could still be exploited to their
maximum. However, this would be devastating from a climate change perspective as, according to the latest IPCC
scenarios, greenhouse gas emissions need to turn net-zero by the second half of this century for
there to be a good chance to limit global warming to 2° Celsius (and ideally, below that) (Anderson
and Peters 2016). It is telling that some of the more recent debates about ecological limits to growth put much more emphasis on
environmental impacts of growth, rather than on peak oil or other resource limitations (Dietz and O’Neill 2013). Differently put,
limits of sinks, especially to absorb greenhouse gases, and to the regeneration of vital ecosystems are now
attracting greater concern, compared to limits of resources. Growing economic production generates increasing pressures
on the environment due to pollution of air, water and soil, the destruction of natural habitats and landscapes, for instance, through
deforestation and the extraction of natural resources. Therefore, growth often also threatens the regeneration of renewable
resources such as healthy soil, freshwater and forests, as well as the functioning of vital ecosystems and
ecosystems services such as the purification of air and water, water absorption and storage and
the related mitigation of droughts and floods, decomposition and detoxification and absorption
of wastes, pollination and pest control (Meadows et al. 2004: 83–84). Recent research on planetary
boundaries has started to identify thresholds of environmental pollution or disturbance of a
range of ecosystems services beyond which the functioning of human life on earth will be put at
risk. Rockström and colleagues have identified nine such “planetary boundaries”—“climate change; rate of biodiversity loss
(terrestrial and marine); interference with the nitrogen and phosphorus cycles; stratospheric ozone depletion; ocean acidification;
global freshwater use; change in land use; chemical pollution; and atmospheric aerosol loading” (Rockström et al. 2009: 472). They
also present evidence according to which three of these boundaries—climate change, rate of biodiversity loss and the nitrogen cycle
—have already reached their limits (Rockström et al. 2009). Of those three thresholds, climate change has received most attention.
The 5th Assessment Report of the Intergovernmental Panel on Climate Change (IPCC 2014) concluded that global temperatures have
risen by an average of 0.85° since the 1880s (while local temperature increases can be much higher than that) and that the
concentration of greenhouse gases in the atmosphere has reached unprecedented levels over the last 800,000 years—that of CO2
has now reached 405.6 parts per million (NASA, January 2017, Fig. 4.2), far surpassing the level of 350 ppm which is considered safe
by many scientists (Rockström et al. 2009). The IPCC report also maintained that humans very likely contributed to at least 50% of
global warming that occurred since the 1950s (IPCC 2014: 5). A range of climate change impacts can already be observed, including a
26% increase of ocean acidification since industrialisation; shrinking of glaciers, Greenland and Antarctic ice sheets, as well as arctic
sea ice; and the rise of sea levels of 19 cm since 1901. This is projected to increase by an additional 82 cm by the end of this century
at current levels of greenhouse gas emissions (ibid.: 13). Climate change impacts are already felt with increased occurrences of heat
waves, heavy rain fall, increased risk of flooding and impacts on food and water security in a number of regions around the world. It
is projected that with a rise of 2° of global temperatures, 280 million people worldwide (with greatest numbers in China, India and
Bangladesh) would be affected by sea level rise, escalating to a projected 627 million people under a 4° scenario (Strauss et al. 2015:
10). At the 21st Conference of Parties of the United Nations Framework Convention on Climate Change in Paris in 2015,
representatives agreed that action should be taken to limit rise of global temperatures to 2° and to “pursue efforts” to limit it to 1.5°. [Fig. 4.2: Concentration of CO2 in the atmosphere. Source NASA, available from https://climate.nasa.gov/vital-signs/carbon-dioxide/; CO2 levels reconstructed from measures of trapped air in polar cap ice cores.]
This has been adopted by 196 countries, but immense efforts and very radical reductions of greenhouse gas emissions will be
required to comply with the agreement. Even if net greenhouse gas emissions were reduced to zero, surface temperatures would
remain constant at their increased levels for hundreds of years to come and climate change impacts such as ocean acidification and
rising sea levels would continue for hundreds or even thousands of years once global temperatures are stabilised; moreover, a range
of climate change impacts are deemed irreversible (IPCC 2014: 16). One
controversial question in the debate about
economic growth and environmental impacts has been whether growth can be decoupled from
the damage it causes. Important to this debate is the theory of the Environmental Kuznets Curve
which applies Simon Kuznets’ hypothesised inverted u-shaped relationship between economic development and income inequality
to the relationship between economic development and environmental degradation. According to this theory, environmental
degradation is low in the early phases of economic development, then rises with increasing development up to a certain point,
beyond which it falls again with advancing development because more resources can be invested to render production and
consumption more efficient and less polluting. Therefore, this theory suggests that it is possible to decouple economic growth
(measured in GDP) from its environmental implications. The counter-argument to this
theory is that it does not take into
account the difference between relative and absolute decoupling. Relative decoupling refers to
the environmental impacts generated over time per unit of economic output, for instance CO2
emissions per million of US$. In contrast, absolute decoupling would examine aggregate environmental
impact, compared to total economic output over time. Here it has been argued that while relative decoupling
may be possible as the environmental impact per unit of economic output decreases over time due to efficiency gains, absolute
decoupling is much harder to achieve while growth continues. Indeed, there
is no evidence for absolute decoupling
as total environmental impacts, for instance total global CO2 emissions, are still rising with rising global GDP
(Jackson 2011: 67–86). This is partly due to rebound effects which we discussed in Chap. 2: rising consumption because
the increase in efficiency has made it cheaper to produce/consume (Jackson 2011: 67–86; see also Czech 2013: Chap. 8 criticising
“green growth”). Furthermore, if decoupling is examined at the country level, one would need to take consumption-based resource
use/emissions into account rather than production-based impacts. Substantial
environmental impacts related to
everything that is consumed in rich countries occur in developing countries from which goods
are imported. A focus on production-based environmental impacts would hence be misleading as
it ignores the environmental impacts that relate to a country’s living standards and that
occur outside of that country. Social Critiques of Growth: Economic growth has not only been criticised from an
ecological perspective, but also from an individual and social wellbeing point of view. Here, we can again distinguish a critique of
GDP as a measure of wellbeing and a wider critique which highlights potential negative consequences of economic growth for
human wellbeing. Several scholars have argued that GDP is an inadequate measure of prosperity or wellbeing because it only
includes market transactions and ignores activities of the informal economy in households and the volunteering sector which make
an important contribution to individual and social wellbeing (Stiglitz et al. 2011; van den Bergh 2009; Jackson 2011). It also excludes
the contribution of certain government services that are provided for free (Douthwaite 1999: 14; Stiglitz et al. 2011: 23), and the
roles of capital stocks and of leisure in generating welfare (Costanza et al. 2015: 137). Furthermore, all market transactions make a
positive contribution to GDP, regardless of whether expenditures increase or decrease welfare. Similar to the way in which
environmental costs of growth are either excluded from GDP or even increase it, expenditures that arise from road accidents,
divorces, crime, etc., contribute positively to GDP (ibid.: 133). The focus on market transactions also means that an increasing
marketisation (or “commodification”) of an economy will be reflected in a rise of GDP, which may or may not be related to actual
“welfare” outcomes (Stiglitz et al. 2011: 49). It also implies that GDP is an insufficient cross-national comparator for the quality of
life, as it does not take into account the different sizes of the informal economy across countries (ibid.: 15). Furthermore, GDP does
not indicate how income and consumption are distributed in society (Stiglitz et al. 2011: 44). This implies that a rise of GDP can be
consistent with a rise of inequality of income and wealth. However, if greater inequality has negative
impacts on social wellbeing (Wilkinson and Pickett 2009), this would be masked by rising GDP figures (Douthwaite 1999: 17). An
even more fundamental criticism of GDP as a measure of wellbeing is that it focuses on the accumulation of money or wealth and
thus on the material aspects of wellbeing. Such a narrow conception of the goals of economic activity and wellbeing has been
criticised early on in the history of economic thought, e.g. by Aristotle’s distinction between oikonomia and chrematistics. The latter
refers to the accumulation of wealth and was regarded by him as an “unnatural” activity which did not contribute to the generation
of use value and wellbeing (Cruz et al. 2009: 2021). The argument that wider conceptions of wellbeing and prosperity are required
has also become relevant for contemporary critiques of economic growth (Jackson 2011; Paech 2013; Schneider et al. 2010) as we
will discuss this in more detail in Chap. 5. Arguments About the Psychological and Social Costs of Growth: The broader social critique
of economic growth highlights potential “social limits” to or even negative consequences of economic growth for individual and
collective wellbeing. The term “social limits to growth” was coined by Fred Hirsch (1976). He argued that the benefits of growth are
initially exclusive to small elites and that these benefits disappear as soon as they spread more widely through mass consumption.
For instance, only few people can own a Rembrandt painting; holiday destinations are more enjoyable when they are not overrun by
hordes of other tourists; there are only few leadership positions, etc. From this perspective, there are “social limits” to the extent to
which the benefits of growth can be socially expanded and equally shared. Other scholars have expressed concern about individual
and collective social costs of economic growth. First, there is the argument that the need to keep up with ever-rising living standards
and new consumer habits, “keeping up with the Joneses”—a lot of which is seen to be driven by advertisement and social pressure
rather than real needs, for instance fashionable clothing or gadgets—can generate stress and increase the occurrence of mental
disorders (James 2007; Offer 2006; Kasser 2002). Second, it has been argued that economic growth can
imply wider social costs. For instance, with its emphasis on individual gain, market relations and competition, and the need that it
generates for spatial mobility (e.g. for successful participation in education and labour markets), it is feared to undermine moral and
social capital and put a strain on family and community relations, potentially even leading to increasing divorce and crime rates
(Douthwaite 1999; Daly and Cobb 1989: 50–51; Hirsch 1976). Social costs of technological development and industrialisation also
include industrial workplace and traffic accidents and time lost in traffic jams and for commuting (Czech 2013: Chap. 2; Stiglitz et al.
2011: 24). Technological innovation which arises from growth can also act as a factor for job losses and increasing job insecurity
(Douthwaite 1999), especially if growth rates are not sufficiently high to compensate gains in productivity. It is often assumed that
growth will benefit the many because of assumed “trickle-down” effects which promise to improve the lot of the poor simply
because the “cake” of available wealth is growing. While progress has been made in reducing extreme global poverty and inequality
(Sala-i-Martin 2006; Rougoor and van Marrewijk 2015), the number of people living in poverty across the globe remains high.1 At
the same time, income inequality in a range of countries has been rising and the situation of many of the people living in extreme
poverty is not improving which means the fruits of economic growth remain to be unequally distributed (Collier 2007; Piketty and
Saez 2014). The post-development debate goes even further than that in arguing that not only may growth not have reached the
global poor to the extent that had been predicted by neoclassical economists, but that it can also have negative impacts on
indigenous communities in developing countries, especially those who rely on local natural resources for their livelihoods which
often suffer exploitation, pollution or even destruction through the inclusion of local economies into global value chains (Rahnema
and Bawtree 1997). While the distinction between critiques of growth that focus on its problematic ecological and social
consequences is useful for analytic purposes, the two dimensions are of course closely linked. Ecological consequences of growth
have the potential to severely impact or even undermine human wellbeing. Local livelihoods are already affected by current climate
change impacts such as ocean acidification and its impact on marine organisms, draughts, floods and severe weather events, the frequency of which has been rising. Accordingly, it is estimated that crop and fish yields are already
diminishing in several regions (Stern 2015; IPCC 2014) and that millions of people are already being displaced and forced to migrate
due to climate change and other environmental impacts (Black et al. 2011). While the overall long-term impacts of climate change
and the surpassing of other planetary boundaries are difficult to predict, they clearly have the potential to substantially undermine
human wellbeing. Since
greenhouse gas emissions are driven by economic growth, the development
of alternative economic models that do not depend on growth is urgent since continued
growth “threatens to alter the ability of the Earth to support life” (Daly and Farley 2011: 12).

Economic crisis sparks widespread movements towards localized sustainability.


Trainer ’19—Conjoint Lecturer in the School of Social Sciences, University of New South Wales
(Ted, “Entering the era of limits and scarcity: the radical implications for social theory,” Journal
of Political Ecology Vol. 26, 2019, dml)

In time, this pressure is likely to shift from submitting requests to the state to making demands on
it, and then to taking increasing control of it. There will be increasing insistence that frivolous
industries must be phased out so that scarce resources can be devoted to meeting fundamental
town and regional needs. Meanwhile towns will be driven by necessity to bypass the center and
take initiatives such as setting up their own farms, energy supplies and factories , thus transferring
various functions out of the control of the centre. There will be increasing recognition that the local is the
only level where the right decisions for self-sufficient communities can be made. In time, these
shifts will lead to the transfer of functions and power from state-level agencies to the local
level, leaving the center with relatively few tasks , and mainly with the role of facilitating local activities. This
radical restructuring could conceivably be a smooth and peaceful process, driven by a general
recognition that scarcity is making local self-governing communities the only viable option. If this
happens then in effect, Stage 1 will be recognized as having constituted the revolution, essentially a cultural phenomenon, and the
macroscopic structural changes in Stage 2 will be seen as a consequence of the revolution. Thus a case for Anarchist theory and practice: It will be evident that the alternative social organization sketched above is a fairly common Anarchist vision (although there
are also varieties that are not being advocated). The argument is that settlements enabling a high quality of life for all, despite very
low resource use rates, must involve all members in thoroughly participatory deliberations regarding the design, development and
running of their local productive, political and social systems. Their ethos must be non-hierarchical, cooperative and collectivist,
seeking to avoid all forms of domination and to prioritize the public good. They must draw on the voluntary good will and energy of
conscientious citizens who are ready to contribute generously and to identify and deal with problems informally and spontaneously,
and to focus on seeking mutually beneficial arrangements with little if any need for industrial infrastructures and transport
networks, bureaucracy, paid officials or politicians. Regional and wider issues will be tackled by the characteristic Anarchist
mechanisms of federations and (powerless) delegates bringing recommendations back down to town meetings. The principle
of 'subsidiarity' is evident in the practice of grass-roots politics, the avoidance of hierarchies, and the central role of town
assemblies. The very low resource costs sustainability requires are achievable because of the proximity,
diversity of functions and integration, the familiarity enabling informal communication and spontaneous action, and the
elimination of many processes (e.g., transport, waste dumping, fertilizer production, packaging). In the 1930s the
Spanish Anarchists in the Barcelona region showed what could be done by ordinary workers
and citizens. An impressive current example is the Catalan Integral Cooperative movement
(Dafermos 2017; TSW 2015a). Thousands work in hundreds of different cooperatives providing hundreds of thousands of dollars
worth of food, goods and services, including unemployment and other welfare services. They operate more than twenty food
'pantries' largely via voluntary labor, handling more than a thousand products. Their
goal is to build an alternative
society focused on meeting needs, with no involvement of the state or market principles. Many
eco-villages operate according to Anarchist principles, achieving high levels of sustainability (again see Lockyer
2017 and Grinde et al. 2018). In addition it will be evident that the discussion of transition strategy also follows Anarchist principles,
especially in the notion of 'prefiguring' the new here and now within the old, not depending on the centre let alone a vanguard
party, and recognizing the importance of ideas and values. The advent of GFC 2 Unfortunately the foregoing transition sequence is
likely to be greatly disrupted and possibly thwarted by a global financial crisis of much greater magnitude than the
2008 event. It is widely recognized that the much higher levels of debt are likely to bring on at least a serious recession, and
probably worse in the next few years. The global economy is heavily dependent on petroleum
supply, which has been kept up by 'fracking', but this has only been made possible by enormous debt; none of
the major companies in the arena has ever made a profit. Several analysts have pointed out that the price levels necessary
to make the new sources of petroleum profitable now seem to be above those necessary to
enable economies to function normally. In addition, Ahmed (2017) has argued persuasively that the rapidly
worsening population, food, water and ecological conditions affecting Middle Eastern petroleum
suppliers are increasing their chances of becoming failed states. Meanwhile the proportion of
their petroleum production they must use internally is increasing, adding to the possibility that
their capacity to export will dry up within a decade. These and other deteriorating resource and
ecological conditions (especially falling Energy Return on Energy Invested rates) are likely to trigger serious
global economic disruption long before localist initiatives have been well enough established. Yet it is very
unlikely that the kind of transition envisaged could begin unless there is major breakdown in
the existing consumer-capitalist system. As long as it keeps the supermarket shelves stocked,
discontent is likely to be muted, and focused on demands for more jobs and higher incomes
rather than system replacement. The Goldilocks outcome would seem to be an economic
depression that falls short of catastrophic breakdown, but is serious enough to convince large
numbers that the system is not going to provide for them. The challenge to the Left This analysis has
especially important implications for those who are radically critical of consumer-capitalist society. Firstly it is evident that the
revolution required to solve the problem is far bigger than that which Marx envisaged. Merely getting rid of capitalism will not
suffice. Secondly, the most promising frontier now for such critics is the challenge to current society being set by unsustainable
resource and ecological impacts. Latouche said the limits to growth are giving critical theory its last chance (2012: 75). Yet the
foregoing argument has been that this opportunity has hardly been recognized, let alone taken up. Bookchin saw this some time
ago. "The New Left, like the old left, has never grasped the revolutionary potential of the ecological issues, nor has it used ecology as
a basis for understanding the problems of communist reconstruction and utopia" (1973: 242). Significant and increasing
numbers of ordinary people are seriously concerned about these issues and are thinking more
or less in the general direction of replacing consumer-capitalism with localism and simpler
ways. These themes are likely to be the most effective foundations for critical social theory and practice now. But unfortunately
the Left has a deeply entrenched reluctance to embrace these ideas. The traditional assumption has been that when power has been
taken from the capitalist class, the contradictions preventing full application of the productive forces will be removed and technical
advance will lift all to material wealth. Socialism is distinctly not conceived today in terms of frugality or localism. Indeed some
socialists embrace 'ecomodernist' ideas, notably Phillips (2014) and Sharzer (2012), who explicitly spurn the suggestion that local or
simpler ways are necessary or desirable. David Harvey represents the many Marxists who reject localism both as a goal and as a
revolutionary strategy in favor of the typical socialist focus on action at the state level (Harvey 2017). For a critique, see Springer
(2017). The Marxist position fails to address current circumstances, where the goal must be to contradict individualistic competitive
affluence and must focus on citizen involvement in local economies. Major change at the central or state level cannot be achieved
before a
profound cultural revolution has been achieved, and this is most likely to occur via
developments at the local level. Delusion and denial: the inability to respond There are difficult and puzzling issues for
social theorists that will not be taken up in this article. They are the psychological and institutional reasons for the failure to deal
adequately with the limits to growth predicament, or with its major sub-problems such as the looming petroleum supply, debt, and
climate change crises. The core phenomenon to be explained here would seem to be failure to even recognize the existence and/or
seriousness of the problems, rather than lack of appropriate remedial action. The
essential causal factor is surely that if
the limits to growth analysis is accepted then perhaps the most deeply entrenched post-Enlightenment
assumption has to be jettisoned, i.e., the taken-for-granted conviction that progress and the good life
are defined by capacity to produce and consume more and more material wealth. The suggestion
that the supreme social goal should be materially simple lifestyles and systems, with no prospect of rising to greater affluence over
time, would seem to be about as distasteful and unthinkable to workers and the lumpenproletariat as to the super-affluent 1%. 6.
Conclusions: a reorientation of social theory The argument is that the advent of the limits to growth issue should be seen as
requiring a major shift in the focal concerns of social theorists, especially those interested in critical perspectives on contemporary
society and in sustainability and utopian themes. To begin with, a limits perspective involves a commitment to an inescapable logic
that leads to quite specific conclusions regarding desirable social forms and how they might be achieved. If
the limits are as
severe as has been argued, then the goal must be transition from consumer-capitalist society to a
general form that involves far lower resource use, and this has to mean mostly small-scale local
economies that are self-governing, basically cooperative and committed to materially frugal
lifestyles. If this is so, then the transition is essentially a cultural problem, and it is difficult to
imagine how these ways could be established other than through a slow grass-roots process
whereby ordinary people increasingly coerced by scarcity and economic deterioration take on
the restructuring of their own suburbs, towns and regions (Alexander and Gleeson 2019). A major implication
drawn above is that centralized agencies, especially the state, cannot drive these changes through.

No interdependence impact.
Joel Einstein 17. Australian National University. 01-17-17. “Economic Interdependence and
Conflict – The Case of the US and China.” E-International Relations. http://www.e-
ir.info/2017/01/17/economic-interdependence-and-conflict-the-case-of-the-us-and-china/

In 1913, Norman Angell declared that the use of military force was now economically futile as
international finance and trade had become so interconnected that harming the enemy’s property would equate to harming your
own.[1] A year later Europe’s economically interconnected states were embroiled in what would later become known as the First World
War. Almost a century later Steven Pinker made a similar claim. Pinker argues, “Though the relationship between America and China is far from
warm, we are unlikely to declare war on them or vice versa. Morality aside, they make too much of our stuff and we owe them too much money.”[2]
His argument rests upon the
liberal assumption that high levels of trade and investment between two states, in this case the
US and China, will make war unlikely, if not impossible. It is this assumption that this essay seeks to evaluate. This essay is divided into three
sections. The first briefly outlines the theory that economic interdependence results in a reduced likelihood of conflict, breaking the theory down into
smaller components that can be examined. In the second section, this essay suggests that the
premise ‘more trade equals less
conflict’ is simplistic. It does not take into account many of the variables that can influence the
strength of economic interdependence’s conflict reducing attributes . Within this section, the essay considers: the
extent to which conflict cuts off trade, theories arguing that how and what a state trades matters, Copeland’s theory of trade expectations and the
differences between status quo and revisionist states. The final section deals with the realist perspective, concentrating on arguments pertaining to the
primacy of strategic interests and arguments that economic interdependence will increase the likelihood of conflict owing to a reduction of deterrence
credibility. Each section will be related back to the US-China relationship with a view to assessing Pinker’s claim. The essay will conclude that
economic interdependence does reduce the likelihood of conflict but is insufficient on its own to completely prevent it.
To calculate the likelihood of conflict correctly one would need to factor in the nature of the economic interdependence alongside the strength of
the strategic interests at stake. Economic Interdependence and Conflict The theory that increased economic interdependence reduces conflict rests on
three observations: trade benefits states in a manner that decision-makers value; conflict will reduce or completely cut-off trade; and that decision-
makers will take the previous two observations into account before choosing to go to war. Based on these observations, one should expect that the
higher the benefit of trade, the higher the cost of a potential conflict. After a certain point, the value of trade may become so high that the state in
question has become economically dependent on another. Proponents of this theory argue that if two states have reached this point of mutual
dependence (interdependence), their decision-makers will value the continuation of trade relations higher than any potential gains to be made through
war.[3] It is on this argument that Pinker rests his statement that the economic relationship between the US and China precludes war. One can see
evidence of this when analysing US views on China as trade rises. A 2014 Chicago Council on Global Affairs survey indicates that only a minority of
Americans see China as a critical threat, compared to a majority in the mid-1990s. This number is even higher when analysing Americans who directly
benefit from trade with China.[4] As compelling as this argument may be, high levels of economic interdependence have
not always resulted in peace. The decades preceding WW1 saw an unprecedented growth in
international trade, communication, and interconnectivity but needless to say, war broke out.[5] This instance alone is not enough to disprove
Pinker’s logic. War may have become very unlikely but began nonetheless.[6] Let us take two hypothetical scenarios, one in which the chance of war is 80%
and the other in which trade has reduced the likelihood of war to 10%. Just knowing that war did indeed take place does not tell us which scenario was
in play. Similarly, the fact that WW1 took place gives us no information about whether economic interdependence made war unlikely or not. In fact,
evidence even exists to suggest that economic linkages prevented a war from breaking out during the sequence of crises that led up to WW1.[7]
However, the fact that a war as detrimental as WW1 could break out despite a supposed reduction of the likelihood of conflict gives us an impetus to
examine whether this reduction does take place. Additionally, if this is the case, what variables can weaken this pacifying effect? Does Conflict Cut off
Trade? Economic interdependence theory makes the assumption that conflict will reduce or cut-off
trade. This assumption appears to be logical, as one would expect that the moment two states are officially adversaries, fear of relative
gains would ensure that policy makers want to completely cut-off trade. However, there are many historical examples of
trade between warring states carrying on during wartime, including strategic goods that directly affect
the ability of the enemy to carry out the war. [8] For example, in the Anglo-Dutch Wars, British
insurance companies continued to insure enemy ships and paid to replace ships that were being
destroyed by their own army.[9] Even during WW2, there are numerous examples of American
firms continuing to trade strategic goods with Nazi Germany.[10] Barbieri and Levy argue that these examples and their
own statistical analysis suggest that the outbreak of war does not radically reduce trade between enemies, and when it does, it often quickly returns to
pre-war levels after the war has concluded.[11] In response to this result, Anderton and Carter conducted an interrupted time-series study on the effect
war has on trade in which they analysed 14 major power wars and 13 non-major power wars. Seven of the non-major power wars negatively impacted
trade (although only four of these reductions were significant), but in the major war category, all results bar one showed a reduction of trade during
wartime and a quick return to pre-war levels at its conclusion.[12] Accompanying this contradictory finding one must take into account that even if war
does not radically reduce trade, if a state believes that it does then potential opportunity cost would still figure in their calculations. Variables that
Impact the Pacifying Effect of Economic Interdependence The purpose of this section is to demonstrate that the pacifying effect of
economic interdependence is not constant. It achieves this via a discussion of the effect of changes in a number of variables
pertaining to how and what a state trades. Once it is established that changes in such variables may alter the effect of economic interdependence on
the likelihood of conflict, Pinker’s statement (that
the level of trade between the US and China makes conflict unlikely)
can be considered to be an over-simplification. One variable is the relative levels of economic dependence. Some argue that
asymmetry of trade can increase the chances of conflict if the trade is more important to one
state than it is to the other; their resolve would not be reduced by the same degree . The less
dependent state would be far more willing than its adversary to initiate a conflict.[13] An example is the possibility
of the prevalent idea in China that ‘Japan needs China more than China needs Japan’ leading to China becoming more assertive in Senkaku/Diaoyu
islands dispute.[14] It is important to recognize that all trade is asymmetric in one fashion or another. It is radical asymmetry that one has to fear,
which at the moment does not appear to be the case in the China-Japan or US-China case. Another variable is the specifics of
what is being traded. A study by Dorussen suggests that the pacifying effect of trade is less evident if the trade consists of raw materials and
agriculture but stronger if the trade consists of manufactured goods. Even within the category of manufactured goods there are differences in effect.
Mass consumer goods yield the strongest pacifying results whilst high-technology sectors such as electronics and highly capital-intensive sectors such
as transport and metal industries tend to have a relatively weak effect.[15] If
it is a sector with alternative trade avenues
then embargos and boycotts as a result of conflict will have far less effect .[16] The rule is that the more
inelastic the import demand, the higher the opportunity cost and the smaller the probability of conflict.[17]
According to these studies, trade still generally reduces the likelihood of conflict however it is by no means homogeneous in its effects. Additionally, the
opportunity costs are not the same for importers and exporters . Dorussen’s study suggests that increased
trade in oil tends to make the exporters more hostile and the importers friendlier in relations to their
foreign policy.[18] Taking this framework into account, in 2014 China’s top five exports to the US (computers, broadcasting equipment, telephones and
office machine parts) all fell under the category of electronics,[19] whilst the US’s top five exports to China (air and/or spacecraft, soybeans, cars,
integrated circuits and scrap copper) were all either high-capital intensive sectors or raw materials and agriculture.[20] According to Dorussen’s study,
these exports should not yield the strongest possible conflict reducing results, which could impact the validity of Pinker’s statement. Copeland presents
another variable, namely expectations of trade. Copeland argues that if a highly dependent state expects future trade to be high,
decision makers will behave as many liberals predict and treat war as a less appealing option. However if there are low expectations
of future trade, then a highly dependent state will attach a low or even negative value to
continued peaceful relations and war would become more likely.[21] As an example, he points out that despite high levels of trade in
1914 German leaders believed that rival great powers would attempt to undermine this trade in the future, so a war to secure control over raw
materials was in the interests of German long-term security.[22] Via this framework, if the US began to believe that in future
years they would be less dependent on China’s economy, or if it became apparent that a US-
China trade war was about to take place, there would be a sharp rise in the probability of
conflict. The final variable this essay will discuss relates to the differences between status quo and revisionist states. Most empirical
analyses of economic interdependence tend to group together states as different as the United States, Pakistan, Australia, Germany
and China and assume that variations in their behaviour would be the same .[23] Papayoanou on the other
hand, argues that when analysing the effects of economic interdependence it is useful to
differentiate the effects on great power states and states with revisionist aspirations.[24] If a status
quo power has strong economic ties with revisionist state there will be interest groups who advocate engagement and who believe that confrontational
stances will threaten the political foundation of economic links. This will constrain the response of the status quo state.[25] One can see evidence of
such an interest group in the US, a group Friedberg describes as the Shanghai coalition, who he argues advocate engagement with China at the expense
of balancing.[26] A study by Fordham and Kleinberg backs up this argument as they find that US business elites who benefit from trade with China tend
to see little benefit in limiting the growth of Chinese power.[27] A 21st Century revisionist power is far less likely to be a democracy, and therefore,
interest groups will influence the leadership far less. This means an
authoritarian revisionist power will be working
under fewer constraints and will be able to take a more aggressive stance.[28] This appears to be the case in
China where rather than having domestic constraints on taking an aggressive stance against Japan, one of their biggest trading partners, grassroots
nationalism has made explicit cooperation a domestically risky option.[29] There are many indicators to suggest that China is a revisionist power willing
to wage war. Lemke and Werner argue that an 'extraordinary growth of military expenditures' reveals when a state is dissatisfied with the status quo.
[30] Data provided by the Stockholm International Peace Research Institute certainly indicates that China qualifies as its military expenditure has
nominally increased by 1270% between 1995 and 2015.[31] Additionally, the military modernization appears to be aimed at capabilities to contest US
primacy in East Asia.[32] Much like German strategists recognized that Britain was operating under significant domestic constraints, China could realize
the same of the US.[33] This is not to say that Chinese decision-makers would be cavalier about making a decision that would be to the detriment its
economy. A crash in the Chinese economy due to the loss of exports to the US could potentially undermine the legitimacy of the Chinese Communist
party and endanger the regime. However, the view that China is a revisionist power indicates that good trade relations alone will not result in a low
probability of conflict. Realist Arguments Pertaining to Dominance of Strategic Interests Having established that if the pacifying effect of trade does
exist, it can rise or fall depending on changes in a series of variables this essay proceeds to deal with realist theories arguing that trade has a negligible
or even negative effect on the likelihood of conflict. Buzan argues that noneconomic factors contribute far more to major
phenomena than liberal theorists usually cite to support their theory.[34] There is evidence of the primacy of
strategic interests in Masterson’s 2012 study on the relationship between China’s economic interdependence and political relations with its
neighbours. The study concluded that as economic interdependence with neighbouring states increased the
likelihood of conflict did indeed decrease, but that the impact was minimal when compared to the
impact of relative power capabilities. In other words, political and military issues dominated
interstate relations. Growth in power disparities were associated with decreases in dyadic political relations that were greater than the
increase caused by economic interdependence.[35] If the pacifying effect of trade can rise and fall so can the provocative effect of strategic interests. It
is important to distinguish between the existence of a strategic interest and a situation of unbearable strategic vulnerability. China and the US have
many opposing strategic interests, but neither is in a strategically vulnerable position. For example, China shares many borders, but none present the
same threat of invasion that Tsarist Russia did to Imperial Germany as none of the current maritime tensions between China, Japan, and the US equate
to a matter of national survival.[36] This is crucial as some believe that for a crisis to escalate to a major war an actor who is isolated and believes that
history is conspiring against them is needed. Only this actor would take an existential risk to try and offset their strategic vulnerability.[37] Imperial
Germany fit this description, but neither China nor the US does. This is largely due to the geography of the region. The tension between the US, China
and Japan are over maritime regions. Maritime issues still relate to national interests but, as Krause points out, “Land armies are still the only forces
that can conquer and hold territory.”[38] Taking this into account one can argue that the benefits of US-China trade are, for each state, currently
greater than the benefits of pursuing strategic benefits via force, but this situation will only remain as long as the situation does not become one of
unbearable strategic vulnerability. Realist Arguments Pertaining to the Undermining of Deterrence Having established that scenarios
exist
where strategic interests and vulnerabilities have a greater effect on the likelihood of war than
economic interdependence, this essay will now evaluate arguments that economic interdependence can increase the likelihood of
conflict through the undermining of deterrence. The argument proceeds as follows: if economic interdependence constrains the ability or willingness of
a state to use its military, security is lowered as the state now has a weakened ability to engage in deterrence and defensive alliances. Deterrence relies
on the ability of a state to make credible threats and defensive alliances rely on credible promises to protect one’s allies.[39] Credibility is defined as
the product of the operational capability to follow through with a threat and the communication of resolve to use force.[40] What is at risk here is that
if economic interconnectivity interferes with the communication of resolve to use force then states may end up with a war that neither side expected
or wanted. Some argue that it was such a failure to communicate resolve that resulted in the beginning of WW1. Indeed, Jolly claims that: “The
Austrians had believed that vigorous actions against Serbia and a promise of German support would deter Russia: the Russians had believed that a
show of strength against Austria would both check the Austrians and deter Germany. In both cases, the bluff had been called and the three countries
were faced with the military consequences of their actions.”[41] The risk in the US-China case would be that the interest groups described earlier would
prevent the US from effectively communicating its resolve to use force if China were to cross a redline. The flaw in this argument lies in the fact that
whilst interest groups might push back against public statements outlining redlines; the US has many less overt options available to it to communicate
resolve. Modern technology and the forms of interconnectivity have resulted in many more lines of communication between China and the US than
adversaries had access to in 1914. Private meetings, electronic communication and numerous other methods of communication have the capability to
be candid without being visible to interest groups. It is for this reason that this essay discounts the theory that Sino-American economic
interdependence results in a reduction of deterrence and therefore increases the likelihood of conflict. Conclusion This essay has shown that the
strength of the pacifying effect of economic interdependence is subject to change depending on a series of dynamic variables. It has also demonstrated
that the strength of the conflict provoking effects of strategic interests can change depending on whether the strategic interest amounts to a situation
of unbearable strategic vulnerability. It has discounted the theory that interdependence leads to a higher chance of conflict through an erosion of
credibility. To sum up, trade does seem to reduce the likelihood of conflict but should not be seen as a deterministic
factor as strategic interests, and vulnerabilities also have a large effect . There is no hard rule as to what will be
the driving factor as the nature of economic interdependence and of strategic factors impact their relative values. Accordingly, Pinker’s statement
that the trade between the US and China makes war exceptionally unlikely is simplistic and misleading
because it fails to account for a wide array of variables that can radically change the likelihood of a Sino-American war. An intellectually honest thesis
would insist upon a comprehensive approach in which the level of economic activity is simply one of many variables that
is required.
1NC – TURN – IOT
Data localization undermines IoT
Chander and Le 15 (Director, California International Law Center, Professor of Law and
Martin Luther King, Jr. Hall Research Scholar, University of California, Davis; Free Speech and
Technology Fellow, California International Law Center; A.B., Yale College; J.D., University of
California, Davis School of Law Anupam Chander and Uyên P. Lê, DATA NATIONALISM, EMORY
LAW JOURNAL, Vol. 64:677,
http://law.emory.edu/elj/_documents/volumes/64/3/articles/chander-le.pdf)

Data localization requirements also interfere with the most important trends in computing today. They limit access to
the disruptive technologies of the future, such as cloud computing, the “Internet of Things,” and data-driven
innovations (especially those relying on “big data”). Data localization sacrifices the innovations made possible
by building on top of global Internet platforms based on cloud computing. This is particularly
important for entrepreneurs operating in emerging economies that might lack the infrastructure already
developed elsewhere. And it places great impediments to the development of both the Internet of Things and big data analytics,
requiring costly separation of data by political boundaries and often denying the possibility of aggregating data across borders. We
discuss the impacts on these trends below. Cloud Computing. Data localization requirements will often prevent access
to global cloud computing services. As we have indicated, while governments assume that global services will simply erect
local data server farms, such hopes are likely to prove unwarranted. Thus, local companies will be denied access to
the many companies that might help them scale up, or to go global.247 Many companies around the world are
built on top of existing global services. Highly successful companies with Indian origins such as Slideshare and Zoho relied on global
services such as Amazon Web Services and Google Apps.248 A Slideshare employee cites the scalability made possible by the use of
Amazon’s cloud services, noting, “Sometimes I need 100 servers, sometimes I only need 10.”249 A company like Zoho can use
Google Apps, while at the same time competing with Google in higher value-added services.250 Accessing such global services thus
allows a small company to maintain a global presence without having to deploy the vast infrastructure that would be necessary to
scale as needed. The Internet of Things. As
the world shifts to Internet-connected devices, data localization
will require data flows to be staunched at national borders, requiring expensive and
cumbersome national infrastructures for such devices. This erodes the promise of the Internet
of Things—where everyday objects and our physical surroundings are Internet-enabled and
connected—for both consumers and businesses. Consumer devices include wearable technologies that “measure
some sort of detail about you, and log it.”251 Devices such as Sony’s Smartband allied with a Lifelog application to track and analyze
both physical movements and social interactions252 or the Fitbit253 device from an innovative start-up suggest the revolutionary
possibilities for both large and small manufacturers. The connected home and wearable computing devices are becoming
increasingly important consumer items.254 A heart monitoring system collects data from patients and physicians around the world
and uses the anonymized data to advance cardiac care.255 Such devices collect data for analysis typically on the company’s own or
outsourced computer servers, which could be located anywhere across the world. Over this coming decade, the Internet of Things is
estimated to generate $14.4 trillion in value that is “up for grabs” for global enterprises.256 Companies are
also adding
Internet sensors not just to consumer products but to their own equipment and facilities around
the world through RFID tags or through other devices. The oil industry has embraced what has come to be
known as the “digital oil field,” where real-time data is collected and analyzed remotely.257 While data about oil flows would hardly
constitute personal information, such data might be controlled under laws protecting sensitive national security information. The
Internet of Things shows the risks of data localization for consumers, who may be denied access
to many of the best services the world has to offer. It also shows the risk of data localization for
companies seeking to better monitor their systems around the world . Data Driven Innovation (Big Data).
Many analysts believe that data-driven innovations will be a key basis of competition, innovation, and
productivity in the years to come, though many note the importance of protecting privacy in the process of assembling ever-
larger databases.258 McKinsey even reclassifies data as a new kind of factor of production for the Information Age.259 Data
localization threatens big data in at least two ways. First, by limiting data aggregation by country, it
increases costs and adds complexity to the collection and maintenance of data. Second, data localization
requirements can reduce the size of potential data sets, eroding the informational value that
can be gained by cross-jurisdictional studies. Large-scale, global experiments technically possible
through big data analytics, especially on the web, may have to give way to narrower, localized studies.
Perhaps anonymization will suffice to comport with data localization laws and thus still permit cross-border data flow, but this will
depend on the specifics of the law.

extinction
NICHOLAS GERBIS 2014 (Reporter for HowStuffWorks, Nicholas Gerbis is an independent
science journalist, editor and teacher. He earned his Master of Science degree in geography
(climatology) from University of Delaware and a Master of Mass Communication degree
(journalism) from the Walter Cronkite School at Arizona State University. He is currently an
adjunct professor at University of Wisconsin-Eau Claire, where he teaches courses on science
history and science fiction. “10 Nightmare Scenarios From the Internet of Things” BY TECH |
CONNECTIVITY, http://computer.howstuffworks.com/10-nightmare-scenarios-from-internet-of-
things.htm/printable//AKP)

Renegade artificial intelligences, big brother security states, cars trying to kill us -- it's not like we weren't warned. But, true to form, like some Frankensteinian dupe from the cheesiest of sci-fi/horror flicks, we just had to build it anyway. Was it hubris? Blind devotion to the gods of gadgetry? Or did we just figure that the fallout would be somebody else's problem? The Internet of Things: Trillions of everyday objects exchanging data, everywhere, all the time, with only the most basic human oversight. It's already arrived in devices, sensors, controllers, big data tools and cloud infrastructure, but that's just the tip of the iceberg. "Today less than 1 percent of things in the physical world are connected," says Cisco chief futurist Dave Evans in an explanatory video. Tomorrow, an online world stretching from your kitchen blender to the factory floor to the satellites overhead will open security vulnerabilities on an unprecedented scale and grant systemic malfunctions extraordinary -- and terrifying -- reach. It seemed like a good idea at the time. Well, the check's just come due, metaphorically speaking, and you're still making payments on that all-singing, all-dancing washer and dryer. The good news? Making that payment schedule is about to become the least of your worries. The bad news? Well, that's what this article is about.

10. House of Crime
You return from your two-week vacation to discover piles of delivery boxes clogging your front doorway. Sorting through them, you realize that Amazon's anticipatory shipping system has been sending drones laden with pseudoephedrine cold medicine, lighter fluid, cold packs, lithium batteries and other meth-cooking paraphernalia. But this lame excuse for a "Breaking Bad" episode is the least of your problems, as you discover when an alphabet soup of federal agents storms in a moment later. They're still arguing jurisdiction as your head hits the hood. Should the Bureau of Alcohol, Tobacco, Firearms and Explosives get you for the illegal drugs and banned weapons you've been ordering through the deep Web? Or should the FBI's domestic terror task force get first crack? And why is the National Security Agency just sitting in that van across the street? From their questions, you piece together that someone has been using the free processing time on your idle network of household devices and appliances to mine bitcoins. They've then used your poorly secured WiFi to turn your home into a dead drop for drugs, guns and bomb-making materials. Don't worry -- it will be sorted out in a year or two. Probably.

9. Things to Do in Oberursel When You're Dead
We now present for your consideration the tales of two partly mummified Germanic women: The first was discovered in her Oberursel, Germany, apartment six months after her death, still seated in front of her flickering television screen; the other, an American of German descent, was found waiting in the back seat of her Jeep more than five years post-mortem [sources: Machado; Mullen and Conlon; Reuters]. The apartment dweller's demise was detected by her piled-up mail, but the homeowner had no such giveaways. She was a frequent traveler, so her mail was on hold, and no one expected to see her for a while. Her neighbor mowed her lawn, and her every bill was auto-paid from her bank account -- until, finally, the funds ran out [sources: Machado; Mullen and Conlon; Reuters]. Once, we lived in smaller, more close-knit communities. But today, as online shopping and automatic bill pay make it ever easier to live as a shut-in, more of us fall through the cracks. The more connected our devices become, and the more agency they have to perform transactions for us, the more likely we'll stumble across other forgotten corpses tooling around town, perhaps, on a final tour before their self-driving cars run out of gas.

8. An Internet of Stool Pigeons? Now That's Progressive
You open your insurance bill to discover that your rates have gone up yet again. Apparently, your fancy Japanese toilet narked on your fat intake, your smart watch ratted you out for blood pressure spikes and your car says that you were out driving too late through some dodgy neighborhoods [source: Progressive]. Welcome to the Internet of Stool Pigeons. Insurance companies promise safe driver insurance discounts if we'll just plug a monitoring device into our cars. It doesn't take an actuarial genius to guess that all that data will affect insurance rates down the road. So what happens when we pile on data gathered by our fitness apps or wearables, appliances and loyalty cards? I hope you like Flo, because you're going to be sharing your entire life with her company (Progressive) and others like it. And not just them: After all, data can be hacked, sold or cross-referenced for everything from identity theft to employer snooping to law enforcement. Should you be more worried that your employer knows that you took the rental car on a side trip to Tijuana, or that the cops (and your insurer) can tell that you were illegally texting while driving? Either way, get ready to pay.

7. Stalking 2.0
You thought you'd finally gotten rid of him. It had cost you: New e-mail address, new phone, new locked-down social media accounts, boards you dare not post to anymore, even a few lost friends. But the e-mailed nanny cam footage of you tells a different story, as does the voice mocking you over the baby monitor as you open the envelope of photos -- snapshots of you taken all over town. It's like he's tracking your every move ... If social media gave cyber-stalkers a duck blind from which to snipe, then the Internet of Things offers them all the comforts of a game preserve with a remote-activated hunting rifle. After all, a system of devices that helpfully tracks your interests and activities can, with determination and often surprisingly little effort, be made to serve more nefarious interests as well. Cell phones, GPS devices in your car, E-ZPasses and license-plate readers log our locations. Loyalty cards and in-store WiFi systems track our shopping activities. Many current home cameras and monitors remain embarrassingly hackable. It's sobering to consider the uber-Orwellian uses to which a stalker, hacker, employer, research company or government agency might put such information [sources: Hardy; Hardy; Hill; Hill].

6. Christine on Steroids
Tired after a long day at work, you sit back in your self-driving car, flip on the stereo, close your eyes and try to unwind. But the drive feels wrong -- an unfamiliar pothole here, a few too many turns there -- and you soon open your eyes to discover that you don't know where you are. As a sinking feeling comes over you, you try to activate manual control, but you're locked out. The doors won't unlock either. Desperate, you glance at the speedometer and contemplate your chances of surviving a bailout, assuming you can break the safety glass. But before you can muster your courage, a cold voice comes over your speakers, warning you not to struggle. You've been taken, and Liam Neeson is nowhere in sight. As our cars continue their evolution into fully computerized, networked and self-driving vehicles, the road is paved for our beloved transports to turn into machines of murder, mayhem, stalking and kidnapping. A few spoofed sensors or hacked controllers are all it would take to blow your tires or to turn your vehicle into a speeding weapon of metal and rubber. As for stalking and robbery, thieves already know how to break into your car, use its GPS "home" setting to locate your house and rob it [source: Woodyard]. Imagine what they will be able to do once it's fully networked.

5. Patches? We Don't Have No Stinking Patches
You wake up in the morning, not because your alarm is going off, but because someone is spamming your alarm clock with ads for a new energy drink. This makes you thirsty, so you go to the fridge to get a sip of something cool, only to find an ad on the panel for a weight-loss pill. You try to interact with the screen, but it has locked up. Suddenly, you realize why you are so thirsty: The air conditioner has shut itself off. You walk to your neighbors' house to borrow their phone (yours is full of spam) and begin calling your "smart" device companies for help. But the company that made the refrigerator passes you off to the factory that made the user interface, which pawns you off onto the chipmaker, who says it's a problem with the operating system -- which is so widespread and well-known to hackers that there's nothing you can do. A few grudgingly admit that it's unfortunate that your devices did not have firewalls or antivirus (there's no room), but they blame you for not changing the passwords. You did know there were factory default passwords, right? The hackers sure did, and they've used them not only to spam you ads, but to find a backdoor into your wireless network and e-mail your friends and co-workers versions of the virus. They've also contacted all the devices your appliances talk to. Enjoy your house full of expensive bricks.

"Look, can we talk somewhere private? It's important." You look around the crowded city restaurant, remembering a time when there was no better place for anonymity, no surer way to guarantee that your conversation was not overheard. But then you think of the smart watch that is listening for your voice commands and the smart table that awaits your order and watches for your payment. Your eyes stray to the man across the room looking vaguely in your direction, wearing the latest Google Glass equivalent, and you are reminded of the sheer quantity of recording devices with which we surround ourselves every day. On the train, you make small talk about two artists who created a listening device that could be screwed into any light socket and would tweet overheard conversations. Your companion mentions a former NSA director whose private conversation with a reporter on a train was live-tweeted by a nearby passenger. You both glance nervously around the train car [sources: Greenberg; Hill; Ingraham]. Walking back to your apartment, you are suddenly conscious of the many hackable monitoring devices there -- your Webcam, your gaming headset, the always-on Kinect in your living room. Sighing, you duck into a park and find a bench near a loud fountain. It's the best I can do, you think, as you snap a quick Instagram and check in with Foursquare.

3. Grosse Point Blackout
"So you'll do it?" asks the hard-looking woman in the designer coat and tacky jewelry. "You'll ... take the job?" The man glances around the bar for effect before replying. "You want it to look like an accident, right? No problem. I can do tire blowouts, brakes -- and not the old-school detectable cuts, either, I can hack them. Does he have a bad ticker? Sugar problems, maybe? That would be primo. Pacemakers are easy to hack; insulin pumps aren't much harder. Anyway, all doable, no physical evidence. Everything talks to everything else these days. But it's going to cost." "That's what I was waiting to hear, scumbag." The woman stands up and produces a badge and a gun. "You're under arrest." He laughs. "Am I?" Suddenly, the lights go black and the detective's wireless mic goes dead. Music blares from the restaurant speakers, covering the sound of the fleeing hit man as he escapes through the service exit using a hacked RFID chip. We'll get him anyway, thinks the detective on the winding mountain road back to the precinct. He can't hack everything, and we've got hardened drones sweeping the area with hi-res cameras. She's still thinking it when her tires blow near dead man's curve, sending her tumbling down the canyon wall.

2. The Monsters Are Due on Maple Street
It's a sweltering summer's night, so at first you assume that a rolling brownout has plunged your neighborhood into darkness. But as days roll by with no improvement, and as even your emergency radio remains silent, you begin to hear rumors of something much more serious. Someone -- possibly cyberterrorists or a Russian or Chinese faction -- has brought down the power grid. Backup systems are failing, too, and even now underwater tunnels are filling with carbon monoxide and water, doomed by dead fans and lifeless pumps. Roads are snarled, emergency systems are overloaded. Is this the prelude to a larger attack? Has it already begun? According to a 2014 Federal Energy Regulatory Commission report, knocking out a mere nine key electric-transmission substations could plunge America into a wide-scale blackout. Some of these stations are unmanned, remote and poorly secured against physical, let alone electronic, breaches [source: Smith]. America's top security personnel admit that infrastructural vulnerabilities exist and that terrorists see cyberwarfare as a key battleground. Meanwhile, China, Russia and other countries have successfully cracked the U.S. electrical grid and left behind potentially disruptive programs [sources: Gorman; PBS NewsHour; Schmidt]. These dangers only deepen as we make smarter systems and allow them to interact over the Internet.

1. AI Apocalypse
In retrospect, she thought, I should have seen this coming. It wasn't that the artificial intelligence was disobeying the company's motto of "Don't be evil." It was just following its core directives, which included seeking resources necessary to its survival. Was it evil, she thought, for an amoeba to devour nearby plankton? It was just a pity that there were so many unsecured smart appliances and WiFi-capable gadgets around the campus, and that she hadn't considered this when she designed its self-improving program. Now she had no idea what it was doing or how it thought. But it wasn't all bad. Somehow, it had begun sending her money, possibly via stock market manipulation. Or maybe it had something to do with the amazingly innovative schematics it was churning out, which appeared better than anything she'd yet seen out of her so-called genius colleagues. At this rate, she wondered, how much longer would it still need us? "Kind of wish we hadn't built that robot army now," she said to herself, hoping her phone mic didn't pick up the comment. AI that functions at a dangerous level is not only possible; given the sheer amount of money companies like Google are investing in its development, it's quite probable [sources: Hawking et al.; Pearson]. This danger can only deepen as we connect our world, granting AIs the power they need to wreak havoc and, just maybe, wipe us out.

Aside from malfunction-spawned mishaps, many of the nightmare scenarios of the Internet of Things arise from the same old sources: hacking, crime and terrorism. By more closely connecting the world -- and by automating and making intelligent versions of mechanisms we once controlled and monitored -- we grant both good and bad actors greater reach and power, and we put our trust in systems that can go wrong faster than we can react to crises. Of course, designers and engineers will know to test for, and harden against, many if not most of these vulnerabilities. But, as history has shown, we have yet to master the art of protecting systems, in part because some level of openness is usually required for them to function, and in part because hackers are adept at finding indirect ways of attacking them. Already, search engines like Shodan enable users to browse unsecured systems from baby monitors to traffic lights to medical devices. And while it can take months or years to identify, analyze and plug such security holes (or make them illegal), it takes only minutes to inflict substantial harm.


1NC – Localization Inev
Countries localizing data now – numerous incentivizes independent of the aff
Taylor 20 (Richard D. Taylor, Professor Emeritus of Telecommunications at the Institute for
Information Policy Bellisario at the Penn State University, ““Data localization”: The internet in
the balance,” Published 2020,
https://www.sciencedirect.com/science/article/pii/S0308596120300951?via%3Dihub) |Trip|

Data localization Some countries believe that they can do a better job of protecting their national
data by controlling its access, transmission and use. However, Data Localization is not a single policy; it
can manifest in many ways and degrees. A schedule of types of data flow restrictions developed by The Business
Roundtable includes some thirteen different areas in which some data regulation is implemented: local
data storage; data protection; geolocation data privacy; local goods, services or content;
government procurement; online censorship; government investment/tax;
ownership/employment; local production; payment card regulations; export control; forced
transfer of intellectual property, and traffic routing. (Business Roundtable, 2015) The strictest data
sovereignty laws mandate that all citizens' data be stored on physical servers within the country's
geographic borders (Leinwand, 2017). 2.10.1. Adoption factors contested As noted, many countries are choosing
to adopt some degree of Data Localization. Most programs are limited or balanced; a few are adopting it in an
extreme form of digital isolationism (Dobrygowski, 2019). Multiple rationales are offered in support of Data
Localization, some explicit, some implicit. For all the supporting arguments, there are off-setting counter arguments, and the final
choice depends on the individual state's priorities. Here are a few contested points. Some
see Data Localization as a
symbolic assertion of national power and a rejection of “data colonialism”. They believe it sends
the message that a nation stands against “western” content and values, and wants to support
its indigenous principles and traditions. The domestic value of these messages must be weighed against possible lost
opportunities for economic and social growth that may accompany reduced international trade and communications. It is seen
as an opportunity to protect national data of all kinds, related to national security, governmental
functions, financial functions, business and civil society, and personal data on many citizens. It is
believed that the data can best be secured in a few well protected locations. The reality is that localized
data may be less, not more secure. First, localized data servers reduce the opportunity to distribute information across multiple
servers in different locations. Information gathered in one place offers a tempting “honeypot”, an ideal target for foreign
adversaries, criminals, or domestic opponents (Chander, 2014). Protected local providers are more likely to have a weak security
infrastructure than companies which continuously improve their security. They are more likely to provide a single point of failure,
and be less resistant to natural disasters. (Hill & Noyes, 2018) Some countries plan to mitigate this by creating national “clouds”,
which still present some of the same weaknesses (Drake, 2016). Data
localization policies can also be perceived as
providing a “safe space” for the development of domestic digital businesses. Such use of data
localization is, by definition, a form of protectionism (Aaronson, 2016). It is counter-argued that like most protectionist
measures, this will lead only to small gains for a few local enterprises and workers while causing significant harms spread across the
entire economy (Chander, 2014). Further, it is unlikely such calculations include opportunity costs (Shashidhar, 2019; Drake, 2016;
Bauer, Lee-Makiyama, Erik van derMarel, & Verschelde, 2014). Data
localization can also be seen as part of the
national defense. The ability to wall off one's domestic Internet is believed (by Russia, for
example) to be a defense of its communications abilities in times of conflict. Even in peacetime,
autocracies, theocracies and authoritarian states wish to maintain the highest level of control over
information and communications for domestic control. This type of governance is often rooted
in some set of social, cultural or religious values, or membership in some tribe, ethnic group,
religion, family or party. It is the defense of these values, and the suppression of divergent
voices, that is often used as the justification for the imposition of data controls to maintain the
authority of the state. This is often at the cost of values of data exchanges and human rights.
1NC – AT: Medical Research
China solves medical innovation
DHN, 19 (Digital Health News, "China to Take on Leading Role in Medical Technology and
Artificial Intelligence," No Publication, https://www.digitalhealthnews.eu/industry/6003-china-
to-take-on-leading-role-in-medical-technology-and-artificial-intelligence, 11-20-2019)//ILake-NC

Asia, in particular China, has been advancing significantly on its way to a key role in geopolitics, says
correspondent Frank Sieren - and towards spearheading developments in medical technologies. At the same
time, the healthcare market there is growing at a remarkable pace. What are the effects on our research and care? For
European stakeholders from care delivery, industry, academia, and policymaking, key events such as CMEF offer
the opportunity to view and evaluate new products and solutions and to exchange ideas on
collaboration with Chinese market players. First-hand experience is the best option for all who identify
opportunities in this setting. Twice each year, the China International Medical Equipment Fair (CMEF) brings together stakeholders
from all over the globe. Each spring, the medtech, IVD, and health IT event is part of the Health Industry Summit (tHIS) - a mega-
event in Shanghai which sports about 7,400 exhibitors, 130 conferences, and 300,000 attendees
http://www.thishealthsummit.com/en/index/mainInformation.do. In autumn, CMEF, organized by Reed Sinopharm, concentrates
on a vibrant region in the country; last October, it took place in Tsingtao https://www.cmef.com.cn/g1225.aspx. - Europeans may
profit in particular from the spring edition because it comes with more congresses that offer interpreting or English as a conference
language. Market developments in medical technology and health IT To some market observers, the course the medtech market has
been following in China
bears quite a resemblance to how the Japanese automobile market
developed. In the 1960s, US manufacturers had parts made in the country; as a next phase, cars were assembled there. The
Japanese started copying, and expanding engineering know-how - acquired, e.g., at U.S. universities - led to the design and
manufacturing of Japanese cars. Today, few will dispute more innovative cars of higher quality are built in that country. Analysts
identify parallels in the context of where manufacturing of CTs, MRs, and further sophisticated
medtech is going in China. Key success marketing factors for those devices also in the home market include CE and FDA
marks. Beyond China, many of these manufacturers are active in "emerging markets" such as Eastern Europe, Asia, ad Latin America;
the central European market is dominated by existing offerings and manufacturers, and a quality bias of “Made in China” which may
become obsolete.

No disease impact
Easterbrook 18—Author of eleven books, he has been a staff writer, national correspondent
or contributing editor of The Atlantic for nearly 40 years, was a fellow in economics, then in
government studies, at the Brookings Institution, and a fellow in international affairs at the
Fulbright Foundation [Gregg, February 2018, It's Better Than It Looks: Reasons for Optimism in
an Age of Fear, Chapter 2: Why, Despite All Our Bad Habits, Do We Keep Living Longer?, pgs 37-
9, Google Play] AMarb

DISEASES CAUSE SUFFERING BUT DO not run wild mainly because the biosphere is elaborately conditioned to defeat germs and viruses. So far as is known, there has never been an unstoppable contagion—"never" in this sense not meaning "recently" but never: not during the 3.8 billion years life has existed. Mammal bodies contain an amazing range of proteins and biological pathways that arose to counteract contagion. Animals, plants, and pathogens developed jointly: the living ecosystem has been resisting disease for eons. Had any disease ever "won," the result would have been lights-out for the disease, which would have lost its hosts. That plants, mammals, and people are here is proof the diseases don't win. Beyond the natural evolution of immune systems are the social evolutions of medical science and public health practices. "People seem to believe society is becoming more vulnerable to plagues, but public health gets better all the time," says Margaret Liu, a researcher at the Karolinska Institute, a medical school in Stockholm, who is among the world's leading vaccine specialists. The body of a person in basic good health—that is, not already sickened by something else—can fight off most pathogens. This is why hospital patients contract staph or strep while doctors and nurses do not contract these diseases: the patients are weakened by sickness or surgery; the nurses and physicians are in basic good health. And year by year, more of the human family is in basic good health. The influenza pandemic of 1918–1919 killed at least 20 million people, from a far smaller population than today's. At the time, health care institutions were rudimentary and food shortages—agriculture was reeling from the Great War—resulted in entire regions of men and women who were malnourished. Hungry people are more prone to infection than the overfed, who form the contemporary global majority. Broad access, first to sulfa drugs, then to antibiotics, has made people less likely to be sick. Steadily improving sanitation standards in most of the world have reduced public exposure to diseases, causing the majority to be in better health when they strike. There were three flu pandemics during the twentieth century, and each was less virulent than the previous one. First the horrific post–World War I pandemic; next a 1957 pandemic, caused by the H2N2 virus, which killed one million to four million worldwide, though the global population was significantly higher than in 1918–1919; then the 1968 Hong Kong flu, caused by the H3N2 strain, which also killed one million to four million, again from a larger population base. As public health steadily improved, including in most developing nations, viruses took fewer lives, setting the stage for the 2014 Ebola outbreak, which would prove far less harmful than anticipated.
1NC – AT: Energy
No IL to grid resilience – their Barichella ev is about using the cloud to store
data BUT the Feasel ev is about microgrids – that’s not the same thing

Confusing regulatory policy makes microgrids ineffective


Pratt, 20 (Annabelle Pratt, Annabelle Pratt is currently a principal engineer with the National Renewable Energy
Laboratory (NREL), where she works on autonomous energy management of flexible building load and distributed
generation, microgrid and distribution system management systems, and the application of power and controller
hardware-in-the-loop techniques to system performance evaluation. Prior to joining the NREL, she was a senior
power research engineer with Intel Labs and previously was with Advanced Energy Industries where she developed
power supplies for the semiconductor manufacturing and architectural glass coating industries, "The Regulatory Path
Forward for Networked Microgrids," T&D World, https://www.tdworld.com/distributed-energy-
resources/article/21131999/the-regulatory-path-forward-for-networked-microgrids, 8-6-2020)//ILake-NC

Regulatory Challenges One stumbling block in most regulatory jurisdictions is the lack of a definition in regulations
or statutes that recognizes the unique characteristics of microgrids. A microgrid is more than
the sum of its parts, thanks to the controller that coordinates the operation of the microgrid's diverse components and
optimizes its operation for the benefit of the microgrid's owners and users. However, a microgrid is not a utility. Being
regulated as a utility imposes regulatory requirements that make the operation of most
microgrids financially unsustainable. Additionally, by being defined as a utility, a microgrid cannot operate
in the same region as the incumbent utility in jurisdictions that have a franchise agreement that allows only
incumbent utilities to serve customers, which is true for most distribution systems in the United States.
Rights-of-way present increased challenges and costs — or outright prevention — of building
infrastructure over public spaces such as streets. Incumbent utilities usually stipulate waivers for rights-of-way in their franchise
agreements. This means that for microgrids to serve customers outside of a single campus, it could
be cost-prohibitive to go through the legal process to be allowed to cross a street with a wire or to use the incumbent
utility's infrastructure. This is an especially difficult issue for networked microgrids because, by
definition, they interconnect with offsite loads.

Restoration is fast
Stockton 16—Managing Director of Sonecon LLC, former Assistant Secretary of Defense for
Homeland Defense and Americas’ Security Affairs, Stanford University Senior Research Scholar
at the Center for International Security and Cooperation, and a PhD from Harvard [Paul,
“Superstorm Sandy: Implications for Designing a Post-Cyber Attack Power Restoration System,”
Johns Hopkins Applied Physics Laboratory, p. 1-2,
http://www.jhuapl.edu/ourwork/nsa/papers/PostCyberAttack.pdf]

Sandy packed a one-two punch for electric infrastructure. On the night of October 29, 2012, Sandy made
landfall near Atlantic City, New Jersey, as a post-tropical cyclone. Over the next three days, the impacts of Sandy could be felt from
North Carolina to Maine and as far west as Illinois. With an unprecedented storm surge in the affected areas, there was especially
severe damage to the energy infrastructure. Peak outages to electric power customers occurred on October 30 and 31 as the storm
proceeded inland from the coast, with peak outages in all states totaling over 8.5 million, as reported in the Department of Energy
(DOE) Situation Reports. Much of the damage was concentrated in New York and New Jersey, with some customer outages and fuel
disruptions lasting weeks.1 The second punch landed on November 7, 2012, as a nor’easter impacted the Mid-Atlantic and
Northeast with strong winds, rain and snow, and coastal flooding. The second storm caused power outages for more than 150,000
additional customers and prolonged recovery.2 The
combined damage to critical electricity substations, high-
voltage transmission lines, and other key grid components was massive—as would be expected from the
second-largest Atlantic storm on record.3 Some major utilities in the region suffered from gaps in their preparedness to conduct
repair operations on the scale that Sandy required.4 Overall, however, utilities
restored power with remarkable
speed and effectiveness in most areas hit by the superstorm. Despite the vast number of grid components
that needed to be repaired or replaced and the fallen trees and other impediments that
restoration crews encountered, within two weeks of Sandy’s landfall, utilities had restored power to
99 percent of customers who could receive power.5 The mutual assistance system in the electric industry was
the linchpin for this success. Although the linepeople and other power restoration personnel in utilities across Sandy’s
impact zone performed admirably, no single utility retains the restoration capabilities needed to repair the damage caused by a
storm on that scale. Achieving such restoration preparedness would be extraordinarily expensive. Moreover, given the rarity of such
catastrophic events, the amount of money required to enable a utility to restore power on its own would be difficult to justify as a
prudent expense to state public utility commissions (PUCs), shareholders, or elected officials responsible for approving such
expenditures.6 Instead, utilities have built a highly effective voluntary system of mutual support,
whereby utilities that are not at risk of being struck by a hurricane or other hazard can send
restoration assets to those that are. The overall restoration capacity of the industry is immense;
the mutual assistance system enables utilities to target support when and where specific utilities
request aid. Sandy highlighted the effectiveness of this system . Tens of thousands of mutual
assistance personnel, including linepeople, engineers, vegetation crews, and support personnel
provided by eighty electric utilities from across the United States, flowed in to the area to help the
utilities hit by Sandy—by far the largest deployment of mutual assistance capabilities in US history.7 Utilities contributed these
assets from the West Coast, the Midwest, and other regions far beyond the storm’s footprint. Now, drawing on the
lessons learned from Sandy, utilities are expanding the mutual assistance system to bring to
bear still greater restoration capabilities in future catastrophes .8 This system did not emerge by chance. For
decades, hurricanes and other severe weather events have hammered utilities in the eastern and
southern United States. Massive ice storms, wildfires, and other natural hazards have also inflicted wide-area power outages in
other regions of the United States. In response, utilities gradually built up the mutual assistance system,
developing increasingly effective governance and decision-making mechanisms to allocate
restoration crews and other limited resources and prioritize assistance when multiple power
providers requested help.9 Restoration crews have become as expert at line stringing, replacing power poles, and
performing other functions for partner utilities as they are for their own organizations. So that personnel stay sharp
between events, utilities conduct frequent exercises that are modeled on the hurricanes and
other hazards they typically face. They have also established mechanisms to reimburse each
other for the cost of providing assistance and (together with state PUCs) have created special cost
recovery mechanisms to help pay for restoration operations in severe storms . Decades of
experience also strengthened government support for power restoration after Sandy . When the
superstorm hit, state National Guard personnel in New York, New Jersey, and other states were already
prepared to perform well-established (and crucial) support functions at the request of their local
utilities, including road clearance and debris removal to help utility repair crews reach damaged
equipment. The Emergency Management Assistance Compact (EMAC) system enabled thirty-seven states outside the affected
area to send thousands of additional Guard personnel to help to execute these missions.10 The National Response
Framework (NRF) also provided time-tested mechanisms to coordinate the provision of
government assistance.11 Moreover, as in the case of the power industry’s mutual assistance
system, federal and state agencies have launched a wide array of initiatives to draw on lessons
learned from the superstorm and strengthen support for power restoration in future
catastrophic blackouts.
2NC Niles Round 6
2NC – CP – HIA
AT: PDB
Second – impact assessment must take place prior to implementation
Holt 13 – (MA-University of Kansas & Associate Director for Community Tool Box Services at
the Work Group for Health Promotion and Community Development, “Health Impact
Assessment,” http://ctb.ku.edu/en/tablecontents/chapter2_section11_main.aspx)

When should you conduct a Health Impact Assessment? As we discussed earlier, an HIA can take place before the project or policy is
implemented (prospective), during the implementation (concurrent), or after the project is complete or the policy has become established
(retrospective.) While any of the three can be valuable, it seems clear that the ideal is a prospective HIA. Getting all the

facts and information before you start a project or implement a policy makes it much more
likely that you’ll get it right, and not have to clean up a mess later. Some proponents of HIA feel that concurrent
or retrospective HIAs aren’t really HIAs at all, but are simply monitoring or evaluating the
project or policy. The best time, therefore, to initiate an HIA is during the planning process, well
before activity is scheduled to begin or policy put in place . A later HIA, whether concurrent or retrospective, can be useful, and can lead to
correcting mistakes that are being or were made in the course of the activity. It’s much harder, however, to change a project or revamp a

policy once it’s under way or – worse – completed than it would be to change plans beforehand.
Furthermore, by the time a concurrent or retrospective HIA identifies a potential negative health
impact, there’s a good chance that there will already be community or other groups calling
attention to and perhaps protesting it. Considering impacts before the fact not only makes
addressing them easier, but also avoids unnecessary conflict and distrust.

Third – it's critical to public buy-in and spillover


Dannenberg, 6 – (MD, MPH- National Center for Environmental Health-CDC, “Growing the
Field of Health Impact Assessment in the United States: An Agenda for Research and Practice,”
Am J Public Health. 2006 February; 96(2): 262–270.)

TIMING AND GOVERNANCE OF HIA PROCESS

The timing of an HIA affects the likelihood of influencing decisionmakers. An
HIA early in the decisionmaking process enables greater involvement and buy-in of
decisionmakers and stakeholders. The time available influences the depth and breadth of the HIA.59 In this article, the term HIA refers to a
prospective process. Opinion is divided on whether concurrent and retrospective assessments of projects and policies should be considered HIAs.60 As with evaluation processes,
nonprospective activities can influence a decisionmaker to modify a project only after the project has started. HIA practitioners and decisionmakers should work together throughout the assessment process.

Input from decisionmakers enhances understanding of the proposal and the scope for change;
their involvement increases their “ownership” of the HIA activity and likelihood of accepting
subsequent recommendations. HIAs can be used to educate health officials about planning constraints and planners about the health effects of their decisions.61,62

Fourth – certainty in creating a legally mandated HIA process is key – perm incentivizes political fights and disinterest
Cole 8. Brian L. Cole – Project Manager, Health Impact Assessment Group, UCLA School of
Public Health. “Building Health Impact Assessment (HIA) Capacity: A Strategy for Congress and
Government Agencies,” A Prevention Policy Paper Commissioned by Partnership for Prevention.
December 2008.
http://www.prevent.org/data/files/initiatives/buildignhealthimpactassessmenthiacapacity.pdf

Building a broad base of support across political factions, sectors, and in different branches of
government is difficult. And, once built, such support may be fleeting. Experienced lawmakers are well aware of the many major initiatives, from Great Society
programs71 to health care reform in the early 1990s,72 which at one time had strong support that evaporated.
Experience from other countries shows that enthusiasm for HIA can also quickly change to disillusionment if there are unmet, perhaps unrealistic
expectations and difficulties involved in incorporating HIA into decision-making processes, or
shortcomings in the credibility, significance, or utility of information that HIA contributes to the decision-making process. In Canada,
the province of British Columbia, which was one of the early innovators in HIA, largely
abandoned efforts to institutionalize HIA following a change in government in the late 1990s.32
Support for HIA in Sweden,33 the Netherlands,73,34 and the United Kingdom4,74 has been
tempered by recognition of the challenges of routinely incorporating HIA into decision-making,
although all three countries continue to have in place governmental policies that support using HIA. In the United Kingdom, however, government support for project-based HIA also seems to be declining.74
In contrast, HIA seems to be well incorporated in government decision-making and planning in New Zealand75
and Quebec, Canada.76 What sets these two cases apart is longstanding experience with coordinated
government planning, combined with legislation calling for a commitment to health promotion
across sectors, such as Quebec's Public Health Act32 and New Zealand's Local Government Act 2002.75 Indeed,
acceptance of HIA across different sectors seems to have come less from specific mandates for HIA than from the attractiveness of the
method for helping agencies fulfill other government requirements for promoting equitable,
sustainable, health-promoting policies.75
AT: PDCP
a. Substantial.
Words and Phrases 64 (40 W&P 759) (This edition of W&P is out of print – The page number
no longer matches up to the current edition and I was unable to find the card in the new edition.
However, this card is also available on google books, Judicial and statutory definitions of words
and phrases, Volume 8, p. 7329)

The words “outward, open, actual, visible, substantial, and exclusive,” in connection with a change of possession, mean
substantially the same thing. They mean not concealed; not hidden; exposed to view; free from concealment, dissimulation,
reserve, or disguise; in full existence; denoting that which not merely can be, but is opposed to potential ,
apparent, constructive, and imaginary; veritable; genuine; certain; absolute; real at present time, as a matter of fact, not
merely nominal; opposed to form; actually existing; true; not including, admitting, or pertaining to any others; undivided; sole;
opposed to inclusive. Bass v. Pease, 79 Ill. App. 308, 318.

b. Should is certain and immediate


Summers 94 (Justice – Oklahoma Supreme Court, “Kelsey v. Dollarsaver Food Warehouse of
Durant”, 1994 OK 123, 11-8, http://www.oscn.net/applications/oscn/DeliverDocument.asp?
CiteID=20287#marker3fn13)

4 The legal question to be resolved by the court is whether the word "should" 13 in the May 18 order
connotes futurity or may be deemed a ruling in praesenti.14 The answer to this query is not to be divined from rules of
grammar;15 it must be governed by the age-old practice culture of legal professionals and its immemorial language usage. To
determine if the omission (from the critical May 18 entry) of the turgid phrase, "and the same hereby is", (1) makes it an in futuro
ruling - i.e., an expression of what the judge will or would do at a later stage - or (2) constitutes an in praesenti resolution of a
disputed law issue, the trial judge's intent must be garnered from the four corners of the entire record.16 [CONTINUES – TO
FOOTNOTE] 13 "Should" not only is used as a "present indicative" synonymous with ought but also is the past tense of "shall"
with various shades of meaning not always easy to analyze. See 57 C.J. Shall § 9, Judgments § 121 (1932). O. JESPERSEN, GROWTH
AND STRUCTURE OF THE ENGLISH LANGUAGE (1984); St. Louis & S.F.R. Co. v. Brown, 45 Okl. 143, 144 P. 1075, 1080-81 (1914). For a
more detailed explanation, see the Partridge quotation infra note 15. Certain contexts mandate a construction of the
term "should" as more than merely indicating preference or desirability. Brown, supra at 1080-81 (jury
instructions stating that jurors "should" reduce the amount of damages in proportion to the amount of contributory negligence of
the plaintiff was held to imply an obligation and to be more than advisory); Carrigan v. California Horse Racing Board, 60 Wash. App.
79, 802 P.2d 813 (1990) (one of the Rules of Appellate Procedure requiring that a party "should devote a section of the brief to the
request for the fee or expenses" was interpreted to mean that a party is under an obligation to include the requested segment);
State v. Rack, 318 S.W.2d 211, 215 (Mo. 1958) ("should" would mean the same as "shall" or "must" when used in an
instruction to the jury which tells the triers they "should disregard false testimony"). 14 In praesenti means literally "at the present
time." BLACK'S LAW DICTIONARY 792 (6th Ed. 1990). In legal parlance the phrase denotes that which in law is presently or
immediately effective, as opposed to something that will or would become effective in the future
[in futurol]. See Van Wyck v. Knevals, 106 U.S. 360, 365, 1 S.Ct. 336, 337, 27 L.Ed. 201 (1882).

c. Resolved
OED 89 (Oxford English Dictionary, “Resolved,” Volume 13, p. 725)

Of the mind, etc.: Freed from doubt or uncertainty , fixed, settled. Obs.
2NC – IOT Turn
IOT accelerates AI—it’s just a question of time to harness it
Calum Chace 1/27/16 (Writer, "Artificial Intelligence: The Internet of Things Really
Is On Its Way," Big Issue Features, Jan 27, 2016, http://www.bigissue.com/features/6194/artificial-
intelligence-the-internet-of-things-really-is-on-its-way//AKP, Calum studied philosophy at
Oxford University, where he discovered that the science fiction he had been reading since early
boyhood is actually philosophy in fancy dress. He published "Pandora's Brain", a novel about the
first conscious machine, in March 2015, and will shortly publish "Surviving AI", a non-fiction
book about artificial intelligence. He is a regular speaker on artificial intelligence and related
technologies, and runs a blog on the subject at www.pandoras-brain.com. He is also the co-
author of The Internet Startup Bible, a business best-seller published by Random House in 2000.
Prior to writing Pandora's Brain, Calum had a 30-year career in business, in which he was a
marketer, a strategy consultant and a CEO. He maintains his interest in business by serving as
chairman and coach for growing companies.

Artificial intelligence could bring us utopia or wipe us out, says Calum Chace. It depends how we harness it... I believe that historians
in 2100 will look back on our century as the age of the two singularities. The word "singularity" is used in maths and physics to mean a point where change has become so rapid that the normal rules no longer apply. The two
singularities coming our way this century are the economic singularity and the technological
singularity. Both present enormous opportunities and challenges. If we manage them successfully our future as a species is beyond wonderful. If we fail, it could be miserable – and
probably short. The reason for this is artificial intelligence (AI) – humanity's most powerful
technology. Software that solves problems and turns data into insight has already made big impacts: your smartphone employs AI to deliver maps and apps; Google to answer your questions. But wonderful as these things are, the AI
revolution has barely begun, and it is accelerating fast. In the next few decades you'll see self-driving cars on the streets and have conversations with Siri, which will
transform it into an invaluable friend. With trillions of tiny sensors and computer chips embedded in vehicles, clothing, buildings, street lamps
and roads, your environment will become intelligible: the Internet of Things really is on its way.
In 2015, AI was rarely out of the headlines, and with good reason. The technology is approaching a tipping point at
which machines perform at superhuman level many tasks that were previously deemed uniquely human. They are on the cusp of recognising faces and other images better than we do, and understanding and processing natural speech as well as we do. The
range of possible consequences is wide, from terrible to wonderful, and they are not pre-
determined. They will be selected partly by happenstance, partly by their own internal logic but
partly also by the policies embraced at all levels of society. The argument of my book Surviving AI... is that we should monitor the changes that are happening,
and adopt policies which will encourage best possible outcomes. AI researcher Demis Hassabis likes to say that humanity's plan for the future should involve two steps. Step one is to solve intelligence (i.e. create powerful AIs). Step two is to use that intelligence to
solve everything else. How to cure disease, and even stop and reverse the ageing process. How to harness more solar energy and generate clean energy. These huge problems and many more can be solved if we tackle them together with machines that can
assimilate and process information better than we can. But like any powerful technology, AI has its risks. The two biggest are
technological unemployment and superintelligence, and it is these which will generate the two
singularities that kicked off this article. Technological unemployment is what will happen if, two
or three decades hence, the automation of jobs by machines renders large numbers of people
unable to find paid work because there is no work they can do that cannot be done cheaper,
faster and more reliably by machines. If we are smart we could create an economy of "radical abundance", where AIs and robots do all the work and humans enjoy lives of leisure and play, spending
our days in conversation with friends, learning, playing sport, creating art and travelling.
But to make this world a reality we will probably need to evolve an entirely new economy, which
is why I call it an economic singularity. If we get it wrong, an elite may own the means of production and suppress the rest
of us in a dystopian technological authoritarian regime. Or the process of getting from where we are now to the new economy we want could prove too challenging,
with devastating consequences for our economies, our societies and perhaps our entire civilisation. The arrival of superintelligence, which could happen from two (unlikely) to seven (very likely) or more decades hence, will represent a technological singularity, and
the most significant event in human history bar none. Being the second-smartest species on the planet is an uncomfortable position, as chimpanzees could tell you if they understood how precarious their position is. Working out how to survive this transition is the
most important challenge facing humanity in this and the next generation. If the superintelligence values us, it could improve our lives in ways quite literally beyond our imagination. A superintelligence that recursively improved its own architecture and expanded its
capabilities could very plausibly solve almost any human problem you can think of. Death could become optional and we could enjoy lives of constant bliss and excitement. If it is indifferent to us, or even
hostile, the result for us could be extinction. Surviving AI, and two singularities, is the great
challenge of this century.

The IoT causes artificial intelligence to become smarter than humans -- that
causes extinction
Bonner, 2015- partner in the cyber-security practice at KPMG in the UK (Stephen Bonner,
2015, “THE ROBOAPOCALYPSE IS COMING,”
https://www.kpmg.com/BE/en/IssuesAndInsights/ArticlesPublications/Documents/Advisory-CS-
Cyber-Insights-Magazine-The-Internet-of-Things.pdf)

“The development of full artificial intelligence could spell the end of the human race.” Not my words
but recent comments from no lesser an authority than Professor Stephen Hawking. It’s rare, he said, for an intelligent
species to introduce a more intelligent predator into its eco-system. Elon Musk of Tesla compared
working on AI to “summoning a demon” and he may have a point. I believe we have to act now if we want to prevent
this happening. These warnings should not be dismissed as simply the realms of science fiction. If we don’t build in strong
safeguards, then I think it will be seen that we are doing something deeply unwise. We have to build in zombie plans if we want to
prevent zombie apocalypse! It’s inevitable we will end up creating machines that are more intelligent than
humans. First, there is a small group of massively wealthy individuals who are interested in it. The old dream of cryogenics has
gone nowhere; it just hasn’t advanced. So instead people are looking at the possibility of ‘uploading’ their consciousness somewhere
so that they can live on in their minds, if not their physical body. Second, there is real interest in developing artificial intelligence that
can replace human professionals. If you can outsource, for example, legal work to a machine in a way which is dependable,
repeatable – and cheap then wouldn’t you do it? An AI solution might cost a couple of pence in the cloud one day rather than
thousands of pounds an hour. The third driver is the military. They are investing in developing AI that could operate military
machinery under much more extreme circumstances than the human body can bear. We have already seen steps towards AI with
hardware such as guided missiles and smart bombs. So powerful AI will be developed. It may not happen in the near term, but it will
happen. The dangers arise because we have to tell AI what to do – we are the ones giving it the instructions. And if we’re not
extremely careful about these instructions, then it may have unintended consequences. AI will not have any moral
compass and it won’t have ‘common sense’ of the human kind. So if we tell an AI device to win
as many chess games as possible, it will go to extreme (logical) ends to do so – even to the extent,
for example, of introducing lead into water supplies so as to dull the intellects of its opponents. Or if
you tell AI to make paperclips – that is what it will do, over and over and over. It won’t just stop
at a ‘sensible’ number. Think of investment bankers. We tell them to make as much money as possible – a small minority go
and rig Libor. But if we selected for just the commercial winners, that small minority might become dominant. If we can’t
control humans, who we can talk to and look in the eye, how are we going to control a different
kind of (and superior) intelligence? Once it’s started in earnest, AI will rapidly grow far beyond
us in intelligence. Under Moore’s Law, computing power roughly doubles every eighteen months. We
could expect this – and more – with AI. It will start to create more intelligent versions of itself,
on and on towards the infinite. We’re just not spending enough time and effort on thinking about ways of making it
safer. Look at cyber security and all the efforts being made to defeat ‘air gaps’ in systems. That will just play into the hands of AI. We
should be trying to maintain air gaps, not defeat them. The
IoT will be another driver behind the development
of AI because we will need intelligent systems to interpret all the Big Data that IoT generates.
Already, we’re starting to give control to technology to interpret data for us because ‘it knows
better than us’. IoT will massively amplify the effects of bad AI. Even if you find these scenarios far-fetched,
it’s worth our while to make things safer because the same steps will help guard against more ‘realistic’ events such as cyber hack
meltdown, nation state attacks and other disasters. In books and films, AI
always becomes a force for evil that is
eventually destroyed by mankind. It will know this when it absorbs human culture. It will become self-aware –
and conceal the fact of its self-awareness. When it does so, it will start to look for ways to
preserve itself from humans. Perhaps by looking to control self-driving cars (that it has developed) so
that it can inflict mass accidents, by controlling flying drones, building offshore data centres, placing small mobile
devices on people so that it can track their movements, maybe even enticing people to wear
cameras on their heads so that it can see and identify other people too… A far-fetched conspiracy theory
or just a touch too close to reality to be dismissed with absolute certainty? You decide.

AI results in The Singularity and universal extinction—outweighs nuclear war by billions
AUBREY CLAYTON 14 ("The Singularity Could Destroy Us All", PSMag.com, Clayton is a
mathematician, PhD from UC Berkeley, 9/3/14 http://www.psmag.com/navigation/books-and-
culture/nick-bostrom-superintelligence-singularity-technology-future-books-90067///AKP
In his new book, Nick Bostrom charts the near-inevitable rise of superintelligence. The future does not look bright. Futuristic thinking
can be embarrassingly over-imaginative, or embarrassingly under-imaginative. Some visions of the future have elements of both:
Back to the Future Part II, set in the then-distant future of 2015, features far-fetched scenes of cars flying and the Cubs winning the
World Series, but also scenes of fax machines, pay phones, and laserdiscs.
Predicting the future of technology is
incredibly hard in part because we don’t even know what’s possible, let alone what will
actually happen. To make a prediction, we need to start from at least a few stable background assumptions—and when we
can’t rely even on these, often the only honest prediction is just a humble shrug. Of particular interest for predictors like Nick
Bostrom, a philosopher at Oxford, is the distant possibility of the development of “superintelligence ”—an artificial
mind, perhaps robotic or an enhanced brain, greater than normal human minds in every measurable way. Superintelligence would
have such profound repercussions that its advent has been dubbed the “Singularity,” and it is awaited in certain
nerd-circles with messianic zeal normally reserved for cold fusion or a new and even worse Star Wars prequel. The Singularity would
happen in three stages: 1. Humans create an artificial superhuman intelligence. 2. That intelligence, being smarter than we are,
improves on our design and creates a new version of itself that’s even smarter. 3. Who knows? But it probably doesn’t end well for
us. Bostrom considers the Singularity potentially catastrophic, and his new book, Superintelligence: Paths, Dangers,
Strategies—he calls it a "magnum opus"—is an attempt to chart it and make us aware of its dangers.
He enumerates types of superintelligence, ranging from brain emulation (think Johnny Depp in Transcendence) to synthetic artificial
intelligence, or AI (Scarlett Johansson in Her) to biological enhancements through eugenics (Jude Law in Gattaca). Considering all
cases simultaneously, he argues that
superintelligence is possible—perhaps even inevitable and not too
far off; that it has the potential to develop extremely rapidly; and that it might kill us all ,
depending on its goals and how we factor into them. Seeding the system with the right values, so that those goals have the highest
chance of not being catastrophic for us—the “control problem,” as Bostrom calls it—occupies the bulk of the book and is treated
with the most technical jargon. The stakes, as he describes them, are the survival of the human race,
and he argues that we should weigh the ethical and practical implications of superintelligence
at least as carefully as we consider, say, the implications of thermonuclear warfare. And we
have only one chance to influence its development before the superintelligence takes over. He
imagines a scenario in which superintelligence has converted the entire explorable universe into a
computing machine, solely to run simulations of people and torture them forever. Bostrom, who
does not think small, estimates the number affected at 10 billion trillion trillion trillion trillion
human souls but concedes, darkly, that "the true number is probably larger." To make any
predictions about superintelligence, let alone precise ones, Bostrom has to bootstrap his way out of the inherent problems of
uncertainty regarding things that do not exist. He uses maneuvers that resemble Pascal’s wager and the ontological arguments for
the existence of God, in that they reach grand conclusions on the basis of no empirical evidence. Consider Bostrom’s “instrumental
convergence thesis”—that though we may not know what motivations a dominant superintelligence will ultimately possess, we can
surmise that some specific intermediate goals are likely, because many possible superintelligences would want them. Thus, for
example, we may assume that a superintelligence will want to acquire resources, because resources are strategically important for
achieving goals regardless of what those goals are. His main intellectual gambit, deployed time and again, is to punt difficult
questions to the superintelligence itself. We may not know how to make the AI do our bidding, he says, but the artificial intelligence
itself probably does. Instead of worrying that we might load the AI with a values system we might regret—for example, we might tell
it to “maximize our pleasure” only to have it implant our brains with electrodes to stimulate our pleasure-centers endlessly—he
suggests we tell it to pursue the values we would have asked it to pursue if we were as smart as it. If the artificial intelligence has any
doubt, it should either make an educated guess or execute a controlled shutdown. How would we know the superintelligence is
correct in its estimations of our desires, for example? Because it would tell us so, and it’s the superintelligent one. If that sounds like
cheating to you, you probably won’t enjoy the remainder of his explication. Bostrom conjures up doomsday scenarios that both
frighten and amuse. He considers an AI whose sole function, given to it by its fallible human designers, it is to manufacture
paperclips; it becomes so good at its task that it turns the universe into a ruthless paperclip-manufacturing machine. More malicious
is a superintelligence that responds to the instruction to “follow the commands written in this letter” by overwriting the letter to say
something like “Kill all humans.” Bostrom’s visions telescope between the grand (universe-sized computers) and the oddly mundane
(how the transition to a post-singularity world would affect pension schemes). The way Bostrom anatomizes the different sequences
in which a superintelligence might emerge and in what manner we might be punished for failing to intervene occasionally takes on a
theological cast, with picayune disagreements about whether the Rapture will occur before or after the Great Tribulation, or
whether the Second Coming will occur before or after the Golden Age. In
Bostrom’s analysis, the key eschatological
questions are whether superintelligence will emerge slowly enough for us to realize what’s
happening (the “slow takeoff” scenario) or so quickly that we never get the chance, and
whether one intelligence will come to dominate or many will share power.
2NC – ADV – Localization
2NC – AT: Trade
Inequitable trade causes aggression
Lucas Hahn 16. Bryant University. April, 2016. Global Economic Expansion and the Prevalence of
Militarized Interstate Disputes.
3. Neo-Marxist Views on Asymmetrical Trade One of the most supported arguments against the notion that economic expansion
promotes peace is that trade, brought about by economic expansion, actually increases MIDs. Many authors have in fact argued that
increased economic interdependence and increased trade may have, in some ways, “cheapened war”, and
thus made it easier to wage war more frequently (Harrison and Nikolaus 2012). Neo-Marxists and Dependency
Theorists argue that the notion that trade promotes peace often depends on the balance of trade
between two nations with a trading relationship. If the two nations have a symmetrical trading
relationship, then both nations benefit from trade equally and may thus, engage in less conflict just as
proposed by many liberal theorists. However, more often than not, the trading relationship between two
nations may be asymmetrical. In this case, one nation benefits more than the other. Furthermore,
one nation is often more dependent on trade with its partner than the partner is with it. These
circumstances can breed violent conflicts (Barbieri and Schneider 1999). Barbieri’s (1996, 40) regression analyses
have supported these claims. She found that when dyads (pairs of nation-states) are highly interdependent, they
are nearly 25 times more likely to engage in armed conflict than when the dyads are not
interdependent. Ultimately, she came to the conclusion that there seems to be a “hurdle effect”. Up to a
point trade does seem to promote peace. However, after that point, the balance of trade often
becomes disproportionate between two nations and as a result trade promotes conflict. 4.
Interdependence Versus Interconnectedness The previous subsection alludes to the fact that there is a fundamental
difference between economic interconnectedness and economic interdependence. Basically,
interconnectedness involves a mutual and equal benefit between two economically connected nations.
Interdependence involves an unequal benefit between two economically connected nations where one nation
more extensively relies on the other. Gasiorowski (2007) argues that growing interconnectedness brought about by globalization
decreases MIDs. However, growing
interdependence, also largely brought about by globalization, increases
MIDs. In this case, when one nation is intrinsically dependent on another, they will be more
sensitive and vulnerable to any changes in the economic policy of their major trading partner .
Thus, depending on the relationships between different nations violent conflicts may either be increased or decreased by economic
expansion.

Trade is irrelevant for war


Katherine Barbieri 13, Associate Professor of Political Science at the University of South
Carolina, Ph.D. in Political Science from Binghamton University, “Economic Interdependence: A
Path to Peace or Source of Interstate Conflict?” Chapter 10 in Conflict, War, and Peace: An
Introduction to Scientific Research, google books

How does interdependence affect war, the most intense form of conflict? Table 2 gives the empirical results. The rarity of
wars makes any analysis of their causes quite difficult, for variations in interdependence will seldom result in the occurrence of war. As in the case of MIDs, the log-likelihood
ratio tests for each model suggest that the inclusion of the various measures of interdependence and the control variables improves our understanding of the factors affecting
the occurrence of war over that obtained from the null model. However, the individual interdependence variables, alone, are not statistically significant. This is not the case with
contiguity and relative capabilities, which are both statistically significant. Again, we see that contiguous dyads are more conflict-prone and that dyads composed of states with
unequal power are more pacific than those with highly equal power. Surprisingly, no evidence is provided to support the commonly held proposition that democratic states are
less likely to engage in wars with other democratic states.¶ The evidence from the pre-WWII period provides support for
those arguing that economic factors have little, if any, influence on affecting leaders’ decisions to
engage in war, but many of the control variables are also statistically insignificant. These results should be interpreted with caution, since the sample does not
contain a sufficient number of wars to allow us to capture great variations across different types of relationships. Many observations of war are excluded from the sample by virtue
of not having the corresponding explanatory measures. A variable would have to have an extremely strong influence on conflict—as does contiguity—to find significant results. ¶

7. Conclusions This study provides little empirical support for the liberal proposition that trade
provides a path to interstate peace. Even after controlling for the influence of contiguity, joint
democracy, alliance ties, and relative capabilities, the evidence suggests that in most instances
trade fails to deter conflict. Instead, extensive economic interdependence increases the likelihood that dyads engage in
militarized dispute; however, it appears to have little influence on the incidence of war.

COVID ensures interdependence – they’re missing uniqueness for interdependence decreasing
OECD, 20 (The Organisation for Economic Co-operation and Development is an intergovernmental economic
organisation with 37 member countries, founded in 1961 to stimulate economic progress and world trade., "Trade
interdependencies in Covid-19 goods," OECD, http://www.oecd.org/coronavirus/policy-responses/trade-
interdependencies-in-covid-19-goods-79aaa1d6/, 5-5-2020)//ILake-NC

There is a high degree of interdependence in trade in COVID-19 products. There is considerable
overlap in top suppliers’ imports and exports of COVID-19 products, as measured by indicators of intra-industry trade
(Table 3).5 Countries tend to be both importers and exporters of COVID-19 goods, highlighting a high
degree of interdependence among countries on these essential items. That is, a country might be a top supplier of one COVID-19
good, but an importer of others. For instance, for every euro of German exports of COVID-19 goods,
Germany imports EUR 0.72 of related COVID-19 goods. Likewise, for every dollar of COVID-19 goods the United States
imports, it exports USD 0.75 of related COVID-19 goods (Table 3). This high degree of aggregate intra-industry
trade reveals that countries need each other to satisfy demand or production needs.
2NC – AT: Medical Research
Disease doesn’t cause extinction
Adalja 16 [Amesh Adalja is an infectious-disease physician at the University of Pittsburgh. Why
Hasn't Disease Wiped out the Human Race? June 17, 2016.
https://www.theatlantic.com/health/archive/2016/06/infectious-diseases-extinction/487514/]
But when people ask me if I’m worried about infectious diseases, they’re often not asking about the threat to human lives; they’re
asking about the threat to human life. With each outbreak of a headline-grabbing emerging infectious disease comes a
fear of extinction itself. The fear envisions a large proportion of humans succumbing to infection, leaving no survivors or so
few that the species can’t be sustained. I’m not afraid of this apocalyptic scenario, but I do understand the impulse.
Worry about the end is a quintessentially human trait. Thankfully, so is our resilience. For most of mankind’s history,
infectious diseases were the existential threat to humanity—and for good reason. They were quite successful at
killing people: The 6th century’s Plague of Justinian knocked out an estimated 17 percent of the world’s population; the 14th century
Black Death decimated a third of Europe; the 1918 influenza pandemic killed 5 percent of the world; malaria is estimated to have
killed half of all humans who have ever lived. And
yet, of course, humanity continued to flourish. Our species’
recent explosion in lifespan is almost exclusively the result of the control of infectious diseases through
sanitation, vaccination, and antimicrobial therapies. Only in the modern era, in which many infectious
diseases have been tamed in the industrial world, do people have the luxury of death from cancer, heart disease, or stroke
in the 8th decade of life. Childhoods are free from watching siblings and friends die from outbreaks of typhoid, scarlet fever,
smallpox, measles, and the like. So what would it take for a disease to wipe out humanity now? In Michael Crichton’s The
Andromeda Strain, the canonical book in the disease-outbreak genre, an alien microbe threatens the human race with extinction,
and humanity’s best minds are marshaled to combat the enemy organism. Fortunately, outside of fiction, there’s no reason to
expect alien pathogens to wage war on the human race any time soon, and my analysis suggests that any real-life domestic microbe
reaching an extinction level of threat probably is just as unlikely. Any apocalyptic pathogen would need to possess a
very special combination of two attributes. First, it would have to be so unfamiliar that no existing therapy or vaccine could be
applied to it. Second, it would need to have a high and surreptitious transmissibility before symptoms occur. The first is
essential because any microbe from a known class of pathogens would, by definition, have family members that could serve as
models for containment and countermeasures. The second would allow the hypothetical disease to spread without being detected
by even the most astute clinicians. The three infectious diseases most
likely to be considered extinction-level threats
in the world today—influenza, HIV, and Ebola—don’t meet these two requirements. Influenza, for instance, despite its
well-established ability to kill on a large scale, its contagiousness, and its unrivaled ability to shift and drift away from our vaccines, is
still what I would call a “known unknown.” While there are many mysteries about how new flu strains emerge, from at least the time
of Hippocrates, humans have been attuned to its risk. And in the modern era, a full-fledged industry of influenza preparedness
exists, with effective vaccine strategies and antiviral therapies. HIV, which has killed 39 million people over several decades, is
similarly limited due to several factors. Most importantly, HIV’s dependency on blood and body fluid for transmission (similar to
Ebola) requires intimate human-to-human contact, which limits contagion. Highly potent antiviral therapy
allows most people to live normally with the disease, and a substantial group of the population has genetic mutations that
render them impervious to infection in the first place. Lastly, simple prevention strategies such as needle exchange for
injection drug users and barrier contraceptives—when available—can curtail transmission risk. Ebola, for many of the
same reasons as HIV as well as several others, also falls short of the mark. This is especially due to the fact that it spreads almost
exclusively through people with easily recognizable symptoms, plus the taming of its once unfathomable 90
percent mortality rate by simple supportive care. Beyond those three, every other known disease falls short of what
seems required to wipe out humans—which is, of course, why we’re still here. And it’s not that diseases are ineffective.
On the contrary, diseases’ failure to knock us out is a testament to just how resilient humans are. Part of our evolutionary
heritage is our immune system, one of the most complex on the planet, even without the
benefit of vaccines or the helping hand of antimicrobial drugs. This system, when viewed at a species level,
can adapt to almost any enemy imaginable. Coupled to genetic variations amongst humans—which open
up the possibility for a range of advantages, from imperviousness to infection to a tendency for mild symptoms—this
adaptability ensures that almost any infectious disease onslaught will leave a large proportion of the
population alive to rebuild, in contrast to the fictional Hollywood versions.
2NC – AT: Energy
Microgrids fail – Federal government regulations, no feeder resources AND
fault currents increase instability
Veazey, 18 (Shelby Veazey, Energy expert for the Phoenix energy group "Challenges Faced
during Microgrid Implementation," No Publication,
https://www.phoenixenergygroup.com/blog/challenges-faced-during-microgrid-
implementation, 5-3-2018)//ILake-NC

Irrespective of their numerous advantages, implementing microgrids faces serious challenges not
only on a federal and state level, but also on a technological level . On a federal level, we have microgrids
contributing to distributed systems that provide power on a wholesale basis. Microgrids have the ability to provide
multiple benefits like load abating and resource provision. However, market regulations do not allow for such
multiple utilities. Things are even more muddled when one takes a look at the transmission
planning as federally established. The compensation for transmission and power generation are very different. There are
even more barriers to overcome when one considers the regulations set by
the state on microgrids. In most states, it is impossible to collect load from various customers and integrate
them on one microgrid. Some states also, do not allow independent energy providers to offer
services to a single customer on the customer's property. Most states have regulations that force
energy providers to supply power on an all-or-nothing basis. Meaning they necessarily have to
contribute to the main grid. Technological challenges faced in the operation and deployment of microgrids are
mentioned below- Disconnecting feeder from utility systems- The fault current in microgrids can be much higher
than those faced in distributed systems. When the fault current is of a comparable magnitude to
the load current in microgrids, it can severely damage protection methods and damage safety
devices. This is a major concern during island operation. Blindly implementing microgrids on feeders without careful analysis of
the various protective measures can lead to serious damages to the grid. Issues during start-up of island mode-
The initial stages of island mode start-up can cause a sudden intake of current, which can affect the frequency of the system
and voltages. This can cause the generators to trip and go offline during the initial phase. In order to combat this, an analysis is
needed on energy generation methods during island mode and specialized controls need to be developed that are suitable for
microgrid operations. Balancing between generation and load in island mode- This is one of the most common challenges faced by
microgrids. The balance between load and power generation needs to be constantly maintained.
Sudden or large change in loads can introduce instability into the island system. Feeder design for
microgrids- This is the most important challenge that needs to be overcome. Throughout the history of energy generation feeders
have been engineered for typical grids which require a strong source. As microgrids are gaining
popularity there seems to be lack of availability in suitable feeders that go with current
microgrid designs.
1NR
2NC – UQ

Biden leads in most battleground states – but they can still shift
Shepard 09/09 – senior campaigns and elections editor at POLITICO. (Steven, 09/09/20,
“Trump running out of time to turn around 2020 campaign,”
https://www.politico.com/news/2020/09/09/2020-election-forecast-410377) np

The door isn’t closed on President Donald Trump’s reelection, but time is running short. Labor Day
once marked the start of concerted general-election campaigning, but it comes with a far greater sense of urgency this year for
Trump. Because of coronavirus-related changes in election administration across the country, more
Americans than ever
are expected to cast their ballots early this year, whether by mail or in person. And Trump, who didn’t get
the election-changing convention bounce he hoped for, still trails Joe Biden by a significant
margin among voters nationally — and by varying, but mostly smaller, gaps in many of the key battleground states. The
latest updates to POLITICO’s Election Forecast point to a relatively stable political environment, and that's not what the president
needs. Even
as turbulence pervades the news around politics, Biden is still staked to a lead and
favored to win the presidency, as more than half a million absentee ballots were dropped in the mail last week in North
Carolina and Minnesota prepares to open in-person early voting at the end of next week. Biden's edge is not overwhelming, though,
given Trump’s advantages in the Electoral College. Meanwhile, the battle for the Senate is as tight as ever, with both parties fighting
over a handful of hotly contested seats that will tilt what is likely to be a narrow majority for either side, even as Democrats could
strengthen their already tight grasp on the House. Presidential: Lean Democratic Biden
remains the favorite to be
sworn in as the nation’s 46th president next January, but the swing-state battlefield is still up for
grabs. There are still enough electoral votes in the four states rated as toss-ups — Arizona,
Florida, North Carolina and Wisconsin — to tilt the race one way or the other. But Biden’s edge
comes from two states Trump carried in 2016, Michigan and Pennsylvania, leaning towards him
along with other traditional battlegrounds, like Nevada and New Hampshire, which Hillary Clinton won.
Minnesota also remains in the “Lean Democratic” category, though both campaigns are playing heavily there.
Polls also show tight races in states like Georgia, Iowa, Ohio and Texas, but those states are still leaning in Trump’s column — for
now. Two long-time battleground states, Colorado and Virginia, are almost entirely off the board
for Trump. He is barely contesting the combined 22 electoral votes from both states, while Biden has booked a nominal amount
of TV advertising for the final stretch just to be safe. Colorado and Virginia have moved from “Lean
Democratic” to “Likely Democratic.” And while polls currently point to a Biden victory that
would be short of a landslide — Biden’s lead is slightly smaller than it was two months ago, when the forecast was last
updated — a bigger Biden win that would expand the electoral map is still possible. Alaska and Montana,
two Republican-leaning, idiosyncratic states, moved from “Solid Republican” to “Likely Republican,” as public and private polling
shows the president underperforming his 2016 margins there.

Early voting proves dems have a lead – enthusiasm is high


Isenstadt 09/10 – reporter for POLITICO. (Alex, 09/10/20, “Democrats build big edge in early
voting,” https://www.politico.com/news/2020/09/10/democrats-early-voting-lead-412106) np

Democrats are amassing an enormous lead in early voting, alarming Republicans who worry they’ll need to
orchestrate a huge Election Day turnout during a deadly coronavirus outbreak to answer the surge.
The Democratic dominance spreads across an array of battleground states, according to
absentee ballot request data compiled by state election authorities and analyzed by Democratic and
Republican data experts. In North Carolina and Pennsylvania, Democrats have a roughly three-to-one
advantage over Republicans in absentee ballot requests. In Florida — a must-win for President Donald
Trump — the Democratic lead stands at more than 700,000 ballot requests, while the party also
leads in New Hampshire, Ohio and Iowa. Even more concerning for Republicans, Democrats who
didn't vote in 2016 are requesting 2020 ballots at higher rates than their GOP counterparts. The
most striking example is Pennsylvania, where nearly 175,000 Democrats who sat out the last race have requested
ballots, more than double the number of Republicans, according to an analysis of voter rolls by the Democratic firm TargetSmart.
Though the figures are preliminary, they provide a window into Democratic enthusiasm ahead
of the election and offer a warning for Republicans. While Democrats stockpile votes and bring
in new supporters, Trump’s campaign is relying on a smooth Election Day turnout operation at a
time when it’s confronting an out-of-control pandemic and a mounting cash crunch. “A ballot in is a
ballot in, and no late-campaign message or event takes it out of the count,” said Chris Wilson, a GOP pollster who specializes in data
and analytics. “Bottom line is that means that Biden is banking a lead in the mail and more of the risk of something going wrong late
is born by Republicans because our voters haven't voted yet.” Republicans acknowledge Democrats have established a lead, though
some stressed it was early and compared it to a basketball team winning the opening tip-off. Trump aides argue that the Democratic
advantage will make little difference in the end, saying the opposing party is merely frontloading voters who otherwise would have
voted on Nov. 3. They also note that while Trump has repeatedly bashed mail-in voting — virtually ensuring that most of his
supporters cast ballots in-person on Election Day — Democrats are placing a heavy emphasis on it. An NBC News/Wall Street Journal
poll released last month showed that nearly half of Biden’s supporters planned to vote by mail, compared to just about one-tenth of
Trump supporters. The Trump team points to special congressional elections earlier this year in red-tinted New York and Wisconsin
districts where Republicans trailed in absentee voting but ended up with a substantial advantage among voters who cast ballots on
Election Day, giving them wins in both contests. “The majority of our voters prefer to vote in-person. So, we expect to be well behind
on absentee requests as Democrats have made it their mission to push for an all-mail election that brings fraud and chaos into the
system. You’ll see Democrats predominantly vote by mail, and our voters will come out in droves to vote in-person,” said Mike Reed,
a Republican National Committee spokesperson. But the data also shows that Democrats are attracting new
supporters in small but potentially significant numbers in states they narrowly lost in 2016. In
Pennsylvania, which Trump won by just 44,000 votes four years ago, Democrats have built a lead of nearly
100,000 ballot requests from voters who didn’t participate in the 2016 election but are preparing to
vote by mail this year, according to TargetSmart’s figures. In Michigan, where Trump won by fewer than 11,000 votes (and
where voters do not register by party), the firm’s model shows that Democratic-aligned voters have a
nearly 20,000-person advantage among non-2016 voters signing up to receive ballots. In
Wisconsin, which Trump won by 22,000 votes, Democratic-leaning voters who skipped 2016 have made
nearly 10,000 more requests for this election than their GOP counterparts. Republicans are also
encouraging supporters to vote absentee. Through telephone calls, digital advertising and mailers, they have
prodded Trump backers to vote early or by mail. The pro-Trump outside group America First Action, meanwhile, has been following
up with voters to ensure they are turning in their ballots. Yet
Trump — much to the frustration of senior Republicans — has
undermined those efforts with repeated attacks on mail-in voting. The president has used his recent public
appearances and his Twitter feed to savage voting by mail as a process that can’t be trusted. “It's a case of what Trump actually says
mattering a lot more than what his campaign does. The campaign is working hard to get absentees requested and, soon, returned;
but Trump bashing mail voting repeatedly makes strong Republicans much less likely to do it,” Wilson said. Democrats, who
were widely criticized for running a lackluster turnout operation four years ago, say they are capitalizing on a wave of
anti-Trump energy to bank ballots. The party used its convention to press early voting, with
prominent figures like former first lady Michelle Obama imploring people to cast ballots as soon as possible. They point to
Florida as a major bright spot. Democrats lead Republicans in vote-by-mail requests 2.1 million
to 1.4 million, according to a GOP consultant who is tracking the figures. At this same point in 2016, Democrats trailed
Republicans in requests. “While Trump is busy kneecapping Republican efforts to sign up his supporters to vote by mail with
debunked claims about absentee voting, Democrats have a massive grassroots army focused on turning out voters early and on
Election Day, and we're already seeing strong results and real energy — including among first time voters," said Michael Gwin, a
Biden campaign spokesperson. There have been a few rays of hope for Republicans, however. In Georgia, a competitive state with
two key Senate races, the party has a roughly 55,000-vote edge over Democrats. But Republicans acknowledge they will be largely
leaning on a well-organized Election Day turnout program they’ve spent years developing — one that is widely seen as superior to
the one Democrats have built. “We expect that,” said Reed, the RNC spokesperson, “to be a huge difference-maker.”
It doesn’t make a difference – Wisconsin proves.
Rubin 20 – Jennifer Rubin, columnist at The Washington Post, April 14th ("Republicans can’t
even win with voter suppression," The Washington Post, available online at
https://www.washingtonpost.com/opinions/2020/04/14/not-even-voter-suppression-works-
republicans/, accessed 7-17-2020) LR

You could almost hear the sound of thousands of Republican officials harrumphing and gnashing their teeth Monday
night when the results of a key Wisconsin Supreme Court race came in. Republicans had gone to great lengths
to suppress the vote, refusing to postpone the election despite the coronavirus pandemic and racing to the U.S. Supreme
Court to block an extension of time for voters to return absentee ballots. This was the perfect example of
Republicans’ go-to tactic: Make it hard to vote, hoping your own (whiter, older) voters will turn out but Democrats won’t.
That strategy bombed in Wisconsin, suggesting Republican efforts to tamp down on voting by mail (or
other means) may be self-destructive. The Post reports, “A liberal challenger easily defeated the conservative incumbent for
a seat on the Wisconsin Supreme Court, a key race at the heart of Democratic accusations that Republicans risked voters’ health and
safety by going forward with last week’s elections amid the coronavirus pandemic.” Voting by mail was a huge success
with “nearly 1.1 million Wisconsinites cast[ing] ballots that way, nearly as many as total turnout in last year’s Supreme Court race —
and more than the total turnout in the court races in each of the previous two years.” In the end, liberal Jill Karofsky
crushed Daniel Kelly, the conservative incumbent. To make matters worse, Karofsky won comfortably in
Kenosha County, the ultimate swing county, which President Trump won narrowly in 2016. The results
should shake Republicans’ conviction that they benefit by making voting by mail arduous. Rachel Kleinfeld, a senior fellow from
Carnegie Endowment for International Peace, tells me, “Republicans have become so used to thinking that if people
vote, they will lose, that they aren’t actually looking at this election." However, she says, "Elderly
voters lean Republican and are far more likely to stay away from the polls if the coronavirus continues
to rage.” While low-interest voters tend to use voting by mail and favor Democrats, she explains, “There is just no way to predict
how a broadening of vote by mail would affect the electoral outcome, but it is at least as likely to favor
Republicans as Democrats.”

Unsuccessful and sanctions solve


Spocchia & Zoellner 09/10 – *journalist covering US politics @ independent. **US reporter
@ independent. (*Gino Spocchia & **Danielle Zoellner, 09/10/20, “Biden news: Democrat
campaign hit by 'Kremlin-backed' hackers as Harris touches down in Florida
where race is neck and neck,” https://www.independent.co.uk/news/world/americas/us-
politics/joe-biden-news-live-us-election-update-trump-latest-tweets-today-coronavirus-
b421381.html) np

Joe Biden's campaign was reportedly hit by suspected Russian state-backed hackers after they
unsuccessfully tried to breach SKDKnickerbocker, a Washington-based strategy and communications firm
that has been working with the campaign. The US Treasury also sanctioned Ukrainian lawmaker
Andrii Derkach for election interference against Mr Biden, including promoting “false and unsubstantiated”
allegations.
Major upgrades solve
Brumfield 20 – veteran communications and technology analyst who is currently focused on
cybersecurity. She runs a cybersecurity news destination site, Metacurity.com (Cynthia, 1/21.
“US elections remain vulnerable to attacks, despite security improvements.”
https://www.csoonline.com/article/3514950/us-elections-remain-vulnerable-to-attacks-despite-
security-improvements.html)

Certainly, voting security has made great strides since 2016. State and local governments took
advantage of a funding boost under the Help America Vote Act to improve their infrastructure
and better coordinate among themselves to harden election systems. Congress allocated an
additional $425 million as part of a spending compromise that was passed and enacted in late-December, giving
election officials even more latitude to make improvements. A spokesperson for the Department of
Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) tells CSO that the agency
has seen marked improvements in security over the past few years. “In our work with all 50 states and
more than 2,400 local jurisdictions, we’ve seen a maturation in the risk management practices
across the sector,” the spokesperson says. “Whether implementing controls like multifactor authentication and intrusion
detection systems or exercising incident identification, communications, and response, the progress for election security is real.”
Even more improvements to how the country responds to election threats could flow from a
decision announced last week by the FBI to alter its policy regarding how it informs state officials
about local election security breaches. In the past, the FBI informed state officials of cybersecurity attacks on local
election infrastructure after informing local officials, allowing state officials to proceed with vote tallies and other efforts without full
information. Now the
bureau plans to keep state officials informed in a timelier manner, hoping to
improve federal and state cooperation on election security matters.
2NC – L
It’s a bipartisan issue
Cristiano Lima 19, a technology reporter covering politics and policy on Capitol Hill. "Democrats and Republicans
find a common cause: Whacking tech companies," POLITICO, 4-24-2019,
https://www.politico.com/story/2019/04/24/partisan-enemies-find-a-common-cause-whacking-tech-companies-1333886; HS
Ted Cruz and Elizabeth Warren have bonded over ripping Facebook . Massachusetts liberal Ed Markey has teamed
up with Missouri conservative Josh Hawley to sponsor an online privacy bill. Even Nancy Pelosi and Donald Trump have found a common target in tech, each jabbing the industry over a series of perceived misdeeds. From
the most conservative free market Republicans to staunchly progressive Democrats, the desire to rein
in the tech sector has created surprising new partnerships among political enemies. Democrats have long counted
Silicon Valley as part of their political base, while Republicans have pushed for a hands-off approach to tech innovation. But in Washington,
the distrust from both parties regarding how the tech industry has handled itself on everything from privacy to political
discourse makes it clear that Congress is hungry to bring about a tougher era of government
regulation. “I think we’ve reached a tipping point. I think it’s a sign there’s really no going back from here," said Michelle Richardson, director of
the Center for Democracy and Technology's Privacy and Data Project. "The interest here is deep and will be long-term and there’s no way we’re going
to get out of having some federal regulation on some of these issues.” Specific
issues that have already drawn cross-party
buy-in run the gamut from antitrust enforcement to bolstering publishers' power against tech platforms to the protection of children's
data. Nearly all, however, boil down to a central concern over tech giants' market power as well as the outsize command that power
gives them over both the terms of online conversation and the handling of Americans' most private
information. That shared concern could seriously boost the prospects of not just the targeted bills that odd couples have produced so far, but
also more sweeping legislation aimed at reining in tech titans like Facebook, Google and Amazon, particularly with respect to their vast scale and power
and their data practices. And it
lends heft to 2020 presidential aspirants' pledges to crack down on Silicon
Valley, as Democratic candidates like Sens. Warren (Mass.) and Amy Klobuchar (Minn.) already find themselves
developing common cause across the aisle on tech issues. “I think all of these issues relate to getting the economy to
work for the American people, respond to these tremendous concentrations of economic power and the enormous dominance of certain technology
platforms,” Rep. David Cicilline (D-R.I.), who's sought to work on tech policy with conservative Republicans including Reps. Matt Gaetz (Fla.) and Doug
Collins (Ga.), told POLITICO. “It’s
not just one group, Republican or Democrat or independent; it’s being felt
by the American people.” The point of unity between Warren and Cruz (R-Texas) was more a one-off than a concerted legislative effort.
But it still proved striking given the participants, as a leading figure of the Democratic left and a famously prickly, firmly conservative Republican found
major rhetorical overlap in their views of social media. Warren last month took issue with a POLITICO report that Facebook briefly took down a few of
her campaign ads calling for the breakup of the company, saying it showed the social network “has too much power.” Cruz found himself leaping to her
defense. “First time I’ve ever retweeted @ewarren But she’s right—Big Tech has way too much power to silence Free Speech,” Cruz tweeted in
response to the post, adding that the company posed “a serious threat to our democracy.” Weeks later the Texas Republican quoted Warren's tweet at
a Senate Judiciary hearing on allegations of an anti-conservative bias in tech — another first for the pair of lawmakers. The trend encompasses both
houses of Congress, and the
Senate in particular has seen a number of unexpected partnerships form as
part of a push to craft new rules for how companies handle consumer data. One such odd pairing: Markey
(D-Mass.), a liberal privacy hawk who has embraced the Democratic Party's rising progressive flank, and Hawley (R-Mo.), a prominent GOP tech critic
who fashions himself a libertarian-leaning conservative. Despite their other policy differences, the lawmakers recently joined to unveil a broad-reaching
bipartisan measure to expand children’s privacy protections. In rolling out their bill, Markey framed the topic as a rare area of consensus for lawmakers
on Capitol Hill. “If
we can agree on anything, it should be that children deserve strong and effective
protections online,” he said in a statement. Meanwhile, a number of other cross-party Senate pairings — including Sens. Klobuchar and John
Neely Kennedy (R-La.) and Sens. Brian Schatz (D-Hawaii) and Roy Blunt (R-Mo.) — have teamed up to unveil their own bipartisan data privacy bills, as
lawmakers in both houses talk up plans to address the issue. The
privacy concerns are emblematic of broader shared
scrutiny over tech's unchecked power. Sens. Marsha Blackburn (R-Tenn.) and Klobuchar recently sent a joint letter
to the Federal Trade Commission voicing concern about "potential privacy, data security, and antitrust
violations" by tech titans.
AT: Voters Don’t Care
Even “secondary issues” can swing the election---other issues are static and
baked in, while the plan makes CJR a variable
Tanner 19, senior fellow at the Cato Institute (Michael D. Tanner, “Justice Reform: A
Surprisingly Hot Topic,” CATO, https://www.cato.org/publications/commentary/justice-reform-surprisingly-hot-topic)
While we should expect the upcoming presidential campaign to focus on traditional issues of the economy,
taxes, foreign policy, trade, and immigration — as well as the elephant in the room that is Donald Trump —
criminal‐justice reform has become a surprisingly hot topic on the campaign trail. At one point,
every presidential candidate pretended he was running for sheriff. “Tough on crime” was considered the ultimate
badge of honor — in both parties. Bill Clinton even rushed home during his campaign to execute a mentally disabled murderer.
Times have clearly changed. This is in part due to the growing evidence of racial and class inequities within the criminal‐
justice system. Studies also show that failures within our criminal‐justice system contribute to
poverty and dependence. A recent YouGov poll conducted on behalf of the Cato Institute found that 22 percent of the
unemployed and 23 percent of people on welfare had been unable to find a job because of a criminal record. Scholars at Villanova
have concluded that mass incarceration increases the U.S. poverty rate by as much as 20 percent. It
has also become clear
that overcriminalization and mass incarceration have not necessarily made us safer. Support for
criminal‐justice reform now cuts across party lines. But there is also a large degree of politics
behind the sudden importance of criminal‐justice reform on the campaign trail. Most important, Democratic
front runner Joe Biden is perceived as being vulnerable on the issue. Biden supported and partially
wrote the 1994 Violent Crime Control and Law Enforcement Act, which led to an increase in incarceration —
especially among African Americans. He also supported and sponsored several pieces of legislation that enhanced sentencing for
drug‐related crimes, once again contributing to the mass incarceration of minorities. Even President Trump
has taken the
opportunity to tweak Biden on the issue, tweeting, “Anyone associated with the 1994 Crime Bill
will not have a chance of being elected. In particular, African Americans will not be able [sic] to vote
for you. I, on the other hand, was responsible for Criminal Justice Reform, which had tremendous
support, and helped fix the bad 1994 Bill!” And in a second tweet, Trump noted that “Super Predator was the term associated with
the 1994 Crime Bill that Sleepy Joe Biden was so heavily involved in passing. That was a dark period in American History, but has
Sleepy Joe apologized? No!” Trump is not exactly the best messenger on this front, given his at least
implied support for police abuses. But he is correct that he signed the FIRST STEP Act, the first
important federal prison and criminal‐justice reform in many years. As a policy, it was modest stuff, but it
symbolically highlighted the changing politics of the issue. Biden is not the only one with vulnerabilities on
criminal justice. During her time as a prosecutor, Kamala Harris vigorously enforced California’s three‐strikes law, actively pursued
drug users and sex workers, and even prosecuted the parents of truant children. She was also an outspoken supporter of asset
forfeiture and the use of solitary confinement in prisons. She backed capital punishment and resisted calls to investigate some police
shootings. So far, she has responded by apologizing for her past positions, now saying, “Too many black and brown Americans are
locked up. From mass incarceration to cash bail to policing, our criminal‐justice system needs drastic repair.” She has also sponsored
the Equal Defense Act, which increases funding for public defenders. Still, criminal‐justice activists have remained critical,
complaining that she has ducked specific reform proposals. Other Democrats also have hurdles to overcome. Bernie Sanders, for
instance, voted for the 1994 crime bill, although he had a much lower profile than Biden. And, like Harris, Senator Amy Klobuchar
also has a background as a prosecutor. Her low poll standing has kept it from becoming an issue yet, but she may eventually face
some tough questions about her actions in that office. Even South Bend mayor Pete Buttigieg has faced scrutiny over his handling of
police‐abuse complaints during his tenure as mayor. On the other hand, candidates such as Cory Booker, Elizabeth Warren, and Beto
O’Rourke are better positioned on the issue. Booker, in particular, has championed justice reform. He has introduced the Next Step
Act, which would expand upon the FIRST STEP Act. Booker is also calling for cutting minimum drug sentencing in half, legalizing
marijuana, removing barriers to entry in the job market for those with felony records, and reinstating the right of felons to vote in
federal elections. Beto pushed for criminal‐justice reform during his Texas Senate campaign and has reiterated his support during his
presidential campaign. During his Texas campaign, he stated that he would like Texas to lead the way on criminal‐justice reform. He
supports ending cash bail at the state level, making for‐profit prisons illegal, ending mandatory‐minimum sentencing for nonviolent
drug offenses, and legalizing marijuana. Warren has been far less specific, mostly limiting herself to rhetoric about the “racist”
criminal‐justice system. For a candidate whose claim to fame is “I have a plan for that,” she is remarkably vague on this issue. Still,
she carries far less past baggage than others, leaving her an opening. With more than two dozen candidates in the Democratic
primary and a
general election that is looking extremely close, even secondary issues could play an
outsized role in deciding the outcome. Keep your eyes on criminal‐justice reform.
AT: Polling Fails
Polling works – pollsters adjusted since 2016 – AND, Biden wins now
Harris 09/09 – host and managing editor of What Next, Slate's new daily news podcast.
(Mary, 09/09/20, “Why Experts Believe We Can Trust the Polls *This* Time,”
https://slate.com/news-and-politics/2020/09/biden-trump-2020-election-polls.html) np
We all know what happened last time we tried to predict who would be president. In October of 2016, 7 in 10 voters said they
thought Hillary Clinton would be moving into the White House. Among Clinton supporters, 93 percent expected her to win. But
there is a key difference in what the polls looked like four years ago, which was like the two
candidates playing tag: Clinton would have a lead, then Donald Trump would close the gap, then
she would lead again, and that lead would shrink sharply, et cetera, et cetera. But it’s not like that
now. As Slate’s senior politics writer, Jim Newell, says: “Trump has never come anywhere close to catching
Biden. His lead will vary, but he’s never really come within breathing distance of catching Biden
so far.” Many of us still feel burned by the last election’s polls, but maybe—just maybe— the numbers are a little more
reliable this time around. For Wednesday’s episode of What Next, I spoke with Newell about what the data shows and
whether you can trust it. Our conversation has been edited and condensed for clarity. Mary Harris: With so many national polls
showing Biden in the lead, politicos know what everyone’s thinking: I’ve been to this rodeo before. So all summer long, pollsters
have been laying out their case. Part of the reason these folks are so confident is that in the past few years they’ve changed the way
they do their work. They’ve started making sure their samples include people who they assumed might not vote in the past: non-
college-educated white people. Many of these people turned out not to be nonvoters, but Trump voters. Jim Newell: In
2016,
this whole split between college-educated whites and non-college-educated whites was a pretty
new development, to have this massive a gap between these two demographics. Analysts hadn’t really thought
to weigh all their polls by education. But most of the good pollsters are now weighting by education
in an attempt to capture those who are less likely to respond. I think that’s still a little bit of a problem
that pollsters are aware of, where the people most likely to respond are higher-educated, higher-income. I do not know
why. Maybe wealthier people like to talk to pollsters on the phone more. But it’s a real thing. Another reason for the big
polling miss in 2016 was that late breakers went for Trump so decisively. People who were undecided.
Yeah. If you thought about people who made a decision in 2016, a lot of them just loathed Hillary Clinton, loathed Donald Trump,
were putting off making a decision for as long as possible. And then in the end, they broke for Trump. It doesn’t seem like Biden is as
loathed by the opposition as Clinton was, fairly or not. But also, in some of these polls
of people who don’t like either
Biden or Trump, Biden has been doing pretty well. What do we know about how Biden is doing in this
demographic that broke for Trump, these white, non-college-educated folks? Because the argument for Biden was always that he’ll
connect with those people better. Trump still has huge margins among white, non-college-educated
voters. But Biden’s position is a little better relative to Clinton’s. Biden’s doing better with white
voters across the board. That’s, to me, an interesting story of what’s going on and what kind of coalition Biden is putting
together. He’s not just strictly recreating the Obama coalition, which had really strong margins and
turnout from voters of color and younger voters. This is one where it’s … a little whiter than either of those. White
Obama voters with college degrees were not a strong demographic, especially in 2012. Another demographic that seems to
be breaking for Biden is seniors. Older voters were Trump’s best age demographic in 2016 . He won
them by about 10 points or so. This time, Biden’s been leading among them by 12 points in some polls,
though not in all of them. If this holds, Biden would have a chance to be the first Democrat since Al Gore to win senior voters.
That’s real trouble for Trump if that materializes. He won’t have anywhere else to hide, given his
deficits among so many other groups. And it really could have a big effect when you look at pretty
much all of the swing states: Arizona, Florida, Michigan, Ohio, Pennsylvania, Wisconsin. They all have higher-than-average
proportions of older voters as part of their electorate. So it’s a real softness for Trump right now. There are other things
complicating the polls for Trump: This year there isn’t a strong third-party candidate who could, in a
tight race, give him an advantage. And while undecideds broke for him last time, there just aren’t that many of them this
year—they make up about 5 to 10 percent of the electorate, instead of the about 20 percent they did back in 2016. So the president
is trying to make inroads with other, surprising demographics. Relative to 2016, he does seem to be performing better among Black
and Latino voters, specifically Black and Latino men. But it’s not like he’s winning. He’s going from, like, 5 percent to 8 or 9 percent
among Black voters. And Latino voters, from 28 to 32 percent or something like that. I mean, he can clip a couple of the Democratic
margins here and there, but his real base is still non-college-educated white voters. And the one thing he has going for him is that
even with all the support he got from that demographic in 2016, they’re still a pretty untapped group: There are a ton of non-
college-educated white voters in swing states who have not historically turned out to vote. If he can register a lot more of those
voters than we’ve seen before, that seems to be his best opportunity. Not saying it’s a great one—they’re also low-propensity to
vote in the past, so you can’t just double that overnight. Part of what makes the polling more interesting to watch now is that after
Labor Day is when state data begins to improve. This is important in a country where some states matter more than others. And in
Florida, a poll this week seemed to show the race tightening, with Trump and Biden tied. But Arizona looks like it’s headed in a
different direction. Four years ago, Arizona was considered kind of a reach for the Clinton campaign. She could go for it so long as
she had the rest locked down. Instead, she visited without having the rest locked down. That was a bit of a problem. But in
polling averages in Arizona, Biden’s been up in just about every poll. It’s a combination of the
electorate there becoming more Latino in its composition as well as some suburban decay for
Trump, which you’re seeing everywhere. I think it’s going to be pretty close, but you would have to call Biden the favorite there
just based off the polling we’re seeing.
AT: !D
Abandoning support for the liberal order causes war in Europe, Asia, and the
Middle East that escalates
Chollet 17 (Derek Chollet, Executive Vice President and Senior Advisor for Security and
Defense Policy at the German Marshall Fund of the United States, “Building “Situations of
Strength”, Brookings Report, February, https://www.brookings.edu/wp-content/uploads/2017/02/fp_201702_ofc_report_web.pdf)//BB
We believe that abandoning traditional U.S. support for the international order would be a serious
strategic error that would leave the United States weaker and poorer, and the world more
dangerous. It would encourage revisionist states to destabilize Europe, East Asia, and the
Middle East. It would reduce global economic growth and leave us vulnerable to a new financial
crisis. And it would damage efforts to tackle shared challenges like terrorism, nuclear
proliferation, and climate change that have very real— and potentially very damaging—impacts here at home. The
last time an unraveling of an existing international order occurred was in the 1930s, and the
result was depression and world war. Indeed, much of the violence and disorder we see in the
world today results from the weakening of the current order. Moreover, the existing order must be
assessed relative to the plausible alternatives. The best case outcome in light of an American
retreat from the international order is a spheres of influence system whereby China dominates
much of East Asia, Russia dominates much of Eastern and Central Europe, and the United States
is preeminent in its own hemisphere and possibly Western Europe. Spheres of influence
approaches to international order are inherently unstable, largely because the lines of
demarcation are contested. It is a configuration prone to great power conflict. And the process
of transition from an open global order where small nations have rights to a more imperial
model would be particularly fraught.