
2AC

On
Taxes
Growth’s sustainable:
1) Tax crackdown---ensures quick rebounds by funneling trillions into safety nets and
public investment.
2) Innovation.
McAfee 19. cofounder and codirector of the MIT Initiative on the Digital Economy at the MIT Sloan School of
Management, former professor at Harvard Business School and fellow at Harvard’s Berkman Center for Internet
and Society (Andrew, “What Causes Dematerialization? Markets and Marvels,” More from Less: The Surprising
Story of How We Learned to Prosper Using Fewer Resources—and What Happens Next, Chapter 7, pg 132-138,
Kindle, dml)

There is no shortage of examples of dematerialization. I chose the ones in this chapter because they illustrate a set of fundamental principles at the intersection of business, economics, innovation, and our impact on our planet. They are:
We do want more all the time, but not more resources. Alfred Marshall was right, but William Jevons was wrong. Our
wants and desires keep growing, evidently without end, and therefore so do our economies. But our
use of the earth's resources does not. We do want more beverage options, but we don't want to keep
using more aluminum in drink cans. We want to communicate and compute and listen to music, but we don't want an arsenal of
gadgets; we're happy with a single smartphone. As our population increases, we want more food, but
we don't have any desire to consume more fertilizer or use more land for crops. Jevons was correct at
the time he wrote that total British demand for coal was increasing even though steam engines were becoming much more efficient. He was right, in other words,
that the price elasticity of demand for coal-supplied power was greater than one in the 1860s. But he was wrong to conclude that this

would be permanent. Elasticities of demand can change over time for several reasons, the most
fundamental of which is technological change. Coal provides a clear example of this. When fracking
made natural gas much cheaper, total demand for coal in the United States went down even though its
price decreased. With the help of innovation and new technologies, economic growth in America and other rich
countries— growth in all of the wants and needs that we spend money on—has become decoupled from resource consumption. This

is a recent development and a profound one. Materials cost money that companies locked in competition would rather

not spend. The root of Jevons's mistake is simple and boring: resources cost money. He realized this, of course.
What he didn't sufficiently realize was how strong the incentive is for a company in a contested market to

reduce its spending on resources (or anything else) and so eke out a bit more profit. After all, a penny saved is a
penny earned. Monopolists can just pass costs on to their customers, but companies with a lot of competitors

can’t. So American farmers who battle with each other (and increasingly with tough rivals in other countries)
are eager to cut their spending on land, water, and fertilizer. Beer and soda companies want to
minimize their aluminum purchases. Producers of magnets and high tech gear run away from REE as
soon as prices start to spike. In the United States, the 1980 Staggers Act removed government subsidies for freight-hauling railroads, forcing them
into competition and cost cutting and making them all the more eager to not have expensive railcars sit idle. Again and again, we see that

competition spurs dematerialization. There are multiple paths to dematerialization. As profit-hungry


companies seek to use fewer resources, they can go down four main paths. First, they can simply find ways to use less of a given
material. This is what happened as beverage companies and the companies that supply them with cans teamed up to use less aluminum.
It's also the story with American farmers, who keep getting bigger harvests while using less land, water,
and fertilizer. Magnet makers found ways to use fewer rare earth metals when it looked as if China
might cut off their supply. Second, it often becomes possible to substitute one resource for another. Total
US coal consumption started to decrease after 2007 because fracking made natural gas more attractive to
electricity generators. If nuclear power becomes more popular in the United States (a topic we'll take up in chapter 15), we could use

both less coal and less gas and generate our electricity from a small amount of material indeed. A kilogram
of uranium-235 fuel contains approximately 2-3 million times as much energy as the same mass of coal or oil. According to one estimate, the total amount of
energy that humans consume each year could be supplied by just seven thousand tons of uranium fuel. Third, companies can use fewer
molecules overall by making better use of the materials they already own. Improving CNW's railcar utilization from 5
percent to 10 percent would mean that the company could cut its stock of these thirty-ton behemoths in half. Companies that own expensive

physical assets tend to be fanatics about getting as much use as possible out of them, for clear and
compelling financial reasons. For example, the world's commercial airlines have improved their load factors—essentially the percentage of seats
occupied on flights—from 56 percent in 1971 to more than 81 percent in 2018. Finally, some materials get replaced by nothing at all.

When a telephone, camcorder, and tape recorder are separate devices, three total microphones are
needed. When they all collapse into a smartphone, only one microphone is necessary. That smartphone also uses
no audiotapes, videotapes, compact discs, or camera film. The iPhone and its descendants are among the world champions

of dematerialization. They use vastly less metal, plastic, glass, and silicon than did the devices they have
replaced and don't need media such as paper, discs, tape, or film. If we use more renewable energy,
we'll be replacing coal, gas, oil, and uranium with photons from the sun (solar power) and the
movement of air (wind power) and water (hydroelectric power) on the earth. All three of these types of power are also among
dematerialization's champions, since they use up essentially no resources once they're up and running. I call these four paths to

dematerialization slim, swap, optimize, and evaporate. They're not mutually exclusive. Companies can and do pursue
all four at the same time, and all four are going on all the time in ways both obvious and subtle. Innovation is hard
to foresee. Neither the fracking revolution nor the world-changing impact of the iPhone's introduction
were well understood in advance. Both continued to be underestimated even after they occurred. The
iPhone was introduced in June of 2007, with no shortage of fanfare from Apple and Steve Jobs. Yet several months later the cover of Forbes was still asking if
anyone could catch Nokia. Innovation is not steady and predictable like the orbit of the Moon or the accumulation of interest on a certificate
of deposit. It's instead inherently jumpy, uneven, and random. It's also combinatorial, as Erik Brynjolfsson and I discussed in
our book The Second Machine Age. Most new technologies and other innovations, we argued, are combinations or

recombinations of preexisting elements. The iPhone was "just" a cellular telephone plus a bunch of sensors plus a touch screen plus an
operating system and population of programs, or apps. All these elements had been around for a while before 2007. It took the vision of Steve Jobs to see what they
could become when combined. Fracking was the combination of multiple abilities: to "see" where hydrocarbons were to be found in rock formations deep
underground; to pump down pressurized liquid to fracture the rock; to pump up the oil and gas once they were released by the fracturing; and so on. Again, none of
these was new. Their effective combination was what changed the world's energy situation. Erik and I described the
set of innovations and
technologies available at any time as building blocks that ingenious people could combine and
recombine into useful new configurations. These new configurations then serve as more blocks that
later innovators can use. Combinatorial innovation is exciting because it's unpredictable. It's not easy to
foresee when or where powerful new combinations are going to appear, or who's going to come up
with them. But as the number of both building blocks and innovators increases, we should have
confidence that more breakthroughs such as fracking and smartphones are ahead. Innovation is highly
decentralized and largely uncoordinated, occurring as the result of interactions among complex and interlocking social, technological, and
economic systems. So it's going to keep surprising us. As the Second Machine Age progresses,

dematerialization accelerates. Erik and I coined the phrase Second Machine Age to draw a contrast with the Industrial Era, which as we've seen
transformed the planet by allowing us to overcome the limitations of muscle power. Our current time of great progress with all things

related to computing is allowing us to overcome the limitations of our mental power and is transformative in a different way: it's allowing us to reverse the Industrial Era's bad habit of taking more and more from the earth

every year. Computer-aided design tools help engineers at packaging companies design generations of aluminum cans that keep getting lighter. Fracking took
off in part because oil and gas exploration companies learned how to build accurate computer models of the rock formations that lay deep underground—models
that predicted where hydrocarbons were to be found. Smartphones took the place of many separate pieces of gear. Because they serve as GPS devices, they've also
led us to print out many fewer maps and so contributed to our current trend of using less paper. It's easy to look at generations of computer paper, from 1960s
punch cards to the eleven-by-seventeen-inch fanfold paper of the 1980s, and conclude that the Second Machine Age has caused us to chop down ever more trees.
The year of peak paper consumption in the United States, however, was 1990. As our devices have become more capable and interconnected, always on and always
with us, we've sharply turned away from paper. Humanity as a whole probably hit peak paper in 2013. As these examples indicate, computers
and their
kin help us with all four paths to dematerialization. Hardware, software, and networks let us slim, swap,
optimize, and evaporate. I contend that they’re the best tools we’ve ever invented for letting us tread more
lightly on the planet. All of these principles are about the combination of technological progress and
capitalism, which are the first of the two pairs of forces causing dematerialization.
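A quick arithmetic check of the card's uranium figures (the inputs here are standard reference values, not from McAfee's text: fission of U-235 releases roughly $8\times10^{13}$ J/kg, coal roughly $3\times10^{7}$ J/kg, and world primary energy use is about 600 EJ per year):

\[
\frac{8\times10^{13}\ \text{J/kg}}{3\times10^{7}\ \text{J/kg}} \approx 2.7\times10^{6}
\quad\text{(McAfee's ``2-3 million times'')}
\]

\[
7{,}000\ \text{t} = 7\times10^{6}\ \text{kg};\qquad
7\times10^{6}\ \text{kg}\times 8\times10^{13}\ \text{J/kg} \approx 5.6\times10^{20}\ \text{J} = 560\ \text{EJ},
\]

on the order of annual world energy consumption, which matches the "seven thousand tons" estimate.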

Solves the wars their ev’s about


Dominic Dudley 19. Freelance journalist with two decades' experience in reporting on business,
economic and political stories in the Middle East, Africa, Asia and Europe, internally citing a report from
the Global Commission on the Geopolitics of Energy Transformation, chaired by a former president of
Iceland, Olafur Grimsson. 01-11-19. “China Is Set To Become The World's Renewable Energy
Superpower, According To New Report.” Forbes.
https://www.forbes.com/sites/dominicdudley/2019/01/11/china-renewable-energy-superpower/amp/
The continuing growth in renewable energy around the world is set to boost the power of China while undermining the influence of major oil exporters such as
Russia and Middle East states like Saudi Arabia, according to a new report on the geopolitical implications of the changing energy landscape. With a leading position
in renewable energy output as well as in related technologies such as electric vehicles, Beijing now finds itself in an influential position which other countries may
struggle to counter. “No country has put itself in a better position to become the world’s renewable energy
superpower than China,” says the report, which was issued by the Global Commission on the Geopolitics of
Energy Transformation – a group chaired by a former president of Iceland, Olafur Grimsson. The commission was set up by the International
Renewable Energy Agency (IRENA) last year and its findings were published on January 11 in Abu Dhabi, at IRENA’s annual assembly. The report argues that the

geopolitical and socio-economic consequences of the rapid growth of renewable energy could be as
profound as those which accompanied the shift from biomass to fossil fuels two centuries ago. The
changes are likely to include the emergence of new energy leaders around the world, changing patterns
of trade and the development of new alliances. It could also spark instability in some countries which have grown dependent on oil and
gas revenues. One of the key factors driving these changes is that, unlike traditional fossil fuels, renewable energy sources

are widely available around the world. Whether it is solar or wind power, tidal energy or hydroelectric
plants, most countries have the potential to develop some clean energy themselves. This means that
many countries which currently have to import most of their energy will in the future be able to
generate their own power – helping to improve their trade balance and reducing their vulnerability to
volatile prices. While the changes promise to democratize the provision of energy, not all countries will fare equally well in the new landscape. The report
points out that China has taken a lead in renewable energy and is now the world’s largest producer, exporter

and installer of solar panels, wind turbines, batteries and electric vehicles. China also has a clear lead in
terms of the underlying technology, with well over 150,000 renewable energy patents as of 2016, 29% of the global total. The next closest
country is the U.S., which had a little over 100,000 patents, with Japan and the E.U. having closer to 75,000 patents each. While not all patents are useful or
valuable, these figures give an indication of how much investment different countries have been putting into the industry. By contrast, major oil exporters such as
Russia, Indonesia and Saudi Arabia had negligible numbers of renewable energy patents. “The
renewables revolution enhances the global
leadership of China, reduces the influence of fossil fuel exporters and brings energy independence to
countries around the world,” said Grimsson, speaking at the launch of the report. “The transformation of energy brings big power shifts.” Beyond
China, there are a few other groups of countries which stand to gain from the trends now under way. They include countries with high potential for renewable
energy generation, such as Australia and Chile, which could become significant exporters of renewable electricity. Mineral-rich countries such as Bolivia, the
Democratic Republic of Congo and Mongolia could also tap into rising global demand for their raw materials. There are great dangers for other countries though. In
particular, there is the potential for political instability in oil-exporting countries which find their revenues drying up. And some countries blessed with large reserves
of newly-popular resources used by the renewable energy industry may also be at risk of a new wave of the resource curse. The Global Commission’s report notes
that the states of the Middle East and North Africa, together with Russia and other countries in the Commonwealth of Independent States are most exposed to a
reduction in fossil fuel revenues. On average, these regions have net fossil fuel exports of more than a quarter of their GDP. “Declining export revenues will
adversely affect their economic growth prospects and national budgets,” the report says. “To prevent economic disruption, they will need to adapt their economies
and reduce their dependence on fossil fuels.” Many of these governments are well aware of the risks they face and have been making significant investments into
renewable energy in recent years. For example, the UAE has developed vast solar energy parks and Saudi Arabia recently unveiled plans to develop 59GW of
renewable energy by 2030. However, while these will provide a useful source of energy in the future, they won’t do anything to replace the loss of income if oil and
gas demand slumps around the world. Despite launching many economic diversification programs, governments in the Gulf in particular have yet to find a way to
break their dependency on oil revenues. However, the Gulf states at least have the benefit of large savings which they can use to try and remodel their economies,
or at least soften the blow as oil and gas demand declines. Other oil exporting countries are more vulnerable to instability, particularly those which are already
unstable or have weak political systems, such as the Republic of Congo, Iraq, Libya, South Sudan, Venezuela and Yemen. On a more positive note, most countries in
sub-Saharan Africa ought to benefit from lower energy costs if they can develop domestic renewable energy to replace their existing fossil fuel imports. The same is
true for countries in south Asia and other energy importers such as European countries, China and Japan. Stressing a more optimistic view, Adnan Amin, director
general of IRENA, said at the launch of the report that “The
global energy transformation driven by renewables can reduce
energy-related geopolitical tensions as we know them and will foster greater cooperation between
states. This transformation can also mitigate social, economic and environmental challenges that are
often among the root causes of geopolitical instability and conflict.” However, he acknowledged that the changes now under
way present both opportunities and challenges and said “the benefits will outweigh the challenges, but only if the right policies and strategies
are in place. It is imperative for leaders and policy makers to anticipate these changes, and be able to manage and navigate the new geopolitical environment.”

War is uniquely likely to escalate in the current landscape


Sundaram 19. Jomo Kwame Sundaram, Former Economics Professor, Former United Nations Assistant
Secretary-General for Economic Development, Received the Wassily Leontief Prize for Advancing the Frontiers of
Economic Thought, and Vladimir Popov, Research Director at the Dialogue of Civilizations Research Institute and
Former Senior Economics Researcher in the Soviet Union, Russia and the United Nations Secretariat, “Economic
Crisis Can Trigger World War”, IPS – Inter Press Service News Agency, 2-12,
http://www.ipsnews.net/2019/02/economic-crisis-can-trigger-world-war/

Economic recovery efforts since the 2008-2009 global financial crisis have mainly depended on unconventional monetary policies. As fears rise of yet another international financial crisis, there are growing concerns about the increased
possibility of large-scale military conflict. More worryingly, in the current political landscape, prolonged
economic crisis, combined with rising economic inequality, chauvinistic ethno-populism as well as
aggressive jingoist rhetoric, including threats, could easily spin out of control and ‘morph’ into military
conflict, and worse, world war. Crisis responses limited The 2008-2009 global financial crisis almost ‘bankrupted’
governments and caused systemic collapse. Policymakers managed to pull the world economy from the
brink, but soon switched from counter-cyclical fiscal efforts to unconventional monetary measures, primarily ‘quantitative
easing’ and very low, if not negative real interest rates. But while these monetary interventions averted realization of the worst fears at the
time by turning the US economy around, they did little to address underlying economic weaknesses, largely due to the ascendance of finance in
recent decades at the expense of the real economy. Since then, despite promising to do so, policymakers have not seriously pursued, let alone
achieved, such needed reforms. Instead, ostensible structural reformers have taken advantage of the crisis to pursue largely irrelevant efforts
to further ‘casualize’ labour markets. This lack of structural reform has meant that the unprecedented liquidity central banks injected into
economies has not been well allocated to stimulate resurgence of the real economy. From bust to bubble Instead, easy credit raised asset
prices to levels even higher than those prevailing before 2008. US house prices are now 8% more than at the peak of the property bubble in
2006, while its price-to-earnings ratio in late 2018 was even higher than in 2008 and in 1929, when the Wall Street Crash precipitated the Great
Depression. As monetary tightening checks asset price bubbles, another economic crisis — possibly more severe than the last, as the economy
has become less responsive to such blunt monetary interventions — is considered likely. A decade of such unconventional monetary policies,
with very low interest rates, has greatly depleted their ability to revive the economy. The
implications beyond the economy of such developments and policy responses are already being seen. Prolonged economic distress has worsened public
antipathy towards the culturally alien — not only abroad, but also within. Thus, another round of
economic stress is deemed likely to foment unrest, conflict, even war as it is blamed on the foreign.
International trade shrank by two-thirds within half a decade after the US passed the Smoot-Hawley Tariff
Act in 1930, at the start of the Great Depression, ostensibly to protect American workers and farmers from foreign competition!
Liberalization’s discontents Rising economic insecurity, inequalities and deprivation are expected to strengthen ethno-
populist and jingoistic nationalist sentiments, and increase social tensions and turmoil, especially
among the growing precariat and others who feel vulnerable or threatened. Thus, ethno-populist
inspired chauvinistic nationalism may exacerbate tensions, leading to conflicts and tensions among
countries, as in the 1930s. Opportunistic leaders have been blaming such misfortunes on outsiders and
may seek to reverse policies associated with the perceived causes, such as ‘globalist’ economic
liberalization. Policies which successfully check such problems may reduce social tensions, as well as the likelihood of social turmoil and
conflict, including among countries. However, these may also inadvertently exacerbate problems. The recent spread of anti-globalization
sentiment appears correlated to slow, if not negative per capita income growth and increased economic inequality. To be sure, globalization
and liberalization are statistically associated with growing economic inequality and rising ethno-populism. Declining
real incomes and
growing economic insecurity have apparently strengthened ethno-populism and nationalistic chauvinism,
threatening economic liberalization itself, both within and among countries. Insecurity, populism, conflict Thomas Piketty has
argued that a sudden increase in income inequality is often followed by a great crisis. Although causality is difficult to
prove, with wealth and income inequality now at historical highs, this should give cause for concern. Of course, other factors also contribute to
or exacerbate civil and international tensions, with some due to policies intended for other purposes. Nevertheless, even if unintended, such
developments could inadvertently catalyse future crises and conflicts. Publics often have good reason to be restless, if not angry, but the
emotional appeals of ethno-populism and jingoistic nationalism are leading to chauvinistic policy measures which only make things worse. At
the international level, despite the world’s unprecedented and still growing interconnectedness,
multilateralism is increasingly being eschewed as the US increasingly resorts to unilateral, sovereigntist
policies without bothering to even build coalitions with its usual allies. Avoiding Thucydides’ iceberg Thus, protracted
economic distress, economic conflicts or another financial crisis could lead to military confrontation by the
protagonists, even if unintended. Less than a decade after the Great Depression started, the Second World War had begun as the Axis
powers challenged the earlier entrenched colonial powers.
Crime
Investigatory power is key to dismantle terror financing---risks are rising now.
DOT 18. U.S. Department of the Treasury, (12-20-2018, “National Terrorist Financing Risk Assessment,”
Treasury, https://home.treasury.gov/system/files/136/2018ntfra_12182018.pdf)//pacc

TF = terrorist financing, MSB = money services business

The wealth and resources of the United States continue to make it an attractive target for terrorist financiers raising
funds to support activity outside of the United States, and the central role played by U.S. financial institutions in processing global payments expose the U.S. financial
system to risk from transferring funds that are ultimately being sent on behalf of terrorist organizations or their supporters. Multiple
terrorist groups, including ISIS and its regional affiliates, AQ and its regional affiliates, Hizballah, ANF, and Al-Shabaab, benefit from
funds provided by U.S.-based individuals. These groups and their supporters target certain diaspora communities and aggressively use social media to identify followers and solicit
financial or other forms of material support, such as traveling abroad to serve as FTFs. U.S.-based supporters of these groups also raise funds from legitimate commercial activity and from criminal activity, and may also procure

commercial goods from U.S. companies. Despite the considerable mitigation measures and efforts of U.S. banks, the U.S. banking
system is exposed to TF risk, such as from foreign respondent banks that may not have effective AML/CFT programs. TF facilitators may seek to abuse MSBs to
move funds through the banking system, and while financial regulatory, supervisory and outreach efforts have mitigated much of the potential vulnerability, risk remains, especially
from complicit MSB employees assisting TF facilitators. The U.S. government has also aggressively prosecuted persons or entities operating as unlicensed

money transmitters and worked with financial institutions to develop measures to more effectively recognize such activity; however, given the difficulty in
identifying these transactions and their observed use to facilitate TF, some risk does remain. Terrorist groups who
place a priority on operational security will also seek to use U.S. currency to move funds, despite the increased cost and time for moving cash. Overall, the U.S. charitable sector has been significantly
strengthened against TF abuse through both government actions and charities’ efforts at enhancing transparency, due diligence and risk mitigation. Those U.S. charitable organizations that operate or send funds abroad or that

have branches outside of the United States, particularly in high-risk areas where ISIS, AQ, and other terrorist groups are most active, such as Afghanistan, Pakistan, Somalia, Syria, and Yemen, continue to face
potentially higher risk of TF abuse, while those U.S. charities operating solely domestically face low TF risk, which are the vast majority of U.S. charities. While there have been some
instances of terrorist groups soliciting funds in virtual currencies such as bitcoin, and using virtual currencies to move funds or purchase goods or services, virtual currencies do not currently present a significant terrorist financing
risk. However, inadequate regulation and supervision in most jurisdictions worldwide exacerbates the money laundering, terrorist financing, and sanctions evasions risks that virtual currency payments present. To address these

challenges, the U.S. government has developed a comprehensive and coordinated approach to dismantle terrorist financial networks in the United States and overseas.
Through the targeted and complementary use of law enforcement authorities, financial sanctions, and other financial tools, the U.S. has been able to identify and disrupt the actions of terrorist
financiers and other terrorist supporters, including investigations, prosecutions and financial sanctions targeting ISIS, AQ and

Hizballah financiers and facilitators. ISIS leaders and operatives have been aggressively targeted around the world, resulting in the U.S. sanctioning fifteen ISIS branches along with more
than 95 ISIS senior leaders, operatives, financial facilitators, recruiters, and affiliated MSBs since 2014. U.S. efforts to disrupt ISIS financing have gone well beyond financial sanctions to include targeted air strikes against ISIS cash
storage sites and support for regulatory and enforcement action by the Iraqi government. The U.S. government has aggressively utilized financial tools to limit AQ funding streams globally. This includes designating over 160
individuals affiliated with AQ and other terrorist organizations throughout Afghanistan and Pakistan, over 70 individuals and entities across the Gulf, and several more in Africa and other countries. U.S. authorities have used
financial sanctions to designate Hizballah supporters in over 20 countries, including in the Western Hemisphere, West Africa, and across the Middle East, including designations of over 25 Hizballah-affiliated individuals and entities in 2018 alone – more than any previous year. Along with the use of financial sanctions, U.S. law enforcement, led by the DOJ, has disrupted numerous avenues of support for terrorists and terrorist organizations through the investigation and prosecution of dozens of individuals in the U.S. and abroad for providing material support to designated foreign terrorist organizations.

Nuke-terror escalates every conflict---extinction.


Hayes 18. Peter Hayes, PhD from Berkeley, Director of the Nautilus Institute and Honorary Professor at the
Centre for International Security Studies at the University of Sydney, "NON-STATE TERRORISM AND INADVERTENT
NUCLEAR WAR", NAPSNet Special Reports, January 18, 2018, https://nautilus.org/napsnet/napsnet-special-
reports/non-state-terrorism-and-inadvertent-nuclear-war/

Nuclear terrorism trigger for inadvertent nuclear war post-cold war?

The possible catalytic effect of nuclear terrorism on the risk of state-based nuclear war is not a simple linkage. The multiple types and scales of nuclear terrorism may affect state-nuclear use decisions along multiple pathways that lead to inadvertent nuclear war. These include: early warning systems fail or are “tripped” in ways that lead to launch-on-warning; accidental nuclear detonation, including sub-critical explosions; strategic miscalculation in crisis; decision-making failure (such as irrational, misperception, bias, degraded, group, and time-compressed decision-making); show of force; allied or enemy choices (to seek revenge, to exploit nuclear risk, to act out of desperation); organizational cybernetics whereby a nuclear command-control-and-communications (NC3) system generates error, including
the interplay of national NC3 systems in what may be termed the meta-NC3 system. Synchronous and coincident combinations of above.[4] Exactly how, where, and when nuclear terrorism may “ambush” nuclear armed states already heading for or on such a path to inadvertent nuclear
war depends on who is targeting whom at a given time, either immediately due to high tension, or generally due to a structural conflict between states. Nuclear armed states today form a complex set of global threat relationships that are not distributed uniformly across the face of Earth.
Rather, based on sheer firepower and reach, the nine nuclear weapons states form a global hierarchy with at least four tiers, viz: Tier 1: United States, clear technological supremacy and qualitative edge. Tier 2: Russia, China, global nuclear powers and peers with the United States due to
the unique destructive power of even relatively small nuclear arsenals, combined with global reach of missile and bomber delivery systems, thereby constituting a two-tiered global “nuclear triangle” with the United States. Tier 3: France, UK, NATO nuclear sharing and delivery NATO
members (Belgium, Germany, Italy, the Netherlands and Turkey) and the NATO and Pacific nuclear umbrella states (Japan, South Korea, Australia) that depend on American nuclear extended deterrence and directly and indirectly support US and US-allied nuclear operations even though
they do not host nor deliver nuclear weapons themselves. Tier 4: India, Pakistan, Israel, DPRK. The first two tiers constitute the global nuclear threat triangle that exists between the United States, Russia, and China, forming a global nuclear “truel.” Each of these states targets the others;
each represents an existential threat to the other; and each has a long history of mutual nuclear threat that is now a core element of their strategic identity. Tier three consists of states with their own nuclear force but integrated with that of the United States (even France!) that expand
the zone of mutual nuclear threat over much of the northern and even parts of the southern hemisphere; and states that host American nuclear command, control, communications, and intelligence systems that support US nuclear operations and to whom nuclear deterrence is
“extended” (if, for example, Australia’s claim to having an American nuclear umbrella is believed). The fourth tier is composed of smaller nuclear forces with a primarily regional reach and focus. Between most of these nuclear armed states and across the tiers, there are few shared “rules
of the road.” The more of these states that are engaged in a specific conflict and location, the more unpredictable and unstable this global nuclear threat system becomes, with the potential for cascading and concatenating effects. Indeed, as the number of nuclear states projecting

nuclear threat against each other increases, the notion of strategic stability may lose all meaning. The emergence of a fifth tier of non-state actors with the capacity to project nuclear threat against nuclear-armed and nuclear umbrella states (although not only these states) is a critically important possible catalytic actor in the new conditions of nuclear threat complexity that already exist today. Such a layer represents an “edge of chaos” where the attempts by nuclear armed states to exert absolute “vertical” control over the use of nuclear weapons confront the potential of non-state entities and even individuals (insiders) to engage in “horizontal” nuclear terrorism, presenting radically different control imperatives to the standard paradigm of organizational procedures, technical measures, and safeguards of various kinds. This tier is like the waves and tides on a beach that quickly surrounds and then causes sand castles to collapse. In 2010, Robert Ayson reviewed the potential linkages between inter-state nuclear war and non-state terrorism. He concluded: “…[T]hese two nuclear worlds—a non-state actor nuclear attack and a catastrophic interstate nuclear exchange—are not necessarily separable. It is just possible that some sort of terrorist attack, and especially an act of nuclear terrorism, could precipitate a chain of events leading to a massive exchange of nuclear weapons between two or more of the states that possess them.”[5] How this linkage might unfold is the subject of the next sections of this essay.

Are non-state actors motivated and able to attempt nuclear terrorism? A diverse set of non-state actors have engaged in terrorist activities—for which there is no simple or consensual definition. In 2011, there were more than 6,900 known extremist, terrorist and other organizations associated with guerrilla warfare, political violence, protest, organized crime and cyber-crime. Of these, about 120 terrorist and extremist groups had been blacklisted by the United Nations, the European Union and six major countries.[6] Some have argued that the technical, organizational, and funding demanded for a successful nuclear attack, especially involving nuclear weapons, exceeds the capacity of most of the non-state actors with terrorist proclivities. Unfortunately, this assertion is not true, especially at lower levels of impact, as shown in Figure 1; but even at the highest levels of obtaining authentic nuclear weapons capabilities, a small number of non-state actors already exhibit the motivation and possible capacity to become nuclear-
armed. Ellingsen suggests a useful distinction that nuclear terrorists may be impelled by two divergent motivations, as shown in Figure 2, creating “opportunistic” and “patient” profiles.[7] The requirements for an opportunist non-state nuclear terrorist tend towards
immediate use and the search for short-term payoffs with only tactical levels of commitment; whereas the patient non-state nuclear terrorist is able and willing to sustain a long-term acquisition effort to deal a strategic blow to an adversary in a manner that could be achieved only with
nuclear weapons. In turn, many factors will drive how a potential nuclear terrorist non-state organization that obtains nuclear weapons or materials may seek to employ them, especially in its nuclear command-and-control orientations. Blair and Ackerman suggest that the goals,
conditions, and capacity limitations that shape a possible nuclear terrorist’s posture lead logically to three types of nuclear terrorist nuclear command-and-control postures, viz: pre-determined (in which the leadership sends a fire order to a nuclear-armed subordinate and no change is
entertained and no capacity to effect change is established in the field, that is, the order is fire-and-forget); assertive (in which only the central command can issue a nuclear fire order, central control is maintained at all times, with resulting demanding communications systems to support
such control); and delegative (in which lower level commanders control nuclear weapons and have pre-delegated authority to use them in defined circumstances, for example, evidence of nuclear explosions combined with loss-of-connectivity with their central command).[8] An example
of such delegative control system was the November 26, 2008 attack on Mumbai that used social media reporting to enable the attacking terrorists to respond to distant controller direction and to adapt to counter-terrorist attacks—a connectivity tactic that the authorities were too slow
to shut down before mayhem was achieved.[9] Logically, one might expect nuclear terrorists oriented toward short-term, tactical goals to employ pre-determined nuclear command-and-control strategies in the hope that the speed of attack and minimum field communications avoids
discovery and interdiction before the attack is complete; whereas nuclear terrorists oriented toward long-term, strategic goals might employ more pre-delegative command-and-control systems that would support a bargaining use and therefore a field capacity to deploy nuclear weapons
or materials that can calibrate actual attack based on communications with the central leadership with the risk of interdiction through surveillance and counter-attack. These differing strategic motivations, timelines, and strategies in many respects invert those of nuclear weapons states
that rely on large organizations, procedures, and technical controls, to ensure that nuclear weapons are never used without legitimate authorization; and if they are used, to minimize needless civilian casualties (at least some nuclear armed states aspire to this outcome). The repertoire of
state-based practices that presents other states with credible nuclear threat and reassures them that nuclear weapons are secure and controlled is likely to be completely mismatched with the strengths and strategies of non-state nuclear terrorists that may seek to maximize civilian
terror, are not always concerned about their own survival or even that of their families and communities-of-origin, and may be willing to take extraordinary risk combined with creativity to exploit the opportunities for attack presented by nuclear weapons, umbrella, and non-nuclear
states, or their private adversaries. For non-state actors to succeed at complex engineering project such as acquiring a nuclear weapons or nuclear threat capacity demands substantial effort. Gary Ackerman specifies that to have a chance of succeeding, non-state actors with nuclear
weapons aspirations must be able to demonstrate that they control substantial resources, have a safe haven in which to conduct research and development, have their own or procured expertise, are able to learn from failing and have the stamina and strategic commitment to do so, and
manifest long-term planning and ability to make rational choices on decadal timelines. He identified five such violent non-state actors who already conducted such engineering projects (see Figure 3), and also noted the important facilitating condition of a global network of expertize and
hardware. Thus, although the skill, financial, and materiel requirements of a non-state nuclear weapons project present a high bar, they are certainly reachable. Figure 3: Complex engineering projects by five violent non-state actors & Khan network Source: G. Ackerman, “Comparative
Analysis of VNSA Complex Engineering Efforts,” Journal of Strategic Security, 9:1, 2016, at: http://scholarcommons.usf.edu/jss/vol9/iss1/10/ Along similar lines, James Forest examined the extent to which non-state actors can pose a threat of nuclear terrorism.[10] He notes that such
entities face practical constraints, including expense, the obstacles to stealing many essential elements for nuclear weapons, the risk of discovery, and the difficulties of constructing and concealing such weapons. He also recognizes the strategic constraints that work against obtaining
nuclear weapons, including a cost-benefit analysis, possible de-legitimation that might follow from perceived genocidal intent or use, and the primacy of political-ideological objectives over long-term projects that might lead to the group’s elimination, the availability of cheaper and more
effective alternatives that would be foregone by pursuit of nuclear weapons, and the risk of failure and/or discovery before successful acquisition and use occurs. In the past, almost all—but not all—non-state terrorist groups appeared to be restrained by a combination of high practical
and strategic constraints, plus their own cost-benefit analysis of the opportunity costs of pursuing nuclear weapons. However, should some or all of these constraints diminish, a rapid non-state nuclear proliferation is possible. Although only a few non-state actors such as Al Qaeda and
Islamic State have exhibited such underlying stamina and organizational capacities and actually attempted to obtain nuclear weapons-related skills, hardware, and materials, the past is not prologue. An incredibly diverse set of variously motivated terrorist groups exist already, including
politico-ideological, apocalyptic-millenarian, politico-religious, nationalist-separatist, ecological, and political-insurgency entities, some of which converge with criminal-military and criminal-scientist (profit based) networks; but also pyscho-pathological mass killing cults, lone wolves, and
ephemeral copy-cat non-state actors. The social, economic, and deculturating conditions that generate such entities are likely to persist and even expand. In particular, rapidly growing coastal mega-cities as part of rapid global urbanization offer such actors the ability to sustain
themselves as “flow gatekeepers,” possibly in alliance with global criminal networks, thereby supplanting the highland origins of many of today’s non-state violent actors with global reach.[11] Other contributing factors contributing to the supply of possible non-state actors seeking
nuclear weapons include new entries such as city states in search of new security strategies, megacities creating their own transnationally active security forces, non-states with partial or complete territorial control such as Taiwan and various micro-states, failing states, provinces in
dissociating, failing states that fall victim to internal chaos and the displacement effects of untrammeled globalization, and altogether failed states resulting in ungoverned spaces. To this must be added domestic terrorist entities in the advanced industrial states as they hollow out their
economies due to economic globalization and restructuring, adjust to cross-border migration, and adapt to cultural and political dislocation. In short, the prognosis is for the fifth tier of non-state actors to beset the other four tiers with intense turbulence just as waves on a beach swirl
around sandcastles, washing away their foundations, causing grains of sand to cascade, and eventually collapsing the whole structure. Observed non-state nuclear threats and attacks In light of the constraints faced by non-state terrorist actors in past decades, it is not surprising that the
constellation of actual nuclear terrorist attacks and threats has been relatively limited during and since the end of the Cold War. As Martha Crenshaw noted in a comment on the draft of this paper: We still don’t know why terrorists (in the sense of non-state actors) have not moved into
the CBRN [chemical,biological, radiological or nuclear ] domain. (Many people think biosecurity is more critical, for that matter.) Such a move would be extremely risky for the terrorist actor, even if the group possessed both capability (resources, secure space, time, patience) and
motivation (willingness to expend the effort, considering opportunity costs). So far it appears that “conventional” terrorism serves their purposes well enough. Most of what we have seen is rhetoric, with some scattered and not always energetic initiatives.[12] Nonetheless, those that
have occurred demonstrate unambiguously that such threats and attacks are not merely hypothetical, in spite of the limiting conditions outlined above. One survey documented eighty actual, planned attacks on nuclear facilities containing nuclear materials between 1961-2016[13] as
follows: 80 attacks in 3 waves (1970s armed assaults, 1990s thefts, post-2010, breaches) High threat attacks: 32/80 attacks posed substantial, verified threat of which 44 percent involved insiders. All types of targets were found in the data set—on reactors, other nuclear facilities, military
bases, leading Gary Ackerman to conclude: “Overall, empirical evidence suggests that there are sufficient cases in each of the listed categories that no type of threat can be ignored.”[14] No region was immune; no year was without such a threat or attack. Thus, there is likely to be a
coincidence of future non-state threats and attacks with inter-state nuclear-prone conflicts, as in the past, and possibly more so given the current trend in and the generative conditions for global terrorist activity that will likely pertain in the coming decades. Of these attacks, about a
quarter each were ethno-nationalist, secular utopian, or unknown in motivation; and the remaining quarter were a motley mix of religious (11 percent), “other” (5 percent), personal-idiosyncratic (4 percent), single issue (2 percent) and state sponsored (1 percent) in motivation. The
conclusion is unavoidable that a non-state nuclear terrorist attack in the Northeast Asia region is possible. The following sections outline the possible situations in which nuclear terrorist attacks might be implicated as a trigger to interstate conflict, and even nuclear war.

Particular attention is paid to how nuclear command, control and communications systems may play an independent and unanticipated role in leading to inadvertent nuclear war, separate to the contributors to inadvertency normally included such as degradation of decision-making due to time and other pressures; accident; “wetware” (human failures), software or hardware failures; and misinterpretation of intended or unintended signals from an adversary.

Regional pathways to interstate nuclear war

At least five distinct nuclear-prone axes of conflict are evident in Northeast Asia. These are:

US-DPRK conflict (including with United States, US allies Japan, South Korea and Australia; and all other UNC command allies). Many permutations possible ranging from non-violent collapse to implosion and civil war, inter-Korean war, slow humanitarian crisis. Of these implosion-civil war in the DPRK may be the most dangerous, followed closely by an altercation at the Joint Security Area at Panmunjon where US, ROK, and DPRK soldiers interact constantly.

China-Taiwan conflict, whereby China may use nuclear weapons to overcome US forces operating in the West Pacific, either at sea, or based on US (Guam, Alaska) or US allied territory (in the ROK, Japan, the Philippines, or Australia); or US uses nuclear weapons in response to Chinese attack on Taiwan.

China-Japan conflict escalates via attacks on early warning systems, for example, underwater hydrophone systems (Ayson-Ball, 2011).

China-Russia conflict, possibly in context of loss-of-control of Chinese nuclear forces in a regional conflict involving Taiwan or North Korea.

Russia-US conflict, involving horizontal escalation from a head-on collision with Russian nuclear forces in Europe or the Middle East; or somehow starts at sea (most likely seems ASW); or over North Korea (some have cited risk of US missile defenses against North Korean attack as risking Russian immediate response).

Combinations of or simultaneous eruption of the above conflicts that culminate in nuclear war are also possible. Other unanticipated nuclear-prone conflict axes (such as Russia-Japan) could also emerge with little warning.
off
1st off
Case turns and outweighs the DA – structural economic issues drive nationalistic
pressures which make Republican governance inevitable. That’s Liu

Normal means is Paul, Graham, Grassley, and Scott breaking ranks for a Trump
signature
Zhou 18. Li Zhou is a Vox Politics and Policy Reporter, 12-12-2018, "Republicans’ civil war over criminal
justice reform, explained," Vox, https://www.vox.com/2018/12/12/18131130/mitch-mcconnell-
criminal-justice-reform - AM

Louisiana Republican John Kennedy just added yet another twist to Senate Republicans’ heated internal fight
over criminal justice reform. Kennedy — a longstanding opponent of sentencing changes in criminal justice legislation — said he intends to block a
vote on the bill this Thursday, because he’d like some more time to review it, according to BuzzFeed’s Paul McLeod. Kennedy’s move marks the latest development
in a contentious back-and-forth that has roiled Senate Republicans — who are deeply divided on the matter, despite President Donald
Trump’s endorsement of the legislation. On the one hand, there’s Sens. Chuck Grassley, Lindsey Graham, Tim Scott

and Rand Paul, who are among a vocal GOP contingent pressing the Senate to support the First Step Act, a bipartisan
criminal justice reform bill aimed at easing prison sentences for those incarcerated in the federal system. On the other, there’s a
group of Senate Republicans, including most prominently Sen. Tom Cotton, who have vowed to oppose the legislation and
argue that it could give violent criminals a pass. The result is a split among Republicans during a time when, increasingly, everything is

partisan: There is a small but dedicated group of Republicans for whom criminal justice reform feels personal.
There is another subset of Republicans who worry it makes the party of “law and order” seem weak
on crime. And there’s a president who, as a criminal probe into his campaign’s activities during the election circles closer, no one is
quite sure where he stands despite his official support. Because of these intra-party dynamics, Senate Majority Leader Mitch
McConnell had been reluctant to bring the bill to the floor, though he announced that he would do so earlier this week. As Vox’s German Lopez has reported, the

bill itself contains a relatively mild set of reforms that would only apply to a small fraction of the broader prison population, but it would still mark one
of the most significant criminal justice reform efforts that Congress has passed in years. The legislation includes
key provisions aimed at cutting down recidivism and reducing mandatory-minimum sentences. So far, it’s not clear which force in the Republican Party will win.

None of those Senators are in competitive races – we’ll insert their ev for reference
1NC Steinberg 5-10-2020 - served as Regional Administrator of Region 2 EPA during the
administration of former President George W. Bush and as Executive Director of the New Jersey
Meadowlands Commission (Alan, “The Electoral Math That Will Give the Democrats US Senate Control,”
Insider NJ, https://www.insidernj.com/electoral-math-that-will-give-the-democrats-us-senate-
control/)//BB

So Donald Trump is doomed to defeat. I reaffirm my prediction that Biden will win at least 400 electoral votes and
attain a popular vote margin of at least ten percent. And the presidential landslide will enhance the Democratic Senate campaign as
well. The GOP currently holds a 53-47 Senate majority, and Democrats would need to net three seats to win control
of the chamber if they also win the White House — or four seats if Donald Trump wins reelection. The GOP is certain to regain the Alabama US Senate seat currently
held by Democrat Doug Jones. This means that in order to win control of the Senate, the Democrats must win five seats currently held by Republican incumbents. It
so happens that there
are five races in which the Democratic challengers are solid favorites to defeat the
Republican incumbents.
Arizona Democrat Mark Kelly, a former Astronaut, US Navy Captain, and husband of former
Representative and assassination survivor Gabby Giffords, is an overwhelming favorite to defeat
incumbent GOP US Senator Martha McSally.

Democratic former Colorado Governor John Hickenlooper is a prohibitive favorite to defeat


incumbent Republican US Senator Cory Gardner.

Speaker of the Maine House of Representatives Sara Gideon is a solid favorite to triumph over
incumbent GOP Senator Susan Collins, at long last sending her into an overdue retirement.

Montana Democratic Governor Steve Bullock is continuing to maintain a sizable lead over
incumbent Republican US Senator Steve Daines.

North Carolina Democrat former State Senator Cal Cunningham is on course to maintain his
start-to-finish lead over Republican incumbent US Senator Thom Tillis.

Republicans win the Senate even if Biden wins.


McCormack 20 – John McCormack, the Washington correspondent for National Review and a fellow
at the National Review Institute. [Republicans Could Hold the Senate Even If Trump Loses, 8-6-20,
https://www.nationalreview.com/2020/08/republicans-could-hold-the-senate-even-if-trump-
loses/]//BPS

But the conventional wisdom seems to have grown too bearish on the GOP odds of holding the Senate.
The defeat of the deeply unpopular Kris Kobach in Tuesday’s Kansas GOP Senate primary, along with fresh polling this week out of Iowa and
Montana showing Republicans ahead, suggests that Republicans have a decent chance of narrowly holding onto the upper
chamber even if Joe Biden wins the presidency.

Here’s a closer look at the latest polls and the math behind preventing unified Democratic control of
government.

On Tuesday, the first public poll of Alabama since the July primary runoff shows Republican nominee Tommy Tuberville leading incumbent
Democrat Doug Jones
by 17 points (52 percent to 35 percent). Absent revelations as bad as what took down Roy Moore in
2017, this Alabama Senate seat will likely return to Republicans.

A Tuberville win in Alabama would mean Republican incumbents could lose in three states and the GOP would still
hold a 51-seat majority in the Senate.
But pick-up opportunities abound for the Democrats.

First, there are the two Republican seats in states that voted for Hillary Clinton in 2016: Colorado and Maine.

In Colorado, polls this spring showed Republican incumbent Cory Gardner trailing Democratic candidate and former governor John
Hickenlooper by double digits, but only one public nonpartisan poll has been released this summer: A late July Morning Consult poll showed
Hickenlooper leading Gardner 48 percent to 42 percent. The same survey showed Joe Biden leading Trump 52 percent to 39 percent. Hillary
Clinton defeated Trump in Colorado 48 percent to 43 percent in 2016.

In Maine, there has also only been one public nonpartisan poll released this summer: A late July Colby College poll showed Democrat Sara
Gideon leading incumbent Republican Susan Collins 44 percent to 39 percent, while Biden was leading Trump 50 percent to 38 percent. Trump
lost Maine 45 percent to 48 percent in 2016.

Then there are the two purple states where Republican incumbents are most endangered: North Carolina and Arizona.
In North Carolina, incumbent Republican Thom Tillis trailed Democratic challenger Cal Cunningham by 9 points in two separate nonpartisan
polls in July (one conducted by Marist and the other by YouGov). Trump, who won the state by 3.7 points in 2016, trailed Biden by 44 percent
to 51 percent in the Marist poll and 44 percent to 48 percent in the YouGov poll.

In Arizona, several polls were released in July, and the RealClearPolitics polling average shows Republican incumbent Martha McSally trailing
Democrat Mark Kelly by nearly 7 points (42.8 percent to 49.6 percent). Trump carried the state by 3.5 points in 2016 and now trails Biden by 3.7
points in the RCP polling average.

If Democrats sweep all four races, it’s lights out for the GOP Senate majority. And if the election were held today, Republicans would likely lose
all four races. In the eight congressional election cycles dating back to 2004, there is only one Senate race in which a candidate won despite
trailing by more than three percentage points in the RealClearPolitics polling average on Election Day.

There are, however, many examples of Senate candidates who still won despite trailing by three points or
less in the RCP polling average on Election Day. (It’s called a “margin of error” for a reason.)
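
As a rough illustration of why three points sits inside the noise, here is a minimal Python sketch of the margin-of-error arithmetic (it assumes a simple random sample; the poll size is hypothetical, not taken from the card):

import math

def margin_of_error(p, n, z=1.96):
    # 95 percent margin of error for a proportion p from a simple random sample of n
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical statewide poll of 800 likely voters split roughly 50/50:
print(round(100 * margin_of_error(0.5, 800), 1))  # ~3.5 points

A candidate "trailing" by three points in a poll like that is within sampling error, which is the card's point.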

And Election Day is still three months away. It’s not impossible to imagine North Carolina or Arizona tightening as
the election nears. In August 2016, for example, Wisconsin GOP senator Ron Johnson trailed Democrat Russ Feingold by 11 points, and the race
didn’t tighten until October. On Election Day, the polls showed Johnson down 2.7 points; he won by 3.4 points.

But the most realistic path for the GOP Senate holding onto a 51-seat majority involves Republicans losing North Carolina, Colorado, and
Arizona, while Susan Collins holds onto Maine and the rest of the GOP incumbents win other competitive races discussed below.

Although Collins is down by 4.5 points in the RCP polling average, a Republican strategist tells National Review that “all of
Collins’s polling over the last four months has shown her ahead. The last month it’s been outside the
margin of error.” Collins won’t run 21 points ahead of the GOP presidential nominee, as she did in 2008, but it’s not hard to see her
running far enough ahead of Trump to win in 2020.

So it’s entirely possible over the next three months that the
presidential race could tighten. Perhaps some swing voters could
become worried about the consequences of an unchecked Democratic government.

Congress can’t separate from the party.


Grossmann 20 – Matt Grossmann, Political Science Professor at Michigan State University. [Why GOP
Senators Are Sticking with Trump — Even Though It Might Hurt Them in November, 8-10-20,
https://fivethirtyeight.com/features/why-gop-senators-are-sticking-with-trump-even-though-it-might-
hurt-them-in-november/]//BPS
Senators up for reelection are siding with Trump

Trump may not be able to resist intervening in Senate and House races, given his desire for being the
center of attention. But Republican legislators have choices about how to present themselves. So far, they are sticking
with Trump.
McSally supported repealing Obamacare, opposed the president’s impeachment and has rarely publicly criticized the Trump administration.
That is a big shift from 2016, when she chose not to endorse Trump and bashed his behavior. Moreover, McSally has voted with Trump’s
position 95 percent of the time over the last two congressional sessions — far more often than you’d expect based on Arizona’s politics. In fact,
in the Senate, McSally has the fourth highest “Trump Plus-Minus” score, which measures how often a member of Congress votes in line with
the president’s position relative to what you would expect from that member based on the partisanship of their state or district.

Other Republican senators up for reelection this year have also remained loyal to Trump’s positions. Gardner has voted with
Trump 89 percent of the time during the 115th and 116th Congresses (he has the second highest “Trump Plus-Minus”), Thom Tillis (North
Carolina) with Trump 93 percent of the time, David Perdue (Georgia) 95 percent, Joni Ernst (Iowa) 91 percent and Steve Daines (Montana) 86
percent.

The one partial exception is Collins, who has sided with Trump two-thirds of the time over the last two Congresses (though that is still far more
than expected based on Trump’s losing margin in Maine). Collins has clearly tried to put some distance between herself and Trump as her
reelection race draws nearer — but not too much. She did not support Obamacare repeal, though she voted to acquit the president in his
Senate trial and voted to confirm his Supreme Court nominees. She went from voting with Trump 77 percent of the time in the 115th Congress
(2017-18) to only 46 percent in the 116th (2019-20).

It’s been a tough balancing act that may end up pleasing no one. Indeed, Collins has become the poster child for
Republicans’ unwillingness to seriously break with Trump. She has been “disappointed” in Trump without taking action to restrain him so often
that “Saturday Night Live” did a skit about it.

Still, Collins may still have a chance to retain an independent reputation, something she’s cultivated for years by necessity in blue-ish Maine.
The problem is … that’s harder to do these days.

Why it’s become more difficult to break with the national party

Many members of Congress used to have local reputations independent of their parties, presenting
themselves as fighters for local interests and dollars in Washington. Even if most voters hated Congress,
they still liked their own representatives and senators.

But the long-term trends are nationalization (voters perceive their representatives through the lens of national and presidential
politics) and polarization (voters see the parties as distinct and agree more with one side). Voters learn less about their own
legislators and more about the president, in part due to decreasing reliance on local news. As a result, fewer
voters split their tickets, voting for one party’s candidate for president and the other’s for Senate or the House.
Democrats have faced the same problem in trying to distinguish themselves from their party. Voters recognized the independent streak of West
Virginia’s Joe Manchin and Montana’s Jon Tester in the 2018 midterms, but Missouri’s Claire McCaskill, North Dakota’s Heidi Heitkamp and
Indiana’s Joe Donnelly weren’t able to overcome the Republican lean of their states. Manchin went so far as to appear in ads showing him
shooting at policies he disliked and proclaiming “for me, it’s all about West Virginia.” He won a state that Hillary Clinton lost by more than 42
points.

Nationalization makes it more difficult for senators to be seen as separated from their party’s president and
his priorities. So even if Republican senators do break with Trump, fewer voters now learn about it
because they no longer see state-specific news. Since voters tend to assume that partisans vote like their
parties, voters are often unable to perceive moderate senators’ divergent policy positions.

And legislators who do break with their party now face a risk of a primary challenger. McSally won her 2018
Republican primary1 facing two candidates closely tied to Trump: Joe Arpaio and Kelli Ward. In this year’s race to defend the seat she was
appointed to, McSally this week fended off a primary challenge from Daniel McCarthy, who tried to build support from local pro-Trump groups.

“More [congressional] members are running scared in the primaries,” political scientist Sarah Treul told me.
“Even if they’re actually not having quality challengers emerging, they’re afraid of it happening. And I
think a lot of them are spending time trying to figure out how [they] can ward off one of those challengers from even
coming to the table.”

That usually means doing little to upset the party’s base by breaking with the president.

No link – no one wants higher taxes


Voters have short attention spans.
Achen & Bartels 16 – Christopher H. Achen, Politics Professor at Princeton. Larry M. Bartels, Political
Science Professor at Vanderbilt University. [Democracy for Realists: Why Elections Do Not Produce
Responsive Government, Princeton University Press]//BPS
How would we expect reelection-seeking incumbent politicians to respond to the electoral incentives generated by such “myopic”
retrospection? The obvious-seeming answer is that they should attempt to maximize income growth in the immediate run-up to elections, but
care little about what happens to the economy at other times. A president who shirks (or, more realistically, pursues his own
ideological agenda) in the months just before the election may be punished, but a president who shirks (or
pursues his own ideological agenda) earlier in his term is likely to suffer little or no penalty at the polls. Thus, there is
little or no electoral incentive for presidents to promote myopic voters’ well-being during much of their
time in office. Meanwhile, voters’ short time horizons magnify incentives for incumbents to manipulate the
economy in order to maximize economic performance around election time. The result is “a rational incentive for
the party in power to manipulate the business cycle for electoral benefit” (Erikson 1989, 570).

People don’t care anymore.


Tesler 20 – Michael Tesler, Political Science Professor at the University of California, Irvine. [Support
for Black Lives Matter Surged During Protests, But Is Waning Among White Americans, 8-19-20,
https://fivethirtyeight.com/features/support-for-black-lives-matter-surged-during-protests-but-is-
waning-among-white-americans/]//BPS

But the protests’ impact on public opinion appears to be fading — particularly among white Americans,
as you can see in the chart below. Black Americans’ opinions have stayed much steadier, as they have in the past.

Drawing on data from the Democracy Fund + UCLA Nationscape’s weekly tracking surveys, I found that unfavorable views
of the police are trending back down toward their pre-protest levels among white Americans and have dipped
among Black Americans. White respondents are also becoming somewhat less likely to say that African Americans face “a lot” or “a
great deal” of discrimination, though those numbers remain higher than they were before George Floyd was killed in May. Black
Americans’ views on the discrimination they face have remained essentially unchanged.

The same patterns are evident in tracking surveys from Civiqs and YouGov/The Economist. In the Civiqs data, white
respondents’ net support (support minus opposition) for the Black Lives Matter movement surged from -4 shortly before the
protests to +10 in early June, but has since dropped to 6 points underwater. Meanwhile, Black Americans’ net support went
from +76 in early May to +85 in early June and has remained within a point of that mark ever since. And in the YouGov/The Economist surveys,
the share of white Americans who said racism is a big problem decreased from 45 percent in June to 33 percent when the question was last
asked in early August. Three-quarters of Black Americans, on the other hand, said racism was a big problem in both surveys.

Why, then, do white Americans’ views on racism and the police seem to be returning to their baseline, but Black Americans’ views remain
steady? Well, as media attention turns away from the protests, it may simply be easier for white people to
forget about the issue, while the stakes were always greater for Black Americans.
In an analysis of closed captioning data of cable news broadcasts from the TV News Archive,1 we found a huge spike in the number of clips that
mentioned “racism” or “Black Lives Matter” as the protests raged during the first two weeks of June. But, as you can see in the chart below,
the amount of attention cable news paid to racism and the Black Lives Matter movement has dropped as we’ve
moved farther away from peak protest activity. (Coverage of these two issues is still higher than it was prior to Floyd’s death,
however.)2

This surge and decline in media attention clearly corresponds to changes in public opinion among white Americans, and it’s possible that some
of the historic gains we’ve seen in white views of the Black Lives Matter movement might not last.

This drop is not surprising, since we’ve seen it before in how public opinion changes on school shootings, for example. Because media
attention on even the most high-profile mass shootings tends to be fleeting, so are these shootings’ effects on public opinion. And now, white
Americans’ opinion of the Black Lives Matter movement may be following the same trajectory. That’s driving a decline in overall public support
even as Black Americans continue to back the movement at very high rates.

This decline in public opinion is consistent with a long line of political science research that tells us that
the effects of events on public opinion tend to last only for as long as they are at the forefront of the
country’s — or, in this case, one group’s — collective consciousness. That also means that without prolonged activism
and sustained media attention, the impact of this year’s protests on white public opinion could
evaporate entirely.
Predictions fail – sample bias, model design, and October Surprise.
Rosenbaum 18 – Eva Galanes-Rosenbaum, Chief of Staff, Director, Media & Opinion Analysis at
Rethink, Comparative Politics Masters, from the London School of Economics & Political Science, citing
Nate Silver of 538 fame. [2018 Midterms: Did the Polls Fail—Again?! 11-12-18,
https://rethinkmedia.org/blog/2018-midterms-did-polls-fail—again]//BPS

2016, THE POLLS DIDN’T FAIL—THE MODELS DID


Many people wrote extensively on this question in the aftermath of the 2016 election (and, in fact, leading up to it). If you’re interested in a
deep dive, I suggest starting with FiveThirtyEight and going on from there. But the important points are these:

National polls aligned with the popular vote.

State polls were not as accurate. And because our presidential elections rest on a wacky, unfair electoral college system in which some states
have massively outsized power, a small inaccuracy in several key states obscured the likelihood of a Trump victory.

It wasn’t polls that failed, it was the models. Polls themselves are tools to measure the present. It’s election models that try to use current
information to predict the future, and that is where we went rather wrong.
Given these three factors, it’s worth distinguishing between polls and models.

OK, BUT THE MODELS ARE BASED ON POLLS, RIGHT?

Polls are good at finding out what big populations think. But polls are not as good at predicting the future.
Remember that polls are trying to find out about a universe of people by taking a small sample of those people. That universe can be all
American adults, or people of color in a state, or people who frequently shop at Target. The idea is the same: take a large enough sample of
that universe and you can tell a lot about it without having to ask every person in that universe.

Election polls—that is, polls asking about voters’ choices in an upcoming election—are trying to survey a group that doesn’t
yet exist: voters. Up to the point when voters literally cast their ballots, everything is probability and
supposition. We are supposing that some people will actually vote, and supposing others will not. As Ariel
Edwards-Levy from the Huffington Post put it, “Today [on Election Day], across the nation, we’re seeing that universe be created, person by
person.”

Polling tells you what is true now. Even if a pollster guesses correctly about who is likely to vote, all we know is what the outcome
would (probably) be if the election were held today and those people did vote. We are assuming that what was
true when the poll was taken will be true on November 6.

Polls already have blind spots and error: no sample perfectly represents a population, and many polls do a poor
job of accounting for certain groups of people. When you add in the uncertainty of a population that doesn’t yet exist (people who
will actually vote in the upcoming election), the chances of getting it wrong go up. Pollsters make educated guesses about who is
in that universe of “likely voters” and who isn’t. They base these guesses on a lot of elements—the most important is whether someone voted
in a previous election—but there are a lot of things that pollsters can’t foresee.

For example, a confusing ballot design in Broward County, FL. Or voting machines miscounting ballots in some Florida counties. Or
problems with the design, not of the ballot itself, but of the return envelope for absentee ballots, as we saw in Gwinnett County, GA. Apart
from election administration issues like these, tropical
storms and other natural disasters can interfere with voter
registration deadlines; snow and rain on Election Day can make it hard for voters to wait in long lines at
polling places—and wet ballots can cause malfunctions.

There are a lot of things that pollsters simply can’t account for when drawing and weighting their samples of “likely
voters”—things that go well beyond an October surprise.
BUT THE MODELS ARE COMPLEX, SCIENTIFIC ALGORITHMS—RIGHT?!

It’s true that election prediction models like the one from FiveThirtyEight and The Upshot are complex. They “ingest” a lot of information, from
national and state polls to fundraising levels, historical trends in turnout, and even experts’ ratings. Normally, we endorse the idea behind these
models: rather than relying on a single poll, compare a bunch of polls and use additional relevant information to find the “truthiest” truth.

However, there are some potential problems that these models have a hard time dealing with. “Herding” is one problem Nate Silver has written
about—basically, pollsters aren’t weighting their samples in a vacuum, but may be making their results look similar to their colleagues’ or like
they think they “should” look. When
pollsters are listening too closely to “conventional wisdom,” their polls may
start to look like their expectations rather than reflecting the current population of voters they’re trying to
sample.
Similarly, pollsters and model-builders make assumptions about the future based on the past, but some of those assumptions may be wrong.
For example, Americans of color vote at lower rates, and less consistently, than white Americans. Pollsters will weight their samples
accordingly, and models will also assume that respondents of color are less likely to turn out. But Black voters have voted at nearly the same
rates as white voters in the last few elections, and many advocates are trying to change this trend for all racial groups. If the models do not
account for this—or if they overcompensate—then they may be off.

Also related are systemic problems and diminishing returns. As Silver points out, “it’s better to be ahead in two polls
than ahead in one poll, and in 10 polls than in two polls. Before long, however, you start to encounter diminishing returns. Polls tend to
replicate one another’s mistakes.” In 2016, most polls apparently missed the group of white, male voters without college degrees
who made a big difference to Trump’s win. Across most polling, it’s harder to get people of color to participate in a survey than white people, so
polls systematically “miss” these groups. Adding another 10 or 20 polls to a model won’t make it more accurate if
there is a systemic flaw.
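
The "replicated mistakes" point can be shown with a minimal simulation (all numbers hypothetical, not from the card): when every poll shares the same systematic error, averaging more polls shrinks random noise but leaves the shared bias untouched.

import random

random.seed(0)
true_margin = 0.0   # hypothetical true candidate margin, in points
shared_bias = 2.5   # systematic error common to every poll (e.g., a missed group)
noise_sd = 3.0      # independent poll-to-poll sampling noise

for n_polls in (1, 10, 100):
    avg = sum(true_margin + shared_bias + random.gauss(0, noise_sd)
              for _ in range(n_polls)) / n_polls
    print(n_polls, round(avg, 2))   # the average converges toward 2.5, never 0.0

Adding polls drives the noise term toward zero, but the 2.5-point shared flaw survives, which is exactly the diminishing-returns problem Silver describes.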

Finally, remember that turnout models—like much of the polling world—are more of an art using scientific approaches
than a true science. Each aspect of a model is a decision made by a person, who has biases and blind spots
like any other person.
BOTTOM LINE: IS POLLING BROKEN?

Elections are hard to call because they’re make-believe until they’re not. Past may be prologue, but it isn’t
prescription.

Polls will never be perfect. They tell us about the present, but they can’t see clearly into the future. They can’t account for poor ballot design,
machine malfunction, weather, or the myriad other factors that may affect who votes and how.

In 2018, the polls seemed to be more accurate than not, as they have been for several years. Late polls showing Gillum and Nelson winning
their contests when they both ended up losing is not a sign that polls are untrustworthy, but may point to other phenomena like herding or
unforeseeable elements.

The extent to which you trust polls should also depend on what you’re using them for. Most of us use
polls to read public opinion,
not future behavior. For this purpose, the precision of the win-loss percentage is less crucial. What’s more important is that we know
who thinks what, how that has changed over time, and what we can do about it.

Voters are too polarized to change their minds.


Bitecofer and Epstein 20. Rachel Bitecofer is a Senior Fellow at the Niskanen Center and a former
associate professor of political science at Christopher Newport University. Sam Epstein graduated from
Haverford College in 2019 with a B.S. in Chemistry and now attends New York University, where he is
working towards a Ph.D. in Chemistry and Chemical Biology. 9-15-2020, "Negative Partisanship Predicts
2020: The September Update," Niskanen Center, https://www.niskanencenter.org/bitecofer-epstein-
september-update/ - MBA AM
The same is true for door-to-door efforts geared at mobilizing like-minded voters to turn out. Elections since 2004 have focused extensively on these tactics,
especially on the Republican side, and have yielded good results. This is primarily because the job of the canvasser or candidate in a mobilization conversation is to
increase the salience or “stakes” of participation in the election to the targeted, like-minded voter rather than having to convince them to support a certain
candidate.

And although polarization and hyperpartisanship are causing “inelasticity” in public opinion, again, we did see some modest
change in the head-to-head between Biden and Trump pre- and post-pandemic. Inelasticity disallows significant change, in essence,
muting the effect of Trump’s mismanagement of the pandemic. Nevertheless, the weeks following the onset of the pandemic introduced
slippage in survey data among some key constituencies for Trump, including voters over the age of 65 and white voters, both college-educated whites and, more
surprisingly, non-college-educated whites. As stated previously, Biden’s advantage over Trump increased to an average of 7.5pts in what we call a minor “pandemic
effect.”

To better understand the source of this change we analyzed data from Democracy Fund’s Voter Study Group’s latest data, which includes a set of survey data from
their “second-phase” of data collection of 2020 voter data. The last week of survey data in the second-phase is June 25 – July 1, a time period that captures voter
attitudes after the pandemic came in, peaked once and began to re-peak again as it spread through the rest of the nation due to mismanagement. This data
predates the backlash to the George Floyd murder and the summer of protests, riots, and looting that followed. But it does reflect at least three months of the
pandemic–including President Trump’s press conferences– as well as campaign ads against Donald Trump by two entities, the Biden campaign, and The Lincoln
Project.

Although stump journalists and analysts tend to focus on demographic characteristics of voters such as
“small business owner,” “independent,” or “veteran,” political scientists tend to segment voters
primarily by their partisanship. This is in recognition of the tremendous predictive power partisanship
plays in vote choice and other voter behaviors. Recognition of voters’ other characteristics can be helpful in identifying their partisanship,
but if one knows their party identification, one can almost perfectly predict their vote choice, even
months before Election Day.

In the graph below, we present the probability a voter will vote for either Trump or Biden based on their self-identified partisanship. As you can see, the
probability that a strong Democrat is voting Biden or a strong Republican is voting for Trump is nearly
perfect, and you’ll note, the probability does not decline much for people who describe their partisanship as
“weak.” Simply guess that a “weak” Republican is casting a ballot for Donald Trump in November and you’ll be right, on average, 9 out of 10 times.
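
The card's guess-the-vote-from-party-ID logic amounts to a simple lookup. A minimal sketch (the probabilities are placeholders patterned on the card's description; only the roughly 9-in-10 "weak Republican" figure comes from the text itself):

# Hypothetical P(votes Trump) by self-identified partisanship
P_TRUMP = {
    "strong Republican": 0.97,
    "weak Republican": 0.90,   # the card's ~9 out of 10
    "lean Republican": 0.88,
    "pure independent": 0.50,
    "lean Democrat": 0.10,
    "weak Democrat": 0.08,
    "strong Democrat": 0.02,
}

def predict_vote(party_id):
    # Predict the modal outcome for a voter of this partisanship
    return "Trump" if P_TRUMP[party_id] >= 0.5 else "Biden"

print(predict_vote("weak Republican"))  # Trump

Knowing nothing but party identification, a rule this crude calls the "weak Republican" vote correctly about nine times in ten, which is the predictive power the authors describe.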

Hopefully, you are also noticing how high a probability independent “leaners” have of voting for the candidate of the party
they lean towards. Again, most independents are “leaners” and will admit when asked by pollsters that
they “lean to the Democrats” or “lean to the Republicans.” These leaners are “closet” partisans – voting
like their partisan counterparts and holding issue preferences that are similar to their partisan
counterparts. And as you can see, as of June 2020, they are firmly heading to their partisan “homes” in vote choice and
have modeled probabilities that are almost as strong as their partisan counterparts.

This suggests, then, that many independents are not “persuadable.” Any campaign should use analytics to sort out the party
“leaners” of the opposition party and avoid wasting resources on them: unless you are in a PVI +10 district (for the opposition), you do not need these voters to win
and cannot convert them, as these data clearly show. Instead, finding your race’s “pure” independents, latent partisans (already registered partisans with “iffy”
turnout), and registered new partisans, and focusing your efforts on convincing them to show up, is a dominant strategy in most “swing” contests.
setcol
Case o/w – prioritize extinction

They must disprove the consequences of the plan. Comparing opportunity costs is best
for clash and argument refinement, which is a prerequisite to making their framework
portable.
Fairclough and Fairclough, 18—emeritus Professor of Linguistics at Lancaster University AND
School of Humanities and Social Sciences, University of Central Lancashire (Norman and Isabela, “A
procedural approach to ethical critique in CDA,” Critical Discourse Studies Volume 15, 2018 - Issue 2,
169-185, dml) [CDA=critical discourse analysis]

The term ‘discourse ethics’ is Habermas’s (Fairclough & Fairclough 2012: 30-34), but we are using it here in a general sense: for the view that
an adequate framework for ethical evaluation and critique must include the comparison and
evaluation of different arguments for different lines of action in a process of deliberation. Such assessments
of arguments pose difficult problems, and deliberation is by no means guaranteed to produce consensus. Nevertheless, deliberation can
contribute to the quality of ethical critique by ensuring that a wide range of arguments are considered
in making decisions, that all alternatives are taken into account and thoroughly criticized, and that
people have to (at least) moderate their own partialities in evaluating a range of arguments collectively.
To illustrate this, we shall refer to two ethically contentious political decisions and the courses of action which they led to. The first is the
decision by the British Prime Minister Tony Blair to advocate Britain’s participation in the invasion of Iraq in 2003 (we have discussed this in
Fairclough & Fairclough 2012: 96-97). The second is the decision by the German Chancellor Angela Merkel to open Germany’s borders to the
refugees coming from the Middle East in the autumn of 2015. In so doing, we will illustrate the relevance of ethical critique from all three of the
major ethical positions: deontological, consequentialist and virtue ethics.

CDA and practical argumentation

CDA is mainly concerned with critical analysis of discourse which is oriented to action, including political discourse, but also
managerial, organisational and other forms of discourse. The primary activity in such discourse is practical
argumentation, argumentation over action, over what is to be done (e.g. what policies should be
adopted). Practical argumentation should accordingly be the primary analytical focus in CDA (Fairclough &
Fairclough 2012). This
does not exclude other familiar forms of analysis (such as analysing representations) but
subsumes them. The point of representing (or ‘framing’) an issue in a particular way is to create particular
public attitudes and opinions, and thus legitimize or facilitate a particular course of action.

Critique of discourse is the focal concern for CDA, but critique of discourse is by no means exclusive to CDA. On the contrary, critique of
discourse is a normal part of all discourse. It is a normal part of everyday practical argumentation: people find
reasons in favour and against proposals for action, they consider alternatives, adopt them or discard
them, and so on. A course of action worthy of being adopted is one that has withstood criticism.
Agents may decide to discard proposals either because they are likely to be instrumentally inadequate
in relation to the goals they are supposed to achieve, or because they find them ethically problematic,
for example because the values or goals they are motivated by are unacceptable. Ethical critique is a concern for CDA at three levels: as an
aspect of agents’ reasoning, for example as an aspect of politicians’ deliberation over what policy to adopt; as an aspect of the normative
critique of those deliberative practices which CDA carries out; as an aspect of the critique that CDA itself is open to. There are therefore three
main places where ethical values come into the picture: what values are arguers (e.g. politicians) arguing from? what are the values that CDA
analysts are espousing, from the perspective of which they are evaluating the arguments of those arguers? what are the values of other critics
(including critics of CDA)?

CDA is itself a form of discourse, which is specialized for academic critique of social actions, events, practices and structures, with a focus on
discourse. It can itself be viewed as a form of practical argumentation (Fairclough 2013), open to the same
critical questions that it directs at the discourse it subjects to critique. CDA practitioners are bound by an obligation
to address ethical evaluations that are critical of their work. Moreover, the ethical judgement which is part of the normative
critique carried out in CDA does not come out of thin air, but is built upon elements drawn selectively
from ethical judgement and critique in public discourse. And CDA needs to rethink its own critique in response to shifts in
public discourse and political reality, such as the emergence of controversy over ‘political correctness’ (Fairclough 2003).

We have argued that the primary focus of critical analysis in CDA should be practical argumentation and
deliberation (Fairclough & Fairclough 2012). This was based upon a claim about the character of political discourse, which we saw as
primarily concerned with the question of what is to be done. Deliberation is an abstract genre in which
(alternative) proposals are being tested. The framework for critical analysis of practical argumentation and
deliberation which we have developed since 2012 provides CDA with an effective way of evaluating and critiquing
discourse from an ethical point of view. One of its strengths is that it allows different approaches to thinking
about ethical questions (deontological, consequentialist and virtue ethics) to be combined within an ethical
deliberative procedure for achieving impartiality.

In a more recent version of this framework (Fairclough, I. 2016, 2018), deliberation is modelled as a critical procedure
designed to filter out those practical conclusions (and corresponding decisions) that would not pass the test of
critical questioning. Two distinct argument schemes are involved in deliberative activity types: an argument from goals, circumstances
and means-goal relations, and an argument from (negative or positive) consequences. Proposals are tentatively supported by
practical arguments from goals, and are tested in the light of their potential consequences, via
practical arguments from consequence. Goals are generated by various sources of normativity, and these can be what
conventionally is called ‘values’, but can also be obligations, rights and duties. Critical questioning seeks to expose potential
negative consequences of proposals and thus evaluate them in terms of their acceptability or
reasonableness: if the consequences are on balance unacceptable for those affected, then it would be
more reasonable not to engage in the proposed course of action. Unacceptable consequences are
critical objections which can conclusively rebut a proposal. Where two or more proposals survive
critical testing, one may be chosen as the better proposal on nonarbitrary grounds (e.g. being simpler to enact).

In our view, the most significant perspective in the light of which proposals are to be tested is a
consequentialist one (Fairclough & Fairclough 2012, Fairclough, I. 2016). The term ‘consequence’ is however used here broadly to refer
to several types of states-of-affairs: the goals of the proposed action (the intended consequences); the potential unintended consequences (or
risks) involved; various known and predictable impacts, including impacts on institutional, social facts. If
a proposal is likely to result
in a situation that is illegal or unjust, then the proposal can be evaluated as unacceptable from both a
consequentialist ethics and a deontological ethical position. Our framework can therefore
accommodate deontological ethical issues within a broader consequentialist perspective. By inquiring into the
motives of action, the framework can also accommodate a virtue-ethical perspective.
Settler colonial theory is incomplete and neglects settler responsibility by flattening
history.
Greer, 19—professor of history and Canada Research Chair in Colonial North America at McGill
University (Allan, “Settler Colonialism and Empire in Early America,” The William and Mary Quarterly,
Vol. 76, No. 3 (July 2019), pp. 383-390, dml)
The most rigorous of the settler colonial theorists in my opinion, Wolfe insisted that his subject was not an ideology or a set of ideas but rather
a logic. “Although predicated on land rather than on human bodies,” he writes, “settler colonialism is premised on a cultural logic of elimination
that insistently seeks the removal of indigenous humans from the land in question.”4 The “logic of elimination” is a basic drive to get
rid of the Indigenous presence by one means or another and to replace it with a new society. This approach encompasses material as
well as discursive aspects; massacre, removal, assimilation, and immigration are part of its repertoire, and so too are various forms of
racism, legal instruments of dispossession, and historical narratives denying violence.5 Heavily indebted to Marxism and postcolonial theory,
Wolfe grounded his concept in material considerations: the basic distinction between settler colonialism and the “ordinary” colonialism of the
sort one finds in nineteenth-century India or Africa is that the latter depends on the exploitation of native labor while the former had no real
need for the natives’ work and only wanted their land.6

Extending this basic analysis, Wolfe developed a highly suggestive, if somewhat schematic, theory of race formation.7
With the United States mainly in mind, he argued that the racism directed at African Americans and that focused on Native Americans were
different species of exclusion. Whereas the one had to do with denigrated, unfree labor, the other targeted peoples whose very existence stood
as an obstacle to the expansion of settler society—it was the racism of work versus the racism of land. Wolfe pointed to the divergent
treatment of African Americans and Indigenous people where “race mixing” was concerned, arguing that the “one-drop rule” that treated
anyone with the slightest African ancestry as black reflected the colonizer’s concern to maximize the laboring population, whereas the tendency
to assimilate people of European and Indigenous ancestry to the white category (particularly characteristic of Australian practice) stemmed
from an impulse to reduce and eliminate the Native population.

All very well where the nineteenth century is concerned, but readers of the Quarterly may legitimately ask
whether the concept of settler colonialism helps us to understand North America prior to the late
eighteenth century. Or is Wolfe’s framework stuck in the modern? Is it indeed a theory of modernity? Wolfe did have much
to say on early America and settler colonialism, but insightful though his writings on that subject are, they are quite different from the
reflections he derived from the Australian case. The materialism has faded, replaced by a preoccupation with
colonialist doctrines and discourses. He emphasized an imperialist legal notion, the “doctrine of discovery,” whereby European
monarchies supposedly asserted both sovereignty and dominium from the moment of contact, reducing Indigenous peoples to mere occupants
of the land. This he saw as the basis for a future physical dispossession and replacement by settlers.8 On this point, Wolfe seems to be
swallowing the historical fable promulgated in the 1820s by Chief Justice John Marshall to justify Indian removal and other legal techniques of
dispossession. Marshall notoriously propounded the view that “discovery” was tantamount to conquest and that the United States had
inherited Britain’s claim not only to rule but also to own vast portions of the continent.9 Wolfe took this breathtaking distortion of the colonial
past as a description of early modern colonization rather than as a more modern ideological justification for contemporary practices of
dispossession. More generally, he tended
to exaggerate the importance and misconstrue the thrust of the arrogant
pronouncements of sixteenth- and seventeenth-century imperialists, reading into their vague territorial
pretensions a real program for replacing Natives with settlers. Apart from the fact that colonial charters
and other early assertions of sovereignty were more likely to suggest the incorporation than the
elimination of Indigenous peoples, these expressions of imperialist chutzpah cannot be taken as guides
to what actually happened, any more than the Epistles of Saint Paul can explain the Crusades.

In Wolfe’s wake, a whole school of settler colonial history has arisen with the aim of reexamining world
history through this lens. This intellectual movement has spawned many valuable studies of modern colonialism, notably in applying
the concept to the case of Israel as well as to numerous other nineteenth- and twentieth-century contexts. Insofar as more remote
periods are concerned, however, results have been less impressive. A recently published handbook attempts to sum up
the history of settler colonialism over the millennia and around the world through an array of essays on topics ranging from ancient empires to
present-day New Caledonia.10 Readers of the volume learn about the Portuguese settlement of the islands of Madeira and the Azores (where
there were no indigenous populations to eliminate) and about Roman colonia, which reinforced Roman presence on the edges of their
multiethnic empire but which only pushed aside natives from small enclaves. Some of the premodern cases show affinities to settler colonialism
à la Wolfe, but the contributors generally conclude that the
fit is partial at best. The scholarship on display is very good
but in most cases fairly conventional in approach, and it is hard to see what value settler colonial theory
adds. In this volume and in other programmatic publications, definitions of settler colonialism are rather amorphous,
generally lacking the theoretical bite of Wolfe’s early writings.

In my own field, attempts to reconceive New France’s history on a settler colonial basis have led to lamentable
results. Emboldened by theory and unencumbered by substantial knowledge of the topic, Edward Cavanagh argues that early New France
needs to be understood as a “corporate” colony founded on the principle of terra nullius .11 That legal notion had long served as a
justification for the colonization of Australia, so why not New France? Aware of, but undeterred by, Lauren Benton and Benjamin Straumann’s
demonstration that this phrase was
unheard of before the nineteenth century, Cavanagh constructs a new, ad hoc definition
of the term by which it comes to stand for a failure to recognize “Aboriginal title” (which is actually a twentieth-century concept)
linked to a willingness to settle land without purchase or cession. “The practice of terra nullius—whereby settlers acquire
title, improve, and alienate, in a colonized region where no purchases, cessions, or conquests take place—was prevalent in New France.”12 In
fact, the
French never maintained that North America was empty; to the contrary, their program of
colonization was all about incorporating Indigenous nations into their empire. In New France, as in New
Spain, officials repeatedly proclaimed Indigenous lands inviolable, and the layered land tenures of
Canada left considerable room for settler-Indigenous coexistence. That said, French settlers did indeed
dispossess and displace Natives (if not on a large scale, given the demographics); however, purchases and cessions
are neither here nor there. The practice of purchase and cession, initiated in some of the English colonies in the
seventeenth century and later enshrined in the Royal Proclamation of 1763, was an instrument of unusually thoroughgoing dispossession.13 It
was a quintessential settler colonial technique for utterly eliminating Indigenous property, and so the
fact that the French took a less absolutist approach makes a poor justification for equating them with
the colonizers of Australia.

Unarguably, there are places and periods in the early modern history of North America where the “logic of
elimination” was operative in both its material and its discursive aspects, where Natives were massacred and
pushed aside to make way for colonists who proclaimed the land rightfully and exclusively theirs. For readers of this journal, it is hardly
necessary to enumerate the sites along the Atlantic coast where colonists displaced Indigenous peoples and established jurisdictions,
sovereignties, and property regimes for themselves, for the basic outlines of this story of appropriation and dispossession have long been
familiar to historians. It
is not entirely clear that labeling it an instance of settler colonialism adds much to our
understanding of the phenomenon. More importantly, it is not obvious that settler colonial studies have
much to contribute to the study—central to current work in the field—of the broader, continental context in
which North American colonies took shape.

The European invasion of America was extensive and variegated; settler colonies were but one
dimension of the larger process and, until the nineteenth century, not the most spatially significant.
North of New Spain and east of the narrow English and French settlements lay the vast bulk of the North American continent,
Indigenous country that was neither conquered nor colonized. Yet even in the absence of the
eliminationist workings of settler colonialism, it was strongly affected by the European presence, more
so in some periods and regions than in others. Consider, for example, the large southeastern region often referred to as
the “shatter zone,” where waves of violence and disease succeeded one another, beginning with Hernando de
Soto’s entrada of 1539–42 and continuing through successive slave raids and the rise of militaristic coalescent societies that Robbie Ethridge
and others have tracked.14 Trade
with South Carolina, especially of guns for slaves, was the main driver of this
destructive upheaval, and so, of course, colonization was centrally implicated. Something generally similar was
occurring in the southwestern borderlands due to the presence of the colony of New Mexico.15 Only in a
settler national narrative of the most providentialist sort could the emergence of these regional shatter
zones be seen as simply paving the way for settlement and colonization.
Further north, the Great Lakes region was similarly shaken by more than a century of wars and migrations
following Haudenosaunee and French intrusions that started in the 1660s. This was a site where French, and later British, intruders
played a transformative role through trade, religious evangelism, diplomatic negotiation, sex, and war. They did not
conquer or rule—and they certainly did not settle, except in the tiniest enclaves—but they did exert influence
and claim imperial sovereignty.16 Coming to terms with French sovereignty claims to the Great Lakes and Upper
Mississippi requires us to recognize a more complex, less fully territorialized and exclusive concept of
political authority than the modern definition that dominates Wolfe’s thinking on the subject. French imperial sovereignty here
was a matter of infiltration rather than full takeover; certainly it had nothing to do with eliminating the
Native, for it was entirely dependent upon that Indigenous presence. As was the case in the Southeast, there was a
colony in the picture, in this case French Canada on the Saint Lawrence River, the source of commercial supplies, missionaries, coureurs de bois,
and military officers. The whole pays d’en haut phenomenon was unimaginable in the absence of this European settlement (and vice versa).
Consequently, it would be problematic to isolate the colonized colonies from the interior zones of influence
and subject them to analysis as instances of settler colonialism. Canada and the pays d’en haut were inextricably
connected. In this respect, New France was exceptional only in the scale of its imperial hinterland. All across the trans-Appalachian
interior in the eighteenth century, Indigenous territories were affected by direct and indirect Euro-American
infiltration, without conquest or real colonization. Settler colonial theory seems ill-equipped to deal
with the complexities of these commercial/imperial incursions except as a prelude to settlement.

To take full account of the larger continental field and the upheavals occasioned by European intrusion, we need to think
about empires as well as settler colonies—or rather we need to consider settler colonialism as an
aspect of early modern imperialism. Recent work on the history of empires underscores the wide
variety of spatial practices employed in the creation of overseas empires in the early modern period, the nodes and
networks, the reliance on sea-lanes and interior waterways to extend power and extract wealth.17 Colonial settlements were one
element of a broader pattern of imperial expansion, especially prominent in the British American Empire. Settler
colonial theory, valid and useful though it may be in certain settings, has the effect of isolating processes of
colonization from larger processes of imperial penetration. It also has the effect of flattening long-term
historical change by assimilating early modern colonialism to patterns of settlement and dispossession
more characteristic of the nineteenth century.
That said, let me acknowledge one of the important contributions of settler colonial theory to the practice of history. Regardless of the period
we study, historians inhabit the modern world (call it postmodern if you prefer; it makes no difference to the present point), and many of us are
non-Indigenous residents of settler colonial countries. Since, as Patrick Wolfe never tired of repeating, settler colonialism is “a structure
not an event,” we are the beneficiaries of eliminationist practices that continue to victimize Native peoples.18 As citizens and as scholars,
we should be mindful of our subject positionality in this respect. And surely that means scrutinizing the past for
differences and transformations, not for pieces of evidence taken out of context to suggest an eternal
always-already condition.

Perm do the plan and the alternative in other instances – their generic evidence
justifies limited intrinsicness for the perm
Sweeping theories of radical indigenous ontological difference ignore the nuances of
actual struggles that strategically repurpose settler categories
Rosenow, 19—Senior Lecturer in International Relations at Oxford Brookes University (Doerthe,
“Decolonising the Decolonisers? Of Ontological Encounters in the GMO Controversy and Beyond,”
Global Society, 33:1, 82-99, dml)

Despite the force and importance of this argument, I have felt slightly uneasy when reading those conclusions. Focusing
on radical
ontological difference can easily lead to a romanticised reification of other peoples’ difference that is in
danger of ignoring actual political struggles and demands on the ground. As Cusicanqui argues, those
struggles might very well emerge out of an “indigenous modernity”, rather than an insistence on the
right to one’s difference. By this she means that some Indigenous people aim to formulate a hegemonic vision
for how to structure a society that is valid for everyone (Indigenous AND non-Indigenous): they work for a
society that is in their “image and likeness”, and to use modern notions such as “citizenship” for this purpose, rather
than rejecting the latter as irreconcilable with one’s own world.39 By contrast, some North American Indigenous
intellectuals call for an Indigenous “resurgence” that, rather than seeking hegemony, altogether turns
away from seeking recognition by wider (colonial) “society”. As Leanne Betasamosake Simpson points out, in such
“resurgent mobilization … there is virtually no room for white people”. 40 But my unease was also emerging from
something else, which is what I want to focus on in this article: the problem that encounters and conflicts are yet again made sense of within
overarching structures of knowledge production rather than cultivation (despite the intention to do otherwise). As de la Cadena herself makes
clear in the quotation above, what is encountered as “different” is inevitably described “in forms that I could understand” (my emphasis)—even
whilst simultaneously recognising that one’s description does not capture what the encountered practices actually do. Sense-making, for de la
Cadena, takes place at what could be called two levels: At a first level, there is the inevitable process of making sense of an alienating affective
experience on the spot, from within one’s own framework of understanding the world. At a second level, then, de la Cadena attempts to make
legible her grappling and not-understanding in the context of a book for an academically literate and interested audience—in other words, in
the writing-up of her ethnographic research. In Rojas’ and Blaney and Tickner’s case, given that their articles do not aim to make an empirical
contribution, sense-making takes place at what could be called a third level: what is drawn upon is the understanding that emerged out of the
ethnographic work of others, which is brought into conversation with various bodies of theoretical work in order to make a conceptual
contribution. This takes place via the coining of central concepts and the outlining of all-encompassing frameworks that are meant to help us
understand the analytical, normative and political consequences of their argument for scholarly work more broadly. The ontological encounters
of others are used to delineate the merits of ontological encounters in general, in IR and beyond. This objective leads to a particular way of
developing and structuring a generic argument that makes it difficult to move beyond sense-making frameworks that are necessarily geared
towards settling all those unsettling and disconcerting experiences that were the focus of the articles in the first place. This
is also the
problem of some central decolonial work. Drawing on Edouard Glissant, Mignolo, for example, critiques the “requirement of
transparency” that forms the basis for understanding in Western social science scholarship. He argues for the “right to opacity” of
those located on the other side of the colonial difference.41 But this claim sits at odds with his simultaneous desire to
write a new, all-encompassing history of “the modern/colonial world system”. 42 And like in Rojas’ and Blaney and Tickner’s articles, terms such
as “pluriversality”43 or “diversality”44 are coined in order to have a (one!) concept for a similarly all-
encompassing solution to domination. While de la Cadena is critical of her own “anxiety to understand coherently (with which I
meant clearly and without contradiction)”, and while she points out how this “was often out of place”,45 Mignolo as well as Rojas and Blaney
and Tickner seek to place such anxiety in yet another coherent framework that holds everything together. The
question arises whether this can be any different in scholarly work that is not directly based on ethnographic research itself, and which can therefore not lay claim to a direct experience of ontological controversies. This has become an important question for my own (likewise third-level) work on anti-GMO activism. My work to date has primarily aimed at making a conceptual
contribution, and has relied on a conversation between the ethnographic research of others and various bodies of conceptual work, including decolonial and “ontological turn” literature.46 But as I have already indicated in relation to de la Cadena’s work, when writing up their research for academic purposes, even those who have directly experienced ontological encounters find it
hard to resist the tendency to conclude their work with stringent, overarching, coherent conclusions that the Western-educated reader can grasp and “take home”. In the next section, I will draw on two anthropological ethnographic texts that are significant for research on the GMO controversy to show how this works. The two texts that will be analysed in the next section engage
with the GMO controversy in Paraguay and Mexico respectively, and they have stood out for me in the way they manage to convey a sense of unease and grappling with ontological encounters and conflicts. However, as the next section will show, they as well end up providing a framework and conclusions that can accommodate and make sense of the encountered ontological
difference.

3. Ontological Encounters in the GMO Controversy

According to Susana Carro-Ripalda and Marta Astier, much of the research that is carried out in relation to the question of what smallholder producers in the Global South truly think of (and say about) agricultural biotechnology is unable to grasp the “ontological incompatibility” that exists between the experienced
human/nonhuman relations in small-scale agriculture on the one hand, and the logic that underlies genetic engineering (GE) on the other.47 This is precisely because most social research is itself grounded in the crucial modern/colonial nature-culture divide: the former can only be known through scientific means, while the latter can be known through the study of
social/cultural/political practices. Knowledge about nature is about establishing “facts”, which are either true or false (i.e. nature as “one” is either correctly or incorrectly represented), while knowledge about culture is about studying meaning, which is necessarily (due to the existence of different cultures) multiple. The question of whether GMOs do or do not pose a “factual”
danger consequently lies outside of the remit of the social sciences, which therefore focus on the social dimension of statements that are made about nature. But as Kregg Hetherington’s reflections on his own anthropological research journey in Paraguay make clear, this tacit signing-up to modern ontology can lead to difficulties in understanding the reality of the people one is
interested in.48 Coming from a position in which he took for granted the scientific distinction between (proven) “fact” and “error”, Hetherington explains how he “translate[d]” the claims of the leader of a local peasant movement49 (Antonio) about the truth of (GM) soy “killer beans” into something else: Until this point, I had approached ethnography as an extended discussion with
and about humans, and I was less interested in beans than I was with what Antonio said about them … To be blunt, Antonio kept pointing at the beans, and I kept looking at him … I was comfortable saying that this was a figure of speech, a kind of political rhetoric, or even to claim that this is what Antonio believed, all of which explicitly framed ‘la soja mata’ (soy kills) as data for
social analysis, rather than analysis itself worthy of response.50 However, Hetherington points out that not believing in the truth of the killer bean did not prevent him from “participating in Antonio’s knowledge practices”. 51 Becoming involved in the anti-soy bean activism of the peasants, Hetherington became “part of the situation” that made the killer bean turn into a crucial
agent in a court case that was brought against two soy farmers for the murder of two activist peasants. As a result, killer beans became transformed into a matter of national concern. Crucially for Hetherington, participation involved more than joining the situation in spite of his lack of belief: it led to him becoming immersed in a relation with both peasants and beans that started to
have a physical impact on him—in de la Cadena’s words, he indeed became “partially connected”: 52 Beans didn’t scare me at first. Indeed, as a foreigner to the situation that gives rise to killer beans (a Canadian no less), giant fields of soy were a familiar, even a comforting sight. But it took only a few months with Antonio for me to start feeling the menace from those fields. Soon,
the sweetish smell of glyphosate, recently applied, and especially the corpselike smell of 2,4-D mixed with Tordon, could ruin my appetite and make me expect to see people emerge from their homes to show me pustules on their legs and stomachs.53 Similar observations are also found in Carro-Ripalda and Astier’s contribution to the 2014 Agriculture and Human Values symposium on the challenges of making smallholder producers’ voices heard in relation to agricultural biotechnology.54 While most of the contributions to the symposium concentrate on how to tease out smallholders’ “real” voices in the most effective way, Carro-Ripalda and Astier critically reflect on their own perceived failure to become knowledgeable about smallholders’
voices in their research on GM maize cultivation in Mexico. It was through ethnographic fieldwork in rural areas in Central Mexico, in-depth structured interviews, focus groups, participant observation and, finally, a National Workshop in Mexico City with over 50 stakeholders (including smallholder producers) that Carro-Ripalda and Astier attempted to get a better sense of what the
actual voices of peasants in the GM controversy were trying to convey.55 However, particularly the final workshop, which aimed to create conditions under which Mexican smallholder producers could speak on their own terms about GM maize cultivation, “unwittingly reproduced the conditions of exclusive, techno-scientific and regulatory spaces”. 56 The public discourse that
centres on questions of safety, science, possibilities of regulation and problems of potential contamination, and which is upheld by both GM maize proponents and anti-GMO activists, dominated the workshop debate. Even when the smallholders present raised different concerns, the discussion always returned to the previous, main ones, as if those who had spoken differently “had not
spoken at all”. The way that smallholders could articulate “their perceptions, ideas, and desires” was thereby “severely limited”. 57 Carro-Ripalda and Astier are focused on the dominance of one particular (techno-scientific, regulatory) discourse that, they maintain, disabled smallholder voices engaged in different discourses from speaking up or, when speaking, from being heard. In
other words, smallholders were unable to adequately represent their own understanding of what is at stake in the GM maize controversy in Mexico. Considering what I have pointed out in the previous section, based on Rojas, difference is thereby transformed into an epistemological, rather than an ontological one: Carro-Ripalda and Astier’s argument is implicitly based on the
assumption that, under the right conditions, difference can be translated into something that can be communicated to, and discussed with, other stakeholders. But the term “ontological incompatibility” that the authors themselves use indicates there is something else at play, which cannot easily be translated: the nature of the relation of smallholder producers to their “land, seed,
crop, climate … as told and understood by themselves”; the “central place” that Maize continues to occupy in Mesoamerican pre-Hispanic cosmology, and “the social and cultural significance” that goes along with that.58 Carro-Ripalda and Astier’s emphasis on the problem of the dominant discourse, and the overarching Mexican structures of domination this discourse is related to
(such as the “neoliberal vision of the Mexican agricultural future”59), makes it occasionally difficult to understand what the problem of “ontological incompatibility” really is about. At the end of the article, the place of the smallholder producers whom they have engaged seems once again clearly delineated and knowable: at stake for smallholders are, Carro-Ripalda and Astier argue,
“their lives as maize cultivators, their pride in their craft and knowledge, and their ceremonially demanded right to information, choice and access to their ‘own resources’”. It is not just about “retaining ‘traditional’ ways of agriculture”, as the anti-GMO movement maintains, but also about claiming “political, economic and socio-cultural rights.”60 Though this certainly adds a
significant dimension to the debate, it indeed simply seems to add to, rather than radically challenge, the frameworks that are conventionally used in the anti-GMO debate, as well as the frameworks that focus on how to bring out and represent other people’s “voices” in a better way. Is this simply unavoidable when it comes to the production of academic knowledge through/in
academic writing? As already indicated in the previous section, academic writing pursues by definition the objective of enhancing knowledge and providing improved insight into a certain situation. In its very structure, an academic piece of work aims to resolve and settle, rather than to dislocate, to destabilise, or to provide discomfort. Carro-Ripalda and Astier’s article is meant to
render legible their own encounter of ontological difference for an academic audience. Is it possible for the reader to dig below these representational strategies, and to relate more directly to their encounter of what they themselves call ontological incompatibility? And which has led them to brand their final workshop, in a quite un-academic way, as a “failure”? There are a few
places in the article in which their inability to put into words and arguments all of “the complexity of experiences, relations and reasons that bind people to maize”61 is more obvious. Becoming attuned to this complexity is linked to the authors having to become at least “partially connected”—to yet again use de la Cadena’s phrase—to the relations they attempt to trace. It is
interesting, for example, that Carro-Ripalda and Astier talk about “voices” as going beyond the semantic level, as conveying something acoustically, and as requiring a form of listening that shies away from asking pre-given questions. It is also interesting that some of that took place when they literally walked together with their interlocutors; precisely as it is emphasised by Blaney
and Tickner:62 Despite the shortcomings of the workshop … we felt that, through our research on the ground, we had engaged with male and female farmers, heard about their perspectives on GM and their visions of a rural future, and accompanied them to work in milpas and markets. So, what do smallholder farmers’ voices sound like? What meanings did they convey to us?
We will provide here but a few of those sounds and meanings … 63 Despite returning to the idea of voices as conveying “meaning” in this quote, meaning is related to sounds, to walking together, to particular places with their own sounds, smells, and colours. The sample of actual “voices” Carro-Ripalda and Astier then choose to present yet again invokes an intricate sense of the
relationality of farmers and nonhumans: It is a joy to plant, getting hold of the maize, of a beautiful cob which is pleasant, to go to the harvest, to look at pretty cobs, all regular. Because this is what sustains me. You can see the difference in the seeds straight away … You need to look at the cob and as soon as I grab it I see the difference. It is the person who knows the seed the one
who chooses it [for replanting the following year].64 By contrast, GM maize is associated by the smallholders whom Carro-Ripalda and Astier cite with feelings of “artificiality, estrangement and distrust towards the created object (the GMO) in itself, not only because of deep ontological considerations … but because of the political and economic motives which are ‘assembled’ into
it.”65 Although the authors make a distinction between ontology and politics/economics here, their invoking of the “assemblage” precisely shows how the latter becomes part of ontology itself, and then (as in the case of Hetherington) impacts on the sensual, bodily connection with the maize. Understanding the relation between “things” in this way allows for an analysis of power
and domination that has at least the potential of moving beyond pre-given frameworks; strategically suspending them in order to “sharpen [the] analysis of exactly how power operates, how relations are made and undermined, and with what consequences”. 66 Genetically modified maize is a problem because it is part of particular Mexican neoliberal visions and strategies, but in
the context outlined by Carro-Ripalda and Astier, that vision is not only (and not even primarily) made sense of through given frames of knowledge, such as Marxist theories of the exploitation of labour, but sensually, through the way it disrupts the (physical) pleasure and joy that has sustained the farmer-maize-assemblage so far.67 GM technology externalises the maize from
farmers and estranges them from their ways of life; and it is only through this externalisation that GM maize becomes perceivable as a potential source of “contamination”, as a danger against which farmers need to “defend” their seeds.68 Now, some might counter that the previous paragraph in practice only provides a fancy repackaging of the two well-rehearsed arguments brought forward by many anti-GMO activists: (a) that the problem of GMOs is an intrinsic property that makes them “unsafe” (which activists try to scientifically prove), and/or (b) that the fundamental problem of agricultural biotechnology is that it estranges farmers from their traditional, ancestral way of life, that it allows for their exploitation, and that it provides a further foothold for
neoliberal visions of how the world should be ordered. Both arguments are grounded in modern ontology: the first goes down the route of science (contesting “facts” about the “nature” of GMOs on the basis of science itself), while the second goes down the “social” route by either making a case for the need to respect cultural multiplicity, or for the need to prevent economic
exploitation. Some activists make use of all of these routes and arguments. Famous environmental activist and intellectual Vandana Shiva, for example, determines the alienating character of the GMO to be an intrinsic property, while at the same time depicting smallholder producers as intrinsic “‘reservoirs’ of local or indigenous knowledge or as ‘natural’ conservators of biodiversity
through their traditional practices”. 69 According to Carro-Ripalda and Astier, this “unwittingly reinforc[es] images of smallholder producers as passive, timeless and voiceless.”70 This leads to precisely the sort of romanticised reification of “difference” that I have critiqued in the previous section of this article—paradoxically, in this case, on the basis of an ontology that is deeply
modern, as it regards both “things” and “people” as ontologically stable and classifiable. By contrast, the authors of the two texts I have analysed in this section trace ontological encounters that cannot be contained by the nature/culture dichotomy. There is no pre-given (social) theory of neoliberalism and global power relations that dictates how the “voice” of the farmer needs to
be made sense of. There is also no pre-given understanding of the “factual” (scientific) nature of GMOs. The notion of radical difference that comes up in these two texts emerged from precisely the “misunderstandings” that the encounter of ethnographers with “other people” and their relations brought to the fore; but importantly, it did not make any clearer to the ethnographer
what the “stuff” that grounded the misunderstandings is actually composed of.71 Yet, somewhat paradoxically, despite all this emphasis on misunderstandings, incompatibility, grappling, failure, and critical self-reflection of one’s own assumptions—at the end of the day what is left for the readers (at least if they do not explicitly focus on the “ethnographic excess” found in the
writings) is the impression that they know more about “stuff” than they did before: that they understand the situation better, that new knowledge has been produced, that the object of analysis is more transparent than it has been before. How can this subjugation of the encountered ontological difference to academic strategies of comprehensive sense-making be avoided (if at all)?
This article itself is now coming up to what would normally be a conclusion—i.e. the treacherous waters of nailing its contribution to knowledge. Given that this article is yet again another “third-level” engagement with questions of ontology and decoloniality, the question is whether there is any way to avoid this pull of hegemonic modes of academic knowledge production. Rather
than providing a conclusion and reiterating the core argument that the article has made, I will attempt to finish this piece by raising even more questions, and by providing some further reflections.

4. Turtles all the Way Down: (Further) Reflections on What Questions to Ask

The pull of hegemonic systems of academic knowledge production is difficult to avoid. This is the case even in
writings that are directly based on ontological encounters and controversies, and that reflect on the displacement that encountering different ontologies has entailed. But as I have indicated, this problem is even more pronounced in writings—like my own—that provide what I have previously called “third-level” sense-making of ontological encounters. The contribution of third-level
analysis is usually a conceptual one, which makes it by definition veer towards the general and abstract rather than the concrete. In relation to the literature on decolonial thought and the ontological turn, this becomes manifest in three different (yet interrelated) ways: first, in the desire to provide an understanding of ontology that enables a conceptualisation of it as multiple. Drawing on the work of Mario Blaser and Eduardo Viveiros de Castro respectively, Rojas and Blaney and Tickner argue that ontology can be thought of as multiple if reality is understood as always being “enacted” or “performed”. 72 This is what Blaser calls an understanding of ontology as “material-semiotic”: one that defines reality as “always in the making through the
dynamic relations of hybrid assemblages”. 73 Pinpointing it like this is inevitably geared towards answering the question of what reality as such, in general is about. Secondly, there is an ambition to coin the general normative-political project that arises out of this understanding with a singular concept, such as the pluriverse. Thirdly, arguments about ontological multiplicity and the
emancipatory-decolonial political projects that arise out of its recognition are written for an audience of a particular discipline, such as IR: the aim is to provide a wholesale, general rethinking, or, indeed, “reconstruction” of the latter.74 What sort of questions drive conceptual work in that direction, and what desire “to know” underlies the questions? According to Cherokee
philosopher Brian Yazzie Burkhart, for Native Americans “the questions we choose to ask are more important than any truths we might hope to discover in asking such questions”. 75 By contrast, Western knowledge is always (at least in the mainstream) propositional knowledge: “knowledge of the form ‘that something is so’”. Here, knowledge cannot be verified by referring to direct
experiences: “there must be something underlying them and justifying them”. 76 Burkhart gives the example of the “routine response” given by “Western people” to Indigenous accounts of creation: “In [one] account, the earth rests on the back of a turtle. The Western response to this account is simply the question, ‘What holds the turtle?’” This question makes no sense to the
Native storyteller, because the truth of the story lies in the paths to rightful action that it outlines, rather than what it has to say about the “reality” of the world. But when the Westerner insists on the question, the answer finally is: “‘Well, then there must be turtles all the way down’.”77 Equating Rojas’ and Blaney and Tickner’s work with European mainstream (hence analytic)
philosophy seems, at first glance, incredibly unfair. After all, those authors precisely advocate the cultivating of knowledge by direct awareness or acquaintance in exactly the way that Burkhart identifies as typical for Native Americans. But on the other hand, the framework that circumscribes their emphasis on the need for “concreteness” is still an abstract one that wants to answer
the question of how things really are and should be: enacted, performed, pluriversal, … The point is not whether this argument about reality and politics is right or wrong. The point is to recognise that it is driven by particular questions that might make no sense in the context of other intelligence systems, but that need to be addressed in an academic article in order to make a
conceptual argument compelling, convincing and original for an audience that primarily sits (whether it likes it or not) within a Western, colonial, hegemonic system of knowledge production.78 And even when the contribution to knowledge production is not primarily conceptual, as in the “second-level” work that I have analysed in the previous section in relation to the GMO
controversy, the final argument that is made (e.g. about peasants’ economic and cultural rights) is yet again lucid and comprehensible to an audience that seeks to comprehend “stuff” within modern parameters. Where to go from here (particularly as a white, European scholar)? As suggested by Tucker, one way might be to engage in much more direct, ethnographic research, which
would enable more direct experience of ontological encounters. Despite previously mentioned problems of even that research not going far enough, there is without doubt more space for providing a sense of grappling and dislocation if the originality of a piece of work is not purely grounded in the conceptual contribution it aims to make. However, not every scholar is able—body-,
context- or funding-wise—to spend extensive periods of time in different places, and the ethical and political pitfalls of researching “radical difference” through fieldwork with—but often rather on—others have been pointed out by Indigenous scholars numerous times.79 But even for those unable or unwilling to do more primary, empirical research, there is space to push the
boundaries of what can and should be written about (and how). For decades there have been attempts to provide “innovative” platforms, for example at conferences, to talk about “stuff” in different ways (e.g. through storytelling or artistic practices; not least by, e.g., Indigenous peoples themselves80). However, these “innovations” are still at the margins, and they will most likely
never be able to compete with acknowledged knowledge production outlets such as journal articles and scholarly books. But even within the latter, there is always at least some space to push for more open-endedness, more reflection on the author’s embodied positionality, more auto-critique, more uncertainty and grappling (even if this is based on reading about the ontological encounters of others). Although this sort of embodied self-reflection on a writer’s “situatedness” (which in my own case means being “on the colonising side of a divide”81) has obviously been advanced by many critical scholars for decades (including feminists and post- as well as decolonial scholars), this article has hopefully shown that there is still (always) a need to go further, in order to more fundamentally challenge hegemonic, modern/colonial modes of knowledge production. The sense of unease that I have outlined
in section two was particularly strong when reading conclusions that were geared towards making recommendations for the discipline of IR, or
for “international politics”, as such. Aiming
to make generic conclusions for entire disciplines, political fields, or
global “issues” pushes the generality and abstraction of a contribution even further away from an
advocacy of the concrete. Why, and to whom, does it matter whether IR, as a discipline, or international politics, as object of study,
becomes more pluriversal or not? What are the actual benefits of the concept of the pluriverse in the first place? Or to pick up the theme of
this special issue: why does it matter whether IR is, or should move into, a mode of affirmation rather than critique?82 Why is this a good
question to ask—and for whom? This
is not just a theoretical problem, but it has real-life consequences for
actually-existing decolonial struggles. The desire for making a generic argument about relational
ontologies and a pluriversal politics harbours the danger of making a huge variety of demands and
struggles that often exist in tension and contradiction with each other commensurable. Indigenous demands
for the repatriation of “their” land might be at odds with the social justice demands for redistribution and “the commons”. 83 For Blaney and
Tickner, decolonial thought is commensurable with not just the ontological turn literature, but also feminist and other critical interventions.84
Mignolo and Arturo Escobar advocate a transnational fight for global justice and are enthusiastic about the potential of global movements to
achieve that aim together.85 Like Mignolo, Rojas explicitly draws on the World Social Forum slogan “Another world is possible” as well as the
Zapatistas slogan of “a world where many words fit” to make her case about the need for a pluriversal understanding of emancipatory-
decolonial politics.86 While it can be argued that this problem of seeing all these struggles and demands as
commensurable goes back to a lack of actual engagement with particular decolonial practices and
battles, what I have argued in this article is that it is also related to the problem of how and what sort of
knowledge is produced and valued in the Western academy: knowledge that is abstract, generic, and
applicable beyond a specific context. Knowledge that is driven by the desire to know what is. Knowledge
that desires to know what holds the turtle—all the way down.
States
1) Jurisdiction---states can’t add predicate offenses to federal statutes---key to deterrence and IRS prosecution.
2) Preemption---States can’t regulate crimes with international implications.
Gupta 9. Amar Gupta, Law Professor at the University of Minnesota. Deth Sao, JD in progress at the University of Arizona. [Anti-Offshoring Legislation and United States Federalism: The Constitutionality of Federal and State
Measures Against Global Outsourcing of Professional Services, Texas International Law Journal, 44(4),
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1324555]//BPS

1. Potential Violations Under the Supremacy Clause

Because such anti-offshoring legislation involves parties and activities beyond the nation’s borders, it
triggers the Doctrine of Preemption by potentially entering into the federal domain of international
affairs, and conflicting with existing federal statutes or international agreements. The Constitution
grants Congress the power to preempt state law.44 In mandating federal dominance over states, the
Framers of the Constitution met their desire for uniformity in areas of national concern.45 Even if a Congressional act
contains no express provision for preemption, state law must give way when Congress intends to “occupy the field”; and even if Congress has not occupied the field, state law is preempted to the extent that it
conflicts with a federal law.46 The U.S. Supreme Court has found preemption where it is impossible for a private
party to comply with both state and federal law.47 Alternatively, state law is invalid when it is an obstacle to the
accomplishment and execution of the full purposes and objectives of Congress.

No follow-on, even if pressured


Mikos 15 - Professor of Law and Director of the Program in Law and Government, Vanderbilt
University Law School (Robert, “ARTICLE: INDEMNIFICATION AS AN ALTERNATIVE TO NULLIFICATION,”
76 Mont. L. Rev. 57)//BB
The federalization of criminal law arguably poses a threat to the states' traditional police powers. 1 Congress has created thousands of distinct federal crimes, 2 and the "amount of individual
citizen behavior now potentially subject to federal criminal control has increased in astonishing proportions in the last few decades." 3 Though not all of these federal criminal statutes
necessarily upset the careful regulatory choices the states have made, many of them likely do. For example, Congress has criminalized activities the states now permit; it has denied federal
criminal defendants many of the special procedural rights they would enjoy if prosecuted in state criminal justice systems; and it has imposed punishments on convicted offenders that [*58] vary both in degree and kind from the punishments imposed by state law for comparable offenses. 4 In many instances, Congress's decision to supplant the policy choices made by the states seems unjustified by any legitimate federal interest. 5 The
conventional wisdom suggests there is very little the states themselves can do to stop the federalization of
criminal law and the resultant diminution of state prerogatives. The states, of course, have no authority to nullify federal law, nor can they interfere with the enforcement of federal
law. At most, the states can petition the federal courts, Congress, and the President to respect state authority, but it seems unlikely they will find a receptive audience in any of the three branches of the national
government. The federal courts have done little to stem the tide of federalization; Congress lacks the
incentive to abstain from criminal legislation and has repeatedly passed over proposals to
comprehensively reform federal criminal law; and while the President has discouraged enforcement of
certain federal criminal statutes, the President's willingness and ability to do so are limited in important
respects. 6
Fed will pre-empt the CP since states lack jurisdiction.
FATF 16. Financial Action Task Force (FATF), independent inter-governmental body that develops and
recommends the global standard of policies for anti-money laundering and counter-terrorist financing, (December
2016, “Anti-money laundering and counter-terrorist financing measures - United States,” FATF, Fourth Round
Mutual Evaluation Report, https://www.fatf-gafi.org/media/fatf/documents/reports/mer4/MER-United-States-
2016.pdf)//pacc

33. The States have historically exercised “police powers” to make laws relating to public safety and welfare, including criminal laws;
however, there are certain areas in which the Congress is constitutionally permitted to legislate , such as on
matters affecting interstate or foreign commerce. Due to the international nature of both the financial system and serious crime and terrorism, the Federal Government has taken the primary role in law making and enforcement in the areas of
money laundering (ML) and terrorist financing (TF). State laws can be pre-empted when Congress explicitly includes a pre-
emption clause, when a State law conflicts with a Federal law, and when the States are precluded from regulating conduct in a field that
Congress has determined must be regulated exclusively by Federal authorities.

There’s a rollback DA---litigation in federal courts waters down the CP.


Morawa 11. Alexander H. E. Morawa, Chair in Comparative and Anglo-American Law and Associate Dean
for Internationalization at the University of Lucerne, School of Law, Switzerland, "The 'New Judicial Federalism' in
the United States – The Interplay of Federal and State Courts in Constitutional Adjudication,"
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2604459)SEM
**Note: the author uses <<>> to indicate when he is quoting an outside source

These general affirmations of states’ rights notwithstanding, it has been suggested 115 that recent Supreme
Court practice has tended to rein in state supreme courts that attempted to construe constitutional
rights more broadly than the scope of federal interpretation was, many times by means of litigation
brought to the federal arena by a state's executive branch that is in disagreement with its own
supreme court. The Court has held, beginning with Oregon v. Hass116 and reiterated in numerous other cases, such as
Arkansas v. Sullivan117 that: «[W]hile ‹a State is free as a matter of its own law to impose greater restrictions on police activity than those this Court holds to be necessary upon federal constitutional standards,› it
‹may not impose such greater restrictions as a matter of federal constitutional law when this Court
specifically refrains from imposing them.»118
T - Crim Law
CJR includes increased enforcement against white-collar crime.
Steiker 19. Carol S. Steiker, Professor of Law & Co-Director of the Criminal Justice Policy Program at Harvard
Law School, former clerk for SCOTUS & the D.C. Circuit Court of Appeals, former defense attorney who argued before SCOTUS, and has testified in front of Congress & state legislatures as an expert witness on CJR, (Nov. 2019, “Article: Dunwody
Distinguished Lecture In Law: Keeping Hope Alive: Criminal Justice Reform During Cycles Of Political
Retrenchment,” Florida Law Review, 71, 1363, Lexis)//pacc

Before I turn to the tool catalog, a word is in order about the definition of "criminal justice reform." Although a central focus
of current reformers is--completely appropriately--the rolling back of mass incarceration, the problems of American
criminal justice are not simply problems of too much punishment. Thus, reform is not always synonymous with
less law enforcement or punishment. Even as we over-incarcerate on a massive scale, there are also important pockets of
disturbing under-enforcement--of laws against political corruption, corporate and white-collar crime,
police abuse, violence in prisons and jails, and sexual assault. Reform, construed broadly, includes addressing over-incarceration
as part of a wider rethinking of law-enforcement priorities and overall investments, through criminal justice and otherwise, in public safety and security.

Counter-interp---“Sentencing” declares legal consequences.


Pointer 00. Pointer, Jr., District Judge. [Bryan v. Rainwater, United States District Court, N.D. Alabama, 254 B.R.
273, 1-11-2000, Westlaw]//BPS
Even adopting the broadest available definition of “claim” in this case, the actions of the Defendants did not violate the automatic stay provision of 11 U.S.C. § 362.
Section 362(b)(1) provides that “[t]he filing of a petition [in bankruptcy] ... does not operate as a stay ... *278 under subsection (a) of this section, of the
commencement or continuation of a criminal action or proceeding against the debtor.” This section applies without limitation to any specific subsection of section
362(a). In the words of the Supreme Court, it “removes criminal prosecutions of the debtor from the operation of the Code's automatic stay provision.” Davenport,
495 U.S. at 553, 110 S.Ct. 2126. In the present case, Mrs. Rainwater pled guilty to a felony theft charge and was sentenced to a term of ten years, the execution of
which was however stayed by an order for probation and restitution in lieu of incarceration. Sentencing
of a criminal defendant, whether
by fine, imprisonment, or order to pay restitution, declares the legal consequences of predetermined
guilt. See U.S. v. Henry, 709 F.2d 298, 310 (5th Cir.1983) . Because a criminal sentence would be meaningless absent authority to
ensure that it is complied with, an action by the state to enforce the terms of a sentence is clearly a continuation of a criminal action. As such, the action taken by
the State in this instance with respect to Mrs. Rainwater's delinquency hearings and probation revocation was exempted from the automatic stay, whether or not
her restitution obligation might constitute a “claim” under 11 U.S.C. § 362(a).
