Northwestern Deo Fridman Aff NUSO Round1


1AC

1AC---Advantage
Neoliberalism is dying---the financial crisis revealed its weaknesses, and COVID laid
bare the consequences of privatization. BUT, the response to these failures by
policymakers and businesses has been to double down on the broken model and
invest in massive global surveillance architectures, ensuring authoritarian capitalism.
-Surveillance by corps coopts movements/activism

Laurie Macfarlane 20 economics editor at openDemocracy, and a research associate at the UCL
Institute for Innovation and Public Purpose. He is the co-author of the critically acclaimed book
'Rethinking the Economics of Land and Housing', openDemocracy, April 16, 2020,
https://www.opendemocracy.net/en/oureconomy/a-spectre-is-haunting-the-west-the-spectre-of-
authoritarian-capitalism/

The death of neoliberalism

The global financial crisis laid bare the underlying weaknesses of the neoliberal form of capitalism that
has dominated policymaking in the West since the 1980s. But without a clear alternative to take its place,
the response was to double down on a broken model. The impact of the crisis, and the austerity policies
that followed, fractured the political argument in many countries, and contributed to a series of political
earthquakes including Brexit, the election of Donald Trump, and the rise of nativist parties across Europe
and beyond.

At the same time, the economics profession has entered a period of intellectual upheaval. Stagnant living
standards, sharply rising inequality and environmental breakdown have led growing numbers of
economists and commentators – including those in mainstream institutions such as the International Monetary Fund
(IMF) and the Organisation for Economic Cooperation and Development (OECD) – to acknowledge the shortcomings of
free-market orthodoxy.

If neoliberalism was already on life support, then the coronavirus has administered the lethal blow. The
pandemic has laid bare the disastrous consequences of decades of privatisation, deregulation and
outsourcing in countries like the US and UK, and highlighted the critical importance of strong public
services and a well-resourced state bureaucracy. In order to contain the economic fallout from the pandemic, Western
countries have ripped up the neoliberal playbook. Market forces have been shunned in favour of
economic planning, industrial policy and regulatory controls. Even the IMF, for decades the standard
bearer of neoliberal orthodoxy, has floated policy responses that have more in common with the
Chinese model of capitalism. In a recent blog, four senior researchers wrote that: “If the crisis worsens,
one could imagine the establishment or expansion of large state holding companies to take over
distressed private firms.”

But those who have spent years dreaming about a world beyond neoliberalism should think twice
before popping the champagne. While some may celebrate the arrival of policies that, on the surface at
least, involve a greater role for the state in the economy, there remains one problem: there is no evidence that
state action inherently leads to progressive social outcomes.
China is a clear case in point. Income inequality is among the highest in the world, labour rights are notoriously weak, and freedom of speech is often brutally
suppressed. Speculative dynamics have created vast real estate bubbles and an explosion in private sector debt which many believe could trigger a severe crisis.
Workers do not have freedom of association to form trade unions, and non-governmental labour organisations are closely monitored by the state who carry out
regular crackdowns.

While Western capitalism is unlikely to turn Chinese anytime soon, it would be naive to assume that the state stepping in to play
a greater role in the economy is necessarily going to push politics in a progressive direction. As Christine Berry
writes:

“The question is not simply whether states are intervening to manage the crisis, but how. Who wins and
who loses from these interventions? Who is being asked to take the pain, and who is being protected?
What shape of economy will we be left with when all this is over?”

This goes beyond the economic sphere. Many leaders are already using the coronavirus crisis to ramp up intrusive
surveillance and roll back democracy, often taking inspiration from China. Hungary’s Prime Minister Viktor Orbán has won new
dictatorial powers to indefinitely ignore laws and suspend elections. In Israel, Prime Minister Benjamin Netanyahu enacted an emergency
decree preventing parliament from convening, in what has been described as a “corona-coup.” In Moscow, a network of 100,000 facial
recognition cameras are being used to make sure anyone placed under quarantine stays off the streets.

But it is not just strongmen leaders who are exploiting the crisis to tighten their own grip over society.
Last week Google and Apple announced that they were jointly developing a global tracking “platform”
that will be built into the operating system of every Android and Apple phone, turning virtually every
mobile phone into a coronavirus tracker. Many Western governments are now working in partnership
with them to scale up national surveillance tools. In the UK, Google and Apple are working with the National Health Service
(NHS) to develop a mobile phone app that will trace people’s movement and identify whether they have come into contact with infected
people. The US, Germany, Italy and the Czech Republic are also reported to be developing their own tools. Thanks
to the coronavirus,
China’s surveillance architecture could arrive in the West much sooner than we think.

Once surveillance measures are introduced, it is likely that they will be extremely difficult to unwind.
“The relationship between the citizen and the state here in the West will never be the same again after
the Pandemic,” Paran Chandrasekaran, Chief Executive of Scentrics, a privacy focused software developer, recently told The Times.
“There will be increased surveillance. For decades to come, there will be new thoughts on privacy –
possibly even the state having wide-ranging and sweeping powers to see data that might pose a threat
to national security.”

For progressives across the West, the task ahead is enormous. Not only is there a need to respond to the growing
dynamism of China’s political-economic system, there is a need to do so in a way that strengthens democracy and protects civil liberties at a
time when both are increasingly under threat. When
economies eventually open up again, the urgency of the climate
crisis means that we cannot afford to return to business as usual. Patterns of production, distribution and consumption
must rapidly be decarbonised, and our environmental footprint must be brought within sustainable limits. And all of this must be done in a way
that reduces rather than exacerbates existing inequalities.

What such an agenda looks like, and whether it is politically possible, remains to be seen. In recent years there has been an outburst of
progressive new economic thinking on both sides of the Atlantic that aims to combine the aims of social justice at home and abroad,
democratic participation and environmental sustainability. While elements of this agenda – from the Green New Deal to expanding democratic
forms of ownership – have been embraced by politicians such as Bernie Sanders in the US and Jeremy Corbyn in the UK, it is now clear that
neither will take these ideas into power. But for
much of the millennial generation whose adult lives have been
defined by the financial crisis, climate change and now the coronavirus pandemic, their agendas are now
viewed as the minimum baseline for a much bigger transformation of our economic and political
systems. Among much of this generation, capitalism – in both its liberal or authoritarian variants – is increasingly
viewed as the problem. Socialism, of a new democratic and green variety, is increasingly viewed as the
solution.
Following the global financial crisis, however, it was the authoritarian right, not the progressive left,
that managed to gain a foothold in many countries. The same can be said of the Great Depression in
the 1930s. As governments struggle to deal with an economic crisis on a scale that could easily surpass
both, there are signs that authoritarian forces could stand to benefit once again.

In 1848 Karl Marx wrote that ‘A spectre is haunting Europe — the spectre of communism.’ Today
another spectre is haunting the West: its name is authoritarian capitalism.

To navigate the collapse of neoliberalism away from digital authoritarianism, it’s crucial to understand the role of law and legal institutions in creating neoliberalism.
Otherwise, Big Tech will subvert the rule of law and create their own sovereignty
outside the state, unchecked by any force.
-How we think about neolib is important – aff framing/discourse/ideology matters – understanding of
neolib as inhering in government and institutions is crucial to break it down at this moment in history
particularly (consistent with rest of aff method)

-History of evolution of neolib – wasn’t created by pure ideology but rather was driven by institutions

-Anti-neolib policies emerging now

James Meadway 9/3/21 PhD in econ at the University of London, director of the Progressive
Economy Forum, previously economic advisor to the Shadow Chancellor, and chief economist at the
New Economics Foundation, “Neoliberalism is dying – now we must replace it”, openDemocracy,
https://www.opendemocracy.net/en/oureconomy/neoliberalism-is-dying-now-we-must-replace-it/

Debate over neoliberalism’s future is not new, and has been reignited since the COVID-19 pandemic
disrupted economies across the world. But this isn’t simply an academic matter: whether we think
neoliberalism is dead, dying, or in rude health has strategic consequences for political activity. If neoliberalism –
meaning the way in which capitalism has been run for the past three decades (and in some parts of the world, for longer) – is really on its way
out, we need to be alert to the ways in which the system is changing, and perhaps update and refresh our own
slogans and demands and strategies accordingly.
But if neoliberalism remains firmly in place, there might appear to be fewer challenges for the Left and progressives in dealing with the situation. All our slogans,
policies and strategies, honed over the last decade, will still basically apply: a case of better the devil you know, and an opportunity to stay in our collective comfort
zone.

Shifting paradigms

Whether we are seeing a real break in global capitalism’s mode of operation, a temporary deviation
from the neoliberal norm during a global pandemic, or simply a continuation of business as usual depends crucially on
what we think neoliberalism was and is. Those stressing that we are seeing a break, like French economist Cedric Durand in two
recent New Left Review essays, tend to view the shift as pre-dating the pandemic. Durand has described the Biden administration as “1979 in
reverse”: instead of driving up interest rates, cutting social expenditure, and attacking trade unions, Biden is overseeing a regime that is
suppressing interest rates, driving up social spending, and expanding trade union rights. Crucially, however, he locates the breach with
neoliberalism before the COVID-19 pandemic. And, like others stressing a significant shift, he points towards material factors driving the
‘contradictions’ facing capitalists, such as the difficulties in securing profitable investments. In this reading, neoliberalism was related primarily
to the restructuring of capitalism from the 1970s onwards.

Those seeing the current period as mainly a continuation of the neoliberal era stress the specificity of the pandemic in
forcing temporary actions, much like the temporary ‘Keynesianism’ that followed the 2008 financial crisis. Crucially, they view neoliberalism as primarily an intellectual movement. Writing in Tribune magazine, historian Quinn Slobodian argued that the intellectual forebears of today’s
nativist and government-friendly radical Right – the Steve Bannons or Marine Le Pens of this world – can be found amongst the ranks of neoliberal gurus like
Friedrich von Hayek. Far from wanting a pure free market everywhere, Slobodian claims, Hayek and his co-thinkers were only too happy to see authoritarian
governments erect barriers to markets. Far from promoting the unfettered free market of libertarian fantasy, neoliberals were very happy to use the ‘strong state’
if doing so meant building support for their vision of society. Economist Grace Blakeley, meanwhile, has also argued that governments across the globe still see
themselves as working to a neoliberal playbook.

This way of seeing neoliberalism, as an intellectual movement above all, is most associated with Philip Mirowski and his work on
the ‘Neoliberal Thought Collective’. It turns the history of neoliberalism into a story about the role of the Mont Pelerin Society, established in the eponymous Swiss town in 1947 by Friedrich von Hayek, Milton Friedman and other neoliberal thinkers. In this version of events, the Neoliberal Thought Collective then spent decades nurturing their vision of a market-organised world before the crisis of the 1970s gave them their opportunity to mould governments in their image.
This version of history has gained some support in the past decade amongst the broader Left in the form of the idea that the 1970s and 1980s were a ‘paradigm
shift’ in economic thinking – with neoliberalism replacing the earlier ‘paradigm’ of Keynesian government intervention in the 1970s crisis. What is needed now, in
this view, is a similar paradigm shift, but in the opposite direction.

As Will Davies has written, the 1970s “inspired a vision of crisis as a wide-ranging shift in ideology, which has retained its hold over much of the
Left ever since”. But
this was – as the filmmaker Adam Curtis might say, himself a proponent of the paradigm shift view – an illusion.
Neoliberalism didn’t arrive on tablets of stone, brought down from the Swiss mountains. What became
neoliberalism in government was the product of actions by different governments , at different times, under
different guises. For governments in the West, the process in the formation of neoliberalism was strikingly uneven. Margaret Thatcher’s British
government led the charge in western Europe, but it was only through successive victories – both industrially, against a series of trade unions,
and electorally – that a decisively neoliberal domestic regime was installed by the end of the 1980s, along with Thatcher’s exit from office.

But the process was not confined to separate national governments. Eric Helleiner’s classic book, ‘States and the Rise of Global Finance’, shows
how, across the major developed economies, domestic crises from the end of the 1970s pushed countries towards building a new international
order for capitalism. Although many countries were partly influenced by neoliberal think tanks, they were also responding in an ad hoc way to changing
global circumstances. It was the 1974-79 Labour government, for example, which first removed exchange controls, not from a commitment to
neoliberal ideology but in the belief this would help domestic manufacturing investment. As Slobodian argues elsewhere, it is at the level of
international organisation and global rules that neoliberalism can best be understood. Crucially, however, those rules emerged from specific
domestic circumstances and the outcome of uncertain struggles in different countries. It was in responding to different domestic circumstances,
as the world around them changed, that different national
governments – led by the largest economies in the West –
pulled together the neoliberal global regime.

Heroic neoliberalism

To understand the world today, we need to shift the focus away from neoliberalism’s heroic years in the 1980s,
when it acted as an injunction to attack and destroy the enemies of capital such as the miners in Britain or the air traffic
controllers in the US. This account of neoliberalism’s early combative years offers a clear, simple story, with obvious goodies, baddies, winners and losers. But the severity of the class struggle then can lead us to misunderstand its real triumphs globally. The years of
struggle can be found not in the period of neoliberalism’s completion and triumph, but its difficult and
contested emergence.
This period, from the end of the 1970s through to the early 1990s in Britain, when strike days and union membership collapsed never to
recover, is the period during which the neoliberal style of government was contested and other competing options were still on the table. The
National Union of Mineworkers could have won in 1984-85, as we now know the government feared; ‘shock therapy’ in Eastern Europe was not
at all what reformers and dissidents wanted or expected; and although now largely presented in the West as a student struggle, the convulsions
across China around the Tiananmen Square protests that were brutally suppressed on 4 June 1989 drew in far wider layers of Chinese society –
crucially including workers on a mass scale.

Neoliberal governments tore down controls on the movement of capital across borders, ripped up protections on labour and the environment
in the interests of multinationals, and instituted a global race to the bottom on tax that saw major corporations often pay less in tax than their
workers. International organisations like the IMF and the World Bank were repurposed, new agreements on intellectual property were signed,
and the World Trade Organization was established. This triumph meant that even where there were domestic differences in how different
countries organised their economies, the general tendency everywhere was for all national economies to come increasingly into alignment with
the neoliberal international order. At varying speeds, and from different starting points, almost every country on the planet found itself
adapting to the same neoliberal rulebook for its domestic economy. “There is no alternative,” was Thatcher’s striking phrase, but it was only
after she left office that this began to ring true.
What’s crucial about all this is that neoliberalism didn’t arrive fully-formed. The ‘shock therapy’ reformers in eastern Europe
certainly had some ideas about what they were trying to eventually create, but the drivers of change were the freeloaders from inside the old
nomenklatura who took advantage of privatisation and liberalisation to steal billions from their local populations. Thatcher did not enter office
anticipating that she would decimate British manufacturing and create a monstrous debt bubble – quite the opposite, her rhetoric was about
Britain becoming once more the “workshop of the world” and about the virtues of thrift – but that is where she ended up regardless. It was
only after the dust settled from the great class battles of the 1980s that, in the West, we could see the shape of the neoliberal system that had
been created. And it was only by the early 2000s that we could see the shape of it everywhere.

At each stage in these early years, the alternative to a neoliberal turn was on the cards, but it was only in
the success of the ruling-class offensive that neoliberalism truly took shape. Moreover, it did so on the basis of the
new economic circumstances that had been opened up in the course of the offensive: the incredible fall in the price of transporting goods and
materials, primarily through containerisation; the huge expansion of the global labour force (via China and, to a lesser extent, eastern Europe);
and, in financial markets, the continuing dominance of the US dollar – the lynchpin of the global financial system that allowed the booming of a
global credit bubble. This bubble kept consumers supplied with money for purchases even as real wage growth was suppressed.
Multinational companies, which had begun to rise to dominance in the post-war years, became the cutting edge of the
neoliberal global economy, becoming increasingly adept at exploiting weaknesses in the global tax system through avoidance and the
use of havens. These material conditions were collectively the basis for dramatic catch-up growth in China and parts of the less developed
world and, by the 2000s, of an unstable, debt-funded version of prosperity across the developed world.

In other words: neoliberalism was formed in the West not at the start of the 1980s, but at its end. The combative years of the 1980s often attract too much attention, which means we end up seeing neoliberalism as a kind of permanent ruling-class offensive against its enemies, rather than it being what happened when the ruling-class offensive had essentially won. This is where neoliberalism comes into its own as a set of
automatic rules that, in the ideal society, no longer require intentional government action to function.

The peak neoliberal governments – those that perfected the form – were not Thatcher and Reagan’s, but those of Blair and Clinton. And the
peak neoliberal moment in history was not the defeat of the British miners’ strike, but the entry of China into the World Trade Organization in
2001. If we think of neoliberalism as primarily an intellectual struggle, dictating a form of ruling-class
combat against its opponents, we will view this history the wrong way round. It wasn’t the most
ideological governments who were the most neoliberal: it was those who came afterwards, who
proclaimed themselves to be ‘beyond ideology’ or ‘beyond Left and Right’.
This should alert us to a second point: whilst governments have used spending cuts and austerity to drive through neoliberal programmes of
change, there is no necessary attachment of neoliberalism to austerity. For all the early rhetoric about ‘rolling back the frontiers of the state’,
neoliberalism in practice has accepted a significantly expanded state sector. And however grudging the acceptance may be by neoliberal
ideologues, neoliberal governments in practice have accepted many ‘Keynesian’ or social-democratic institutional hangovers, like the
institutionalised trade union bargaining that many European countries still maintain.

Neoliberal principles
Although neoliberalism came together in a disparate way, this doesn’t mean it is a system without its own principles, or that there is no logic to
what neoliberal governments did beyond responding to circumstances. Quite the opposite: neoliberal
thinkers, whatever their tactical
manoeuvrings and opportunistic alliances, have been very insistent on two related values: firstly, the supremacy of
law and, secondly, the centrality of markets and the price mechanism in organising society. Neoliberals like Hayek and
Friedman may have disagreed on much, but both would insist on the need for society to operate automatic mechanisms that regulate it. In
other words, that there would be no place for interventions by government or other public bodies if a society was well run. And a well-run
society would (of course!) be one where the automatic social stabilisers of law and the market could operate freely, independently of political
intervention. This belief is the ‘liberalism’ of neoliberalism. Like the classical liberals of the 19th century, neoliberals viewed a society with the
rule of law and free markets as a society in its ideal form. Its clearest expression is perhaps Hayek’s book ‘The Constitution of Liberty’, which is
an extended argument for a (limited but fundamental) rule of law and the operation of the price mechanism as the cornerstones of meaningful
human freedom.

From these two underlying values, neoliberal governments tended towards three principles. First, that law and the
market should work together, with well-designed protections for private property that ensure markets can find
the correct price. These private property rights should be extended as far as possible, particularly in the field of information
and knowledge to cover software, for example, making it possible for Microsoft to copyright a program or to lay claim to genetic information,
and allowing Monsanto to patent genetically modified crops. This is despite the fact that intellectual property right extension was contested by
some leading neoliberal thinkers – a further example of how it is the institutions, not the ideas, that define the period. Even if
temporary, the Biden administration’s support for the ending of intellectual property protections on
COVID-19 vaccines marks an important blow to this concept.

The second principle is that both law and the market take precedence over democracy, and even government itself.
In other words, democratic demands to change how markets operate, or to change the law on property – removing intellectual property
protections on HIV medicine, for example – should be pushed aside.

The early legal offensive on trade unions was an important aspect of this primacy of law. In the early 1980s,
Hayek recommended to Thatcher that her government should repeal the 1906 Trade Disputes Act, which would allow unions to be sued for
actions undertaken by their members during a strike. The effect would be to reset decades of accepted practice in British
industrial relations, in which the unions and employers would each bargain from positions of organised strength, and instead turn unions
back into something more like any other voluntary association, lacking the capacity to act collectively.
In the end, Thatcher didn’t go this far, but the succession of trade union laws passed by Conservative governments in the 1980s and 1990s very
significantly undermined trade unions’ abilities to organise and bargain. It was the legal victory that was the real strategic
goal for neoliberalism, building out from the industrial defeats (of car workers, steelworkers, miners, and printworkers), since it was
the means to preserve those victories. It is why the proposals by the Biden administration to reverse some of the
neoliberal legal restrictions on US trade unions, introduced in the 1980s, are so important, and the
biggest single breach he has proposed in the neoliberal system to date.

The third principle is that both law and the market extend beyond the boundaries of the nation-state
and encompass (as far as possible) the international economy. The extension of the intellectual property regime via the TRIPS
Agreement in the 1990s was a key moment in the development of a neoliberal global economic order. So, too, was the attempted creation of
something like an international commercial law through the establishment of Investor-State Dispute Settlement (ISDS) mechanisms, in which
trade treaties like the North American Free Trade Agreement would create independent court-like bodies that gave corporations the power to
sue governments that had allegedly breached their ‘rights’ – for example by imposing environmental obligations on them.

This international dimension is what we might call ‘peak neoliberalism’. Once the neoliberal rules of the
game were bolted in place internationally, it would become harder for individual countries to seriously
breach them. It is in the sphere of international economic relations that neoliberalism is most apparent, since it is here that the activities of government and the activities of those in the market are most obviously separated, and even come into conflict. This is in
contrast to Blakeley’s argument that, being capitalists, they are fundamentally the same. Different national governments have different interests
that can conflict, but neoliberal institutions aim to smooth out these conflicts in deference to clear sets of rules – like intellectual property, or agreements on trade
subsidies.

This international order helped create a certain ideal type of neoliberal business, one that depended on both the smooth global operation of the price mechanism,
and on the legal protections of international commercial law and regulations. All multinational enterprises required both to some degree, but the companies that
were most dependent on the neoliberal global order were the multinational investment banks like Lehman Brothers, RBS and Deutsche Bank – vast, powerful,
global enterprises, at least until the credit bubble burst in 2007-08.

Domestic and international dimensions to neoliberalism

If we want to look for an ending to neoliberalism, then we need to look at both the domestic and the international spheres. The turn against neoliberalism from
2008 onwards has been more obvious in the international dimension than the domestic, at least for countries in the historic West. Global trade was already falling
as a share of GDP after 2008, but it was Donald Trump’s dramatic assault on neoliberal trading rules, via his trade war with China, which really shifted the
international political order.

That breach has been further reinforced by the recent G7 Global Tax Agreement, which paved the way for a global minimum rate of corporation tax. Driven by
domestic political considerations, notably around the brazen way in which Big Tech has managed to wriggle out of its tax obligations, major economies like Britain and France had already
begun seeking ways to impose new taxes on companies such as Facebook and Amazon. The US had robustly
opposed this, and one way to view the G7 agreement is as a compromise between the desire of the US government to raise more tax revenues
domestically, and the desire of other G7 members to prevent mainly US tech multinationals from undermining their own tax systems. The
crucial point is that, just as in the 1970s and 1980s when neoliberalism first emerged as a rulebook for
the major economies, a combination of domestic politics and shifting international conditions has moved
the G7 countries to turn against the existing international rulebook.

Combined with the striking turn across the major economies towards government intervention in the
form of industrial strategy, the anti-neoliberal tendency is clear. Governments across the world are
pledging to spend more on investment and to act more directly to support their national businesses, and
to shape economic outcomes through initiatives like Britain’s alleged ‘green industrial revolution’. The
period since 2008 has been one in which states and businesses have drawn closer together, breaking the
neoliberal claim that the two should be distinct.

But this is nothing new: states and businesses have always been mixed up with each other. Socially,
those running major businesses are closer to senior civil servants, leading politicians or those who run
newspapers, than they are to those who work for them. Politically, businesses have always actively
sought the support of politicians and political parties, as well as seeking close relationships with those
parts of the permanent state that deal with their interests. But stating this truism doesn’t get us any
closer to understanding how the system has changed over time. If we turn ‘neoliberalism’ into merely a
general description of how any capitalist state operates, the term is redundant. But if we want to
understand how capitalism has come to operate, and the specific relationship between major
businesses and capitalist states, we need to specify the term more closely. This is particularly the case
given that the companies now dominant in the world do not correspond to the historic neoliberal ideal.

Big Tech as an ‘anti-neoliberal’ formation

In 2009, the four largest corporations in the world were Petrochina, ExxonMobil, Industrial and Commercial Bank of China, and
Microsoft. By 2019, they were Apple, Microsoft, Amazon and Google’s parent company, Alphabet. These digital platforms
don’t correspond to the neoliberal ideal of a corporation. Notoriously, whilst they are happy to exploit elements of the neoliberal system (like
intellectual property, and tax havens), the
platform companies have spent the past decade or more busily pushing
beyond the existing boundaries of law and regulations through their data-gathering operations.
Investment banks certainly seek out ‘regulatory arbitrage’ opportunities in existing law and regulation (that is, loopholes to exploit), and try to
create new financial products that can be traded beyond the point of regulation. But what they do is of necessity bound by existing property
and contract law. You cannot trade a financial product if you don’t have a price for it, and if the property it lays claim to isn’t legally protected.

Financial companies also employ lobbyists to try and shape the law to their advantage. Crucially, however, the
existence of a financial system, particularly in a neoliberal policy environment, depends on the existence of a law
around which it can profit. At the most elementary level, it is the presence of both a government-backed form of money (including
deposit insurance) and a government-backed central bank that allows finance to function. As a system, it is in continual tension
with the law but it is ultimately the presence of law and, relatedly, regulation that determines its capacity to
profit. It is impossible to understand the modern financial system without understanding it as
a by-product of government decisions. It is, in the end, fundamentally subordinate to law – as the
post-2008 bailouts graphically demonstrated.

This does not apply to the platform corporations in the same way. The major tech giants grew up in a neoliberal
environment, and it is quite hard to see how they could have grown in the way they did if the internet itself was not left very significantly
free of government regulation (in line with neoliberal principles pushed particularly hard by the US Democrats of the early 1990s). But as
Shoshana Zuboff’s ‘Surveillance Capitalism’ details, it was the first US crisis of the new millennium – that of the ‘dotcom crash’ of the early
2000s – that pushed Google and others into a distinctive new form of business organisation, based on the mass acquisition and analysis of user
data.
Once the secret of monetising mass user data was unlocked, a path out of neoliberalism was opened. This happened in a number of ways. First,
the most aggressive of the companies were immediately involving themselves in vast new areas of
human activity where law and regulations did not exist, making their own policy and regulations as they
went along. The driver of their business model, the acquisition of data, was a permanent – and
increasing – extension beyond the reach of law. Blowback from this overreach has led to the creation of a quasi-judicial
function by Facebook, establishing a supposedly independent ‘Oversight Board’ to rule on policy decisions, after Mark Zuckerberg had
previously speculated about creating a “Supreme Court” for Facebook decisions.
Neoliberalism – in practice and in theory – tended to
view government sovereignty as something that should be limited in its extent, but that corporations
and private individuals should ultimately be subordinate to it. The data giants fundamentally subvert this
idea, starting to define what look very much like their own forms of sovereignty over the new domains
of human behaviour they oversee and manage.

Second, the principle of price as an organiser of economic activity has become increasingly tenuous. The
products of the platform giants, at least on the consumer side, tend to work against market
mechanisms. Facebook still boasts that “it is free, and always will be”. There is no consumer market and
no price being established, when a product like Facebook is free. Where a product is not obviously free,
the platforms have attempted to push their customer base into a subscription model: not organising a
market through price, but creating a continual flow of income, at a fixed rate, from the consumer to
themselves. For example: Apple wants you to use their hardware to establish subscriptions to its
services, and Netflix does not charge you per film viewed, but expects you to remain a subscriber in
perpetuity. The ultimate expression of this has been the attempts, led by Facebook, to establish their
own currencies, free from government, allowing them to operate their own payments system and so
take a slice of every transaction made.

Continually expanding the boundaries of what can be turned into data is fundamental to the data-
gobbling business model of the platforms. Like cuckoos in the nest, they grew up under the protection of
neoliberal governments, but rapidly outgrew their limitations. ‘Move fast and break things’ is an anti-
neoliberal statement. These are anti-neoliberal companies.

This has, in turn, produced an anti-neoliberal reaction from governments. The proposal for special new
taxes, as made by Britain and France, to capture only the US digital giants, is a break with the neoliberal
programme on tax, which has always sought to create a ‘level playing field’, culminating in the demand
for flat taxes on all forms of income.

Or take the emergence of the ‘neo-Brandeis’ school of competition scholars, now firmly entrenched in
the Biden administration with the appointments of radical legal experts Lina Khan as Federal Trade Commission chair
and Jonathan Kanter as assistant attorney general for antitrust. Whilst for decades, neoliberal thinking has been
thoroughly embraced by competition authorities, stressing that market structures matter less than
presumed consumer benefits, the neo-Brandeis school stresses the importance of competition not only
for consumers but for democracy itself.
These anti-neoliberal interventions by governments against anti-neoliberal companies work in the opposite direction too. The aggressive
interventions made by the US government and its allies against Chinese tech competitors are determinedly anti-neoliberal, seeking to intervene
in the economy to promote the interests of US corporations whilst targeting foreign competitors like Huawei. These
aggressive
interventions also reflect the US’s loss of status and position: it was happy to be neoliberal when the
rules worked in its favour. Now that no longer applies. The US now faces a competitor that quite deliberately uses the state
to push its favoured companies and industries, and it is being forced to respond in kind.
The rise of the digital economy is, ironically, occurring as the US itself is facing what the State Department calls a “peer competitor” in the form
of China – a country that has never wholly embraced neoliberal governance. This competition is forcing a symmetrical response on the US and
other large economies. When Boris Johnson tells civil servants to “be more creative and more confident around who [the government] choose
to back”, he is expressing the same state-capitalist competitive logic. Government
is no longer a passive umpire, as in the
neoliberal ideal, but has become an active participant in the contest.

The broader point here is that the material base of the global economy has, in the past decade, been decisively reshaped around data technologies and a major new competitor economy outside of the West, and that this in turn has promoted a direct challenge to neoliberal norms of government across the globe. To the extent that the
pandemic has accelerated the shift into the digital economy, and has expanded the range of government
intervention, it has brought neoliberalism’s death rather closer.

A post-neoliberal strategy for the Left

From the above analysis, a number of conclusions follow. Firstly, there are real changes happening in terms of how the
global system operates, and these changes are having an impact on the behaviour of governments. It is
too early to say that neoliberalism is dead, and a revival in some form – at least at the national level –
cannot be completely ruled out. More likely we will be entering a period where some decidedly neoliberal institutions and
practices survive amongst different forms, and potentially outlive any transitional period. Much as Britain’s National Health Service (NHS) has
survived decades of neoliberal governments, even as the rest of society is privatised, it’s certainly possible to see (for example) its finance
sector continuing to exist in a recognisably neoliberal fashion for an extended period of time. But the
general tendency of
capitalism, beginning with the 2008 crash and accelerating dramatically in the ongoing pandemic, is
clear: neoliberalism is dying, if not yet dead.

What does this mean for a post-neoliberal strategy for the Left? Firstly, an excessive focus on
neoliberalism as a system of ideas, and, related to this, a fixation on its early combative years in the West, means the
material conditions that sustained it as a form of government, which are now coming to an end, are
often overlooked.

These changes in the material operation of the economy point towards one version of the post-neoliberal
future, in which states and the major businesses in each state work closely together to overcome cut-
throat competition on a global scale. It suggests a model of what openDemocracy’s Laurie Macfarlane has called
“authoritarian capitalism” prevailing in national economies, facing an increasingly anarchic international
world. Authoritarian power, rather than the rule of law and still less democracy, would come to
dominate.

This isn’t the only option, however. Where the Left has been able to organise and exert its influence, as in the United States, it has
helped shift the political conditions for an exit from neoliberalism in a more obviously pro-worker direction. This terrain is still contested,
however: Biden may talk a good game about investment and jobs and unions, but his administration still has to make it
into reality. And the sabre-rattling against China, to which his programme of domestic economic expansion is allied, is an obvious cause for
concern.

Over the past decade, the Left has pulled together broad, anti-neoliberal coalitions, with the fight against
austerity at their core. The great advantage of opposing neoliberalism, rather than opposing capitalism as such, was that it allowed a far
broader coalition to be pulled together – from radical socialists to social democrats to environmentally-minded liberals. If the dynamic of the
system is now working differently, a somewhat different coalition will be needed – one that emphasises more the virtues of a dispersal of
power and equality before the law.

If the disintegration of law is a central feature of the world after neoliberalism, especially where this
disintegration has taken on an authoritarian guise, it suggests a major part of the strategy for the Left
should be supporting precisely the norms of law as they have been established over decades, even
centuries, of struggle. Avowedly liberal attempts to expose government corruption via legal challenge, as the Good Law Project is doing,
should be supported as part of the broader struggle for the maintenance of democratic rule. The Left should aim to be the best
defenders of those legal norms, and seek alliances to defend them.

Second, and relatedly, if in future economic outcomes are going to be increasingly determined by access
to power and government support, the Left should be looking to build alliances with those – who will be
the majority – excluded from this access. As Grace Blakeley has pointed out elsewhere, the rise of giant,
state-connected corporations means small businesses lose out – and certainly COVID-19 has so far left
them in a perilous state. The Left should be defenders of smaller businesses, and the Preston Model,
with its shifting of local government and other procurement towards local businesses, is a critical part of
that.

But on a similar basis we should be looking to support forms of action against the state, and build up
forms of power separately from it – as activist and researcher Jonas Marvin has recently argued.
Crucially, this should include independent trade unions: not just supporting newer unions where they
organise, but also fighting to make sure the older, established unions are keeping their critical distance
from the government and preserving the ability to organise and fight independently for their members.
Seeing the UK’s Trades Union Congress lining up to back the (now-abandoned) exclusionary ‘Coronavirus
Job Support Scheme’ was a troubling development. This scheme was heavily skewed towards supporting
relatively better-off workers in steady employment if they were faced with shorter hours and pay cuts,
whilst leaving millions of insecure and mainly poorer-paid workers without government support beyond
the inadequate Universal Credit system.

Third, the Left should be looking to both extend existing rights, and broaden the ownership of wealth
and assets in society. That applies particularly in the digital economy, where definitions of rights remain
contested, but also to productive assets more generally. We should be fighting for a proliferation of
worker and community and local authority ownership of assets as a necessary mechanism for both the
redistribution of productive wealth and the creation of a more resilient economy. Beyond that point, if
government intervention means economic questions themselves are increasingly politicised, we should
be pushing for the most expansive version of economic rights and demands available: against forms of
discrimination in access to support, and in favour of universalism.

Finally, it should be clear by this point that global capitalism (at best) finds addressing climate change,
and addressing its consequences, exceptionally difficult. A competitive global system is not built to solve
coordination problems like this, and we are living through its consequences in the failure of the global
vaccination campaign against COVID-19. Support for investment in Green New Deal-type programmes
should sit alongside demands to reduce working hours, provide free broadband as an essential of
modern life, and offer comprehensive state income assistance for future shocks – of which Universal
Basic Income is the most fundamental.

Neoliberalism might be dying, but this is no reason for complacency. Some of what is now happening
points to a decisively worse world: digitally enabled authoritarianism and overtly discriminatory
government policy, for example. It is critical that the Left across the globe understands these changes, in
order that we can adapt our political strategies and our programme. Now more than ever we have to
learn how to fight effectively for a progressive, post-neoliberal future.
Corporations optimize for shareholder value at the expense of planetary boundaries,
human life, and human welfare. They are an existential threat, only resolvable by
changing the legal structures that permit their existence.
Dominic Leggett 21, University of Warwick, “Feeding the Beast: Superintelligence, Corporate
Capitalism and the End of Humanity,” Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and
Society, Association for Computing Machinery, 07/21/2021, pp. 727–735, ACM Digital Library,
doi:10.1145/3461702.3462581
Superintelligence

The idea of a mechanical intelligence that comes to dominate mankind is a staple of science fiction -
epitomised by Fredric Brown’s celebrated 1954 story ‘Answer’, quoted above - but with the recent exponential increases in computer
processing power, philosophers, and others who are professionally concerned with future risks, have begun to
take this potential threat seriously. Nick Bostrom, in his book 'Superintelligence', writes:
The challenge presented by the prospect of superintelligence, and how we might best respond is quite possibly the most important
and most daunting challenge humanity has ever faced. And – whether we succeed or fail – it is probably the last challenge we will ever
face. [2]

Entrepreneur Elon Musk, in an interview at SXSW, said that:

We have to figure out some way to ensure that the advent of digital super intelligence is one which is symbiotic with humanity. I
think that is the single biggest existential crisis that we face and the most pressing one. [3]

And philosopher Sam Harris claims:

We seem to be in the process of building a God. Now would be a good time to wonder whether it will (or even can) be a good one.
[4]

All of these assessments of risk have two common features - they see superintelligence as a problem that
we will need to deal with at some time in the future, and they assume that it will be developed in such a
way that we will also, simultaneously, be able to build a way to keep it under control. Estimates of exactly
how far in the future that time will be vary from ten years to a hundred years - but there is general agreement that we have an opportunity and
the ability to prepare for the threat. Bostrom again:

We would want the solution to the safety problem before somebody figures out the solution to the AI problem. [5]

But what if superintelligence already exists and we just haven’t noticed that yet? And what if it already has so
much power that it will be immensely difficult, if not impossible, to bring it back under our control?

How should we identify superintelligence? Sam Harris thinks we might do so just on the basis of computing power:

We need only continue to produce better computers— which we will, unless we destroy ourselves or meet our end some other way.
We already know that it is possible for mere matter to acquire “general intelligence”—the ability to learn new concepts and employ
them in unfamiliar contexts—because the 1,200 cc of salty porridge inside our heads has managed it. There is no reason to believe
that a suitably advanced digital computer couldn’t do the same. [4]

Bostrom suggests that a superintelligence would require certain capabilities, or ‘drives’. These might include self-preservation, goal-content
integrity, continued cognitive enhancement and technological perfection, and resource acquisition. So it defends itself, it improves itself, it
supplies itself, and it keeps finding ever more efficient and effective ways to achieve its goals, whatever they happen to be.

How could a superintelligence be a threat to humanity? Bostrom suggests three ways. First, it could have goals that are not commensurate with
human survival. Bostrom suggests, as an example, a machine that has a goal of making paperclips, and uses the entire resources of the planet
to make paperclips. Second, the machine’s goal of self-preservation could neglect the preservation of human life. Third, the machine could
learn to predict and control human decision making, turning humans into, effectively, slaves.

To understand these threats a little better, it’s useful to think about how a superintelligence might interact both with humans, and with the
world around it. We tend to imagine a computer as a box with electronics inside - and this box on its own, however ‘intelligent’ the silicon
circuits it contains might be, is not a threat unless it has a way to change things outside itself. So, for example, a computer connected to others
might tamper with robots in factories, or interfere in broadcasts, or take control of weapons systems. A computer connected to just a screen
and a keyboard might understand human psychology well enough to convince its operator to act on its behalf. So a human would become the
hands and eyes and ears of the machine, and the agent of its designs. In the extreme case of complete obedience, we could see the human as,
effectively, just a part of the machine.

To understand the threat a machine might pose we also, of course, would need to understand how it might acquire its goals. So we would need
to know who might build it in the first place, and for what purpose.

Of course, a computer doesn’t have to be made out of silicon and wires. An abacus is a computer. The
proposed Babbage Analytical Engine is entirely mechanical. The mathematicians who worked to calculate
the trajectories for NASA’s early spacecraft were known as ‘computers’. It is the function - manipulating inputs
according to a set of rules – an algorithm - to produce outputs - that counts in identifying something as a computer,
not the physical mechanism that performs that function.
Markets as Computers

In particular, some social structures can act as computing machines. In the Introduction to ‘Leviathan’, Hobbes writes;
For seeing life is but a motion of limbs, the beginning whereof is in some principal part within; why may we not say, that all
automata (engines that move themselves by springs and wheels as doth a watch) have an artificial life? For what is the heart, but a
spring; and the nerves, but so many strings; and the joints, but so many wheels, giving motion to the whole body, such as was
intended by the artificer? Art goes yet further, imitating that rational and most excellent work of nature, man. For by art is created
that great LEVIATHAN called a COMMONWEALTH, or STATE, in Latin CIVITAS, which is but an artificial man; though of greater stature
and strength than the natural, for whose protection and defence it was intended. [6]

The state itself is an ‘artificial man’ - and one that’s more powerful than any individual human. The state isn’t just strong, though; it’s also an
intelligent agent - an intelligent machine - that takes decisions on behalf of a group of humans. Another example is the corporation. As David
Runciman says:

Corporations are another form of artificial thinking-machine in that they are designed to be capable
of taking decisions for themselves. Many of the fears that people now have about the coming
age of intelligent robots are the same ones they have had about corporations for hundreds of
years. The worry is, these are systems we never really learned how to control. [7]

One further example is a market. A market is a social structure that consists of a group of agents who have items
they want to trade, and a currency, and rules for making contracts. In such a market, the inputs are
demand for and supply of goods, the calculation is performed through the setting of prices and through agreements to buy and sell, and,
theoretically, in a perfect market, the outcome is the most efficient allocation of resources. As Adam Smith
puts it, in ‘The Wealth of Nations’:

Every individual... neither intends to promote the public interest, nor knows how much he is promoting it... he intends only his own
security; and by directing that industry in such a manner as its produce may be of the greatest value, he intends only his own gain,
and he is in this, as in many other cases, led by an invisible hand to promote an end which was no part of his intention. [8]

In fact, in a lecture in 1957, Herbert Simon argued that:

Physicists and electrical engineers had little to do with the invention of the digital computer...the real inventor was the economist
Adam Smith, whose idea was translated into hardware through successive stages of development by two mathematicians, Prony and
Babbage [9]
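As a toy illustration of the market-as-computation framing above (my own sketch, not from the article): the inputs are assumed linear demand and supply schedules, the 'program' is an iterative price adjustment, and the output is a clearing price and quantity. Every function and number below is an illustrative assumption.

```python
# Toy sketch: a market rendered literally as a computation.
# Inputs: demand and supply schedules. Rule: adjust price toward excess demand.
# Output: a clearing price and the quantity traded. All values are illustrative.

def demand(price: float) -> float:
    """Quantity buyers want at a given price (assumed linear)."""
    return max(0.0, 100.0 - 2.0 * price)

def supply(price: float) -> float:
    """Quantity sellers offer at a given price (assumed linear)."""
    return max(0.0, 5.0 * price - 10.0)

def clear_market(price: float = 1.0, step: float = 0.05, tolerance: float = 1e-6):
    """Raise the price when demand exceeds supply, lower it otherwise, until the market clears."""
    for _ in range(100_000):
        excess = demand(price) - supply(price)
        if abs(excess) < tolerance:
            break
        price += step * excess
    return price, demand(price)

if __name__ == "__main__":
    p, q = clear_market()
    print(f"clearing price ~ {p:.2f}, quantity traded ~ {q:.2f}")
```

Run as written, the loop settles at the price where demand equals supply: the 'calculation' the passage describes is performed entirely by price-setting, with no central allocator.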

Markets also incentivise certain kinds of human behaviour. Humans have an incentive to act as traders in the market,
because it’s a way for them to obtain the goods that they need. Also, though, markets give an incentive to humans to work - to supply labour -
by connecting them with buyers who want what they can produce.

These basic markets have a simple objective of creating an efficient allocation of goods and labour. They’re constructed and regulated by states,
for this purpose. Governments write laws and construct institutions that allow contracts to be enforced, allow information to be validated,
create currencies, regulate to prevent negative externalities, and determine the moral and physical limits of the market. So humans, and the
intelligent machines that are their governance structures, create and control markets, and attempt to ensure that they work for the benefit of
humans, as determined by these governance structures.

Despite a pervasive ideology that sees participating in markets as a key element of human freedom, some such markets have a surprising co-
ordinating power, and one that is almost independent of the individual decisions of agents participating in the market. Sunder, in ‘Markets as
Artefacts’ finds:

A claim that the predictions of the first fundamental theorem in economics are approachable in classical environments without
actual or attempted maximization by participants might have been met with skepticism until recently. Thanks to a largely
serendipitous discovery using computer simulations of markets, we can claim that weak forms of individual rationality, far short of
maximization, when combined with appropriate market institutions, can be sufficient for the market outcomes to approach the
predictions of the first fundamental theorem. These individual rationality conditions (labeled zero-intelligence) are almost
indistinguishable from the budget or settlement constraints imposed on traders by the market institutions themselves....We prefer
markets to be robust to variations in individual cognitive capabilities and responsive to their wants and resources. If creation without
a creator and designs without a designer are possible, we need not be surprised that markets can exhibit elements of rationality
absent in economic agents[10]

It turns out, then, that the sensation of freedom when participating in so-called 'free' markets may be something of an illusion. And, further, humans who live in states that have created such markets have little option but to participate in them, if they want to be able to source the basics they need to survive.
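
Sunder's claim above rests on computer simulations of 'zero-intelligence' traders, and their flavour is easy to convey in a few lines. The toy double auction below is a rough sketch in that spirit, not a reproduction of the published experiments: traders bid or ask at random prices constrained only by their private values and costs, and all of the names and parameters here are invented for illustration. Even so, runs of this kind typically capture a large share of the maximum available surplus:

import random

def zero_intelligence_auction(buyer_values, seller_costs, rounds=20000, price_cap=200):
    # Budget-constrained 'zero-intelligence' traders: a random buyer bids
    # uniformly below their value, a random seller asks uniformly above their
    # cost, and the institution clears a trade whenever a bid meets an ask.
    buyers, sellers = list(buyer_values), list(seller_costs)
    surplus = 0.0
    for _ in range(rounds):
        if not buyers or not sellers:
            break
        b, s = random.choice(buyers), random.choice(sellers)
        bid = random.uniform(0, b)
        ask = random.uniform(s, price_cap)
        if bid >= ask:
            surplus += b - s
            buyers.remove(b)
            sellers.remove(s)
    return surplus

values = [random.uniform(50, 150) for _ in range(50)]
costs = [random.uniform(50, 150) for _ in range(50)]
# Maximum surplus: match the highest values with the lowest costs.
best = sum(max(v - c, 0) for v, c in zip(sorted(values, reverse=True), sorted(costs)))
print(f"efficiency ~ {zero_intelligence_auction(values, costs) / best:.0%}")

Whatever efficiency a given run reaches comes from the budget constraints and the trading rule, not from any cleverness in the agents, which is exactly the co-ordinating power of the institution that Sunder is pointing to.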

Further, these market machines, although they have been created by states to benefit humans, can end up
malfunctioning, in ways that can cause serious harm. Mirowski, who theorises markets as machines that he calls
‘markomata’ describes one such potential malfunction:

In markomata economics, the very notion of ‘market failure’ thus assumes an entirely different meaning. When a markomata
fails, it appears unable to halt. Prices appear to have no floor (or ceiling, in the case of hyperinflation), and the
communication/ coordination functions of the market break down. Hence there exists the phenomenon of ‘circuit-
breakers’, which make eminent good sense in a computational economics (even as they are
disparaged in neoclassical finance theory). Earlier generations of market engineers had
apprehended the need for a manual override when there were ‘bugs’ in the system. And as any software
engineer knows, one never entirely banishes all bugs from real-world programs. Markomata,
therefore, never can become reified as the apotheosis of rationality. [11]

In Mirowski’s framework, markets fail to work in the interests of humans when there are ‘ bugs’ in the
machine. The idea of a circuit-breaker and ‘manual override’ is attractive in theory. In practice, however, markets are in
place exactly because they perform an allocation task that cannot easily be replicated manually - and,
where millions of humans are dependent on resources allocated efficiently by market structures, economic activity cannot easily be brought to
a complete halt. If markets can be ‘debugged’, it must be on the fly.
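
The 'circuit-breakers' Mirowski mentions are themselves simple computational objects. The sketch below is purely illustrative (the threshold, names and reference price are invented here, and real exchanges use far more elaborate rules): when the price moves too far from a reference point, the machine halts and hands control back to humans.

def run_session(price_updates, reference_price, halt_threshold=0.07):
    # Apply a stream of price updates; trip the circuit-breaker (the 'manual
    # override') if the price moves more than halt_threshold from the reference.
    price = reference_price
    for new_price in price_updates:
        if abs(new_price - reference_price) / reference_price > halt_threshold:
            return price, "halted"
        price = new_price
    return price, "closed normally"

print(run_session([101, 99, 95, 90, 80], reference_price=100))  # -> (95, 'halted')

The override itself is a single extra branch in the program; the difficulty, as noted above, is that the wider market machine cannot simply be paused while the bug is fixed.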

Capitalist Markets

Where a surplus exists in a market (which can happen whenever a market is imperfect), states can also construct a legal framework to allow capitalist markets. In these markets, humans with capital can also invest in goods and buy the labour of other humans to obtain a return to capital. Capitalist markets, then, create more complex incentive structures for humans. Humans have incentives to labour, act as traders, or invest their capital. So one of the functions of these market machines, alongside the efficient allocation of goods and labour, is the efficient allocation of capital to create the highest possible return.

The return on this investment then needs a place to be itself invested. To provide an opportunity for this, the market needs to expand. So
humans with capital have an incentive to expand markets. Capitalist markets and humans working with and within them form a machine that
has an inbuilt incentive to expand itself.

To expand, capitalist markets need to find or create new products that can be bought or sold, and to create demand for these products. This
process can involve resource extraction, innovation, or constructing a market in natural resources or human activities that didn’t previously
have a price. It might also mean creating new incentives for humans, to establish that demand.

Further, capitalist markets create a power differential between those humans with capital, and those without. Where there are costs to
enforcing contracts, for example, those without capital are at a disadvantage.
Individual humans acting alone, though, even those with capital, tend, with few exceptions, to have limited resources and limited power in the marketplace. Their participation in markets, and their drive to accumulate capital, is limited by their own desire for wealth and power. If they do have some power in the marketplace, their ability to do harm is limited by their own moral discomfort at the prospect of abusing that power - and by the fact that where capitalists, alone or in partnership, invest their own money in their own enterprise, they are (alone or jointly) fully liable for any harm that that enterprise imposes.

Corporate Capitalism

However, human governments have created machines within the market machine that are also
participants in the marketplace, and that can transcend the limits of individual human resources, and of
human morality. These are corporations. Corporations are institutions that allow large numbers of individual capitalists to
work together to make a profit. They have five essential qualities. First, they are not just participants in the market.
They are also products, that, divided into shares, can be bought or sold. Individual capitalists, or other
corporations, can own shares in any corporation. Second, as corporations bring together the combined
resources of all the individual capitalists who invest in them, they are not limited in size. Third,
corporations are governed according to a rule of shareholder primacy; the humans who are employed
to manage these corporations are legally obliged to put the interests of shareholders first, and rewarded
according to the rewards they bring to the shareholders who have invested their capital in the
corporation. Fourth, individual capitalists or other corporations that hold shares in a corporation are not
held liable for the acts or debts of that corporation, and so the risk they face in investing is limited to
losing the amount of capital they have invested. Fifth, in many jurisdictions, corporations are given many of the legal rights of humans - for example, in the USA, the right to political speech, and the right to fund political activity that that right is accepted to imply - without all the concomitant structures that ensure compliance with human law and moral structures. As Edward Thurlow, First Baron Thurlow, famously said, a corporation has 'no soul to damn, and no body to kick'. As Thomas Ireland writes:

At present, corporate shareholders (including parent companies) enjoy the best of all possible legal worlds. On the one hand they
are, for some purposes, treated as ‘completely separate’ from the companies in which they hold shares and draw dividends, in that
they are not personally responsible for the latter’s debts or liabilities (or behaviour). On the other hand the companies in which they
hold shares must be run exclusively in their interests: for these purposes the interests of ‘the company’ (formally a separate entity)
are synonymous with those of its shareholders. In short, the law treats separate personality very seriously in some contexts
(shareholder liabilities), while ignoring it in others (shareholder primacy, shareholder control rights). The result is a shareholder’s
paradise: a body of law able to combine the ruthless pursuit of ‘shareholder value’ without any corresponding responsibility on the
part of shareholders for the losses arising out of corporate failure or the damage caused by corporate activities or malfeasance. [12]

Corporations are machines that enforce a singleness of purpose, and allow efficiencies of scale, that
make them far more effective than individual capitalists in obtaining a return to capital. The individual
capitalists who own shares in a corporation often do not know what the corporation does (or even that they own those particular shares) - and,
as they bear no liability for those actions, they have no pressing need to know. Meanwhile, those governing the corporation have a primary
duty to provide a return to shareholders. Corporate Attorney Robert Hinkley tells us:

the corporate design contained in hundreds of corporate laws throughout the world is nearly identical... the people who run
corporations have a legal duty to shareholders, and that duty is to make money. Failing this duty can leave directors and officers
open to being sued by shareholders. ... No mention is made of responsibility to the public interest. Corporate law thus casts ethical
or social concerns as irrelevant, or as stumbling blocks to the corporation’s fundamental mandate. [13]

In fact, humans working in a publicly-traded corporation, at any level, whatever their personal morality, have very little freedom to act. Hinkley again:
Companies believe their duty to the public interest consists of complying with the law. Obeying the law is simply a cost. Since it
interferes with making money, it must be minimized--using devices like lobbying, legal hairsplitting, and jurisdiction shopping.
Directors and officers give little thought to the fact that these activities may damage the public interest. Lower-level employees
know their livelihoods depend upon satisfying superiors’ demands to make money. They have no incentive to offer ideas that would
advance the public interest unless they increase profits. Projects that would serve the public interest--but at a financial cost to the
corporation--are considered naive. [13]
They have some leeway to make the corporation’s actions serve the public interest, but only if doing so
does not challenge the primacy of shareholder interests. And where there is conflict between shareholder interests and
their own morality, they only have the choice to act in the interests of shareholders, or to quit their positions, knowing that they will easily be
replaced by others who have greater financial needs, or fewer moral qualms. Humans in corporations can serve the corporate machine more or
less effectively, but they cannot change its overall function, which is to serve the financial interests of its shareholders.

So a corporation is not restrained by human morality, only by regulatory law, and, as it is constructed
by corporate law, acts only to increase its own value in the marketplace, to the fullest extent that the regulatory
law permits. Where that law is weak, corporations can find themselves legally obliged to do harm to
human welfare, if that is in the shareholders’ interest. As Hinkley puts it:
Corporate law thus casts ethical and social concerns as irrelevant, or as stumbling blocks to the corporation’s fundamental mandate.
That’s the effect the law has inside the corporation. Outside the corporation the effect is more devastating. It is the law that leads
corporations to actively disregard harm to all interests other than those of shareholders. When toxic chemicals are spilled, forests
destroyed, employees left in poverty, or communities devastated through plant shutdowns, corporations view these as unimportant
side effects outside their area of concern. But when the company’s stock price dips, that’s a disaster. The reason is that, in our legal
framework, a low stock price leaves a company vulnerable to takeover or means the CEO’s job could be at risk. In the end, the
natural result is that corporate bottom line goes up, and the state of the public good goes down. This is called privatizing the gain
and externalizing the cost. [13]

And because shareholders most often know very little about the corporations they buy shares in, apart
from the potential return to capital offered by the share, and buy or sell on that basis, corporations exist
in a Darwinian marketplace where those that do not provide sufficient return to capital cannot attract
investment, and grow. The result is the survival and expansion of the most efficient, and the most ruthless,
corporate machines. Nancy Fraser identifies how that drive to expansion is inherent in the system, and humans
find themselves compelled by the system:
Capitalism is peculiar in having an objective systemic thrust or directionality: namely, the accumulation of capital. In principle,
accordingly, everything the owners do qua capitalists is aimed at expanding their capital. Like the producers, they too stand under a
peculiar systemic compulsion. And everyone’s efforts to satisfy their needs are indirect, harnessed to something else that assumes
priority—an overriding imperative inscribed in an impersonal system, capital’s own drive to unending self-expansion. Marx is
brilliant on this point. In a capitalist society, he says, capital itself becomes the Subject. Human beings are its pawns, reduced to
figuring out how they can get what they need in the interstices, by feeding the beast. [14]

A Darwinian Marketplace

Corporations don’t just expand. Under the pressure to give a return to capital, and in the Darwinian conditions of the corporate
marketplace, they also continuously renew and improve themselves, gradually evolving into more efficient,
more complex, and more ruthless forms. Less profitable corporations lose investment to competition,
die out and are replaced by more profitable concerns. Profitability can be increased by designing and
manufacturing better products, and by improving the tools that are used to extract resources, and to
manufacture products - or by finding ways to allow humans to work more efficiently. But it can also be
increased by using the power of the corporation in the political marketplace and the information
marketplace to remove restraints on corporate behaviour in ways that diminish human welfare.
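
The selection dynamic described here can be made concrete with a toy simulation (all names and numbers below are assumptions invented for illustration; this is an analogy, not a calibrated model): firms compound whatever return they manage to generate, and any firm whose share of total capital falls below a cutoff loses its investors and exits.

import random

def darwinian_marketplace(n_firms=50, rounds=100, exit_share=0.01):
    # Each firm compounds its (randomly assigned) rate of return; firms whose
    # share of total capital falls below exit_share lose investment and die out.
    firms = [{"capital": 100.0, "rate": random.uniform(0.0, 0.10)} for _ in range(n_firms)]
    for _ in range(rounds):
        for f in firms:
            f["capital"] *= 1 + f["rate"]
        total = sum(f["capital"] for f in firms)
        firms = [f for f in firms if f["capital"] / total >= exit_share]
    rates = sorted(f["rate"] for f in firms)
    return len(firms), round(rates[0], 3), round(rates[-1], 3)

# Typically only a small fraction of firms, those with the highest returns, survive the full run.
print(darwinian_marketplace())

Nothing in the loop rewards anything except the rate of return, which is the sense in which this kind of marketplace selects for profitability rather than for human welfare.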

Where corporations own the sources of information, they are able to propagate narratives and belief
systems that prioritise the return to capital over the needs and welfare of humans . For example, they might
demonise the poor, or the unemployed, or workers’ unions. Or they might call for markets to be introduced into parts of society where they
haven’t historically had a foothold, in order, they might claim, to improve efficiency, or fairness, or overall welfare. Or they might emphasise
the benefits (or intrinsic virtue) of humans working to construct themselves to increase their value as labour, or as a product. As Wendy Brown
points out:

A subject construed and constructed as human capital both for itself and for a firm or state is at
persistent risk of failure, redundancy and abandonment through no doing of its own, regardless
of how savvy and responsible it is. Fiscal crises, downsizing, outsourcing, furloughs —all these
and more can jeopardize us, even when we have been savvy and responsible investors and
entrepreneurs. This jeopardy reaches down to minimum needs for food and shelter, insofar as social-security programs of all kinds have been dismantled by neoliberalism. [15]

Corporate capitalist markets have also created incentives for humans with capital to buy the services of those politicians who are willing to be
paid to legislate to re-structure markets to improve returns to capital - for example, by removing rights from workers, or from consumers, or by
removing regulation that prevents pollution or exploitation of the natural world - and to use public authorities to neutralise threats to the
capitalist machine, such as worker unions, or environmental activists. At worst, the machine has incentivised corporate capitalists to use
military means to create new extractive markets and to force humans who previously existed mostly outside of global markets to participate in
them, as indentured labour, or as expendable slaves. Corporations, under the twin pressures of the requirement to maximise return to capital,
and the Darwinian corporate marketplace, evolve in ever more intricate ways to take advantage of human appetites and fears to allow
themselves to grow. The global corporate capitalist market machine forms a mirror image to humanity that reflects the worst of human greed
and cruelty, as well as the astounding complexity of human innovation.

The ‘Information Revolution’.

In the last half century, corporate capitalist markets, and humans working under the incentives they create, under the pressure of creating returns to capital, have produced silicon-based computers that make these markets immeasurably more efficient, and more interconnected. Markets including these machines are capable of processing much more information than traditional markets, and have made information itself into an increasingly valuable commodity. The speed with which they work, the nature of the algorithms they use - including learning algorithms based on trained neural networks, which act on a set of embedded rules as complicated as the network itself - and the volume of information they process mean that humans are often unable to be fully aware of the exact workings behind the results of their calculations.

Simultaneously, the nearly-instant communication and sharing of data over long distances enabled by these machines have allowed markets to become networked and globalized. With the removal of barriers to trade between countries, as promoted, in the name of efficiency, by those in control of the largest reserves of capital and of the biggest corporations, corporate capitalist markets have merged to form one interconnected global corporate market machine. This gives corporations the ability to escape the regulatory structure of the governments that allowed them to come into being - and so to 'bid down' tax levels, and environmental and social protections, by promising to bring economic activity, and tax receipts, to whichever country allows them to operate with the highest levels of profit.

Simultaneously, the rise of social media has led to the effective privatisation of the ‘public square’.
Social media companies enable physically easier communication between humans - but they also, in the
private public spaces they create, shape and curate the public conversation in ways that privilege profits
from engagement, and subsequent exposure to advertising, not enlightenment. They also collect data from this communication,
and use it, or sell it.

The information revolution has also allowed silicon-based machines to replace humans in places where they can do the same work more efficiently. So machines make trades in the marketplace, they search for and provide information, they set prices for goods and services - and, increasingly, particularly in public/private social media spaces and in private companies, they collect detailed information about individual humans, and work to manipulate the incentives these humans face to ensure that, knowingly or unknowingly, they act to help increase the return to capital.
‘Surveillance Capitalism’

Individual humans can sometimes be rational. But they are also tribal animals that all need food and shelter and social connection, and that are
predictably irrational in their approach to risk, or to the future. With sufficient information, their individual behaviour can easily be predicted,
and they can be manipulated, not just with economic incentives, but with carefully targeted selective information, or with psychological tricks,
to act in ways that are contrary to their own best interests, or the best interests of the society they live in.

In the last three decades, market-driven exponential increases in the speed, information-processing capacity and interconnectedness of silicon-
based computers, together with the improvements in efficiency and reductions in cost that allow billions of humans to carry a connected
silicon-based computer with them, and the market-driven introduction of learning algorithms that can operate at scale, have constructed a new
and almost entirely automated market in human attention, and in behaviour prediction and manipulation of individual humans. Shoshana
Zuboff calls this ‘surveillance capitalism’. This market was originally constructed to allow individually targeted advertising, but the capability to
predict and manipulate human behaviour, and control the information that humans have access to, can be sold to the highest bidder. As Zuboff
puts it:
Markets in human futures compete on the quality of predictions. This competition to sell certainty produces the economic
imperatives that drive business practices. Ultimately, it has become clear that the most predictive data comes from intervening in
our lives to tune and herd our behaviour towards the most profitable outcomes...Data scientists describe this as a shift from
monitoring to actuation. The idea is not only to know our behaviour but also to shape it in ways that can turn predictions into
guarantees. It is no longer enough to automate information flows about us; the goal now is to automate us. As one data scientist
explained to me: “We can engineer the context around a particular behaviour and force change that way ... We are learning how to
write the music, and then we let the music make them dance” [16]

It’s a market that was created quickly, and that operates with very little regulation. Billions of economic decisions that affect us all directly -
what price to charge to whom, who should be offered a loan, how ‘gig economy’ workers should be incentivised to maximise the time they
spend working, which workers should be hired and fired - are now made by proprietary algorithms that remove the opportunity for the
intervention of human judgment or morality in business decisions - and that often make these decisions in ways we don’t fully understand.

Economic Subjects

It’s not just consumers and those who live by their labour whose incentives are structured by this market. The market gives an ability and an
incentive to humans who are holders of capital, or those who work at the head of large corporations on their behalf, to remove the political and
social constraints that reduce the return to capital, by targeting humans who have power over the structures that constrain the market,
through their vote or otherwise, with individually targeted information and incentives.

They can use machines that invisibly control the information available to humans, and identify and use their irrational behavioural tendencies,
to paralyse regulatory political systems, pervert human moral systems and to de-humanise those humans who are no longer useful to the
corporate market machine. Zuboff again:

These economic imperatives erode democracy from below and from above. At the grassroots, systems are designed to evade
individual awareness, undermining human agency, eliminating decision rights, diminishing autonomy and depriving us of the right to
combat. The big picture reveals extreme concentrations of knowledge and power. Surveillance capitalists know everything about us,
but we know little about them. Their knowledge is used for others’ interests, not our own. [16]

Capital-driven nationalist populist movements can win elections by individually and privately targeting voters with disinformation and socially
unacceptable messages to convince them to vote against their real economic interests - and then dismantle or privatise the social welfare and
regulatory structures that keep them safe, opening all of human behaviour to control through markets and by corporations. Zuboff again:

The absolute authority of market forces would be enshrined as the ultimate source of imperative control, displacing democratic
contest and deliberation with an ideology of atomized individuals sentenced to perpetual competition for scarce resources. The
disciplines of competitive markets promised to quiet unruly individuals and even transform them back into subjects too preoccupied
with survival to complain. [17]

The global corporate market machine that we have created is in the process of re-creating us humans as its economic and ideological subjects.

Superintelligence?

In short, we have created a corporate market machine that is now capable of manipulating and
controlling individual humans, and that is infinitely better, already, at this task than any human is, or
could hope to be. And we have given this machine the single, overarching goal of obtaining a return to
capital.

So what we’ve built is an agent with a clear objective - an objective that requires continued expansion -
and a very powerful optimisation function that is driven both by reinforcement learning and by competitive evolution. And we've allowed it to learn to control humans. In fact, we need to allow it to
control humans, if we want to continue to access the resources we need to survive. Bringing it to a
complete stop isn’t an option - even if we knew how.

Does this machine fit the definition of a superintelligence, as proposed by Bostrom et al? Arguably it does.
It is engaged in self-preservation, both through economic control of the channels of propagation of
information and ideology, and through the use of economic carrots and sticks to corrupt or punish
politicians or activists that might oppose it. Its goal is creating a return to capital, and any attempts to
impose variants of this goal are warded off by the same mechanisms. Obtaining a return to capital
depends on constant acquisition both of natural resources and of data. The market-driven development
of silicon-based computers has led to exponential cognitive advancement, both in terms of speed and of capacity
of processing information.

And this superintelligent machine can in fact work miracles of invention and coordination. As Kate Crawford
puts it:

A cylinder sits in a room. It is impassive, smooth, simple and small. It stands 14.8cm high, with a single blue-green circular light that
traces around its upper rim. It is silently attending. A woman walks into the room, carrying a sleeping child in her arms, and she
addresses the cylinder.

‘Alexa, turn on the hall lights’

The cylinder springs into life. ‘OK.’ The room lights up.

The woman makes a faint nodding gesture, and carries the child upstairs.

This is an interaction with Amazon’s Echo device. A brief command and a response is the most common form of
engagement with this consumer voice-enabled AI device. But in this fleeting moment of interaction, a vast matrix of capacities is
invoked: interlaced chains of resource extraction, human labor and algorithmic processing across networks of mining, logistics,
distribution, prediction and optimization. [18]

The primary purpose of this near-miraculous object, and of the massive, intricate structures of resource
extraction and labour behind it, of course, is to reduce by just a tiny fraction the friction between
having the desire for something, and buying it. And the secondary purpose is data-gathering, to reduce
that friction yet further. An immensely complex object that has been produced by what’s at heart a very
simple algorithm - maximising return to capital - itself exists to help maximise return to capital.

It could be argued that a superintelligent machine that is so well attuned to the intricacies of the desires
of humans, and does such an elegant job of fulfilling them, is well-aligned with the interests of humanity .
And it’s certainly true that the corporate market machine has provided us with physical and institutional tools that have allowed us, Echo
notwithstanding, to achieve vast increases in human welfare, on a planetary scale.

However, a superintelligent market machine that is designed to maximise return to capital will not, by
definition, take into account any negative externalities that that maximisation entails - whether they are
costs to human welfare, or to the natural environment - unless they reduce return to capital. The
machine will do harm as easily as it does good unless it is prevented from doing so. Here, governments -
the intelligent decision-making entities that have created this corporate market machine - can attempt to constrain it, with laws and
regulations, so that its actions are aligned with the welfare of humans and preservation of the natural environment that supports them. However, like the superintelligences that AI safety researchers imagine
- and the highly optimized AIs they observe - the corporate market machine will do everything in its
power to escape from any constraints that might reduce its ability to achieve its overriding goal . So it
enters the political market and buys changes to regulation – to the point where, in some cases, it effectively owns the
state. It takes control of sources of information, and individually targets voters to ensure support for
these changes. It lies, where a lie helps it achieve its objective. It expands into every space that has not, so far, been
taken over by the market, and that might, therefore, be subject to less regulation - and it creates new,
and potentially hazardous, products and environments (the corporate-dominated sector of the internet is one) that it
can sell, inhabit, and/or control before the state is able to act.

Where there is a conflict between state machinery that is attempting to protect human welfare or the
environment, and the corporate market machine, the corporate machine has a structural advantage.
State machinery is usually slow, cumbersome, and weighed down with complex and sometimes
contradictory goals. The corporate machine has only one objective, and is structured to do everything in
its power to achieve it.
An Existential Threat?

Is this superintelligent global corporate capitalist market machine an existential threat? Almost
certainly. The machine’s single goal of producing a return to capital ensures that it assigns no value to
planetary environmental support systems, human welfare, or human life, except where these prove necessary to
providing that return to capital. For a short period, while these planetary environmental support systems were
functional, and while human labour was vital to the functioning of the machine, the machine’s narrow
goal appeared to be an advantage, and both the machine and humanity prospered. However, as these
environmental support systems start to fail under the pressures of resource extraction, and where, in the face of
automation, large numbers of humans become superfluous to the capitalist market economy, humans
who have adapted to exist in symbiosis with the machine, and whose welfare, and maybe survival,
depends on it, will fight to defend it . Hundreds of millions of small rentier capitalists, for example, are (sometimes unknowingly)
complicit with, and dependent on, the harm the machine will do, through their pension funds and investment accounts. Millions more work
directly for the corporations doing the harm, and depend on the income they’re paid for that work for their survival. The
machine will
give them the ability and incentives to undermine the political and social structures that might be
created by those who want to repair the planetary support systems, or assign an inherent value to human welfare
and human lives. They will be better resourced than those whose lives are under threat, and as ruthless as the machine
that controls them.

In particular, the climate crisis has proved an intractable problem for politicians, lawyers, and economists. Despite their
best efforts at regular international climate conferences since 1979, the overwhelming drive for a return to capital from
the vast fossil fuel extraction corporations, and the markets that depend on them, has led to decades of
inaction, and no reduction in the rate of increase of the concentration of carbon dioxide in the atmosphere. In the process, our
politics and our channels of information have been poisoned as extensively as our natural environment.
States, and even coalitions of states, are unable to restrain the global corporate market machine, even
in the face of an existential threat.

Our superintelligent market machine will not bury us in paperclips. But, unless we find a way to take
back control, it will destroy the environmental systems that sustain human life, and, before that, it will kill millions of humans that it has deemed economically superfluous; the privileging of economic
activity over the lives of the elderly in the ongoing pandemic might be understood as a first example of
exactly that, and a clear warning of what’s to come. And the machine has enough control over the way we think,
and what we know, and what we value, and how we stay fed, and housed, and warm, and secure that
we will find ourselves in a very weak position if we choose to resist it now. There’s a good chance it may already be
too late to act.

So what’s the purpose of seeing global corporate capitalist markets as a superintelligent machine? It
allows us to strip away the pervasive ideologies about markets and freedom, and to look at the global
economic system through an AI safety lens. What we see is a very simple algorithm that mandates
infinite expansion, and that iterates into immensely complex structures that evade and defy attempts
at restraint. Perhaps there’s a way to hold it in check, but if there is, we need to look at the core itself,
the legal structures that allow corporations and markets to exist in the form they do, and that demand
that they pursue unending growth, not tinker with the extremities that, right now, are causing harm.
Because as soon as we cut back one tentacle, it is in the nature of the creature that another will grow in
its place. So perhaps it’s time for AI safety researchers to suspend their speculations about the future and
address this immediate, urgent, practical problem of an out-of-control superintelligence - and to try to work
out whether it’s possible to restrain it in any way, without causing more harm than good, before it’s too late?

As for the concerns of Bostrom, Musk, Harris and others: if we think forward, just a few years, to the black box that they imagine, and the super-intelligent general AI that it might contain - we already know how that AI is
likely to act. Unless something radical changes, it will be constructed by the corporate market machine
that requires a return to capital, and, even if it is given a cute baby face, even if it appears to adore the
humans that interact with it, providing that return to capital will be its fundamental goal. And it will be
as ruthless in pursuing that goal as the machine that created it, and that it is part of.

So, it’s
good to examine in detail the tentacles that are starting to curl around us, but if we want to
remain free - and, perhaps if we want to survive as a species - we need to first shine a light on the
Beast in the darkness behind them.

In the centuries-long history of white European colonialism and slavery, all threads
lead back to the limited liability corporate form. This structure is the loom connecting
the interwoven violence of anti-blackness, colonial eradication, and ecocide.
David Whyte 20, Professor of Socio-legal Studies at the University of Liverpool, "From Colonialism to Ecocide: Capital's Insatiable Need to Destroy," chapter 2 of Ecocide: Kill the Corporation before It Kills Us, Manchester University Press, 09/01/2020

A rudimentary grasp of this historical development shows clearly that there are some basic features of the
colonial-capitalist system that tell us it is palpably not sustainable; that colonial capitalism is an
intrinsically ecocidal system. In his series of notebooks written at the end of the 1850s, Fundamentals of Political Economy Criticism
(more commonly known as Grundrisse), Karl Marx describes the inherent tendency in capitalism to push back any barriers or limits that stand in
its way. The driving
force of capital is a continual process of expansion into new places where it can
extract natural resources and make things, and into new markets.

Capital by its nature drives beyond every spatial barrier. Thus, the creation of the physical conditions of exchange — of the means of communication and transport — the annihilation of space by time — becomes an extraordinary necessity for it.

Marx argued that the capitalist economic system had an in-built tendency: that capital would always
seek to break through any geographical barrier that stood in its way . His analysis reflected upon how
transportation and communication in international commodity markets continually required the
diminishing of space (or in Marx's terms "annihilation") between where something is produced and
where it is sold or used. When we are able to eat string beans from a farm 6,000 miles away in Kenya, just
two days after they were picked, this is the annihilation of space by time. When a stock market can
enable shares in the corporation that exported the beans to be sold a fraction of a second after those
shares had been bought, this is the annihilation of space by time . And when those shares are bought and sold from a
computer terminal in New York 7,300 miles away from the farm in Kenya and 3,500 miles away from the company headquarters in London, this
is the annihilation of space by time.18

The shrinking of time and space by capital is not merely an effect of the drive to reproduce itself. It is a
necessary part of the process of reproducing capital . The continual process of reducing the time to buy
and sell things and the incursion into new spaces where natural resources can be extracted , where new
factories can be established, and where new markets can be created, is necessary precisely because capital must
reproduce itself. It cannot stand still. This is a dynamic driving force of capitalism: capitalists invest their
wealth in order to either maintain their position or to improve it. And if wealth, in the form of capital, is to reproduce
itself, it must continue to make money from money. This is not a straightforward process in a world in
which almost everyone with wealth is trying to increase their own share. As wealth increases , so the
magnitude and energy of its growth increases; as does the wealth available to other capitalists. And so,
capital seeks ever expanding opportunities to reproduce itself at a continually accelerating pace in continually
shrinking spaces.

Put differently, this is a way of understanding the ecological consequences of growth economics. The view that capital's drive to expansion is unsustainable is quickly becoming a mainstream view. It is certainly not controversial anymore to assert that the economic growth paradigm is impossible to decouple from environmental destruction.

In all of this, the important point that both Marx and more mainstream contemporary thinkers make is that capital
is insatiable; it is
ultimately incapable of ever being satisfied. It is this in-built impetus towards the reproduction of capital
that means the boundaries of wealth production — the barriers to capital — are always pushed to their limit. As
sociologist John Bellamy Foster has noted, the greater the energy of capital's expansion, "the greater are capital's
ecological demands, and the level of environmental degradation". The natural world is just a limit to be
overcome in the same way as a spatial limit; everything and anything might be destroyed by the "juggernaut of
capital ". In a very orthodox Marxist sense, the drive to destroy nature is an impulse that is based on the drive to
expand production. It is in the drive to extract minerals from the earth, to rip up forests, to develop
industrial scale agriculture, and to expand the capacity of factories to make things, that we see the
insatiable appetite of capital for "devouring" nature.

Nature can only be devoured at this rate if there are consumer markets every bit as ravenous . The US
economic sociologist James O'Connor has shown how the relentless expansion of capital , particularly in the twentieth century,
has led to a kind of "hyper-capitalism" in which circles of consumption are ever-expanding. In this
sense, capitalism "tends to maximize the overall toxicity of production and to promote accelerated
habitat destruction, creating problems of ecological sustainability ".23 In this process, both nature and
workers are robbed of the conditions that enable them to live, as "polluted air and water" becomes the mode of
existence for workers in industrial capitalism.

Corporations are essential conduits for the global mobility of capital; they seek to move freely across
borders, making global spaces smaller for investors, and speeding up the time needed to develop or
extract resources, enabling them to locate production across the globe. It is this capacity of capital —
the annihilation of space by time — which was perhaps the most important single dynamic in the
development of European colonialism. And in the course of the colonial project, corporations could be
relied on to overcome every natural barrier that stood in their way; corporations were mobilised by the
colonial powers for the annihilation of nature and the annihilation of people on an unprecedented scale.
Corporations have been, since the early seventeenth century, a central driving force in the struggle to
overcome nature's limits.

If we are to fully grasp the ecocidal tendencies of the corporation, we need to understand that this
dynamic of annihilation was
the modus operandi of the colonial corporation. The annihilation of time and space in order to enable
the annihilation of nature and people is the method by which corporations reproduce capital . It is to a
deeper understanding of how this method was developed in the early colonial period that the chapter now turns.

Corporate racial exploitation


In 1721, English inventor James Puckle set up a joint-stock company to attract investment in a new weapon that used the first machine gun
technology. The whole premise on which Puckle's gun was based was racist. He designed it to enable the operator to shoot two types of bullets
for use in two types of wars. Round bullets could be shot at European Christians, and square bullets at Muslim Turks. The square bullets were
designed to cause more devastating injuries. The barrel of the gun was engraved with the motto: "Defending King George, your country and
lawes is defending yourselves and the protestant cause".25 It was the perfect gun for a racist, colonial state.

Puckle had no trouble attracting investors to this project. Indeed, his company was one of thousands of schemes, inventions and adventures at
the turn of the seventeenth and eighteenth centuries that sought to attract investment through a shareholder model. In the end, Puckle's
invention was rejected for government and military use, not for lack of enthusiasm for a machine that sought to intensify the violence of its
user's racism, but more simply because the mechanism did not work properly. The lack of any prospect of a contract to make the guns meant
that the company collapsed. Every penny of investment in the company had been spent on the development of Puckle's gun. It was a typical
event in an age of failed schemes. Although the market in joint-stock shares was not a particularly significant part of the economy in the late
seventeenth/early eighteenth century, there was a proliferation of share ownership schemes in this period. As early as 1695, there were said to
be more than 150 English companies whose shares were being dealt in the coffee shops around London's "Exchange Alley". Many of those
schemes turned out to be fraudulent, or unsuspecting investors at least were purchasing shares for ventures that had no serious chance of
generating a capital return.

Perhaps the most famous of those failed schemes is the South Sea Company, formed in 1711 with the British government's promise of a
monopoly on trade in South America. This story starts with the ongoing war with Spain. At the time, the Spanish controlled most of South
America, including all of the main Atlantic ports. However, the end of the War of the Spanish Succession and subsequent Treaty of Utrecht of
1713 granted Britain a licence to supply the Spanish colonies with 4,800 slaves per year. The value of the South Sea Company's share price rose
quickly with the promise that port concessions could be secured. In 1718 war broke out with Spain again and the company's assets in South
America were seized. As soon as the company was refused access, the share price collapsed and the event, known as the "South Sea Bubble",
precipitated the first stock market crash. The collapse of the South Sea Company led the British Parliament to take action against this form of
speculative investment. A subsequent parliamentary inquiry found that a number of politicians, officials and company officers had profited
unlawfully from the company and their assets were confiscated as a result. The 1720 Bubble Act banned the buying and selling of shares by
anyone who was not involved in the management or direction of the company. The Act led to a number of companies being wound up or taken
into ownership of the Crown. In the hundred or so years following the Bubble Act, the issue of charters was largely limited to large scale public
works and building projects. Any partnerships tended to be unincorporated, and although these were numerous, in practice, Parliament
regulated those associations and considered legal matters pertaining to them on a case by case basis. The Bubble Act was eventually repealed
in 1825 and the practice of unhindered buying and selling of shares was once again legalised. The 1844 Joint Stock Companies Act allowed this
model of investment without the need for a charter. Under the 1844 Act, companies could now simply establish a constitution and apply to the
government body, the Registrar of Joint Stock Companies for incorporated status.

In standard historical accounts, the South Sea Bubble is described as the first stock market crash; in more critical histories, it is used as the
archetypal example to indicate the fundamental instability of financial markets. Yet critical histories rarely highlight precisely what those
“victims” of the South Sea Bubble sought to profit from. The main purpose of the South Sea Company was to expand British control of the slave
trade from the Caribbean and North America into South America. Those “victims” were rubbing their hands at the prospect of a major
expansion in the human slave trade. Indeed, the South Sea Company prospectus promised a monopoly on the supply of slaves to Spanish
plantations for 30 years. The investors knew they were investing in a scheme to ship 145,000 slaves from Africa to South America. They would
not have described themselves as such, but those investors were slave traders. The fact that history characterised the plight of those investors
as victims and not slave traders is hugely significant. The partial history of the South Sea Company, and indeed, the more general failure of
those histories to recognise the racist colonial utility of the joint-stock corporation and the savage logic that shaped its form has allowed
historians to miss the point of the corporation.

The point of the corporation is to reduce both the financial exposure and the moral accountability of investors and enable them to look the
other way. In the process of buying stock in a corporation, an important social relationship is established. Investors put money into a joint fund
for one reason: because they expect a good return on this investment. In the process, they do not need to have anything do with the way their
money is spent. As shareholders, they have the right to turn up at an annual general meeting to vote on some strategic issues, such as the
composition of the board of directors. But if they choose not to, they don’t need to know anything about the affairs of the corporation. They do
not need to think of themselves as slave traders, merely as anonymous investors.

Just as the corporation is the institution that assists capital to disregard the limits of geography, or labour
supply, or resource scarcity, at the same time the corporation enables capitalists to disregard moral limits
or indeed any limits on their profiteering that we might expect to be consistent with a basic sense of
humanity. Capital pushes beyond both moral limits and limits of humanity. There is no less polemical way
to put this: the corporation is the perfect mechanism for a brutally violent, racist colonial state. As the
Marxist international lawyer Robert Knox points out, colonial racism was always “part and parcel of the logic of capital
accumulation”.27 This is also the basic fact of the corporation. Corporate racial exploitation is buried
deep in the foundations of the history that made capitalism . This basic fact must be understood in order to understand
the corporation’s parallel drive to overcome nature’s limits.

The colonising machine

From the late 1500s onwards, the corporation became the preferred organisational model that was used
by the European colonial powers to enclose land, organise slavery and monopolise trade. The colonial
corporations were established to open new trade routes and to settle new lands for the English, and latterly,
British, Crown. They included the Company of Merchant Adventurers to New Lands, chartered in 1553 to open up a new trade route to China
and Indonesia. This company was a partnership of 240 investors, each of whom invested £25 in exchange for a share in the company. The East
India Company, chartered in 1600, probably the most infamous of the colonial companies, was granted exclusive rights to trade and to establish
trading posts in the Indian sub-continent and South East Asia. As such, the East India Company played a crucial role in establishing the forms of
commodity extraction and land-grab upon which the British Empire was founded. In order to build the capacity for the expansion of commerce
and trade, most of the colonial corporations were given a monopoly on their trade. The London Company and the Plymouth Company were
established to open up monopoly trading routes to the Americas. The State of Virginia was founded by the Virginia Company in 1607 and the
State of Massachusetts by the Massachusetts Bay Company in 1628.28 Those companies were granted charters by the Crown that gave them
“the right to transport settlers and their supplies into the colonies, with the power to defend them”.29 Indeed, the
right to bear arms
enshrined in the second amendment of the US constitution uses a strikingly similar formulation to the
constitution of the Massachusetts Bay Company.30 It is in this model of the colonial corporation that the
state delegated its own monopoly over the right to use legalised violence.
Many of the British and other European colonial corporations operated on a joint-stock basis. The various Dutch colonial trading enterprises
amalgamated under one joint-stock corporation, the United East Indies Company, in 1612. In the early years of the English East India Company,
wealthy merchants and the English ruling class owned the company’s shares, and they were owned on a temporary basis. In 1657 it was
established as a permanent joint-stock company. Advantages to investors were clear. Since the colonial companies were invariably given a
monopoly over a trade route, or the governance over a resource-rich region, the returns were high.

The slavery market was developed and overseen by the African trading companies. Perhaps the most
important of those, the Royal African Company, was established in 1662 by the English Crown and
merchants from the City of London, originally to trade and mine gold on the west coast of Africa. By the 1680s,
the company was trading 5,000 people as slaves every year. Although the company lost its monopoly
twenty years after it was founded, because the right to trade slaves “was recognized as a fundamental
and natural right of Englishmen”,31 by this time it had served its purpose as an “academy” for the large
number of slave traders that followed it.32

The Royal African Company had established forts across West Africa, engaged in wars with trading rivals ,
particularly the Dutch, and between 1694 and 1700 was a major participant in the Komenda Wars in modern-day
Ghana. The English (latterly, British) East India Company33 was also to play a key role in the exercise of military and administrative control in
India, acting as a proxy for English, latterly British, foreign policy. Throughout two and a half centuries, the East India
Company was implicated in countless atrocities as part of an ongoing military suppression of local
peoples until its rapid demise following the “Indian Rebellion” of 1857.34 Its proxy colonial role on behalf of the British government
gave it the necessary political cover to engage routinely in bribery and illegal trade.35 Indeed, the company
waged war against its French, Portuguese and Dutch counterparts to protect its access to raw materials and the factories and warehouses set
up along the Indian coastline. As one history succinctly put it, the
East India Company was the product of a contractual
bargain with the state, despite existing “on constant life support, repeatedly having to justify its
existence to the state”.36 It was their close alliance of mutual interest that enabled the colonial
corporations to run their domains as the commissaries and direct representatives of the colonial state.
Indeed, many of those companies had close connections to government ministers and the Royal family.37

The Indian Marxist economist Utsa Patnaik has carefully documented how a
total of $45 trillion of wealth was extracted from
India during the colonial period.38 This was a theft that has probably not been equalled in scale or audacity
before or since. The East India Company played the key role in this grand theft. The company would pay artisans, farmers
and merchants for raw materials and products and export them. The money they used to pay them was then taken back from local people as
taxes. For investors, this was therefore the perfect no-risk business model.

Other colonial companies used stolen land as collateral to underwrite a no-risk business model. Although
the Virginia Company leaked money in its early days, the board simply parcelled up land and gave each investor 100 acres per share. Most of
those shareholders planted tobacco, a key commodity in the North American slave trade. The Massachusetts Company upped its game by
providing 200 acres of land for each share.39

The capture of new territories and the securing of trade routes was a competitive business that required
a national mobilisation of capital. Those companies were established to enable the burden of economic
risk in the colonies to be spread more broadly and to be absorbed by the state and by private wealthy
individuals. This, in essence, is why the first limited liability companies were created: to enable private
investors to shoulder the financial burden that alone the Crown could barely afford during this period
of rapid global expansionism. The joint-stock corporation gave the state the financial capacity to
colonise faster and further afield; the colonial corporation harnessed the voracity of capital's drive to
reproduce itself.40 In the capture of foreign territories, in the treatment of their rivals and of the native
occupants of the lands they seized, the corporation could be ruthless and indifferent to the human
consequences of the colonial project.

The consequences of corporate colonisation for human ecosystems were disastrous. Corporate
colonisation was based upon the enclosure of traditionally owned and managed land . This meant exerting
absolute control over forests and farmlands, rivers and lakes. The colonial corporation was crucial to this process of
agricultural industrialisation in the Americas, in India and in Africa.41 In a famous passage that became known
as the “Lauderdale paradox”,42 Scottish politician James Maitland, 8th Early of Lauderdale wrote in 1804 that the
Dutch East India Company was destroying crops at a time of plenty, and that the Virginia Company burned a
proportion of their tobacco crop in order to maintain scarcity, and therefore a healthy price for their product. The apparent
paradox that this presented to economists of the time was that the public interest (plentiful and cheap supply of
crops) always suffers in the drive for private gain (the manipulation of scarcity and price).

In Capital, Marx had written extensively on a very different paradox. He witnessed a cycle that later generations of economists might have
called “creative destruction”. The rapid development of industrial agriculture was causing the exhaustion of soil.
This led to a boomerang effect in which new nitrate-based nutrients were added, and the soil exhausted
further. Larger quantities of nutrients would then need to be added to the soil, thus continuing a self-
destructive cycle. This was a paradox that affected the new industrial farmers of Britain and the US most acutely. The only way to
deal with the problem of soil exhaustion, without reverting to traditional methods of agriculture, was to
“mine” bird droppings (guano) from islands off the coast of Peru and latterly Chile and replenish the soil
artificially with nitrates. Indentured Chinese workers in the guano islands absorbed the appalling human costs of this trade. John
Bellamy Foster and his colleagues concluded that this was a typical example of “ecological imperialism” that brutally
gave impetus to the “enormous net flow of ecological resources from South to North” and facilitated
labour conditions that Marx called “worse than slavery”.43 Crucial to this story was that the extraction of guano from
Peru’s islands and coastline was organised by colonial trading companies. The trade to Britain was organised by
Anthony Gibb and Son (now part of the transnational insurance firm Marsh and McLennan) and to the US by Grace Brothers and Company (now
the US chemical firm W.R. Grace and Company). Those corporations
provided the necessary capital investment and as
part of the deal offset part of the Peruvian national debt. In doing so they were the motor force behind
this bold attempt to overcome nature’s limits.

The argument in this chapter is not that this cycle of never-ending “creative destruction” could not have
happened without the corporation. The European states would most likely have committed the
atrocities of colonisation without being aided and abetted by corporations. Yet, at the same time, it is
undeniable that corporations were crucial to the speed and the force with which colonisation
proceeded. From the period of early European colonialism onwards, close and mutually reinforcing relationships
between corporations and nation states have been the key powerbrokers in capitalist states.

Indeed, there is a relatively hidden history of the twentieth century that reveals the collusion of major
corporations with the most brutal of states. It is a story in which many of the largest household name
corporations have collaborated and supported the most brutal and violent states and have participated
in the most reviled acts of war and even genocide. It is a history that reveals how the limitless expansion
of capital pushes corporations to make profits from the most notorious totalitarian regimes, even when they
are the governments of “enemy” states.
The tyrannical corporation

In the previous chapter, we noted that ChevronTexaco have been implicated in the cultural genocide of two Amazon tribes that no longer exist,
the Tetetes and the Sansahuari. This is by no means the only human atrocity that the company was linked to in the twentieth century. In 1935,
Texaco signed a deal with the Spanish Republican government that would have made Texaco its major fuel supplier.44 Texaco was so desperate
to sell the oil into Spain, that a year later it changed sides, breaking the US embargo against the Spanish dictator Francisco Franco. Texaco
offered Franco oil on credit and used its network of tankers to smuggle fuel directly from American ports to nationalist Spain. The Franco
regime was the regime that “disappeared” more people in the second half of the twentieth century than any state other than Cambodia.45 For
his services to the Fascist state, Franco awarded Texaco chairman Torkild Rieber the title of Knight of the Grand Cross of the Order of Isabella
the Catholic.46

The centrality of domestic corporations in the rapid rise to power of the Italian and German Fascists was excavated as early as 1936 by French
scholar Daniel Guerin in his classic account Fascism and Big Business.47 However, Mussolini and Hitler were sustained not only by their own
loyal businesses, but also by the patronage of international capital.48 The roll call of names, of just the main corporate players, is astounding.
The banks Union Bank of Switzerland, Credit Suisse,49 Barclays Bank,50 and Chase Bank (now JPMorgan Chase51) were implicated in assisting
the theft of Jewish property. General Motors,52 ITT,53 Standard Oil of New Jersey (now ExxonMobil)54 and, perhaps most famously, IBM,55
are alleged to have knowingly provided vehicles, weapons, fuel and surveillance technology, without which the Nazi regime may not have been
able to commit the holocaust.

Throughout the second half of the twentieth century, the most brutal, racist regimes across the globe
have been readily supported by corporations. The racist regimes of South Africa and Rhodesia were
sustained by US and European capital.56 In Latin America and in Asia, Western corporations have been routinely
implicated in the “disappearance” of trade unionists and community leaders in the fruit industry,57 the
soft drinks industry,58 in the oil industry59 and in clothing.60 It is household names like Chiquita, Del
Monte, Coca- Cola, Dorothy Perkins and Primark that are implicated either directly, or via a supply chain
relationship, in such disappearances. In Guatemala, Argentina and Brazil between 1964 and 1986 corporations were involved in
hundreds of documented disappearances and assassinations. Amongst those mentioned in Argentina were local firms Ledesma and Dálmine, as
well as Mercedes, Ford and Fiat.61

Less than 30 years after ITT’s collaboration with the Nazis, Chilean socialist President Salvador Allende addressed the General Assembly of the
United Nations. He took with him a document written by ITT officers in Chile and New York which proved that the company had authored an
18-point plan to strangle Chile economically, to carry out diplomatic sabotage, and create panic among the population, which would cause
social disorder and precipitate a coup. A year later, the coup and the subsequent terror killed and “disappeared” over 20,000 people.62 But
what did the executives at ITT care, for the corporation was well rewarded by the regime, recovering a total of $235 million lost revenue and
assets from the military junta.63

Salvador Allende’s Chile was part of an anti-colonial movement of states in the developing world that sought to change a world order
dominated by Western governments and Western corporations (and by the Soviet bloc) sometimes called the “Non-aligned movement”.
Allende himself was involved in a series of initiatives in the 1960s and 1970s that sought to reset the balance of power from an anticapitalist
and left-socialist perspective. This movement can be traced back to the 1955 Bandung Conference in Indonesia, which brought together 29
Asian and African states. Most of them had very recently gained political independence and together they represented 54% of the world’s
population. The 1966 Solidarity Conference of the Peoples of Africa, Asia and Latin America, or the “Tricontinental” as it became known,
brought together representatives in Havana, Cuba, drawn from national liberation struggles and independent governments in 82 countries.
Meetings like Bandung and the Tricontinental gave impetus to initiatives that explicitly sought to unravel the development of “neo-
colonialism”.
Neo-colonialism

The core idea of neo-colonialism, perhaps most famously articulated in Kwame Nkrumah’s (1965) book of the same name,64 was
that colonialism had not ended with the liberation of former colonies that became “independent” in the
late twentieth century. Colonisation had been perpetuated through economic and cultural strategies that removed
the need for direct political rule. From the neo-colonial perspective, transnational corporations were
viewed as mechanisms for advancing the colonial power of the developed world in more easily hidden
and plausibly deniable forms. In a key passage, Nkrumah notes:

The British Empire has become the Commonwealth, but the proceeds from the exploitation of
British imperialism are increasing. Profits of British tin companies have ranged as high as 400%. The latest dividends to
British diamond shareholders are close to 350%. On one occasion Mr Nehru [the Prime Minister of India 1947–1964] made it clear
that British profits from independent India had more than doubled … [A] recent survey made plain the plunder of British
monopolies. It listed 9 out of 20 of Britain’s biggest monopolies as direct colonial exploiting companies: Shell, British Petroleum,
British American Tobacco, Imperial Tobacco, Burma Oil, Nchanga Copper, Rhokana Corporation, Rhodesian Mines and British South
Africa, five of which are directly engaged in chiselling away Africa’s natural resources … Incredibly the list leaves out two of the
world’s greatest combines, those states within a state – Unilever and Imperial Chemical Industries – whose operations are based
heavily in their overseas exploitations. The United Africa company leads for Unilever in Africa; about a third of I.C.I. and its
subsidiaries operate overseas.65

Nkrumah, who was President of Ghana and an important figure in the movement of unaligned states, offended the US State Department so
much with his book that it immediately cut $25m in aid to the country.

It also became clear from the 1960s onwards that the mere fact of having abundant natural resources would
not necessarily lead to post-colonial emancipation. As the example of countless African and Asian states demonstrates, the
structure of corporate capitalism ensures what has come to be known as a “resource curse”.66 The model
and its effects are always the same. Big Oil (or Big Mining or Big Diamonds) arrive on the scene with the finance
and the technical know-how to develop the “resource”. From this point on, the economy is driven by
corruption, power is concentrated in narrow elites, and most of the money leaves the country anyway. Indeed,
much of it leaves through the back door via a combination of opaque corporate structures and shady
offshore trusts.67 The population is normally left poor and the land and water supplies poisoned. This might seem
like a cynical summation of the resource curse, yet it is a pattern that is repeated time and time again.

Militarised conflicts in the Global South are almost all resource conflicts. Even when they look like
“ethnic” or “religious” conflict, they are almost invariably conflicts over the control of resources. And the
intervention of Global North countries in global conflict is normally for the same reason, albeit
positioned in a language of “humanitarianism” or “self-defence”. Even when Global North countries are
involved in conflicts where access to resources is not immediately at stake, the geo-strategic aim always
involves access to resources.68 And today, as has been the case in the history of colonialism, it is the corporation that
plays a key role in securing access to those resources. Although it was controversial to claim it at the
time, we now know, due to the release of government documents, that major figures in the US and UK
oil companies had been involved in the planning of the invasion and the securing of the oil fields in the
2003 Iraq War.69 And we know that UK ministers intervened directly on behalf of the British oil company BP in
the negotiations of the carve-up of Iraq’s oil fields.70 In almost every region where there is oil, there is
also conflict. As we have seen already in this book, the Global North oil companies understand their
role well, and across Latin America, Africa and Asia are implicated in a large number of oppressive
regimes and low intensity militarised conflicts. Those resource wars are now poised to be fought over
water and food, as well as minerals.71
A number of initiatives emerged in the UN in the 1970s that sought to deal with the problem of neo-
colonialism generally, and the grip of transnational corporations on the resources of the developing
world in particular. But they faced a rising tide of neoliberal political and economic practices pushed by the
US and its allies, known as the Washington consensus.72

The neo-colonial model has characteristics that are common across industries and across geographical
contexts. First, the corporation gains the political support of the host state. A host nation state permits natural resources or local labour to
be exploited, or facilitates the opening up of local markets for products because of some perceived economic or political advantage. This
advantage may be measured in the form of power and influence, the potential for increasing the GDP or earnings potential for a state, or in
some cases it may simply be measured in favours and bribes. Corporate bribes are ubiquitous across a number of industries. Second, those
advantages accrue disproportionately to elites rather than the local population, and often the process of resource exploitation and
development is opposed by the people living in the locale. The process of neo-colonisation therefore commonly involves conflict between the
host state and the local people. Third, the home states in which corporations are primarily based provide a number of political and economic
incentives to both corporations and to the elites of host states. The former may involve the mobilisation of diplomatic support, and export
subsidies and credit guarantees; the latter may involve aid, lobbying and fine-print negotiations in trade deals and the promise of strategic
military or geo-political alliances. As we have seen, the neo-colonial model has required the enforcement capacities of host military forces and
paramilitaries in a number of contexts from Nigeria to Colombia.

Those three characteristics of neo-colonialism add up to a pattern of power-brokering that is remarkably consistent across all of the major industrial processes which pose the most acute problems for the eco-
system. It is the same pattern of power-brokering that characterises the oil and gas industries across the
world’s oil producing regions and enables big agriculture and chemical companies to facilitate the uncontrolled
global spread of toxic fertilisers and pesticides. The same pattern of power-brokering characterises all
major inward investment initiatives in the Global South.
Local autonomy over agriculture in the Global South, previously granted as a concession under the direct rule of colonial corporations and
occupier governments, is now eradicated completely by a combination of voracious corporate land grabs, predatory pricing and asymmetric
trade deals that leave local farmers at the mercy of international capital.73 Local farming traditions are eradicated, while wholly unsustainable
models of farming become globalised.74 The corporate control of global agriculture is argued by some food economists to be the most
significant contributor to global warming.75 Big Agri – the international agriculture industry – is responsible for driving processes that
irreversibly poison lakes and rivers and exhaust the fertility of land, creating a cycle of chemical use to compensate for a lack of nutrients, which
then kills more nutrients. The history of the Peruvian guano industry is repeating itself, except this time no nation state is able to exert control
over the supply chain. It is the large corporations who have the most control over those markets, and yet they themselves are locked into this
cycle of creative destruction. Neo-colonialism, like its antecedents, contains the seeds of environmental disaster, devouring land and water and
people, and leaving behind a devastated eco-system.

The construction of large industrial dams has a similar dynamic. Dams, like most major infrastructure projects, displace and destroy important
habitats and evict communities. Arundhati Roy observed in 2001 that “big dams in India have displaced not hundreds, not thousands, but
millions – more than 30 million people in the last fifty years. Almost half of them are Dalit and Adivasi, the poorest of the poor”.76 Millions
more have been displaced since.

Perhaps the most aggressive products marketed in the neocolonial period are “soft” drinks. The pattern of the marketing strategy traditionally
(since the 1960s at least) has followed a similar path: target marketing strategies at the young and especially the more affluent sections of the
population and then work out to other groups from there, until the product is completely assimilated into the culture. In recent years,
companies like Coca-Cola are not just content with establishing their standard brands (Coke, Sprite, Fanta etc), but follow complex diversified
strategies that introduce a more varied range of products into the market. Coca-Cola is currently embarking on a $5 billion investment strategy
in India – which it aims to make its third biggest global market after the US and Mexico – to introduce new brands and new tastes, including the
“Indian” fruit juice-based drink “Minute Maid Mosambi”.77 All of this sounds very innocent (no pun intended) until we take account of the
social and environmental impact of Coca-Cola in India. Its plants in India and Mexico, in particular, have been widely condemned for their
intense use of local water supplies, which depletes groundwater to perilously low levels and releases polluting chemicals into the water
system.78 In many plants, they give the by-product of toxic sludge to local farmers as “fertiliser”. This creates a dangerous chemical feedback
loop. One NGO, the Centre for Science and Environment, tested carbonated drinks made by Coca-Cola and PepsiCo at 25 of their bottling plants
and reported a “cocktail of between three to five different pesticides in all samples”.79 The Coca-Cola plant in Uttar Pradesh was closed down
after regulators found that high levels of cadmium, lead and chromium and the “excessive” depletion of groundwater by the plant had led to
water shortages and exacerbated droughts.80

In India – as is the case in much of the Global South – many of the most immediate and direct threats to the environment stem directly from
consumer markets. The scandal of “sachet packaging” is the paradigm example.81 Companies such as Unilever have begun to issue small
sachets of products normally available in larger packs, such as soap powder or shampoo, for sale to people who can’t afford the normal sized
packets of those products. Because the packaging is generally plastic, and based on a high volume of smaller packs, the result is that drains are
blocked and water supplies polluted more easily and intensively with discarded packaging, and waste is multiplied.82 Of course, it is the
communities that corporations target in aggressive marketing strategies, the communities that need smaller (i.e. cheaper) pack sizes, who
suffer the most.

The end point of the relentless drive for profits is the creation of new consumer markets that at the
same time exploit poverty and make the conditions for the poor even worse. Those are the consumer
markets that we have begun to realise are deadly for both people and for the planet. This model of
neo-colonialism not only threatens the sustainability of the industry itself, but now threatens the future of the planet.
The ecocidal corporate chain

In contrast with the classical colonial model, transnational corporations have morphed into forms that allow them to push beyond any moral or
legal limits on what they do. The modern forms adopted by corporations operating in neo-colonial contexts typically use complex ownership
chains and/or sit at the head of supply chains in which a number of parties are responsible for organising the production of commodities and of
manufactured goods. Those complex ownership and supply chains maximise the ability of transnational corporations to distance themselves
from responsibility for their environmental impacts, since often the most devastating impacts of corporate activity are not organised directly by
corporations, but are organised by subsidiaries, or by other parties in the supply chain.

This is the issue in the garment industry, when sportswear companies like Nike or Adidas, or high street stores like H&M and Primark use
complex supply chains to ensure their goods are produced in the most cost-effective way. The companies that they contract to carry out this
work produce goods to highly detailed specifications. Included in those specifications are average times taken to produce garments at each
stage of the process, and costs that are apportioned for each part of the process, worked out to fractions of seconds and dollars. In other
words, although the supply chain may be micro-managed at a level one would normally expect only to find inside the organisational structure,
the supply chain ensures that the contracting companies relate to each other as separate parties (though in practice they are obviously part of the same
enterprise). Thepoint is that those companies can and do plausibly deny any knowledge of what is
happening further down the “supply chain”, and formally are not responsible for employees’ pay and
conditions. Supply chains and chains of ownership effectively insulate primary owners and buyers from
liability for violations of rights at the labour-intensive end of the supply chain. The same goes for their
responsibilities to the eco-system.

The destruction of forests accounts for 10–15% of global greenhouse gas emissions.83 And deforestation
occurs in order to meet Western corporations’ demand for raw materials. Four types of products are responsible for
70% of tropical forest loss: beef and cattle, timber, palm oil and soy (mainly used for animal feed).84 Some of the companies most implicated,
like McDonald’s (one of the largest buyers of beef and chicken feed produced in deforested land) have been constantly in the spotlight. Yet
only 30% of the companies that have an impact on tropical forests know exactly where their supply chain begins.85 Just like the Peruvian
Rubber Company corporate executives and investors who, as we saw in the previous chapter, had no incentive to know anything about the
devastation that their company was causing, the same goes for today’s corporate elite. They don’t need to know.

In Peru today, like any other Global South economy, we can find numerous examples of an ecocidal
corporate chain. Let us briefly examine gold mining. The gold mining industry in Peru is in the midst of a crisis that
links an epidemic of worker deaths to a bio-diversity catastrophe. A huge network of illegal gold mines has been
established in Madre de Dios in South East Peru. This network of mines is responsible for the destruction of around 170,000 acres of primary
rainforest in the Peruvian Amazon between 2013 and 2018.86 In addition, the use of mercury in the production process
pollutes lakes and rivers, affecting the fish, a major source of the diet of local people. One study by Peru’s Ministry of Health
found that 78% of the local Nahua community had dangerously high levels of mercury in their blood .87

This network of small-scale (or “artisanal”) mining needs no major investment or heavy equipment and has relied on a steady stream of poor
workers from the Andean highlands to sustain it. It is estimated that there are tens of thousands of child labourers in Peruvian artisanal gold
mines, and there are widespread reports of forced labour conditions.88 Appalling working conditions mean the risks of death and injury are
high. Major risks include poor ventilation, long working hours, malaria, and mercury poisoning. Research shows that exposures to silica – the
cancer-causing dust – are probably over 200 times greater in artisanal mines than in large mines.89 Victims of silica suffer a very long and very
painful death. This is not just an epidemic-waiting-to-happen in Peru. There are at least 15 million “artisanal” miners working worldwide, a
figure that is many times more than those employed in formal sector mines. Those workers tend to be working without any dust control
measures, are rarely protected by a trade union, and very often have no legal protection whatsoever.90
The story of Peru’s illegal gold mining, where slave labour-like conditions are the norm, is not, as first
appears, a story of mercenary local gangsters and people smugglers. This is part of the story. But it is
much more obviously a story of complex corporate structures. As in all global supply chains, there are
large corporations – normally based in the Global North – at the top end of the chain. Those corporations take
the highest cut. Insight Crime, a group of investigative journalists dedicated to uncovering organised crime in Latin America, allege that those
corporations either profiting from or financing the mines include: Metalor Technologies and MKS Finance from Switzerland; Northern Texas
Refinery Metals and Republic Metals Corporation from the US; Italpreziosi from Italy and the Kaloti group based in Dubai.91

Those companies at the top of the supply chain universally rely on a complex corporate structure that spans the length of the chain. Further
investigations by Insight Crime have shown that “shell” mining
companies92 are incorporated in Peru to give financiers
the plausible excuse that they are using legitimate companies. The revenue is channeled through those
shell companies although they are not doing any actual mining; the mining is done by illegal companies
that are unincorporated. This is a clear example of how the corporate structure is perfectly designed to
mask accountability and to create distance between the real beneficiaries of the supply chain and the
point of production.

Precisely the same principle is applied when complex corporate chains are organised through secrecy
jurisdictions, otherwise known as tax havens. It is estimated that 101 companies quoted on the London Stock Exchange control
over $1 trillion worth of minerals extracted from Africa in five commodities (oil, gold, diamonds, coal and platinum).93 In total, $46 billion a
year leaves African countries as multinational company profits,94 with approximately 60% of the value of those funds as tax avoidance,
funnelled through offshore companies.95 A further $35 billion a year leaves African countries in illicit financial flows.96 Tax havens enable the
extraction of both illicit and legitimate wealth in this form to be hidden through a web of complex corporate structures. It normally works like
this: a “shell” company is established somewhere like the Seychelles or the Cayman Islands, and then a complex web of other parent and
subsidiary “shell” companies are established in a number of secrecy jurisdictions to create a long ownership chain. The
result is that the
origins of the extracted wealth and the beneficiaries at the end-point of the chain cannot be connected.
No matter how environmentally destructive those investments, the source of the profit remains hidden from
plain sight.
Investors are able to maximise wealth extraction precisely because the corporations they invest in are able to maintain a position at the head of
the corporate ownership structure, or at the head of the supply chain. Those subsidiary chains and supply chains are the mode that
transnational corporations based in Global North countries use to extract resources, deplete nature and risk environmental catastrophe. And
typically, this destruction takes place in the Global South, or, when it comes to the change in weather systems, its effects are felt most in the
Global South. At the same time, those complex corporate structures enable the steady growth of industrial development and investor return
regardless of the human and environmental cost.

Conclusion: the perfect vehicle for devouring nature

There is a very clear nexus of capitalist development that links colonialism, genocide and ecocide.97 The
vital life force at the heart of this nexus is the corporation. Naomi Klein talks of the “braided historical
threads of colonialism, coal and capitalism”.98 In this braided history, the corporation is the loom.
As this chapter has argued, the motor force of the corporation is driven by the necessity for capital to reproduce itself. And as part of this
ongoing reproduction of capital, corporations are involved in a continual struggle to overcome nature’s limits. As we have seen, the capitalist
corporation was absolutely central to the long project of European colonisation. Indeed, it was through the European colonial project that the
corporation became the primary vehicle used by investors and by colonising governments to devour nature and human labour. The extraction
of natural resources, particularly from colonised lands, was done on a scale and at a rate that would not have been possible without the
colonial corporation.

The architecture of the corporation made it ideal for colonial adventure; in its current form it is perfectly
designed for a neocolonial world. As it has adapted complex subsidiary and supply-chain structures, the corporation has
vastly expanded its capacity to overcome the limits on the global mobility of capital and limits on
resource scarcity. This adaptation of the modern corporation has expanded its capacity to devour
nature, as if there were no limits to the exploitation of nature itself. At the same time, investors are
even more empowered to disregard any moral limits placed upon them or indeed disregard basic values
of humanity.

The AFF theorizes law as the cloth from which capital is cut. Trusts, protected by
private law, are the root of capital accumulation and rising inequality globally.
Antitrust uses public law to prevent this exercise of private law.
Trusts are the root cause of economic inequality --- inequality impact

Root cause of capitalist formation is trust protection

US (NY state) legal trust system modeled globally

Must focus on the political economy above all else

Matthias Thiemann 20, Professor of European Public Policy at Sciences Po Paris, specializing in
investigating post-crisis regulatory changes in the US, “The Political Economy of Private Law: Comment
on ‘The code of capital- how the law creates wealth and inequality‘”, August,
https://www.researchgate.net/publication/343954617_The_Political_Economy_of_Private_Law_Comm
ent_on_'The_code_of_capital_-_how_the_law_creates_wealth_and_inequality'

Katharina Pistor’s book The code of capital – how the law creates wealth and inequality (2019) is an original and
insightful intervention in the quest to understand both the rising inequality of the last 40 years, as well
as the inner dynamics of capitalism, a social formation that has ruled in western societies for about 200 years now. Pistor shares
many of the convictions of the publications in the journal Accounting, Economics and Law, such as the dangers to democracy inherent in the
corporate form (Strasser and Blumberg 2011, Robé 2011), the fact that firms and corporate form need to be distinguished (Biondi et al 2007)
and that shareholders do not own corporations, but just their shares, it is only appropriate to discuss and present it to the wider audience of
the journal, pointing to its fundamental insights and potential for follow-up research. The title of the book and its set-up evoke both Luhmann’s
system theory with its penchant for binary code as well as Marx’s capital (1955 [1867]). Combining the coding of social systems and their
relentless dynamic in innovating and generating new forms by recursively referring to established elements (Luhmann 1984; 1995) with Marx’s
focus on the structuring effects capital has on society makes this a very inspiring book, which at the same time evokes many follow-up
questions.

But before I go to these linkages, the insights they provide and the questions they provoke, I think it is crucial to appreciate Pistor’s intervention
in her own right and situate it in the context of the discussions of wealth and inequality in the second decade of the 21st century. Doing so, one
sees that Pistor’s intervention is framed alongside the intervention of Piketty (2014), who uses a more phenomenological understanding of
capital as amassed wealth, which is secured and passed on over generations. This understanding of capital is crucial to see Pistor’s intervention
in the proper light, even though she will repeatedly return to Marx’s more advanced notion of capital as a social relationship. Secondly, when
Pistor speaks of wealth creation, there is a certain ambiguity which at times she nurtures herself in the book between on the one hand the
hegemonic understanding of this phrase as the actual production of wealth (a dynamic process Marx sought to capture with the creation of
surplus value) and on the other hand a more phenomenological understanding of the creation of “wealth” as the durable ownership of assets.

Pistor in fact focuses on the latter and looks at how assets are coded in modules of private law to gain priority,
durability, universality and convertibility for these claims to property, thereby gaining for the asset holder not
only priority rights to this asset, but also durability of this claim, universality over all other claimants as well as convertibility into state
money (Pistor, 2019, 12f). In that sense, wealth creation is to be taken literally, as the transformation of claims
that are easily perishable into claims that are made durable and long lasting, hidden away from creditors
and the state, thereby establishing dynastic wealth. These assets cannot simply be taxed or be acquired
during bankruptcy, because using the trust, they are made safe from bankruptcy and hidden from the state’s
view. It is these assets, which are better coded than others and thereby become “more equal than others”, which
drive the dynamic towards rising inequality. These asset-holders are optimizing the trade-off between the exposure to gain
and the risk of loss; or in her words, “business owners…have found ways to capture the upside, while shifting the
downside to others” (Pistor 2019, p.59).
In this way, “The Code of Capital” is a book that seeks to theorize rising inequality during neoliberalism. It is a book about the
capacity of
private law to make wealth durable and links the increasing success to do so in the neoliberal era to
rising wealth inequality. The book thereby is not and cannot be about how law creates revenue growth, although it can speak about
how these are capitalized (s. Wansleben, this issue). Instead, it focuses on the structuration of property rights which permits
certain asset-holders to better weather the realization of adverse events. But the book also has a broader ambition,
as it seeks to theorize dynamics of capitalism in the longue durée, making this not only a book that seeks to theorize the current
neoliberal era, but also dynamics of the capitalist formation as a whole.

In this context, Pistor agrees with Marx that capital is a social relation (ibid, 10f), but she insists that it is brought about in the realm of law, with law not being a representation but the form within which capital is formed. In her words, “law is the cloth from which capital is cut”. This is an important counterpoint to Marx’s analysis,
which never fully integrated the role of the state, even though law and the enforcement of property
rights are crucial for the accumulation of capital. In particular, the granting of property rights and their defense for assets and
the use of contract law, collateral law and trusts to achieve the securing of the wealth accumulated are central,
as Pistor shows. Thereby, her focus is not only on accumulation, but also on securing the gains capital has made,
an aspect which helps us to make sense both of past dynamics but also of the demand for safe assets by financial investors, as these seek to
optimize the trade-off of not only increasing the exposure to gain, but also to minimize the exposure to losses.

This contribution is remarkable and her conversation with Marx is at its most obvious at this point. To
Marx’s dictum “Accumulate,
accumulate! That is Moses and the prophets!” for capitalists, she adds the desire to protect what they have already
acquired. Her study suggests that whenever elites managed to tear down a barrier to accumulation, such as the
lack of commodification of land, they then use the vestiges of the remaining feudal legal order to prevent themselves from
falling backward. It is this persistence of what she, based on Rudd, calls “the feudal calculus” (Pistor 2019, 5) which is one of the
greatest insights in the larger dynamic of capitalist evolution: elites’ use of the legal vestiges of feudalism to
protect what has been acquired. By introducing the two-step dynamic of amassing wealth (encroachment into
commons) and defending wealth, based on the wealth defense industry of lawyers, she provides great insights into
historical dynamics which can be witnessed in capitalism. It is in the survival of feudal legal remnants, which are still in use by lawyers in present capitalism to make ownership of assets durable, that her greatest insights lie.

Structure through agency: the capacities of lawyering

By focusing on the work of lawyers to generate the durability of wealth for an elite, we gain a better understanding of the creation of the
durable structures in which dynamics unfold. Here, her focus on the incremental steps of weaving new legal cloth for capital by recursively
referring to elements of past legal constructions puts her in line with Luhmann’s system-theoretical understanding of law, which insists upon
understanding law as a communication system which unfolds over time (Luhmann 1995, Teubner 1997, Fischer-
Lescano and Teubner 2004). This stance is uniquely apt to capture the dynamics of common law, which is
characterized in its evolution by its decentered unfolding: new legal structures do not have to be
validated as legal, as long as they are not invalidated by court decisions. Crucial here is that this process can operate
under the assumption of as-if legality, maintained by legal opinions of lawyers. It is hence the claim to legality in private law
and in property rights which drives this communication system forward .

As the author points out, there is a fuzziness at the edges of property rights and it is at this boundary that the evolution of the system unfolds.
By focusing on the work of “the Masters of the Code”, which use their ingenuity to “cloak the assets of their
clients in new legal cloth”, she identifies an engine in the evolution of the legal forms that circulate, one that is persistently innovating, seeking to protect the assets of their clients from all negative eventualities, structuring them to make them secure and offer degrees of
freedom to the capitalists, while, and this we might want to add with Marx, guaranteeing them exposure to the process of surplus
value production (for a similar conception of law and the corporate form as a mechanism to structure the exposure of capital owners to risks of
gain and loss in an advantageous way from a neo-marxist perspective, s. Tuerk 1999). Her insight that “the legal code of capital does
not follow the rules of competition; instead, it operates according to the logic of power and privilege”
(Pistor 2019, p. 118) is crucial here.

In one of the highlights of the book, in chapter 2, Pistor theorizes the historical evolution of property rights in the enclosure movement: how agents fought against the king and his notion of feudal property rights and then installed the new notion of modern property rights, in which something can inalienably belong to a person. There are of course forerunners of that analysis, first and foremost in Marx’s chapter on primitive
accumulation, where he not only traces the application of modern property rights systems to indigenous population with their horrible
implications, but most importantly their application to landed property in early modern Britain and Scotland. In that sense, Marx’s account of
primitive accumulation is very much an account of a legal revolution, whereby the installation and enforcement of modern property rights
takes center-stage, shielding landowners from legal obligations to their tenants and being able to dispel them from the land to place sheep
there instead (Marx 1955, 762f). These battles over time made irrelevant the relationship between the lord and the vassal inherent in the
property rights understanding of feudal times, which implied that the lord must secure the livelihood of his tenants (ibid, 768). This
understanding is made void and a new understanding of law, modern property rights of land come about.

The famous chapter 24 in Marx’s Volume 1 of Capital (Marx 1955) has given rise to the theory of accumulation by dispossession, which
encompasses the dynamics of encroachment into collectively held property, enclosures and primitive accumulation by including in the realm of
commodities social affairs which were excluded beforehand. First formulated by Rosa Luxemburg, this theory has been revived by David Harvey
in his work on neoliberalism (Harvey 2005), pointing to the increasing enclosures of services previously supplied as a common good, such as
education or healthcare. These analyses of encroachment to secure persistent profitability for capital sit well side by side with Pistor’s analysis.
What she adds to these analyses of increasing commodification, to open up ever more areas of social life to capitalist production and by
accumulating the public wealth by private capital is an analysis of how the wealthy seek to protect their wealth from the vagaries of business.

What Pistor thereby adds to this account of the enclosures is the inverted dynamic, which takes hold once the land is thus accumulated and
secured, namely, to protect it with the vestiges of the past. By relying on the private solicitors who themselves made the law in practice in
England at that time, the landed elites drew upon old legal constructs such as the use and the trust to make these landed estates bankruptcy
remote. In this vein, her investigation gives empirical material to those calling for non-essential theories of value, which point to the plasticity
and path-dependency of value (Konings 2018, Thiemann 2018). Legal
constructs forged for one purpose at one point in
time are reused at other points and in different circumstances to bring about the conservation of values.
And it is here where Professor Pistor provides an important nuance to the broad analytical view of Marx, whereby the partial
decommodification of land is not necessarily an act of push-back against capitalist dynamics, or of social re-embedding, as Polanyi (1944) would
have it, but rather of the fortification of privilege and power by the ruling elite through law. This
narrower focus on legal
conditions thereby provides a first deeper understanding of historical dynamics: in essence, inequality
rises if the techniques of coding are perfected and left unchallenged.

The rise in inequality becomes the outcome of the prevalence of coding techniques which lead to
different strata of society being unequally affected by tail events. In this way, private law structures wealth
inequalities and provides the foundation for accumulation. Pistor’s book thereby provides a provocative answer to Max
Weber and his astonishment that common law countries saw the rise of capitalism, whereas he had emphasized the need for
predictability which was greater in civil law countries. The answer Pistor’s work suggests is that it is because the mechanisms of the coding of
capital operate quicker and go longer unchallenged in common law countries, allowing a certain predictability for the owners of capital. It often
operates based on an “as-if legality” relationship, whereby constructions of law are deemed valid until
found otherwise, if they can be based on similar prior constructions. This facet of the operation of law in common law
countries provides the basis for the claim of the structure-generating agency of lawyers, acting as agents of capital, an activity into which this
book so superbly provides insights. It is Pistor’s claim that these capacities
of private lawyers to protect their clients’
capital have been amplified by the form globalization has taken over the course of the last 50 years,
allowing them to weave new legal cloth, which make many of these assets unreachable for creditors and
the state.
The global Dimension of the Empire of Law and the Neoliberal Era

Pistor’s most important point is that the coding of capital over the last 50 years has been globalized in a way that allows entrepreneurs to
choose to incorporate in any jurisdictions, as the “incorporation theory” of the corporate form was undermined over the years, allowing
companies to operate in one country based on laws from another. She therefore analyzes the relationship between splintered authority of
sovereigns and the private agents pushing the coding of their capital to escape both private creditors reach as well as the largest “creditor” of
them all, the nation state providing the infrastructure for the business of corporations. The tools she details for bringing about such a plurality
of legal regimes in the same physical place are the institutions of bilateral and multilateral trade treaties, the WTO as well as the EU as a project
to generate a single market.
In this context of collision of systems of law and conflict resolution through courts of arbitration (Teubner and Fischer-Lescano 2004) private
arbitration has gained ever greater importance, finding its apex in investor state dispute settlement schemes which permit corporations to sue
governments when laws are enacted that are seen to infringe upon their liberty to do business. Operating at the edges of different legal
systems, these arbitration schemes recall the Lex Mercatoria, the alleged private law that was used by merchants in the middle ages in Europe
to solve legal disputes that went beyond one legal order. And yet, as Pistor points out, these private arbitration courts are insufficient, as there
is a need to be backed up by the power of a state. Here, her argument is that the realm of private law (dominium) has liberated itself from the
reach of the sovereigns (imperium) (Pistor 2019, 138f). Using
the Anglo-Saxon legal systems of England and New York State,
it has globalized the legal protection of the trust scheme, that was initially invented to protect crusading knights’ estate
from the reach of the king (Kim 2014) and made it accessible to business owners globally.

The fact that it is these two, and not any other jurisdictions, could be further theorized in the book. While this is connected to the fact that New York City and London are financial centers and that hence, in an era of financialized capitalism, it is these two which are responsible for 57% of transnational business transactions, the question remains to what degree we are actually observing a path-dependent process, which has to do
with the fact that it is not simply any state which could guarantee the “pacta sunt servanda” for these contracts, but that there is indeed
something specific about these two. In this context, it would be interesting to further pursue the metaphor of the empire of law and to inquire
into its link to the actual British empire and its former colony. It would have been worthwhile to link the British colonial rule to the remarkable
predominance of English law globally, making up 40% of the total.

Having established the predominance of these two systems further strengthens Pistor’s claim about the preponderance of Anglo-Saxon
common law in the current phase of globalization. But it would have invited further inquiry into the role of the legal profession in these
countries and its impact on the evolution of law as well as the interaction between private and public law. As pointed out by her, judges in
these countries are selected from the bar, which she suggests means that these former practitioners are more inclined to accept legal
innovations than in code law countries, where judges sit at the apex of the professional hierarchy and are rather distinct from practicing
lawyers. She points to the power of private solicitors in the UK in the 19th century, defending the use of feudal contractual constructs to defend
the assets of the powerful and for 21st century capitalism, she points to the increasing role of out-of-court settlements which allow the masters
of the code to operate in an “as-if” kind of legal reality, as legal constructs are not explicitly ruled out. Her book in this sense does shed light on
the interaction between public and private law making, but it traces these developments as stemming from private initiative.

But what is the role of public agents, of public law and state attorneys? Is there really only constrained space for action to limit the negative
consequences of private contractual law for the public? One is left to wonder whether there is no movement within the legal profession against
the undermining of public power in the US, e.g. regarding the capacity to tax? And if it exists, how does it find its expression in the interpretations of existing laws and its impact on law making by parliaments? Pistor uses the image of the traffic lights to visualize public law, which has to be aligned to allow private interests to expedite the pursuit of their interests and which are aligned due to pressure on law-makers. This shows that public decision making can
indeed be a stumbling block for the private initiative of asset holders, but the book does not trace the impulses that limit
this kind of encroaching behavior on the public side. In other words, in this book we learn a lot about private law and how private lawyers have
interacted with public lawmakers to make sure that all lights are green, but we learn little about public law and the historical episodes when the
relationships between the two were actually fundamentally changed.

This was the case in moments where the reassertion of state authority occurred, such as in the
evolution of anti-trust legislation in the 1890s in the US, which was carried by a large populist movement. Similar attempts, albeit at a much smaller scale, occur today: attempts to gain a grip on the tax evading behavior by US citizens and corporations were made by law-makers in the US in the wake of the financial crisis of 2008, when the US used its reach beyond its borders in the Foreign Account Tax Compliance Act to force US citizens to declare and pay their taxes in the US. These attempts to limit dominium by imperium, to limit private law with the help of public law, indicate a beginning reversal of the neoliberal triumph Pistor’s book describes, which from its very beginning sought to limit imperium in order to have dominium rule supreme (Slobodian 2018).

It is here, where we might need more understanding of the factors which are driving these momentous changes between public and private law, that the book leaves a bit of a lacuna, making it able to generate an account that explains the movement towards the neoliberal apex of private interests being placed above public ones, but providing the reader with an insufficient understanding of when the pendulum might
swing in the other direction. For that to occur we need to shed more light on the sphere of interaction between public and private law, looking
at it from the angle of how public law curtails private prerogatives. Here, her brief account of the shift in property rights in 1881 indicates that it
is the unsustainability of the prior debt regime and its very negative impact on agricultural production in the 1870s which was an important
factor.

The shifting applicability of the code of capital to different asset classes


Once one appreciates her evolutionary understanding of the law that draws upon prior elements and couples it with power and class dynamics
in society, the question about the factors that bring about a change of the applicability of modules to different classes of assets comes into clear
relief. As Pistor points out at the beginning of the book, “the analysis offered in this book will show that the metamorphosis of capital goes
hand in hand with grafting the code’s module onto ever new assets, but also from time to time, stripping some assets of key legal modules”
(Pistor 2019, p. 5). While this goal is achieved in a very laudable way, there is a certain lack of synthesizing and theorizing regarding the
dynamics which are responsible for the shift to which assets the code of capital is applicable. In Luhmann’s vocabulary (1986), we are observing
a reprogramming of where the codes are applicable, but the question of how these tectonic shifts in the programming of the code themselves
could be understood remains open. That is, which factors explain the expansion and limitation of the code of capital to different “raw material”
over time?

Here, Pistor’s emphasis on the constant, persistent incrementalism that is entailed in the practice of weaving new cloth based on the legal
modules of the past by her ever-innovating Masters of the code is important, but insufficient. As her example of
the 1881 change in
the validity of trusts to protect landed estates shows, these legal modules are undergirded by power
relationships (power in law), which requires changes in power relationships to be undone. How this
happened in this case is extremely insightful. After a crisis of the 1870s showed the unsustainability of the protection of landed estates from
bankruptcy proceeding, shielding them from creditors, the new parliament of 1880, which for the first time did not have a majority of
landowners in it, decided to change the applicability of the code. It was hence a societal crisis based on the unsustainability of then-current
debtor-creditor arrangements protecting one class of asset-holders over others, which brought change to protections of landed property in the
UK. Would it be correct to argue that the crises of agricultural production brought about by a lack of investment due to overly generous debtor
protection brought about that change in the laws in 1881?

This question has important implications, as we are trying to understand the fate of the current era of neoliberalism, which
seems to suffer from similar problems of unsustainability.

Revisiting the political economy of law


By asking, who is made whole and who is set to lose in moments of crises, Pistor places the analytical attention on the distribution of the
materialization of losses rather than gains. In this sense, her approach holds the promise to provide us with a better understanding of the
current dynamics of widening wealth inequalities over the last two decades, dynamics which can best be summarized as “heads they win, tails
we lose.” But when looking at this new constellation that emerged over the course over the last half century and came to fruition in the last 20,
what can and cannot be explained by Pistor’s account? In a sense, this question also relates to how far her account can at the same
time accommodate general dynamics and the historical specificity of the neoliberal era, which is peculiar.

Today, in
the neoliberal era, inequality is enshrined in an institutional configuration of splintered state
authority and crisis-prone finance as the main driver of growth (Aglietta 2000), making inequality not only
self-sustaining, but actually self-propelling. This is the case because we are living in an era of financial
dominance (Diessner and Linsi 2020), where central banks are underwriting financial markets’ tail risks , countering
the decline of asset values through central bank purchases of assets. Central banks are doing so as they are seeking to maintain the stable flow
of credit to the economy (Braun 2018) and are seeking to generate wealth effects that are supposed to stimulate the economy through
expanding demand. Today, we can state with certainty that these
interventions mostly only benefit the top 1% who see
their financial wealth reconstituted despite financial crashes, further contributing to rising inequality. By
pumping up banks and creditors and making sure shareholders and debt investors are made whole when an
unexpected future arrives, the current institutional constellation cements and propels existing
inequalities.
To better understand how this institutional edifice of present day capitalism was erected and how it might change, we need a deeper
understanding of what motivates state action, which Pistor’s empirical work shows to be of crucial importance for the evolution of coding
practices, but her theoretical work does little to incorporate. This is a great pity, because the sovereign not only enforces contract and property
rights, but is also the agent providing the main material in which wealth today is stored, namely state debt, pointing to the quintessential
hybridity of private and public agents in the constitution and preservation of wealth.

When Pistor points out how private lobbying extended the safe harbor provision, which allows repo creditors to secure the assets they had accepted as collateral in case of the debtor’s bankruptcy and which initially applied only to treasury securities, to other assets that are used as collateral for repos, she downplays the shared interests of the US and the EU, which aligned with the interests of finance to push for financial deepening (see Gabor 2016, Gabor and Ban 2016). It is this shared interest of states and finance in financial market deepening and the provision of instant liquidity that brought with it the expansion of the infrastructural power of finance (Braun 2018), which underlies central bank action today. It is, however, very likely that these different, but joint, interests are going to clash given the rising indebtedness of states.
In this context, I wonder how likely it is that the current institutional configuration will be reversed, and that asset-holders, who are protected both by private law and by the configuration of financial capitalism, which secures their wealth in financial markets based on the systemic importance the latter have gained for the conduct of public policy, will be robbed of these prerogatives, reversing the “heads we win, tails you all lose” configuration. Could it be that the instability of finance and the persistent necessity for central banks to bail out market-based finance, both in 2008 and in 2020 (Menand 2020), might lead to a rethinking of the current prerogatives of private asset-holders?
Pistor is uncertain over whether such change can really occur. Her main point, based on a Hegelian philosophy of rights (as pushed for by Menke 2018), is that property not only bestows rights, but also duties upon those who hold it. With Menke, she maintains that all rights need to be assessed in light of other people’s rights, that they need to be reflexive (Pistor 2019, p. 231f). The problem with an open re-politicization of economic and social life (Menke), whereby legal change results from an open political process, is that altering the existing rights of asset holders will be fought against as “expropriation” and hence require compensation, much as slaveholders were compensated when their prerogatives were abolished (Klein 2014). As she sees these sums as astronomical, she ends her book with a plea for persistent incrementalism, chipping away at the edges of capital assets and empowering non-capital holders, as the way forward against the persistent incrementalism in the realm of private law driven by the Masters of the Code. But at this point it remains unclear who is to push for this, and how. It seems to remain a very idealistic proposal, as the driving force animating such action is missing: whereas the masters of capital are paid very well to protect their clients, can the public muster comparable pay to motivate lawyers to limit these privileges?

Seeking to re-politicize the question of what the ownership of “capital” actually obliges is an important matter, and Pistor is right to point to the dangers of either violent revolutions or the slow demise of the legitimacy of law to justify the ordering of society (Pistor 2019, p. 229ff), which is likely to occur if trends towards inequality persist. If law is replaced by naked power, it might make us all worse off in the process. But pinning one’s hope on an incremental process of chipping away at the edges of capital’s prerogatives to stop this process seems to me to be no credible solution either. After all, one might be left with the joke she shares at the end of the book, in which two peasants, asked how to get to Dublin, answer: “not from here” (ibid, 232).
1AC---Solvency
The 1AC re-interprets antitrust law’s central purpose. The purpose of antitrust law is
not to promote competition, but rather to allocate coordination rights over the
economy. Antitrust’s main permissive function is the firm exemption, which grants
coordination rights to corporations with limited liability legal protections. This
fundamental goal makes antitrust the lynchpin of economic organization, and a
necessary component of any anti-neoliberal politics. By closing the firm exemption,
the AFF enables a social democratic vision of economic control, whereby coordination
rights are granted instead to the state, co-ops, collectives, and/or unions.
Marshall Steinbaum et al 20, Assistant Professor of Economics at the University of Utah, Left Anchor,
podcast episode 155: “Socialism vs. Antitrust with Marshall Steinbaum,” 9/12/20, transcribed by Otter,
https://leftanchor.podbean.com/e/episode-155-socialism-vs-antitrust-with-marshall-steinbaum/
Marshall Steinbaum 31:39

But yeah, I mean, what you were saying, I definitely agree with that; I guess I would push back a little bit on the kind of interpretation of the state moving away, and so, like, the only thing that matters is whether Tim Cook allows Uber to make a living, as opposed to whether, you know, the taxing authorities of every city and their state labor departments and the FTC have a say on it. Like, they're, you know, small potatoes in comparison to the CEO of some company. I think, I mean, that's true about, you know, who wields power in the economy. But it's not right to say that that's because the state has retreated and sort of ceded all control to the capitalists; I think we have to understand the state's involvement, or policy's involvement, as being, you know, kind of inescapable. So the question is, like, okay, so you've got, you know, like, incorporation statutes, like, who's allowed to be a company, to enjoy limited liability or whatever; like, people don't think of that as being part of economic policy. But it absolutely is: not just, you know, is Apple allowed to be a corporation or not a corporation, as, you know, say, a California corporation? I mean, it's probably a Delaware corporation, but whatever. You know, can it operate across state lines? You know, these were big issues in the 19th century. Nowadays, we get things like, oh, if you're a corporation, then basically anything you want to do is legal under the antitrust laws, you know, but people who are not corporations cannot act together under the antitrust laws. So for example, you know, you're talking about, like, oh, Uber could be liable under antitrust for this gigantic price fixing conspiracy, executed through vertical restraints, yes. You know, who has actually been found to be liable under the antitrust laws? Uber drivers, for potentially collectively bargaining their wages against Uber. So it's this idea that, like, oh, you know, these individual drivers, like, they're independent businesses operating on this neutral platform, but they can't get together. That's what the antitrust laws forbid. Whereas this one gigantic corporation that dominates them is absolutely allowed to do whatever it wants. So this is the kind of concept that my colleague and collaborator Sanjukta Paul has called antitrust as an allocator of coordination rights, the title of her paper. This idea is, like, who's allowed to coordinate economic activity? And what she says is that antitrust has what's called the firm exemption. So here she's drawing on what, you know, most every antitrust person recognizes and is known in the jurisprudence as the labor exemption, which is that labor unions bargaining wages within a recognized bargaining framework cannot violate the antitrust law through that collective bargaining. So the idea is, that's an exemption to antitrust's usual preference for competition. What she says is, you know, we have to reinterpret that as there being a firm exemption to antitrust, which is: Uber telling everybody what to do has an exemption from antitrust law by virtue of the fact that Uber is a corporation. Or, the way that we have chosen to allocate coordination rights, in her framework, is to allow Uber to coordinate entire markets, or in the case of Apple, to allow Apple to determine what is presented on its app store, and, you know, it has, you know, pretty strong representation in the retail smartphone market. So it's like, okay, you know, Uber is probably going for relatively upscale clientele, they all have iPhones; if it can't get on the App Store, it can't get on the iPhone, and if it can't get on the iPhone, they have no business. You know, that is the allocation of coordination rights over that market to Apple, as opposed to some other mechanism for allocating coordination rights. And this is where, you know, to get back to what we were talking about earlier, an anti-monopolist framework would say, you know, there's no reason why Apple gets to be the one who decides who sees what. Why don't we potentially, you know, in a kind of co-op context, give that right to, you know, a consortium or, you know, quote unquote, union of app developers, or in the case of, say, ride sharing, like, why don't we have a union of taxi drivers, and they determine, you know, who gets matched with which customer and what the fare is, as opposed to the company determining that
Alexi 35:48

this is so important, and I think it's really worth emphasizing, you know, the point about how jurisprudence and antitrust enforcement does what she said, insofar as it chooses sides, and who can coordinate these things and who's autonomous and who has power. And since we're speaking of Apple, maybe you can talk a bit about how sanitation workers at Kodak back in the 80s had more power to coordinate and kind of exert their power than sanitation workers at Apple in contemporary times, and then you write about how that is kind of an example of, you know, how the separation of workers from lead firms is kind of simultaneous with the erosion, in the jurisprudence, of the Sherman Act's prohibitions on vertical restraints. So, yeah, maybe talk even a bit more about the importance of this.

Marshall Steinbaum 36:40

Yeah, so that's getting to what a great economist, David Weil, has called the fissured workplace. And I think you're referring specifically to an article that was published, I think by Neil Irwin, if I recall correctly, in the New York Times a couple years ago, that was profiling two specific people, one of whom had been a kind of janitorial worker on payroll at Kodak in the early 80s. And, like, she had basically benefited from their, you know, corporate policies that included incentives to, like, go to community college and get credentials. And so she got qualified as a, you know, sort of IT person, she was, like, trained on Lotus 1-2-3 or something from the, you know, dark history of personal computing. You know, she kind of worked her way up through the ranks at Kodak, thanks to the fact that she started in the ranks of Kodak, that is, that she was a janitorial worker on the payroll. She was able to be promoted, basically, to the point of being the head of IT for the entire company at one point. So she was a senior executive, you know, and that kind of social mobility came via the mechanism of a major economy-leading firm that employs people at kind of every stratum of the occupational hierarchy, of the income hierarchy, and is itself a, like, somewhat egalitarian organization in its own right, I mean, insofar as any corporation could be egalitarian within capitalism. You know, I think this is kind of what Wynand was talking about when he referred to, you know, this sort of New Deal state that was created by the National Labor Relations Act and other, you know, kind of New Deal reforms. It's like, that kind of somewhat egalitarian corporate organization is, you know, a thing of the past. And my argument would be, well, it's the erosion of antitrust that made that not the case. So in the instance of Apple, the contrasting individual was, you know, a janitorial services worker who was contracted, so she was employed by some, you know, janitorial services contractor whom Apple contracted with to clean its offices, but, like, there's no way that she's ever going to be promoted to be an employee of Apple, let alone a senior executive at Apple. You know, nowadays, Apple is one of the economy's leading firms. And both firms are considered somewhat technologically innovative in their time. So, like, think of these, you know, kind of economy-leading, like, blue chip companies that, like, defined the apex of the American economy in two different eras. One of them is constructed such that it's possible for a janitor to eventually become a senior executive; the other is constructed so as to make that impossible at all costs. And, you know, I think Irwin's piece gets exactly at this question of employment classification as being a crucial constituent of that changing reality. I would say that the ability to contract everything out and yet control everything so minutely, you know, legally at arm's length, but, like, economically, you know, at a very close distance and with total control to the boss, you know, we have to understand the erosion of antitrust as being just as much a part of that as the non-enforcement of labor laws, the erosion of enforcement of those, and so on.

Ryan Cooper 39:59

Yeah, yeah, that's a great dichotomy. I wanted to also bring up the welfare state. In a couple of these articles, you've mentioned how, you know, the gig economy and various sort of, like, antitrust, you know, trying to escape any kind of liability for being responsible for one's, you know, employees, has materially harmed workers by sort of excluding them from, you know, like, traditional welfare state stuff, which is often administered through, you know, the employment relation. But you've also written about how, like, the CARES Act partly helped with that, and then partly maybe sort of entrenched the bad relationship. But, you know, in general, the CARES Act was like a pretty astounding piece. I mean, it seems mostly expired now. But, like, it was a really interesting piece of legislation that helped people out a lot and kind of revealed a lot of underlying, you know, deficiencies in the way that people in DC have done policy for the last, like, 40 years. So can you kind of go through, like, how the welfare state interacts with, you know, antitrust, and, you know, kind of how the two can complement each other? And how that might be fixed?

Marshall Steinbaum 41:41

Yeah, absolutely. So, we've been talking a lot about this question of the legal employment relationship, and why that matters so much for workers. And a big reason why it matters so much is, exactly as you said, that so much of our welfare state is conditioned on employment. So in some sense this, like, category was kind of, you know, not the main focus of attention at the time of the New Deal. You know, this distinction, the question of, like, employee versus independent contractor, that is an important distinction, as I was referring to in the antitrust cases that we talked about earlier. But, like, this idea that, you know, a lot matters for you economically on the question of whether you are legally an employee or not, that's not true to the New Deal era per se; that's what's been layered on since, and especially since we kind of adopted the backlash-to-the-Great-Society view that the problem with the welfare state is that it causes people not to work and inculcates a culture of poverty. You know, all of that is basically racist drivel. But it's had an enormous impact on the kind of orthodoxy around welfare policy, especially in DC. So, as I've talked about in this podcast, certainly a couple of times on podcasts with Bruenig, and in some other writings, you know, there's this sort of mania for the Earned Income Tax Credit among DC policy wonk types, which is this basically wage subsidy for people who are employed in market labor, and it doesn't help you if you're not employed in market labor, and arguably it hurts you even if you are employed in market labor and you don't receive it, because, by causing people to, you know, sort of have to be employed in market labor in order to gain the benefit, it arguably depresses wages for people who aren't beneficiaries, so it reduces the market wage, basically. You know, the CARES Act is, kind of by chance, the opposite of that. So first of all, you said that the CARES Act was like this revolutionary thing. It was that with respect to the unemployment insurance provisions, so-called pandemic unemployment compensation and then pandemic unemployment assistance; we'll get to what those two things are in a second. The rest of the CARES Act, you know, it also included a, you know, sort of, like, one-off $1200 check from the IRS, you know, for people earning, I guess it was, like, below 100,000 a year. And then there was, like, a ton of stuff that was basically an indefinite extension of a whole, like, firehose of money to, you know, the economy's leading corporations via the Federal Reserve and the Treasury, but I think especially the Federal Reserve. So you're saying it's, like, mostly expired now? Well, not the part that gave capital everything it wanted; that part's not expired, and that's exactly why the other part hasn't been renewed. So there was a sense, you know, the kind of political calculus that gave rise to the CARES Act is like, you know, suddenly a pandemic has hit the economy, it's going to be temporary, you know, so we need something to tide people over: let's juice up the unemployment insurance system, give people $1200 checks, and make sure all these businesses that are facing, you know, huge sudden shortfalls are able to borrow. It's like, oh, but you know, by the way, the last of those things will be permanent, the first of those things will be temporary, because the pandemic is assumed to be temporary. And oh, wait, the pandemic is not temporary, or at least it's less temporary than we thought it was gonna be. You know, those people are suddenly high and dry because capitalists already got everything they wanted. So it's like we're in a
pretty shitty situation, frankly, vis-a-vis pretty much all working people, but the stock market's doing great. Okay, so what did the CARES Act have for unemployment insurance? And why is that such a challenge to kind of received policy wisdom? It basically added this lump sum. So the PUC part, pandemic unemployment compensation, added a lump-sum $600 per week on top for workers traditionally eligible for unemployment. So that's PUC: if you're eligible for unemployment, there's some state formula that says it's a function of what your wages were pre-layoff. You know, generally the lingo in unemployment insurance is replacement rates, so it's how much of your lost wages are, quote, replaced by unemployment insurance; you know, the average in the United States for people who are eligible is something like 50%. And, like, 50% of unemployed people aren't eligible or are not able to collect it, you know, a very, like, leaky-sieve-type system. That PUC element of the CARES Act upped that number, to whatever the replacement rate was under state law plus $600, which for a lot of workers is basically, you know, a gigantic windfall relative to the shittiness of the jobs that they actually have to do. So many workers, especially in low-wage occupations, experienced, you know, better pay when they were receiving the PUC than they did in their jobs, and than they're ever likely to get in their jobs. PUA was the version of that for the gig economy. Basically, it was for workers who were not eligible for traditional unemployment insurance, and many gig economy workers were disemployed by the pandemic; this was a fully federal system that essentially gave them access to a temporary pool of unemployment insurance. And the key thing there is, at the time I wrote a letter to Congress with Sanjukta, whom I mentioned earlier, about the fact that they have basically done a kind of ex post bailout of all of the misclassification that gig economy firms have been doing for a decade now. Because they're saying, oh, you know, Uber has never paid a dime in unemployment insurance premiums for its workers, and they become unemployed all the time. Suddenly, in this pandemic, many of those workers are eligible for unemployment insurance, thanks to PUA. So that's great that they're, you know, able to subsist, but instead of paying into it, you know, Uber gets to skate for 10 years on its premiums, and then the federal government pays for that. So that was, you know, kind of, you know, an under-the-radar-screen bailout of the gig economy employers. Anyway, now, you know, we're in this position where these things have been
taken away. And what that has meant, you know, so the interesting thing that's come out in the economics research about the effect of the CARES Act, and specifically these UI provisions, is that, you know, the pandemic is and has been devastating to the low-wage workforce, a huge, extreme spike in unemployment, it's still very high, you know, a lot of service workers have been disemployed. But actually poverty rates went down, and earnings went up, or income went up, because their income was more than replaced by these temporary, generous provisions that were not conditional on showing up for work, because they couldn't be conditional on showing up for work; the whole point of the pandemic is that people can't do their work. You know, given that, like, in the midst of an economic catastrophe we reduced the poverty rate, you know, that, like, flies in the face of everything that we know about how the poverty rate usually goes up when there's an economic recession. And what we just found out is, like, if you don't want that to happen, if you do want to reduce poverty, you have to enact these policies that aren't conditional on work. That's how you reduce poverty: you give people money, basically, and in this case unemployed people are the people who are likely to be low income, to be in poverty. So that's who you get money to. So now, you know, we're kind of, I mean, because of this political misjudgment that had, you know, given capital everything it wanted while the workers' bailout was temporary, you know, now it's like, okay, well, like, please give us something for workers. You know, I think the view had been that, like, the election would be the leverage that, you know, pro-worker interests would have over the federal political system, but that's not the case, actually, because the outcomes of elections aren't terribly responsive to the well-being of the population, which is a big problem that we should probably do something about at some point. But, you know, so now it's like, okay, well, we're sort of, like, pleading for scraps the way that we have been for the last decades, and everyone's reverted to, you know, basically versions of the EITC expansions that have been on their, you know, to-do list for a long time. So it's like, okay, you know, the wonks have kind of gotten back in control of the message and the asks and whatever. And, you know, consequently, the agenda has gotten shittier.

Alexi 49:39

never a good idea to give the wonks power. But now, like, so far, I just want to recap for the audience. We have, number one, the Left Anchor-Steinbaum synthesis of antitrust and democratic socialism; two, new idea, breaking news: let's make government responsive to the needs of the people. Those are the two important things that we're offering now. But no, I think, first of all, the point is very well taken that, you know, our favorite game about the Democrats is: is it malfeasance and/or is it malice? You know, is it just, you know, bad politics, or is it just an intentional, you know, slap in the face to the working people of this country and to the poor. So, yeah, point well taken that the corporations were given a, you know, indefinite lifeline, and then I think they accidentally helped the poor and helped the working class, probably because they didn't realize how low-paying, you know, jobs were out there. Yeah.

Marshall Steinbaum 50:39

Yeah. I mean, that's exactly right. It was pretty clear at the time. I mean, I think the rhetoric in Washington is, like, somewhat responsive, you know, insofar as there's any responsiveness to workers, it's, like, you know, to people who are not precariously employed. So, you know, it's like, any job is a good job, or, that's a little bit of an overstatement, but it's like, you know, what we want to prevent is people losing their jobs; as long as they have a job, they'll be fine. And, you know, there's just very, very little apprehension on the part of, like, the policy elite of, like, just how bad most jobs

Alexi 51:18

but look, Marshall, we all know, worst case scenario, as Mitt Romney said back in the day, if you're really in a tough situation, just sell your
stocks if you have to just

Marshall Steinbaum 51:28

Yes, yeah, yeah, right. Right. Just sell that, yeah, Dad's stock in American Motors or whatever, you know, what you can afford, right? I

Ryan Cooper 51:33

mean, it was a tough thing to have to do. But sometimes you got to just bootstrap it.

Marshall Steinbaum 51:40

Yeah, so, well, you know, now Romney is a resistance hero. He's doing everything he can to bring our Trump reign of terror to an end

Ryan Cooper 51:47

he is, thank God for him, honestly. Yeah, so, I guess, to kind of, like, tie that together a little bit. You know, like, the welfare state is, you know, just, like, a critical lifeline. You know, like the CARES Act shows, you know, that four decades of neoliberalism was all bullshit; actually, we could solve poverty quickly and easily, just by, you know, dumping money on people who don't have money. It's literally that easy. But I think, you know, the interesting thing to me about, like, this whole discussion about, like, market regulation and so on and so forth is that, like, I'm pretty convinced that, you know, insofar as the economy is based to some degree around, you know, private businesses, you know, doing their thing, competition is a fairly useful tool, if it's done right. And that means competition that happens, you know, through a sort of regulated process, because you can have competition that just means trying to cheat and, like, drive the other guy out of business so you can seize more market share; you know, you try to force companies to compete on price and quality. And that means big government, basically. You know, an example I've seen recently, you know, the computer chip market for, like, desktop PCs. That's, like, a pretty concentrated market. But there is competition there between AMD and Intel. And Intel's had, like, a big chunk of, you know, the marketplace for many years; AMD has been sort of a laggard for the last couple years, and AMD, like, they basically just beat Intel, it's better chips for cheaper. And suddenly Intel's on the back foot. And they're doing all this stuff, they're retooling their machine to try to, sort of, like, exceed, and, like, that, I think, is a reasonable process, so long as it's not, you know, like, you don't end up with competition that takes place like, okay, we're shipping all of our, you know, all of our factories to Tanzania, and we're just gonna pay everyone $1, you know, make them buy all their stuff in company scrip, that kind of competition. But, you know, and then also, you could say, like, oh, we're going to set up something like the post office as explicitly a monopoly, but it's going to be a monopoly with a sort of government policy purpose, like everybody has to get the same service for the same price even if it's, like, ridiculously uneconomical to provide it in a certain location. And that's, like, a kind of different thing; that's, like, about quality government and how you set up an agency with some sort of esprit de corps that, like, does a good job. But, like, I think, you know, my sort of, like, fundamental takeaway, and maybe you can sort of quibble with this or qualify it, Marshall, is that, like, the antitrust, and, you know, breaking up, like, full-on monopolies and, like, forcing the businesses to compete decently, and, you know, the sort of, like, welfare state, you know, social democratic vision, these things, like, they can be two great tastes that taste great together. And, you know, like, there's not necessarily a trade-off. And then, like, one could sort of enable the other. What do you think?
Marshall Steinbaum 55:40

Yeah, I mean, I think that you can have, you know, what might be called race-to-the-top competition, I'm not exactly sure what's going on in the, you know, desktop computer chip market, but, like, granting the way you characterized it, or you can have race-to-the-bottom competition, which is basically about sort of chiseling out your company's own regulatory arbitrage, or, like, you're the one who gets to run the taxi company but not actually charge the regulated rate, or you're the one who locates the factory in Tanzania so that you can pollute all you want and pay your workers like crap. And then, you know, you're in, you know, quote unquote, competition with domestic producers, you know, who are then obviously incentivized to do the same themselves. I have tended to move away from the concept of competition, in some ways exactly for the reason that you're saying, and for the reasons I just said, which is that it doesn't really work as, like, we want more of it or we want less of it, because there's different forms of it, as we were just saying. Yeah, and, you know, in particular, I have moved away from that concept of competition vis-a-vis antitrust law. Like, I have now come to the view that I don't agree that the purpose of the antitrust laws is to promote competition. I think that is because, you know, for reasons like that the world in which, you know, a US domestic manufacturer relocates overseas to take advantage of poor environmental and labor standards, you know, that's, like, an act, you know, that could be understood as an anti-competitive act vis-a-vis the workers, but, like, a pro-competitive act vis-a-vis competitors, potentially. And so I don't think, like, you know, a policy regime that gives companies the ability to undercut their own workers through the threat of outsourcing is about promoting competition or impeding competition; it's about, you know, who gets to decide in the economy, who has power, as Sanjukta said, to whom are coordination rights granted. And so my view is, like, antitrust has one disposition of the allocation of coordination rights, or, you know, who gets to operate as a monopoly or as a dominant firm versus who is subjected to their domination, which is to say, subjected to competition; under the current way of doing things, that would be workers. So, like, a dominant employer, you know, subjects workers to competition, so the workers have plenty of competition, and that's what reduces their labor standards. And I think that is exactly what has kind of tripped up, or created, this false dichotomy between, like, anti-monopolism versus socialism, because from a worker's perspective, more competition is bad, because, you know, that's exactly what the economy already consists of, whereas from a, you know, sort of corporate perspective, you know, exactly what characterizes the economy is a lack of competition, that is to say, you know, dominance, not just in any one market, you know, where, you know, many major industries are basically, you know, an oligopoly if not a monopoly, and then, you know, vertical integration and vertical control, you know, that subjects disadvantaged actors to competitive forces and insulates powerful actors from those competitive forces. And what we want is the erosion of the concentration of power, which is to say, at least, you know, through the mechanism of competition, that would be to subject powerful actors to competitive forces and protect unpowerful actors from them.

Ryan Cooper 59:00

Well, well said. Go ahead. I was gonna just ask a, just, out-of-left-field kind of question, because it seems like non-domination seems to be maybe the principle that would kind of work through the synthesis of democratic socialism and the antitrust, kind of, coalitional movement. And what do you think? How would you understand that principle working with other ideas that the left is kind of fighting over, whether it's a job guarantee or UBI? You know, how do you think this overall leftist synthesis should think through what principles can help us kind of navigate these contests, or which policies to kind of fight over and propose as the most important to push for?

Marshall Steinbaum 59:48


Yeah, well, I absolutely do think that non-domination is the principle that's at play here. And that's why I support both UBI and a job guarantee, and I don't believe that there needs to be a clash between those two things. I mean, I have often thought, and if I, you know, had a vast research budget at my command, I would indeed commission this, you know, that there should be a sort of left, pro-labor, like, pro-low-wage-labor agenda that consists of a UBI, like the CARES Act, except not just for unemployed people but including them; a job guarantee, which is to say full employment, you know, a macroeconomic commitment to full employment; and a $15 minimum wage, as well as the enforcement of other labor standards, like maximum hours and, you know, safe workplaces and that sort of thing. All of those things together to me form, like, the three legs of the stool of, like, a, you know, pro-labor left agenda, as against the EITC and basically anything that's conditional on supplying market labor in order to receive benefits. So, like, all three of the things I mentioned, what characterizes them is rights and entitlements accruing to the worker that are independent of any one employer. And all of that is at odds with existing policy orthodoxy, for example the EITC. The other thing that I have written about a great deal is student debt and labor market credentialization. So I interpret the rise of student debt as being basically the federal government's most ambitious labor market policy of the last few decades, which is the idea that, like, oh, if people aren't earning enough in the labor market, they need more human capital, so they need more higher education, and we'll lend them the money to get that higher education, and then their earnings will go up. Like, that has, you know, kind of spiraled out of control, because people's earnings haven't gone up. So they're left with a bigger pile of debt than they would have had otherwise, and consequently aren't paying it off. But, like, the real big reason why the whole, like, student debt and higher education and human capital approach to labor market policy hasn't worked is because it also doesn't take into account employer power and the domination that bosses are able to exercise over workers in a capitalist economy. So the effect of that, you know, student debt thing in the labor market has been to basically shift the cost of training, of being trained for your job or qualified for your job, to individuals from employers or from, you know, the public higher education system; you know, this is just the transfer of those costs to the shoulders of the agent that's, like, least able to shoulder them.

Applying antitrust law to the business firm itself invites a restructuring of the
economy towards forms of coordination currently presumed illegitimate by antitrust’s
firm exemption, such as workers’ organizations, co-ops, and democratic social control.
The firm exemption is one leg of a three-legged stool upholding the economic SQ – removing it collapses
corporate coordination rights

Sanjukta Paul 20, Assistant Professor of Law at Wayne State Law School, “Antitrust As Allocator of
Coordination Rights,” UCLA Law Review, Vol. 67, No. 2, 2020, https://papers.ssrn.com/sol3/Papers.cfm?
abstract_id=3337861
INTRODUCTION

The central function of antitrust law is to allocate economic coordination rights. This means that
private decisions to engage in economic coordination are always subject to public approval, which
antitrust law grants either expressly or tacitly. Currently, its methods for accomplishing this function have
the effect of anointing control and concentrated power as the preferred form of economic
coordination, and to frown upon forms of economic coordination in which power and decisionmaking
are more broadly dispersed. Antitrust law’s current methods for allocating coordination rights include
what I call its firm exemption, as well as its preference for vertical over horizontal coordination beyond firm boundaries.
Antitrust’s methods of allocating coordination rights are ultimately indigenous, and cannot be
explained away by external referents: neither by other areas of law, nor by putatively neutral
conclusions of social science. They are also historically contingent, and have shifted over time.

Practically speaking, the reigning antitrust paradigm authorizes large, powerful firms as the primary
mechanisms of economic and market coordination, while largely undermining others: from workers’
organizations to small business cooperation to democratic regulation of markets. While deploying the
legal concept of competition to undermine disfavored forms of economic coordination, antitrust law
also quietly underwrites certain major exceptions to principles of competition, notably, the business
firm itself. In surfacing the firm exemption, this Article also isolates the underlying, largely unexamined decision criteria for allocating
coordination rights that it employs.

The current paradigm for thinking and decisionmaking within antitrust law has a professed commitment
to implementing the insights of neoclassical economic theory in legal decisionmaking.1 According to
that framework, the aggregate of individual market transactions, rather than direct coordination, will
result in an optimal allocation of society’s resources. But this process of market allocation, which the law
is supposed to facilitate but not displace, itself has no existence independent of prior legal allocations
of economic coordination rights. Those coordination rights are shaped by numerous areas of law—from
property to corporate law to labor law to antitrust, among others. This Article focuses on antitrust law, where this
function is rarely acknowledged. Although the law and economics paradigm has enormous institutional sticking power in current
antitrust law, the basic purposes and methods of antitrust law are also up for debate today in a way that
they have not been in decades. Recent contributions to the antitrust revival have emphasized the law’s traditional concerns with
corporate power and fairness, which were largely written out of antitrust law in the Chicago School revolution.2 Dissenting voices asserted these as legitimate antitrust concerns even prior to the current challenge.3 Mirroring the reformist call to put some limits upon the broad
coordination rights of the powerful, a growing chorus of scholarship has emphasized the need to expand the coordination rights of small
players to some extent or another, beginning with the question of workers and microenterprises caught between labor and antitrust
regulation.4

However, proposalsto reform antitrust, or to reconceptualize it, have thus far generally stopped short of
questioning the basic premise that its primary function is to promote competition. At least officially, if
increasingly uneasily, competition is still king. To be sure, many posit that antitrust performs this stated
function badly, or does not perform it at all in certain markets .5 Even when reintroducing values such as fairness and
deconcentrating power, for the most part the reform camp has characterized those values as flowing from—or at least coextensive with—
promoting or protecting competition. Thus,
the political debate over antitrust has been characterized by all sides
claiming the idea of competition and defining what it means to promote competition in different ways.

In the current moment of paradigm instability,6 this Article aims to serve a clarifying role. Defenders of
Chicago School antitrust tend to view reformers’ concerns—for example, fairness or deconcentrating corporate power—as extraneous to the
fundamental function of antitrust law. That view, however, relies upon the idea that the function of antitrust law is to promote competition and
that the law does so by following the independent guidance of economics. But neither of these things is true. Antitrust
law decides
where competition will be required and where coordination will be permitted. And in accomplishing
that task, its most fundamental judgments are not ultimately derived from a neutral external referent,
such as economic theory. Meanwhile, as the opposition to antitrust’s targeting of small players’ economic cooperation builds, some
have begun to respond that this opposition evinces an inconsistency within the antitrust reform program, which otherwise generally favors
increased antitrust enforcement. But, again, this objection only makes sense if one assumes that antitrust’s purpose is to promote competition,
full stop.
By showing that antitrust in fact already allocates coordination rights, I also show that a
conscious reallocation would not constitute a special exemption from a general principle. Instead, it
would simply be a different allocation of coordination rights, requiring justification no more and no less
than the current one. By reframing antitrust law as this Article does, we can clarify what we are actually
debating: what criteria should antitrust law use to allocate economic coordination rights? What forms
of economic coordination should it permit or even promote, and what forms of economic coordination
should it discourage or even prohibit?
Part I of the Article sets out the doctrinal and logical argument that a core function of antitrust law is to allocate economic coordination rights,
that its disfavor of horizontal coordination beyond firm boundaries is an example of this function, and that this function cannot be reduced to
the operation of other areas of law. Part II then shows how antitrust’s
firm exemption, as embodied in Supreme Court case law,
involves the concentration of economic coordination rights—a preference that is mirrored in other aspects of antitrust
doctrine as well.
Part III briefly describes how these
criteria for allocating coordination rights—preferring control over
cooperation, and naturalizing the coordination embodied in hierarchically organized business firms—
resulted from a historically contingent process within the development of antitrust law itself. Part IV
addresses the contention that this allocation of coordination rights can be rationalized and justified by reference to economic theory, focusing
on a now-foundational argument articulated by Robert Bork.

I. ANTITRUST LAW’S OVERALL ALLOCATION OF ECONOMIC COORDINATION RIGHTS

Antitrust law’s core function is to allocate coordination rights to some economic actors and deny them
to others. This makes private decisions to engage in economic coordination subject to public approval,
which antitrust law grants either expressly or tacitly. Importantly, this reframing is an analytic claim that
redescribes existing reality; it is not a normative claim about what antitrust law ought to do. That said, reframing antitrust
law this way renders visible economic coordination that has been naturalized and invites us to consider
anew forms of economic coordination that have been presumed illegitimate. Ultimately, transparency
about antitrust law’s core function should lead to transparency in performing it—that is, in articulating
and defending the criteria by which coordination rights are allocated. Currently, those criteria are often obscure and
implicit; where they are acknowledged at all, they are often presumed, incorrectly, to be derived from the independent conclusions of social
science.

Economic coordination is always either authorized by antitrust law, or not. For any given instance of
economic coordination, and certainly for any instance of economic coordination implicating prices, antitrust asks—either
explicitly or implicitly—whether that coordination is justified, and then answers that question one way or
the other. Moreover, the answers that antitrust gives to these questions are not derivable from property,
contract, or corporate law—though its answers interact with each of these.

Currently, antitrust law tends to allocate coordination rights , across doctrinal areas, according to criteria that
systematically prefer concentrated control over dispersed coordination or cooperation. If we envision
antitrust’s approach to allocating economic coordination rights as a three-legged stool, its conception of
the firm is one leg. The other two are its treatment of horizontal coordination beyond firm boundaries
and its treatment of vertical coordination beyond firm boundaries. In deciding how to evaluate interfirm coordination,
antitrust law first decides whether that coordination is horizontal (between competitor firms in the same market) or vertical (between firms in
adjacent markets, such as supplier or distributor relationships). Antitrust law’s stark preference for coordination accomplished through vertical
contracting over horizontal interfirm coordination mirrors the criteria according to which the firm exemption itself is applied. Both preferences
embody the preference for control over cooperation, which is to say, for the concentration of economic coordination in fewer rather than in many
hands. This Article focuses primarily on the firm exemption because it is the most obscure of the three legs, and because both vertical interfirm
coordination and horizontal coordination beyond firm boundaries are dealt with in greater detail in other work.7 For context, I briefly
summarize the doctrinal content of the other two legs of the stool, and their relationship to the firm exemption. I also briefly describe the role
of the Chicago School revolution in establishing this overall allocation of coordination rights, although this Article does not provide an
exhaustive account of historical origins or etiology of current doctrine.8
2AC
Case
Chinese AI supremacy -> war
It structurally fails and doesn’t promote democracy or prevent war
Dr. Paul Staniland 18, Associate Professor of Political Science and Chair of the Committee on
International Relations at the University of Chicago, 7/29/18, "Misreading the ‘Liberal Order:’ Why We
Need New Thinking in American Foreign Policy;" Lawfare, https://www.lawfareblog.com/misreading-
liberal-order-why-we-need-new-thinking-american-foreign-policy

Pushing back against Trump’s foreign policy is an important goal. But moving forward requires a more
serious analysis than claiming that the “liberal international order” was the centerpiece of past U.S. foreign-policy
successes, and thus should be again. Both claims are flawed. We need to understand the limits of the liberal international order, where it
previously failed to deliver benefits, and why it offers little guidance for many contemporary questions.

First, advocates of the order tend to skim past the policies pursued under the liberal order that have not worked. These mistakes need to be
directly confronted to do better in the future.

Proponents of the order, however, often present a narrow and highly selective reading of history that ignores much of the coercion, violence,
and instability that accompanied post-war history. Problematic outcomes are treated as either aberrant exceptions or as not truly
characterizing the order. One recent defense of the liberal order by prominent liberal institutionalists Daniel Deudney and G. John Ikenberry,
for instance, does not mention Iraq, Afghanistan, Vietnam, or Libya. Professors Stephen Chaudoin, Helen Milner, and Dustin Tingley herald the
order’s “support for freedom, democracy, human rights, a free press.” Kori Schake writes that Western democracies’ wars are “about enlarging
the perimeter of security and prosperity, expanding and consolidating the liberal order.” Historian Hal Brands argues that the order has
advocated “political liberalism in the form of representative government and human rights; and other liberal concepts, such as nonaggression,
self-determination, and the peaceful settlement of disputes.”

Other analysts have persuasively argued that these accounts create an “imagined” picture of post-World War II history. Patrick Porter outlines
in detail how coercive, violent, and hypocritical U.S. foreign policy has often been. To the extent an international liberal order ever actually
existed beyond a small cluster of countries, writes Nick Danforth, it was recent and short-lived. Thomas Meaney and Stephen Wertheim further
argue that “critics exaggerate Mr. Trump’s abnormality,” situating him within a long history of the pursuit of American self-interest. Graham
Allison—no bomb-throwing radical—has recently written that the order was a
“myth” and that credit for the lack of great
power war should instead go to nuclear deterrence . Coercion and disregard for both allies and political
liberalism have been entirely compatible with the “liberal” order.
The last two decades have been a bumpy ride for U.S. foreign policy. Since 9/11, we have seen the disintegration of Syria, Yemen, and Libya, a
war without end in Afghanistan, the collapse of the Arab Spring, the rise and resurgence of the Islamic State, and the distinctly mixed success of
strategies aimed at managing China’s rise. At home, the growth of a national-security state has placed remarkable power in the hands of
Donald Trump. Simply returning to the old order is no guarantee of good results. Grappling openly with failure and self-inflicted wounds—while
also acknowledging clear benefits of the order—is essential for moving beyond self-congratulatory platitudes.

Second, the
liberal order in its idealized form had very limited reach into what are now pivotal areas of U.S.
security policy: Asia, the Middle East, and the “developing world” more broadly. The core of the liberal order remains
transatlantic, but Asia is now growing dramatically in wealth and military power. What is the record of the order in the region? There was
certainly some democracy promotion when authoritarian regimes began to totter, but there was also deep, sustained cooperation with
dictators like Suharto and Ferdinand Marcos; while there are some regional institutions (such as ASEAN), they are comparatively weak; while
there are some rules, they have been deeply contested. Close U.S. allies like Japan, Taiwan, and South Korea (the latter two experiencing long
bouts of U.S.-allied autocracy) were not integrated into a broad alliance pact like NATO. India and Pakistan were never part of the core order,
and China was only very partially integrated (primarily into the economic pillar of the order, and through ad hoc security cooperation from the
1970s). Southeast Asia has been a site of warfare and authoritarianism for much of its post-1945 history.

The United States has long considered the Middle East vital to its security, but the extent to which the United States should invest its own blood
and treasure in protecting the area was always up for debate. It was only in the 1970s that the United States decided it was prepared to use
force to defend the region; “dual containment” in the 1990s was always controversial, while the
invasion of Iraq and its chaotic
aftermath revealed deep fissures over how much presence was enough. Meanwhile, liberalism,
democracy, human rights, and international institutions have not made much of a mark in the region.
Jake Sullivan, in a rather odd defense of the order, suggests that “Middle Eastern instability has been a feature, not a bug,
of the system.” This is not reassuring about the order’s ability to structure politics in the area. The same
can be said about the order’s history in Africa, with deep Western involvement in civil wars, support for authoritarian
regimes, and often-counterproductive demands for economic liberalization contributing to ongoing instability.

The legacy of the “liberal order” is both far more complex and shallower outside of the north Atlantic core than within it. Invocations of the
order are seen with greater cynicism and suspicion in these areas than in Washington or Berlin. Yet they are precisely the regions that are
increasingly the focus of U.S. security policy.

Finally, and as the preceding already suggests, the idea of “liberal order” is itself frequently too vague a concept, and was too incomplete a phenomenon, to offer guidance on a number of key contemporary questions.
Allison goes so far as to call it “conceptual Jell-o.” The extremely abstract principles that experts use to
define the order are confronted with a reality of extreme historical variation. This amorphousness
undermines its usefulness as an actual guide to future foreign policy.

U.S. alliances in Western Europe since World War II looked dramatically different than those in East Asia. Both have
achieved their basic goals, so which should be the model for the future? The United States often applied pressure to coerce its allies into
adopting economic and security policies conducive to U.S. interests—going so far as to threaten abandonment of close European allies—even
as it simultaneously built key elements of the liberal order. The
core of the liberal order was a more tenuous and
contested political space than we often remember.

This inconsistency applies to involvement in the domestic politics of other states. The
United States has regularly embraced
authoritarian leaders (and distanced itself from democratic regimes), while at other times it has helped
to push these leaders out in the face of domestic mobilization . Advocates of the order tend to stress the latter and
dismiss the former as aberrant, but both strategies contributed to the ultimate victory of the liberal order over the Soviet bloc.

The order’s history offers support for aggressively promoting democracy, accepting democratization when it emerges, and strongly supporting
friendly dictators. This
makes it unhelpful for grappling with the question of whether and how to promote
democracy. The same is true of military interventions and covert operations abroad. U.S. leaders invested heavily in Cold War proxy wars
and the overthrow of foreign regimes, while at other times and places they avoided such interventions.

This history carries important implications for addressing today’s policy challenges. Simply appealing to the order does not, for instance, tell us
much about how to deal with a rising China: Since the liberal order included highly institutionalized alliances, loose “hub-and-spoke”
arrangements, and coalitions of the willing, and was characterized by both preventive wars and containment, it is
extremely unclear what the order suggests for America’s China strategy. While “rules-based” order is a term in vogue, it doesn’t tell us what the
rules should actually be, or how they should be decided.

Nor does appealing to the liberal order help us understand whether the United States needs to be deeply involved or largely absent from the
Middle East, or somewhere in between. Under
the order, democracy promotion and assertive liberal intervention
sometimes occurred, but so too did restraint and an acceptance of autocracy. There are no answers in the liberal
international order for navigating the enormously difficult terrain of the contemporary Middle East.

The liberal order is resilient---China isn’t a threat


Dr. Fareed Zakaria 19, PhD in Government from Harvard University, Former Managing Editor of
Foreign Affairs, Columnist for The Washington Post, “The New China Scare: Why America Shouldn’t
Panic About Its Latest Challenger”, Foreign Affairs, 12/6/2019,
https://www.foreignaffairs.com/articles/china/2019-12-06/new-china-scare
NEITHER LIBERAL NOR INTERNATIONAL NOR ORDERLY

To many, Beijing’s rise has sounded the death knell of the liberal international order—the set of policies and
institutions, forged largely by the United States after World War II, that compose a rules-based system in which interstate war has waned while
free trade and human rights have flourished. China’s domestic political character—a one-party state that brooks no opposition or dissent—and
some of its international actions make it an uneasy player in this system.
It is, however, worth remembering that the liberal international order was never as liberal, as
international, or as orderly as it is now nostalgically described. From the very beginning, it faced
vociferous opposition from the Soviet Union, followed by a series of breakdowns of cooperation among allies
(over the Suez crisis in 1956, over Vietnam a decade later) and the partial defection of the United States
under Nixon, who in 1971 ended Washington’s practice of underwriting the international monetary order using U.S. gold
reserves. A more realistic image is that of a nascent liberal international order, marred from the start by exceptions,
discord, and fragility. The United States, for its part, often operated outside the rules of this order, making
frequent military interventions with or without UN approval; in the years between 1947 and 1989, when the United States was
supposedly building up the liberal international order, it attempted regime change around the world 72 times. It reserved the same right in the
economic realm, engaging in protectionism even as it railed against more modest measures adopted by other countries.

The truth about the liberal international order , as with all such concepts, is that there never really was a golden
age, but neither has the order decayed as much as people claim. The core attributes of this order—
peace and stability—are still in place, with a marked decline in war and annexation since 1945. (Russia’s behavior
in Ukraine is an important exception.) In economic terms, it is a free-trade world. Average tariffs among industrialized
countries are below three percent, down from 15 percent before the Kennedy Round of international trade talks, in the 1960s. The
last decade has seen backsliding on some measures of globalization but from an extremely high
baseline. Globalization since 1990 could be described as having moved three steps forward and only one
step back.

China hardly qualifies as a mortal danger to this imperfect order. Compare its actions to those of Russia—
a country that in many arenas simply acts as a spoiler, trying to disrupt the Western democratic world and its international
objectives, often benefiting directly from instability because it raises oil prices (the Kremlin’s largest source of wealth). China plays no
such role. When it does bend the rules and, say, engages in cyberwarfare, it steals military and economic secrets rather
than trying to delegitimize democratic elections in the United States or Europe. Beijing fears dissent and opposition
and is especially neuralgic on the issues of Hong Kong and Taiwan, using its economic clout to censor Western
companies unless they toe the party line. But these are attempts to preserve what Beijing views as its sovereignty—
nothing like Moscow’s systematic efforts to disrupt and delegitimize Western democracy in Canada, the United States,
and Europe. In short, China has acted in ways that are interventionist, mercantilist, and unilateral—but
often far less so than other great powers.

The rise of a one-party state that continues to reject core concepts of human rights presents a challenge. In
certain areas, Beijing’s
repressive policies do threaten elements of the liberal international order, such as its efforts to water down global
human rights standards and its behavior in the South China Sea and other parts of its “near abroad.” Those cases need to be
examined honestly. In the former, little can be said to mitigate the charge. China is keen on defining away its egregious human
rights abuses, and that agenda should be exposed and resisted. (The Trump administration’s decision to withdraw from the UN Human Rights
Council achieved the exact opposite by ceding the field to Beijing.)

But the liberal international order has been able to accommodate itself to a variety of regimes—from
Nigeria to Saudi Arabia to Vietnam—and still provide a rules-based framework that encourages greater
peace, stability, and civilized conduct among states. China’s size and policies present a new challenge to the expansion of human rights that
has largely taken place since 1990. But that one area of potential regression should not be viewed as a mortal threat
to the much larger project of a rules-based, open, free-trading international system.
States CP
2AC
Labor law and patent law preempt.
Richard A. Samp 14, Chief Counsel, Washington Legal Foundation, J.D. from the University of Michigan,
SYMPOSIUM: The Role of State Antitrust Law in the Aftermath of Actavis, 15 Minnesota Journal of Law,
Science, and Technology. 149, Nexis Uni

This paper concludes that state antitrust liability can be imposed on parties to patent settlements so
long as the state action "parallels" federal antitrust law. On the other hand, state law is preempted to
the extent that it seeks to impose antitrust liability for conduct not deemed actionable under federal
law; under such circumstances, state-law liability would be impliedly preempted because it would stand
as an obstacle to accomplishing the purposes of federal patent law. The scope of preemption likely
would include any effort by states to apply a stricter standard of review to reverse payment patent
settlements--either a "quick look" review accompanied by a presumption of illegality, or a declaration
that such settlements are "per se" illegal.

Part I of this paper summarizes federal preemption law as it has been applied to state antitrust actions.
It explains that the U.S. Supreme Court has never interpreted federal antitrust law as imposing a limit on
states' authority to regulate business practices deemed by states to have anticompetitive effects.
Nonetheless, federal courts have not hesitated to rule that state antitrust law is preempted by federal
law when they determine that state law comes into conflict with some other federal statute. In this
instance, the relevant "other federal statute" is federal patent law.

I. STATE ANTITRUST LAW

Congress has passed a series of laws over the past 125 years designed to prevent businesses from
engaging in anticompetitive conduct that results in higher prices for consumers. Most prominently, it
adopted the Sherman Act in 1890. 4 Section 1 of the Sherman Act prohibits "[e]very contract,
combination in the form or trust or otherwise, or conspiracy, in restraint of trade or commerce among
the several States." 5 Among the types of agreements deemed to constitute per se violations of section
1 are agreements among competitors to limit output. 6

Many states have also adopted antitrust statutes. While those laws tend to be similar to federal law,
their language is not identical, and state courts routinely interpret state antitrust laws in ways that
diverge sharply from federal law. 7 For example, California's antitrust statute, the Cartwright Act, 8
diverges in a number of respects from federal antitrust law. The California Supreme Court recently
cautioned, "[i]nterpretations of federal antitrust law are at most [*152] instructive, not conclusive,
when construing the Cartwright Act. . . ." 9

The U.S. Supreme Court has rejected claims that state antitrust law is preempted whenever it diverges
from federal antitrust law. For example, the Court permitted the Attorneys General of Alabama, Arizona,
California, and Minnesota to file antitrust claims under their respective state laws against a group of
cement producers even though those state governments, because they did not purchase cement
directly from the producers but rather purchased only through intermediaries, would not have been
proper plaintiffs under federal antitrust law. 10 Under federal law, when producers conspire to fix
prices, only direct purchasers, and not subsequent indirect purchasers, are permitted to sue to recover
losses incurred as a result of the conspiracy. 11 In contrast, antitrust laws from the four states permitted
recovery by indirect purchasers. 12 The Supreme Court rejected the defendant cement producers'
assertion that federal antitrust law was intended to serve as a ceiling on businesses' liability for engaging
in anticompetitive conduct. 13 It stated, "Congress intended the federal antitrust laws to supplement,
not displace, state antitrust remedies. And on several prior occasions, the Court has recognized that the
federal antitrust laws do not preempt state law." 14

On the other hand, state antitrust laws--like all state laws--are subject to the restrictions imposed by
the Supremacy Clause of the U.S. Constitution, 15 and are impliedly preempted [*153] to the extent
that they conflict with federal law. 16 Such a conflict arises when "compliance with both federal and
state regulations is a physical impossibility," 17 or when a state law "stands as an obstacle to the
accomplishment and execution of the full purposes and objectives of Congress." 18 On a number of
occasions, the Supreme Court has concluded that state antitrust law is preempted because it conflicts
with a federal statute other than federal antitrust law. 19

The Court has been particularly quick to find preemption when state antitrust law has an impact on
labor law, an area in which federal law is pervasive. 20 Indeed, on at least one occasion, the Court found
that a claim arising under state antitrust law was preempted by federal labor law even though the
Court concluded that the conduct that gave rise to the state claim could proceed as a claim under
federal antitrust law. 21 The Court explained that "Congress and this Court have carefully tailored the
antitrust statutes to avoid conflict with the labor policy favoring lawful employee organization, not only
by delineating exemptions from antitrust coverage but also by adjusting the scope of the antitrust
remedies themselves." 22 The Court said that state antitrust laws "generally have not been subjected to
this process of accommodation" and thus that "[t]he use of state antitrust law . . . [must] be pre-empted
because it creates a substantial risk of conflict with policies central to federal labor law." 23

Accordingly, in any challenge to a "reverse payment" patent settlement arising under state antitrust law,
a court will likely be required to address whether the claim conflicts with the "balance" between federal
antitrust law and federal patent law established by the Supreme Court's Actavis [*154] decision. If such
state-law antitrust claims stand as an obstacle to the accomplishment and execution of the full
purposes and objectives of Congress in adopting the patent laws, it will be preempted by federal law.
II. "REVERSE PAYMENT" PATENT SETTLEMENTS When parties to litigation enter into a settlement, one would normally expect that any cash payments would flow from the defendant to the plaintiff. The defendant pays cash in return for something from the plaintiff: the abandonment of a
legal claim. The normal expectations have been reversed in the context of litigation involving prescription drug patents, however, as the result of financial incentives created by the Hatch-Waxman Act, 24 a federal statute adopted in 1984. Hatch-Waxman was designed to ensure that
generic versions of prescription drugs enter the market more quickly, thereby driving down drug prices. 25 The Act includes a provision that permits generic companies, by announcing plans to market a drug before expiration of the drug's patent, to essentially force the patent holder to
immediately file a patent infringement suit. 26 That provision sets drug patent litigation apart from all other types of patent litigation. It allows generics to challenge the validity of a drug patent in a virtually risk-free manner-- because they can induce a patent lawsuit without actually
selling an infringing product, 27 generics can place a patent's [*155] validity at issue without the risk of incurring the potentially bankrupting damage awards normally associated with patent litigation. 28 While the cost of litigation is about the only loss that a generic company is likely to
suffer if it loses a drug patent infringement lawsuit, the stakes are much higher for the typical prescription drug patent holder. It likely spent hundreds of millions of dollars to obtain FDA approval to market its product. 29 It can hope to recoup those costs only if it can maintain the validity
of its patent and thereby prevent competition from generic manufacturers. 30 For a typical brand-name prescription drug manufacturer, its patents on the drugs it produces are far and away its most valuable assets. A brand-name drug manufacturer often stands to lose billions of dollars
in future revenues if one of its key drug patents is declared invalid. 31 In light of the dynamics created by the Hatch-Waxman Act, it is hardly surprising that generic companies-- even though they are the defendants in drug patent infringement litigation--are in a position to demand cash
or other valuable assets in return for agreeing to settlement of the patent litigation. [*156] III. ACTAVIS: AN ANTITRUST CHALLENGE TO "REVERSE PAYMENT" SETTLEMENTS The Federal Trade Commission (FTC) has long complained about the allegedly anticompetitive effects of "reverse
payment" patent settlements--settlements whose terms include a cash payment from the drug patent holder to the alleged infringer. 32 The FTC contends that by making such payments, patent holders are in effect paying potential competitors not to compete, thereby restricting supply
and driving up prices. 33 Drug companies have responded that such settlements cannot have an anticompetitive effect so long as the settlement does not prohibit any competition that was not already barred under the terms of the patent. 34 They argue that because litigation is always a
drain on productivity, settlements of patent disputes ought to be encouraged for their pro-competitive effects 35 and that existing patents ought to be presumed valid. 36 Lower federal courts have struggled for more than a decade to craft a coherent theory for addressing antitrust
challenges to reverse payment settlements. 37 On the one hand, there is reason for concern about the competitive consequences of settlements that include substantial payments from the patent holder to the alleged infringer. Very large payments may be an indication that the settling
parties recognized that the patent was particularly vulnerable to invalidation, 38 and thus that competition would have begun much sooner had the infringement suit been permitted to proceed to a trial, at which [*157] the patent almost surely would have been declared invalid. Viewed
in that light, payments from the patent holder to the alleged infringer can be seen as a device for sharing monopoly rents made possible by the alleged infringer's agreement not to compete. 39 On the other hand, a patent holder has a legal right to a monopoly on the sale of its patented
product. It thus is hard to fault the patent holder for taking steps to enforce that right, even if those steps include making payments to litigation opponents where economic incentives created by the Hatch-Waxman Act essentially require such payments as the price of settling litigation. 40
One potential solution to this dilemma is to instruct trial courts to examine the strength of the underlying patent. 41 Under that approach, a reverse payment patent settlement would be deemed anticompetitive, and thus in violation of federal antitrust law, if and only if the court
determined that the patent was weak and likely would have been declared invalid had the patent infringement suit been allowed to go to trial. But advocates on both sides of the issue have resisted that approach because it would require overly complex trials. District courts conducting
an antitrust trial would be required to retry the previously-settled patent dispute, hearing voluminous evidence regarding patent validity. 42 To avoid that [*158] result, the FTC has argued that reverse payment settlements should be deemed per se antitrust violations, 43 reasoning that
patent holders should be required to agree to an earlier onset of generic competition in lieu of making cash payments to the alleged infringers. Prior to 2012, the FTC and private plaintiffs had lost their challenges to reverse payment drug patent settlements. The Second, Eleventh, and
Federal Circuits each adopted the so-called "scope of the patent" test, which holds that agreements that do not extend beyond the exclusionary effect of a patent do not injure lawful competition unless the patent was procured by fraud or the infringement claim was objectively baseless.
44 Under this standard, the patent holder's right to exclude infringing competition is fully respected unless the antitrust plaintiff can demonstrate that the patent had no exclusionary effect at all. 45 In 2012, the Third Circuit created a split among the federal appeals courts by adopting the
FTC's position. It held that reverse payment settlements are prima facie evidence of an unreasonable restraint of trade, and that the settling parties can rebut that presumption only if they are able to demonstrate, during a "quick look" analysis, that the settlement actually has pro-
competitive effects. 46 The Third Circuit added that it agreed with the FTC that there is no need to consider the merits of the underlying patent suit because absent proof of other offsetting consideration, it is logical to conclude that the quid pro quo for the payment was an agreement
[*159] by the generic to defer entry beyond the date that represents an otherwise reasonable litigation compromise. 47 Actavis resolved that circuit split. The Actavis litigation was an FTC challenge to a reverse payment settlement of patent infringement litigation involving a brand-name
drug called AndroGel. 48 Under the terms of the settlement, the alleged infringers (several generic drug manufacturers) agreed not to market their generic versions of AndroGel until August 2015, sixty-five months before the AndroGel patent was scheduled to expire. 49 The settlement
also required the patent holder to pay many millions of dollars to the generic manufacturers; it stated that the payments were in return for other services to be performed by the generics for the patent holder. 50 The FTC filed a complaint against all settling parties under federal antitrust
law, contending that "the true point of the payments was to compensate the generics for agreeing not to compete against AndroGel until 2015." 51 Applying its previously adopted "scope of the patent" test, the Eleventh Circuit affirmed the district court's dismissal of the FTC's complaint.
52 It reasoned that the settlement could not be deemed to have anticompetitive effects because it permitted generic competition sixty-five months before the underlying patent was scheduled to expire. 53 The Supreme Court granted review to resolve the conflict between the Third
Circuit on the one hand and the Second, Eleventh, and Federal Circuits on the other hand. In its June 2013 decision, the Supreme Court rejected both the "scope of the patent" test and the Third Circuit's "presumption of unreasonable restraint" test. 54 The Court declined to adopt any
bright line test and instead directed lower courts to analyze the potential anticompetitive effects of reverse payment settlements under a traditional "rule of reason" analysis. 55 The Court repeatedly emphasized that courts must "balance" the competing interests of federal [*160]
antitrust and patent law, explaining that "patent and antitrust policies are both relevant in determining the 'scope of the patent monopoly'--and consequently antitrust law immunity--that is conferred by a patent." 56 The Court acknowledged the legitimacy of the concerns that had led
the Eleventh Circuit to adopt the "scope of the patent" test: the desirability of promoting settlements and the "fear that antitrust scrutiny of a reverse payment agreement would require the parties to litigate the validity of the patent in order to demonstrate what would have happened
to competition in the absence of the settlement" and would "prove time consuming, complex, and expensive." 57 The Court nonetheless held that other considerations led it "to conclude that the FTC should have been given the opportunity to prove its antitrust claim." 58 Chief among
those considerations was the Court's conclusion that settlements have the "potential for genuine adverse effects on competition," 59 particularly when reverse payments are so large that they cannot be explained as an amount necessary to bring about a settlement. 60 The Court added
that a plaintiff should not necessarily be required to demonstrate the weakness of the underlying patent in order to establish a prima facie case of antitrust unlawfulness, stating that "[a]n unexplained large reverse payment itself would normally suggest that the patentee had serious
doubts about the patent's survival." 61 In rejecting the FTC's argument that reverse payment settlements should be deemed "presumptively unlawful" and should proceed via a "quick look" approach, the Court explained: [A]bandonment of the "rule of reason" in favor of presumptive
rules (or a "quick-look" approach) is appropriate only where "an observer with even a rudimentary understanding of economics could conclude that the arrangements in question would have an anticompetitive effect on consumers and markets." We do not believe that reverse [*161]
payment settlements, in the context we here discuss, meet this criterion. 62 Remanding the case to the Eleventh Circuit for further consideration, the Court said that it would "leave to the lower courts the structuring of the present rule-of-reason litigation." 63 IV. WHAT DID ACTAVIS
DECIDE? Before determining the extent to which state antitrust regulation of reverse payment settlements is preempted by federal law, one must first determine what was actually decided by Actavis. While there is disagreement regarding which side actually won the case, all agree that
the decision left a considerable number of issues undecided. The Court explicitly rejected the Eleventh Circuit's "near-automatic antitrust immunity to reverse payment settlements" and the FTC's "presumptively unlawful" approach, 64 but provided relatively vague guidance for
determining which such settlements violate federal antitrust laws and which do not. The Court said that "[a]n unexplained large reverse payment itself would normally suggest that the patentee has serious doubts about the patent's survival," and that the existence of such serious doubts
"in turn, suggests that the payment's objective is to maintain supracompetitive prices to be shared among the patentee and the challenger rather than face what might have been a competitive market. . . ," 65 But those sentences raise more questions than they answer; they do not
explain how a trial court is to determine whether the reverse payment is "unexplained" or "large" or when the "normal" inference from an "unexplained large reverse payment" might not be appropriate. The Court punted those issues to trial courts with instructions to do their best in
applying a "rule of reason" analysis. 66 Moreover, while the Court determined that a plaintiff challenging a reverse payment settlement can establish a prima facie case without establishing that the patent was weak and would likely have been invalidated had [*162] the infringement suit
gone to trial, 67 it did not determine whether trial courts may impose any limitations on defendants' rights to make the opposite showing: that the patent almost surely would have been upheld if the infringement suit had gone to trial and thus that the reverse payment settlement could
not possibly have had any anticompetitive effects. The Court did not even determine whether a reverse payment can ever be actionable when it takes the form of something other than cash. Use of the word "payment" at least suggests that the Court did not intend to address transfers of
value other than cash, but the FTC has already rejected that interpretation and is attempting to use Actavis to challenge patent litigation settlements in which the value transferred to the infringing party consisted of an exclusive license to market a generic version of the drug during the
first 180 days following expiration of the patent. 68 Of course, as Seventh Circuit Judge Richard Posner has pointed out, "any settlement agreement can be characterized as involving 'compensation' to the defendant, who would not settle unless he had something to show for the
settlement. If any settlement agreement is thus to be classified as involving a forbidden 'reverse payment,' we shall have no more patent settlements." 69 The Actavis Court did make clear, however, that a license permitting an alleged infringer to bring its product to market prior to
expiration of the patent cannot be classified as an unlawful reverse payment. 70 Suppose that a patent is not scheduled to expire for another ten years and that the parties reach a settlement whereby the alleged infringer agrees not to compete for the first seven years in return for an
exclusive license to market its product during the final three years. Arguably, the alleged infringer has received something of considerable value (a three-year exclusive license) in return for agreeing not to compete for seven years. The Court nonetheless indicated that such agreements
not to compete are not actionable under federal antitrust law 71 --perhaps because the [*163] exclusive license increases competition and thus benefits not only the alleged infringer but also consumers, even though the agreement not to compete arguably harms consumers. Moreover,
the Court's repeated use of the word "balance" and the phrase "accommodate patent and antitrust policies" 72 made clear that any "rule of reason" analysis undertaken by a district court must seek to balance the competing interests of federal antitrust law (to promote competition) and
the federal patent law (to provide monopoly profits to the developers of new and useful products and thereby encourage development of more such products in the future). Those holdings suggest some limits on the extent to which states should be permitted to impose antitrust liability
on companies that enter into reverse payment drug patent settlements. In particular, any state-law liability is preempted to the extent that it would upset the balance between federal antitrust law and patent law established by Actavis because such liability would "stand[] as an obstacle
to the accomplishment and execution of the full purposes and objectives of Congress." 73 V. ACTAVIS'S PREEMPTIVE EFFECT Application of state antitrust law to reverse payment settlements is not merely a hypothetical possibility. There are a fair number of pending lawsuits that
challenge reverse payment settlements on state-law grounds. The California Supreme Court has agreed to review one such suit. 74 In seeking affirmance of the appeals court's dismissal of the suit, the [*164] defendants argue inter alia that the suit is preempted by federal law. 75 As
noted above, there is precedent for a finding that state antitrust law is preempted to the extent that it conflicts with the policy underlying a federal statute. 76 Moreover, in the context of patent law, federal courts have not hesitated to preempt state laws that the courts deem to stand
as an obstacle to accomplishing Congress's objectives (i.e., encouraging efforts to develop new and useful products). 77 To the extent that any portions of Actavis's holding can be deemed to reflect the Court's perception of Congress's new-product-development objectives, a state law is
preempted if it is inconsistent with that holding and seeks to impose a greater degree of antitrust liability on the parties to a reverse payment settlement. Actavis's treatment of settlements involving a compromise entry date appears to meet that description. Actavis held that federal
antitrust liability could not arise from a settlement in which the generic manufacturer agrees not compete for a number of years and in return is rewarded with an exclusive license to market its product several years in advance of the patent's expiration date. 78 Accordingly, states are not
permitted to impose antitrust liability under similar circumstances because doing so would upset the balance that, according to Actavis, Congress sought to achieve between antitrust and patent law. Other issues left open by Actavis are likely to be answered in the years ahead. For
example, the Supreme Court did not specify whether noncash benefits received by a generic manufacturer in connection with a patent settlement can ever serve as the basis for federal antitrust liability. If the Supreme Court eventually answers that question by stating: "No, federal [*165]
antitrust law will not examine settlement benefits other than cash that flow to the infringing party," then it is likely that state antitrust law would be required to conform to that rule. The potential grounds for such a ruling (a desire both to promote settlement of patent disputes and to
uphold reliance interests in existing patents) are based largely on values embedded in federal patent law. There is little reason to believe, however, that the Court would prevent application of state antitrust law to patent settlement agreements where state law is fully consistent with
federal antitrust law. Even in areas subject to extensive federal regulation, the Supreme Court has upheld the authority of states to engage in parallel regulation that is not inconsistent with the federal regulation. 79 Unless the Court were to determine, as in Connell, 80 that states could
not be trusted to properly accommodate the objectives of the federal statute at issue (here, federal patent law), there is no reason to conclude that Congress would not have wanted states to be permitted to police the same sorts of anticompetitive conduct that is policed by federal
antitrust law. Moreover, states are likely free to impose greater penalties on the proscribed conduct than is available under federal law. As the Court explained in California v. ARC America Corp., state antitrust law is not required to adhere to the same set of sanctions imposed by federal
antitrust law. 81

It seems reasonably clear, however, that Actavis prohibits states from adopting the procedural devices
rejected by the U.S. Supreme Court--either a per se condemnation of reverse payment settlements or a
presumption of illegality accompanied by "quick look" review. The Supreme Court rejected those
approaches because it determined that in many cases there might well be pro-competitive economic
justifications for reverse payment settlements and that presuming their illegality could result in the
suppression of economically useful conduct. 82 State antitrust laws that adopted the FTC's proposed
presumption of illegality would be subject to similar criticism, and thus would likely be impliedly [*166]
preempted as inconsistent with the careful balance between antitrust and patent law established by
Actavis.
2AC---Econ---TL
No impact---emergency measures solve
Steven T. Dennis 8/16, staff writer, Bloomberg, “Democrats’ dare on debt sets up high-stakes
shutdown fight,” Detroit News, 8/16/21,
https://www.detroitnews.com/story/news/politics/2021/08/16/democrats-dare-debt-sets-up-high-
stakes-shutdown-fight/8149517002/

If Republicans don’t blink, and Washington instead plunges into a shutdown scenario in the middle of a pandemic and
a historic debt default, there are various break-glass strategies that could be employed . Each has its drawbacks.

Democrats could try to amend the budget resolution after the fact – something that would be extremely messy
procedurally, and, at minimum, require yet another all-night vote-a-rama with unlimited, politically-fraught amendment votes and precious
wasted time.

Kick the can

Another potential outcome this fall could be simply kicking the can down the road. Citigroup Inc., for
one, has penciled that in as the “base case.”

“Congressional action to avoid a destabilizing issue whereby the Treasury cannot fulfill its obligations ”
is expected, Citigroup economists led by Andrew Hollenhorst wrote in a note to clients Friday. “The simplest solution might be
a short-term suspension of the debt limit to the end of the year, paired with a deal to keep the government open ”
in a stopgap spending measure, they wrote.

“A longer-term compromise could be worked out over that period,” Citigroup suggested, without outlining what that
could be.

Or the White House could again look at untested maneuvers to raise the debt limit, which Obama considered
and rejected the last time around. One would have relied on the 14th Amendment provision saying the federal
debt cannot be questioned.
Scariest night

Another would have exploited the ability of the Treasury to mint platinum coins of any denomination –
say, $1 trillion each – and deposit them into the Federal Reserve. (The #MintTheCoin hashtag periodically gets some play on Twitter in
reference to the scheme.)

3---AI---Whyte says that corporations are forms of AI that kill us all---existential threat.
Themistoklis Tzimas 21, Aristotle University of Thessaloniki, Faculty of Law, “Chapter 2: The
Expectations and Risks from AI,” Legal and Ethical Challenges of Artificial Intelligence from an
International Law Perspective, Springer, 2021, pp. 9–32 Open WorldCat, https://doi.org/10.1007/978-3-
030-78585-7

Therefore, it
is only natural to be at least skeptical towards a future with entities possessing equal or superior
intelligence and levels of autonomy; the prospect even of existential risk looms as possible.7
AI that will have reached or surpassed our level of intelligence make us wonder why would highly autonomous and intelligent AI want to give
up control back to its original creators?8 Why remain contained in pre-defined goals set for it by us, humans?

Even AI in its current form and narrow intelligence poses risks because of its embedded-ness in an ever-
growing number of crucial aspects of our lives. The role of AI in military, financial,9 health, educational,
environmental, governance networks-among others—are areas where risk generated by AI—even
limited— autonomy can be diffused through non-linear networks, with significant impact— even
systemic.10
The answer therefore to the question whether AI brings risk with it is yes; as Eliezer Yudkowski comments the greatest of them all is that people conclude too early that they understand it11 or that they assume that they can
achieve it without necessarily having acquired complete and thorough understanding of what intelligence means.12

Our projection of our—lack of complete—understanding of the concept of intelligence on AI is owed to our lack of complete comprehension of human intelligence too, which is partially covered by the prevalent and until now self-
obvious, anthropomorphism because of which we tend to identify higher intelligence with the human mind.

Yudkowski again however suggests that AI “refers to a vastly greater space of possibilities than does the term “Homo sapiens.” When we talk about “AIs” we are really talking about minds-in-general, or optimization processes in
general. Imagine a map of mind design space. In one corner, a tiny little circle contains all humans; within a larger tiny circle containing all biological life; and all the rest of the huge map is the space of minds-in-general. The entire
map floats in a still vaster space, the space of optimization processes.”13

Regardless of what our well-established ideas are, there are many, different intelligences and even more significantly, there are potentially, different intelligences equally or even more evolved than human.

From such a perspective, the unprecedented—ness of potential AI developments and the mystery surrounding them emerges as not only the
outcome of pop culture but of a radical transformation of our—until recently—self—obvious identification of humanity with highly evolved and
dominant intelligence.14

The lack of understanding of intelligence and therefore of AI may be frightening but does not lead
necessarily to regulation—at least to a proper one. We could even be led into making potentially
catastrophic choices, on the basis of false assumptions.

On top of our lack of understanding, we should add a sentiment of anxiety as well as of expectations, which
intensifies as an atmosphere of emergency and of expected groundbreaking developments grows. The most graphic
description of this feeling is the potential of a moment of singularity, as mentioned above according to the description by Vinge and Kurzweil.
As the mathematician I. J. Good–Alan Turing’s colleague in the team of the latter during World War II—has put it: “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man
however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the
intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”15 This is in a
nutshell the moment of singularity.

The estimates currently foresee the emergence of ultra or super intelligence—as it is currently labelled—or in other words of singularity, somewhere between 20 and 50 years from today, further raising the sentiment of
emergency.16 We cannot even foretell with precision how singularity would look like but we know that because of its expected groundbreaking impact, both states and private entities compete towards gaining the upper hand in
the prospect of the singularity.17

Despite the fact that such predictions have been proven rather optimistic in the past18 and therefore up to some extent inaccurate, there are reasons to assume that their materialization will take place and that the urgency of
regulation will be proven realistic.

After all, part of the disappointments from AI should be blamed on the fact that certain activities and standards, which were considered as epitomes of human intelligence have been surpassed by AI, only to indicate that they were
not eventually satisfactory thresholds for the surpassing of human intelligence.19 Partially because of AI progress we realize that human intelligence and its thresholds are much more complicated than assumed in the past.

The vastness of definitions of intelligence, as well as its etymological roots are enlightening of the difficulties: “to gather, to collect, to assemble or to choose, and to form an impression, thus leading one to finally understand,
perceive, or know”.20

As with other relevant concepts, the truth is that until recently our main way to approach intelligence for far too long was “we know it, when we see it”. AI is an additional reason for looking deeper into intelligence and the more
we examine it, the most complicated it seems.

The combination of lack of complete understanding of intelligence, the unpredictability of AI, its rapid evolution and the prospect of singularity explain both the fascination and the fear from AI. Once the latter emerges, we have no
real knowledge about what will happen next but only speculations, which until recently belonged to the area of science fiction.

We are for example pretty confident that the speed of AI intelligence growth will accelerate, once self—improvement will have been achieved. The expected or possible chain of events will begin from AI capacity to re-write its own
algorithms and exponentially self—improve, surpassing human intelligence, which lacks the capacity of such rapid self—improvement and setting its own goals.21
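[Illustrative sketch, not part of the Tzimas evidence: the "exponential self-improvement" claim in the paragraph above can be pictured with a toy recursion in which, after some takeoff point, each step's capability gain becomes proportional to current capability rather than a fixed external increment. The function name, the takeoff step, and every parameter value below are arbitrary assumptions chosen only to show the qualitative contrast between the two regimes.]

```python
# Toy sketch (assumption-laden, not from the source): capability that begins
# to improve itself. Before the takeoff step, progress is a fixed external
# increment; after it, each step's gain is proportional to current capability,
# so growth accelerates sharply.

def improvement_curve(steps=30, human_rate=1.0, self_gain=0.25, takeoff=10):
    """Return capability over time; all parameters are arbitrary illustrative values."""
    capability = 1.0
    history = [capability]
    for t in range(1, steps + 1):
        if t < takeoff:
            capability += human_rate              # steady, externally driven gains
        else:
            capability += self_gain * capability  # gains scale with capability itself
        history.append(capability)
    return history

if __name__ == "__main__":
    for t, c in enumerate(improvement_curve()):
        print(f"step {t:2d}: capability ~ {c:10.1f}")
```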

We can somehow guess the speed of AGI and ASI evolution and possibly some of its initial steps but we cannot guess the directions that such AI
will choose to follow and the characteristics that it will demonstrate. Practically, we [cannot] credibly guess the prospects of AI beyond a certain level of
development.

Two existential issues could emerge: first, an imbalance of intelligence at our expense—with us, humans
becoming the inferior species—in
favor of non-biological entities and secondly a lack of even fundamental
conceptual communication between the two most intelligent “species”. Both of them heighten the fear
of irreversible changes, once we lose the possession of the superior intelligence.22

However, we need to consider the expectations as well. The positive side focuses on the so-called
friendly AI, meaning AI which will benefit and not harm humans, thanks to its advanced intelligence .23

AI bears the promise of significantly enhancing human life on various aspects, beginning from the
already existing, narrow applications. The enhanced automation24 in the industry and the shift to
autonomy,25 the take—over by AI of tasks even at the service sector which can be considered as “tedious”—i.e. in the
banking sector—climate and weather forecasting, disaster response,26 the potentially better cooperation
among different actors in complicated matters such as in matters of information, geopolitics and
international relations, logistics, resources ex.27

The realization of the positive expectations depends up to some extent upon the complementarity or
not, of AI with human intelligence. However, what friendly AI will bring in our societies constitutes a matter of debate, given our
lack of unanimous approach on what should be considered as beneficial and therefore friendly to humans—as is analyzed in the next chapter.

Friendly AI for example bears the prospect of freeing us from hard labor or even further from unwanted
labor; of generating further economic growth; of dealing in unbiased, speedy, effective and cheaper
ways with sectors such as policing, justice, health, environmental crisis, natural disasters, education,
governance, defense and several more of them which necessitate decision-making, with the
involvement of sophisticated intelligence .

The synergies between human intelligence and AI “promise” the enhancement of humans in most of
their aspects. Such synergies may remain external—humans using AI as external to themselves, in terms of analysis, forecasts, decision—
making and in general as a type of assistant-28 or may evolve into the merging of the two forms of intelligence either temporarily or
permanently.
The second profoundly enters humanity, existentially—speaking, into uncharted waters. Elon Musk argues in favor of “having some sort of merger of biological intelligence and machine intelligence” and his company “Neuralink”
aims at implanting chips in human brain. Musk argues that through this way humans will keep artificial intelligence under control.29 The proposition is that of “mind design”, with humans playing the role that God had according to
theologies.30

While the temptation is strong—exceeding human mind’s capacities, far beyond what nature “created”, by acquiring the capacity for example to connect directly to the cyberspace or to break the barriers of biology31—the risks
are significant too: what if a microchip malfunction? Will such a brain be usurped or become captive to malfunctioning AI?

The merging of the two intelligences is most likely to evolve initially by invoking medical reasons, instead of human enhancement. But the merging of the two will most likely continue, as after all the limits between healing and
enhancement are most often blurry. This development will give rise, as is analyzed below, to significant questions and issues, the most crucial of which is the setting of a threshold for the prevalence of the human aspect of
intelligence over the artificial one.

Human nature is historically improved, enhanced, healed and now, potentially even re-designed in the future.32 Can a “medical science” endorsing such a goal be ethically acceptable and if yes, under what conditions, when, for
whom and by what means? The answers are more difficult than it seems. As the World Health Organization—WHO—provides in its constitution, “Health is a state of complete physical, mental and social well-being and not merely
the absence of disease or infirmity”.33

Therefore, why discourage science which aims at human-enhancement, even reaching the levels of post-humanism?34 Or if restrictions are to be imposed on human enhancement, on what ethics and laws will they be justified?
How ethically acceptable is it to prohibit or delay technological evolution, which among several other magnificent achievements, promises to treat death as a disease and cure it, by reducing soul to self, self to mind, and mind to
brain, which will then be preserved as a “softwarized” program in a hardware other than the human body?35

After all, “According to the strong artificial intelligence program there is no fundamental difference between computers and brains: a computer is different machinery than a person in terms of speed and memory capacity.”36

While such a scientific development and the ones leading potentially to it will be undoubtedly, groundbreaking technologically-speaking, is it actually—ethically-speaking—as ambivalent as it may sound or is it already justified by
our well—rooted human-centrism?37

Secular humanism may have very well outdated religious beliefs about afterlife in the area of science but has not diminished the hope for immortality; on the contrary, science, implicitly or explicitly predicts that matter can in
various ways surpass death, albeit by means which belong in the realm of scientific proof, instead of that of metaphysical belief.38

If this is the philosophical case, the quest for immortality becomes ethically acceptable; it can be considered as embedded both in the existential anxiety of humans, as well as in the human-centrism of secular philosophical and
political victory over the dei-centric approach to the world and to our existence.

From another perspective of course and for the not that distant philosophical reasons, the quest for immortality becomes ethically ambiguous or even unacceptable.39 By seeking endless life we may miss all these that make life
worth living in the framework of finiteness. As the gerontologist Paul Hayflick cautioned “Given the possibility that you could replace all your parts, including your brain, then you lose your self-identity, your self-recognition. You
lose who you are! You are who you are because of your memory.”40

In other words, once we begin to integrate the two types of intelligence, within ourselves, until when and how we will be sure that it is human intelligence that guides us, instead of the AI? And if we are not guided completely or—
even further—at all by human intelligence but on the contrary we are guided by AI which we have embodied and which is trained by our human intelligence, will we be remaining humans or we will have evolved to some type of
meta-human or transhumant species, being different persons as well?41

AI promises or threatens to offer a solution by breaking down our consciousness into small “particles” of information—simplistically speaking—which can then be “software-ized” and therefore “uploaded” into different forms of
physical or non-physical existence.

Diane Ackerman states that “The brain is silent, the brain is dark, the brain tastes nothing, the brain hears nothing. All it receives are electrical impulses--not the sumptuous chocolate melting sweetly, not the oboe solo like the
flight of a bird, not the pastel pink and lavender sunset over the coral reef--only impulses.”42 Therefore, all that is needed—although it is of course much more complicated than we can imagine—is a way to code and reproduce
such impulses.

Even if we consider that without death, we will no more be humans but something else, why should we remain humans once technologies allow us be something “more”, in the sense of an enhanced version of “being”? Why are
we to remain bound by biological evolution if we can re-design it and our future form of existence?

Why not try to achieve the major breakthrough, the anticipated or hoped digitalization of the human mind, which promises immortality of consciousness via the cyberspace or artificial bodies: the uploading of our consciousness
so that it can live on forever, turning death into an optional condition.43

Either through an artificial body or emulation-a living, conscious avatar—we hope—or fear—that the domain of immortality will be within reach. It is the prospect of a “substrate-independent minds,” in which human and machine
consciousness will merge, transcending biological limits of time, space and memory” that fascinates us.44

As Anders Sandberg explained “The point of brain emulation is to recreate the function of the original brain: if ‘run’ it will be able to think and act as the original,” he says. Progress has been slow but steady. “We are now able to
take small brain tissue samples and map them in 3D. These are at exquisite resolution, but the blocks are just a few microns across. We can run simulations of the size of a mouse brain on supercomputers—but we do not have the
total connectivity yet. As methods improve, I expect to see automatic conversion of scanned tissue into models that can be run. The different parts exist, but so far there is no pipeline from brains to emulations.”45
The emulation is different from a simulation in the sense that the former mimics not only the outward outcome but also the “internal causal dynamics”, so that the emulated system and in this particular case the human mind
behaves as the original.46 Obviously, this is a challenging task: we need to understand the human brain with the help of computational neuroscience and combine simplified parts such as simulated neurons with network structures
so that the patterns of the brain are comprehended. We must combine effectively “biological realism (attempting to be faithful to biology), completeness (using all available empirical data about the system), tractability (the
possibility of quantitative or qualitative simulation) and understanding (producing a compressed representation of the salient aspects of the system in the mind of the experimenter)”.47

The technological challenges are vast. Technologically speaking, the whole concept is based on some assumptions which must be proven both accurate and feasible.48 We must achieve technology capable of scanning completely
the human brain, of creating software on the basis of the acquired information from its scanning and of the interpretation of information and the hardware which will be capable of uploading or downloading such software.49 The
steps within these procedures are equally challenging. Their detailed analysis evades the scope of this book.

Some critical questions—they are further analyzed in the next chapters—emerge however: how will we interpret free will in emulation? What will be the impact of the environment and of what environment? How will be missing
parts of the human brain re-constructed and emulated? What will be the status of the several emulations which will be created—i.e. failed attempts or emulations of parts of the human brain—in the course of the search for a
complete and functioning emulation? Will they be considered as “persons” and therefore as having some right or will they be considered as mere objects in an experimental lab? How are we going to decode the actual subjective
sentiments of these emulations? Essentially, are emulations the humans “themselves” who are emulated or a different person? Even further what will human and person mean in the era of emulation?

From a different perspective, the victory over death may be seen as a danger of mass extinction, absorption or de-humanization. In this new, vast universe of emulations will there be place for humans?50

From the above—mentioned discussion, it becomes obvious that at a large extent, the prospect of risk or of expectation is a matter of perspective, for which there is no unanimous agreement in the present. This may be the
greatest danger of all, for which Asimov warned us: unleashing technology while we cannot communicate among us, in the face of it.

The existential prospect as well as the risks by AI may self-evidently emerge from technological advances but are determined on the basis of politico—philosophical or in the wider sense, ethical assumptions. This is where the need
for legal regulation steps in. Such a need was often underestimated in the past in favor of a solely technologically oriented approach—although exceptions raising issues other than technological can be found too.51 The gradual
raising of ethic—political, philosophical and legal issues constitutes a rather recent development, partially because of the realization of the proximity of the risks and of the expectations.

The public debate is often divided between two “contradictory” views: fear of AI or enthusiastic optimism. The opinions of the experts differ respectively.

Kurzweil, who has come with a prediction for a date for the emergence of singularity—until 2045—expects such a development in a positive way: “What’s actually happening is [machines] are powering all of us,” Kurzweil said
during the SXSW interview. “They’re making us smarter. They may not yet be inside our bodies, but, by the 2030s, we will connect our neocortex, the part of our brain where we do our thinking, to the cloud.”52

In a well-known article—issued on the occasion of a film—Stephen Hawking, Max Tegmark, Stuart Russell, and Frank Wilczek shared a
moderate position: “The
potential benefits are huge; everything that civilization has to offer is a product of
human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the
tools AI may provide, but the eradication of war, disease, and poverty would be high on anyone’s list.
Success in creating AI would be the biggest event in human history. . . Unfortunately, it might also be
the last, unless we learn how to avoid the risks.”53

C---Environmental Limits---capitalism devours nature which ends humanity. More ev---innovation can’t solve due to bad incentives and rebound effects---only collapse that spurs degrowth can avert extinction
Luiz Marques 20, associate professor, Department of History, State University of Campinas
(UNICAMP) Campinas/SP - Brasil, editor and consultant for the Historia Viva magazine, worked as an
analyst at the National Center for Scientific Research in France, has written or collaborated on hundreds
of articles and dozens of books, “Climate Feedbacks and Tipping Points,” through “The Illusion of a
Sustainable Capitalism,” Capitalism and Environmental Collapse, Springer International Publishing, 2020,
pp. 199–361, DOI.org (Crossref), doi:10.1007/978-3-030-47527-7_8

Let us first define these two points. Positive climate feedbacks are defined as mechanisms of climate warming that are positively influenced by climate itself. As warming increases, so too will the magnitude of these mechanisms, forming a positive
feedback loop (Dean et al. 2018). Such positive feedback loops are already at work in the climate system (which includes the oceans, land surface, cryosphere,
biosphere, and atmosphere). As Rijsberman and Swart (1990) state, “temperature increases beyond 1°C may elicit rapid, unpredictable, and non-linear responses that could lead to extensive ecosystem damage.” Tipping points have been defined as critical thresholds in [climate] forcing1 or some feature of a system, at which a small perturbation can qualitatively alter the state or development of a system (Lenton and Schellnhuber 2007; Duarte et al. 2012; Lenton et al. 2015, 2019). In The Seneca Effect: Why Growth is Slow but Collapse is Rapid (2017), Ugo Bardi describes how, in many processes, the buildup of tensions in a system leads to its collapse: “One way to look at the tendency of complex
systems to collapse is in terms of tipping points. This concept indicates that collapse is not a smooth
transition; it is a drastic change that takes the system from one state to another, going briefly through an unstable state.” The notion
of tipping point revives not only Seneca’s fascinating dictum that gave Bardi’s book its title2 but also the famous Hegelian question of the transition from quantity to quality, a relatively neglected mechanism in accounting for
change, either in evolution (Eldredge and Gould 1972/1980) or in large-scale components of the Earth’s system, such as climate change, biomass loss, ocean circulation change, ice melting, etc. In his Encyclopedia of the
Philosophical Sciences (1817), Hegel writes3:

On the one hand, the quantitative determinations of existence can be altered without its quality being thereby affected (…). [O]n the other hand, this process of indiscriminately increasing and decreasing has its
limits and the quality is altered by overstepping those limits. (…) When a quantitative change occurs, it appears at first as something quite innocuous, and yet there is still something else hidden behind it and this
seemingly innocuous alteration of the quantitative is so to speak a ruse (List) through which the qualitative is captured.
In complex systems, it is generally impossible to predict the precise moment when an accumulation of successive
quantitative changes will, in Hegel’s terms, “overstep a limit.” For Antônio Donato Nobre (Rawnsley 2020), we have already crossed the climate red line: “in terms of the Earth’s climate, we have gone beyond the point of no return. There’s no doubt about this.” Many other scientists agree, more or less explicitly, with this perception. Whether or not we have crossed this red line, it can be said that the further one proceeds in a process of small quantitative
changes, the greater the risk of crossing a tipping point. Michael Mann uses a suggestive image to speak of this situation: “we are walking
out onto a minefield. The farther we walk out onto that minefield, the greater the likelihood that we set
off those explosives.”4 Tipping points, therefore, involve analyzing processes that present increasing risks
of inflection or nonlinear evolution. Regarding climate change specifically, two hypotheses primarily occupy the debate between experts: the risk that a 2 °C warming
could take the Earth system to much higher temperatures (recently dubbed “Hothouse Earth”) and the hypothesis contended by James Hansen and
colleagues (2016) of an exponential rise in sea level, well above 2.5 meters by 2100 (discussed in the Sect. 8.3 “Higher Rises in Sea Level”).

8.1 The “Hothouse Earth” Hypothesis

In a high-impact article, Will Steffen and 14 colleagues (2018) examine the possibility of combined positive feedback loops leading the Earth system to cross tipping points, beyond which irreversible warming dynamics “could prevent stabilization of the climate at intermediate temperature rises and cause continued warming on a ‘Hothouse Earth’ pathway even as human emissions are reduced.”
Especially important in this paper is the hypothesis that this tipping point can be crossed at a level of average
global warming no higher than 2 °C above the pre-industrial period: “a 2 °C warming could activate important tipping
elements,5 raising the temperature further to activate other tipping elements in a domino-like cascade that
could take the Earth System to even higher temperatures.” The current trajectory “could lead to conditions that resemble planetary states that were last seen several millions of years ago, conditions that would be inhospitable to current human societies and to many other contemporary species.” The Hothouse Earth hypothesis suggests an increasing warming along the following pathways:

(A) Current warming of 1.1 °C–1.5 °C (2015–2025/2030) above the pre-industrial period, here defined as ~200 years BP. The average global temperature is already significantly warmer than the Mid-Holocene
Warm Period (~7000 years BP, 0.6 °C–0.9 °C above the pre-industrial period) and is nearly as warm as the Eemian, the last interglacial period (129–116 thousand years ago). “The Earth System has likely departed
from the glacial-interglacial limit cycle of the late Quaternary” (Steffen et al. 2018).

(B) Current atmospheric CO2 concentrations (410–415 ppm) have already reached the lower bound of Mid-Pliocene levels (400–450 ppm). During the Mid-Pliocene (4–3 million years ago), global average
temperature was 2 °C–3 °C higher than the pre-industrial period. Although already catastrophic, stabilizing global warming at this level would still be theoretically possible if, and only if, the Paris Agreement were
fully complied with, including successive revisions of the GHG emission reduction goals. This best-case scenario is becoming more unrealistic as each day goes by.

(C) Mid-Miocene Climate Optimum (16–11.6 million years ago), with temperatures of as much as 4 °C–5 °C above the pre-industrial period. This is, according to the authors, the most likely scenario by 2100, under current GHG emission levels.
Note that the projections of heating by 2100 proposed by Will Steffen and colleagues are analogous to those of the IPCC’s RCP8.5 W/m2 scenario. The Hothouse Earth hypothesis distances itself from the IPCC analysis precisely
because it proposes that this level of warming can become irreversible as soon as a 2 °C warming threshold is crossed.

8.1.1 Heightened Climate Sensitivity

It is possible that the next IPCC Assessment Report (AR6 2021) will incorporate the Hothouse Earth hypothesis because new climate models point to heightened climate sensitivity. One must remember that the various projections
of magnitude and speed of global warming depend, in part, on estimates of how the global climate system will respond to a given change in atmospheric concentrations of GHG (expressed in CO2-eq). The debate revolves around
specifying the magnitude of global warming once these concentrations double from 280 ppm in 1880 to 560 ppm. (As seen in the previous chapter, these concentrations exceeded 410 ppm in 2019 and are increasing rapidly.) The
climate system immediate response (or transient climate response, TCR) is defined as “the global mean surface warming at the time of doubling of CO2 in an idealized 1% per year CO2 increase experiment” (Knutti et al. 2017). The
climate system long-term response to the doubling of atmospheric concentrations of GHG is referred to in climate models as Equilibrium Climate Sensitivity (ECS). ECS “has reached almost iconic status as the single number that
describes how severe climate change will be” (Knutti et al. 2017). The timescale difference between these two parameters (TCR and ECS) is mainly explained by the fact that the ocean takes centuries or millennia to reach a new
thermal equilibrium. In any case, both parameters define the carbon budget, that is, the amount of GHG that can be emitted if we maintain a certain probability that a given level of global surface warming will not be exceeded. Of
course, the higher the climate sensitivity, the lower the carbon budget. In turn, the carbon budget is strongly conditioned by the ability of positive climate feedbacks to increase the climate response to a given level of atmospheric
GHG concentrations.
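To make the sensitivity figures above concrete, the short sketch below (an editorial illustration, not part of Marques's text) converts an ECS value into the equilibrium warming implied by today's ~410 ppm. The logarithmic scaling of warming with the concentration ratio is a textbook simplification I am assuming here; the 280 ppm baseline, the ~410 ppm level, and the 3 °C and 5 °C sensitivities are the numbers quoted in the passage.

```python
import math

def equilibrium_warming(c_ppm, ecs_per_doubling, c0_ppm=280.0):
    """Equilibrium warming (in degrees C) implied by a given ECS at a given CO2 level.

    Uses the textbook simplification that forcing, and hence equilibrium
    warming, scales with the logarithm of the concentration ratio.
    """
    return ecs_per_doubling * math.log(c_ppm / c0_ppm, 2)

# Figures quoted in the passage: ~410 ppm today against a 280 ppm baseline,
# an ECS "best estimate" of 3 C, and newer CMIP6 models near 5 C per doubling.
for ecs in (3.0, 5.0):
    print(f"ECS {ecs:.0f} C -> eventual warming at 410 ppm: "
          f"{equilibrium_warming(410, ecs):.1f} C above pre-industrial")
```

The point is simply that the choice of ECS, more than anything else in these projections, determines how much warming current concentrations already imply once the ocean catches up.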

Although the problem of specifying the magnitude of these climate responses to higher atmospheric concentrations of GHG and to positive climate feedbacks dates back to Svante Arrhenius (1896), climate scientists have not been
able to decisively narrow the margins of uncertainty on this (Rahmstorf 2008). In its Fourth Assessment Report (AR4, 2007), the IPCC estimated climate sensitivity “likely [> 66% probability] to be in the range 2°C–4.5°C with a best
estimate of about 3°C, and is very unlikely to be less than 1.5°C. Values substantially higher than 4.5°C cannot be excluded.” This estimate of 3 °C (best estimate) was the consensus some years ago (Knutti and Hegerl 2008). But

the current Coupled Model Intercomparison Project Phase 5 (CMIP5) has reported values that are between 2.1 °C
and 4.7 °C, slightly higher, therefore, than those of the IPCC (2007), but always with an uncomfortable margin of
uncertainty. In its latest report (AR5 2013), for example, the IPCC defined the upper range, with a 10% probability of exceeding 6 °C. That said, the CMIP5 is now giving
way to the next generation of climate models, finalized since March 2019, which will make up the sixth Coupled Model Intercomparison Project (CMIP6). In at least eight of these new models, the ECS has come in at 5 °C or warmer
(2.8 °C–5.8 °C) (Belcher et al. 2019). The IPCC scientists who should draw on these new models in their Sixth Assessment Report (AR6 2021) do not yet know how to explain this heightened sensitivity. But, according to Reto Knutti,
the trend “is definitely real. There’s no question” (Voosen 2019).

In addition to greater climate sensitivity, the Hothouse Earth hypothesis becomes all the more likely
given the combined influence of several closely interconnected positive climate feedback mechanisms on the
magnitude and speed of warming. We shall take a quick look at five of them:
(1) Anthropogenic aerosols (sulfur dioxide, nitrogen oxides, ammonia, hydrocarbons, black soot, etc.) and their interactions can both cool
and warm the atmosphere. As Joyce Penner (2019) said, “these particles remain one of the greatest lingering
sources of uncertainty” in climate change predictions. An eventual lower use of fossil fuels, especially coal, will
diminish the so-called aerosol masking effect, leading to additional global warming estimated between 0.5 °C and 1.1 °C (Samset et
al. 2018). Yangyang Xu et al. (2018) assert that this additional warming is already at work in the climate system, and, therefore, “global

warming will happen faster than we think.” This is because the recent decline in coal use in China and other countries, however modest, has led to a faster
decrease in air pollution than the IPCC, and most climate models have assumed: “lower pollution is better for crops and public health. But aerosols, including sulfates,

nitrates, and organic compounds, reflect sunlight. This shield of aerosols has kept the planet
cooler, possibly by as much as 0.7°C globally.” The urgently needed transition toward low-
carbon energy sources will, therefore, lead to an immediate additional increase of at least 0.5
°C in global average temperatures, with relevant regional differences.

(2) Oceans sequester 93% of the current Earth’s energy imbalance (EEI). Much of this thermal energy will remain submerged for millennia (with already catastrophic consequences for marine life). But there are recent signals that “the oceans might be starting to release some of that pent-up thermal energy, which could contribute to significant global temperature increases in the coming years. (…) Given the enormity of the ocean’s thermal load, even a tiny change has a big impact” (Katz 2015).

(3) Change in oceanic and terrestrial carbon cycles. The more the oceans heat up, the less their ability to absorb and store CO2 and, hence, the greater the amount of these gases warming the atmosphere. With regard to changes in the carbon cycle specifically on land, the Global Carbon Project (Global Carbon Budget 2016) has warned that:

Climate change will affect carbon cycle processes in a way that will exacerbate the increase of CO2 in the atmosphere. Atmospheric CO2 growth rate was a record high in 2015 in spite of no
growth in fossil fuel and industry emissions because of a weaker CO2 sink on land from hot and dry El Niño conditions.

According to Michael Zika and Karl-Heinz Erb (2009), approximately 2% of the global terrestrial net primary productivity (NPP)—the net amount of carbon captured by vegetation through photosynthesis—are already lost each year “due to dryland degradation, or between 4% and 10% of the potential NPP in drylands. NPP losses amount to 20–40% of the potential NPP on degraded agricultural areas in the global average and above 55% in some world regions.”6 But according to Alessandro Baccini and colleagues (2017), even tropical forests are no longer carbon sinks. Actually, they have become a net carbon source due to
deforestation and reductions in carbon density within standing forests (degradation or disturbance), with the latter accounting for 68.9% of
overall losses. Based on a 12-year study of MODIS pantropical satellite data, the authors provide evidence that “the world’s tropical forests are a net carbon source of 425.2 ± 92.0 teragrams of carbon per year [425 Mt year−1]. This
net release of carbon consists of losses of 861.7 ± 80.2 Tg C year−1 and gains of 436.5 ± 31.0 Tg C year−1.”

(4) The ice–albedo feedback. Albedo is the fraction of solar energy that is reflected from the Earth into space. Because ice and snow are white, they have a high albedo. Ice surface
has an albedo of 90%, and, inversely, dark water has an albedo of less than 10%. Peter Wadhams (2015) estimates that “the effect of sea ice retreat in the Arctic alone has been to reduce the average albedo of the
Earth from 52% to 48% and this is the same as adding a quarter to the amount of the heating of the planet due to the GHG.”

(5) The potentially explosive magnitude and speed of GHG (CO2, CH4, and N2O) release into the atmosphere. This is due to forest fires, the melting of ice in shallow seabeds surrounding the Arctic Ocean (Wadhams 2016), and collapse of land permafrost that is thawing much faster than previously thought, often for the first time since the last glaciation (Turetsky et al. 2019).
8.2 The Arctic Methane Conundrum

These accelerators of global warming cannot be studied separately because they act in synergy. We will discuss,
however, only one aspect of these climate feedbacks: the ongoing processes that release carbon, especially methane (a gas that has been gaining visibility in research over the past two decades), into the atmosphere, at high
latitudes. Until the early 1970s, methane was considered to have no direct effect on the climate. Since then, many studies evidenced by Gavin Schmidt (2004) have shown the growing importance of methane as a powerful factor in
global warming.

Unlike the multi-secular permanence of CO2 (with a multi-millennial “long tail”; see Archer 2009), methane remains in the atmosphere for only about 9–12 years. However, its global warming potential (GWP) is much higher than
that of CO2. The IPCC Fifth Assessment (2013) updated methane’s GWP upward by 20%, making it up to 100 times higher than that of CO2 over a 5-year horizon, 72–86 times higher over a 20-year horizon, and 34 times higher over
a 100-year horizon. Furthermore, according to Maryam Etminan and colleagues (2016), the radiative forcing of methane between 1750 and 2011 is about 25% higher (increasing from 0.48 W/ m2–0.61 W/m2) than the value found
in the IPCC (2013) assessment.7 In the short term—as the next 20 years are the most important in the threshold situation we find ourselves in—the global warming potential of methane is, therefore, 72–100 times greater than
that of CO2.
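Because the time horizon matters so much for methane, a small conversion sketch may help (editorial, not from the source): it restates the GWP values quoted above as a CO2-equivalence calculation. The 10 Mt input is purely illustrative, chosen to match the order of the ESAS flux discussed later in the chapter.

```python
# Convert a methane quantity into CO2-equivalents at two time horizons, using
# the GWP values cited above (IPCC AR5: roughly 86 over 20 years, 34 over 100).
GWP_CH4 = {"20yr": 86, "100yr": 34}

def co2_equivalent_mt(ch4_mt, horizon="20yr"):
    """CO2-equivalent (Mt) of a methane quantity (Mt) over a given horizon."""
    return ch4_mt * GWP_CH4[horizon]

# Illustrative input only: a 10 Mt (10 Tg) CH4 release, within the range of the
# annual ESAS flux discussed later in the chapter (8-17 Tg per year).
for horizon in ("20yr", "100yr"):
    print(f"10 Mt CH4 ~ {co2_equivalent_mt(10, horizon):,} Mt CO2-eq over {horizon}")
```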

It is important to remember, moreover, that the greenhouse effect of methane does not cease completely after 12 years, since it is then oxidized and destroyed by hydroxyl radicals (OH*), being transformed (through various
mechanisms in the troposphere and the stratosphere) into CO2 and water vapor (H2O), two other greenhouse gases.8 Finally, and perhaps more importantly, methane and the warming of the atmosphere reinforce each other. As
Schmidt (2004) summarizes well, “methane rapidly increases in a warming climate with a small lag behind temperature. Therefore, not only does methane affect climate through greenhouse effects, but it in turn can evidently be
affected by climate itself.” The importance of this interaction was underlined by Joshua Dean and colleagues (2018), notably in ecosystems characterized by high carbon concentration, such as wetlands, marine and freshwater
systems, permafrost, and methane hydrates: “increased CH4 emissions from these systems would in turn induce further climate change, resulting in a positive climate feedback.” In this paper, Joshua Dean and colleagues quantify
the direct relationship between heating and the production of methane:
Experimental temperature rises of 10 °C have caused CH4 production to increase by 2 orders of magnitude, with 30 °C causing CH4 production to increase by as much as 6 orders of magnitude; short-term
temperature rises of this magnitude above zero are possible during the Arctic growing season and could become more common as regional air temperatures increase.

Methane atmospheric concentrations increased from about 720 ppb in the eighteenth century to 1870 ppb in September 2019, according to NOAA measurements. Figure 8.1 shows continued acceleration since 2007, after a 7-year
period of very slow growth.

Since 2014, methane concentrations have been rising faster than at any time in the past two decades, leading us to approach the most greenhouse gas-intensive scenarios (Saunois et al. 2016; Ed Dlugokencky, NOAA/ESRL 2019).9
An average annual growth in atmospheric methane concentrations of around 5 ppb would be sufficient for the warming produced solely by methane to squash any hope of maintaining a global warming average of 2 °C above the

pre-industrial period, the Paris Agreement’s least ambitious target (Nisbet et al. 2019). [[FIGURE 8.1 OMITTED]] According to NOAA data, between 2014 and 2019, methane atmospheric concentrations
had an annual average increase of over 9 ppb, against 5.1 ppb in the previous 5 years (2009–2013). This methane escalation was not expected at all in any of the IPCC future greenhouse gas scenarios and clearly raises extreme
concern, as anthropogenic methane emissions already caused a warming effect that is about one-third that of CO2 emissions (Myhre 2017). The current acceleration is probably a result of three complementary causes, identified by
Euan Nisbet and colleagues (2019) and then by Sara E. Mikaloff Fletcher and Hinrich Schaefer (2019):

(1) A surge in biogenic methane emissions, whether from wetlands, ruminants, waste, or all of these. The increase in ruminants is certainly an important factor, as “livestock inventories show that ruminant
emissions began to rise steeply around 2002 and can account for about half of the CH4 increase since 2007” (Fletcher and Schaefer 2019). The growth of livestock in Brazil, which accounts for almost 80% of the
original vegetation cover removal in the Amazon and the Cerrado (Meirelles 2005, 2014; De Sy 2015; Barbosa et al. 2015; Marques 2019; Azevedo 201910), plays a relevant role in this global increase in methane
emissions. In 2018, Brazil was the world’s largest exporter of beef, providing close to 20% of total global beef exports, and was the second largest commercial cattle herd in the world (232 million head in 2019).
Currently, Brazilian livestock occupies approximately 2.2 million km2, of which 700 thousand km2 are in the Brazilian Amazon (Barbosa et al. 2015). Methane emissions in Brazil, mainly from enteric fermentation
of cattle and from manure deposition in pastures, have grown 163% since 1970 (Azevedo et al. 2018). As is well-known, a slowdown in global warming implies a drastic reduction or abandonment of the
carnivorous diet, especially of cattle and sheep (see Chap. 12, Sect. 12.4 “Hypobiosphere: Functional and Nonfunctional Species to Man”).

(2) A decline, likely caused by anthropogenic pollution, in the amount of CH4 destroyed in the atmosphere through CH4 oxidation.

(3) A strong rise in methane emissions from fossil fuels. This third factor is not only due to the global increase in natural gas, oil, and coal consumption but also, and perhaps most importantly, to methane leaks in
all stages of the fossil fuel industry, notably during gas extraction through hydraulic fracturing, as well as methane leaks from active and abandoned coal mines, as discussed in Chap. 5 (Sect. 5.4 “Unconventional
Oil and Gas: Maximized Devastation”) (Alvarez et al. 2012; Tollefson et al. 2013; Pandey 2019). A crucial point in these processes must be highlighted: the possibility that at least part of the recent increase in
atmospheric methane concentrations is already due to feedback loops independent of human action. As noted above, methane rapidly increases in a warming climate (Schmidt 2004; Dean et al. 2018). From this
finding, Nisbet and colleagues (2019) advance the hypothesis that the acceleration of global warming since 2014 could be at the root of an increase in wetland CH4 production. In the words of Fletcher and
Schaefer (2019): “If natural wetlands, or changes in atmospheric chemistry, indeed accelerated the CH4 rise, it may be a climate feedback that humans have little hope of slowing.”

8.2.1 The Permafrost Carbon Feedback

In addition to the three sources of anthropogenic methane emission noted above, a fourth factor is the carbon release of rapidly melting permafrost. The process is well detected, but its current magnitude and, above all, its short-
term acceleration have been the subject of intense debate in the scientific community. Given the high global warming potential of methane, this uncertainty represents an important hindrance to the development of more reliable climate projections. Permafrost is defined as ground exposed to temperatures equal or less than zero for more than 2 consecutive years; “it is composed of soil, rock or sediment, often with large chunks of ice mixed in” (Turetsky et al. 2019). Permafrost covers 24% of the Northern Hemisphere’s land (23 million km2). Most of these soils have remained frozen since the last glaciation and may have depths of over 700 meters in some regions of northern Siberia and Canada (Schaefer et al. 2012). The estimate of Gustaf Hugelius and colleagues (2014) on the
amount of carbon stored globally in permafrost is slightly lower than previous estimates. In their study, soil organic carbon stocks were estimated separately for different depth ranges (0–0.3 m, 0–1 m, 1–2 m, and 2–3 m).

According to the authors, total estimated soil organic carbon storage for the permafrost region is around 1300 petagrams (uncertainty range of 1100–1500 Pg or Gt), of which around 500 Gt is in non-permafrost soils, seasonally thawed, while around 800 Gt is perennially frozen. These soils contain approximately twice the amount of carbon currently present in the atmosphere.
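As a sense of scale for these stocks, the sketch below is an editorial back-of-the-envelope, not part of the book: it converts hypothetical release fractions of the ~800 Gt perennially frozen pool into atmospheric CO2 increments. The 2.12 GtC-per-ppm conversion factor and the release fractions are my assumptions, and the calculation deliberately ignores sinks, timing, and the far more potent fraction emitted as CH4.

```python
# Rough editorial illustration: how much a partial release of the ~800 Gt of
# perennially frozen permafrost carbon could add to atmospheric CO2 if it were
# emitted as CO2 and stayed airborne. Ignores ocean/land sinks, the release
# timescale, and the methane share.
GT_C_PER_PPM = 2.124    # assumed standard conversion: ~2.12 GtC per ppm of CO2
FROZEN_STOCK_GT = 800   # perennially frozen carbon, per Hugelius et al. (2014)

for released_fraction in (0.05, 0.10, 0.25):
    ppm = FROZEN_STOCK_GT * released_fraction / GT_C_PER_PPM
    print(f"{released_fraction:.0%} of the frozen stock -> roughly +{ppm:.0f} ppm CO2")
```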
Permafrost status is monitored by two networks—Thermal State of Permafrost (TSP) and Circumpolar Active Layer Monitoring (CALM)—coordinated by the International Permafrost Association (IPA). About 7 years ago, a UNEP
report, titled Policy Implications of Warming Permafrost (Schaefer et al. 2012), warned:

Overall, these observations [from TSP and CALM] indicate that large-scale thawing of permafrost may have already started. [...] A global temperature increase of 3 °C means a 6 °C increase in the Arctic, resulting in
anywhere between 30 to 85% loss of near-surface permafrost. […] Carbon dioxide (CO2) and methane emissions from thawing permafrost could amplify warming due to anthropogenic greenhouse gas emissions.
This amplification is called the permafrost carbon feedback.

Since 2012, the permafrost carbon feedback has become more and more clear. Given the amplification of Arctic surface air temperature increase, average temperatures in some of its areas have already reached about 3 °C above
the 1961–1990 period. As we have seen in the previous chapter, other measurements show a 2.7 °C rise in the whole region’s average air temperature since 1971 (3.1 °C during the winter) (Box et al. 2019). This faster Arctic
warming (more than double the global average) is causing greater thawing of these soils, activating bacterial decomposition of the organic matter contained in them so that most of their carbon will be released into the
atmosphere, either in the form of CO2 (aerobic decomposition) or CH4 (anaerobic decomposition), exacerbating climate change. Christian Knoblauch and colleagues (2018) emphasize the importance of methane released in
bacterial decomposition in anoxic soils, which may contribute more substantially to global warming than previously assumed. Elizabeth M. Herndon (2018) accepts the results of research done by Knoblauch and colleagues and
reiterates that “CH4 production from anoxic (oxygen-free) systems may account for a higher proportion of global warming potential (GWP) than previously appreciated, surpassing contributions of CO2.” Guillaume Lamarche-
Gagnon and colleagues (2019) have provided “evidence from the Greenland ice sheet for the existence of large subglacial methane reserves, where production is not offset by local sinks and there is net export of methane to the
atmosphere during the summer melt season.”

8.2.2 Methane Hydrates or Clathrates: The Thawing of Subsea Permafrost


Joshua Dean and colleagues (2018) assert that “methane hydrates11 are not expected to contribute significantly to global CH4 emissions in the near or long-term future.” Although shared by the IPCC (AR4 2007)12 and by many
other scientists,13 this assumption has been challenged by Natalia Shakhova, Igor Semiletov, Evgeny Chuvilin, Paul Beckwith, John Nissen, Peter Wadhams, Gail Whiteman, and Chris Hope, among many other climate scientists and Arctic specialists, in several articles and public interventions.14 According to these authors, there is an increasing risk of irreversible destabilization of sediments in the Arctic Ocean’s seafloor, with a potentially disastrous release of methane into the atmosphere in the
horizon of a few decades or even abruptly. There is also consensus among them that the greatest and most imminent danger comes from the East Siberian Arctic Shelf (ESAS), the most extensive continental shelf in the
world’s oceans, encompassing the Laptev Sea, the East Siberian Sea, and the Russian part of the Chukchi Sea. Its mean depth is 50 meters, and more than 75% of its area of 2.1 million km2 is less than 40 meters deep. In the past,
sea ice protected the ESAS seabed from solar radiation. But that was the past. The Arctic sea ice is currently 40% smaller than it was only 40 years ago (Urban 2020). Figure 8.2 shows the extent of the Arctic sea ice retreat in
October 2019, since 1979.

As can be seen from this chart, monthly sea ice extent reached a record low in October 2019. According to the NSIDC, “the linear rate of sea ice decline for October is 81,400 square kilometers (31,400 square miles) per year, or
9.8% per decade relative to the 1981–2010 average.” The 13 lowest sea ice extents all occurred in the last 13 years. Arctic sea ice extent averaged for October 2019 was 5.66 million km2 (2.19 million square miles), the lowest in the
41-year continuous satellite record. This was 230,000 km2 (88,800 square miles) below that observed in 2012— the previous record low for the month—and 2.69 million square kilometers (1.04 million square miles) below the
1981–2010 average.

[[FIGURE 8.2 OMITTED]]


During this period, there were sharp declines in the summers of 1989, 1997–1999, 2007, 2012, and 2019.
The date of the first so-called Blue Ocean Event (BOE), that is, the moment when the extent of the Arctic sea ice in September will fall below one million square kilometers, is uncertain. According to Mark Urban (2020), “Arctic summers could become mostly ice-free in 30 years, and possibly sooner if current trends continue.” How much sooner is still open to debate. What is certain is that the first BOE looks more and more imminent and that the ESAS has become increasingly exposed, for several months, to solar radiation, which obviously accelerates its heating. It
has warmed by up to 17 °C, and the thawing of the subsea permafrost is already releasing methane in annual quantities
no smaller than those of terrestrial Arctic ecosystems . The current atmospheric CH4 emissions from the ESAS are estimated to be between 8 and 17 teragrams (or
million tons) annually (Shakhova et al. 2017, 2019). Peter Wadhams never ceases to warn us of the particular importance of this fact, as we can see, for example, in his book from 2016: “We must remember—many scientists, alas,
forget— that it is only since 2005 that substantial summer open water has existed on Arctic shelves, so we are in an entirely new situation with a new melt phenomenon taking place.”

Already in 2008, a scientific expedition aboard the Russian ship Jacob Smirnitskyi recorded, for the first time, large amounts of methane release from the ESAS seafloor. Orjan Gustafsson wrote about this (Connor 2008):

An extensive area of intense methane release was found. At earlier sites we had found elevated levels of dissolved methane. Yesterday, for the first time, we documented a field where the release was so intense
that the methane did not have time to dissolve into the seawater but was rising as methane bubbles to the sea surface. These ‘methane chimneys’ were documented on echo sounder and with seismic
instruments.

Flying over the Arctic at latitudes as far as 82° north, Eric Kort and colleagues (2012) found significant amounts of methane being released from the ocean into the atmosphere over open leads and regions with fractional sea ice
cover:

We estimate that sea–air fluxes amount to around 2 mg d−1 m−2, comparable to emissions seen on the Siberian shelf. We suggest that the surface waters of the Arctic Ocean represent a potentially important
source of methane, which could prove sensitive to changes in sea-ice cover.

The estimate of Eric Kort and his colleagues, reported by Steve Connor (2012), is that the quantities of methane observed could be large enough to affect the global climate: “We were surprised to see these enhanced methane
levels at these high latitudes. Our observations really point to the ocean surface as the source, which was not what we had expected.” This ebullition of methane is now observed through the Arctic Ocean’s water column15 even in
winter, through flaw polynyas (areas of open water surrounded by sea ice) which increased by up to five times during the last decades (Shakhova et al. 2019).

In addition to Eric Kort and his team, many other scientists warn that current amounts of methane incorporated into the atmosphere from the Arctic seabed are already sufficient to strongly interfere with the pace of global
warming. According to Peter Wadhams (2016), the global feedback arising from the Arctic snow and ice retreats is already adding 50% to the warming which results from the addition of CO2:

We have reached the point at which we should no longer simply say that adding CO2 to the atmosphere is warming our planet. Instead we have to say that the CO2 which we have added to the atmosphere has
already warmed our planet to the point where ice/snow feedback processes are themselves increasing the effect by a further 50%. We are not far from the moment when the feedbacks will themselves be driving
the change – that is, we will not need to add more CO2 to the atmosphere at all, but will get the warming anyway. This is a stage called runaway warming, which is possibly what led to the transformation of Venus
into a hot, dry, dead world.

Mark Urban (2020) estimates that “once dark water replaces brilliant ice, Earth could warm substantially, equivalent to the warming triggered by the additional release of a trillion tons of carbon dioxide into the atmosphere.” There seem to be strong arguments estimating that the
current level of interference from Arctic methane release, and in particular from ESAS, is still, however, very small compared to its immense potential for accelerating global warming. The ESAS became subsea permafrost 12 or 13
millennia ago at the end of the last glaciation, but during the Last Glacial Maximum (from 26.5 to 19 ka), sea level was more than 100 meters lower than it is today (Englander 2014), and the entire shelf area was exposed above sea
level, allowing for the accumulation of sediments of high organic content. It is estimated that shallow methane hydrate deposits currently occupy about 57% (1.25 million km2) of the ESAS’ seabed and that they could preserve
more than 1400 Gt of methane, which makes this region the largest and most vulnerable store of subsea CH4 in the world (Wadhams 2016; Shakhova et al. 2019).

8.2.3 A Slow-Motion Time Bomb or a Methane and CO2 Burst?

If all these observations suggest that the release of CO2 and methane will continue to increase in the Arctic, exacerbating warming, the rate of increase is still uncertain, especially on the horizon of this century. Ted Schuur created
the metaphor “slow-motion time bomb,” exploding initially in a way that is imperceptible (Borenstein 2006). The expression was reused in the film Death Spiral and the Methane Time Bomb (2012). Slow motion because, according
to Schuur and colleagues (2015):

At the proposed rates, the observed and projected emissions of CH4 and CO2 from thawing permafrost are unlikely to cause abrupt climate change over a period of a few years to a decade. Instead, permafrost
carbon emissions are likely to be felt over decades to centuries as northern regions warm, making climate change happen faster than we would expect on the basis of projected emissions from human activities
alone.

But, some studies in recent years project a melting of the permafrost (land and subsea) that is faster or much faster than previously supposed. Kevin Schaefer and colleagues (2011), for instance:

predict that the permafrost carbon feedback will change the Arctic from a carbon sink to a source after the mid-2020s and is strong enough to cancel 42–88% of the total global land sink. The thaw and decay of
permafrost carbon is irreversible and accounting for the permafrost carbon feedback (PCF) will require larger reductions in fossil fuel emissions to reach a target atmospheric CO2 concentration.

An abrupt thaw in terrestrial permafrost could be initiated or exacerbated, for example, by wildfires or through thermokarst
processes.16 Fires in the boreal regions (above 50°N) reduce the albedo by covering the reflective white snow with black soot that absorbs sunlight, accelerating permafrost melting, bacterial activity, and the subsequent release of CO2 and methane into the atmosphere. James L. Partain Jr. and colleagues (2015) estimate that climate change in Alaska has increased the risk of a fire year as severe as 2015 by 34%–
60%. Alaska wildfires have increased dramatically since 1990 (Sanford et al. 2015), and fires in this state have spread for more than five thousand square kilometers only in June and July 2019, the hottest 2 months on record. In
these 2 months, satellites detected more than 100 wildfires raging above the Arctic Circle. Clare Nullis from the WMO declared that in June 2019 alone, Arctic fires emitted 50 Mt of CO2 into the atmosphere: “this is more than was
released by Arctic fires in the same month between 2010 and 2018 combined” (WMO 2019). The magnitude of these wildfires is unprecedented, and, more importantly, they have ignited peat soils, which burn deeper in the
ground and release huge quantities of methane and CO2.

Regarding thermokarst processes and the formation of thaw lakes in which seeps and ebullition (bubbling) of methane are created, more than 150,000 seeps have already been identified (2019). Already in the beginning of the
century, measurements of methane emissions from thaw lakes in northern Siberia, especially through ebullition, were much higher than those recorded previously. Katey Walter and colleagues (2006) showed that “ebullition
accounts for 95% of methane emissions from these lakes, and that methane flux from thaw lakes in our study region may be five times higher than previously estimated.” Walter warned that the effects of those emissions “can be
huge. It’s coming out a lot and there’s a lot more to come out” (Borenstein 2006). Merritt Turetsky and colleagues (2019) claim that current models of GHG release are mistaken in assuming that the permafrost thaws gradually
from the surface downward:

Frozen soil doesn’t just lock up carbon — it physically holds the landscape together. Across the Arctic and Boreal
regions, permafrost is collapsing suddenly as pockets of ice within it melt. Instead of a few centimetres of soil thawing each year, several metres of soil can become destabilized within days or weeks. The land can sink and be inundated by swelling lakes and wetlands. (…) Permafrost is thawing much more quickly than models have
predicted, with unknown consequences for greenhouse-gas release.

The authors estimate that abrupt permafrost thawing will occur in less than 20% of frozen land and could release between 60 and 100 Gt of carbon by 2300. They also project that another 200 Gt of carbon will be released in other
regions that will thaw gradually. However, given that abrupt processes release more carbon per square meter—and particularly more methane—than does gradual thaw, “the climate impacts of the two processes will be similar.
So, together, the impacts of thawing permafrost on Earth’s climate could be twice that expected from current models.”
Let us come back to the ESAS subsea permafrost. Shakhova et al. (2019) observe that there is “a potential for possible massive/abrupt release of CH4, whether from destabilizing hydrates or from free gas accumulations beneath permafrost; such a release requires only a trigger.” This trigger can occur at any moment because, as the authors remind us:

The ESAS is a tectonically and seismically active area of the world ocean. During seismic events, a large amount of over-pressurized gas can be delivered to
the water column, not only via existing gas migration pathways, but also through permafrost breaks.

Although ESAS is possibly the biggest problem, acceleration in the speed of methane release has also been observed from other Arctic continental shelves, such as the Kara Sea shelf. Irina Streletskaya and colleagues (2018) studied
the methane content in ground ice and sediments of the Kara seacoast. The study states that “ permafrost degradation due to climate change will be exacerbated along the coasts where declining sea ice is likely to result in
accelerated rates of coastal erosion (…), further releasing the methane which is not yet accounted for in the models.” Likewise, Alexey Portnov and colleagues (2013, 2014) measured methane release in the South Kara Sea shelf
and in the West Yamal shelf, in northeast Siberia. In their 2013 report, they show that “this Arctic shelf region where seafloor gas release is widespread suggests that permafrost has degraded more significantly than previously
thought.” Their studies provide an example of another Arctic marine shelf where seafloor gas release is widespread and where permafrost degradation is an ongoing process. Commenting on both of these studies, Brian Stallard
cites the following projection proposed by Portnov: “If the temperature of the oceans increases by two degrees as suggested by some reports, it will accelerate the thawing to the extreme. A warming climate could lead to an
explosive gas release from the shallow areas.”17

Natalia Shakhova and Igor Semiletov fear there will be an escape of 50 Gt of methane into the atmosphere in the short-term only from ESAS. For both of
them, a vast methane belch is “highly possible at any time” (Pearce 2013; Mascarelli 2009). Whiteman et al. (2013) reinforce this perception: “a 50-gigatonne (Gt) reservoir of methane, stored in the form of hydrates, exists on the
East Siberian Arctic Shelf. It is likely to be emitted as the seabed warms, either steadily over 50 years or suddenly.” Peter Wadhams calculates that in case of a decade-long pulse of 50 GtCH4, “the extra temperature due to the

methane by 2040 is 0.6°C, a substantial extra contribution.” And he adds: “although the peak of 0.6 °C is reached 25 years after emissions begin, a rise of 0.3–0.4°C occurs within a very few years.” The likelihood that such a methane pulse will occur is considerable, according to Wadhams. If it does, it will accelerate positive feedback loops that can release even
more methane into the atmosphere at an exponential rate of progression, leading to warming that is
completely outside even the most pessimistic projections. This is, in any case, the reason for the various interventions of the Arctic Methane Emergency Group
(AMEG), a UK-based group of scientists, who affirm (2012) that18:

The tendency among scientists and the media has been to ignore or understate the seriousness of the situation in the Arctic. AMEG is using best available evidence and best available explanations for the processes
at work. These processes include a number of vicious cycles which are growing in power exponentially, involving ocean, atmosphere, sea ice, snow, permafrost and methane. If these cycles are allowed to

continue, the end result will be runaway global warming.

The “runaway global warming” conjecture, feared by the AMEG scientists, but rejected by the IPCC,19 would be able to lead the Earth toward conditions that prevail today in Venus. This conjecture may be interesting from a strictly scientific point of view, but it is totally useless from the point of view of the fate of vertebrates and forests, because both would cease to exist under conditions that are much less extreme. The scenario of warming above 3 °C over the next few decades, defined as “catastrophic” (Xu and Ramanathan 2017),
and the collapse of biodiversity already underway, addressed in Chaps. 10 and 11, will possibly be enough to cross the tipping
points conducive to a “Hothouse Earth” pathway, as described above by Will Steffen and colleagues (2018). When Gaia Vince (2011) asked what the chances of survival of
our young species would be in such circumstances, Chris Stringer, a paleoanthropologist at London’s Natural History Museum, affirmed:

One of the most worrying things is permafrost melting. If it continues to melt as we think it’s
already doing in some regions, we may well have a runaway greenhouse effect. We’re also very
dependent on a few staple crops, such as wheat and rice. If they get hit by climate change, we’re
in trouble. We’re medium- to large-size mammals, we take a long time to grow up, we only
produce one child at a time and we’re demanding of our environment — this type of mammal is
the most vulnerable. So, no, we’re not immune from extinction. (...) The danger is that climate change
will drive us into pockets with the best remaining environments. The worst-case scenario could
be that everyone disappears except those who survive near the North and South poles — maybe
a few hundred million, if the environment still supports them — and those will be the remaining
humans. The problem is that once you’ve got humans isolated in small areas, they are much
more vulnerable to extinction through chance events.
Perhaps the most privileged part of the human species will be able to adapt to the consequences of a drastic shrinking of the cryosphere, with global average warming above 5 °C and widespread degradation of the biosphere. At
bay, in high latitudes, however, they will live in a terribly hostile world, one that is tragically depleted of animal and plant life and certainly unrelated to the organized societies of our times.

8.3 Higher Rises in Sea Level

“I don’t think 10 years ago scientists realized just how quickly the potential for rapid sea level rise was,” affirmed Maureen Raymo, Director of the Lamont–Doherty Core Repository at the Lamont–Doherty Earth Observatory of
Columbia University (Lieberman 2016). The rise in sea level is another central factor in the destabilization of contemporary societies. It threatens coastal ecosystems, urban and transportation infrastructure, and many nuclear
power plants, in addition to flooding and salinizing aquifers and deltas that are suitable for agriculture. As we shall see, the IPCC AR5 (2013) projections have not captured the increasing speed of this process because their
predictions of sea level rise do not include the contribution of melting ice sheets, which has emerged as the main driver of this. Sea level rise is linked to two major factors driven by climate change; both are in acceleration: (1)
thermal expansion and (2) melting glaciers and the loss of Greenland’s and Antarctica’s ice sheets. According to GISS/NASA, measurements derived from coastal tide gauge data and, since 1993, from satellite altimeter data indicate
that between 1880 and 2013, global mean sea level (GMSL) rose by 22.6 cm, or an average of 1.6 mm per year over 134 years. Between 1993 and 2017 alone, GMSL rose by more than 7 centimeters. “This rate of sea-level rise is
expected to accelerate as the melting of the ice sheets and ocean heat content increases as GHG concentrations rise” (Nerem et al. 2018). And indeed, according to Habib-Boubacar Dieng and colleagues (2017), GMSL rise since the
mid-2000s shows significant increase compared to the 1993–2004 time span. Since at least 2007 (Livina and Lenton 2013), the thawing of the cryosphere reached a tipping point, entering into a phase of acceleration and
irreversibility.

8.3.1 An Average Rise of 5 mm per Year That Is Accelerating


The rise in sea level doubled or tripled in the years following 1993, when compared to the rise observed during most of the twentieth century. Whether it doubled or tripled depends on the somewhat uncertain estimates of the
pace of sea level rise in the last century. For Sönke Dangendorf and colleagues (2017), GMSL rise before 1990 was 1.1 (+/−0.3) mm per year, while from 1993 to 2012, it was 3.1 (+/−1.4) mm per year. In this case, the speed of sea
level rise almost tripled when compared to that of the twentieth century. According to NOAA, “the pace of global sea level rise doubled from 1.7 mm/year throughout most of the twentieth century to 3.4 mm/ year since 1993”
(Lindsey 2017). Steven Nerem and colleagues (2018) estimate the climate change-driven acceleration of GMSL rise over the last 25 years to be 0.084 ± 0.025 mm/y2. Finally, Fig. 8.3 shows an increase by a factor of more than seven
in the pace of mean sea level rise between 1900–1930 (0.6 mm/year) and 2010–2015 (4.4 mm/year).

And John Englander (2019) finally shows a mean rise of 5 mm per year between 2012 and 2017.
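The rate-plus-acceleration figures above can be turned into a crude extrapolation; the sketch below (editorial, not from the source) does so under two explicit assumptions: a starting rate of about 3 mm per year around 2005 and a constant acceleration equal to the Nerem et al. (2018) value.

```python
# Crude quadratic extrapolation of GMSL using the satellite-era figures quoted
# above: a rate of roughly 3 mm/yr (assumed here as the mid-2000s value) plus
# the Nerem et al. (2018) acceleration of 0.084 mm/yr^2, held constant. It
# deliberately excludes any further acceleration from ice-sheet dynamics.
RATE_MM_PER_YR = 3.0
ACCEL_MM_PER_YR2 = 0.084

def gmsl_rise_cm(years_ahead):
    """GMSL rise (cm) after `years_ahead` years of constant acceleration."""
    mm = RATE_MM_PER_YR * years_ahead + 0.5 * ACCEL_MM_PER_YR2 * years_ahead ** 2
    return mm / 10

for year in (2050, 2100):
    print(f"{year}: ~{gmsl_rise_cm(year - 2005):.0f} cm above the mid-2000s level")
```

Even this simple compounding more than doubles what a constant-rate projection gives by 2100, and it still leaves out the ice-sheet feedbacks that Hansen and colleagues emphasize below.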

8.3.2 Greenland

“Since the early 1990s, mass loss from the Greenland Ice Sheet has contributed approximately 10% of the observed global mean sea level rise” (McMillan et al. 2016). The second-largest ice sheet in the world is melting at a
breakneck speed. At its highest point, this ice sheet still stands more than 3000 meters above sea level. GMSL would rise to 7.4 meters, according to the IMBIE team (2019), should it all melt and run off into the ocean. This could

occur within a millennium in the absence of drastic reductions in GHG emissions (Aschwanden et al. 2019). [[FIGURE 8.3 OMITTED]] Greenland lost about 3.8 trillion tons of ice (1 trillion ton = 400,000,000 Olympic Pools) between 1992 and 2018, causing GMSL to rise by 10–11 mm (it takes about 360 Gt of ice loss to raise GMSL 1 mm). This loss has raised GMSL by 13.7 mm since 1972, half during the last 8 years (Mouginot et al. 2019), and will likely raise GMSL between 2.5 cm and 10 cm by 2050 (Merzdorf 2019). The pace of ice loss is much higher than predicted by the models, and the observed acceleration is stunning. It has risen from 33 Gt a year in
the 1990s to 254 Gt a year in the 2010s, according to the IMBIE team (2019). There are two other slightly larger estimates of ice loss (Bevis et al. 2019; Mouginot et al. 2019). In any case, Greenland lost almost 200 Gt of ice in July
2019 alone in the wake of one of the largest heat waves in recorded climate history. Isabella Velicogna and colleagues (2020) recorded “a mass loss of 600 Gt by the end of August 2019, which is comparable to the mass loss in the
warm summer of 2012.” The island crossed one or more tipping points around 2002 or 2003. Rain is becoming more frequent, and it is melting Greenland’s ice even in the dead of winter (Fox 2019). Mostly important, Andy
Aschwanden and colleagues (2019) have shown that warmer waters along the west coast of Greenland “led to a disintegration of buttressing floating ice tongues, which triggered a positive feedback between retreat, thinning, and
outlet glacier acceleration,” a process they call outlet glacier-acceleration feedback.
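The 360-Gt-per-millimetre conversion quoted above makes these Greenland figures easy to check; the sketch below (editorial, not from the source) simply reproduces the arithmetic.

```python
# Check of the Greenland arithmetic using the conversion quoted above:
# ~360 Gt of ice loss raises global mean sea level (GMSL) by about 1 mm.
GT_PER_MM = 360

def gmsl_rise_mm(ice_loss_gt):
    return ice_loss_gt / GT_PER_MM

# Figures quoted in the passage: ~3,800 Gt lost between 1992 and 2018, and a
# loss rate that rose from ~33 Gt/yr in the 1990s to ~254 Gt/yr in the 2010s.
print(f"3,800 Gt in total -> ~{gmsl_rise_mm(3800):.1f} mm of GMSL rise")
print(f"   33 Gt per year -> ~{gmsl_rise_mm(33):.2f} mm/yr from Greenland alone")
print(f"  254 Gt per year -> ~{gmsl_rise_mm(254):.2f} mm/yr from Greenland alone")
```

The same conversion shows why the acceleration matters: at the 2010s rate, Greenland alone contributes roughly 0.7 mm per year, about a fifth of the total satellite-era rise discussed in the previous section.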

8.3.3 Antarctica

As is well-known, the Antarctic thaw has a potential, in the long run, to cause a rise in sea level of about 58 meters. The continent lost 2720 (+/− 1390) Gt of ice between 1992 and 2017, leading to a rise in sea level of 7.6 (+/− 3.9)
millimeters, with a marked increase in ice loss in recent years. In fact, the pace of glacial loss in Antarctica almost tripled since 2012, jumping from 76 Gt in 2012 to 219 Gt in 2017. In that period alone (2012–2017), Antarctic thaw
increased global sea levels by 3 millimeters (Shepherd and the IMBIE team 2018). According to the World Meteorological Organization, about 87% of the glaciers along Antarctica’s west coast have retreated over the past 50 years.
Almost a quarter of its glaciers, among them the huge Thwaites and Pine Island Glaciers, can now be considered unstable. More recently, the glaciers spanning an eighth of the East Antarctica coastline are also being melted by
warming seas. In February 2020, Antarctica logged its hottest temperatures on record, 18.3 °C and 20.75 °C (at Seymour Island), beating the previous record (2015) by more than 1 °C. If confirmed by the World Meteorological
Organization, this temperature anomaly will be the first 20 °C-plus temperature measurement for the entire region south of 60° latitude (on July 2019, the Arctic region also hit its own record temperature of 21 °C). As pointed out
by James Renwick, “it’s a sign of the warming that has been happening there that’s much faster than the global average” (Readfern 2020). Thawing in parts of the West Antarctic Ice Sheet has already put it in a stage of collapse and
the retreat is unstoppable (Joughin et al. 2014; Paolo et al. 2015; DeConto and Pollard 2016). Pietro Milillo and colleagues (2019) measured the evolution of ice velocity, ice thinning, and grounding line retreat of the huge Thwaites
Glacier (around 170 thousand km2) from 1992 to 2017. The Thwaites Glacier is melting very fast and is currently responsible for approximately 4% of global sea level rise. It contains enough ice to raise sea levels by about 60 cm.
When it breaks off into the Amundsen Sea Embayment, it will probably trigger the collapse of five other nearby glaciers that are also melting (Pine Island, Haynes, Pope, Smith and Kohler). Together, they “have three meters of sea
level locked up in them” (Englander 2019).

8.3.4 Projections of 2 Meters or More by 2100

According to Qin Dahe, Co-Chair of the IPCC-AR5 (2013), “as the ocean warms, and glaciers and ice sheets reduce, global mean sea level will continue to rise, but at a faster rate than we have experienced over the past 40 years.”
How much faster this rise will be is still uncertain. As stated above, the IPCC (2013) predictions about sea level rise until 2100 (26–98 cm) have long been considered too conservative (Rahmstorf et al. 2012). Anders Levermann,

from the Potsdam Institute for Climate Impact Research (2013), calculates an impact of 2.3 meters for every 1 °C rise in average global temperatures.20 [[FIGURE 8.4 OMITTED]] For its part, NOAA
stated in 2012 that GMSL should increase by at least 20 centimeters and not more than 2 meters by 2100, considering the 1992 average level as a starting point.21 Figure 8.4 shows these two extreme possibilities and their two
intermediary variables.
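Levermann's 2.3-metres-per-degree figure is easiest to read as a simple multiplication; the snippet below (editorial, not from the source) applies it to warming levels discussed throughout the chapter, treating it, as Levermann's analysis does, as a commitment realized over centuries to millennia rather than by 2100.

```python
# Long-run sea level commitment implied by the Levermann (2013) figure quoted
# above: roughly 2.3 m of eventual rise per 1 C of sustained warming.
M_PER_DEGREE = 2.3

for warming_c in (1.5, 2.0, 3.0):
    print(f"{warming_c} C sustained -> ~{warming_c * M_PER_DEGREE:.1f} m of committed rise")
```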

In 2013, a GMSL of more than 1 meter throughout the century was no longer considered unlikely, given the acceleration of the thawing of Greenland and West Antarctica. But in 2017, NOAA redid its estimates and raised the upper
limit of estimated mean sea level rise to 2.5 meters and the lower limit to 30 centimeters by 2100. Table 8.1 discriminates the median values of GMSL in each decade up to 2100 and successively in 2120, 2150, and 2200, according
to six scenarios.

Five Observations:

(1) The Low and Intermediate-Low scenarios are no longer plausible because they assume no acceleration in GMSL rise (constant rise at rates of 3 and 5 millimeters per year by 2010, respectively, as per Table 8.2).

(2) The High scenario is the most likely, maintaining the current pace of RCP8.5 W/ m2, as defined by the IPCC.

(3) “New evidence regarding the Antarctic ice sheet, if sustained, may significantly increase the probability of the Intermediate-High, High, and Extreme scenarios, particularly for RCP8.5 projections” (Sweet et al.
2017, p. 21).

(4) By 2030, there should be a GMSL rise between 16 cm and 24 cm above 2000 levels. This is an average of 20 cm, almost equivalent to the 22.6 cm rise observed between 1880 and 2013, as seen above. In the

2020s alone, GMSL is expected to rise between 6 and 13 cm, which is enough to cause recurrent and potentially catastrophic flooding in various cities around the world. [[TABLE 8.1 OMITTED]] [[TABLE 8.2 OMITTED]]

(5) As shown in the table below, the speed of GMSL rise is multiplied by factors between 3 (intermediate) and 7.3 (extreme) by 2100.

In certain regions, these rises are much higher than the global average and are already equivalent to the IPCC projections for the 2081–2100 period. In New York, the US Federal Emergency Management Agency (FEMA) predicts
elevations of 25 centimeters by 2020, 76 centimeters in the 2050s, 147 centimeters in the 2080s, and 191 cm in the first decade of the twenty-second century.22 According to Andrew Shepherd, “around Brooklyn you get flooding
once a year or so, but if you raise sea level by 15 cm then that’s going to happen 20 times a year” (Pierre-Louis 2018).

8.3.5 A New Projection: “Several Meters Over a Timescale of 50–150 Years”

Scary as they may be, the above reported NOAA projections (Sweet et al. 2017) were largely surpassed by a new analysis (Hansen et al. 2016), based primarily on two arguments:

(1) Information provided by paleoclimatology: “Earth is now as warm as it was during the prior interglacial period (Eemian), when sea level reached 6–9 m higher than today” (Hansen et al. 2016).

(2) The amplifying feedbacks caused by the ongoing thawing in Antarctica and Greenland, which should increase subsurface ocean warming and mass loss of ice sheets. According to the authors, because of these
feedbacks, the ice sheets that come into contact with the ocean are vulnerable to accelerating disintegration, causing much bigger GMSL increases on a much smaller timescale:

We hypothesize that ice mass loss from the most vulnerable ice, sufficient to raise sea level several meters, is better approximated as exponential than by a more linear response. Doubling times
of 10, 20 or 40 years yield multi-meter sea level rise in about 50, 100 or 200 years. (…) These climate feedbacks aid interpretation of events late in the prior interglacial [Eemian], when sea level
rose to +6–9 m with evidence of extreme storms while Earth was less than 1 °C warmer than today.
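To see why the doubling time dominates Hansen's numbers, here is an editorial sketch (not from the paper) that integrates an exponentially growing ice-melt contribution and reports when the cumulative rise first passes two metres. The initial melt rate is an assumption on my part, so the exact years shift with it, but the ordering and the nonlinearity are the point.

```python
import math

# Hansen and colleagues argue that ice-sheet mass loss is better approximated
# as exponential than linear. This editorial sketch asks: if the ice-melt
# contribution to GMSL doubles every `td` years, when does the cumulative rise
# pass a "multi-meter" threshold? The starting rate is an illustrative
# assumption, not a figure from the paper; a larger start shortens every result.
INITIAL_RATE_MM_PER_YR = 1.0
THRESHOLD_M = 2.0

def years_to_reach(threshold_m, doubling_time_yr, r0=INITIAL_RATE_MM_PER_YR):
    """Years until cumulative rise reaches `threshold_m` under exponential growth."""
    k = math.log(2) / doubling_time_yr          # continuous growth rate
    # cumulative(t) = (r0 / k) * (exp(k * t) - 1), in millimetres
    return math.log(threshold_m * 1000 * k / r0 + 1) / k

for td in (10, 20, 40):
    print(f"doubling every {td:2d} yr -> ~{years_to_reach(THRESHOLD_M, td):.0f} "
          f"years to reach {THRESHOLD_M} m")
```

Under exponential growth most of the rise arrives in the last doubling or two, which is how "several meters over a timescale of 50–150 years" can coexist with comparatively modest rise by mid-century.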

This projection, put forth by 18 scientists and coordinated by James Hansen, is very similar to and is reinforced by the Hothouse Earth hypothesis (Steffen et al. 2018) discussed above. Both studies consider an average global
warming of “only” 2 °C above the pre-industrial period as a planetary threshold that, if crossed, can trigger climate feedback loops and nonlinear climate responses. In fact, James Hansen and colleagues warn that “the modeling,
paleoclimate evidence, and ongoing observations together imply that 2°C global warming above the pre-industrial level could be dangerous.” They predict that this level of warming will cause:

(1) Cooling of the Southern Ocean, especially in the Western Hemisphere

(2) Slowing of the Southern Ocean overturning circulation, warming of ice shelves, and growing mass loss of ice sheets

(3) Slowdown and eventual shutdown of the Atlantic overturning circulation with cooling of the North Atlantic region
(4) Increasingly powerful storms

(5) Nonlinearly growing sea level rise, reaching several meters over a timescale of 50–150 years

8.3.6 Climate Refugees

In a video about this study, James Hansen discusses some implications of climate change and rising sea levels. He underlines the interactions between ocean warming and the melting of the Antarctic and Greenland ice sheets, in
addition to analyzing the positive feedback loops at play in global warming, sea level rise, and extreme weather events, such as “superstorms stronger than any in modern times” and “powerful enough for giant waves to toss 1000
ton megaboulders onto the shore in the Bahamas.” These feedbacks, he states:

raise questions about how soon we will pass points of no return in which we lock in consequences that cannot be reversed on any timescale that people care about. Consequences include sea level rise of several
meters, which we estimate could occur this century, or at latest next century, if fossil fuel emissions continue at a high level. (…) The IPCC does not report these effects for two reasons. First, most models used by
IPCC exclude ice melt. Second, we conclude that most models, ours included, are less sensitive than the real world to the added fresh water.

There is no need to stress the obvious impacts of these combined processes of sea level rise, larger and more extreme hurricanes, and superstorms capable of flooding thousands of square kilometers in the most densely populated coastal regions. The first of these impacts is the emergence of hundreds of millions of people who may fall into a new category of so-called “natural” disaster victims, the climate refugees, a term
that does not exist yet in international law. According to a Norwegian Refugee Council (NRC) report, between 2008 and 2013, “natural” disasters caused, on average, the forced displacement of 27.5 million people per year. In 2013
alone, about 22 million people in at least 119 countries were displaced from their homes because of these disasters; this is triple the number of displaced people due to armed conflict in the same period (Yonetani et al. 2014). In
the past two decades, the global population of displaced people jumped from 33.9 million in 1997 to 65.6 million in 2016, the third consecutive year in which the number of refugees has broken the all-time high, according to the
United Nations High Commissioner for Refugees (UNHCR). We present further data from the UNHCR report, Global Trends: Forced Displacement in 2016. In 2016, 20 people were forced to leave their homes every minute. These forced displacements resulted in 22.5 million refugees (50% of them under the age of 18), the highest number on record, with 40.3 million moving involuntarily within the borders of their own countries. Just a decade ago, 1 in 160 people was a refugee. In 2016, 1 in 113 people was condemned to this condition. In 2016, asylum seekers totaled 2.8 million people in 164 countries.23 The UNHCR’s report Climate Change and Disaster Displacement in
2017 states that “between 2008 and 2015, 203.4 million people were displaced by disasters, and the likelihood of being displaced by disasters has doubled since the 1970s.”

Not all these more than 203 million displaced people can be considered climate refugees, of course, but, as stated in the same document, “climate change is also a threat multiplier, and may exacerbate
conflict over depleted resources.” Indeed, given that many wars and armed conflicts are fueled and/or intensified by climate
problems, such as droughts, water depletion, food shortages, floods, and hurricanes, one must consider the “hidden” weight of
environmental crises in intensifying conflicts that are predominantly political, ethnic, and religious in nature. In any case, the 2017 “Lancet Countdown” states that “annual weather-related disasters have increased by
46% from 2000 to 2013.” Referring to the next three decades, a press release titled “Sustainability, Stability, Security,” published in 2016 by the UN Convention to Combat Desertification (UNCCD) for an African inter-ministerial
meeting at COP22, reminds us that more than 90% of Africa’s economy depends on a climate-sensitive natural resource, like rain-fed, subsistence agriculture, and issues a very clear warning: “unless we change the way we manage
our land, in the next 30 years we may leave a billion or more vulnerable poor people with little choice but to fight or flee.”

8.3.7 Consequences of a Rise in Sea Level of Up to 1 Meter

(1) A GMSL rise of only about 50 cm by 2050 (Sweet et al. 2017: Intermediate-High = 44 cm; High = 54 cm) relative to 2000 will cause the forced migration of over 40 million people, according to simulations
proposed by the NGO GlobalFloodMap.org. Several points on the coast of Africa (almost five million people), Europe (almost six million people), and Asia (14 million people) will be particularly affected. The case of
Bangladesh, with a population of 153 million concentrated in just 144,000 km2, is one of the most serious, since two-thirds of its land is less than 5 meters above the current sea level. According to UN projections,
by 2050 Bangladesh could lose 40% of its arable land. Hasan Mahmud, the country’s Minister for Environment, Forest and Climate Change, told Le Monde that “the sea level in the Gulf of Bengal has already risen
and if scientists’ predictions are confirmed, 30 million people are expected to flee their lands by the end of the century.” (Bouissou 2013).

(2) The next scenario entails a rise in sea level between 60 cm and 1.3 m between 2060 and 2080 (Sweet et al. 2017: 2060 Intermediate-High = 60 cm; High = 77 cm; 2080 Intermediate-High = 1 m; High = 1.3 m).
This rise in sea level will be enough to cause flooding of deltas; changes in shorelines; complete submergence of islands, lowlands, and arable lands; as well as destruction of coastal ecosystems and displacement
of populations that today live near the coast. This, in turn, will produce waves of new refugees with social traumas and colossal losses of urban infrastructure, as well as an overload of territories, some of which
are already saturated due to human occupation. In this second scenario, a rise in sea level will directly reach almost 150 million people (Anthoff et al. 2006). Among the 20 most populous cities in the world, 13 are
sea or river ports in coastal areas. A study coordinated by Susan Hanson (2011) identified 136 port cities over one million people most exposed to the impacts of rising sea levels and climate extremes by the 2070s.
Of these 136 cities, 52 are in Asia, 17 in the USA, and 14 in South America.

“The Ocean Conference” promoted by the United Nations (2017) estimates that people currently living in coastal communities represent 37% of the global population. Immense human contingents and entire populations of
countless other species will be increasingly affected by more and more aggressive and frequent flooding, gradually being condemned to the condition of climate refugees. According to CoastalDEM, a new digital elevation model
produced by Climate Central (2019), land currently home to 300 million people will fall below the elevation of an average annual coastal flood by 2050. Scott Kulp and Benjamin Strauss (2019) estimate that “one billion people now occupy land less than 10 meters above current high tide lines, including 230 million below 1 meter.” Under high emissions, the authors add, “CoastalDEM indicates up to 630 million people live on land below projected annual flood levels for 2100.”
But let us focus on the catastrophic current situation. The future is now in South Florida, for instance, where some 2.4 million people are at risk of flooding from even a moderate hurricane-driven storm surge. The odds of a
catastrophic 100-year flood by 2030 are now 2.6 times higher than they would have been without global warming (Lemonick 2012). The same can be said of the 52 nations called SIDS (Small Island Developing States), where
environmental collapse is already an ongoing reality, as the oceans are about to wipe out these small paradises, which are home to a wealth of diverse cultures, to biodiversity, and to almost 1% of humanity. For terrestrial
species, including humans, living in the Pacific Islands made up of coral atolls, this collapse has a scheduled date, if the projections of Curt D. Storlazzi and colleagues (2018) are confirmed:

We show that, on the basis of current greenhouse gas emission rates, the nonlinear interactions between sea-level rise and wave dynamics over reefs will lead to the annual wave-driven overwash of most atoll
islands by the mid-twenty-first century. This annual flooding will result in the islands becoming uninhabitable because of frequent damage to infrastructure and the inability of their freshwater aquifers to recover
between overwash events.

8.3.8 Cyclones, Hurricanes, Typhoons, Tornadoes… and Nuclear Power Plants

Tropical cyclones—called hurricanes or typhoons depending on the basin—are thermodynamic phenomena classified into five categories on the Saffir-Simpson scale according to increasing wind speed, starting with the weakest, whose winds exceed 117 km/h. One of the conditions that induces these events is that the surface layers of the ocean, down to about 50 meters deep, reach temperatures above 26 °C. There is evidence that the number of Category 4 and 5 hurricanes on the
Saffir-Simpson scale has been rising. According to Jeffrey St. Clair (2019), in the last 169 years, only 35 Atlantic hurricanes have attained Category 5 status, and 5 of them occurred in the last 4 years: Matthew (2016), Irma and Maria
(2017), Michael (2018), and Dorian (2019). Humans are increasingly exposed to extreme flooding by tropical cyclones, and “there is also growing evidence for a future shift in the average global intensity of tropical cyclones towards
stronger storms” (Woodruff et al. 2013). NOAA’s Geophysical Fluid Dynamics Laboratory (GFDL) assessment, titled Global Warming and Hurricanes: An Overview of Current Research Results (2019), summarizes its main conclusions
for a 2 °C global warming scenario in three points:

(1) Tropical cyclone rainfall rates will likely increase “on the order of 10–15% for rainfall rates averaged within about 100 km of the storm.”

(2) Tropical cyclone intensities globally will likely increase on average by 1% to 10%. “This change would imply an even larger percentage increase in the destructive potential per storm, assuming no reduction in
storm size.”

(3) The global proportion of tropical cyclones that reach very intense levels (Categories 4 and 5) will likely increase over the twenty-first century.

The growing vulnerability of nuclear power plants is perhaps the most worrying consequence of these processes. Since they are cooled by water, their
reactors are always next to rivers and estuaries, or by the sea. Although it was caused by a tsunami, the 2011 Fukushima Daiichi disaster showed that the electrical installations indispensable for the functioning of the reactor cooling system are vulnerable to flooding, whatever its cause: hurricanes, torrential rains, dam failure, or sea level rise (Kopytko 2011). The vulnerability of nuclear plants tends to increase because their
design, in most cases dating from the 1970s to 1990s, calculates safety margins for such phenomena on the basis of historical
limits, which are now being exceeded due to climate change, in particular the intensification of hurricanes and rising sea levels.
Examples of increased risks abound. In 1999, the Blayais Nuclear Power Plant in the Gironde estuary in France was flooded by a tide and unprecedented winds, damaging two of its reactors. In June 2011, a flood in the Missouri River endangered the Fort Calhoun Nuclear Generating Station in Nebraska, and in October 2012 Hurricane Sandy seriously threatened the safety of the Salem and Oyster Creek Nuclear Plants in New Jersey.24 In England, “12 of Britain’s 19 civil nuclear sites are at risk of flooding and coastal erosion because of climate change,” 9 of which “have been
assessed […] as being vulnerable now” (Edwards 2012). In Brazil, there are growing risks of this phenomenon for the reactors in the Angra dos Reis power plants (Pereira 2012).

8.3.9 A Call to Arms

This is the title of the last chapter of Peter Wadhams’s book, A Farewell to Ice: A Report from the Arctic (2016). Its conclusion is one of brutal simplicity:

We have destroyed our planet’s life support system by mindless development and misuse of technology. A mindful development of technology, first for geoengineering, then for carbon removal, is now necessary to save us. It is the most serious and important activity in which the human race can now be involved, and it must begin immediately.

Wadhams does not ignore the costs, risks, and limitations of his proposal at the present stage of human technology, hence his appeal for a scientific agenda that would make geoengineering technically and financially viable, with risks that are less imponderable. Yes, it is now necessary to contemplate all technological alternatives that can alleviate the present and future impacts of the climate emergency. None of them, however, will be able to prevent a global collapse that is at once environmental and social. Technology, the economy, and universal carbon
taxation (proposed since the Toronto Conference on Climate Change in 1988!) are all important and necessary. But only politics can significantly mitigate this ongoing global collapse. This perception that politics is at the center of our survival possibilities as organized societies does not yet seem clear enough among economists and scientists. The paradigm that has enabled the immense development of our societies is based on an energy system built on burning fossil fuels and on an intoxicating, animal protein-based, globalized food system that destroys forests and the
foundations of life on the planet. This paradigm has exhausted itself. A paradigm exhausts itself when its harms far outweigh its benefits. This is precisely what is happening with increasing clarity, especially since the last quarter of the twentieth century. There are still those who think—especially among economists—that the benefits continue to outweigh the harms. But they ignore science, for science never ceases to warn us that the worst evils are yet to come and that they are already inevitable, even if the worst-case scenarios do not materialize.

The human species has, therefore, come to the most important crossroads in its history. If the decision on strategic investments in energy, food, and transportation systems is left in the hands of state–corporations, there will be no technology to save us. In this case, we will be condemned, within the horizon of current generations, to a warming of 2 °C, with increasing probabilities of warming equal to or above 5 °C by 2100, due to the action of positive climate feedbacks. A warming of 3 °C or 4 °C is already considered beyond adaptation and implies unspeakable suffering to
all, even the tiny minority who consider themselves invulnerable. An average global warming of 5 °C above the pre-industrial period implies extreme precariousness and, ultimately, the extinction of our species and of countless others.

This realization should not and cannot feed useless despair because another way is still possible. The way forward is one of full political mobilization to keep warming as low as is still possible. Maintaining global warming at the levels set by the IPCC and pursued by the Paris Agreement already is, in all likelihood, a socio-physical impossibility, as seen in the preceding chapter. But politics is not the art of the possible, as those who resign themselves to the status quo like to claim. To long for the impossible in order to broaden the horizon of the possible is the fundamental
teaching of political reason. Is maintaining global warming “well below” 2 °C—to remember the aim of the Paris Agreement—impossible? “Of course, it is impossible, but we have to try,” exhorts Piers Forster (Lawton 2018). And Graham Lawton adds, wisely: “Even if we go over 1.5°C, every bit of extra warming we shave off makes the world more liveable.” The whole political program of our society can now be summed up by this attempt.

[[REFERENCES OMITTED]]

[[CHAPTER 9 BEGINS]]
“The long-lasting debate involving population, development and environment appears to have become an insoluble equation.” Such is the conclusion of George Martine and José Eustáquio Diniz Alves (2019) about the apparently insurmountable difficulties in harmonizing the three terms of the demographic equation. That said, one can at least safely assert that the number of people currently inhabiting the planet is not, in and of itself, a fundamental stressor of ecosystems as an isolated factor. It is not possible to subscribe to the opinion of Sir David King, former science
adviser to the UK government, for whom “the massive growth in the human population through the twentieth century has had more impact on biodiversity than any other single factor” (Campbell et al. 2007). In reality, the cumulative model of contemporary societies—not population growth itself—“has had more impact on biodiversity than any other single factor.” The expansive, environmentally destructive, and socially exploitative dynamics of global capitalism, its energy system based on the burning of fossil fuels, and the new globalized food system centered on
consumption of animal protein would continue to drive humanity and biodiversity to collapse, even if the human population were reduced to less than half, that is, to its 1970 levels. Population increase is, undoubtedly, an aggravating factor, but it is not the engine of this dynamic of collapse. As George Martine states (2016):

Obviously the more people who are consuming, the faster the rate of degradation under the present system. Reducing population size is part of any long-term solution. Yet, it is disingenuous to cite population size and growth as the main culprit of environmental degradation or to suggest that family planning programs could provide a quick fix. (…) What really matters is who has access to “development”, such as we know it. Of the 7.3 billion people currently on Earth, only a third can be minimally construed as “middle class”
consumers and the remainder contribute marginally to insoluble global environmental threats.

There is a clear correspondence, for example, between environmental degradation and anthropogenic CO2 emissions. These emissions do not increase simply as a result of population growth, but as a result of a per capita increase in emissions. Per capita anthropogenic CO2 emissions did not cease to increase year after year in the period from 1950 to 2010. On average, each individual on the planet emitted in 2010 more than twice as much CO2 as an individual in 1950, according to the following progression in metric tons of carbon per capita: 1950, 0.64; 2001, 1.12; and 2010, 1.33 (Boden & Andres 2013). Moreover, the increase in per capita consumption should not obscure who is largely responsible for GHG emissions, the richest 10% of humanity, since these GHGs are emitted as a function of their consumption levels. Figure 9.1 shows how these emissions are distributed in the global income pyramid.

[[FIGURE 9.1 OMITTED]]

The richest 10% of humanity produce nearly half of anthropogenic CO2 emissions, and the richest 30%, which George Martine classifies as “middle class” in the aforementioned quote, produce almost 80% of them, while the
poorest half of the planet’s population produce only 10%. Yangyang Xu and Veerabhadran Ramanathan (2017) bring more data to Oxfam’s analysis. Focusing on the poorest three billion people, they find that:

Their contribution to CO2 pollution is roughly 5% compared with the 50% contribution by the wealthiest 1 billion. This bottom 3 billion population comprises mostly subsistent farmers, whose livelihood will be severely impacted, if not destroyed, with a one- to five-year megadrought, heat waves, or heavy floods.

Hannah Ritchie and Max Roser (2017, revised in 2019) also highlight this extreme inequality1:

The world’s poorest have contributed less than 1% of emissions. (…) When aggregated in terms of income, (…) the richest half (high and upper-middle income countries) emit 86% of global CO2 emissions. The bottom half (low and lower-middle income) only 14%. The very poorest countries (home to 9% of the global population) are responsible for just 0.5%. This provides a strong indication of the relative sensitivity of global emissions to income versus population. Even several billion additional people in low-income countries —
where fertility rates and population growth is already highest — would leave global emissions almost unchanged. 3 or 4 billion low income individuals would only account for a few percent of global CO2. At the other end of the distribution however, adding only one billion high income individuals would increase global emissions by almost one-third.

Furthermore, it is not overpopulation that is causing migratory waves from Central America toward the United States. Climate change, drought, an unsustainable agricultural model, and other environmental degradation factors are the real culprits for the growing agricultural crisis in these countries. All in all, the demographic factor plays only a secondary role in the causal link between globalized capitalism and the ongoing socio-environmental collapse. That said, population size is not an irrelevant factor, far from it. According to the 2019 UN World Population Prospects
(WPP 2019):

The global population is expected to reach 8.5 billion in 2030, 9.7 billion in 2050 and 10.9 billion in 2100, according to the medium-variant projection, which assumes a decline of fertility for countries where large families are still prevalent, a slight increase of fertility in several countries where women have fewer than two live births on average over a lifetime, and continued reductions in mortality at all ages.

We know, however, that there is inherent uncertainty in demographic projections. Thereby, it would suffice for the decline in current high-fertility countries to be a little slower than expected for demography to return to the forefront of the socio-environmental crises. Moreover, even if population growth remains within the bounds of the medium-variant projection, the WPP 2019 concludes that “with a certainty of 95%, the size of the global population will stand between 8.5 and 8.6 billion in 2030, between 9.4 and 10.1 billion in 2050, and between 9.4 and 12.7 billion in
2100.” The difference between the two ends of the projection for 2030 (100 million people) is not very relevant, since it only represents a difference of about 1.2% (between the highest and lowest predictions). However, by 2050, this uncertainty range jumps to 700 million people, a difference of about 7.5% relative to the lower value of 9.4 billion, and by 2100 there is a potential difference of 3.3 billion, that is, of about 35% between the lowest and highest projection. This is a wide gap. Just to measure its extent, imagine that by mid-2020 the planet was home to 11 billion
humans and not the current 7.8 billion. In spite of these uncertainties, what is proposed here can be summarized in two points:

(1) 2019 data from the WPP confirms trends that have been well-detected for over a decade:

The rate of population growth remains especially high in the group of 47 countries designated by the United Nations as least developed, including 32 countries in sub-Saharan Africa. With an average growth of 2.3% annually from 2015 to 2020, the total population of the least developed countries (LDCs) as a group is growing 2.5 times faster than the total population of the rest of the world.

According to 2019 data from the WPP, there is a population of 1.066 billion in sub-Saharan African countries, 1.37 billion in India, 271 million in Indonesia, and 204 million in Pakistan. Together, these countries are home to 2.911 billion people, or 39% of the planet’s current human population, and their demographic growth rate is still very high. By 2050, the population of these countries is expected to reach 2.118 billion, 1.643 billion, 327 million, and 403 million, respectively. By 2050, therefore, these countries will have nearly
4.5 billion people or about 47% of the total estimated mid-century global population of 9.7 billion. Moreover, in keeping with their current trajectory, India, Indonesia, and a number of African countries will tend to lead the global capitalist accumulation (if there is no global environmental collapse by 2030, which is increasingly likely) and provide commodities for that accumulation, with increasing impacts on global ecosystems and on biodiversity. Economic globalization, coupled with a four-decade decline in food self-sufficiency at
the country level in a growing number of countries (Schramski et al. 20192), as well as a diet increasingly based on meat, will lead many commodity-supplying countries, including Brazil and other Latin American and African countries, to destroy their forests— large carbon stockpiles and also rich in biodiversity—and to replace them with agriculture. It is not the population and local economy of tropical countries that drive the destruction of wildlife habitats, but the globalization of the food and energy system. Only a radical
change in the animal-based protein diet and a dismantling of economic globalization have the power today to reduce or even reverse environmental degradation.

(2) Furthermore, waste and overconsumption of humanity’s richest 30% must radically decrease so that the rising socioeconomic inequality can be reversed, safeguarding the ability of planetary ecosystems to support human population at current levels. Even if the most optimistic projections of rapid deceleration in population growth are confirmed, the planet’s richest 30% will always have the greatest environmental impact—in absolute terms and per capita (measured by indicators such as the ecological footprint)—should we
persist with the capitalist model of higher energy production and expansion of surplus and consumption. The capitalist supermarket cannot be universalized, even in an imaginary scenario of zero or negative demographic growth. As George Martine (2016) states, “under the current paradigm, it is simply absurd to imagine that the current living standards of the richer minority can be adopted by the entire world population — whether of 8 or 15 billion — without drastically overstepping planetary boundaries.”

The crux of the demographic problem is not just knowing how much the human population will increase by 2050 and 2100, but, above all, what the economic system’s impact on the biosphere and the climate system will be. And, not unlike the other socio-environmental crises, the magnitude of this impact will depend on societies’ capacity for democratic governance, whence the title of this chapter.

The Ehrlich Formula I = PAT

Paul and Anne Ehrlich (1974/1990, p. 58) conceptualized—correctly, according to our understanding—the demographic impact on the biosphere:

The impact of any human group on the environment can be usefully viewed as the product of three different factors. The first is the number of people. The second is some measure of the average person’s consumption of resources (which is also an index of affluence). Finally, the product of those two factors–the population and its per-capita consumption–is multiplied by an index of the environmental disruptiveness of the technologies that provide the goods consumed. (…) In short, Impact = Population x Affluence x Technology,
or I = PAT.

Paul Ehrlich and John Holdren (1971) wrote, always correctly in my view, that “the total negative impact” (I) of a given society on the environment “can be expressed, in the simplest terms, by the relation I = P × F, where P is the population, and F is a function which measures the per capita impact.” However, the authors overestimate the population factor. So, in the same article, they add: “The per capita consumption of energy and resources, and the associated per capita impact on the environment, are themselves functions of the population size. Our previous equation is
more accurately written I = P × F(P), displaying the fact that impact can increase faster than linearly with population.” This perception was understandable during the years of the publication of The Population Bomb (1968) and The Population Explosion (1974). In fact, during 1965–1970, the growth rate of the world’s population was increasing by 2.1%. In 2015–2020, it is growing by less than 1.1% per year. It is, of course, still too much, but it is estimated that this growth rate will slow down by the end of the century.
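
As a minimal numerical sketch of the I = PAT identity (and of why the P factor alone is not decisive), the snippet below compares hypothetical scenarios; the index values are illustrative assumptions, not measurements from this chapter or from the Ehrlichs.

```python
# Illustrative-only I = PAT comparison; all values are arbitrary index numbers.
def impact(population, affluence, technology):
    """Ehrlich identity: Impact = Population x Affluence x Technology."""
    return population * affluence * technology

base = impact(population=1.0, affluence=1.0, technology=1.0)

# Scenario A: population halved, but per capita consumption and the
# destructiveness of technology each grow by 60%.
scenario_a = impact(0.5, 1.6, 1.6)

# Scenario B: population unchanged, affluence and the T index each cut by 40%.
scenario_b = impact(1.0, 0.6, 0.6)

print(f"baseline   : {base:.2f}")
print(f"scenario A : {scenario_a:.2f}  (P halved, A and T up 60%)")
print(f"scenario B : {scenario_b:.2f}  (P constant, A and T down 40%)")
```

Under these assumed numbers, total impact still rises by 28% despite a halved population (scenario A) and falls by almost two-thirds with a constant population (scenario B), echoing the argument above that the A and T factors, not P alone, drive the trajectory.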

The concerns brought forth by these classics by Paul and Anne Ehrlich began to prove themselves outdated in the last two decades of the twentieth century. Today, more than ever, it is the energy and food systems inseparable from economic globalization that are the true drivers of deforestation, loss of wildlife habitats, ongoing collapse of biodiversity, and pollution of soil, air, and water by emissions of particulate matter, pesticides, fertilizers, plastic waste, and other pollutants, as discussed in Chap. 4.

If we return to the equation I = PAT (Impact = Population × Affluence × Technology), it is true that the stricto sensu demographic pressure on ecosystems is getting worse because in many populous countries, the increase in population (P Index) is decelerating too slowly. But, of extreme importance is the fact that the two other factors are definitely not diminishing but growing rapidly.

The second environmental impact factor (Affluence)—measured by GHG emissions, consumption of energy and natural resources, production of commodities, and generation of waste—is increasing at a rapid pace. In many cases, technology, the third factor, is becoming even more destructive, such as in the production of unconventional oil and the even more destructive regression to coal (Chaps. 5 and 6). Although there is scientific and technological knowledge to significantly lower the T index, the choices made by the corporate–state network, which retains control of
global investment flows, have not helped diminish this destructiveness. On the contrary, the observed trend, illustrated a thousand times in this book, is always the same: the less abundant natural resources become (fish schools, forests, soils, freshwater resources, liquid oil, and the potential for hydropower generation), the more invasive and destructive become the technologies used to obtain them.

9.1 Demographic Choices and Democracy: Reciprocal Conditioning

It is absolutely necessary to accelerate the demographic transition. But demographic transition is, above all, a function of democracy, without which societies will not be endowed with the five pillars of demographic rationality:

(1) A socioeconomic system that is environmentally friendly, understanding the economy as a subsystem of the biosphere

(2) Lower consumption, less waste, and less waste generation by the richest 30%, in order to reduce inequality in wealth and income

(3) Education for all, but especially for girls/women

(4) Female sexual freedom

(5) Secularism

Democracy is obviously the conditio sine qua non for the existence of these five pillars of demographic rationality. As is well known, items three to five are “spontaneous” promoters of a successful demographic transition. With regard to secularism, in particular, it must be stressed that, without democracy, societies will continue to suffer a blockade from the three great monotheistic religions on family planning, various contraceptive practices, and clinically assisted and state-guaranteed abortion. The belief in the existence of an immaterial entity, ontologically independent
of the body, which would a priori constitute the individual—nefesh or neshamah in Judaism, anima in Christianity, and nafs in Islam—leads monotheistic religions to postulate that the fetus is already endowed with a supposed “essence” in the intrauterine phase before the formation of its central nervous system, anatomical structures, and physiology. Abortion, therefore, would be prohibited according to these faiths. Religious beliefs became constant in human imagination, possibly from the moment when awareness of one’s own mortality became a central fact of our
ancestors’ awareness. The problem is not, of course, religiosity. The problem is the ability of religions to become a power system capable of obstructing protective legislation for women.

According to the Guttmacher Institute, pregnancy is often unwanted by women (Engelman 2013). Therefore, as long as women do not conquer, as they have done in some countries, the first and most elementary of democratic rights—the right over themselves, their bodies, their sexuality, and their procreation—and, in addition to this, as long as they do not have access to legal, medical, and financial means for family planning and abortion in the event of an unwanted pregnancy, all the UN’s calculations on probabilities will be simple statistical exercises on the evolution
of fertility rates, disregarding, at their own risk, a factor whose absence may distort all others: reproductive democracy. It should be feared, therefore, that Asia and Africa, continents that house 76% of the world’s population, and in which religious extremism, theocracies, and state religions are spreading, may not be able to evolve toward secularism. As a result, they will suffer demographic increases that are calamitous for themselves and for the whole world.

That said, not only Asia and Africa but also the Americas, and notably Brazil and the United States, are victims of a strong offensive against secularism. In the pulpits and in the Brazilian National Congress, the Catholic, Protestant, and neo-Pentecostal churches are united when it comes to barring the right to state-assisted abortion. The same is true in the United States where the freedom to have an abortion in the first 3 months of pregnancy, stemming from a landmark 1973 Supreme Court ruling (Roe v. Wade), is eroding, with brutal setbacks leading to the criminalization
of abortion in several states.

This is not to say that demography is a simple function of democracy, because the opposite is also true. The capacity for democratic governance also depends, to a large extent, on the pace and scale of population growth, since fertility rates above the replacement level hinder a minimum of political stability in the medium and long term, nullify government efforts for better education and sanitation, and leave the population at the mercy of religious obscurantism.
9.2 Beyond the Arithmetic Addition: Urbanization, Tourism, and Automobiles

Returning to the Ehrlich formula (I = PAT), the environmental impact of population growth is strongly conditioned by Affluence (A), that is, average per capita consumption, multiplied by the index of environmental destructiveness of the technologies (T) that provide energy, production assets, and consumer goods. Thus, for various reasons (GHGs, waste production, use or consumption of energy, water, soils, meat, minerals, wood, etc.), the environmental impact of an American or a European is, on average, obviously much greater than that of an African, Asian, or Latin American
who does not belong to the economic elite.

The association of the affluence index with the phenomenon of intense urbanization is a supplementary factor of anthropic pressure, since the urban footprint is larger than that of the population as a whole. According to the UN’s World Urbanization Prospects (The 2003 Revision), the urban population reached one billion in 1960, two billion in 1985, and three billion in 2002, and it should reach five billion in 2030. According to a 2014 review of the World Urbanization Prospects from the United Nations Population Division, the world’s urban population is expected to
exceed six billion by 2045. In many cases, the urbanization process is extreme, with the formation of gigantic urban and suburban patches that further enhance the environmental impact, especially in the new megacities in poor and infrastructure-deficient countries. In 1950, New York and Tokyo were the only cities with more than ten million inhabitants. In 1990, there were ten megacities with more than ten million inhabitants. In 2012, there were 23 megacities of this caliber, four of which were in China. In 2014, there were 28 megacities in the world, 16 of which were in
Asia, 4 in Latin America, 3 in Africa, 3 in Europe, and 2 in North America. By 2025, there will be 37 megacities with more than ten million inhabitants in the world, seven of them in China. By 2030, there will probably be 41 megacities with ten million inhabitants or more, with the 9 largest located in Asia and Africa (in descending order, Tokyo, Delhi, Shanghai, Mumbai, Beijing, Dhaka, Karachi, Cairo, and Lagos).

This process of mega-urbanization is spontaneous and seemingly inexorable within today’s state–corporations, committed to the dynamics of the global market and unable to carry out an agenda of urban planning and decentralization. This urbanization is sometimes even encouraged by governments. In China, for example, between 1982 and 2012, the urban population went from 200 million to over 700 million. In the next 15 years, another 300 million Chinese, equivalent to the US population, will migrate to the cities.

Far from pondering the negative effects of this process, the Chinese government plans to accelerate it by merging nine major cities in the Pearl River Delta in the South of the country into a single urban sprawl of 50 million. This urbanization was considered by the Chinese leaders and their 5-year plan (2011–2015) as the “essential engine” of economic growth. Thus, in 2010, there were 94 cities in China with over one million inhabitants. And according to Beijing’s plans, by 2025, there will be 143 cities of this scale. According to Peiyue Li, Hui Qian, and Jianhua Wu (2014), in
Lanzhou:

700 mountains are being levelled to create more than 250 km2 of flat land. (…) Land-creation projects are already causing air and water pollution, soil erosion and geological hazards such as subsidence. They destroy forests and farmlands and endanger wild animals and plants.

Now, simply because they will be concentrated predominantly in these consumer hotspots—cities of more than one million inhabitants or megacities of more than ten million—the additional two billion people who will be added around 2043, or even earlier, will tend to produce more heat radiation, more air pollution, more municipal solid waste, more industrial waste, more CO2, more methane, and more tropospheric ozone and will consume more energy and natural resources per capita than the two billion people that were added to humanity between 1987 and 2012.

9.2.1 Tourism

The tourism industry, among the largest in the world, promotes increasing pressure on the environment, as highlighted by UNEP: deforestation and degradation of forests and soils, loss of natural habitats, increased pressure on native species and increase in invasive species, increased forest vulnerability to fire, water scarcity, more land and sea pollution, and increased greenhouse gas emissions from more travel. The aviation sector accounts for more than 1 GtCO2, or about 2% of global emissions per year, and tourism now accounts for 60% of air transport (UNEP).
Emissions from this sector are expected to triple by 2050, or more than double if planes become more fuel-efficient. According to the World Tourism Organization (UNWTO), in 1995, 540 million tourists traveled outside their countries. In 2010, that number reached 940 million. In 2018, international tourist arrivals worldwide hit 1.4 billion, a mark reached 2 years ahead of UNWTO’s long-term forecast issued in 2010.3 In 2000, the number of Chinese tourists traveling the world was ten million, and in 2013, it rose to 98.19 million. In the first 11 months of 2014
alone, this number reached 100 million. The China National Tourism Administration informed that “Chinese tourists traveled overseas on 131 million occasions in 2017, an increase of 7% from the previous year,”4 and the China Daily (5/VIII/2019) reported that Chinese tourists made 149 million overseas trips in 2018.

9.2.2 Automotive Vehicles

A third classic example of how the factors “Affluence” and “Technological Destructiveness” enhance demographic impact is the amount and the per capita increase of oil-powered vehicles in the world. Table 9.1 gives a picture of this evolution:

In 40 years (1970–2010), the number of vehicles in operation (cars and light and heavy commercial vehicles) more than quadrupled, while the population did not double. In the next 10 years (2021–2030), it is estimated that this fleet will reach two billion vehicles, for a population that is about 20% greater than in 2010. The auto industry resumed its expansion globally since 2009 and, in Europe, since 2014. Global sales of the automotive industry have maintained a rate of between 77 and 79 million vehicles per year since 2016, as shown in Fig. 9.2:

[[TABLE 9.1 OMITTED]]

[[FIGURE 9.2 OMITTED]]

[[TABLE 9.2 OMITTED]]


Table 9.2 shows the number of motor vehicles by country between 2015 and 2019.

Furthermore, the consumption of oil for transport should continue to increase in the foreseeable future. According to the EIA’s International Energy Outlook 2017:

Because of the increase in electric vehicle penetration, the share of petroleum-based fuel for light-duty vehicle use decreases from 98% in 2015, to 90% by 2040. But, liquid fuels consumption is still expected to increase by almost 20% between 2015 and 2040 as more petroleum-based cars are still being added to the stock and other uses of liquid fuels continue to grow.

The projected increase in this fleet obviously depends on the supply elasticity of oil, gas, ethanol, and batteries for electric vehicles. According to Daniel Sperling (2010), maintaining current fuel consumption conditions per kilometer driven, a fleet of two billion vehicles—which should be reached by 2030—would consume 120 million barrels of oil per day, almost 20% more than today’s total daily oil consumption.
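
As a quick consistency check on the figures quoted from Sperling (2010), and assuming (an assumption of this sketch, not a claim of the source) that the 120 million barrels per day refers to the entire two-billion-vehicle fleet, the implied average consumption per vehicle can be computed as follows:

```python
# Back-of-envelope arithmetic on the fleet and oil-demand figures quoted above.
LITERS_PER_BARREL = 158.987          # volume of a standard oil barrel

fleet = 2_000_000_000                # vehicles projected for around 2030
oil_barrels_per_day = 120_000_000    # barrels per day (figure quoted above)

barrels_per_vehicle_day = oil_barrels_per_day / fleet
liters_per_vehicle_day = barrels_per_vehicle_day * LITERS_PER_BARREL
liters_per_vehicle_year = liters_per_vehicle_day * 365

print(f"{barrels_per_vehicle_day:.3f} barrels per vehicle per day")
print(f"about {liters_per_vehicle_day:.1f} liters per vehicle per day")
print(f"about {liters_per_vehicle_year:,.0f} liters per vehicle per year")
```

The result, roughly 9.5 liters per vehicle per day, is an average across passenger cars and light and heavy commercial vehicles; what matters for the argument is the order of magnitude compared with today’s total daily oil consumption of about 100 million barrels.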

9.3 The Destructiveness of Technology (the T Index)

The transport sector, in general, includes products that are derived from very destructive technology, whether it be the materials used, the energy consumed, or its direct contribution to global CO2 emissions. According to the Transport and Climate Change Global Status Report (2018), produced by the Partnership on Sustainable, Low Carbon Transport (SLoCaT), “transport sector CO2 direct emissions increased 29% (from 5.8 Gt to 7.5 Gt) between 2000 and 2016, at which point transport produced about 23% of global energy-related CO2 emissions, and (as of 2014) 14% of
global GHG emissions.” These numbers will increase not only because of the expected increase in fossil fuel-powered vehicles over the next decade but also because of the higher proportion of fuel (particularly in the United States) coming from unconventional oil, whose extraction emits more CO2. According to data from SLoCaT 2018, GHG emissions from rail transport accounted for only 3% of the total emissions in 2015, while emissions from individual automobiles and long-distance transport for tourism and trade accounted for 88% of global CO2 emissions in the
transport sector, as shown in Table 9.3.

The decarbonization of technologies in the transportation, cement, and steel industries is extremely difficult if we maintain the paradigms of economic growth and of today’s globalized economy. As Steven Davis et al. (2018) affirm,

“these difficult-to-decarbonize energy services include aviation, long-distance transport, and shipping; production of carbon-intensive structural materials such as steel and cement; and provision of a reliable electricity supply that meets varying demand.” According to estimates from the OECD International Transport Forum, by 2050, CO2 emissions from oil-powered vehicles can multiply 2.5–3 times relative to 2000 (with likely improvements in efficiency during that period already included in this estimate).

[[TABLE 9.3 OMITTED]]

This set of findings and projections brings us back to the same juncture that at once motivates and runs through this book: either societies advance rapidly to overcome capitalism—and by overcoming capitalism we mean, and repeat exhaustively, to radically redefine man’s position in the biosphere and deepen democracy (starting with reproductive democracy)—or the factors that make up the demographic impact on natural resources and ecosystems (I = PAT) will, together, overcome capitalism in their own way, that is, through social and environmental collapse.

9.3.1 A Fragile Premise

The realization of any of the demographic impact scenarios discussed above depends, of course, on an implicit assumption that “the other variables” remain relatively unchanged. The cumulative apparatus of global capitalism is already leading to a collapse of water resources, soils, biodiversity, and ecosystems in general, as well as ever more severe changes in climate coordinates. Occurring in synergy, such phenomena will imply more or less brutal demographic contractions, squarely contradicting UN Population Division projections which are based mainly on variations in
fertility rates. This methodological condition of a ceteris paribus—that is, the condition that ecosystems remain functional and societies relatively organized—is increasingly questionable.

[[REFERENCES OMITTED]]

[[CHAPTER 10 BEGINS]]
Numerous scholars from various fields of science today are concerned with the ongoing collapse of biodiversity. The first Global Assessment of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES),1 published in 2019, estimates that:

The rate of global change in nature during the past 50 years is unprecedented in human history (…) Human actions threaten more species with global extinction now than ever before. (…) An average of around 25% of species in assessed animal and plant groups are threatened, suggesting that around 1 million species already face extinction, many within decades, unless action is taken to reduce the intensity of drivers of biodiversity loss.

Societies’ very survival depends on their ability to avert the impending threat of biological annihilation via the ongoing sixth mass extinction of species, triggered or intensified by the globalization of capitalism over the last 50 years. Sir Robert Watson, Chair of IPBES (2016), doesn’t mince his words to say what is at stake: “We are eroding the very foundations of our economies, livelihoods, food security, health and quality of life worldwide.” There is no hyperbole in the claim that the collapse of biodiversity and the acceleration of global warming, two processes that interact in synergy, entail an increasing risk of extinction for Homo sapiens. As pointed out by Cristiana Paşca Palmer, Executive Secretary of the Convention on Biological Diversity (2018), “I hope we aren’t the first species to document our own extinction.” Julia Marton-Lefèvre, former Director General of the International Union for Conservation of Nature (IUCN), reiterates this warning for the umpteenth time in a statement to delegations meeting at Rio+20 in 2012:

Sustainability is a matter of life and death for people on the planet. A sustainable future cannot be achieved without conserving biological diversity—animal and plant species, their habitats and their genes—not only for nature itself, but also for all 7 billion people who depend on it.

10.1 Defaunation and Biological Annihilation

Rodolfo Dirzo, Mauro Galetti, Ben Collen, and other co-authors of a review titled “Defaunation in the Anthropocene” (2014) conceptualize one of the central aspects of the current sixth mass extinction of species: the term defaunation is used to denote the loss of both species and populations of wildlife, as well as local declines in abundance of individuals. The defaunation process is in full swing:

In the past 500 years, humans have triggered a wave of extinction, threat, and local population declines that may be comparable in both rate and magnitude with the five previous mass extinctions of Earth’s history. Similar to other mass extinction events, the effects of this “sixth extinction wave” extend across taxonomic groups, but they are also selective, with some taxonomic groups and regions being particularly affected. (…) So profound is this problem that we have applied the term “defaunation” to describe it.

In a 2017 article, Gerardo Ceballos, Paul Ehrlich, and, again, Rodolfo Dirzo warn about the false impression that the threat of biological annihilation is not imminent:

The strong focus on species extinctions, a critical aspect of the contemporary pulse of biological
extinction, leads to a common misimpression that Earth’s biota is not immediately threatened,
just slowly entering an episode of major biodiversity loss. This view overlooks the current trends
of population declines and extinctions. Using a sample of 27,600 terrestrial vertebrate species, and a more detailed analysis of 177 mammal species, we show the
extremely high degree of population decay in vertebrates, even in common “species of low concern.” Dwindling population sizes and range shrinkages amount to a massive anthropogenic erosion of biodiversity
and of the ecosystem services essential to civilization. This “biological annihilation” underlines the seriousness for humanity of Earth’s ongoing sixth mass extinction event.

10.2 The 1992 Convention on Biological Biodiversity

This process of mass extinction of plant and animal species has accelerated despite diplomatic and other efforts . In 1992, 1 day after the United Nations
Framework Convention on Climate Change (UNFCCC) was signed, 194 states2 signed the Convention on Biological Diversity (CBD). In this robust 30-page document, the signatories solemnly affirm “that the conservation of
biological diversity is a common concern of humankind,” declare to be “concerned that biological diversity is being significantly reduced by certain human activities,” and claim they are “determined to conserve and sustainably use
biological diversity for the benefit of present and future generations.” In contrast to these moving intentions, this is the reality: while opening the third edition of the UN Global Biodiversity Outlook (GBO-3) in 2010, Ban Ki-moon,
then UN Secretary-General, highlighted the accelerating decline in biodiversity in the first decade of the twenty-first century—“the principal pressures leading to biodiversity loss are not just constant but are, in some cases,
intensifying.” This third report was presented at the Nagoya meeting (COP10) of the Convention on Biological Diversity. In this meeting, nearly 200 parties subscribed to the Strategic Plan for Biodiversity (the so-called Aichi
Biodiversity Targets), establishing 20 Targets (subdivided into 56 goals, grouped under 5 strategic goals) for the conservation of biodiversity between 2011 and 2020, with successive stages until 2050. After the first 4 years of this
first decade, the fourth edition of the UN Global Biodiversity Outlook (GBO-4), presented in October 2014 at CBD COP 12 in South Korea, admits that “pressures on biodiversity will continue to increase at least until 2020, and that
the status of biodiversity will continue to decline.”3 With regard to Target 12 (Reducing risk of extinction), the report concludes that:

Multiple lines of evidence give high confidence that based on our current trajectory, this target would not be met by 2020, as the trend towards greater extinction risk for several taxonomic groups has not
decelerated since 2010. Despite individual success stories, the average risk of extinction for birds, mammals, amphibians and corals shows no sign of decreasing.

Using 55 indicator datasets, Derek Tittensor and 50 colleagues (2014) reiterate this same assessment of progress on the Aichi Targets: “On current trajectories, results suggest that despite accelerating policy and management
responses to the biodiversity crisis, the impacts of these efforts are unlikely to be reflected in improved trends in the state of biodiversity by 2020.” For Richard Gregory, one of the authors of this report, “world leaders are
currently grappling with many crises affecting our future. But this study shows there is a collective failure to address the loss of biodiversity, which is arguably one of the greatest crises facing humanity” (Vaughan 2014). In
December 2016, the 13th meeting of the CBD in Cancun, Mexico, revealed the miserable failure of the plans established by signatory countries to achieve the Aichi Targets by 2020. No less than 90% of these targets, including
halting habitat loss, saving endangered species, reducing pollution, and making fishing and agriculture more sustainable, were then expected to be unmet by 2020 (Coghlan 2018).

Having failed to honor their Aichi commitments, countries will meet in October 2020, in Kunming, China, for the 15th Conference of the Parties (CBD COP15). A new landmark global pact, titled 2050 CBD Vision “Living in harmony
with nature”, is the expected outcome of this meeting. Unfortunately, according to Friends of the Earth, the draft that will be discussed in China is not ambitious enough. It fails to recognize the negative impacts of monoculture agribusiness and pesticides, and it does not call for divestment from other destructive projects:4

Corporations have a vested interest in avoiding strict regulation and any attempts to scale down their profit-driven activities. As long as they have a seat at the negotiation table, no measures will be taken to live
within planetary boundaries. Yet instead of seeking to reduce corporate conflicts of interest—a controversial issue in the CBD—the Draft repeatedly promotes closer collaboration with the private sector, and
states that increased production will be necessary.

10.3 The Biodiversity of the Holocene

There is no unanimity on the number of species that make up today’s biodiversity. In 1988, Robert M. May suggested—on the assumption that the number of species increases in inverse proportion to their size—that there
were an estimated 10 million to 50 million terrestrial species. He stressed, however, the uncertainties about this, especially from a theoretical point of view. In 1996, Richard Leakey and Roger Lewin considered that the planet
hosted perhaps 50 million species. For James P. Collins and Martha L. Crump (2009), the number of species of organisms living on Earth today would range from 10 million to 100 million, and Richard Monastersky (2014) accepts
estimates—that include animal, plant, and fungal species— oscillating between 2 million and over 50 million.

When it comes to the domain of microorganisms, which control major nutrient cycles and directly influence plant, animal, and human health, we enter further into terra incognita. As Diana R. Nemergut and co-authors (2010) stated in a paper on bacterial biogeography, “we know relatively little about the forces shaping their large-scale ecological ranges.” Thus, if global biodiversity quantification estimates include the domains of bacteria and archaea, our planet could contain nearly one trillion species (Locey and Lennon 2016). That said, at least for eukaryotic species (organisms whose cells have a nucleus enclosed within membranes), most estimates have shrunk to less than 10 million species. According to Nigel E. Stork et al. (2015),
there are about 1.5 million beetle species (estimates ranging from 0.9 to 2.1 million), 5.5 million insect species (2.6–7.8 million), and 6.8 million species of terrestrial arthropods (range 5.9–7.8 million), “which suggest that estimates
for the world’s insects and their relatives are narrowing considerably.” Camilo Mora and his team (2011) have proposed a new way of estimating the number of species. According to them:

the higher taxonomic classification of species (i.e., the assignment of species to phylum, class, order, family, and genus) follows a consistent and predictable pattern from which the total number of species in a taxonomic group can be estimated. This approach was validated against well-known taxa, and when applied to all domains of life, it predicts ∼8.7 million (±1.3 million Standard Error) eukaryotic species globally, of which ∼2.2 million (±0.18 million SE) are marine.

This study elicited diverse reactions, including criticism that its methodology could not account for global biodiversity as a whole. But Sina Adl, one of the co-authors of this work, is the first to admit that “the estimate in the manuscript is a gross under-estimate” (as cited in Mathiesen 2013). In any case, a consensus seems to be emerging regarding the magnitude of global biodiversity, at least in regard to eukaryotic species. Thus, Rodolfo Dirzo, Mauro Galetti, and Ben Collen (2014) implicitly consolidate this proposal by stating: “of a conservatively estimated 5
million to 9 million animal species on the planet, we are likely losing ~11,000 to 58,000 species annually.” IPBES also works with this scale. As stated by Andy Purvis (2019), “in the [Global] Assessment, we have used a recent mid-low estimate of 8.1 million animal and plant species, of which an estimated 5.5 million are insects (i.e., 75%) and 2.6 million are not.”

10.4 The Sixth Extinction

Species extinction is an inherent fact of evolution; “some thirty billion species are estimated to have lived since multicellular creatures first evolved, in the Cambrian explosion” (Leakey and Lewin 1996), which gives us an idea of their transience, since the number of multicellular species existing today does not go beyond the millions, as we have seen. Therefore, at least 99% of all species that have ever existed are extinct. This is mainly due to the large mass extinctions of species that abruptly disrupted the increasing trend of biodiversity. One can speak of mass extinctions
when the Earth loses more than three-quarters of its species in a geologically short interval (Barnosky et al. 2011). Five major extinctions fall into this category: (1) at the end of the Ordovician period (440 million years ago), (2) in the Late Devonian Period (365 million years ago), (3) in the Permian–Triassic Period (251 million years ago), (4) at the end of the Triassic Period (210 million years ago), and (5) at the end of the Cretaceous Period (65 million years ago). Jun-xuan Fan and colleagues (2010) showed no evidence of a discrete mass extinction near the end of the Devonian period, the so-called Late Devonian Frasnian–Famennian extinction. There was instead “a protracted diversity decline that began in the Eifelian (392.72 Ma) and continued to ~368.24 Ma with no subsequent major recovery.” The fifth extinction (or the fourth one, according to this study) opened the Cenozoic Era, generally called the Age of Mammals, but one that might more appropriately be called the Age of Arthropods, according to Richard Leakey and Roger Lewin (1996), since arthropods are the largest phylum in existence, encompassing over 80% of known
animal species.

In the late 1970s, a team of researchers led by Luis Alvarez of the University of California advanced the now widely accepted hypothesis that the impact of a large asteroid on Mexico’s Yucatan Peninsula (or of an asteroid rain) unleashed or dealt the final blow to the chain of events known as the fifth extinction. The hypothesis that external factors also caused other extinctions has gained traction since 1984, when David Raup and Joseph John Sepkoski from the University of Chicago propounded the occurrence of about 12 extinction events (including the five largest), with a
mean interval between events of 26 million years. The statistical analysis advanced by these scholars leads to the conclusion that 60% of all extinctions throughout the Phanerozoic Eon were at least triggered (if not caused) by the impact of asteroids or comets, which would act as first strikes, making the biotas vulnerable to other debilitating and destructive processes (Leakey and Lewin 1996). Whatever may be the current number of existing eukaryotic species—around eight million now being the most accepted number—since the 1980s, many scientists have been
warning that the Earth is in the midst of a massive biodiversity extinction crisis caused by human activities, a process called the sixth extinction. This threatens to be as or even more annihilating than the previous five, given three characteristics that are peculiar to it:

(1) It is not triggered by an exceptional, external event, but by a process internal to the biosphere—the increasing destructiveness of our globalized economic activity—a conscious, announced, increasingly well-known, and hitherto unstoppable process. The dynamics of the sixth extinction is not like the irradiation of waves caused by the impact of a stone on the surface of water, which tends to wane as its range extends into space and time; it is a process that is intensified in direct relation to the expansion of the commodity
market, especially after 1950 and in the ensuing wave of capitalism’s extreme globalization, causing pollution, degradation, and destruction of non-anthropized habitats. Figure 10.1 clearly shows the brutal acceleration of species extinctions from the 1980s onward, closely related to increase in human population, following the degradation of natural habitats by the expansion of global agribusiness and other economic activities.

[[FIGURE 10.1 OMITTED]]

(2) The second characteristic is that, far from implying the dominance of one taxonomic group over the others, the sixth extinction endangers the allegedly “dominant” species by destroying the web of biological sustenance that allows it to exist (Jones 2011). We will return to this point in the last chapter as we analyze the illusions of anthropocentrism, but it is clear that the web of life that sustains our global society and, ultimately, our species is becoming weaker and weaker, as a multitude of scientists and diplomats never cease to repeat. In 2011, prefacing the Global Biodiversity Outlook 3 report, Achim Steiner, former Executive Director of the UNEP (2006–2016), warned of this compelling fact: “The arrogance of humanity is that somehow we imagine we can get by without biodiversity or that it is somehow peripheral: the truth is we need it more than ever on a planet of six billion heading to over nine billion people by 2050.”

(3) The third characteristic of the sixth extinction is its overwhelming speed. The current rate of extinction is speeding up. As pointed out by Andy Purvis (2019), “while it
is true that the very highest rate of confirmed extinctions was in the late 19th century, as flightless island birds fell prey to cats and rats introduced by Europeans, the rate was as high again by the end of the 20th
century.” And over the past two decades, it has accelerated even further. Speed is perhaps the most important feature of the sixth extinction because this almost sudden characteristic suppresses a crucial
evolutionary variable: the time it takes for species to adapt to and survive environmental changes. Based on extrapolations from a sample of 200 snail species, Claire Régnier and colleagues (2015) showed that we
may have already lost 7% of the previously described nonmarine animal species. In other words, 130,000 of these nonmarine species may have been lost forever since their taxonomic record (see also Pearce
2015). Unlike the previous ones, the sixth extinction is not measurable on a geological scale, but on a historical scale, and the unit of time by which this scale is measured is shortening. In 1900, it occurred on the
scale of centuries. Fifty years ago, the most appropriate observation scale would be the decade. Today, the unit of measurement for the advancement of the sixth extinction is the year or even the day. In June 2010, UNEP’s The State of the Planet’s Biodiversity document estimated that 150 to 200 species become extinct every 24 hours. This speed of the sixth extinction—I must repeat—cannot be compared to the previous five mass extinctions. More than 20 years ago, Peter M. Vitousek et al. (1997), reporting on then-recent calculations, stated that “rates of species extinction are now on the order of 100 to 1000 times those before humanity’s dominance of Earth.” According to a global survey of plant extinctions, “since 1900, nearly 3 species of seed-bearing plants have disappeared
per year, which is up to 500 times faster than they would naturally” (Ledford 2019). In 2005, the Ecosystems and Human Well-being: Synthesis (2005), a report established by the Millennium Ecosystem
Assessment, stated:

The rate of known extinctions of species in the past century is roughly 50–500 times greater than the extinction rate calculated from the fossil record of 0.1–1 extinctions per 1,000 species per
1,000 years. The rate is up to 1,000 times higher than the background extinction rates if possibly extinct species are included.
And the same document estimates that “projected future extinction rate is more than ten times higher than current rate.” In line with this projection, Chris Thomas et al. (2004) “predict, on the basis of mid-range climate-warming
scenarios for 2050, that 15–37% of species in our sample of regions and taxa will be committed to extinction.”
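
To make these orders of magnitude concrete, the background rate quoted by the Millennium Ecosystem Assessment can be converted into extinctions per year (a back-of-the-envelope sketch, not a calculation made by the MEA itself; it assumes the mid-low pool of roughly 8 million eukaryotic species cited in Sect. 10.3):

\[ \frac{0.1\text{–}1 \text{ extinctions}}{1{,}000 \text{ species} \times 1{,}000 \text{ years}} = 0.1\text{–}1 \text{ extinctions per million species-years (E/MSY)} \]

\[ (0.1\text{–}1 \text{ E/MSY}) \times 8 \times 10^{6} \text{ species} \approx 1\text{–}8 \text{ extinctions per year as background} \]

\[ 100\text{–}1{,}000 \times \text{background} \approx 10^{2}\text{–}10^{4} \text{ extinctions per year} \]

These figures are only meant to show how the per-thousand-species rates quoted above translate into annual species counts.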

10.5 The IUCN Red List of Threatened Species

The number of species threatened with extinction has grown rapidly over the past 50 years, as the International Union for Conservation of Nature
(IUCN) records show. Today, this assessment divides species into nine categories: Not Evaluated, Data Deficient, Least Concern, Near Threatened, Vulnerable, Endangered, Critically Endangered, Extinct in the Wild, and Extinct. In
1964, the first IUCN Red List of Threatened Species was published; updated periodically, it lists the species evaluated as Critically Endangered, Endangered, or Vulnerable.5 Table 10.1 gives us an idea of the speed of the process of
species extinction that is underway.

To date, more than 105,700 species have been assessed for the IUCN Red List, and more than 28,000 species are threatened with extinction (27%). Between 2000 and 2013, with the increase in the number of species evaluated
having quadrupled, the number of threatened species nearly doubled, from just over 10,000 in 2000 to 21,286 in 2013, as shown in Fig. 10.2.

Table 10.2 gives a picture of the evolution of the relationship between the evaluated species and those threatened with extinction over the last 10 years.

In the 2015 assessment, 14 species were shifted from the “Endangered” category to the “Critically Endangered (Possibly Extinct)” category.

[[TABLE 10.1 OMITTED]]

[[FIGURE 10.2 OMITTED]]

[[TABLE 10.2 OMITTED]]


10.6 The Living Planet Index (LPI) and the Decline in Terrestrial Vertebrate Populations

In the important 2017 study, quoted at the beginning of this chapter, Gerardo Ceballos, Paul Ehrlich, and Rodolfo Dirzo emphasize the global extent and speed of the recent decline in vertebrates:

Considering all land vertebrates, our spatially explicit analyses indicate a massive pulse of population losses, with a global epidemic of species declines. Those analyses support the view that the decay of vertebrate animal life is widespread geographically, crosses phylogenetic lineages, and involves species ranging in abundance from common to rare. (…) In the last few decades, habitat loss, overexploitation, invasive organisms, pollution, toxification, and more recently climate disruption, as well as the interactions among these
factors, have led to the catastrophic declines in both the numbers and sizes of populations of both common and rare vertebrate species. For example, several species of mammals that were relatively safe one or two decades ago are now endangered.

In this paper, the authors’ objective differs from that of the IUCN, which is mainly concerned with risks of extinction and, therefore, with species whose populations are already scarce or rare and that have less territorial distribution. Unlike the IUCN, the three authors cited here also take into account the decline in “common” or abundant species. Thus, they propose a picture of the present process of “biological annihilation” of terrestrial vertebrates, perhaps even more dramatic than the one proposed by the IUCN’s Red List. Starting from the IUCN assessments, they divide
this picture into two large groups. On one side, there are the species placed by the IUCN under the category “Least Concern” or “Data Deficient.” On the other side, there are the “Threatened Species,” according to their various threat levels (“Critically Endangered,” “Endangered,” “Vulnerable,” and “Near Threatened”). The result is captured in Fig. 10.3.

[[FIGURE 10.3 OMITTED]]


The first column (Land vertebrates), which summarizes the four successive ones (mammals, birds, reptiles, and amphibians), shows that populations of about two thirds of terrestrial vertebrates are in decline; “this figure emphasizes,” the authors conclude, “that even species that have not yet been classified as endangered (roughly 30% in the case of all vertebrates) are declining. This situation is exacerbated in the case of birds, for which close to 55% of the decreasing species are still classified as ‘low concern.’”

This evaluation converges with the Living Planet Report 2018 (twelfth edition) of the Living Planet Index (WWF/ZSL), which measured the state of 16,704 populations of 4005 species. The results are frightening; on average, vertebrate animal populations in 2014 were well under half their 1970 levels. Between 1970 and 2014, 60% of worldwide vertebrate animal populations had been wiped out. In freshwater habitats, populations have collapsed by 83%, and in South and Central America, the worst affected region, there has been an 89% total drop in vertebrate animal
populations. The key drivers of biodiversity decline, and the primary threats to these populations remain overexploitation (37%), habitat degradation (31.4%), and habitat loss (13.4%). As stressed by the authors of the Living Planet Report 2018, “of all the plant, amphibian, reptile, bird and mammal species that have gone extinct since AD 1500, 75% were harmed by overexploitation or agricultural activity or both.”
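
Treating the Living Planet Index decline as if it were a constant annual rate gives a sense of its pace (a simplified reading; the LPI averages population trends rather than counting individuals, and the index itself is not computed this way):

\[ 1 - (1 - 0.60)^{1/44} \approx 0.021, \]

that is, an average decline on the order of 2% per year between 1970 and 2014, compounding to the reported 60% loss.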

10.7 The Two Extinction Pathways

Global capitalism tears the web of life in two complementary ways: as a direct and immediate consequence of its activities—legal or illegal—and as a reflexive and systemic mode of impact on habitats. In deforestation, timber trade, hunting, fishing, and trafficking of wild species, especially in the tropical world and in particular in Brazil, legal activity which is profoundly destructive conceals illegality and is inextricably intertwined with it. In June 2014, a joint report from UNEP and Interpol looked at illegal wildlife trafficking, assessing the magnitude of the monetary values
involved (Nellemann 2009):

In the international community, there is now growing recognition that the issue of the illegal wildlife trade has reached significant global proportions. Illegal wildlife trade and environmental crime involve a wide range of flora and fauna across all continents, estimated to be worth USD 70–213 billion annually.

In Colombia, wildlife trafficking financed the FARC until the so-called pacification; in Uganda, it financed the Lord’s Resistance Army (LRA); in Somalia, Al-Shabaab; and in Darfur, the Janjaweed. The same symbiosis between trafficking and militias is established in the Congo, the Central African Republic, Mali, and Niger. The Russian mafia has long profited from the trafficking of sturgeons and caviar. According to Interpol’s David Higgins, environmental crimes of this nature have increased by a factor of five in the past 20 years (Barroux 2014; Baudet and Michel 2015).
According to the Brazilian I Relatório Nacional Sobre o Tráfico de Animais Silvestres [1st National Report on the Trafficking of Wild Animals] (2001), there are three types of illegal wildlife trade:

(a) Animals for zoos and private collectors. This type of wildlife trafficking prioritizes the most endangered species, also because the rarer the animal, the greater its demand and, thus, the higher its market value.

(b) Biopiracy for scientific, pharmaceutical, and industrial use. This type of trafficking focuses on species used in animal experiments and those that supply chemicals. It involves high monetary values, mostly coming from the pharmaceutical industry. In 2001, the value of one gram of poison from a brown spider (Loxosceles sp.) and from a coral snake (Micrurus frontalis), for example, could be as high as US$ 24,000 and US$ 30,000, respectively.

(c) Animals for pet shops. According to the report, this is the process that most encourages wildlife trafficking, at least in Brazil.

This report also notes that the major wildlife-exporting countries include Brazil, Peru, Argentina, Guyana, Venezuela, Paraguay, Bolivia, Colombia, South Africa, Zaire, Tanzania, Kenya, Senegal, Cameroon, Madagascar, India, Vietnam, Malaysia, Indonesia, China, and Russia. The main importing countries are the USA, Germany, Holland, Belgium, France, England, Switzerland, Greece, Bulgaria, Saudi Arabia, and Japan. According to the Animal Rights News Agency (ANDA), Brazil accounts for 15% of wildlife trafficking worldwide.6
According to the I Relatório Nacional Sobre Gestão e Uso Sustentável da Fauna Silvestre [1st National Report on Use and Management of Wildlife in Brazil] (2016) by Renctas, defaunation rates in Brazil are colossal:

At least 60 million vertebrates (mainly birds, mammals, and some reptiles) are hunted illegally each year in the Brazilian Amazon. There are no data for other regions of Brazil but poaching still occurs in all ecosystems. (…) It is estimated that Brazil illegally exports 1.5 billion wild animals per year, an amount that globally reaches 10 billion reais [US$ 2 billion], losing in revenue only to drug and arms trafficking.

Like all other types of trafficking—of drugs, timber, electronic waste, weapons, and people (prostitution, organs, and tissues)—this one is also exceptionally profitable. The governments of exporting and importing countries are complicit in this ecocide by not equipping their inspecting agencies with budgets that are consistent with their surveillance and repression duties. These agencies remain vulnerable to corruption and have been unable to thwart gangs and deter criminals.

10.8 The International Financial System

The international financial system is a key element in trafficking. Banks not only profit from crime, but—through a tangle of mechanisms that prevent the tracking of wealth from trafficking and that legalize this wealth—they make the line between legal and illegal economies invisible. The gains from wildlife trafficking end up sustaining the financial system through a complex network of resource transfusions among the various mafias, whether they are involved in environmental crimes or other types of crimes. Debbie Banks et al. (2008) are authors of a study by the
London-based Environmental Investigation Agency (EIA) on money laundering in illegal timber trafficking in Papua. The study reports that:

An expert in forging shipping documents working from Singapore boasted to EIA investigators that he was “timber mafia” and that the trade was “better than drug smuggling.” The vast profits from this illegal trade accrued in bank accounts in Singapore and Hong Kong.

In one of the cases investigated, a corrupt police officer received $120,000, paid through 16 bank transfers from individuals linked to companies accused of illegal logging. These were the very companies this police officer was supposed to be investigating. Similar mechanisms occur in deforestation, a major factor in the decline of tropical biodiversity, as well as in animal trafficking. In the case of deforestation, we must remember, for example, that over 90% of deforestation in Brazil is illegal (Wilkinson 2019; MapBiomas http://mapbiomas.org/). Gains from environmental
crimes (deforestation, hunting, illegal animal trafficking, etc.) navigate the network of international financial corporations and are in symbiosis with them: crime brings in wealth, and banks legalize it and profit from it. This was evidenced by the collusion between organized crime and HSBC, which, according to Charles Ferguson, is not unique to that bank (Ferguson 2012). As Neil Barofsky (2012) emphasizes, in the eyes of the judicial system, “HSBC is not only too big to fail, but is also too big to jail.” In fact, when the case came to light, its directors paid a fine corresponding to a few weeks’ earnings but were never defendants in a criminal lawsuit. This high-profit, low-risk mechanism reinforces the tacit symbiosis between the financial system and organized environmental crime.

10.9 Systemic Destruction: 70% of Biodiversity Loss Comes from Agriculture and Livestock

Even when the destruction of plant and animal species is not illegal or is not an immediate business focus, global capitalism or, more precisely, the globalized production of soft commodities is systemically the main cause of biodiversity collapse. Agribusiness is certainly one of the most important causes of population decline and, ultimately, of the extinction of vertebrates and invertebrates. As the 2016 I Relatório Nacional Sobre
Gestão e Uso Sustentável da Fauna Silvestre [1st National Report on Use and Management of Wildlife in Brazil] produced by Renctas states:

The destruction of natural environments eliminates thousands of animals locally. In the Amazon, between 5 and 10 thousand square kilometers of forest
are cleared every year. In forests where there is no hunting, each square kilometer can house up to 95 medium and large mammals, depending on the forest. Therefore, deforestation
may be eliminating between 475 to 950 thousand animals per year in the Amazon.
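
The range given by Renctas follows directly from multiplying its own figures (a simple check, assuming the reported densities apply across the cleared area):

\[ (5{,}000\text{–}10{,}000 \text{ km}^2 \text{ yr}^{-1}) \times 95 \text{ medium and large mammals km}^{-2} \approx 475{,}000\text{–}950{,}000 \text{ animals yr}^{-1}. \]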

In addition to advancing over native vegetation cover, agribusiness intoxicates the soil, causes
eutrophication of the aquatic environment through the use of chemical fertilizers, and poisons the environment with systemic pesticides. David
Gibbons, Christy Morrissey, and Pierre Mineau (2014) present the results of a review of 150 studies on the direct (toxic) and indirect (food chain) action of three insecticides— fipronil and two neonicotinoid-class insecticides
(imidacloprid and clothianidin)— on mammals, birds, fish, amphibians, and reptiles:

All three insecticides exert sub-lethal effects, ranging from genotoxic and cytotoxic effects, and impaired immune function, to reduced growth and reproductive success, often at concentrations well below those associated with mortality. Use of imidacloprid and clothianidin as seed treatments on some crops poses risks to small birds, and ingestion of even a few treated seeds could cause mortality or reproductive impairment to sensitive bird species.
Caspar Hallmann et al. (2014) also showed that regions with concentrations of more than 20 nanograms of imidacloprid per liter of water present a rapid decline in insectivorous bird population:

Recent studies have shown that neonicotinoid insecticides have adverse effects on non-target invertebrate
species. Invertebrates constitute a substantial part of the diet of many bird species during the breeding season and are indispensable for raising offspring. We investigated the hypothesis that the most
widely used neonicotinoid insecticide, imidacloprid, has a negative impact on insectivorous bird populations. Here we show that, in the Netherlands, local population trends were significantly more negative in
areas with higher surface-water concentrations of imidacloprid. At imidacloprid concentrations of more than 20 nanograms per litre, bird populations tended to decline by 3.5 per cent on average annually.

According to the fourth edition of the UN Global Biodiversity Outlook (GBO-4, 2014), “drivers [of biodiversity loss] linked to agriculture account for 70% of the projected loss of terrestrial biodiversity.” Indeed, Manfred Lenzen et al. (2012) show that about one third of species threatened with extinction in “developing nations” are in this condition as a result of the extreme globalization of commodities trade. The study is the first to detect and quantify the cause and effect relationship between the 15,000 commodities produced for international trade in 187 countries and
the threat to 25,000 animal species listed in the 2012 IUCN Red List. Orangutans, elephants, and tigers in Sumatra are on this list, for example, as victims of habitat degradation due to palm oil and pulpwood plantations. Within the past 25 years, agribusiness has destroyed 69% of the habitat of the Sumatran elephants (Elephas maximus sumatranus). Its population had declined by at least 80% during the past three generations, estimated to be about 75 years, and is now reduced to between 2400 and 2800 individuals. The IUCN has just reclassified the situation of these
animals, no longer placing them in the “Endangered” species category, but in the “Critically Endangered” category. Also, in Brazil or, perhaps, especially in Brazil, agribusiness (mostly livestock) is the main culprit of biodiversity loss. The Red Book of Endangered Brazilian Fauna (ICMBio, Vol., 2018) clearly points to the major role that agribusiness plays in the destruction of Brazil’s immense natural heritage:

Throughout the country, the main pressure factors on continental species relate to the consequences of agriculture and livestock activities, either by fragmentation and decrease in habitat quality in areas where the activity is consolidated or by the ongoing process of habitat loss in areas where the activity is expanding. These activities affect 58% of the 1014 endangered continental species.

Figure 10.4 quantifies these pressure factors on Brazilian biodiversity.

[[FIGURE 10.4 OMITTED]]


Of the 1014 continental species considered threatened, 592 are in this condition as a direct result of agriculture and livestock activities. Furthermore, 162 species are threatened by pollution, in particular by the use of pesticides, which affect “mainly invertebrates (river crabs, limnic mollusks, butterflies and springtails), but also fish, birds, amphibians, reptiles, and mammals.” Finally, 135 species are threatened by forest fires, the vast majority of which are caused by the agribusiness sector. This means that of the 1014 species evaluated in Brazil, close to 880, or about
87% of them, are directly threatened by agribusiness.

10.10 Mammals

Regarding mammal species that are threatened (i.e., Critically Endangered, Endangered, and Vulnerable), the exacerbation of the phenomenon can be established with a high degree of reliability, since the 5506 species evaluated by the IUCN coincide with the number of species described (a number that is certainly close to that of existing mammal species): the share of threatened species rose from 21% in 2009 to 25% in 2019, a huge difference in the span of just 10 years. Population decline in non-synanthropic mammals is widespread, yet it varies according to their trophic level and is most acute among species at the top of the food pyramid: 77% of 31 large (over 15 kg) carnivore species are declining, and more than half of these 31 species have already declined by 50% from their historical records (Ripple et al. 2014). In another paper, William Ripple et al. (2015) warn that large terrestrial herbivorous mammals, a group of about 4000 species, are also experiencing alarming rates of population decline:

Large herbivores are generally facing dramatic population declines and range contractions, such that ~60% are threatened with extinction. Nearly all threatened species are in developing countries, where major threats include hunting, land-use change, and resource depression by livestock. Loss of large herbivores can have cascading effects on other species including large carnivores, scavengers, mesoherbivores, small mammals, and ecological processes involving vegetation, hydrology, nutrient cycling, and fire regimes. The rate
of large herbivore decline suggests that ever-larger swaths of the world will soon lack many of the vital ecological services these animals provide, resulting in enormous ecological and social costs.

Large terrestrial herbivores (those with a body mass of ≥100 kg) are also declining in developed countries: according to the 2018 Arctic Report Card (NOAA), since the mid-1990s, the size of reindeer and caribou herds has declined by 56%, a drop from an estimated 4.7 million animals to 2.1 million, a loss of 2.6 million (Sullivan 2018). Boreal caribou are disappearing across Canada as their original habitat has been cut in half. Only 14 of 51 herds are considered self-sustaining, and another third of the remaining boreal caribou could disappear in the next
15 years (Berman 2019).

Within the mammalian class, the primate order is declining particularly rapidly. The 2013 IUCN Red List reported that of the 420 species evaluated, 49% were threatened globally. The 2018 Red List shows that of the 450 species evaluated, 59.8% are now threatened. This race toward extinction was brought to light by a comprehensive study in 2017 involving 31 scientists and coordinated by Alejandro Estrada (UNAM):

Current information shows the existence of 504 species in 79 genera distributed in the Neotropics, mainland Africa, Madagascar, and Asia. Alarmingly, ~60% of primate species are now threatened with extinction and ~ 75% have declining populations. This situation is the result of escalating anthropogenic pressures on primates and their habitats—mainly global and local market demands, leading to extensive habitat loss through the expansion of industrial agriculture, large-scale cattle ranching, logging, oil and gas drilling,
mining, dam building, and the construction of new road networks in primate range regions.

10.11 Birds

The situation of birds has worsened at least since 1988, the date of the first full IUCN assessment. In the 2012 BirdLife International assessment (accredited by the IUCN), there were 1313
bird species threatened with extinction (Vulnerable, Endangered, and Critically Endangered), representing 13% of the 10,064 bird species described in the world. In BirdLife International’s State of the World’s Birds 2018, this
percentage rises to 13.3%. In absolute terms, this means a huge increase in just 5 years, since now an additional 156 bird species are threatened with extinction, or 1469 species out of a universe of 11,000 species which have been
described. The biggest cause of this decline, according to the document, continues to be agribusiness.

Europe lost 400 million birds from 1980 to 2009, and around 90% of this decline can be attributed to the 36 most common species, such as sparrows, starlings, and skylarks (Inger et al. 2014). Kenneth Rosenberg et al. (2019) reported population losses across much of the North American avifauna over 48 years, including once-common species and from most biomes: “Integration of range-wide population trajectories and size estimates indicates a net loss approaching 3 billion birds, or 29% of 1970 abundance.” The State of India’s Birds 2020 assessed the status of 867
species. Of the 261 species for which long-term trends could be determined, 52% have declined since the year 2000, with 22% declining strongly. Current annual trends could be estimated for 146 species. Of these, 79% have declined over the last 5 years, with almost 50% declining strongly. Just over 6% are stable and 14% increasing. According to a study by the Brazilian government conducted in 2016, of the 1924 bird species described, 233 species and subspecies were threatened with extinction. Of these, 42 were critically endangered. In the Amazon, BirdLife
International’s 2012 IUCN Red List shows that almost 100 bird species are now closer to extinction.7 According to Leon Bennun, Director of science, policy, and information at BirdLife: “We have previously underestimated the risk of extinction that many of Amazonia’s bird species are facing. However, given the recent weakening of Brazilian forest law, the situation may be even worse than recent studies have predicted” (Harvey 2012).
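
The Rosenberg et al. (2019) figures imply a baseline on the order of ten billion birds (an implied magnitude, derived here from the quoted loss and percentage rather than stated in the excerpt above):

\[ \frac{3 \times 10^{9} \text{ birds lost}}{0.29} \approx 1.0 \times 10^{10} \text{ birds in 1970, falling to roughly } 7 \times 10^{9} \text{ by the late 2010s.} \]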

Despite the international Convention on Migratory Species (1979), migratory birds continue to suffer from massive hunting. In Northeast India, 120,000 to 140,000 Amur falcons, or Eastern red-footed falcons (Falco amurensis), are hunted each year, mainly with nets as they pass through the Nagaland region on their annual migration from Siberia and northern China to Africa (Dalvi and Sreenivasan 2012). In Africa, the practice of catching migratory birds from northern latitudes dates back to the time of the pharaohs. But it is now performed on an infinitely larger scale, with
equipment imported from China, such as huge plastic nets and recordings that attract the birds. Such techniques allow as many as 140 million birds to be killed each migratory season. According to Brian Finch, author of a migratory bird census in Kenya, “there’s a massive die-off of birds. We should be aware of the implications of taking all these insectivores out of the ecosystem” (Mangat 2013).

This awareness is critical. The abominable extermination of birds deprives the biosphere of fantastically beautiful, sensitive, and intelligent creatures. Birds are pollinators. According to WWF, the seeds of over 90% of all woody tree species are moved around by birds. Moreover, they are indispensable for the ecological balance and, therefore, for humanity. They control the numbers of pests that would otherwise plague our natural environments and our crops. We must remember the terrible consequences of the loss of these and other insectivores. By causing a terrible
proliferation of insects, the near-extermination in China of the Eurasian tree sparrow (Passer montanus) under the Great Sparrow Campaign in 1958 was one of the causes of the so-called Great Chinese Famine (1958–1961), in which more than 20 million people starved to death. Today, climate change and the destruction of forest habitats, particularly by agriculture and livestock activities, are often cited as factors that explain the spread and sudden increase in populations of Aedes aegypti and Aedes albopictus, especially in the tropical and subtropical regions of the
planet. But, as a supplementary factor, one might point out the drastic worldwide decline in bird populations and in other natural insect predators, such as reptiles and amphibians which are even more threatened. The Great Sparrow Campaign of 1958, ordered by Mao Zedong, was an act of ecological dementia. But the current decimation of birds through loss of wildlife, deforestation, increased use of pesticides, hunting, and trafficking is no less insane. This equation was suggested by Philip Lymbery and Isabel Oakeshott (2014):

The effect of agricultural policy in Europe and the Americas in the past few decades has been almost exactly the same as Mao’s purge. Tree sparrows—the same species that Mao targeted—have declined in Britain by 97% over the last forty years, largely due to the intensification of agriculture. The figures for other well-loved birds like turtle doves and corn buntings are no less alarming. Modern farming has become so “efficient” that the countryside is now too sterile to support native farmland birds.

Drugs and pesticides also kill animals on a large scale. Diclofenac injected into cattle is leading to kidney failure in five of the eight vulture species found in India and in one eagle species (Aquila nipalensis), which feed on the carcasses of treated cattle. This is an evil that now also affects 13 other eagle species in Asia, Africa, and Europe.8 Similarly, the US EPA has published a paper
assessing the lethality of nine rodenticides on birds (especially owls, hawks, eagles, and crows) that feed on poisoned rats and mice.9

10.12 Terrestrial Arthropods and the Decline in Pollinators

According to Nico Eisenhauer, Aletta Bonn, and Carlos Guerra (2019), the Red List of Threatened Species of the IUCN “is still heavily biased towards vertebrates, with invertebrates being particularly underrepresented.” Invertebrates include arthropods (insects, arachnids, myriapods, crustaceans), which, as seen above (Leakey and Lewin 1996), account for over 80% of all known living animal species. They are crucial to the functioning of ecosystems. Insects, for instance, pollinate 80% of wild plants and provide a food source for 60% of birds (Hallmann et al. 2017). Yet, a
comprehensive review of 73 historical reports of insect declines from across the globe (Sanchez-Bayo and Wyckhuys 2019) reveals that the current proportion of insect species in decline is twice as high as that of vertebrates. Every year about 1% of all insect species are added to the list of threatened species, “with such biodiversity declines resulting in an annual 2.5% loss of biomass worldwide.” These dramatic rates of decline “may lead to the extinction of 40% of the world’s insect species over the next few decades.” According to the authors:

Habitat change and pollution are the main drivers of such declines. In particular, the intensification of agriculture over the past six decades stands as the root cause of the problem, and within it the widespread, relentless use of synthetic pesticides is a major driver of insect losses in recent times.

Mikhail Beketov et al. (2013) examined 23 streams in Germany, 16 in France, and 24 in Australia. They classified streams according to different levels of pesticide contamination: uncontaminated, slightly contaminated, and highly contaminated. Highly contaminated streams in Australia showed a decrease of up to 27% in the number of invertebrate families when compared to uncontaminated streams (Oosthoek 2013):

Pesticides caused statistically significant effects on both the species and family richness in both regions [Europe and Australia], with losses in taxa up to 42% of the recorded taxonomic pools. Furthermore, the effects in Europe were detected at concentrations that current legislation considers environmentally protective.

In 2014, the Task Force on Systemic Pesticides (TFSP), bringing together 29 researchers, stated in its findings that systemic pesticides pose an unmistakable and growing threat to both agriculture and ecosystems. Jean-Marc Bonmatin, a CNRS researcher in this working group, summarized these results (as cited by Carrington 2014):

The evidence is very clear. We are witnessing a threat to the productivity of our natural and farmed environment equivalent to that posed by organophosphates or DDT. Far from protecting food production, the use of neonicotinoid insecticides is threatening the very infrastructure which enables it.

10.13 The Ongoing Collapse of Flying Insect Biomass

Caspar A. Hallmann et al. (2017) shed new light on the current scale of the decrease in flying insects in Europe; no longer using indicators of population abundance for specific species or taxonomic groups, they look at changes in insect biomass. From observations conducted during 27 years in 63 nature reserves in Germany, the authors estimate that the average flying insect biomass declined 76% (up to 82% in midsummer) in just 27 years in these locations. Tyson Wepprich et al. (2019) estimated the rate of change in total butterfly abundance and the population trends
for 81 species using 21 years of systematic monitoring in Ohio, USA. They showed that “total abundance is declining at 2% per year, resulting in a cumulative 33% reduction in butterfly abundance.”
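
Both of these figures are consistent with the simple compounding of a roughly constant annual rate of decline (a rough sketch; the underlying studies estimate trends statistically rather than assuming a fixed rate):

\[ 1 - (1 - 0.02)^{20} \approx 0.33 \quad \text{(Ohio: a 2\% annual decline compounded over the roughly 20 annual intervals of the 21-year series)} \]

\[ 1 - (1 - 0.76)^{1/27} \approx 0.05 \quad \text{(Germany: a 76\% loss over 27 years corresponds to an average decline of about 5\% per year)} \]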

The ongoing collapse of flying insects in Europe and in the USA is the most visible part of a much broader phenomenon which entomologists are characterizing as “an accelerated decline in all insect species since the 1990s” (Foucart 2014). Insecticides are the main reason for this decline in insects, as pointed out by hundreds of scientific papers on insecticides that are called systemic. The order Lepidoptera, which represents about 10% of the known insect species and is one of the most studied, is not among the most severely affected. Nevertheless, several species of
butterflies and moths are declining because of the destruction of their habitats and the elimination, by herbicides, of the plants that caterpillars feed on. According to Rodolfo Dirzo et al. (2014):

Globally, long-term monitoring data on a sample of 452 invertebrate species indicate that there has been an overall decline in abundance of individuals since 1970. Focusing on just the Lepidoptera (butterflies and moths), for which the best data are available, there is strong evidence of declines in abundance globally (35% over 40 years). Non-Lepidopteran invertebrates declined considerably more, indicating that estimates of decline of invertebrates based on Lepidoptera data alone are conservative.

Bill Freese and Martha Crouch (2015) showed that spraying Monsanto’s glyphosate herbicide on Roundup Ready corn and soybean seeds (genetically engineered to tolerate this herbicide) eradicated milkweeds, the only food source for the monarch butterfly (Danaus plexippus). As a result, its existence is threatened. In the state of Florida, five species of butterflies were considered extinct in May 2013 by entomologist Marc Minno, and two more (Epargyreus zestos oberon and Hesperia meskei pinocayo) became extinct in July 2013 in the same region. Another species
which is among the most beautiful, the so-called Madeiran large white (Pieris brassicae wollastoni), in Madeira, a Portuguese island, was declared extinct in 2006, falling victim to what IUCN’s Red List calls “natural system modification.”

The European Grassland Butterfly Indicator (EEA) estimates that in just over 20 years (1990–2011), half of the field butterflies disappeared from the landscape of 19 European countries. In Brazil, about 50 species of butterflies are threatened with extinction and listed in the IBAMA Red List.10 This decline is all the more worrying because butterflies are important bioindicators: they are considered indicators that represent trends observed in most terrestrial insects. Moreover, they are key to the preservation of these ecosystems, being a source of food for birds and, above
all, because of their role as pollinators. This terrible decline in invertebrate biodiversity is not restricted to insects. The Earth’s Endangered Creatures lists 30 endangered arachnid species, stressing that the list is not exhaustive.

10.14 Pollinators and the Crisis in Pollination

As pointed out by the IPBES (2016), over 40% of invertebrate pollinators and 16.5% of vertebrate pollinators are threatened with global
extinction (increasing to 30% for island species). Since at least the end of the twentieth century, the decline in invertebrate pollinators has attracted public attention (Allen-Wardell et al. 2008). Pollination is a vital
service for plant reproduction and biodiversity maintenance. Around 100,000 invertebrate species, especially insects, are involved in pollination, and “in extreme cases, their decline may lead to the extinction of plants and animals,” since “of the estimated 250,000 species of modern angiosperms, approximately 90% are pollinated by animals, especially insects” (Rocha 2012). Our food dependence on pollinators is, therefore, immense, and the consequences of this crisis can be catastrophic. According to a UNEP report (2010), based on FAO’s estimates, “out of some 100 crop species which provide 90% of food worldwide, 71 of these are bee-pollinated.” In the UK, the Living with Environmental Change Network (LWEC 2014) warned that:

Over three-quarters of wild flowering plant species in temperate regions need pollination by animals like
insects to develop their fruits and seeds fully. This is important for the long-term survival of wild plant populations and

provides food for birds and mammals. Pollinators improve or stabilize the yield of three-quarters of all crop types globally; these pollinated crops
represent around one third of global crop production by volume.

In addition, 90% of the vitamin C we need comes from insect-pollinated fruits, vegetables, oils, and seeds. This support network
for plant reproduction and biodiversity is weakening. The 2016 IPBES assessment on pollinators reaffirmed and updated these estimates: “75% of our food crops and nearly 90% of wild flowering plants depend at least to some
extent on animal pollination.” Furthermore, the assessment concluded that “a high diversity of wild pollinators is critical to pollination even when managed bees are present in high numbers.” A study by Berry Brosi and Heather M.
Briggs (2013) revealed the importance of this diversity:

We show that loss of a single pollinator species reduces floral fidelity (short-term specialization) in the remaining pollinators, with significant implications for ecosystem functioning in terms of reduced plant reproduction, even when potentially effective pollinators remained in the system. Our results suggest that ongoing pollinator declines may have more serious negative implications for plant communities than is currently assumed.

10.15 Colony Collapse Disorder (CCD) and Pesticides

The threat of extinction that hovers over bees (Apis mellifera) affects both wild and managed bees. David Hackenberg, an
American commercial beekeeper, coined the term colony collapse disorder (CCD) in 2006: healthy hives have been deserted by the vast majority of worker bees who abandon the queen bee, a few nurse bees, the larvae, and the
food reserves. This phenomenon currently strikes apiaries in the USA, Europe, Africa, China, Taiwan, Japan, the Middle East, and Brazil. It has manifested itself since the 1990s, but most markedly since 2006 in the USA, with losses
reaching 30–90% of the hive population in Europe and in the USA. The winter of 2018–2019 saw the highest honeybee colony losses on record in the USA (Reiley 2019).

Three groups of scientists—the French NGO Pollinis (Réseau des Conservatoires Abeilles et Pollinisateurs); the working group on systemic pesticides (TFSP, or Task Force on Systemic Pesticides), already mentioned in Chap. 4 (Sect. 4.10 “Industrial Pesticides”); and the IPBES—have accumulated evidence showing that the decline in bees is caused by habitat loss and systemic neurotoxic pesticides made from substances such as fipronil or neonicotinoids (clothianidin, imidacloprid, and thiamethoxam). The IPBES (2016) assessment concludes that “recent evidence shows
impacts of neonicotinoids on wild pollinator survival and reproduction at actual field exposure.” These molecules, commercialized under the names of Gaucho, Cruiser, Poncho, Nuprid, Argento, etc., comprise about 40% of the world’s agricultural insecticide market, representing a value of over 2.6 billion dollars. They are distinguished from previous generations of insecticides by their toxicity, which is 5000–10,000 times greater than the famous DDT, for example (Foucart 2014). In addition to disrupting the neurological orientation system of bees, these pesticides weaken
them and make them more vulnerable to viruses, mites (Varroa destructor), and other pathogens. In a study published by Jeffery Pettis et al. (2013), in which healthy bees were fed beehive pollen collected from 7 pesticide-contaminated agricultural crops, 35 pesticides (with high doses of fungicides) were detected in those samples, and contact with the samples made bees more susceptible to intestinal parasites. Ten categories of pesticides (carbamates, cyclodienes, formamidines, neonicotinoids, organophosphates, oxadiazines, pyrethroids, etc.) were considered:

Insecticides and fungicides can alter insect and spider enzyme activity, development, oviposition behavior, offspring sex ratios, mobility,
navigation and orientation, feeding behavior, learning and immune function. Reduced immune functioning is of particular interest because of recent disease-related declines of bees including honey bees.

Pesticide and toxin exposure increases susceptibility to and mortality from diseases including the gut parasite Nosema spp. These
increases may be linked to insecticide-induced alterations to immune system pathways, which have been found for several insects, including honey bees.

Tomasz Kiljaneck et al. (2016) published a new method for detecting the presence of 200 insecticides in bees at concentrations of 10 ng/g or less. They conclude, with regard to European bees, that “in total, 57 pesticides and metabolites were determined in poisoned honeybee samples.” Furthermore, these pesticides “even at very low levels at environmental doses and by interaction could weaken bees defense systems allowing parasites or viruses to kill the colony.” The lethality of pesticides is sufficiently demonstrated for the subgroup of species that includes
bumblebees and Apis mellifera, which accounts for 80% of insect pollination. Juliet Osborne (2012) stressed the importance of a study conducted by Richard Gill, Oscar Ramos-Rodriguez, and Nigel Raine (2012) on bumblebees. The authors examined bumblebee responses to two pesticides:

We show that chronic exposure of bumblebees to two pesticides (neonicotinoid and pyrethroid) at concentrations that could approximate field-level exposure impairs natural foraging behaviour and increases worker mortality leading to significant reductions in brood development and colony success. We found that worker foraging performance, particularly pollen collecting efficiency, was significantly reduced with observed knock-on effects for forager recruitment, worker losses and overall worker productivity. Moreover, we
provide evidence that combinatorial exposure to pesticides increases the propensity of colonies to fail.

The EPILOBEE study in 17 European countries found a widespread incidence of CCD, albeit more severe in northern European countries, with beehive losses ranging from 27.7% in France to 42.5% in Belgium (Foucart 2014). According to the USDA and the Apiary Inspectors of America, hives have been decimated in this country at rates between 21% and 34% per year since the winter of 2006/2007.11 In Brazil, the world champion in pesticide use, bee mortality has already reached 14 states. In the state of Sao Paulo, 55% of bees have disappeared, according to Lionel Segui
Gonçalves (2017). According to Oscar Malaspina, there is a clear link between these occurrences and the use of pesticides, especially when launched by airplanes, since the wind carries the sprayed product to areas adjacent to those of the target plantation.12

10.16 Three Certainties Among So Many Uncertainties

During the Pleistocene, our planet was inhabited by gigantic and spectacular animals. By the end of this geological age, which spans from about 2.5 million to 11,700 years BP, many of these species, such as mammoths, ground sloths, and giant tortoises, in addition to the various species belonging to the sabertooth genera, had become extinct, many of them probably driven to extinction by our species. According to William Ripple and Blaire Van Valkenburgh (2010), “humans may have contributed to the Pleistocene megafaunal extinctions. The arrival of the first humans, as hunters and scavengers, through
top-down forcing, could have triggered a population collapse of large herbivores and their predators.” Since then, the still rich animal diversity of the Holocene has continued to become impoverished, with hundreds of species officially becoming extinct in the last 500 years and, especially, in the last half century (1970–2020). We are now suffering more and more from the so-called empty forest syndrome, a term coined by Kent Redford in 1992 to refer to a forest already depleted of its fauna by humans, even before it would be completely destroyed by them. Defaunation,
in fact, precedes the final devastation of tropical forests by globalized capitalism. The final triumph of globalization in recent decades has greatly accelerated and continues to accelerate the ongoing collapse of biodiversity.

ChemChina, Bayer-Monsanto, DowDuPont, and BASF are the largest suppliers of pesticides in the world. They are definitely the main culprits for the collapse of biodiversity, especially among invertebrates. But climate change, habitat degradation, and landscape homogeneity combined can aggravate the effects of pesticides in nature. Climate change has accelerated range losses among many species. Peter Soroye, Tim Newbold, and Jeremy Kerr (2020) evaluated how climate change affects bumblebee species in Europe and North America. Their measurements provided
evidence of rapid and widespread declines of bumblebees across Europe and North America: “bumble bees richness declined in areas where increasing frequencies of climatic conditions exceed species’ historically observed tolerances in both Europe and North America.” Thus, the current approach to regulatory environmental risk assessment of pesticides must be updated. “The overall picture is of a need to move to a more holistic view” (Topping et al. 2020). In fact, we are only beginning to understand the scale, dynamics, and consequences of the ongoing sixth mass
extinction. In 1992, in his classic The Diversity of Life, Edward O. Wilson stated: “extinction is the most obscure and local of all biological processes.” As the IUCN warns, extinction risk has been assessed for only a small percentage of the total number of species described, which are, in turn, a small fraction of the biodiversity universe, recently estimated, as seen above, at 8 million eukaryotic species. Among so many uncertainties, three certainties, however, stand out:

1. Defaunation and species extinctions in all taxonomic groups are undoubtedly accelerating, as evidenced by successive assessments from the IUCN, the Living Planet Index, and many other scientific assessments.

2. Every species is precious not only for its own sake but also for its multiple interactions with other species in the web of life. As Sacha Vignieri (2014) rightfully reminds us:

Though for emotional or aesthetic reasons we may lament the loss of large charismatic species, such as tigers, rhinos, and pandas, we now know that loss of animals, from the largest elephant to the smallest beetle, will also fundamentally alter the form and function of the ecosystems upon which we all depend.

3. With this increasing fragmentation of the web of life, we are approaching critical points. This is the fundamental message both of the IPBES Global Assessment Report (2019) and of the study by Rodolfo Dirzo et al. (2014), so often cited in this chapter: “Cumulatively, systematic defaunation clearly threatens to fundamentally alter basic ecological functions and is contributing to push us toward global-scale ‘tipping points’ from which we may not be able to return.” And Sacha Vignieri (2014) completes: if we are unable to reverse
this ongoing collapse, “it will mean more for our own future than a broken heart or an empty forest.”

[[REFERENCES OMITTED]]

[[CHAPTER 11 BEGINS]]

For Samuel Iglésias (Santi 2015), “we are probably in the century that will see the extinction of marine fish.” In freshwater, where about 35,000 species of fish are known, the collapse of biodiversity has definitely begun, as hundreds of these species have already disappeared because of human activities. Ten years ago, Charles J. Vörösmarty et al. (2010) reported estimates, based on the IUCN, that “at least 10,000–20,000 freshwater species are extinct or at risk.” The ongoing death toll in freshwater ecosystems is obviously more overwhelming because of the smaller scale of these ecosystems and their greater exposure and vulnerability to fishing, to large dams which interfere with fish spawning, and to human pollution. Oceans cover 71% of the Earth’s surface, and in these huge ecosystems the impacts of humans are more diffuse and delayed. But already in 2011 (almost simultaneously, therefore, with the work coordinated by Vörösmarty on the extinction of freshwater species), a press release signed by the International Programme on the State of the Ocean (IPSO), the IUCN, and the World Commission on Protected Areas (WCPA) summarized the conclusions of an interdisciplinary and international workshop on the state of marine biodiversity in one sentence: “The world’s ocean is at high risk of entering a phase of extinction of marine species unprecedented in human history” (IPSO, IUCN, WCPA 2011). This scientific panel examined the combined effects of overfishing, pollution, ocean warming, acidification, and deoxygenation and concluded that:

The combination of stressors on the ocean is creating the conditions associated with every previous major extinction of species in Earth history. The speed and rate of degeneration in the ocean is far faster than anyone has predicted. Many of the negative impacts previously identified are greater than the worst predictions. Although difficult to assess because of the unprecedented speed of change, the first steps to globally significant extinction may have begun with a rise in the extinction threat to marine species such as reef forming corals.

In fact, over the past three decades, marine defaunation has been developing with increasing brutality and speed, as Douglas McCauley et al. (2015) warn:

Although defaunation has been less severe in the oceans than on land, our effects on marine animals are increasing in pace and impact. Humans have caused few complete extinctions in the sea, but we are responsible for many ecological, commercial, and local extinctions. Despite our late start, humans have already powerfully changed virtually all major marine ecosystems.

Green sea turtles (Chelonia mydas), for example, might become extinct within the timespan of a generation or so. This is because their sex determination depends on the incubation temperature during embryonic development. The higher the temperature of the sand where the eggs are incubated, the higher the incidence of females. This warming also causes high mortality rates in their offspring. Michael Jensen et al. (2018) state that:

Turtles originating from warmer northern Great Barrier Reef nesting beaches were extremely female-biased (99.1% of juvenile, 99.8% of subadult, and 86.8% of adult-sized turtles). (…) The complete feminization of this population is possible in the near future.

Extermination of aquatic life is caused by a wide range of anthropogenic impacts, including overfishing,
pollution, river damming, warming of the aquatic environment, aquaculture, decline in phytoplankton,
eutrophication, deoxygenation, acidification, and the death and destruction of corals. Each endangered species in the aquatic environment is
vulnerable to at least one of these factors. Just to name two examples:

1. Sturgeon (the common name for the 27 species of fish belonging to the family Acipenseridae): this magnificent water giant that has been able to evolve and adapt to so many transformations of the Earth’s system since the Triassic period, over 200 million years ago, is now succumbing to overfishing and to anthropic interference in its habitats. No less than 24 of its 27 species are endangered, and, according to IUCN, sturgeons are “more critically endangered than any other group of species.”

2. In the 1950s, thousands of baiji (Lipotes vexillifer), a species that had evolved over 20 million years, lived in the waters of the Yangtze; by 1994, fewer than 100 individuals remained, and by 2006 the dolphin had been driven to extinction by pollution, dam building, and reckless navigation (Bosshard 2015).

11.1 Mammals

After the Baiji extinction, there are, according to WWF, only six remaining species of freshwater dolphins in the world, three in South America, and three in Asia, five of them vulnerable, endangered, or critically endangered: (1) the Amazon river dolphin (Inia geoffrensis), also known as boto or pink river dolphin; (2) the Bolivian river dolphin (Inia boliviensis); (3) the tucuxi (Sotalia fluviatilis), found throughout the Amazon and Orinoco river basins; (4) the Ganges river dolphin (Platanista gangetica), also known as the Ganga or the soons; (5) the Indus river dolphin (Platanista
gangetica minor); and (6) the Irrawaddy dolphin (Orcaella brevirostris). The population of the Irrawaddy dolphin in Laos has been decimated by the fishing industry, and the fate of this species is definitely sealed by the construction of the Don Sahong Dam on the Mekong River in southern Laos (Russo 2015). Two Amazonian freshwater dolphin species, the pink river dolphin and the tucuxi, are critically endangered if the reduction rate of their populations in the observed area (the Mamirauá Sustainable Development Reserve) is the same in all of their habitats. Both species
are now in steep decline, with their populations halving every 10 years (botos) and 9 years (tucuxis) at current rates, according to a study conducted between 1994 and 2017 (Silva et al. 2018). The extinction of the soons, the river dolphins of India, is imminent, despite their status as the National Aquatic Animal since 2009. Between the end of the nineteenth and early twentieth centuries, its population was estimated at 50,000 in the Ganges alone. It had fallen to 5000 in 1982 and to 1200–1800 in 2012 on the Brahmaputra and Ganges rivers and their tributaries. The
proposed Tipaimukh dam on the Barak River in northeast India may sound the death knell for the Ganges river dolphin in Assam’s Barak River system (Ghosh 2019).
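A note on the halving times cited above (the conversion below is ours, not the study’s): a population that halves every T years is shrinking at an annual rate of 1 − 2^(−1/T), so

$$1 - 2^{-1/10} \approx 6.7\%\ \text{per year (botos)}, \qquad 1 - 2^{-1/9} \approx 7.4\%\ \text{per year (tucuxis)},$$

which is the per-year pace of decline implied by the rates reported in Silva et al. (2018).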

Similar existential threats also loom over many marine mammals, and species extinctions have started to occur in recent years. The Japanese sea lion (Nihon
ashika, Zalophus japonicus) was last observed in 1974 and is considered extinct. The Caribbean monk seal (Monachus tropicalis), a mammal over 2 m long, was officially declared extinct by the IUCN in 2008. The vaquita (Phocoena
sinus), a small cetacean endemic to the northern part of the Gulf of California, is on the brink of extinction. According to IUCN, “the vaquita is the most endangered marine mammal species in the world. (…) Its status has continued
to deteriorate [since 1958] because of unrelenting pressure from incidental mortality in fisheries.” In fact, the vaquita usually dies entangled in nets used to catch totoaba (Totoaba macdonaldi), a species that is also “critically
endangered” (IUCN), as its swim bladders are in high demand, especially in China. In the 1990s, there were already only 600 to 800 vaquitas and in 2014 only 100. Its extinction is imminent because in 2018 its population was
reduced to no more than 15. The South Island Hector’s dolphin and the Māui dolphin are subspecies of Hector’s dolphin (Cephalorhynchus hectori), whose only habitat is New Zealand’s coastal waters and which is critically endangered. Both subspecies are threatened with extinction. The Māui dolphin (Cephalorhynchus hectori maui) or popoto is on the edge of extinction. Its population is estimated at 55 individuals, and this dolphin, the smallest of the world’s 32 dolphin species, may be extinct in the near future (Rotman 2019).

According to the Marine Mammal Commission of the USA, the population of the Florida manatee (Trichechus manatus latirostris) is listed as threatened under the Endangered Species Act and designated as depleted (8810 individuals in 2016) under the Marine Mammal Protection Act. An estimated 824 Florida manatees died in 2018, “a nearly 50% increase over the number of deaths in 2017 and the second-highest death count ever” (Platt and Kadaba 2019). According to Jacqueline Miller (2016):

[t]he historical population of the blue whale [Balaenoptera musculus] has been estimated at somewhere between 300,000 and 350,000. Between 1909 and 1965, the reported kill of the blue whales in the Antarctic alone (…) was 328,177. By 1966 this number had increased to an estimated 346,000 blue whales killed. When commercial whaling of the blue whale was finally banned, it was estimated they had suffered up to a 99% population loss. Present estimates are between 5,000 and 10,000 in the Southern Hemisphere, and
between 3,000 and 4,000 in the Northern Hemisphere.

Currently, the IUCN considers the blue whale, the largest animal ever known to have existed (24–30 m), and 20 other marine mammals to be endangered, and the following whale species and populations to be critically endangered (Findlay and Child 2016): southern blue whale (Balaenoptera musculus intermedia), western gray whale (Eschrichtius robustus), Svalbard-Barents sea bowhead whale (Balaena mysticetus), North Pacific right whale (Eubalaena japonica), Chile-Peru southern right whale (Eubalaena australis), and Cook Inlet beluga whale (Delphinapterus leucas).

Just like the Japanese sea lion (Nihon ashika, Zalophus japonicus) which was wiped out by hunting, whales can also suffer the same fate, given the fact that Japan, Iceland, and Norway, which are currently considered pirate whaling nations, continue to hunt and kill fin, minke, and sei whales every year. Japan alone has killed more than 8000 whales under the guise of scientific research since the 1986 Moratorium on Commercial Whaling established by the International Whaling Commission (IWC). On the other hand, since 2002, over 6000 whales have been saved from the
Japanese whalers by the Sea Shepherd Conservation Society (SSCS), the heroic and combative NGO, created by Paul Watson and severely repressed by the governments of the USA, Australia, and New Zealand, in addition to being deemed an eco-terrorist organization by the Japanese government. Due to such pressures, Paul Watson declared in 2017 that the SSCS was abandoning the pursuit of Japanese whalers.

11.2 Noise Pollution

One of the biggest stressors of marine life is man-made noise pollution. “Hydrophones anchored to the continental slope off California, for instance, have recorded a
doubling of background noise in the ocean every two decades since the 1960s” (Brannen 2014). Alison Stimpert et al. (2014) documented the susceptibility of Baird’s beaked whales (Berardius bairdii) to mid-frequency active

military sonar. There is a well-documented link between the military sound pulses and mass strandings in which dozens of the mammals have died (Batchelor 2019). The Apache Alaska
Corporation deafened Beluga whales by detonating—every 10 or 12 seconds for 3–5 years—air guns for underwater oil and gas prospecting in the only habitat in which they still survive (Broad 2012). A 2012 New York Times
editorial underlines that:1

damaging to marine mammals.


Sound travels much faster through water than it does through air, magnifying its impact, and many of the sounds the Navy plans to generate fall in the frequencies most

More than five million of them may suffer ruptured eardrums and temporary hearing loss, in turn disrupting normal behavioral
patterns. As many as 1800 may be killed outright, either by testing or by ship strikes.

Between 2014 and 2019, the US Navy conducted firing exercises in the Atlantic and Pacific Oceans and in the Gulf of Mexico.

11.3 Overfishing and Aquaculture

Dirk Zeller and Daniel Pauly (2019) highlighted the two factors that contributed to the Great Acceleration after 1950, referring specifically to industrial scale fishing: “(1) the reliance on and availability of cheap fossil fuels; and (2) the gradual incorporation of technologies developed for warfare (e.g., radar, echo sounders, satellite positioning, etc.).” Due to these two factors, worldwide per capita fish consumption more than doubled in the last 60 years. According to FAO (2018), between 1961 and 2016, the average annual increase in global food fish consumption (3.2%)
outpaced population growth (1.6%). In per capita terms, this consumption grew from 9 kg in 1961 to 20.2 kg in 2015, at an average rate of about 1.5% per year. Already in 2013, industrialized countries consumed 26.8 kg of fish per capita. Consumption of fish in the European Union increased for nearly all of the main commercial species. It reached 24.33 kg per capita in 2016, 3% more than in 2015. The top five species eaten in the EU—tuna, cod, salmon, Alaska pollock, and shrimp—amounted to 43% of the market in 2016. These species were mostly imported from non-EU
countries (EUMOFA 2018).
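As a quick arithmetic check on the FAO figures just cited (the compounding below is ours): 9 kg per capita in 1961, growing at roughly 1.5% per year over the 54 years to 2015, gives

$$9\ \text{kg} \times (1.015)^{54} \approx 9\ \text{kg} \times 2.23 \approx 20\ \text{kg},$$

which is consistent with the reported 20.2 kg per capita for 2015.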

Global seafood production by fisheries and aquaculture was estimated at 167 million tons in 2014 and at 171 million tons in 2016, of which 88% was utilized for direct human consumption (FAO 2018). In 2012, 15 countries accounted for 92.7% of all aquaculture production, with 88% of global production coming from Asia. Twenty years ago, Rosamond Naylor et al. (2000) had already announced the unsustainable character of aquaculture:

Many people believe that such growth [of aquaculture] relieves pressure on ocean fisheries, but the opposite is true for some types of aquaculture. Farming carnivorous species requires large inputs of wild fish for feed. Some aquaculture systems also reduce wild fish supplies through habitat modification, wild seedstock collection and other ecological impacts. On balance, global aquaculture production still adds to world fish supplies; however, if the growing aquaculture industry is to sustain its contribution to world fish supplies,
it must reduce wild fish inputs in feed and adopt more ecologically sound management practices.

China’s aquatic farms provide more fish than the country’s fishing activities, and the interaction between sewage, industrial and agricultural waste, and pollution caused by aquaculture itself has devastated Chinese marine habitats (Barboza 2007).

In 2016, lakes, rivers, and oceans were scoured by 4.6 million boats and fishing vessels, 3.5 million of which were Asian (75%), searching for declining shoals of fish (FAO 2018). In fact, the decline in shoals explains why fishery output (excluding discarded fish) reached 93.8 million tons in 1996 and stabilized thereafter to around 90–92 million tons. Since then, the growth in seafood consumption has been increasingly supplied by aquaculture, which more than doubled during the 1990s with a 10% annual growth rate, although its average annual growth fell to 6% between
2000 and 2014 and to 5.8% between 2010 and 2016 (FAO 2018). According to a Rabobank report (2017), “aquaculture is expected to continue its growth in 2018, albeit at a gradually declining rate. Growth in both 2017 and 2018 is estimated to be in the 3% to 4% in volume terms.” This growth rate decline could be irreversible as aquaculture is susceptible to climate change and environmental extremes (Plagányi 2019; FAO 2018). Figure 11.1 differentiates between wild capture and aquaculture.

With regard to fishing, Daniel Pauly and Dirk Zeller (2016) show that this graph does not reflect the actual world fishing curve because 30% of global fish catch go unreported every year. According to these two experts on seafood depletion, this curve has peaked at a much higher number (130 million tons) and has since been declining three times faster than the FAO curve suggests, with a loss of one million tons a year due to overfishing (see also Carrington 2016). Figure 11.2 provides a clearer picture of both overfishing and the decline in marine fish stocks beginning in the
1990s:

The graph above shows that this decline is beyond doubt, and that fishing corporations, a sector heavily controlled by oligopolies, bear sole responsibility for it. In fact, according to Henrik Österblom et al. (2015):

Thirteen corporations control 11–16% of the global marine catch (9–13 million tons) and 19–40% of the largest and most valuable stocks, including species that play important roles in their respective ecosystem. They dominate all segments of seafood production, operate through an extensive global network of subsidiaries and are profoundly involved in fisheries and aquaculture decision-making.

Of these 13 multinationals, 4 have headquarters in Norway, 3 in Japan, 2 in Thailand, 1 in Spain, 1 in South Korea, 1 in China, and 1 in the USA. The revenues of these 13 corporations (0.5% of the 2250 fishing and aquaculture companies worldwide) corresponded to 18% of the value of seafood in 2012 (USD 252 billion).

[[FIGURE 11.1 OMITTED]]
[[FIGURE 11.2 OMITTED]]
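A rough consistency check on the catch reconstruction by Pauly and Zeller (2016) discussed above (our arithmetic, using only the figures given in the text): if about 30% of the true catch goes unreported, then the reported peak of 93.8 million tons implies a true peak of roughly

$$\frac{93.8\ \text{Mt}}{1 - 0.30} \approx 134\ \text{Mt},$$

in line with the ~130 million tons they estimate for the reconstructed peak.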
11.4 Fish Stocks on the Verge of Collapse

Back in 2006 Boris Worm and many other ocean scientists published an alarming report on the state of fisheries:
There is no reasonable doubt that most major declines in stock biomass and in corresponding yields are due to unsustainable levels of fishing pressure. Global assessments of stock biomass support our conclusions that a large fraction of the world’s fished biodiversity is overexploited or depleted (24% of assessed stocks in 2003), that this fraction is increasing (from 10% in 1974), and that recovery of depleted stocks under intense management is still an exception (1% in 2003).

Graham Edgar et al. (2014) showed that “total fish biomass has declined about two-thirds from historical baselines as a result of fishing.” Assessing
5829 populations of 1234 marine vertebrate species, the Living Blue Planet Report (WWF-ZSL 2015) provides a comprehensive picture of this situation until 2012:

Marine vertebrate populations declined 49% between 1970 and 2012. Populations of fish species utilized by humans have fallen by half, with some of the most important species experiencing even greater declines. Around one in four species of sharks, rays and skates is now threatened with extinction, due primarily to overfishing. Tropical reefs have lost more than half their reef-building corals over the last 30 years. Worldwide, nearly 20% of mangrove cover was lost between 1980 and 2005. 29% of marine fisheries are
overfished. If current rates of temperature rise continue, the ocean will become too warm for coral reefs by 2050. (…) Just 3.4% of the ocean is protected, and only part of this is effectively managed.

As Ken Norris, Director of Science at the Zoological Society of London (ZSL), said in a statement (Doyle 2015), “this report suggests that billions of animals have been lost from the
world’s oceans in my lifetime alone. (…) This is a terrible and dangerous legacy to leave to our grandchildren.” This harsh reality, one that our grandchildren will have no means of coping with, is one that even we
are now beginning to feel. Populations of some commercial fish stocks, such as tuna, mackerel, and bonito, fell by almost 75% between 1970 and 2012. Other less iconic and smaller species face similar situations (Zakaib 2011).
Over the past 20 years, stocks of hake, flounder, sole, halibut, mackerel, mackerel scad (Decapterus macarellus), horse mackerel, and other fish belonging to the Carangidae family have gone from 30 million tons to just 3 million. To
continue to supply the market, fishing must now extend over 6000 km from Peru to the Antarctic boundary and cover 120° of longitude, half the distance between Chile and New Zealand. The stocks of Chilean jack mackerel
(Trachurus murphyi of the family Carangidae) decreased by 63% between 2006 and 2011 alone and, according to Daniel Pauly, “when Chilean jack mackerel disappears, everything else will be gone.”2 Data from Sea Around Us on
fishing corporations allow Pauly to state that “the catch is going down by 2% a year, but it looks more stable than it is. What they do is deplete one stock and then move onto another, which means they’re going to run out of fish in
a few decades” (Pala 2016). Boris Worm et al. (2006) project the global collapse of all taxa currently fished by the mid-twenty-first century. Also for Paul Watson (Facebook 8/XII/2019), founder of the Sea Shepherd Conservation
Society, “by 2048 there will be no commercial fishing industry because there will be no fish. (…) If the fishes die out, the Ocean dies and when the Ocean dies, humanity dies with it.”

The current decline in fish populations explains why “aquaculture has recently superseded wild-capture fisheries as the main source of seafood for human consumption” (Guillen et al. 2019). In fact, industrial fishing would already be unprofitable if it were not supported by subsidies that encourage overfishing, mostly in developed countries. These subsidies “are worth an estimated US$14-35 billion— even though the global fishing fleet is 2–3 times larger than the ocean can sustainably support” (WWF-ZSL 2015).

11.5 Regional Declines, Some Irreversible

The decline in marine life has, in many cases, reached irreversible levels. In the USA, for example, the Sustainable Fisheries Act (SFA) of 1996 set specific quotas for
the 44 species protected by law. Even so, eight of them were no longer recoverable. Moreover, these restrictive fishing quotas in US waters may have an even greater impact on oceans that are distant from the country, as 62–65%
of seafood consumed in the USA is imported, according to Jessica Gephart, Halley Froehlich, and Trevor Branch (2019). Richard Conniff (2014) adds a factor that aggravates this picture: one-third of US fish imports hail from
trafficking, that is, they are considered IUU fishing, the technical term for illegal, unreported, and unregulated fishing. On the west coast of the USA, shoals of sardines have decreased by about 90% since 2007, from an estimated
1.4 million tons (2007) to roughly 100,000 tons in 2015. In light of this critical situation, the Pacific Fishery Management Council decided to ban fishing of sardines from Mexico to Canada, starting on July 1, 2015 (Fimrite 2015). This
was too late, though. Four years after this moratorium, a new stock assessment released by the National Marine Fisheries Service in July 2019 revealed that the population of Pacific sardines amounted to only 27,547 tons. The
Pacific sardine population has plummeted by 98.5% since 2006. The impacts of this collapse are echoing in the entire marine ecosystem. According to Elizabeth Grossman (2015):

Sardines, which are exceptionally nutritious because of their high oil content, are vital to mother sea lions feeding their pups and to nesting brown pelicans. (…) Starving pups have been seen on California’s Channel Islands (…). In addition, California brown pelicans have been experiencing high rates of nesting failure and thousands have been dying.

In January 2015, fishing corporations in Brazil obtained, from the government, amendments to the legislation that regulates the capture of 409 fish species and 66 aquatic invertebrates (Annex I list of IUCN). Alexander Lees (2015) warns that such a change may represent “a serious setback for conservation and for the sustainable management of fisheries in Brazil.” In 2010, according to a survey conducted in the seas of Brazil, overfishing devastated 80% of marine species. One hundred thousand tons of sardines were fished, for example, between the coast of Rio de Janeiro
and Santa Catarina alone.3 The increasing pressure of overfishing on Brazilian shoals has been ignored because the last survey on fishing in Brazil dates from 2011 and, since then, there has been no quantitative or qualitative data. In the Western and Eastern Mediterranean, 96% and 91% of fish stocks are overfished, respectively. European fishermen are catching fish on average six times more than the so-called maximum sustainable yield (MSY), the limit beyond which fishing pressure can no longer be compensated by the stock’s reproduction. This limit is respected by only 4% of European fisheries.
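For reference, the maximum sustainable yield invoked here is usually defined through a surplus-production model; in the standard Schaefer (logistic) formulation, a textbook convention rather than a formula given in this chapter, a stock with intrinsic growth rate r and carrying capacity K delivers its largest sustainable catch when biomass is held at half of carrying capacity:

$$\text{MSY} = \frac{rK}{4}, \qquad B_{\text{MSY}} = \frac{K}{2}.$$

Catch rates several times above this level, as reported above for Mediterranean fleets, necessarily drive biomass below B_MSY and keep the stock shrinking.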

11.6 Bottom Trawling: Fishing as Mining

With industrial fishing, we observe once again the most defining law of global capitalism: as natural resources become scarce, capitalism generates new technologies and more radical means of exploitation, and increasing devastation. Therefore, the first declines or collapses of shoals in the 1990s typically led to the use of an even more brutal method of exploitation than those that caused the scarcity: bottom trawling. According to the Living Blue Planet Report (WWF-ZSL 2015):

Only a few decades ago it was virtually impossible to fish deeper than 500 meters: now, with
technological improvements in vessels, gear and fish-finding equipment, bottom trawling is occurring at depths of up to
2,000 meters. Most deep-sea fisheries considered unsustainable have started to target fish
populations that are low in productivity, with long lifespans, slow growth and late maturity. This leads to rapid declines in the
population and even slower recovery once the stock has collapsed.
Trawling can refer to bottom trawling in search of demersal species or to midwater trawling, which involves the use of probes and GPS to fix the exact depth of the net. In both cases, the result is catastrophic for marine populations. Elliott A. Norse, President of the Marine Conservation Institute in Washington, and colleagues (2011) reveal how bottom trawls, which scrape the seafloor, are a form of mining for marine life:

As coastal fisheries around the world have collapsed, industrial fishing has spread seaward and deeper in pursuit of the last
economically attractive concentrations of fishable biomass. (…) The deep sea is by far the largest but least productive part of the oceans, although in very limited places fish biomass can be very high. (…) Many
deep-sea fisheries use bottom trawls, which often have high impacts on nontarget fishes (e.g., sharks) and invertebrates (e.g., corals), and can often proceed only because they receive massive government
subsidies. The combination of very low target population productivity, nonselective fishing gear, economics that favor population liquidation and a very weak regulatory regime makes deep-sea fisheries
unsustainable with very few exceptions. Rather, deep-sea fisheries more closely resemble mining operations that serially eliminate fishable populations and move on.

According to the FAO and the authors of this study, bottom trawling increased sevenfold between 1960 and 2004. It has also expanded since the 1950s toward the southern seas at an average rate of 1° latitude per year.

11.7 Hypoxia and Anoxia

“There is no environmental variable of such ecological importance to marine ecosystems that has
changed so drastically in such a short period of time as a result of human activities as dissolved oxygen”
(Laffoley and Baxter 2019). The loss of oxygen from the world’s ocean is a growing threat. Oxygen concentrations began to decline in the twentieth century, in both coastal and offshore areas. They fell by roughly 2% between 1960 and 2010 (Laffoley and Baxter 2019), and the 2013 IPCC Assessment Report (AR5) predicts a further decrease of 3–6% during this century in response to water warming alone.4 Multidecadal trends and variability of dissolved oxygen (O2) in the ocean from 1958 to 2015 were quantified by Takamitsu Ito et al. (2017). The authors conclude that:

A widespread negative O2 trend is beginning to emerge from the envelope of interannual variability. The global ocean O2 inventory is negatively correlated with the global ocean heat content. (…) The trend of O2
falling is about 2 to 3 times faster than what we predicted from the decrease of solubility associated with the ocean warming.

The primary causes of ocean deoxygenation are eutrophication, nitrogen deposition from the burning of fossil fuels,
and ocean warming. Eutrophication in coastal areas, a phenomenon described by Richard Vollenweider in 1968, is the degenerative response of an ecosystem to the abnormal accumulation of nitrogen and
phosphate in water, which stimulates the proliferation of algae and results in the formation of toxic material; release of hydrogen sulfide (H2S), an equally toxic gas; and obstruction of solar light by algae. When the algae die, they
sink and are broken down by microorganisms in a process called bacterial respiration, one that consumes oxygen. This results in hypoxia or, ultimately, anoxia (insufficient or no oxygen concentrations in the water), which leads to
the death and decomposition of aquatic organisms and, consequently, to more bacterial activity, in a snowball effect of environmental intoxication and decrease in biodiversity. Although eutrophication may occur for natural
reasons, this process has recently been the result of anthropogenic activities, such as agriculture, industry, and sewage disposal. According to a new IUCN report (Laffoley and Baxter 2019), “over 900 areas of the ocean around the
world have already been identified as experiencing the effects of eutrophication. Of these, over 700 have problems with hypoxia.”

There is a cause-and-effect relationship between the discharge of municipal effluents, nitrogen fertilizers, and other phytostimulant compounds based on nitrogen (N), phosphorus (P), and potassium (K)—a sector dominated by ten global corporations—and the pollution of soil, atmosphere, and water. In his preface to the 2013 report Our Nutrient World (Sutton et al. 2013), Achim Steiner summarizes the problem:

Excessive use of phosphorus is not only depleting finite supplies, but triggering water pollution locally and beyond while excessive use of nitrogen and the production of nitrogen compounds is triggering threats not only to freshwaters but to the air and soils with consequences for climate change and biodiversity.

11.8 Industrial Fertilizers

Since the second half of the twentieth century, there has been a growing consumption of fertilizers, one that is occurring at much higher rates than population growth. According to Denise Breitburg et al. (2018):

The human population has nearly tripled since 1950. Agricultural production has greatly increased to feed this growing population and meet demands for increased consumption of animal protein, resulting in a 10-fold increase in global fertilizer use over the same period. Nitrogen discharges from rivers to coastal waters increased by 43% in just 30 years from 1970 to 2000, with more than three times as much nitrogen derived from agriculture as from sewage.

Figure 11.3 shows this growth of one order of magnitude: world consumption of industrial fertilizers jumped from about 18 million tons in 1950 to 180 million tons in 2013.

[[FIGURE 11.3 OMITTED]]

In 1998, the world produced 137 million tons of chemical fertilizers, 15% of which were consumed in the USA. Between 1950 and 1998, worldwide use of petrochemical fertilizers increased more than four times per capita (Horrigan et al. 2002). In addition, fertilizer use per hectare of plowed land is increasing, from 110.2 kg/ha in 2003 to 122.4 kg/ha in 2009, according to data from the World Bank. This overconsumption of fertilizers is a consequence of soil depletion, but it is also induced by the dictate of profit maximization. We can speak of hyper-consumption because
most of the nitrogen and/or phosphorus contained in these fertilizers is not absorbed by plants. David Tilman (1998) estimates that agriculture absorbs only 33% to 50% of the nitrogen contained in it. For its part, Our Nutrient World, the report mentioned above, states that 80% of nitrogen and between 25% and 75% of phosphorus from fertilizers are not incorporated into plants and are dispersed into the environment. Part of this excess penetrates the water tables, and part of it is carried by rain to rivers, lakes, and the sea (Sutton et al. 2013).
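Restated as a loss fraction (our rephrasing of the two estimates just cited): if crops take up a fraction u of the nitrogen applied, the share dispersed into the environment is 1 − u, that is,

$$1 - u \approx 0.50\text{–}0.67\ \text{(Tilman 1998)}, \qquad 1 - u \approx 0.80\ \text{(Sutton et al. 2013)},$$

so somewhere between half and four-fifths of applied nitrogen ends up in soils, air, and water rather than in the crop.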

There is no recent assessment of the current degree of water eutrophication worldwide. A survey of water eutrophication at the end of the twentieth century by the UNEP showed that, globally, 30% to 40% of lakes and reservoirs were then affected to varying degrees by eutrophication (Zhang et al. 2019). In 1988, 1990, 1991, 1992, and 1994, the International Lake Environment Committee (ILEC)
published interim reports, titled Survey of the State of World Lakes. They indicated that eutrophication affected 54% of lakes in Asia, 53% of lakes in Europe, 48% in North America, 41% in South America, and 28% in Africa. In the
USA, the EPA’s first National Rivers and Streams Assessment 2008–2009 (NRSA), published in 2013, examined 1924 sites in rivers and streams in the country. The scope of the NRSA was to determine to what extent US rivers and
streams provide healthy biological conditions, as well as the magnitude of chemical stressors that affect them:

NRSA reports on four chemical stressors: total phosphorus, total nitrogen, salinity and acidification. Of these, phosphorus and nitrogen are by far the most widespread. Forty-six percent of the nation’s river and stream length has high levels of phosphorus, and 41% has high levels of nitrogen.

In summary, only 21% of the country’s rivers and streams are in “good biological condition,” 23% are in “fair condition,” and 55% are in “poor condition.” Eastern rivers face an even worse fate, with 67.2% of rivers and streams in poor biological condition.

11.9 Ocean Dead Zones: An Expanding Marine Cemetery

In seawater, oxygen exists in different concentrations, varying between 6 ml/l and 8.5 ml/l, depending on the water’s temperature and salinity. The term hypoxia
applies to when the oxygen concentration is less than 2 ml/l and the term anoxia to when this concentration is less than 0.5 ml/l. Under these conditions, fish that cannot flee in time tend to become disoriented, or they faint and
die from asphyxia. Organisms that cannot move quickly, such as crustaceans, and those that live fixed to other structures die in their entirety, and their putrefaction feeds back (through bacterial respiration) into hypoxia and anoxia.
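The thresholds above can be summarized as a simple classification of the dissolved-oxygen concentration c (in ml/l); the compact notation, including the label “normoxic” for unimpaired waters, is ours:

$$\text{status}(c) = \begin{cases} \text{anoxic} & \text{if } c < 0.5 \\ \text{hypoxic} & \text{if } 0.5 \le c < 2 \\ \text{normoxic} & \text{if } c \ge 2\ (\text{typically 6–8.5 in open seawater}) \end{cases}$$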

The phenomena of hypoxia or anoxia may occur naturally, but they are rare, small-scale, and only seasonal. The anthropogenic
factors listed above turn them into frequent and growing phenomena that are large scale and sometimes
permanent. An anoxic zone, thus, becomes a marine cemetery where there is no place for vertebrates and other
multicellular species. Ocean dead zones with almost zero oxygen have quadrupled in size since 1950, but between 2003 and 2011 alone they more than tripled. “Currently [2016], the total surface area of Oxygen Minimum Zones (OMZ) is estimated to be ~30 × 10⁶ km² (~8% of the ocean’s surface area).”5 According to Robert J. Diaz of the Virginia Institute of Marine Science, these areas have roughly doubled every decade (Perlman 2008). Figure 11.4 shows the decade-by-decade increase in the number of dead zones between the early twentieth century and 2006.

In 2011, the WRI and the Virginia Institute of Marine Science identified 530 dead zones and 228 zones displaying signs of marine eutrophication.6 Some of the most acute cases of dead zones are the North Adriatic Sea; the Chesapeake Bay in the USA; an area of over 18,000 km² off the northern shores of the Gulf of Mexico, at the mouth of the Mississippi River; Tokyo Bay; certain coastal waters of China, Japan, southeastern Australia, and New Zealand; as well as Venezuela’s Gulf of Cariaco. Other larger ocean dead zones are found in parts of the North Sea, the Baltic Sea,
and the Kattegat Strait between Denmark and Sweden. In 2012, Osvaldo Ulloa from the Center for Oceanographic Research at the University of Concepción in Chile noted the emergence of new anoxic zones off northern Chile’s Iquique coast. According to Ulloa, prior to this study, it was not thought that there could be completely oxygen-free areas in the open sea, let alone at levels so close to the surface: “fish lose their habitat and die or move away because they are unable to survive. Only microorganisms, especially bacteria and archaea, can survive” (Gutiérrez 2012).

[[FIGURE 11.4 OMITTED]]


11.10 Up to 170% More Ocean Acidification by 2100

“The surface ocean currently absorbs approximately one-third of the excess CO2 injected into the atmosphere from human fossil fuel
use and deforestation, which leads to a reduction in pH and wholesale shifts in seawater carbonate chemistry” (Donney et al. 2009). Since
1870 oceans have absorbed 150 Gigatons (Gt) of CO2, and between 2004 and 2013 alone, they absorbed on average 2.6 Gt of this gas per year. Oceans have a huge ability to absorb cumulative impacts. The dynamic of ocean
response to acidification is very slow and is not easily detected within the timeframe of scientific experiments. For example, the increase in ocean CO2 concentrations resulting from human activities over the past 50–100 years has
so far only reached a depth of 3000 m below the surface. But this absorption increases as CO2 concentrations in the atmosphere increase. This causes chemical changes in the aqueous environment. One of these changes is
acidification, that is, a change in ocean pH, or ionic hydrogen potential, which is a measure of hydrogen ion levels (H+) on a scale that indicates the acidity (low pH), neutrality, or alkalinity (high pH) of an environment. A lower pH
means that the ocean has become more acidic.
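Because pH is a base-10 logarithmic scale, a decrease of ΔpH multiplies the hydrogen-ion concentration by 10^ΔpH; the worked numbers below are ours, but the percentage figures quoted later in this section follow directly from this relation:

$$\text{pH} = -\log_{10}[\mathrm{H}^+], \qquad \frac{[\mathrm{H}^+]_{\text{now}}}{[\mathrm{H}^+]_{\text{preindustrial}}} = 10^{\Delta\text{pH}}.$$

For instance, the ~0.12-unit drop since pre-industrial times gives 10^0.12 ≈ 1.32, i.e., roughly a 30% rise in acidity, while a projected 170% rise corresponds to a further drop of about log₁₀(2.7) ≈ 0.43 pH units.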

Ocean acidification has been called “the other CO2 problem” (Donney et al. 2009) and Carol Turley of Plymouth University’s Marine Laboratory called it its “evil twin.” Rising atmospheric CO2 concentrations are causing changes in the ocean’s carbonate chemistry system. The dissolution of CO2 in water produces carbonic acid (H2CO3), and this process of oceanic acidification decreases the concentrations of calcium carbonate, such as calcite (CaCO3), high-magnesium calcite (HMC), and aragonite, which makes it more difficult for a
vast group of marine organisms—corals, crustaceans, urchins, mollusks, etc.—to use these minerals to turn them into their shells
or exoskeletal structures. The deficit and/or weakening of these protections slows embryo growth, prevents them from fully
forming, or makes their calcareous protections less adherent to stone, less dense, more brittle, and more vulnerable to predators and pathogens.
The rapidity of ocean acidification has surprised scientists. In 1999, it was predicted that changing ocean chemistry could affect corals in the mid-twenty-first century. Robert H. Byrne (2010) showed that pH changes in the northern Pacific reached a depth of 500 m between 1991 and 2006. Extrapolating from laboratory tests, A. Kleypas et al. (2006) predict that:

Calcification rates will decrease up to 60% within the 21st century. We know that such extrapolations are oversimplified and do not fully consider other environmental and
biological effects (e.g., rising water temperature, biological adaptation); nor do they address effects on organism fitness, community structure, and ecosystem functioning. Any of these factors could increase or
decrease the laboratory-based estimates, but it is certain that net production of CaCO3 will decrease in the future.

According to Carbon Brief projections (2018), if GHG emissions continue unabated (RCP8.5, a scenario corresponding to a radiative forcing of 8.5 W/m² by 2100), CO2 atmospheric concentrations will increase to 550 ppm by 2050 and to about 950 ppm by 2100. Richard Feely et al. (2008) state that “this increase [500 ppm by 2050 and 800 ppm by 2100] would result in a decrease in surface-water pH of ~0.4 by the end of the century, and a corresponding 50% decrease in carbonate ion concentration.” According to an assessment conducted in 2013 by a US National Academy of Sciences (NAS) committee on the worsening of
acidification:7

The rate of release of CO2 is the greatest for at least the past 55 million years. Since the start of the Industrial Revolution in the middle of the 18th century, atmospheric CO2 levels have risen by ~40% and the pH of seawater has decreased by ~0.12 pH units, which corresponds to an approximately 30% rise in acidity. By the end of this century, models based on “business as usual” scenarios for CO2 release predict a further decrease in pH that would lead to an approximately 100–150% rise in ocean acidity relative to the mid-18th
century.

A report from the International Geosphere-Biosphere Program (IGBP) presented in 2013 at the 19th United Nations Conference on Climate Change (COP 19) in Warsaw went beyond the estimate of a 100% to 150% increase in ocean acidification, stating that water acidification can increase by up to 170% by 2100, as a result of human activity, leading to the likely disappearance of around 30% of ocean species.

11.11 Ongoing Effects

Ocean acidification is already affecting, for example, the reproductive capacity of oysters grown in the northern Pacific coastal regions. In fact, between 2005 and 2008, the death of millions of oyster larvae on the US Pacific coast was recorded (Welch 2009). According to a study by Nina Bednarsek et al. (2014), acidification is also dismantling the protection of pteropods in California’s sea:

We show a strong positive correlation between the proportion of pteropod individuals with severe shell dissolution damage and the percentage of undersaturated water in the top 100 m with respect to aragonite. We found 53% of onshore individuals and 24% of offshore individuals on average to have severe dissolution damage. (…) Relative to pre-industrial CO2 concentrations, the extent of undersaturated waters in the top 100 m of the water column has increased over sixfold along the California Current Ecosystem (CCE). We
estimate that the incidence of severe pteropod shell dissolution owing to anthropogenic Oceanic Acidification has doubled in near shore habitats since pre-industrial conditions across this region and is on track to triple by 2050.
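The “undersaturated water … with respect to aragonite” in this passage refers to the carbonate saturation state, a standard quantity in ocean carbonate chemistry (the definition below is the conventional one, not taken from this chapter):

$$\Omega_{\text{aragonite}} = \frac{[\mathrm{Ca}^{2+}]\,[\mathrm{CO}_3^{2-}]}{K'_{\text{sp}}},$$

where K′sp is the apparent solubility product of aragonite. When Ω < 1 the water is corrosive to aragonite, which is why pteropod shells dissolve in the waters described by Bednarsek et al.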

One of the authors of this study, William Peterson, from the NOAA, stated: “We did not expect to see pteropods being affected to this extent in our coastal region for several decades.”8 Two species of edible bivalve shellfish (Mercenaria mercenaria and Argopecten irradians) were affected by exposure to varying levels of acidity in the aquatic environment, as shown by Stephanie C. Talmage and Christopher J. Gobler (2010):

Larvae grown under near preindustrial CO2 concentrations (250 ppm) displayed significantly faster growth and metamorphosis as well as higher survival and lipid accumulation rates compared with individuals reared under modern day CO2 levels. Bivalves grown under near preindustrial CO2 levels displayed thicker, more robust shells than individuals grown at present CO2 concentrations, whereas bivalves exposed to CO2 levels expected later this century had shells that were malformed and eroded. These results suggest that
the ocean acidification that has occurred during the past two centuries may be inhibiting the development and survival of larval shellfish and contributing to global declines of some bivalve populations.

The decline of these organisms has a chain of repercussions, given their multiple functions in the marine biosphere. Many of them, such as pteropods, are critical to the feeding of other species and to water filtering, making water less toxic to marine organisms. Finally, a lower pH in the blood and cellular fluids of some marine organisms may decrease their ability to capture oxygen, impairing their metabolic and cellular processes. Research is beginning to outline the broad range of these harms to marine life, which affect photosynthesis, respiration, nutrient acquisition, behavior, growth, reproduction, and survival.
11.12 Corals Die and Jellyfish Proliferate

Corals are cnidarian animals that secrete an exoskeleton which is formed of limestone or organic matter. Corals live in colonies considered “superorganisms” which are incomparable repositories of sea life. More than a quarter of all marine species are known to spend part of their lives in coral reefs (Caldeira 2012). Corals are anchors of various marine systems and their disappearance may be lethal to underwater life. Warming waters are the most important causa mortis of corals. Warming
causes corals to expel the microscopic algae (zooxanthellae) living in endosymbiosis with them from
their tissues. This, in turn, triggers a process called coral bleaching, from which they do not always recover. The extent and frequency of
these bleaching phenomena are increasing. Sixty major mass bleaching events occurred between 1979 and 1990 worldwide. In Australia, scientists from AIMS (Australian Institute of
Marine Science) have been monitoring mass bleaching events on the Great Barrier Reef since the early 1980s. Mass bleaching events in 1998, 2002, 2006, 2016, 2017 (Vaughan 2019), and most recently 2020 were caused by
unusually warm sea surface temperatures during the summer season. On March 26, 2020, David Wachenfeld, Chief Scientist of the Great Barrier Reef Marine Park Authority, stated in a video posted on its website: “We can confirm
that the Great Barrier Reef is experiencing its third mass bleaching event in five years.” Some reefs that did not bleach in 2016 or 2017 are now experiencing moderate or severe bleaching. Normally, mass bleaching events are
associated with the El Niño climate phenomenon, but the 2017 and 2020 bleaching episodes took place without one. Terry Hughes (dubbed “Reef Sentinel” by the journal Nature) and co-authors published an article in Science
(2018) on a survey of 100 coral reefs in 54 countries; the conclusions could not be more alarming:

Tropical reef systems are transitioning to a new era in which the interval between recurrent bouts of coral bleaching is too short for a full recovery of mature assemblages. We analyzed bleaching records at 100 globally distributed reef locations from 1980 to 2016. The median return time between pairs of severe bleaching events has diminished steadily since 1980 and is now only 6 years. (…) As we transition to the Anthropocene, coral bleaching is occurring more frequently in all El Niño–Southern Oscillation phases, increasing
the likelihood of annual bleaching in the coming decades.

The authors also show that the percentage of bleached corals per year increased from 8% in the 1980s to 31% in 2016. In a subsequent publication, Terry Hughes et al. (2019) show that the reproductive capacities of corals are decreasing after unprecedented back-to-back mass bleaching events caused by global warming: “As a consequence of mass mortality of adult brood stock in 2016 and 2017 owing to heat stress, the amount of larval recruitment declined in 2018 by 89% compared to historical levels.” Pollution also plays an important role in coral decline, as tiny plastic
particles found in polyps alter their feeding capacity (Hall et al. 2015). Other causes of coral reef death are acidification of water by increasing CO2 absorption, direct discharge of sewage and domestic and industrial effluents, and the use of dynamite or cyanide to kill fish, which also destroys or poisons reefs. According to the ReefBase: A Global Information System for Coral Reefs (with data from the World Atlas of Coral Reefs from the UNEP-World Conservation Monitoring Center), fishing with dynamite and/or cyanide continues to be practiced, although it has been
prohibited since 1985. This is the case, for example, in Indonesia, a nation that, together with Australia, is home to the largest coral reefs in the world.

Up to 90% of coral reefs in the seas off the Maldives and off the Seychelles, both in the Indian Ocean, have been killed by global warming. While coral reef systems have existed for over 500 million years, “the Great Barrier Reef is relatively young at 500,000 years of age, with the most recent addition developing only 8,000 years ago” (Esposito 2020). Since 1985, the 2000-km Great Barrier Reef off Australia’s east coast—a crucial reservoir for the survival of 400 coral species, 1500 species of fish, and 4000 species of mollusks—has lost half of its coral cover, falling victim to industrial ports, global warming, ocean acidification, pollution, and, additionally, Acanthaster, the crown-of-thorns starfish, whose proliferation is due mainly to the agribusiness sector, which dumps increasing quantities of industrial fertilizers into the ocean.

An assessment of the Caribbean coral decline between 1970 and 2012, promoted by the IUCN (Jackson, Donovan, Cramer, Lam 2014), states:

Caribbean coral reefs have suffered massive losses of corals since the early 1980s due to a wide range of human impacts including explosive human population growth, overfishing, coastal pollution, global warming, and invasive species. The consequences include widespread collapse of coral populations, increases in large seaweeds (macroalgae), outbreaks of coral bleaching and disease, and failure of corals to recover from natural disturbances such as hurricanes. Alarm bells were set off by the 2003 publication in the journal
Science that live coral cover had been reduced from more than 50% in the 1970s to just 10% today [2012].

Finally, on July 13, 2012, Roger Bradbury, a Specialist on Corals from the Australian National University, wrote an understandably outspoken article in the New York Times:

It’s past time to tell the truth about the state of the world’s coral reefs, the nurseries of tropical coastal fish stocks. They have become zombie ecosystems, neither dead nor truly alive in any functional sense, and on a trajectory to collapse within a human generation. There will be remnants here and there, but the global coral reef ecosystem—with its storehouse of biodiversity and fisheries supporting millions of the world’s poor—will cease to be. Overfishing, ocean acidification and pollution are pushing coral reefs into oblivion.
Each of those forces alone is fully capable of causing the global collapse of coral reefs; together, they assure it. The scientific evidence for this is compelling and unequivocal, but there seems to be a collective reluctance to accept the logical conclusion—that there is no hope of saving the global coral reef ecosystem. (…) Coral reefs will be the first, but certainly not the last, major ecosystem to succumb to the Anthropocene—the new geological epoch now emerging.

11.13 Jellyfish

While corals die, jellyfish thrive. In 2010, scientists at the University of British Columbia established that global warming was causing a proliferation and an earlier yearly appearance of
numerous species of jellyfish (McVeigh 2011). Once kept in balance by the self-regulating mechanisms of ecosystems, the jellyfish belonging to the subphylum Medusozoa of the phylum Cnidaria, which dates back half a billion years, now proliferate wildly in the oceans, benefiting from (1) warming waters; (2) their transportation to all ports of the world in the ballasts of ships; (3) the multiplication of hard surfaces at sea—piers, boat hulls, oil rigs, and garbage—all ideal nurseries for their eggs; (4) the decline of predatory species, such as sharks, tuna, and turtles (which die from eating pieces of plastic, mistaking them for jellyfish); (5) the extinction of competing species caused by overfishing, pollution, fertilizers, and habitat destruction; and (6) lower concentrations of dissolved oxygen in the sea. Jellyfish devour huge amounts of
plankton, depriving small fish of food, impacting the entire food chain. Tim Flannery (2013) illustrates well the behavior of one jellyfish species:

Mnemiopsis acts like a fox in a henhouse. After they gorge themselves, they continue to collect and kill prey. As far as the ecosystem goes, the result is the same whether the jellyfish digest the food or not: they go
on killing until there is nothing left. That can happen quickly.

By seizing niches deserted by locally extinct species or declining populations, jellyfish complete the work of man. Moreover, they exterminate the eggs and
larvae of other species. They have also proved to be creatures that have enormous adaptive advantages to the ocean’s new environmental conditions, characterized by waters that are more polluted, warmer, more acidic, and less
oxygenated, since their metabolism is exceptionally efficient. Lisa-ann Gershwin (2013), Director of the Australian Marine Stinger Advisory Services, issues a terrible warning:

We are creating a world more like the late Precambrian than the late 1800s—a world where jellyfish ruled the seas and organisms with shells didn’t exist. We are creating a world where
we humans may be soon unable to survive, or want to.
11.14 Plastic Kills and Travels Through the Food Web

Globally, it is estimated that over 100 million marine animals are killed each year by plastic waste. Plastic damages habitats, entangles wildlife, and causes injury via ingestion. Sarah
Gall and Richard Thompson (2015) reviewed and updated the current knowledge on the effects of plastic marine debris on marine organisms:

A total of 340 original publications were identified documenting encounters between marine debris and marine organisms. These [publications] reported encounters for a total of 693 species. 76.5% of all reports listed plastic amongst the debris types encountered by organisms making it the most commonly reported debris type. 92% of all encounters between individual organisms and debris were with plastic.

According to the IUCN, marine plastic pollution has affected at least 267 species and at least 17% of species affected by entanglement and ingestion were listed as threatened or near threatened. Moreover, leachate from plastics has been shown to cause acute toxicity in some planktonic species, such as the freshwater crustacean Daphnia magna and the copepod Nitocra spinipes (Bejgarn et al. 2015). It is now a well-established fact that, along with overfishing, ocean warming, acidification, eutrophication, and pollution caused by aquaculture, marine plastic pollution, predicted to increase
by an order of magnitude by 2025 (Jambeck et al. 2015), is among the relevant factors responsible for the current annihilation of marine life. Let us return for a moment to this problem, already covered in Chap. 4 (Sect. 4.9 Plastic in the Oceans), to examine its impact on biodiversity in the oceans. The French Research Institute for Exploration of the Sea (IFREMER) estimates that there are 150 million pieces of macrowaste (macrodéchets) in the bottom of the North Sea and more than 175 million of them in the northwest Mediterranean basin, with a total of about 750
million floating in the Mediterranean Sea as a whole (Rescan and Valo 2014). Like oil, its raw material, plastic floats, at least in most of its forms. As Andrés Cózar et al. (2014) claim:

Exposure of plastic objects on the surface waters to solar radiation results in their photodegradation, embrittlement, and fragmentation by wave action. (…) Persistent nano-scale particles may be generated during the weathering of plastic debris, although their abundance has not been quantified in ocean waters.

Each of these particles in this plastic soup retains its chemical characteristics and toxicity. Mammals, fish, birds, and
mollusks cannot digest these fragments or particles, which they involuntarily ingest or confuse with plankton, jellyfish, or other food sources. A study published in the Marine Pollution Bulletin (Lusher et al. 2012)
detected the presence of microplastics in the gastrointestinal tract of 36.5% of the 504 fish examined from ten different species in the English Channel. The fish were collected at a distance of 10 km from Plymouth and at a depth of
55 m. Alina Wieczorek et al. (2018) studied the presence of plastics in mesopelagic fishes from the Northwest Atlantic. The results are even more alarming: “Overall 73% of fish contained plastics in their stomachs with Gonostoma
denudatum having the highest frequency of occurrence (100%), followed by Serrivomer beanii (93%) and Lampanyctus macdonaldi (75%).” The ingestion of plastic by fish and other marine organisms may cause asphyxiation,

blockage of the digestive tract, perforation of the intestines, loss of nutrition, or a false sense of satiety. Philipp Schwabl et al. (2018) explain that:

Microplastic may harm via bioaccumulation (especially when the intestinal barrier is damaged) and can serve as a vector for
toxic chemicals or pathogens. Moreover, ingested plastic may affect intestinal villi, nutritional uptake and can induce hepatic stress.
Microplastics (ranging from 5 mm down to less than 1 μm in size) are now more numerous than larger fragments. They are infiltrating ecosystems and transferring their toxic components to tissues, as well as to the tiny organisms that ingest them.
Mark Anthony Browne et al. (2013) showed the impacts of these components on lugworms (Arenicola marina), a marine species of the phylum Annelida, exposed to sand containing a 5% concentration of microplastics presorbed
with pollutants (nonylphenol and phenanthrene) and additive chemicals (Triclosan and PBDE-47). The authors warn, more generally, that “as global microplastic contamination accelerates, our findings indicate that large
concentrations of microplastic and additives can harm ecophysiological functions performed by organisms.” Stephanie Wright et al. (2012) also underline that there are many other species at risk, as their eating behavior is similar
to that of A. marina, and conclude that “high concentrations of microplastics could induce suppressed feeding activity, prolonged gut residence times, inflammation and reduced energy reserves, impacting on growth, reproduction

and ultimately survival.” Microplastics can ultimately carry bacteria and algae to other regions of the ocean, causing invasions of species and imbalances
with unknown consequences to marine ecosystems.

Since plastic travels the food chain and has become ubiquitous in contemporary societies, it is not surprising that its components are now in human stool, as first detected by
Philipp Schwabl et al. (2018). In a pilot study, they identified the presence of 11 types of plastic in human stools: polypropylene (PP), polyurethane, polyethylene, polyvinyl chloride, polyethylene terephthalate (PET), polyamide
(nylon), polycarbonate, polymethylmethacrylate (acrylic), polyoxymethylene (acetal), and melamine formaldehyde. PP and PET were detected in 100% of the samples. From this pilot study, the authors conclude that “more than
50% of the world population might have microplastics in their stool.” For the authors, the two main sources of plastic ingestion are (1) ingestion of seafood correlated with microplastics content and (2) food contact materials
(packaging and processing).

11.15 Ocean Warming, Die-Offs, and Decline of Phytoplankton

More than 90% of the warming that has happened on Earth over the past 50 years has occurred in the ocean (Dahlman and Lindsey 2018). As seen in Chap. 7 (Sect. 7.3, “Between 1.1 °C and 1.5 °C and Accelerating,” Fig. 7.6), compared to the 1880–1920 period, global ocean warming reached 1 °C in 2016 and 0.8 °C in 2018. The linear trend of Ocean Heat Content (OHC) at a depth of 0–700 m during 1992–2015 shows a warming trend four times stronger than that of the 1960–1991 period (Cheng et al. 2017). The staggering scale of ocean warming and the more frequent and intense
marine heatwaves already have devastating consequences on ecosystems. For example, this warming is driving shoals away from the equator at a rate of about 50 km per decade. Thus, human populations in tropical countries, whose diet is particularly dependent on fish, will be the most quickly affected. As Daniel Pauly (Davidson 2017) states:

In temperate areas you will have the fish coming from a warmer area, and another one leaving. You’ll have a lot of transformation but they will actually—at least in terms of fishery— adapt. In the tropics you don’t have the replacement, you have only fish leaving.

But the problem also presents itself for fish in high latitudes, for which there are no colder places to go. In general, ocean warming is predicted to affect marine species through physiological stress, making them more susceptible to diseases. Ocean warming has also caused mass die-offs of fish and other marine species. Blobs of warm water are killing clams in the South Atlantic (Mooney and Muyskens 2019). Since 2013, a sea star wasting disease, tied to ocean warming, has affected more than 20 sea star species from Mexico to Alaska (Wagner 2014; Harvell et al. 2019;
Herscher 2019). About 62,000 dead or dying common murres (Uria aalge) washed ashore between summer 2015 and spring 2016 on beaches from California to Alaska. John Piatt et al. (2020) estimate that total mortality approached 1 million of these sea birds. According to the authors, these events were ultimately caused by the most powerful marine heatwave on record that persisted through 2014–2016 and created an enormous volume of ocean water (the “Blob”) from California to Alaska with temperatures that exceeded average by 2–3 standard deviations. Overall,
die-offs or mass mortality events (MMEs) are becoming increasingly common among species of fish and marine invertebrates over the last 70 years, although it remains unclear whether the increase in the occurrence of MMEs represents a true pattern or simply a perceived increase (Fey et al. 2015).

In addition, rising sea levels lead to the progressive decline of beaches, endangering the species that live on or depend on them to reproduce. According to Shaye Wolf of the Center for Biological Diversity in San Francisco, the loss of beaches threatens 233 species protected by the Endangered Species Act (ESA) in the USA. These include the large loggerhead turtle (Caretta caretta), which lays its eggs on Florida's beaches, and the Hawaiian monk seal (Neomonachus schauinslandi) (Schneider 2015). The same fate is reserved for five turtle species that lay eggs on Brazilian
beaches, two of which are endangered and all of which are considered vulnerable by the IUCN.

We saw in the preceding section that plastic pollution is poisoning zooplankton species such as Daphnia magna and Nitocra spinipes. But warming waters, along with other factors, also threaten phytoplankton. Daniel Boyce, Marlon Lewis, and Boris Worm (2010) were the first to warn of a long-term downward trend in phytoplankton, a decline that is now occurring at alarming rates:

We observe declines in eight out of ten ocean regions, and estimate a global rate of decline of
approximately 1% of the global median per year. Our analyses further reveal interannual to decadal phytoplankton fluctuations superimposed on long-term trends. These
fluctuations are strongly correlated with basin-scale climate indices, whereas long-term declining trends are related to increasing sea surface temperatures. We conclude that global phytoplankton concentration
has declined over the past century.
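
To make the scale of that figure concrete, the following is a minimal back-of-envelope sketch in Python. It simply compounds an assumed constant decline of about 1% of the global median per year over several horizons, purely for illustration; Boyce and colleagues themselves stress that interannual and decadal fluctuations are superimposed on the long-term trend.

# Illustrative compounding of an approximately 1% annual decline in the
# global median phytoplankton concentration (figure from Boyce et al. 2010).
RATE = 0.01  # assumed constant annual rate of decline, for illustration only

for years in (10, 50, 100):
    remaining = (1 - RATE) ** years
    print(f"After {years:>3} years: {remaining:.2f} of the initial median remains "
          f"({1 - remaining:.0%} lower)")

Compounded in this way, even a seemingly modest 1% annual decline would leave only about one third of the initial median concentration after a century.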

The decline in phytoplankton observed by the authors is so threatening to life on the planet that this finding gave rise to intense discussion, summarized by David Cohen (2010). According to
Kevin Friedland from NOAA, new measurements taken in 2013 confirm the lowest ever recorded levels of these organisms in the North Atlantic (Fischetti 2013; Spross 2013). Looking at satellite data collected between 1997 and
2009, a study coordinated by Mati Kahru (2011) links Arctic warming and melting to the hastening of phytoplankton’s maximum spring bloom by up to 50 days in 11% of the Arctic ocean’s area. This puts the phytoplankton out of
sync with the reproductive cycles of various marine mammals that feed on it. In addition, a study published by Alexandra Lewandowska et al. (2014) reaffirms the phenomenon of phytoplankton decline:

Recently, there has been growing evidence that global phytoplankton biomass and productivity are changing over time. Despite increasing trends in some regions, most observations and physical models suggest that at large scales, average phytoplankton biomass and productivity are declining, a trend which is predicted to continue over the coming century. While the exact magnitude of past and possible future phytoplankton declines is uncertain, there is broad agreement across these studies that marine phytoplankton biomass declines constitute a major component of global change in the oceans. Multiple lines of evidence suggest that changes in phytoplankton biomass and productivity are related to ocean warming.
In September 2015, Cecile S. Rousseaux and Watson W. Gregg published an analysis of data obtained through NASA's satellite observations. The authors confirm the decrease in diatoms, the largest and most common form of phytoplankton: "We assessed the trends in phytoplankton composition (diatoms, cyanobacteria, coccolithophores and chlorophytes) at a global scale for the period 1998–2012. (…) We found a significant global decline in diatoms (−1.22% y⁻¹)." If this decline persists, it could be one of the most destructive factors, reducing biodiversity to levels
never faced by our species, levels that could produce what we call, in the next chapter, hypobiosphere.

[[REFERENCES OMITTED]]

[[CHAPTER 12 BEGINS]]
The Anthropocene Working Group of the Subcommission on Quaternary Stratigraphy defines the Anthropocene as1:

[…] the present geological time interval, in which many conditions and processes on Earth are profoundly altered by human impact. This impact has intensified significantly since the onset of industrialization, taking us out of the Earth System state typical of the Holocene Epoch that post-dates the last glaciation. (…) Phenomena associated with the Anthropocene include: an order-of-magnitude increase in erosion and sediment transport associated with urbanization and agriculture; marked and abrupt anthropogenic
perturbations of the cycles of elements such as carbon, nitrogen, phosphorus and various metals together with new chemical compounds; environmental changes generated by these perturbations, including global warming, sea-level rise, ocean acidification and spreading oceanic ‘dead zones’; rapid changes in the biosphere both on land and in the sea, as a result of habitat loss, predation, explosion of domestic animal populations and species invasions; and the proliferation and global dispersion of many new ‘minerals’ and
‘rocks’ including concrete, fly ash and plastics, and the myriad ‘technofossils’ produced from these and other materials.

James Syvitski (2012) rightly observed that “the concept of the Anthropocene has manifested itself in the scientific literature for over a century under various guises.” In fact, the underlying idea that man’s actions shape the Earth system more decisively than non-anthropic forces is more than two centuries old. It dates back to the late eighteenth century, a time of lively reflection on the relationship between man and nature. Its history must be remembered here with its essential milestones; otherwise, we run the risk of not understanding the intellectual foundation of the
concept of the Anthropocene. This concept culminates in one of the richest chapters in the history of ideas of the contemporary age (Lorius and Carpentier 2010; Bonneuil and Fressoz 2013) and, at the same time, in the most crucially decisive chapter of human experience on this planet.

In 1780, in his book, Époques de la nature and, more precisely, in the seventh and last part of the book titled Lorsque la puissance de l’homme a fécondé celle de la Nature (II, 184–186), Buffon notes that “the whole face of the Earth today bears the mark of man’s power.” But in claiming the superiority of a nature “fertilized” by man over a “brute” nature, he still understands human omnipresence as a benevolent force:

It is only about thirty centuries ago that man’s power merged with that of nature and extended over most of the Earth; the treasures of its fertility, hitherto hidden, were revealed by man. (...) Finally, the whole face of the Earth today bears the mark of man’s power, which, although subordinate to that of nature, has often done more than she did, or has, at least, made her wonderfully fruitful, for it is with the help of our hands that nature has developed in all its extension (…) Compare brute nature with cultivated nature (…).

This idea of the superiority of cultivated nature over brute nature rests on a philosophical tradition that was well described a century before Buffon in the conclusion of Leibniz’s De rerum originatione radicali (On the Ultimate Origination of Things, 1697, paragraph 16), where one reads:

It must be recognized that there is a perpetual and most free progress of the whole universe towards a consummation of the universal beauty and perfection of the works of God, so that it is always advancing towards greater cultivation. Just as now a large part of our earth has received cultivation, and will receive it more and more.

But already at the end of the eighteenth century and in the early nineteenth century, diagnoses quite different from those of Leibniz and Buffon, especially on man's disastrous impact on forests, were beginning to emerge from the pen of naturalists such as Lamarck (1820), José Bonifácio de Andrada e Silva (Padua 2002), Alexander von Humboldt, Dietrich Brandis, and George Perkins Marsh, as well as Gifford Pinchot in the early twentieth century. Due to the immense breadth of his intellectual capacity, Humboldt is perhaps the most appropriate starting point for a prehistory of
the Anthropocene, at least in the realm of modern science. As Andrea Wulf (2015) clearly points out, he seems to have been the first to scientifically conceive of the Earth as a “natural whole” animated by inward forces to the point that he had thought (more than a century before James Lovelock) to give the title Gäa to the volumes that he would eventually title Cosmos. Of even greater importance to the Anthropocene issue, Humboldt was, along with Lamarck, the first to realize that human activity had a decisive impact on ecosystems. From his analysis of the devastation
of forests by colonial plantations at Lake Valencia in Venezuela in 1800 (Personal Narrative 1814–1829, vol. IV, p. 140), “Humboldt became the first scientist to talk about harmful human-induced climate change” (Wulf 2015). As Wulf also states, “he warned that humans were meddling with the climate and that this could have an unforeseeable impact on future generations.”

A brief historical retrospect of the ideas that underlie the Anthropocene concept since the nineteenth century was proposed by Steffen et al. (2011). The authors’ starting point is neither Lamarck nor Humboldt, but Marsh who was, in fact, among the first to realize that human action on the planet had become a threat to life. Hence, in The Earth as Modified by Human Action (1868/1874), a largely rewritten version of Man and Nature: Or, Physical Geography as Modified by Human Action (1864), he had a goal that was diametrically opposed to Buffon’s défense et
illustration of human power over nature. Starting from the preface to the first edition, Marsh spells out his concerns:

The object of the present volume is: to indicate the character and, approximately, the extent of the changes produced by human action in the physical conditions of the globe we inhabit; to point out the dangers of imprudence and the necessity of caution in all operations which, on a large scale, interfere with the spontaneous arrangements of the organic or the inorganic world; to suggest the possibility and the importance of the restoration of disturbed harmonies and the material improvement of waste and exhausted regions;
and, incidentally, to illustrate the doctrine that man is, in both kind and degree, a power of a higher order than any of the other forms of animated life, which, like him, are nourished at the table of bounteous nature.

In the same period, that is, between 1871 and 1873, abbot Antonio Stoppani (1824–1891) coined the term Anthropozoic; although not devoid of religious connotations, it clearly referred to human interference in the geological structures of the planet:

It is in this sense, precisely, that I do not hesitate in proclaiming the Anthropozoic era. The creation of man constitutes the introduction into nature of a new element with a strength by no means known to ancient worlds. And, mind this, that I am talking about physical worlds, since geology is the history of the planet and not, indeed, of intellect and morality. (…) This creature, absolutely new in itself, is, to the physical world, a new element, a new telluric force that for its strength and universality does not pale in the face of the
greatest forces of the globe.

In 1896, Svante Arrhenius (1859–1927) first confronted the difficult question of the doubling of CO2, calculating by how much a given change in the atmospheric concentrations of gases that trap infrared radiation would alter the planet’s average temperature. He states, for example, that a 40% increase or decrease in the atmospheric abundance of the trace gas CO2 might trigger the glacial advances and retreats, an estimate similar to that of the IPCC (Le Treut and Somerville 2007 IPCC AR4). He also stated that “the temperature of the Arctic regions would rise about 8 °C
or 9 °C, if the [atmospheric concentrations of] CO2 increased 2.5 to 3 times its present value.” This estimate is also probably not far from the truth, given the phenomenon known as Arctic amplification, briefly discussed in Chaps. 7 and 8. In 1917, Alexander Graham Bell finally coins the term “greenhouse effect” to describe the impact of CO2 concentrations on planetary temperatures. In the early 1920s, Vladimir I. Vernadsky (1926) introduced the idea that just as the biosphere had transformed the geosphere, the emergence of human knowledge (which Teilhard de
Chardin (1923) and Édouard Le Roy would name noosphere) was transforming the biosphere (Clark et al. 2005).
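
The calculation Arrhenius attempted is usually summarized today by a logarithmic relation; the following is a schematic modern rendering (not his original 1896 derivation), in which C0 is a reference CO2 concentration, C the altered concentration, and S the equilibrium warming per doubling:

\[ \Delta T \;\approx\; S \, \frac{\ln (C/C_0)}{\ln 2} \]

With S on the order of 3–4 °C, a doubling of CO2 gives roughly that much warming, while a comparable reduction gives a similar cooling, which is the symmetry Arrhenius invoked when linking changes in atmospheric CO2 to glacial advances and retreats.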

These pioneering scientific contributions, from 1860 to 1920, go hand in hand (especially in England and the USA) with the first philosophical and moral reactions to industrialization and urbanization by artists and intellectuals such as John Ruskin, George Bernard Shaw (Dukore 2014), Henry Thoreau, and John Muir.2 They also go hand in hand with the first legal initiatives and environmental organizations, such as the Sea Birds Preservation Act (1869), considered the first conservationist law in England, the Plumage League (1889) in defense of birds and their habitats, the
Coal Smoke Abatement Society (1898), the Sierra Club (1892), the Rainforest Action Network (1895), the Ecological Society of America (1915), the Committee for the Preservation of Natural Conditions (1917), and the Save the Redwoods League (1918), which were mobilized especially to safeguard sequoias.

The environmental awareness strongly emerging in this period will wane with Europe sinking into World War I and with the extreme political tensions that would lead to World War II. The idea that man’s power could be equated to the forces of nature takes on—with the carnage of World War I—the characteristics that Freud had attributed to it: humans could, thenceforth, use their growing technological control over these forces of nature to give free rein to their inherent aggressive drives,3 which now threaten to destroy us. In 1930, in the conclusion of his Civilization
and Its Discontents, Freud expresses this fear:

The fateful question for the human species seems to me to be whether and to what extent their cultural development will succeed in mastering the disturbance of their communal life by the human instinct of aggression and self-destruction. It may be that in this respect precisely the present time deserves a special interest. Men have gained control over the forces of nature to such an extent that with their help they would have no difficulty in exterminating one another to the last man. They know this, and hence comes a large
part of their current unrest, their unhappiness and their mood of anxiety.

12.1 Cold War, the Atomic Age, and the Industrial and Agrochemical Machine
After World War II, industrial capitalism reaches its Golden Age (1945–1973). In France, this period is called Les Trente Glorieuses (The Glorious Thirty): the “economic miracles” of Germany, Italy, and Japan take place, the USA achieves complete economic hegemony, the socialist bloc under Soviet leadership maintains an equally strong economic performance, and economic growth rates remain generally very high, even in some countries with late and incipient industrialization. But this widespread rush of economic activity is precisely what produces its inevitable
counterpart: the first major pollution crises. The tension between economic benefits and environmental harms—of a civilization model based on the accumulation of surplus and capital—gradually becomes inevitable. The perception of an environmental threat begins to rival the threat of total destruction by war that was forecast by Freud well before Hiroshima, as seen.

In 1973, at the end of his life, Arnold Toynbee would attempt, in Mankind and Mother Earth (1976, posthumous), a new synthesis of the history of civilization: new because he would now symptomatically place it under the man–biosphere antinomy. Indeed, the British historian’s approach is not distant from the Freudian analysis of humanity’s (self)destructive drives. It only emphasizes, half a century after Freud, the environmental consequences of this destructiveness, anticipating the prospect of an uninhabitable Earth which is later taken up in David Wallace-Wells’
famous article and book (2017, 2019):

Mankind’s material power has now increased to a degree at which it could make the biosphere uninhabitable and will, in fact, produce this suicidal result within a foreseeable period of time if the human population of the globe does not now take prompt and vigorous concerted action to check the pollution and the spoliation that are being inflicted on the biosphere by short-sighted human greed.

Between Freud’s late work (1930) and Toynbee’s posthumous (1976) one, the great expansion of capitalism had unleashed, alongside the big Cold War crises, immense impacts on nature, which were beginning to generate counter-impacts. The impacts of World War II and then the Cold War on the degradation of the biosphere were considerable. Jan and Mat Zalasiewicz (2015) provide some examples of this:

After both world wars, the exhausted armies were left with millions of unused bombs, including chemical weapons. There was neither the time nor the resources to make them safe; most were simply shipped out to sea and thrown overboard. There are over a hundred known weapons dumps in the seas around north-west Europe alone. (…) Even larger dumping grounds were improvised elsewhere in the world. (…) Around a trillion [bullets] have been fired since the beginning of the second world war—that’s a couple of
thousand for every square kilometre of Earth’s surface, both land and sea.

According to the same authors, between 1965 and 1971, Vietnam, a country smaller than Germany, was bombarded with twice as much high explosive as the US forces used during the whole of World War II: "The region was pulverized by some 26 million explosions, with the green Mekong delta turning into 'grey porridge,' as one soldier put it." Vietnam and other Southeast Asian countries are undoubtedly among the main victims of the Cold War. But its most systemic victim was the biosphere, obviously including humanity as a whole, primarily as a result of the arms race.
Between 1945 and 2013, there were 2421 atomic tests of which over 500 occurred in the atmosphere in the 1950s and 1960s (Jamieson 2014).4 In the 1950s, scientists began warning of their impacts. Hence, after October 10, 1963, atomic tests generally happened underground, as established by the Treaty Banning Nuclear Weapon Tests in the Atmosphere, in Outer Space, and Under Water, which entered into force on that day. Nevertheless, France and China continued to carry out nuclear tests in the atmosphere until 1974 (France) and 1980 (China).

The full assessment of the effects of these tests on life on the planet will only be achieved by future generations. Among its most feared and delayed effects is the impact of 30 tons of plutonium-239 waste with a half-life of 24,000 years, abandoned by the Americans on the Pacific Islands. No less than “67 nuclear and atmospheric bombs were detonated on Enewetak and Bikini between 1946 and 1958—an explosive yield equivalent to 1.6 Hiroshima bombs detonated every day over the course of 12 years,” writes Jose et al. (2015). With rising sea levels, this radioactive
waste, stored since 1979 in the Marshall Islands in a concrete dome called Runit Dome, will be submerged, likely causing it to leak into the ocean. In fact, as these three authors warn, “underground, radioactive waste has already started to leach out of the crater: according to a 2013 report by the US Department of Energy, soil around the dome is already more contaminated than its contents.” In this regard, they quote Michael Gerrard, Director of the Sabin Center for Climate Change Law at Columbia University, who visited the dome in 2010:

Runit Dome represents a tragic confluence of nuclear testing and climate change. It resulted from US nuclear testing and the leaving behind of large quantities of plutonium. Now it has been gradually submerged as result of sea level rise from greenhouse gas emissions by industrial countries led by the United States.

In short, if there are many historical reasons for the birth of the term Anthropocene, the main one is the fact that the geological strata and the Earth system in general have been profoundly shaped—and will be even more so in the twenty-first century— by numerous consequences of the world wars that ravaged the twentieth century. These range from the atomic weapon and the various regional wars of the Cold War to the scorched-earth war against the biosphere, driven by the deadliest of weapons: the fossil fuel and agrochemical machine of globalized capitalism.

12.2 Anthropization: A Large-Scale Geophysical Experiment

In 1939, in an article titled “The Composition of the Atmosphere through the Ages,” Guy Stewart Callendar bore witness to the emerging awareness of the anthropogenic character of climate change. As James Fleming (2007) notes, the article “contains an early statement of the now-familiar claim that humanity has become an agent of global change.” Although they date back 80 years, Callendar’s words would be quite appropriate for a text from 2020 on the Anthropocene:

It is a commonplace that man is able to speed up the processes of Nature, and he has now plunged heavily into her slow-moving carbon cycle, throwing (…) carbon dioxide into the air each minute. This great stream of gas results from the combustion of fossil carbon (coal, oil, peat, etc.), and it appears to be much greater than the natural rate of fixation.

Callendar’s article later gave rise to the term Callendar effect, which refers to the perception of the direct and well-quantified correlation between anthropogenic GHG emissions and structural changes in the planet’s climate. But Arrhenius and Callendar still thought that this planetary warming would be beneficial, delaying a “return of the deadly glaciers.” In addition, this article came out on the eve of the war, and its wording naturally went unnoticed in the face of much more immediate and tangible dangers. War, killing on an industrial scale, the so-called trivialization of
evil on a scale never before imagined by man, concentration camps, and Hiroshima submerged everything. In the 1950s, two articles by Gilbert Plass (1956, 1959) brought up the problem of global warming once more. Curiously, the impact of CO2 atmospheric concentrations on the climate was still considered no more than a scientific “theory.” In fact, one of Plass’ papers (1956) was titled “The Carbon Dioxide Theory of Climatic Change.” Plass calculated in this article that “the average surface temperature of the earth increases 3.6°C if the CO2 concentration in the
atmosphere is doubled.” In any case, the relationship between the CO2 concentration in the atmosphere and the rise of the Earth’s surface temperature began to be seen in a different light (see also Chap. 8). In the 1950s, the perception of man’s potentially dangerous interference in the climate merges with a new awareness of widespread pollution, destruction of nature, and the ills of industrial society. Many environmental disasters mobilize people’s awareness in England and in the USA. These disasters include the Great Smog of London, the Cuyahoga River Fire in Ohio
(both in 1952), the return to deforestation caused by the US housing boom, projects to flood and build dams in the Grand Canyon (1963), and the explosion of a Union Oil platform off California’s coast in 1969 (polluting its sea and beaches with 16 million liters of oil, the largest event of its kind at that time). Climate change also seems to take public perception by storm. In 1953, the famous magazines, Popular Mechanics and Time, make room for this topic. In a Time article titled “The invisible blanket,” Gilbert Plass predicts that:

At its present rate of increase, the CO2 in the atmosphere will raise the earth's average temperature 1.5° Fahrenheit [0.8 °C] each 100 years (...) For centuries to come, if man's industrial growth continues, the earth's climate will continue to grow warmer.

In 1957, Roger Revelle and Hans Suess conclude an article on the increase in atmospheric CO2 concentrations with a paragraph that has become, perhaps, the best known hallmark in the contemporary history of global warming science:

Thus human beings are now carrying out a large scale geophysical experiment of a kind that could not have happened in the past nor be reproduced in the future. Within a few centuries we are returning to the atmosphere and oceans the concentrated organic carbon stored in sedimentary rocks over hundreds of millions of years.

The following year, in 1958, an educational film called The Unchained Goddess, produced by Frank Capra, the great Italian-American filmmaker who was also a chemical engineer, predicts that atmospheric warming and glacial melting, both caused by human activity, would be calamitous:

Due to our release through factories and automobiles every year of more than six billion tons of carbon dioxide, our atmosphere seems to be getting warmer. (…) A few degrees rise in the Earth’s temperature would melt the polar ice caps. And if this happens, an inland sea would fill a good portion of the Mississippi valley.

This rising awareness of the harmful effects of pollution and GHG emissions is reflected in the establishment (between 1947 and 1971) of the eight most influential US environmental NGOs: Defenders of Wildlife (1947), The Nature Conservancy (1950), WWF (1961), Environmental Defense Fund (1967), Friends of the Earth (1969), International Fund for Animal Welfare (IFAW) (1969), Natural Resources Defense Council (NRDC) (1970), and Greenpeace (1971) (Hunter 2002). Another reflection of this awakening is the publication of Silent Spring by Rachel Carson (1907–1964)
in 1962, a book that is, as we know, a major milestone in the history of environmental awareness. Linda Lear, her biographer, recalls the context in which the book appears5:

Silent Spring contained the kernel of social revolution. Carson wrote at a time of new affluence and intense social conformity. The cold war, with its climate of suspicion and intolerance, was at its zenith. The chemical industry, one of the chief beneficiaries of postwar technology, was also one of the chief authors of the nation’s prosperity. DDT enabled the conquest of insect pests in agriculture and of insect-borne disease just as surely as the atomic bomb destroyed America’s military enemies and dramatically altered the balance
of power between humans and nature.

Silent Spring, in fact, has the impact of “another” bomb, this time affecting American and even global consciousness. It is the first book on environmental science to be discussed at a press conference by President John F. Kennedy and to remain on the bestseller list for a long time. It is not surprising, thus, that the establishment of the Clean Air Act, a federal air pollution monitoring and control program, happened in 1963. Between 1962 and 1966, the book was translated (in chronological order) into German, French, Swedish, Danish, Dutch, Finnish, Italian, Spanish,
Portuguese, Japanese, Icelandic, and Norwegian. It was later translated into Chinese (1979), Thai (1982), Korean (1995), and Turkish (2004). By warning of the death of birds and other animals caused by the pesticide DDT, Carson emphasized—in the very same year as the Cuban missile crisis—that the risks of human annihilation no longer stemmed only from a nuclear winter, but also from a silent spring. More than nuclear war, one should, henceforth, fear the less noisy, but no less destructive, war against nature. This is because (notwithstanding the fear expressed by
Freud in 1930, 15 years before the atomic bomb) a nuclear winter could be avoided, but not a spring without birds—an allusion to a dead nature—if man did not learn to contain his (self)destructiveness in relation to nature.

In retrospect, it is becoming clearer that the 1960s can conveniently be called the Rachel Carson decade. At the end of the decade, René Dubos, a French Biologist naturalized in the USA, launches another book that is emblematic of those years, So Human an Animal. It begins with a statement that reflects the indignation felt in that decade:

This book should have been written with anger. I should be expressing in the strongest possible terms my anguish at seeing so many human and natural values spoiled or destroyed in affluent societies, as well as my indignation at the failure of the scientific community to organize a systematic effort against the desecration of life and nature. (…) The most hopeful sign for the future is the attempt by the rebellious to reject our social values. (…) As long as there are rebels in our midst, there is reason to hope that our societies can
be saved.

Perhaps influenced by Carson and Dubos, and certainly under the impact of the Vietnam War and the 1968 annus mirabilis, the Union of Concerned Scientists (UCS) was launched at MIT. Its inaugural document was initially signed by 50 senior scientists, including the heads of the biology, chemistry, and physics departments, and then endorsed by other scientists from this renowned center of scientific activity in the USA. The document redefines the meaning of scientific activity. It is no longer just about seeking to broaden knowledge on nature; it is about understanding
science now as a critical activity. It proposes, thenceforth, “to devise means for turning research applications away from the present emphasis on military technology toward the solution of pressing environmental and social problems.”6

12.2.1 “The Masters of the Apocalypse”

Although it occurred simultaneously on both sides of the northern Atlantic, the awareness of human arrogance and human impact on the environment had less momentum in continental Europe. As Hicham-Stéphane Afeissa (2009) points out, it is not coincidental that, until recently, European research on environmental issues seemed to suffer from a significant deficit in relation to the USA. But in addition to having less momentum, the awakening of an environmental consciousness in Europe had different characteristics. Contrary to the US autistic complex of being the
embodiment of good, twentieth-century Europe gasped under the weight of bad conscience: the self-destruction caused by the two wars; the unconditional political, economic, and ideological capitulation to the USA; the atrocities of colonization and decolonization; and genocides. Moreover, it was the main zone of friction between the spheres of influence of the so-called superpowers (and, thus, the most plausible scenario of a nuclear hecatomb in the event of a cold war slippage), making European public opinion and intelligentsia more sensitive to Hiroshima than to
environmental disasters. Therefore, reflection on the ecological question in the Old World emerges slowly from a meditation on the retreat of thought in the post-genocide era and the new precariousness of the human condition in the nuclear age. No one better than Michel Serres (Éclaircissements 1992) expresses the advent of this seismic tremor in the consciousness of European scientists and philosophers: “Since the atomic bomb, it had become urgent to rethink scientific optimism. I ask my readers to hear the explosion of this problem in every page of my books.
Hiroshima remains the sole object of my philosophy.” In fact, the famous sentence from the Bhagavad-Gita, “Now I am become Death, the destroyer of worlds,” muttered by J. Robert Oppenheimer on July 16, 1945, in the face of the explosion of “his” bomb in the Los Alamos desert echoed paradoxically more in Europe than in the USA: in the writings of Bertrand Russell, Einstein, Karl Jaspers, Friedrich Dürrenmatt (in his grim comedy, Die Physiker, 1962), and, especially, Günther Anders, whose later work is strongly devoted to reflecting on the atomic bomb. In “The Bomb
and the Roots of Our Blindness toward the Apocalypse” (essay published in The Obsolescence of Humankind 1956), Günther Anders spells out the essence of man’s “existential situation” (to put it in the language of that time):

If something in the consciousness of men today has the value of Absolute or Infinity, it is no longer the power of God or the power of nature, not even the alleged powers of morality or culture: it is our own power. Creation ex nihilo, once a manifestation of omnipotence, has been replaced by the opposite power: the power to annihilate, to reduce to nothing—and that power is in our hands. We really have gained the omnipotence that we had been yearning for so long, with Promethean spirit, albeit in a different form to what
we hoped for. Given that we possess the strength to prepare each other’s end, we are the masters of the Apocalypse. We are infinity.

12.2.2 From the Nuclear to the Ecological

Today, this paragraph refers not only to the nearly 23 thousand nuclear warheads in the hands of 8 countries, an arsenal of destructive power 200 thousand times larger than the bomb dropped in Hiroshima (Sidel and Levy 2014),7 but to the continuous and increasing destructive action of man over ecosystems in the Anthropocene. It could appear as an epigraph in the Red List of Threatened Species first published in 1963 by the International Union for Conservation of Nature (IUCN), an institution born in Europe immediately after the war.

The transition from nuclear to ecological does not mean, however, just a change in subject or a broadening of its scope. Unlike a nuclear catastrophe, the ecological threat does not become a catastrophe because of a war event or a hamartia, a tragic flaw, but due to a (relatively) peaceful economic process, generally regarded (even today) as beneficial and even essential; one in which the shifts in phases toward collapse were still, in those years, almost imperceptible in their overall configuration. This difference between nuclear and ecological catastrophes was highlighted
by Hans Jonas in 1979 in The Imperative of Responsibility and described precisely in a preface to the English edition of this work (1984):

Lately, the other side of the triumphal advance [of modern technology] has begun to show its face, disturbing the euphoria of success with threats that are as novel as its welcomed fruits. Not counting the insanity of a sudden, suicidal atomic holocaust, which sane fear can avoid with relative ease, it is the slow, long-term, cumulative—the peaceful and constructive use of worldwide technological power, a use in which all of us collaborate as captive beneficiaries through rising production, consumption, and sheer population
growth—that poses threats much harder to counter.

12.2.3 1972–2002: The Final Formulation of the Multiauthored Concept of the Anthropocene

In the abovementioned text, Hans Jonas summarizes the slow awareness of the advent of the Anthropocene in Europe. To a large extent, the notion of the Anthropocene, despite its technicality and its pertaining strictly to the realm of stratigraphy, would not have been conceivable outside of the sphere of critical thinking which gained momentum in the 1960s. This period was rife with philosophical insurgency and political action, and it also saw the emergence of figures such as Barbara Ward and René Dubos, both of whom drafted the report commissioned by Maurice Strong
for the seminal 1972 United Nations Conference on the Human Environment. This report, titled Only One Earth: The Care and Maintenance of a Small Planet, brought together the contributions of 152 experts from 58 countries, resulting in the 26 principles that make up the Stockholm Declaration and in the creation of UNEP. The first of these principles clearly advances the central idea of a new geological epoch that is conditioned by human action as much as (or more than) by non-anthropic variables:

In the long and tortuous evolution of the human race on this planet a stage has been reached when, through the rapid acceleration of science and technology, man has acquired the power to transform his environment in countless ways and on an unprecedented scale.

These precedents must be kept in mind so as not to confine the concept of the Anthropocene to the narrow limits of scientific terminology. Indeed, while the International Commission on Stratigraphy (ICS) is still debating its official adoption, this concept is not limited to a proposed revision of stratigraphic nomenclature. It is a collectively formulated idea, circulating through the Zeitgeist of the 1970s–1990s. As James Syvitski (2012) states, “in the 1990s the term Anthroposphere was widely used in the Chinese science literature under the influence of Chen Zhirong of the
Institute of Geology and Geophysics at the Chinese Academy of Sciences in Beijing." In the West, the birth of the term Anthropocene is credited mainly to the biologist Eugene F. Stoermer and the science writer Andrew C. Revkin, as seen in Revkin (2011) and in a historical study of the term produced by Steffen et al. (2011):

Eugene F. Stoermer wrote: ‘I began using the term “anthropocene” in the 1980s, but never formalized it until Paul contacted me’. About this time other authors were exploring the concept of the Anthropocene, although not using the term. More curiously, a popular book about Global Warming, published in 1992 by Andrew C. Revkin, contained the following prophetic words: ‘Perhaps earth scientists of the future will name this new post-Holocene period for its causative element—for us. We are entering an age that might
someday be referred to as, say, the Anthrocene [sic]. After all, it is a geological age of our own making’. Perhaps many readers ignored the minor linguistic difference and have read the new term as Anthro(po)cene!

Indeed, it is from the seminal ideas of Humboldt, Lamarck, Marsh, Stoppani, Vernadsky, and Teilhard de Chardin, but not least from the emergent reflection of biologists, chemists, meteorologists, environmentalists, anthropologists, and philosophers of their generation,8 that Crutzen and Stoermer proposed, in the 2000 congress of the International Geosphere–Biosphere Program (IGBP) in Cuernavaca and then in a 2002 article by Crutzen alone, the recognition of a new geological epoch, the Anthropocene. Considering the combination of biogeophysical forces that shape
the Earth system, this epoch is characterized by the fact that anthropic action prevails over forces generated by nonhuman factors. “It seems appropriate,” Crutzen wrote in 2002, “to assign the term ‘Anthropocene’ to the present, in many ways human-dominated, geological epoch, supplementing the Holocene—the warm period of the past 10–12 millennia.”

12.2.4 The Great Acceleration and Other Markers of Anthropic Interference

According to Paul Crutzen, the birth date of this new geological epoch could be conventionally fixed in 1784, the year of James Watt’s steam engine patent and also the birth of the atmospheric carbonization era. Jan Zalasiewicz (2014), Director of the Anthropocene Working Group of the International Commission on Stratigraphy (ICS), prefers to date the beginning of the Anthropocene from the second half of the twentieth century, choosing as his criteria the increase in greenhouse gas emissions and pollution, as well as the inscription on rocks of the radioactivity emitted
by the detonation of atomic bombs in open air, among other factors. The proposal put forth by 25 researchers and coordinated by Zalasiewicz to date the Anthropocene from 1950 onward was reported by Richard Monastersky in 2015:

These radionuclides, such as long-lived plutonium-239, appeared at much the same time as many other large-scale changes wrought by humans in the years immediately following the Second World War. Fertilizer started to be mass produced, for instance, which doubled the amount of reactive nitrogen in the environment, and the amount of carbon dioxide in the atmosphere started to surge. New plastics spread around the globe, and a rise in global commerce carried invasive animal and plant species between continents.
Furthermore, people were increasingly migrating from rural areas to urban centres, feeding the growth of megacities. This time has been called the Great Acceleration.

The iconic set of 24 graphs published by Will Steffen and colleagues in 2015 cemented the idea of the Great Acceleration, which the International Geosphere–Biosphere Programme (IGBP—Global Change, 1986–2015) summarized in these terms:

The second half of the 20th Century is unique in the history of human existence. Many human activities reached take-off points sometime in the 20th Century and sharply accelerated towards the end of the century. The last 60 years have without doubt seen the most profound transformation of the human relationship with the natural world in the history of humankind. The effects of the accelerating human changes are now clearly discernible at the Earth system level. Many key indicators of the functioning of the Earth system
are now showing responses that are, at least in part, driven by the changing human imprint on the planet. The human imprint influences all components of the global environment—oceans, coastal zone, atmosphere, and land.

The Anthropocene concept expresses the preponderance of anthropic forces in relation to the other forces that intervene in shaping the Earth’s system. In the previous 11 chapters, we have seen how these forces are causing
deforestation, deep imbalances in the climate system, and mass extinctions of plant and animal species on a growing scale, one that is already comparable to the previous five major extinctions. Future paleontologists—assuming
that a future remains for us—will notice the sudden disappearance of fossil records of an uncountable number of species. Instead of plant and animal fossils, the traces left by anthropic forces on terrestrial and underwater rocks
will be the signatures of isotopes such as plutonium-239, mentioned by Zalasiewicz, as well as numerous other markers, such as the various forms of pollution by fossil fuels, different explosives, POPs, concrete, plastic, aluminum,
fertilizers, pesticides, and other industrial waste.

Ugo Bardi's book, Extracted: How the Quest for Mineral Wealth Is Plundering the Planet (2014), provides some insight into this. Mining operations extract 2 billion tons of iron and 15 million tons of copper globally per year. The USA alone extracts 3 billion tons of ore a year from its territory, and according to the US Geological Survey, the removal of sand and gravel for construction on a global scale can exceed 15 billion tons a year. In relation to rocks and earth alone, humans remove, per year, the equivalent of two Mount Fujis (with an altitude of 3776 m, it is the highest mountain in the Japanese archipelago). According to Jan Zalasiewicz, mining operations dig up the Earth's crust to depths that reach several thousand meters (e.g., 5 km deep in a gold mine in South Africa). And in its extraction processes alone, the oil industry drilled about 5 million kilometers underground, which, as the scholar points out, is equivalent to the total length of man-made highways on the planet.9
In 2000, the burning of fossil fuels emitted about 160 Tg per year of sulfur dioxide (SO2) into the atmosphere, which is more than the sum emitted by all natural sources, and more synthetic nitrogen for fertilizers was produced and applied to agriculture than is naturally fixed by all the other terrestrial processes added together. Furthermore, more than half of all accessible freshwater resources had already been used by humans, and 35% of the mangroves in coastal areas had been lost. At least 50% of non-iced land surface had already been transformed by human action in
2000, and, over the last century, the extent of land occupied by agriculture has doubled, to the detriment of forests. Anthropic action interferes decisively not only “externally” in vegetation cover, in the behavior of physical forces, and in the extinction of species but also inside organisms, infiltrating into the cellular tissues of countless species and altering their metabolism, hormones, and chemical balance, as discussed in Chaps. 4, 10, and 11. According to Erle C. Ellis and Navin Ramankutty (2008), biomes have been so altered by humans that it would be better to call them
“Anthromes” or “Anthropogenic biomes,” terms that provide “in many ways a more accurate description of broad ecological patterns within the current terrestrial biosphere than are conventional biome systems that describe vegetation patterns based on variations in climate and geology.”

12.3 The New Man–Nature Relationship

In his conversations with Bruno Latour (Éclaircissements 1992), Michel Serres states: "A global object, the Earth, emerges. A global subject, on the other hand, is constituted. We should, therefore, think about the global relations between these two globalities. We do not yet have a theory that allows us to think about them." There is, in fact, something unthought in the concept of the Anthropocene. Its importance is, first of all, philosophical. This concept abolishes the separation—foremost in man's consciousness of himself—between the human and the nonhuman
spheres. In the Anthropocene, nature ceased to be a variable that was independent of man and, ultimately, became a dependent variable. Therefore, nature has ultimately become a social relationship. But the inverse is equally true: relations between men in the broadest sense—from the economic to the symbolic sphere— lose their autonomy and gradually become functions that are dependent on environmental variables.

Similar to the concept of culture, we know that the concept of nature cannot be defined. Its polysemy allows for irreconcilable, perhaps even contradictory, meanings to coexist in it, especially since the subject who defines nature is himself a part of nature. The only thing we can say is that during the Holocene, nature presented itself to human experience in two fundamental and contradictory ways: (a) as other than him, nature appeared to man’s consciousness as that which is not human, as something essentially different from him and in opposition to how he defined
himself, and (b) as a totality, nature was physis and, as such, it encompassed and unified everything, including man and the gods. As Pierre Hadot teaches in a remarkable book, Le Voile d’Isis (2004), this duality was already present in Greece. It arises there in the form of tension between two ideas on nature, which imply opposite attitudes. As early as 1989, Hadot outlined this idea in a lesson at the Collège de France:

One can distinguish two fundamental attitudes of ancient man toward nature. The first can be symbolized by the figure of Prometheus: this represents the ruse that steals the secrets of nature from the gods who hide them from mortals, the violence that seeks to overcome nature in order to improve the lives of humans. The theme already appears in medicine (Corpus hippocraticum, XII, 3), but especially in mechanics [...]. The word mêchanê refers, additionally, to ruse. Opposed to this ‘promethean’ attitude, which puts ruse at
the service of men’s needs, there is a totally different kind of relation to nature in Antiquity that could be described as poetic and philosophical [that is, orphic], where ‘physics’ is conceived of as a spiritual exercise.

Christian Godin (2012) notices quite well that these two attitudes of the Greeks toward the “secrets” of nature “may characterize more generally (and not just in Greece) the two opposite attitudes that man can adopt toward nature: one of fusion and one of conquest.”

The Promethean attitude gradually reduces nature to the “object” of the subject until it becomes completely alienated in the modern age by its conversion into quantity, vector force, and res extensa. It can be said that the whole history of philosophy in the modern and contemporary age is strongly dominated by the unfinished double enterprise of determining the ontology of this object and the epistemological status of the subject’s relationship with it. Alexandre Koyré’s definition of this object quite accurately specifies the point that we have reached along this path:
“Nobody knows what nature is except that it is whatever it is that falsifies our hypotheses” (as cited by Crombie 1987). In the nineteenth century, Hegel’s logic would still attempt to restore unity between subject and object through a dialectical identification between the spirit and the world or the reciprocal subsumption of one by the other (Beiser 1993). But modern science would simply ignore Hegel as much as he ignored the two categories that have progressively come to characterize science in the modern age: the central role of experiment and the mathematization
of knowledge. Such a restoration actually occurs, in Hegel’s century, only in the aesthetic and existential experience of Stimmung, an empathetic convergence and communion of spirit with nature in the lyrical instant or in the spirit’s submission to the sublime. In the lyrical instant, to stick to emblematic examples, we have Goethe’s Über allen Gipfeln ist Ruh, certain small landscapes of the French or Italian countryside by the likes of Valenciennes to Corot, and the second movement of Beethoven’s Pastoral; in the second case, that of the sublime, we have Leopardi’s
L’infinito or the fourth movement of Beethoven’s Pastoral.

It is superfluous to remember that as a living organism, man is, objectively, nature. But the very idea of anthropogenesis, or hominization, has always been perceived, at least in the West, as a slow and gradual process of distancing and differentiation of the human species from other species and from nature in general. In this process, nature meant, at the same time, that which is not human, that which surrounds man (his Umwelt), and that which is the origin of man. Whatever the meaning— biological, utilitarian, phenomenological, or symbolic—of the word origin, man
was, in short, the effect of that origin.

12.3.1 The Powerlessness of Our Power

In the Anthropocene, by contrast, it is nature that becomes an effect of man. Wherever he goes, from the stratosphere to the deep sea, man now finds—objectively, and no longer merely as a projection of his consciousness—the effects of himself, of his action, and of his industrial pollution. La Terre, jadis notre mère, est devenue notre fille10: proposed by Michel Serres, this metaphor of the ancestral mother turned into our daughter illustrates perfectly the concept of the Anthropocene. It urges us to take care of the Earth just as we would do for our child.

But this supposed parental responsibility should not mislead us: we have not acquired any parental power over “our child.” If the Earth has become a variable that is dependent on anthropic action, this does not mean that man has greater control over it. On the contrary, since the mother could, occasionally, be a stepmother, the degraded daughter is systematically unsubmissive and “vindictive,” to use James Lovelock’s metaphor in The Revenge of Gaia (2006). Henceforth, societies will be increasingly governed by boomerang effects, that is, by negative effects on man of
the imbalances that he has wrought upon ecosystems, as can be best seen in Chap. 15. In its own way, the Anthropocene achieved the ideal unity of science—gradually abandoned since the nineteenth century—because by abolishing the separation between the human and nonhuman spheres, it abolished ipso facto the boundaries between the natural sciences and the “human sciences.” As Michel Serres (2009) also states, today “human and social sciences have become a kind of subsection of life and earth sciences. And the reciprocal is also true.”

Today, more than ever, we are existentially vulnerable to that which has become vulnerable to us. The Anthropocene is, in short, the revelation of the powerlessness of our power. This impotence is precisely our inability to halt the effects of that which we have caused and our inability to act economically according to what science tells us about the limits of the Earth system and its growing imbalances. On a more fundamental and also more concrete level, it is our inability to free ourselves psychically from the quantitative, compulsively expansive, and anthropocentric
paradigm of capitalist economy. As we will discuss in Chaps. 14 and 15, this inability is the causa causans of the environmental collapse that looms on our horizon. Rachel Carson was already aware of this when she made the following statement in an American documentary aired on CBS in April 1963 (Gilding 2011): “man’s attitude towards nature is today critically important simply because we have now acquired a fateful power to destroy nature. But man is part of nature and his war against nature is inevitably a war against himself.”

12.3.2 A New World, Biologically, Especially in the Tropics

The current radical reduction of vertebrate and invertebrate life-forms is a central feature of the environmental collapse we are facing in the Anthropocene. In fact, we are headed for “a new world, biologically.” A collective synthesis of research done over the past two decades and published in June 2012 in the journal Nature suggests this conclusion. It shows that “within a few generations,” the planet can transition to a new biospheric state never known to Homo sapiens. The main author of this study, Anthony Barnosky of the University of California, declared to Robert
Sanders (2012)11:

It really will be a new world, biologically. (…) The data suggests that there will be a reduction in biodiversity and severe impacts on much of what we depend on to sustain our quality of life, including, for example, fisheries, agriculture, forest products and clean water. This could happen within just a few generations.

The forms of this “new world” are beginning to emerge, insofar as expanding economic activity destroys ecosystems and alters the physical, chemical, and biological parameters of the planet. Compared to the lush biodiversity of the Holocene, the Anthropocene will be almost unrecognizable.

The contrast between the former exuberance and the forthcoming destitution will be more acute in the tropics because the greatest biodiversity is still concentrated there and because such latitudes, which are already warmer, will be more rapidly and profoundly affected by global warming and other biospheric degradation factors, as shown by three studies published in 2011, 2012, and 2013. Diffenbaugh and Scherer (2011) state that:

In contrast to the common perception that high-latitude areas face the most accelerated response to global warming, our results demonstrate that in fact tropical areas exhibit the most immediate and robust emergence of unprecedented heat, with many tropical areas exhibiting a 50% likelihood of permanently moving into a novel seasonal heat regime in the next two decades. We also find that global climate models are able to capture the observed intensification of seasonal hot conditions, increasing confidence in the
projection of imminent, permanent emergence of unprecedented heat.

Camilo Mora et al. (2013) also predict that “unprecedented climates will occur earliest in the tropics.” For their part, Oliver Wearn et al. (2012) warn that the extinction rate of vertebrate species in the Brazilian Amazon should expand enormously in the future:

[…] local extinctions of forest-dependent vertebrate species have thus far been minimal (1% of species by 2008), with more than 80% of extinctions expected to be incurred from historical habitat loss still to come. Realistic deforestation scenarios suggest that local regions will lose an average of nine vertebrate species and have a further 16 committed to extinction by 2050.

As seen in the previous chapter, there will also be a radical reduction in most forms of marine life, including (possibly) phytoplankton. Therefore, as the biosphere regresses, both on land and in water, this new world of the Anthropocene will move toward a reduced biosphere that could perhaps be called hypobiosphere.

12.4 Hypobiosphere: Functional and Nonfunctional Species to Man

We propose this neologism, hypobiosphere, to refer to the growing areas of the planet in which deforestation and defaunation will deprive the biosphere of the vast majority of highly evolved multicellular forms of animal and plant life still present in nature. The first 11 chapters of this book, and, in particular, the preceding two chapters, offer a display of partial prefigurations of the hypobiosphere. In fact, to some extent, we are already living in a hypobiosphere. According to Yinon Bar-On, Rob Phillips, and Ron Milo (2018), humans currently account for about 36% of the
biomass of all mammals, livestock and domesticated animals for 60%, and wild mammals for only 4%:

Today, the biomass of humans (≈0.06 Gigatons of Carbon, Gt C) and the biomass of livestock (≈0.1 Gt C, dominated by cattle and pigs) far surpass that of wild mammals, which has a mass of ≈0.007 Gt C. This is also true for wild and domesticated birds, for which the biomass of domesticated poultry (≈0.005 Gt C, dominated by chickens) is about threefold higher than that of wild birds (≈0.002 Gt C). (…) Intense whaling and exploitation of other marine mammals have resulted in an approximately fivefold decrease in marine
mammal global biomass (from ≈0.02 Gt C to ≈0.004 Gt C). While the total biomass of wild mammals (both marine and terrestrial) decreased by a factor of ≈6, the total mass of mammals increased approximately fourfold from ≈0.04 Gt C to ≈0.17 Gt C due to the vast increase of the biomass of humanity and its associated livestock.
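
The percentages quoted just above follow directly from the Gt C figures. As a purely illustrative check (the Python sketch and its variable names are ours, not part of the study), dividing each group's carbon mass by the mammalian total recovers the 36/60/4 split:

```python
# Back-of-the-envelope check of the mammalian biomass shares (values in Gt C,
# as cited from Bar-On, Phillips, and Milo 2018).
biomass_gt_c = {
    "humans": 0.06,
    "livestock": 0.10,      # dominated by cattle and pigs
    "wild mammals": 0.007,  # terrestrial and marine combined
}

total = sum(biomass_gt_c.values())
for group, mass in biomass_gt_c.items():
    print(f"{group:>12}: {mass:.3f} Gt C  ~ {100 * mass / total:.0f}% of mammalian biomass")
# Prints roughly 36% (humans), 60% (livestock), and 4% (wild mammals).
```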

Given these numbers, it does not seem arbitrary to say that the Anthropocene is reducing the biosphere into two groups. On one side are the species controlled by man and on the other side are those uncontrolled but able to withstand anthropogenic impacts, either due to their minimal contact with humans (such as species living in the deep ocean) or because they thrive in the anthromes, thanks to rubbish or other disturbances in the ecosystems. If this is the case, we should note the gradual prevalence of ten categories of life on the planet:

(1) Plants intended for human consumption and animal husbandry

(2) Plant inputs for industry (cellulose, ethanol, etc.)

(3) Domestic animals

(4) Animals raised for human consumption

(5) Animals raised for scientific experiments

(6) Plant and animal species unaffected by pesticides and human pollutants

(7) Species that benefit from anthropogenic environmental imbalances

(8) Species that feed on our food and our waste

(9) Species that live in remaining remote areas with little contact with humans (e.g., in deep ocean trenches)

(10) Fungi, worms, microorganisms (viruses, bacteria, mites, etc.)

Due to its apparent arbitrariness, this classification may seem to belong to an array of absurd taxonomies, like the one imagined by Jorge Luis Borges (1952). However, this one has a rigorous logic, since, as might be expected in the Anthropocene, it divides species into those deeply manipulated by humans (1–5) and those that live outside the sphere of human domination (6–10). The last five categories of this classification, especially the last one, encompass billions or trillions of species; so, this new equilibrium of the biota will not necessarily be hostile to most life-forms. But it will
be hostile—perhaps at the point of extermination—to the vast majority of approximately eight million eukaryote species. This is especially the case for vertebrates, a phylum (or subphylum) that contains, according to the 2004 IUCN report, 57,739 described species. Within this phylum, the class of mammals (endowed with a neocortex) includes about 5500 described species, a number that is probably very close to that of existing mammal species.

12.4.1 The Increase in Meat Consumption

Many factors in globalized capitalism combine and strengthen each other to push us toward the hypobiosphere, this impoverished state of the biosphere. Many of these factors have already been examined in previous chapters: the widespread pollution of soil, water, and the atmosphere; the increasing use of pesticides; extreme heat waves in the atmosphere and oceans, which eliminate coral reefs and many other forms of terrestrial and aquatic life; the disappearance of beaches, essential for the spawning of some species, through rising sea levels; destruction (by man
and by extreme weather events) of mangroves which are large breeding grounds; the damming of rivers by hydroelectric dams; eutrophication of waters due to overuse of industrial fertilizers; ocean acidification; overfishing; expansion of urban areas and road networks; mining, etc. But in the short term, the most powerful factor for the impoverishment of the biosphere is, undoubtedly, the elimination of wild habitats, especially tropical ones, due to deforestation and fires caused by agribusiness and, in particular, by the livestock industry. According to FAO’s Food
Production document:

Livestock is the world’s largest user of land resources, with grazing land and cropland dedicated to the production of feed representing almost 80% of all agricultural land. Feed crops are grown in one-third of total cropland, while the total land area occupied by pasture is equivalent to 26% of the ice-free terrestrial surface.

In other words, the most important factor currently driving us toward the hypobiosphere is the increasing human consumption of meat. The combined per capita consumption of meat, eggs, and milk in developing countries grew by about 50% from the early 1970s to the early 1990s. Based on this finding, a team of researchers from FAO, the International Food Policy Research Institute (IFPRI), and the International Livestock Research Institute (ILRI) produced a document 20 years ago titled Livestock to 2020: The Next Food Revolution (Delgado et al. 1999). Christopher
Delgado and his colleagues state in this document:

It is not inappropriate to use the term ‘Livestock Revolution’ to describe the course of these events in world agriculture over the next 20 years. Like the well-known Green Revolution, the label is a simple and convenient expression that summarizes a complex series of interrelated processes and outcomes in production, consumption, and economic growth.

The authors were obviously correct in their projections to 2020, since the transition to animal protein consumption that began in the 1970s has continued unabated for the past 20 years. But what seemed to them to be a "Livestock Revolution" turned out to be a "Livestock Apocalypse" or, as Philip Lymbery and Isabel Oakeshott (2014) prefer to call it, a "Farmageddon." Animal breeding for human consumption, which until the nineteenth century was an almost artisanal activity, has notably become, in recent decades, a highly industrialized activity, led by factory farms,
first for chickens, then pigs, and more recently cattle. From 2000 to 2014, Chinese production of meat, eggs, and milk has rapidly increased, and this will continue—especially for pork. Its production has jumped from around 40 million tons in 2000 to approximately 56 million tons in 2014 (Yang et al. 2019). The meat industry’s fast food distribution chains are among the largest in the world. McDonald’s, for example, which in the 1950s was still a simple diner in California, has become the world’s largest international meat chain in the past half century, serving about 70
million customers a day in over 100 countries.

The consumption of meat on an industrial scale by today’s highly urbanized societies is, first and foremost, a moral problem that increasingly mobilizes philosophical, anthropological, and biological arguments: “Every child should visit an abattoir. If you disagree, ask yourself why. What can you say about a society whose food production must be hidden from public view?” asks George Monbiot (2014). The meat industry and the consumption of its products infringe on the first right of animals: the right to a life without avoidable suffering. And it is not just a threat to the
rights of animals directly victimized by agribusiness, but the gigantic increase in herds from the second half of the twentieth century threatens, as stated above, the survival of tropical wild habitats where 80% of terrestrial biodiversity is present.

Indeed, meat consumption by humans has reached insane levels, and there is no indication that we have reached the top of this escalation in animal cruelty, one that is driven by the logic of maximum productive efficiency at the lowest cost. Actual numbers surpass all efforts at dystopian imagination. According to Mia MacDonald (2012), who bases herself on 2010 data:

Each year, more than 60 billion land animals are used in meat, egg, and dairy production
around the world. If current trends continue, by 2050 the global livestock population could exceed 100 billion—more than 10 times the expected human population at that point.

In 2010, this meant about ten animals for each human. According to Alex Thornton (2019), basing himself on data from 2014, “an estimated 50 billion chickens are
slaughtered for food every year—a figure that excludes male chicks and unproductive hens killed in egg production. The number of larger livestock, particularly pigs, slaughtered is also growing,” as Fig. 12.1 shows.

[[FIGURE 12.1 OMITTED]]


[[TABLE 12.1 OMITTED]]
The graph above translates into Table 12.1, which shows the numbers of animals killed by the food industry each year (in millions), often after a life of terrible suffering.

Adding these numbers to the 50 billion chickens mentioned above yields an estimate of nearly 53 billion animals killed for food, which amounts to 7.4 animals killed for each human on the planet in 2014 alone, when the population had reached 7.2 billion. Gary Dagorn (2015) reports slightly higher numbers:

We have never produced and consumed as much meat as we do today. In 2014, the world produced 312 million tons of meat, which is equivalent to an average of 43 kilograms per person per year. Each year, 65 billion animals are killed (that is, almost two thousand animals ... per second) to end up on our plates.
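
These per-capita figures can be recovered from the totals cited in this section. The short Python sketch below is only an arithmetic illustration using the 2014 world population of roughly 7.2 billion mentioned above; the variable names are ours:

```python
# Rough per-capita arithmetic for 2014, using only figures cited in the text.
world_population = 7.2e9           # 2014 estimate used above
meat_production_tonnes = 312e6     # Dagorn (2015): 312 million tonnes of meat
animals_killed_low = 53e9          # ~50 bn chickens plus ~3 bn larger livestock
animals_killed_dagorn = 65e9       # Dagorn's higher estimate

print(f"meat per person: {meat_production_tonnes * 1000 / world_population:.1f} kg/year")
print(f"animals killed per person: {animals_killed_low / world_population:.1f}")
print(f"animals killed per second (Dagorn): {animals_killed_dagorn / (365 * 24 * 3600):,.0f}")
# ~43 kg of meat per person, ~7.4 animals per person, and on the order of
# two thousand animals slaughtered every second.
```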

The stock of animals raised for human consumption is clearly larger, as Lymbery and Oakeshott (2014) show in their book:

Some 70 billion farm animals are produced worldwide every year, two-thirds of them now
factory-farmed. They are kept permanently indoors and treated like production machines, pushed ever further beyond their natural limits, selectively bred to produce milk or eggs, or to grow fat
enough for slaughter at a younger and younger age.

In fact, since 1925, at least in the USA, chickens have had their lives shortened from 112 to 48 days, while their weight at the time of slaughter has gone from 2.5 pounds to 6.2 (Elam 2015), due to immobility and the administration of additives and antibiotics in their feeding. Brazilian agribusiness is the largest exporter of beef in the world. In 2018, there were 213.5 million head of cattle, more than one animal per person in the Brazilian territory. Most of the herds are concentrated in the North and Midwest of the country, with pastures expanding rapidly over the Cerrado and the Amazon rainforest, which are among the richest areas of planetary biodiversity. Along with China and some countries in Southeast Asia and in Africa, Brazil is the largest laboratory of the planetary hypobiosphere.
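
A two-line calculation with Elam's figures makes the intensification of broiler production concrete. The sketch below is only illustrative arithmetic on the numbers quoted above:

```python
# Average daily weight gain of US broiler chickens (figures cited from Elam 2015):
# 2.5 lb reached in 112 days in 1925 versus 6.2 lb reached in 48 days today.
gain_1925 = 2.5 / 112   # pounds per day
gain_today = 6.2 / 48   # pounds per day
print(f"1925: {gain_1925:.3f} lb/day   today: {gain_today:.3f} lb/day   "
      f"ratio: {gain_today / gain_1925:.1f}x")
# Daily growth has been forced up nearly sixfold.
```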

12.4.2 Doubling of Per Capita Consumption Between 2000 and 2050

The world population is expected to increase by just over 50% between 2000 and 2050 (from 6.1 to about 9.7 billion), while world meat production is expected to increase by just over 100% in that same period. According to FAO (2006):

Global production of meat is projected to more than double from 229 million tonnes in 1999/01 [312 million tonnes in 2014 and 317 million tonnes in 2016] to 465 million tonnes in 2050, and that of milk to grow from 580 to 1,043 million tonnes. The environmental impact per unit of livestock production must be cut by half, just to avoid increasing the level of damage beyond its present level.
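
The relative speeds of these two trends are easy to make explicit. The sketch below simply computes the percentage increases implied by the figures cited above (it is ours, not FAO's):

```python
# Percentage increases implied by the figures cited above for 2000 -> 2050.
population_2000, population_2050 = 6.1e9, 9.7e9
meat_2000, meat_2050 = 229e6, 465e6   # tonnes (FAO baseline vs. 2050 projection)

pop_growth = 100 * (population_2050 / population_2000 - 1)
meat_growth = 100 * (meat_2050 / meat_2000 - 1)
print(f"population: +{pop_growth:.0f}%   meat production: +{meat_growth:.0f}%")
# Meat production is projected to grow roughly twice as fast as population,
# which is what drives the rise in per capita consumption.
```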

Assuming the increasingly unlikely hypothesis that the global food system will not collapse in the next few decades of the century, meat consumption in Europe’s richest countries will tend to stabilize at a level of 80 kg per year per inhabitant (about 220 grams per day). This demand will be met by ever-increasing meat imports from the so-called “developing” countries, such as Brazil, where demand for meat continues to grow at high rates. Resilient as they may be, ecosystems will not be able to resist such increases. Indeed, for several reasons, the mounted knights of the
livestock apocalypse are riding species that are bred for human consumption and, especially, cattle.

12.4.3 Meat = Climate Change

Two reports by FAO (2006, 2013) estimate that CO2e (CO2 equivalent) emissions from agribusiness account for about 15% to 18% of total anthropogenic emissions per year. According to FAO’s Livestock’s Long Shadow (2006), livestock production generates more GHG, measured in CO2e, than the transport sector. But these percentages are not accepted by Robert Goodland and Jeff Anhang of the Worldwatch Institute. In 2009, they issued a very well-documented article on the responsibility of carnivorism in the environmental crises (“Livestock and Climate Change. What
if the key actors in climate change are cows, pigs, and chickens?”). It states:

Livestock's Long Shadow, the widely-cited 2006 report by FAO, estimates that 7,516 million tons per year of CO2 equivalents (CO2e), or 18% of annual worldwide GHG emissions, are attributable to cattle, buffalo, sheep, goats, camels, horses, pigs, and poultry. That amount would easily qualify livestock for a hard look indeed in the search for ways to address climate change. But our analysis shows that livestock and their byproducts actually account for at least 32,564 million tons of CO2e per year, or 51% of annual worldwide GHG emissions.
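
The two estimates differ not only in the emissions attributed to livestock but also in the worldwide total against which they are measured. Dividing each livestock figure by its stated share recovers the total each study assumes; the back-calculation below is our own illustration, not part of either report:

```python
# Worldwide GHG totals implied by the two livestock estimates quoted above
# (values in million tonnes of CO2e per year).
fao_livestock, fao_share = 7_516, 0.18      # Livestock's Long Shadow (FAO 2006)
wwi_livestock, wwi_share = 32_564, 0.51     # Goodland and Anhang (2009)

print(f"FAO-implied world total: {fao_livestock / fao_share:,.0f} Mt CO2e")
print(f"Goodland/Anhang-implied world total: {wwi_livestock / wwi_share:,.0f} Mt CO2e")
# ~41,800 vs ~63,900 Mt CO2e: the 51% figure rests on a larger livestock tally
# and on a recalculated worldwide total, so the two percentages are not
# directly comparable.
```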

Whatever the extent of its role in anthropogenic greenhouse gas emissions, when considering the environmental liability of meat-eating we should also include the impacts of the use of antibiotics, hormones, and pesticides on animals, as well as the next two equations:

12.4.4 Meat = Deforestation, Land-Use Change, and Soil Degradation

We present some data on this equation, taken from FAO's 2006 study. In that year, pasture land occupied 34 million km2 or 26% of dry land. This is more than the total area of Africa (30.2 million km2). The study states that 20% of these lands are degraded. Globally, about 24,000 km2 of forest are replaced by pasture
each year, and, according to data from another report (Climate Focus, 2017), an average of 27,600 km2 of forests are replaced by pasture each year (Whyte 2018). Globally, soy is responsible for the deforestation of 6000 km2 per
year, but a major part of its cultivation goes toward animal feed. Thus, the deforestation caused by soybean production is also largely due to carnivorism. Two texts by João Meirelles (2005, 2014) show the responsibility of meat-
eating in the destruction of the Amazon rainforest, either through domestic or international consumption. David Pimentel (1997) estimates that “nearly 40% of world grain is being fed to livestock rather than being consumed
directly by humans.” Much more recent estimates are very close to those of Pimentel. For Jonathan Foley (2014), “today only 55% of the world’s crop calories feed people directly; the rest are fed to livestock (about 36%) or turned
into biofuels and industrial products (roughly 9%)." In 1997, Pimentel estimated that "the 7 billion livestock animals in the United States consume five times as much grain as is consumed directly by the entire American population."
In addition, Pimentel continued, “if all the grain currently fed to livestock in the United States were consumed directly by people, the number of people who could be fed would be nearly 800 million.” This means that a vegetarian
diet would allow us to feed more than twice the current US population and almost all of the world’s starving population. After deforestation, overgrazing exterminates the remaining biodiversity through the effect of trampling, as
well as that of feces and urine in excess. Also, according to Pimentel (1997), 54% of US pasture land is being overgrazed.
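
Pimentel's 800 million figure can be put in perspective with a one-line calculation. In the sketch below, the current US population value is our own round assumption (roughly 330 million), not a number from the text:

```python
# Rough check of the "feed the grain to people" argument cited above.
people_fed_by_us_feed_grain = 800e6   # Pimentel (1997)
us_population = 330e6                 # assumed current US population (our figure)

print(f"ratio to US population: {people_fed_by_us_feed_grain / us_population:.1f}x")
# ~2.4x, consistent with the claim that a vegetarian diet could feed more than
# twice the current US population.
```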

12.4.5 Meat = Water Depletion

This second equation also makes meat consumption a major contributor to the depletion of water resources, as shown in Fig. 12.2:

Since the emergence of the concept of "water footprint," coined by Arjen Hoekstra (building on John Anthony Allan's concept of "virtual water"), we know how to calculate how much water is used or polluted in the production of each agricultural and livestock product. The production of 1 kg of meat requires the use of 20,000 liters of water. Although the amount of water
consumed varies depending on the region where the crop is grown, the irrigation system, and the type of cattle feed, the proportion indicated in the figure above remains substantially the same. The use of water by cattle is even
greater when the animals are fed fodder and grain. According to calculations made by David Pimentel (1997), US agriculture accounts for 87% of all freshwater consumed per year. Only 1.3% of this water is used directly by cattle.

But when we include water used for fodder and grain, the amount of water used increases dramatically. All in all, each kilogram of beef consumes some 100,000 liters of water.
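
To convey the orders of magnitude involved, the water footprint above can be combined with the European consumption level of about 80 kg of meat per person per year mentioned in Sect. 12.4.2. The sketch below is a rough illustration only, using the average figure of 20,000 liters per kg cited above:

```python
# Rough illustration: water embodied in meat consumption at the European level
# of ~80 kg per person per year, at ~20,000 liters of water per kg of meat.
liters_per_kg_meat = 20_000
annual_meat_kg = 80

annual_water_liters = liters_per_kg_meat * annual_meat_kg
print(f"~{annual_water_liters:,} liters per person per year "
      f"(~{annual_water_liters / 365:,.0f} liters per day)")
# ~1.6 million liters per person per year; grain- and fodder-fed beef is far
# more water-intensive still (~100,000 liters per kg in Pimentel's estimate).
```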

Due to the multiplicity of these impacts, the process of extreme “carnivoration” of the human diet from the
1960s onward is one of the most emblematic characteristics of the Anthropocene. It drives the tendency toward biodiversity collapse and the advent of the hypobiosphere.

[[FIGURE 12.2 OMITTED]]

[[REFERENCES OMITTED]]
[[PART II BEGINS]]

[[CHAPTER 13 BEGINS]]
The preceding 12 chapters have sought to offer an overall picture of the multiple crises that—together, through interacting dynamics—are propelling us toward socio-environmental collapse. It is from this background that we begin the second part of this book. We will develop the book’s two central theses, laid out in the Introduction and repeated here. First thesis, the illusion that capitalism can become environmentally sustainable is the most misleading idea in contemporary political, social, and economic thought. Second thesis, this first illusion is nourished by a second
and a third one. The second illusion, discussed in Chap. 14, is the tenacious belief—at one time reasonable but now definitely fallacious—that the more material and energetic surplus we are able to produce, the safer will be our existence. These two illusions are grounded in a third one, discussed in Chap. 15—the anthropocentric illusion.

That capitalism is incapable of reversing the tendency toward global environmental collapse —the thesis of this chapter—is
something that should not be considered a thesis, but an elementary fact of reality, given the evidence for it. This fact is even admitted by an authority on global capitalism, Pascal Lamy. In an interview held in 2007, the former
director general of Crédit Lyonnais and former director general of WTO stated1:

Capitalism cannot satisfy us. It is a means that must remain in the service of human development. Not an end in itself. A single example: if we do not vigorously question the dynamic of capitalism, do you believe we will succeed in mastering climate change? (…) You have, moreover, events that come to corroborate the least bearable aspects of the model: either its intrinsic dysfunctions, such as the subprime crisis, or the phenomena that capitalism and its value system don’t allow us to deal with—the most obvious of those being
global warming.

In the same year (2007), a similar verdict was issued: "Climate change is a result of the greatest market failure the world has seen." This sentence is not from an anti-capitalist manifesto, but from the Stern Review: The Economics of Climate Change, authored by Lord Nicholas Stern, former president of the British Academy, former Chief Economist and Senior Vice-President of the World Bank, second permanent secretary to Her Majesty's Treasury, and professor at the London School of Economics and the Collège de France. If there is someone who no longer has illusions about the compatibility between capitalism and any concept of environmental sustainability, it is Yvo de Boer, former Executive Secretary of the United Nations Framework Convention on Climate Change (UNFCCC), who resigned after the failure of the 15th Conference of the Parties (COP15) in Copenhagen in 2009. A diplomat well versed in the intricacies of international climate negotiations and professionally attached to the weight of words, he clarified his position in 2013 in an interview with Bloomberg Business: "The only way that a 2015 agreement can achieve a 2°[C] goal is to shut down the whole global economy" (Jung et al. 2015).

Not too long ago, this statement could summarize the content of this chapter. Today, meeting the 2 °C target no longer depends on a deal. It has become a socio-physical impossibility, as discussed in Chap. 7. What is now on the agenda is to divert ourselves from our current trajectory, which is leading us to an average global warming of over 2 °C in the second quarter of the century and over 3 °C probably in the third quarter of the century. We will repeat: any average global warming above 3 °C is considered "catastrophic" (Xu and Ramanathan 2017), as it will lead to low latitudes becoming uninhabitable during the summer, the frequent flooding of urban infrastructure in coastal cities, and drastic decreases in agricultural productivity. It will also probably trigger even greater warming (>5 °C) which will have unfathomable impacts.
To minimize the destruction of the Earth system resulting from the predatory dynamics of global capitalism, attempts have been made to impose regulatory frameworks on a "deregulated" capitalism. The adjective is in quotation marks to underline its redundancy. In other words, would an economy operating within ecological frameworks still be capitalist? One might ask a more modest question: would an economy still be capitalist if it were capable of functioning under the ten key recommendations for a Global Action Plan proposed in 2014 by Lord Nicholas Stern and Felipe Calderón in
their report, Better Growth Better Climate? The first part of this book presents ample evidence that capitalism is incompatible with the adoption of these ten proposals, each of which equates to an admission of capitalism’s congenital environmental unsustainability:

(1) “Accelerate low-carbon transformation by integrating climate into core economic decision-making processes.” In their investment decisions, corporations will not consider the impact of global warming whenever that impact conflicts with the raison d’être of the investment: the expectation of profitability in the shortest time possible. As long as the investment decision is an inalienable legal prerogative of those in charge of companies (private or state-controlled) and as long as this decision does not emanate from a
democratic authority that is guided by science, this first recommendation will be ignored. This is a trivial observation based on overwhelming historical experience, and it is quite puzzling that it would still be necessary to mention it in the twenty-first century.

(2) “Enter into a strong, lasting and equitable international climate agreement.” This second recommendation brings to mind Antonio Gramsci’s famous adage: “History teaches, but it has no pupils.” Stern and Calderón should remember, in fact, a lesson that has been relentlessly repeated for over 40 years. The first international resolution to reduce GHG emissions dates back to June 1979 (Rich 2018; Klein 2018; Mecklin 2018). In 1988, in Toronto, the World Conference on the Changing Atmosphere took place. More than 340
participants from 46 countries attended it. The same virtuous resolution was promoted here, but this time the goals were well quantified. Participants agreed that there should be a 20% cut in global CO2 emissions by 2005 and, eventually, a 50% cut. Since then, international meetings and protocols have followed monotonously; their central theme has been the reduction of GHG emissions. And yet, since 1990, CO2 emissions have increased by over 63%, as is well known. This pattern is repeated even after the Paris Agreement.
The simplest and irrefutable proof of this is that by 2018 global CO2 emissions were already 4.7% higher than in 2015. There is nothing ambitious about the Paris Agreement, it is not legally binding, and it remains ignored by the world of high finance. A report released in March 2020 by Rainforest Action Network, BankTrack, Indigenous Environmental Network, Oil Change International, Reclaim France, and Sierra Club, and endorsed by over 160 organizations around the world, showed that 35 global banks from Canada, China,
Europe, Japan, and the United States have together funneled USD 2.7 trillion into fossil fuels in the 4 years since the Paris Agreement was adopted (2016–2019). "The massive scale at which global banks continue to pump billions of dollars into fossil fuels is flatly incompatible with a livable future," as Alison Kirsch rightly said (Corbet 2019). The Guardian, together with two think tanks (InfluenceMap and ProxyInsight), has revealed that, since the Paris Agreement, the world's three largest money managers (BlackRock, Vanguard,
and State Street) have built a combined USD 300 billion fossil fuel investment portfolio (Greenfield 2019). Furthermore, the Paris Agreement has not been ratified by many OPEC countries, has been abandoned by the United States, and pledges to reduce GHG emissions and to transfer resources to the poorest countries are not being observed by the signatory countries. Germany did not reach its goal of a 40% CO2 emission reduction (from its 1990 level) by 2020. France’s 2015–2018 carbon budget has not been met either.
During this period, its annual emissions decreased by only 1.1%, much less than planned. Brazil, the world's seventh largest GHG emitter, saw an 8.9% increase in emissions in 2016 (compared to 2015), despite the worst economic recession in its recent history. It is true that these emissions decreased by 2.3% in 2017 (2.071 GtCO2e), compared to 2016 (2.119 GtCO2e), but the surge in fires and the acceleration of deforestation in the Amazon and other Brazilian biomes in 2019 should completely reverse this very modest advance.
Brazil’s current president, Jair Bolsonaro, one of the most heinous promoters of ecocide of our time, probably does not even know what commitments Brazil took on in the Paris Agreement. In September 2019, Antonio Guterres coordinated a high-level UN meeting with the goal of increasing the ambitions of the Paris Agreement. All major economies of the planet failed to answer. The complete failure of the COP25 in Madrid to cope with the climate emergency has once again shown that James Hansen was right when he
declared in 2015: “Promises like Paris don’t mean much, it’s wishful thinking. It’s a hoax that governments have played on us since the 1990s.” What still needs to be understood in order to conclude that the Paris Agreement is doomed to the same pathetic fate as the Kyoto Protocol?

(3) “Phase out subsidies for fossil fuels and agricultural inputs, and incentives for urban sprawl.” In 2009, the G20 issued a solemn statement: “We pledge to phase out fossil fuel subsidies.” In 2015, the G20 countries spent US$ 452 billion on direct subsidies to fossil fuels (Bast et al. 2015), and there was no mention of this in the Paris Agreement. An IMF working paper (Coady et al. 2019) provides estimates of direct and indirect fossil fuel subsidies (defined as the gap between existing and efficient prices for 191 countries):
“Globally, subsidies remained large at $4.7 trillion (6.3% of global GDP) in 2015 and are projected at $5.2 trillion (6.5% of GDP) in 2017.” The Climate Accountability Institute keeps reporting on the responsibility of the big polluters. As already pointed out in the Introduction, 63% of global emissions occurring between 1751 and 2010 originated from the activities of 90 corporations in the fossil fuel and cement industries. Just 100 corporations have been responsible for 71% of global emissions since 1988, and just 20 of them are
directly linked to more than 33% of all GHG emissions since the beginning of the Industrial Revolution. These corporations are not paying the huge costs imposed on our societies by the burning of fossil fuels. We are. As stated by Lord Nicholas Stern, the IMF estimate “shatters the myth that fossil fuels are cheap” (Carrington 2015). In 2016, G7 leaders again urged all countries to phase out fossil fuel subsidies by 2025. Three years later, no significant step has been taken in this direction. And this has not happened for the same
reason as always, one that everyone is aware of: seven out of the top ten corporations in the world by revenue (according to the 2018 Fortune Global 500 list) are fossil fuel industries or are umbilically linked to them. Together, the revenues of these seven corporations amount to almost two trillion dollars. Not only do they control states, but the two largest of them—Sinopec Group and China National Petroleum—are state-owned enterprises and are an essential part of the Chinese state’s power strategies.

(4) “Introduce strong, predictable carbon prices as part of good fiscal reform and good business practice.” With their unwavering faith in the market, economists continue to believe in the carbon pricing myth, as if the energy transition—at the required scale and speed—could be induced through pricing mechanisms. Actually, in 2019, more than 25 national or subnational carbon tax systems have already been implemented or are scheduled to be implemented around the world. So far, there has been no observed impact of these
initiatives on fossil fuel consumption. And even as carbon tax systems become more widespread and more aggressive, the market will always be able to adapt to them without significantly reducing fossil fuel consumption, simply because these fuels are not commodities like any other. As seen in Chap. 5 (Fig. 5.1), the huge variations in oil price between 1990 and 2019 had very little impact on the almost constant increase in oil consumption.

(5) “Substantially reduce capital costs for low-carbon infrastructure investments.” These costs have been reduced without influencing the growth of fossil fuel consumption, as seen above. All projections indicate that there will be no significant reduction in gas, oil, and coal consumption in the discernible future.

(6) “Scale up innovation in key low-carbon and climate resilient technologies, tripling public investment in clean energy R&D and removing barriers to entrepreneurship and creativity.” With the exception, perhaps, of China and India, there is no global expectation of tripling the allocation of resources for such research. Instead, we see a slight reduction in these investments on a global scale between 2011 and 2017, as shown in Fig. 13.1.

(7) “Make connected and compact cities the preferred form of urban development, by encouraging better managed urban growth and prioritising investments in efficient and safe mass transit systems.” As seen in Chap. 9, urban sprawl and chaos increase with the proliferation of carmakers, fossil and cement industries, intensive agriculture, urban solid waste, and other unprocessed waste, particularly in so-called “developing” countries where gigantic conurbations tend to be concentrated.

(8) “Stop deforestation of natural forests by 2030.” As the Global Forest Watch and several other indicators show, deforestation continues to accelerate in [[FIGURE 13.1 OMITTED]]
tropical and boreal forests on a global scale (see Chap. 2). Deforestation of tropical forests will cease simply because by 2030, as seen in Chap. 2, many of them will have been wiped out.

(9) “Restore at least 500 million hectares of lost or degraded forests and agricultural lands by 2030.” Reforestation has been limited to little more than planting a few—usually exotic—species, ones that are considered inputs for industries. In addition, soils continue to be degraded and will remain so, as long as we maintain the two paradigms on which agribusiness is based on: (a) a commodity agriculture that is toxic-intensive and strongly export-oriented, with food self-sufficiency decreasing in a growing number of countries,
and (b) a diet based on carnivorism, which is evidently unsustainable.

(10) “Accelerate the shift away from polluting coal-fired power generation, phasing out new unabated coal plants in developed economies immediately and in middle-income countries by 2025.” As seen in Chap. 6, coal consumption rose again in 2017 after reaching a plateau and even a slight decrease over the previous 3 years. There are no signs of a significant, much less accelerated, decrease in the burning of coal for power generation on a global scale. Moreover, if the opening of hundreds of thousands of hydraulic fracturing
oil and gas wells in more than 20 US states since 2005 has led to a reduction in coal use, it has not resulted in lower atmospheric GHG emissions. In Chap. 5, we refer to the work of Jeff Tollefson and colleagues (“Methane leaks erode green credentials of natural gas”) published in 2013 in Nature. The results of this study have been confirmed by successive observations and measurements. The most recent of these is a study published in April 2016 by the Environment America Research and Policy Center, according to which, in
2014 alone, at least 5.3 billion pounds of methane have leaked from fracking wells (for gas extraction) in the United States. This is equivalent to the average emissions of 22 coal-fired thermoelectric plants in that year (Ridlington et al. 2016).

Obviously, evoking historical evidence would not be enough to demonstrate the structural unsustainability of capitalism, since such a system could change, as Lord Nicholas Stern and Felipe Calderón might argue, this being the very raison d'être of their document. It turns
out, though, that globalized capitalism cannot change. More than a lesson from history, it is the logic of accumulation that
can demonstrate the unfeasibility of the ten recommendations proposed in this document. The regulatory frameworks that the
authors dream of are not within the aims of global capitalism and will never occupy a central position in
its agenda. This chapter, thus, examines the two impossibilities of implementing regulatory frameworks capable of containing the tendency toward collapse within the realm of globalized capitalism:
(1) The self-regulation of economic agents induced by the presence of mechanisms that emanate from the market itself

(2) Regulation induced not only by market mechanisms but by agreements negotiated between businesses, the state, and civil society

13.1 The Capitalist Market Is Not Homeostatic

The idea of self-regulation does not apply to capitalism. It is not ruled by the principle of homeostasis, pertaining to the dynamics of optimizing the internal stability of an organism or system. Maurice Brown (1988) notes that this idea goes back to Adam Smith’s “faith in the homeostatic properties of a perfectly competitive market economy.” Even today, the belief that capitalism is self-regulating has the value of a maxim, being accepted by many scholars. An example of the use of this analogy comparing the mechanisms of the capitalist market to that of a living organism is
found in Eduardo Giannetti (2013):

[The market] “has a functioning logic equipped with surprising properties from the standpoint of productive and allocative efficiency. It is a homeostatic system governed by negative feedback. Every time the system becomes disturbed, it seeks to return to equilibrium.”

The analogy between the market mechanism and that of a homeostatic system is a mistake. Since Claude Bernard’s idea of milieu intérieur or the internal environment (Canguilhem 1968/1983)2 and Walter Cannon’s notion of homeostasis, we know that any influence disturbing the balance (deficits or excesses) of the vital functions in an organism or organic system triggers regulatory and compensatory activities that seek to neutralize this influence, resulting in the recovery of balance or, more precisely, in a new balance (allostasis). The maintenance of this efficient
stability of the internal environment in its constant exchanges with the external environment is what guides the activity of every organism. Even though it is dependent on the external environment, even though it is, therefore, an “open” system, all the energies of an organism are ultimately centripetal: they are directed toward the survival, security, and reinforcement of the organism’s centrality and stability, in short, of its own identity.
Now, regarding the basic functioning of the capitalist market, not only does it not work by negative feedback, but it is even opposed to the mechanism of homeostasis. This is because the fundamental force that drives the market is not the law of supply and demand, which operates within the scope of commodity circulation, but the law of capital accumulation, which operates within the scope of commodity production and is, by definition, expansive. The market can even force a cyclical crisis and less production, but expansion is the basic rule of the return on capital, in
other words, of the physiology of capitalism.

This leads us to the second misconception of ascribing the attributes of homeostasis to the market: once it reaches its ideal size, every organism ceases to grow and goes onto the phase in which conservative adaptations prevail. This phenomenon does not occur in the capitalist market, which is driven by centrifugal forces (imposed by capital accumulation) toward unlimited growth. The ideal size of the capitalist market is, by definition, one that is infinite. Unlike an organism, if the capitalist market stops growing, it becomes unbalanced. If the age of capitalist growth is
coming to an end, this is not due to a homeostatic virtue of the market, but to something extraneous to it: the physical limits of the biosphere’s resilience. Since the 1970s, Ivan Illich (2003) has noticed that:

Being open, human equilibrium is liable to change due to flexible but finite parameters; if men can change, they do so within certain limits. On the contrary, the dynamic of the industrial system underlies its instability: it is organized toward indefinite growth and an unlimited creation of new needs that quickly become mandatory in the industrial framework.
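
The contrast drawn in this section can be caricatured numerically: a homeostatic variable relaxes back toward its set point after a disturbance, whereas a stock governed by compound accumulation has no internal equilibrium at all. The toy model below is only an illustration of that contrast; all parameters are arbitrary and none of it describes a real economy:

```python
# Toy contrast between negative feedback (homeostasis) and compound accumulation.
# Purely illustrative; parameters are arbitrary.

def homeostatic_step(x, set_point=1.0, k=0.5):
    """Relax partway back toward the set point (negative feedback)."""
    return x + k * (set_point - x)

def accumulation_step(capital, rate=0.03):
    """Grow by a fixed rate (no internal equilibrium)."""
    return capital * (1 + rate)

x, capital = 2.0, 1.0   # start the homeostatic variable off its set point
for year in range(101):
    if year % 20 == 0:
        print(f"t={year:3d}   homeostatic x={x:.3f}   accumulated capital={capital:.2f}")
    x = homeostatic_step(x)
    capital = accumulation_step(capital)
# The disturbed variable returns to ~1.0 within a few steps, while the capital
# stock keeps compounding indefinitely.
```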

Another argument of Giannetti found in the same essay is, however, absolutely correct: “the pricing system, notwithstanding all of its surprising merits and properties, has one serious flaw: it does not give the right signs regarding the use of environmental resources.” In this respect, André Lara Resende (2013) is adamant:

Regarding the physical limits of the planet or the destruction of the environment caused by human action, relying on the market price system [...] makes no sense. Any student in a basic microeconomics course should know this.

13.1.1 The Inversion of Taxis

The only pricing operated by the market is that of the relationship between economic costs and profit rate. Capitalism cannot price its action on ecosystems because ecosystems are broader in space and time than the investment/profit cycle. In Energy and Economic Myths (1975), Nicholas Georgescu-Roegen essentially states the same thing: “the phenomenal domain covered by ecology is broader than that covered by economics.” In this way, he continues, “economics will have to merge into ecology.” Herman Daly (1990) formulates this thesis equally elegantly: “in its
physical dimensions the economy is an open subsystem of the earth ecosystem, which is finite, nongrowing, and materially closed.” In capitalism, the world is upside down: the physical environment is conceived of as a raw material, that is, as an open subsystem of the economic system; there is a reversal of taxis which results in an equally inverted hierarchy of the world, one that is incompatible with its sustainability. Therefore, the ability to subordinate economic goals to the environmental imperative is not within the sphere of capitalism.

13.2 Milton Friedman and Corporate Mentality

There is no moral judgment here. Capitalism is unsustainable not because corporate leaders are bad or unscrupulous men. It would be absurd to suppose that corporate owners, shareholders, and CEOs are people who lack a moral compass. Nothing allows us to affirm that there is less moral sense in business circles than there is in any other civil society environment, for example, in trade unions and universities and in religious, artistic, or sports groups. The problem is that, no matter how much they want to improve the ethical conduct of their corporations, managers
cannot afford to subordinate their corporate goals to the environmental imperative.

To demonstrate this impossibility, we must start with a trivial example: money loses purchasing power due to inflation and has varying rates of purchasing power or profitability due to unequal market opportunities. To avoid depreciation or its use in disadvantageous conditions, every owner of a certain sum of money must choose the best exchange option in each moment. This applies to both the worker seeking to exchange his salary for as many goods as possible and the investor who chooses the most promising transactions or funds. In light of this elementary market
reality, corporations must present the comparative advantages of one investment opportunity over others to their current or future investors and shareholders. If British Petroleum, for example, forgoes a potentially profitable investment because of its environmental impact, investors will have two alternatives: they will replace that decision-maker if they have the power to do so; or, if not, they will redirect their investments to other corporations or even other sectors of the economy that are more likely to be profitable.

Both those who offer and those who raise funds in the market are subordinate to this relentless rationality. It explains why corporations cannot self-regulate around variables other than profit maximization. They have minimal leeway to adopt what Seev Hirsch (2011) calls “enlightened self-interest,” as this most often entails sacrificing investment opportunities, raising costs, losing competitiveness, or limiting profits in the short term. Both critics and supporters of capitalism agree on this. In 1876, Friedrich Engels wrote (as cited by Magdoff 2011):

As individual capitalists are engaged in production and exchange for the sake of the immediate profit, only the nearest, most immediate results must first be taken into account. As long as the individual manufacturer or merchant sells a manufactured or purchased commodity with the usual coveted profit, he is satisfied and does not concern himself with what afterwards becomes of the commodity and its purchasers. The same thing applies to the natural effects of the same actions.

This passage could have been endorsed by Milton Friedman (1912–2006), winner of the Nobel Prize for economics in 1976, adviser to Ronald Reagan, professor at the University of Chicago, and, according to The Economist, "the most influential economist of the second half of the 20th century." Friedman indeed classifies as immoral any initiative of a corporation manager aimed at mitigating environmental impacts if such an initiative entails a decrease in profit. Asked in 2004 whether John Browne, then President of British Petroleum, had the right to take
environmental measures that would divert BP from its optimal profit, Friedman replied (quoted in Magdoff and Bellamy Foster 2011):

No... He can do it with his own money. [But] if he pursues those environmental interests in such a way as to run the corporation less effectively for its stockholders, then I think he’s being immoral. He’s an employee of the stockholders, however elevated his position may appear to be. As such, he has a very strong moral responsibility to them.

Friedman’s answer is irrefutably logical. It defines a corporation’s “moral responsibility” as the commitment of its governing bodies to its shareholders. This logic and conception of moral responsibility were defended by the New Individualist Review, on whose editorial board Friedman served.3 For the same reason, Rex Tillerson, chief executive officer of ExxonMobil (2006–2017) and the US Secretary of State (from February 2017 to March 2018), was applauded in May 2015 at the Exxon Mobil Corporation’s annual meeting of shareholders in Dallas by justifying his refusal
to host climate change specialists and set greenhouse gas emission limits: “We choose not to lose money on purpose.” Incidentally, a proposal at this meeting to set GHG emission limits obtained less than 10% of the votes (Koenig 2015). This same moral responsibility toward shareholders is illustrated in another case analyzed by The Economist in a 2012 report on rising global obesity levels: in 2010, a PepsiCo leader gave up making her products healthier, as shareholders started becoming outraged. And rightly so, Friedman would say, since shareholders only put their
resources and trust in PepsiCo because it promised them the best expected market return. Frederic Ghys and Hanna Hinkkanen (2013) showed why “socially responsible investments” (SRI) are “just marketing,” as they do not really differ from traditional portfolios. According to a financial investment expert, quoted in their report: “the bank would transgress its financial role as an asset manager when including environmental and social considerations in investment decisions for clients who had not directly requested it.”

We know that in order to maintain a reasonable chance of not exceeding a global average warming of 2 °C above the pre-
industrial period, we have a carbon budget of approximately 600 GtCO2 (Anderson 2015; Figueres et al. 2017). According to Friedman’s logic, for corporations to be
a moral entity, in other words, to keep their stock prices high, thereby honoring their contracts and commitments to their shareholders, they must continue burning the coal, oil, and gas reserves controlled by them and by the
state-owned corporations that live off the sale of these fuels. In an open letter to Christiana Figueres (UNFCCC Executive Secretary), written by Cameron Fenton (director of the Canadian Youth Climate Coalition) and signed by over
160 people and NGOs (2012), we read:

All together, the global oil, coal and gas industries are planning to burn over five times that amount, roughly 2,795 Gigatonnes of carbon. Indeed, their share prices depend on exploiting these reserves. (...) Their business plan is incompatible with our survival. Frighteningly, there are also states, parties to the convention [UNFCCC], with the same plan.
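
Setting the letter's figure for planned emissions against the roughly 600 GtCO2 budget cited just above gives a simple measure of the mismatch (treating both numbers in the same units, as the letter's own fivefold comparison implies). The sketch is ours and purely illustrative:

```python
# Remaining 2 degree C carbon budget versus emissions embodied in the reserves
# the fossil fuel industries plan to burn (figures as cited in the text).
carbon_budget_gt = 600        # approximate remaining budget (Anderson 2015)
planned_emissions_gt = 2795   # figure quoted in the open letter

print(f"planned emissions / remaining budget: {planned_emissions_gt / carbon_budget_gt:.1f}x")
print(f"share of those reserves burnable within the budget: "
      f"{100 * carbon_budget_gt / planned_emissions_gt:.0f}%")
# Only about a fifth of the reserves already on companies' books could be burned
# without exceeding the budget.
```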

13.3 Three Aspects of the Impossibility of a Sustainable Capitalism

The logic of the impossibility of a sustainable capitalism is concretely proven in numerous aspects of its modus operandi. Let us isolate three aspects of this impossibility.

But before that, it is best to start by giving voice not to pure and staunch liberals like Milton Friedman, but to those who believe that capitalism has nothing to fear from environmental regulation. Many of them reject a defensive stance and put themselves on the offensive, claiming that environmental sustainability and increased profits are not only compatible but reciprocally strengthen each other in a virtuous circle.

If I am not mistaken, the advocates of this thesis prefer the following argument: adopting innovative solutions to increase the efficiency of the input/product or product/ waste ratio and improve environmental safety in the production process increases the company’s competitiveness (as opposed to reducing it) because it is a value-generating process, be it in terms of risk management, brand image, and, finally, effective financial results. If this is true, then taking the lead and being at the forefront of economic processes with lower environmental impact and risk will ensure
a better profitability than the average profit rate. I hope to not underestimate the literature on the business and sustainability binomial by saying that it limits itself to elaborating variations on this theme while offering several case studies on the direct relationship between sustainability and profitability. There are a growing number of economists and NGOs committed to encouraging companies to embrace this belief. They naturally render a tremendous service to society and to the companies themselves through their work. However, their success is limited by the three
aspects that render an environmentally sustainable capitalism impossible, as stated in the title of this section.

(1) Decoupling and Circular Economy

Decoupling is the hope that eco-efficient technologies and production processes in industrialized countries with
mature economies will enable the miracle of increased production and consumption with less pressure (or
at least no corresponding increase in pressure) on ecosystems (Sjöström and Östblom 2010). It is true that a greater efficiency in the production

process may allow for relative decoupling, meaning that it enables a reduction in pressure per product or per unit of GDP. But it does not decrease
this pressure in absolute terms, since the number of products does not cease to increase on a global scale. The mechanism known as the
“Jevons paradox” or rebound effect describes how increasing demand for energy or natural resources
always tends to offset the eco-efficiency gain of technological innovation. Thus, although energy efficiency
per product has doubled or even tripled since 1950, this gain is offset by the expansion of production at a
greater rate than the eco-efficiency gain.
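
The rebound effect is easy to state as arithmetic: absolute pressure equals output times impact per unit of output, so an efficiency gain is cancelled whenever output grows faster than intensity falls. The indexed numbers below are arbitrary and only mimic the pattern described in the text (intensity halved, output roughly tripled):

```python
# Illustrative rebound-effect arithmetic: absolute pressure = output x intensity.
# Indexed values are arbitrary and only mimic the pattern described in the text.
output_1950, intensity_1950 = 1.0, 1.0
output_now, intensity_now = 3.0, 0.5   # output tripled, impact per unit halved

pressure_1950 = output_1950 * intensity_1950
pressure_now = output_now * intensity_now
print(f"relative decoupling: intensity fell by {100 * (1 - intensity_now):.0f}%")
print(f"absolute pressure:   {pressure_1950:.1f} -> {pressure_now:.1f} "
      f"({100 * (pressure_now / pressure_1950 - 1):+.0f}%)")
# Pressure on ecosystems still rises by 50% despite the efficiency gain.
```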

The actions of institutions and business foundations that advocate for an eco-efficient and circular
economy based on reverse engineering, recycling, reuse, and remanufacturing are certainly positive. We know, however, that there is no circular
economy. No economy, let alone a global economy trapped in the paradigm of expansion, can evade the second law of
thermodynamics, whose relationship with economics has been analyzed by Nicholas Georgescu-Roegen since the 1970s (1971, 1975 and 1995). Here we must state the obvious: even though
the surplus energy supplied by oil and other fossil fuels in relation to the energy invested to obtain them
is declining (for this declining EROI, see Chap. 5, Sect. 5.5), low-carbon renewable energies are not yet, and may never be, as
efficient as oil. This means that the energy transition, while urgent and imperative, will further distance us from a
circular economy. According to calculations by Dominique Guyonnet, "to provide one kWh of electricity through land-based wind
energy requires about 10 times more reinforced concrete and steel and 20 times more copper and
aluminum than a coal-fired thermal power plant” (Madeline 2016). The only way, therefore, to lessen the environmental impact
of capitalism is to reduce, in absolute terms, the consumption of energy and goods by the richest 10% or 20% of the planet.
This is incompatible with capitalism’s basic mechanism of expansive functioning and with the worldview that it sells to society.
(2) The Law of Resources Pyramid
The increasing scarcity of certain inputs and the need to secure their large-scale and low-cost supply nullify
the potential benefits of various green initiatives taken on by companies. These cannot, in fact, evade the law of the resources pyramid, described by Richard Heinberg (2007):

The capstone [of the pyramid] represents the easily and cheaply extracted portion of the
resource; the next layer is the portion of the resource base that can be extracted with more
difficulty and expense, and with worse environmental impacts; while the remaining bulk of the
pyramid represents resources unlikely to be extracted under any realistic pricing scenario

This law of the resources pyramid can be stated in an even simpler form: in capitalism, the logic of capital accumulation and surplus, together with the growing scarcity of finite natural resources, necessarily exacerbates the negative environmental impact of economic activity.
(3) The Impossibility of Internalizing the Environmental Cost

What makes it specifically impossible for corporations to submit themselves to the environmental imperative is the
impossibility of “internalizing” the costs of increasing environmental damage that they bring about. Methodologies
to “price” nature are now multiplying. But whatever the methodology (always based on the assumption that the value of nature is reducible to a market price), the result is the same:
it is impossible for corporations to internalize their environmental cost because the total value
generated by their activity is often less than the monetary expression of the value of the natural heritage
that was destroyed by that activity. A report prepared for The Economics of Ecosystems and Biodiversity (TEEB), titled Natural Capital at Risk: The Top 100 Externalities of Business (2013), shows that:
The estimated cost of land use, water consumption, GHG emissions, air pollution, land and water pollution and waste for the world’s primary sectors amounts to almost US$7.3 trillion. The analysis takes account of impacts under standard operating practices, but excludes the cost of, and risk from, low-probability, high-impact catastrophic events. (…) This equates to 13% of global economic output in 2009. Risk to business overall would be higher if all upstream sector impacts were included.
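
The TEEB figures can be cross-checked with a single division: the US$7.3 trillion of unpriced costs, stated to equal 13% of 2009 global output, implies the size of that output. The sketch below is our own sanity check, not part of the report:

```python
# Cross-check of the Natural Capital at Risk figures quoted above.
unpriced_costs_usd = 7.3e12   # estimated externalities of the primary sectors
share_of_output = 0.13        # stated share of 2009 global economic output

implied_global_output = unpriced_costs_usd / share_of_output
print(f"implied 2009 global output: ~US$ {implied_global_output / 1e12:.0f} trillion")
# ~US$56 trillion, on the order of commonly cited estimates of world GDP in 2009.
```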

13.4 Regulation by a Mixed Mechanism

Let us now examine the second general impossibility of sustainable capitalism mentioned in the beginning of this chapter: sustainability achieved through regulatory frameworks negotiated between organized sectors of civil society, on the one hand, and states and corporations, or state–corporations, on the other. Here we touch on the punctus dolens of all the problems discussed in this chapter and even in this book: the impossibility, at least up to now, of this second route stems from the lack of parity between the two parties, a necessary condition for an effective
negotiation.

There is still a huge gap between science and societies’ perception of reality. The latter continue to consider the issue of climate change and the decline of biodiversity as non-priority issues in their list of concerns and expectations. But this is changing very quickly. Increasingly aware of the bankrupt planet being bequeathed to them by the values and paradigms of globalized capitalism, society as a whole, led by youth, is beginning to mobilize around the idea of an alternative paradigm, characterized by the subordination of the economy to ecology. Their movements have
been gaining much more momentum and global reach than had been foreseen until recently by even the most optimistic. It is true that, up to now, their protests and claims have not slowed GHG emissions, nor the decline in biodiversity, nor the pollution of soils, water, and air. But history, it never hurts to repeat, is unpredictable, and a sudden paradigm shift in civilization, capable of overcoming the imperative of economic growth and anthropocentrism, may be closer than ever before. In any case, the possibility of mitigating the ongoing collapse is dependent on the
strengthening of these socio-environmental movements, society’s greater awareness of the extreme severity of the current situation, and the ability to impose on state–corporations policies that are congruent with the current state of urgency.

13.4.1 The State and the Financial System

We must recognize that, on the other side of the battlefield, the constituted powers are increasingly united in defending themselves. We should not expect from the state any initiatives that might lead corporations toward activities with low environmental impact. We saw in the Introduction that there appears to be a true transformation toward a new type of state that is the partner, creditor, and debtor of corporations: the state–corporation. Contrary to the 1929 crisis, which led to the New Deal in the United States and to a new role of the state on the international stage,
the financial crisis unleashed in 2008 displayed the impotence of the state and the loss of its identity. Rather than regulating financial activity, governments embarked on the most comprehensive bailout operation of banks. Since September 2008, the bulk of US and European financial resources has been used to bail out the banking system and “calm the markets.” As a July 2011 document from the US Government Accountability Office (GAO) shows, from December 1, 2007, to July 21, 2010, the Federal Reserve Bank (FED) had, through various emergency programs and
other assistance provided directly to institutions facing liquidity strains, given out loans that amounted to US$ 1.139 trillion.5 The momentum of the crisis has led banks, more than ever, to take control of the state and raid its resources. According to a GAO report on conflict of interest, requested by Senator Bernie Sanders and published by him on June 12, 20126:

During the financial crisis, at least 18 former and current directors from Federal Reserve Banks worked in banks and corporations that collectively received over US$4 trillion in low-interest loans from the Federal Reserve.

According to information reported by Bloomberg, as of March 2009, the Federal Reserve had pledged US$ 7.7 trillion in guarantees and credit limits to the North American financial system (Ivry et al. 2011). As shown in Table 8 of the GAO Report to Congressional Addressees cited above, between December 1, 2007, and July 21, 2010, funds from FED emergency programs were mobilized by 21 US and European banks in the form of transactions not adjusted for term, with an aggregate value of US$ 16.115 trillion. As George Monbiot asked in 2011, “Why is it so easy to save the banks, but so hard to save the biosphere?” The question has an unambiguous answer: because saving banks and other corporations has become a primary function of states. According to Moody’s Bank Ratings 2012, which assessed 7 banks in Germany in June 2012 and an additional 17 in July 2012 (plus seven in The Netherlands), even the richest banks in Europe cannot manage their losses alone and cannot strategically survive without the state’s safety net.

13.4.2 The Obsolescence of the Statesman

There is no longer any place in the state for the classic figure of the statesman. Voters complain about the increasing corruption of parties, the loss of values and principles, and politicians’ venal attachment to state benefits. They also complain about managerial incompetence, disloyalty, or lack of leadership of heads of state who betray their ideological profiles and break the promises that galvanized their electoral victories. It has become common to compare yesterday’s politicians with their successors in a way that is always disadvantageous to the latter: De Gaulle with
Hollande or Macron, Churchill with Cameron or Boris Johnson, Franklin D. Roosevelt with Obama or Trump, Adenauer with Merkel, De Gasperi with Berlusconi, Renzi, or Giuseppe Conte, etc. But it would be absurd to suppose that societies have lost the ability to produce personalities equal to the great statesmen who led Western democracies at critical moments in their history. What has been lost is the power of the state as the quintessential place of power and of political representation.

13.4.3 Threats to the Democratic Tradition of Political Representation

The idea that political leaders are provisional bearers of a mandate granted to them by the governed, the idea, in short, of political representation, a cornerstone of the democratic tradition born in Athens and expanded by universal suffrage in the contemporary age, obviously continues to be the only legitimate form of power that the state can exercise and must always be furthered. Nevertheless, this legitimacy is critically endangered in our time. By deterritorializing power and shifting strategic decisions to the anonymous boards of corporations (whereby states and their
resources are activated to finance and execute these decisions), the globalization of capitalism produces, along with the chronic indebtedness of national states, a progressive transformation in the historical meaning of state political power. With statesmen being unable to set the conduct for and impose boundaries on corporations, popular mandates are increasingly becoming the loci of spectacular ritualizations of power, and its dignitaries are increasingly masters in the art of gesticulation. The meaning of the term “representation” exercised by the representatives of
popular votes is, thus, increasingly understood in its pantomimic sense.

13.4.4 State Indebtedness

The international financial network controls states, mainly through their debts. Total government debt reached US$ 59 trillion in 2016 and exceeded US$ 65 trillion in 2018, up from US$ 37 trillion just a decade ago (Cox 2019). This debt has ballooned since the 2008 financial crisis, reaching levels never seen before in peacetime (Stubbington 2019). The dramatic leaps in public debt over the past decade, in both advanced and (so-called) emerging economies, are described in Fig. 13.2, according to data from the Bank for International Settlements (BIS).

Based on data from 2016 to 2017, of the 153 nations listed by the IMF or the CIA World Factbook, 102 now have public debts that exceed 50% of their GDP, 32 have public debts that exceed 80% of their GDP, and 16 countries, including the United States, Japan, and Italy, have debts that exceed 100% of their GDP: “the world’s major economies have debts on average of more than 70% of GDP, the highest level of the past 150 years, except for a spike around the second world war” (Stubbington 2019). François Morin (2015) shows how 28 large banks, resulting from
successive mergers spurred by globalization and the deregulation of the Reagan-Thatcher era, have a combined balance sheet of US$ 50.3 trillion. These major banks have been labelled global systemically important banks (G-SIBs) by the Financial Stability Board (FSB), created in 2009 during the G20 summit in London. According to François Morin, they have fraudulently colluded into an oligopoly that he equates to a worldwide hydra (De Filippis 2015):

Public debt plagues every major country. The private and toxic debts were massively transferred to states during the last financial crisis. This public over-indebtedness, linked exclusively to the crisis and to these banks, explains—while completely denying the causes of the crisis—the policies of rigor and austerity applied everywhere. [...] States are not only disciplined by markets but, above all, they are hostage to the world hydra.

[[FIGURE 13.2 OMITTED]]

Corporations manage European public debt through a vicious cycle: (1) Prohibited by its statutes and by the Lisbon Treaty from buying government bonds directly from insolvent states, the European Central Bank (ECB) instead buys them from banks in the secondary market, thus improving the balance sheet of these banks and avoiding the next systemic banking crisis. In addition, the ECB gives loans to banks at rates of 1% to 1.5%, obtaining “junk” bonds or high-risk government bonds as a guarantee.7 (2) Recapitalized, banks lend “new” money to defaulting states so that they (3) avoid default and repay their creditors. (4) In this way, banks can continue to finance states at higher interest rates, since the state is poorly rated by credit rating agencies. To be able to pay off their debts, states (5) sacrifice their investments and public services for the imperative of reducing the budget deficit and public debt. The so-called austerity measures (6) weaken the economy and lower revenue, which (7) pushes states into default, completing the vicious cycle and taking it to the next level.
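The self-reinforcing character of steps (4)–(7) can be illustrated with a minimal debt-dynamics sketch. The parameters below are hypothetical round numbers chosen only to show the mechanism described above, namely that when the interest rate charged to a poorly rated state exceeds its growth rate, and austerity further depresses growth, the debt-to-GDP ratio rises even as the state cuts spending.

```python
# Minimal, illustrative simulation of the austerity debt spiral described above.
# All parameters are hypothetical; this is a sketch of the mechanism, not a forecast.

def simulate(years=10, debt_ratio=1.0, interest=0.05, growth=0.02,
             primary_surplus=0.01, austerity_drag=0.015):
    """Evolve debt/GDP when creditors charge `interest`, the economy grows at
    `growth` minus an `austerity_drag` caused by spending cuts, and the state
    runs a `primary_surplus` (share of GDP) to satisfy creditors."""
    effective_growth = growth - austerity_drag
    for year in range(1, years + 1):
        # Standard debt-dynamics identity: d' = d * (1 + r) / (1 + g) - primary surplus
        debt_ratio = debt_ratio * (1 + interest) / (1 + effective_growth) - primary_surplus
        print(f"Year {year:2d}: debt/GDP = {debt_ratio:.2%}")
    return debt_ratio

simulate()
```

With these placeholder values the ratio climbs every year despite the primary surplus, which is the arithmetic behind the claim that austerity "takes the cycle to the next level" rather than breaking it.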

“Those in insolvency have to sell everything they have to pay their creditors,” said Joseph Schlarmann, one of the leaders of the CDU, the party that leads Angela Merkel’s coalition in Germany. This diktat led Greece to sell the island of Oxia in the Ionian Sea to Shaykh Hamad bin Khalifa al-Thani, the emir of Qatar, who bought it for the paltry sum of five million euros. Other Greek islands (out of a total of six thousand), such as Dolicha, were also put up for sale. The natural, territorial, and cultural heritage of Mediterranean Europe is considered by creditors to be little more than a
bankrupt estate. This type of negligence of Mediterranean civilizational heritage gave rise to bitter assessments done by Salvatore Settis (2002) and Silvia Dell’Orso (2002), this time on the Italian state’s abandonment of its responsibilities toward the nation’s extraordinary cultural memory. Formerly, the state guaranteed citizens the enjoyment of their heritage and devotion to their monuments through museums and the educational system. Having custody over and conserving this memory, the state was the nexus between the generations, and, through research, it also
promoted the critical historical revision of this heritage.8 Today, even when it does not simply sell this natural, territorial, or cultural heritage, the state–corporation denatures it by conceiving it as an input for tourism to be managed according to the industry’s profitability motives.

13.4.5 Tax Evasion, the Huge “Black Hole”

The impoverishment of states comes, above all, from tax evasion. In 2000, an article published in the newspaper Libération estimated that approximately six trillion euros worth of resources were diverted to 65 tax havens, with an increase of 12% per year over the previous 3 years (1997–1999). According to a July 2012 report by economists from the Tax Justice Network (TJN):

At least $21 trillion of unreported private financial wealth was owned by wealthy individuals via tax havens at the end of 2010. This sum is equivalent to the size of the United States and Japanese economies combined. There may be as much as $32 trillion of hidden financial assets held offshore by high net worth individuals (HNWIs), according to our report The Price of Offshore Revisited which is thought to be the most detailed and rigorous study ever made of financial assets held in offshore financial centres and secrecy
structures. We consider these numbers to be conservative. This is only financial wealth and excludes a welter of real estate, yachts and other non-financial assets owned via offshore structures. (…) The number of the global super rich who have amassed a $21 trillion offshore fortune is fewer than 10 million people. Of these, less than 100,000 people worldwide own $9.8 trillion of wealth held offshore. (…) This at a time when governments around the world are starved for resources.

In 2008, Edouard Chambost, an expert on the subject, stated that “55% of international trade or 35% of financial flows pass through tax havens.”9 Gabriel Zucman (2014) of the London School of Economics estimates that states lose US$ 190 billion every year to tax evasion. What the tax havens and financial opacity show is the impotence of states and, at the same time, their complicity in relation to the power of corporations. According to Thomas Piketty (2016):

Unfortunately, in this area there is a huge gap between the triumphant declarations of governments and the reality of what they actually do. (…) The truth is that almost nothing has been done since the crisis in 2008. In some ways, things have even got worse. (…)

Thus, another vicious cycle, complementary to the one described above, is set up between corporations, investors, and big fortunes. These (1) divert to tax havens a considerable portion of the taxes they owe, and, through the banks that receive these funds, (2) provide loans to states at high interest rates. Such interest (3) further puts states at the mercy of creditors. From de jure creditors of corporations, states become their chronic debtors, which, finally, (4) fuels the ideology that social democracy is unfeasible because it generates leviathan states that are
deficit-prone and wasteful.

And, as if this vicious cycle was not enough, part of the state’s revenue is directed to subsidize or finance—through the public treasury, public “development” banks, and tax exemptions—agribusiness, the auto industry, big mining and fossil fuel projects, the military–industrial complex, and other segments with a high concentration of corporate capital that have a deadly environmental impact. The current deterioration of states’ financial health is comparable only to the situation at the end of World War II, when public finances had been destroyed. The difference,
however, is that the destruction of the biosphere eliminates the prospect of a new cycle of economic growth, like the one that occurred in 1947–1973.

13.4.6 What to Expect from States?

In this context, what efforts can we still expect from states regarding the environmental regulation of corporate activity? Can the Brazilian state be expected—even before the election of Jair Bolsonaro—to implement an active energy transition policy and/or a policy to protect the country’s forests? What can be expected from the United States, with its debt of US$ 16 trillion in December 2012, which then jumped to US$ 19 trillion in February 2016 and was expected to exceed US$ 23 trillion by 2020? In 2018, Trump signed a US$ 1.2 trillion plan to overhaul the entire US nuclear arsenal and authorized a new nuclear warhead for the first time in 34 years (Hennigan 2018). To sustain the country’s military–industrial corporate complex, one of the most polluting and environmentally unsustainable, its “defense” budget must be the third largest item in the national budget. Dwight Eisenhower had already warned about this in his famous farewell address to the nation in 1961:

This conjunction of an immense military establishment and a large arms industry is new in the American experience. The total influence—economic, political, even spiritual—is felt in every city, every Statehouse, every office of the Federal government. We recognize the imperative need for this development. Yet, we must not fail to comprehend its grave implications.
What Eisenhower called in 1961 the military–industrial complex has completely taken over the United States and is now best known as the military–industrial–congressional complex (MICC). The civil and military arms industry lobby maintains control over Congress, with the latter approving funds that are not even required by the military, such as financing the production of Abrams battle tanks which the army says it does not want, since the current fleet of 2400 units is only about 3 years old (Lardner 2013). Similarly, in France, although the Centre International de
Recherche sur le Cancer (CIRC) has been claiming since 1988 that diesel is carcinogenic, the French government, followed by Germany, continues to subsidize some models of diesel engines, making France the country with the highest percentage (61%) of diesel-powered vehicles in the world. In conclusion, can we expect states to impose efficient environmental controls on the network of corporations that control them? Notwithstanding timid advances, the answer is essentially a negative one.

13.5 A Super-entity. The Greatest Level of Inequality in Human History

When comparing corporate vs. government revenues, it becomes very clear that corporate power is greater than that of states. According to the 2017 Fortune Global 500, revenues of the world’s 500 largest corporations amounted to US$ 28 trillion, the equivalent of 37% of the world’s GDP in 2015. As revealed by the NGO Global Justice Now (2016), “69 of the world’s top economic entities are corporations rather than countries in 2015,” and 10 of the top 28 economic entities are corporations. “When looking at the top 200 economic entities, the figures are even more
extreme, with 153 being corporations.”10 Nick Dearden, director of Global Justice Now, declared: “The vast wealth and power of corporations is at the heart of so many of the world’s problems – like inequality and climate change. The drive for short-term profits today seems to trump basic human rights for millions of people on the planet.”11

Even more important than the power of an isolated corporation is the obscure and highly concentrated power within the corporate network itself. This network is controlled by a caste that is barely visible and that is impervious to the pressures of governing parties and societies. Their investment decisions define the destinies of the world economy and, therefore, of humanity. This is what research on the architecture of the international ownership network, conducted by Stefania Vitali, James B. Glattfelder, and Stefano Battiston (2011) from the Eidgenössische Technische
Hochschule (ETH) in Zurich, has shown. Based on a list of 43,060 transnational corporations (TNCs), taken from a sample of about 30 million economic actors in 194 countries contained in the Orbis 2007 database, the authors discovered that “nearly 4/10 of the control over the economic value of TNCs in the world is held, via a complicated web of ownership relations, by a group of 147 TNCs in the core, which has almost full control over itself.” These 147 conglomerates occupy the center of a tentacular power structure, as these authors further elaborate:

We find that transnational corporations form a giant bow-tie structure and that a large portion of control flows to a small tightly-knit core of financial institutions. This core can be seen as an economic “super-entity” that raises new important issues both for researchers and policy makers. (…) About 3/4 of the ownership of firms in the core remains in the hands of firms of the core itself. In other words, this is a tightly-knit group of corporations that cumulatively hold the majority share of each other.

13.5.1 An Emerging Subspecies of Homo sapiens: The UHNWI

The concentration of so much economic power in the hands of a numerically insignificant caste is unprecedented in human history. By combining data from the Crédit Suisse Global Wealth Report pyramid (already presented in the Introduction, see Fig. 1.1) with the Wealth-X and UBS World Ultra Wealth Report 2014 and the two Oxfam International reports (2014 and 2015), we can examine this trend in more detail. As seen in the Introduction, in 2017, at the top of the global wealth pyramid, 0.7% of adults, or 36 million individuals, owned 45.9% of the world’s wealth (US$
128.7 trillion). Penetrating into the vertex of this global asset pyramid, we see that in this group of 36 million (with assets worth over US$ 1 million), there are 211,275 ultra-high net-worth individuals (UHNWI)—those with assets of at least US$ 30 million—corresponding to 0.004% of adult humanity. Their assets alone total US$ 29.7 trillion, assets that, moreover, increased by 7% in 2014 compared to the previous year (64% of UHNWI are in North America and Europe, while 22% are in Asia).

Now, with the aid of a magnifying lens provided by two listings (from Forbes Magazine and the Bloomberg Billionaires Index), we will move up to examine the most exclusive stratum of this UHNWI club. In 2013, Forbes Magazine listed 1426 billionaires amassing US$ 5.4 trillion, which is equivalent to the GDP of Japan (the world’s third largest GDP). Forbes Magazine 2018 lists 2208 billionaires with US$ 9.1 trillion, an 18% increase in wealth compared to the previous annual survey. The Bloomberg Billionaires Index contains an even more stratospheric list: the world’s 300
richest individuals had a wealth of US$ 3.7 trillion as of December 31, 2013. These 300 people became even richer throughout 2013, adding another US$ 524 billion to their net worth. From the top of this nanopyramid, situated at the extreme end of the Crédit Suisse pyramid, Table 13.1 allows us to contemplate the general picture of human inequality in the current phase of capitalism.

[[TABLE 13.1 OMITTED]]

[[TABLE 13.2 OMITTED]]


In 2014, Oxfam showed that the 85 richest individuals on the planet had a combined wealth of more than US$ 1.7 trillion, which is equivalent to the wealth held by 3.5 billion people, the poorest half of humanity. The concentration of these assets continues to grow at a staggering rate, as shown in Table 13.2, which tracks the declining number of individuals whose combined wealth equals that of the bottom 3.6 billion people (the poorest half of humanity, which is becoming increasingly poorer).

According to the Oxfam report, An Economy for the 1% (2016),

The wealth of the richest 62 people has risen by 45% in the five years since 2010 – that’s an increase of more than half a trillion dollars ($542bn), to $1.76 trillion. Meanwhile, the wealth of the bottom half fell by just over a trillion dollars in the same period – a drop of 38%.

Oxfam’s 2015 report stated that in 2014, the richest 1% owned 48% of global wealth, leaving just 52% to be shared between the other 99% of adults on the planet:

Almost all of that 52% is owned by those included in the richest 20%, leaving just 5.5% for the remaining 80% of people in the world. If this trend continues of an increasing wealth share to the richest, the top 1% will have more wealth than the remaining 99% of people in just two years.

This prognosis was confirmed in the following year. The 2016 report (An Economy for the 1%) states that “the global inequality crisis is reaching new extremes. The richest 1% now have more wealth than the rest of the world combined.” Bill Gates’s fortune, estimated at US$ 78.5 billion (Bloomberg), is greater than the GDP of 66% of the world’s countries. In present-day Russia, 110 people own 35% of the country’s wealth.12 Another way to understand this extreme concentration of wealth is to look at the large financial holding companies. Seven of the largest US financial
holding companies (JPMorgan Chase, Bank of America, Citigroup, Wells Fargo, Goldman Sachs, MetLife, and Morgan Stanley) have more than US$ 10 trillion in consolidated assets, representing 70.1% of all financial assets in the country (Avraham et al. 2012).13 Also according to Oxfam, among the countries for which data is available, Brazil is the one with the greatest concentration of wealth in the hands of the richest 1%. Six billionaires in the country hold assets equivalent to that of the poorest half of the Brazilian people, who saw their share of national wealth further
reduced from 2.7% to 2% (Oxfam 2017).

This emerging subspecies—the 0.004% of the human species known by the acronym UHNWI—owns the planet. Their economic and political power is greater than that of those who have a popular mandate in national governments. Their domination is also ideological, since economic policies are formulated—and evaluated by most opinion leaders—to benefit the business strategies of this caste. Its power surpasses— in scale, reach, and depth and in a way that is transversal and tentacular— everything that the most powerful rulers in the history of pre-capitalist societies
could ever have conceived of or had reason to desire. Additionally, the power of these corporations is infinitely disproportionate to their social function of job creation. In 2009, the 100 largest among them employed 13.5 million people, that is, only 0.4% of the world’s economically active population, estimated by the International Labour Organization at 3.210 billion potential workers. Who in such circumstances can still uphold the historical validity of the so-called Kuznets curve? As is well known, Simon Kuznets (1962) claimed, in the 1950s, that as an economy develops,
market forces first increase and then decrease economic inequality. In reality, all the actions of this plutocratic super-entity are guided by a single motto: defend and increase its wealth. Its interests are, therefore, incompatible with the conservation of the biophysical parameters that still support life on our planet.

13.6 “Degrowth Is Not Symmetrical to Growth”

Since the 1960s, evidence of the incompatibility between capitalism and the biophysical parameters that enable life on Earth has been recognized by experts from various disciplines. Some Marxist scholars (or those in close dialogue with Marx) belonging to two generations, from Murray Bookchin (1921–2006) and André Gorz (1923–2007) to a range of post-1960s scholars, such as John Bellamy Foster, Fred Magdoff, Brett Clark, Richard York, David Harvey, Michael Löwy, and Enrique Leff, clearly understand that the current historical situation is essentially characterized by an opposition between capitalism and conservation of the biosphere. In the preface to a book that is emblematic of this position (The Ecological Rift: Capitalism’s War on the Earth, 2010), John Bellamy Foster, Brett Clark, and Richard York write14: “a deep chasm has opened up in the metabolic relation between human beings and nature—a metabolism that is the basis of life itself. The source of this unparalleled crisis is the capitalist society in which we live.”

But capitalism is perceived as an environmentally unsustainable socioeconomic system also by those for whom Marx is not a central intellectual reference. Pascal Lamy and Yvo de Boer were already mentioned above. For them there is an unmistakable causal link between capitalism and the exacerbation of environmental crises. In fact, there are a number of non-Marxist thinkers who subscribe to this evidence. Two generations of pioneer thinkers, born between the beginning of the century and the interwar period, have laid the foundation for the understanding that capitalist accumulation is disrupting the climate system, depleting the planet’s mineral, water, and biological resources, and provoking multiple disruptions in ecosystems and a collapse of biodiversity. We will point out the names of great economists, such as Kenneth E. Boulding, Nicholas Georgescu-Roegen, and Herman Daly; geographers like René Dumont (1973); key philosophers on ecology, such as Michel Serres and Arne Naess; theistic philosophers, such as Hans Jonas and Jacques Ellul; biologists, such as Rachel Carson and Paul and Anne Ehrlich (Tobias 2013); or a Christian-educated polymath and ecologist, Ivan Illich.

Inspired by the writings of these thinkers who shaped the critical ecological thinking of the second half of the twentieth century, studies on socio-environmental crises are growing today. These studies share an awareness that the imperative of economic growth is increasingly threatening the maintenance of an organized society. Included here are authors of diverse ideological backgrounds and range of expertise, from Claude Lévi-Strauss, Edgar Morin (2007), Cornelius Castoriadis, and Richard Heinberg to Vittorio Hösle, Serge Latouche, and Hervé Kempf, who has a book suggestively titled Pour Sauver la Planète, Sortez du Capitalisme [To Save the Planet, Leave Capitalism] (2009). In A User’s Guide to the Crisis of Civilization: And How to Save It (2010), Nafeez Mosaddek Ahmed hit the bull’s eye: “Global crises cannot be solved solely by such minor or even major policy reforms – but only by drastic reconfiguration of the system itself.” More recently, Hicham-Stéphane Afeissa (2007, 2012) has been working extensively on an overview of deep ecological thinking in philosophy, especially in the twentieth century.
13.6.1 The Idea of Managed Degrowth

The idea of managed degrowth, which implicitly or explicitly unites the names mentioned above, appears to be the most significant proposal today, perhaps the only one, that would be effective in creating a viable society. Any
decreased human impact on the Earth system obviously requires abandoning the meat-based food system and the fossil fuel-based energy system. Moreover, the idea of degrowth rests on three assumptions, which, if not properly
understood, would make degrowth seem absurd.

The first assumption is that economic downturn, far from being an option, is an inexorable trend. Precisely because we are depleting the planet’s biodiversity and destabilizing the environmental coordinates that have prevailed in the Holocene, global economic growth rates are already declining when compared to the 1945–1973 average, as Gail Tverberg has shown (see Introduction, Sect. 1.7. The Phoenix that Turned into a Chicken). According to the World Bank, in the 2013–2017 period, the average growth of the global economy was 2.5%; this is a 0.9% decrease from the average a decade ago.15 This tendency toward declining growth is inescapable. The current pandemic (SARS-CoV-2) will only accelerate this process of decline, as the economy will find it increasingly difficult to recover from its next crises. Conscious that the developmental illusion is pushing the services rendered by the biosphere toward collapse, supporters of degrowth realize that a managed degrowth would be the only way to prevent economic and socio-environmental collapse, one that will be more brutal and deadly the longer it is denied or underestimated.
The second assumption is that administered degrowth is essentially anti-capitalist. The idea of degrowth within the framework of capitalism was rightly defined by John Bellamy Foster (2011) as an impossibility theorem. Finally, the third assumption is that managed degrowth is not simply about a quantitative reduction in GDP. First of all, it means qualitatively redefining the objectives of the economic system which should be about adapting human societies to the limits of the biosphere and of natural resources. Obviously, this adaptation implies investments in places and
in countries that lack basic infrastructure and, in general, an economic growth that is crucial for the transition to energy and transport systems that have a lower environmental impact. But these are localized investments that are oriented toward reducing environmental impacts (sanitary infrastructure, abandoning the use of firewood, public transportation, etc.); it is never about growth for the sake of growth.

Serge Latouche devoted almost all of his work to the question of degrowth (2004, 2006, 2007, 2012 and 2014). He explains (2014) the link between degrowth and overcoming capitalism: “The degrowth movement is revolutionary and anti-capitalist (and even anti-utilitarian) and its program is fundamentally a political one.” Degrowth, as the same author insists, is the project of building an alternative to the current growth society: “this alternative has nothing to do with recession and crisis … There is nothing worse than a growth society without growth. [...] Degrowth is not
symmetrical to growth.” The most poignant formulation on the incompatibility between capitalism and sustainability comes from the theories of two economists, developed before the emergence of the concepts of sustainability and degrowth: (1) the theory by Nicholas Georgescu-Roegen in 1971 on the increasing generation of entropy from economic activity and, a fortiori, from an economy based on the paradigm of expansion and (2) the theory, developed in 1966 by Kenneth E. Boulding in The Economics of the Coming Spaceship Earth, on the need to move beyond an open economy (cowboy economy) toward a closed economy (spaceman economy).16 We will return to this last theory in the next chapter. For now, it suffices to cite a central passage from this text in which Boulding shows that in the capitalist economy, production and consumption are seen as good things, whereas in the economy toward which we should aspire—the closed economy or spaceman economy—what matters is minimizing throughput, namely, the rate of transformation (carried out by the economic system) of raw materials into products and pollution. This means minimizing both production and consumption, which clearly conflicts with the capitalist view of the economic process:

The difference between the two types of economy becomes most apparent in the attitude towards consumption. In the cowboy economy, consumption is regarded as a good thing and production likewise; and the success of the economy is measured by the amount of the throughput from the “factors of production,” a part of which, at any rate, is extracted from the reservoirs of raw materials and noneconomic objects, and another part of which is output into the reservoirs of pollution. If there are infinite reservoirs from which
material can be obtained and into which effluvia can be deposited, then the throughput is at least a plausible measure of the success of the economy. (…) By contrast, in the spaceman economy, throughput is by no means a desideratum, and is indeed to be regarded as something to be minimized rather than maximized. The essential measure of the success of the economy is not production and consumption at all, but the nature, extent, quality, and complexity of the total capital stock, including in this the state of the human
bodies and minds included in the system. In the spaceman economy, what we are primarily concerned with is stock maintenance, and any technological change which results in the maintenance of a given total stock with a lessened throughput (that is, less production and consumption) is clearly a gain. This idea that both production and consumption are bad things rather than good things is very strange to economists, who have been obsessed with the income-flow concepts to the exclusion, almost, of capital-stock concepts.

The fundamental unsustainability of capitalism is demonstrated not only through the arguments of Georgescu-Roegen and Boulding but also by Herman Daly’s impossibility theorem (1990). He claims that the impossibility—obvious, but not necessarily accepted in its consequences—of an economy based on the expanded reproduction of capital in a limited environment occupies a position in economic theory that is equivalent to the fundamental impossibilities of physics:

Impossibility statements are the very foundation of science. It is impossible to: travel faster than the speed of light; create or destroy matter-energy; build a perpetual motion machine, etc. By respecting impossibility theorems we avoid wasting resources on projects that are bound to fail. Therefore economists should be very interested in impossibility theorems, especially the one to be demonstrated here, namely that it is impossible for the world economy to grow its way out of poverty and environmental degradation. In other words, sustainable growth is impossible.
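The common core of the Georgescu-Roegen, Boulding, and Daly arguments can be restated as a simple throughput identity. The notation below is ours, added only to make the logic explicit; it does not appear in the quoted texts.

```latex
% Notation (ours, not the authors'): Y = output, i = material-energy intensity
% of output, T = total throughput drawn from and returned to the biosphere.
T \;=\; Y\, i
\qquad\Longrightarrow\qquad
\frac{\dot{T}}{T} \;=\; \frac{\dot{Y}}{Y} \;+\; \frac{\dot{i}}{i}.
% Keeping throughput within a fixed biophysical budget (\dot{T} \le 0) while output
% grows at a rate g > 0 requires -\dot{i}/i \ge g in every period, without end.
% Because intensity has a positive thermodynamic floor (i \ge i_{\min} > 0), that
% decline cannot be sustained indefinitely -- the substance of Daly's theorem.
```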


13.7 Conclusion

What delays a broader acceptance of this set of reflections is not any argument in favor of capitalism, but the mantra of the absence of alternatives to it. Such is the hypnotic power of this mantra that even the scholars most aware of the links between environmental crisis and economic activity cling to the absurd idea of a “sustainable capitalism,” an expression that Herman Daly aptly called “a bad oxymoron—self-contradictory as prose, and unevocative as poetry.” It turns out that it is possible, and more than ever necessary, to overcome the idea that the failure of socialism has left society with no other alternative but to surrender to capitalism. Human thinking is not binary, and the unfeasibility of the socialist experience in the twentieth century does not ipso facto imply the viability of capitalism.

B---Recessions don’t make war more likely, but they do make them smaller
Jianan Liao 19, Shenzhen Nanshan Foreign Language School, February 2019, “Business Cycle and War:
A Literature and Evaluation,” https://dx.doi.org/10.2991/ssmi-18.2019.37
Through the comparison of the two views, it can be found that both sides are too vague in their description of the concept of the business cycle. According to economists such as Joseph Schumpeter, the business cycle is divided into four phases: expansion, crisis, recession, recovery. [12] Although there are disagreements over the division and naming of the business cycle, it is certain that its phases are not simply divided into two stages of rise and recession. However, as mentioned above, scholars who discussed the relationship between the business cycle and war often failed to divide the business cycle into four stages in detail to analyze the relationship.

First, war can occur at any stage of expansion, crisis, recession, or recovery, so it is unrealistic to assume that wars occur only at one particular stage of the business cycle. On the one hand, although the domestic economic problems of the crisis/recession/depression period break out and become prominent in a short time, in fact, such challenges exist at all stages of the business cycle. When countries cannot manage to solve
these problems through conventional approaches, including fiscal and monetary policies, they may resort to
military expansion to achieve their goals, a theory known as Lateral Pressure. [13] Under such circumstances, even
countries in the period of economic expansion are facing downward pressure on the economy and may
try to solve the problem through expansion. On the other hand, although the resources required for foreign
wars are huge for countries in economic depression, the decision to wage war depends largely on a weighing of the gains and losses of war. Even during a depression, governments can raise funding for war by issuing bonds. Argentina, for example, was mired in economic stagflation before the war over the Malvinas Islands (known as the Falkland Islands in the UK). In fact, many governments dramatically increase their expenditure to stimulate the economy during a recession, and, economically, war functions in the same way as these policies, so the claim that a depressed economy cannot support a war is unfounded. In addition, during the crisis
period of the business cycle, which is the early stage of the economic downturn, despite the economic crisis and potential depression, the
country still retains the ability to start wars based on its economic and military power. Based on the above understanding, war has the
conditions and reasons for its outbreak in all stages of the business cycle.

Second, the economic origin for the outbreak of war is downward pressure on the economy rather than
optimism or competition among monopoly capital, which may exist during economic recession or economic prosperity. This is due to the fact that, during economic prosperity, people are also worried about a potential economic recession. Blainey pointed out that wars often occur in the economic upturn, which is caused by the optimism in people's minds [14],
that is, the confidence to prevail. This interpretation linking optimism and war ignores the strength contrast between the warring parties. Not
all wars are fought between evenly matched sides, and there have always been wars of unequal strength. In such a war, one of the parties tends to have an
absolute advantage, so the expectation of the outcome of the war is not directly related to the economic situation of the country. Optimism is
not a major factor leading to war, but may to some extent serve as a stimulant. In addition, Lenin attributed war to competition between monopoly capitals. This theory may seem plausible, but its scope of application is obviously too narrow. Lenin's theory of imperialism is only applicable to developed capitalist countries in the late stage of the development of capitalism, but in reality, many wars take place
among developing countries whose economies are still at their beginning stages. Therefore, the theory centered
on competition among monopoly capital cannot explain most foreign wars. Moreover, even wars that occur during periods of economic
expansion are likely to result from the potential expectation of economic recession, the "limits of growth" [15] faced during prosperity -- a
potential deficiency of market demand. So the downward pressure on the economy is the cause of war.

Third, the business cycle may be related to the intensity, rather than the outbreak, of war. Scholars who
supported the first two views did not pay attention to the underlying relationship between business cycle and the intensity of war. Some
scholars, such as Nikolai Kondratieff and Joshua Goldstein, believe that the business cycle is not directly related to the outbreak of war, but that wars which break out during the economic upswing appear to be more intense and persistent. In their analysis of the business cycle and war,
Kondratieff and Goldstein discovered that the
most dramatic and deadly wars occurred during periods of economic
upswing. This finding may provide some clues on the relationship between war and the business cycle. Although the relationship
between the outbreak of war and the business cycle is unclear, the scale of the war is likely to be
influenced by the exact phase of the business cycle in which the belligerents are engaged. Such a phenomenon might make sense, since countries in an economic upturn have greater fiscal capacity, making them more likely to wage large-scale wars. Moreover, such a relationship may also stem from the optimism pointed out by Blainey. While optimism may not directly lead to wars, it may have an impact on the choice of rivals. This is because
optimism about national strength and the outcome of the war may drive countries to choose stronger
rivals. The resulting war is likely to be far more massive and bloody. Nevertheless, more research is needed to
specifically reveal this relationship.

4---Growth increases war---both funds AND motivates aggression


Lucas Hahn 16, Bryant University, Senior Staff Accountant at EDF Renewables, “Global Economic
Expansion and the Prevalence of Militarized Interstate Disputes,” April 2016,
https://digitalcommons.bryant.edu/honors_economics/24/
Economic Factors Leading to Increased Militarized Interstate Disputes

Running counter to the arguments that global economic expansion has led to a decline in MIDs throughout the world, there is a large body of
literature that claims the exact opposite. In particular, some authors argue that the recent declines that have been observed are a direct result
of a decline in conflict after major spikes during the World Wars and the Cold War. The following section will highlight four different
economic factors that are potentially leading to an increase in MIDs. These four factors include: (1) imperialism and
resources, (2) the “War-Chest Proposition”, (3) Neo-Marxist views on asymmetrical trade, and (4) interdependence versus interconnectedness.

1. Imperialism and Resources

The presence of imperialism between the 17th and early 20th centuries was, in a way, a precursor to globalization today. During this period of
time the most developed nations worked to expand their empires and in doing so, began to connect the people of the world for the first time.
However, while there were many positive benefits of this expansion, there were also many negative consequences that led to violent conflict. As Arquilla (2009, 73) frames it, imperialism involved commercial practices (often supported by military force) that took advantage of the colonized
people and ultimately destroyed their way of life. Thus, the increased economic expansion that was brought about in order to build the empire,
often led to violent encounters.

More specifically, imperialism and the conquest of particular regions was often done in an effort to gain access to that region’s natural
resources. Authors such as Schneider (2014) state that undeveloped nations or regions are often subject to what he refers to as the “domestic
resource curse”. Basically, during the times of imperialism, the more powerful nations would go to undeveloped areas and take whatever they
wanted or needed from areas that were rich with resources5. This often involved a great deal of conflict and the native people were often
exploited. In modern times, the
presence of significant caches of natural resources, particularly in Africa, has been
shown to lead to violence as corrupt governments and warlords take advantage of those native to the area.
Additionally, as Barbieri (1996) points out, conflict over resources may not be limited to an imperialist nation’s encounter with the undeveloped
region. Violent
conflict can also exist between the multiple nations that are competing to gain access or
control over natural resources in a given area.
2. The “War Chest Proposition”
Building on the previous discussion, Boehmer (2010) proposes something that he calls the “War-Chest Proposition”. He
states that
economic growth can lead to increased military/defense spending and that this buildup of a nation’s
“war chest” may be used to pay for new or continuing military engagements (251). In other words, increased
economic power often leads to greater capabilities of the nation-state as a whole. This is particularly true
in terms of military capabilities and in this way, nations may thus be able to engage in more conflict.
Furthermore, he argues that positive economic expansion builds up the confidence of the nation to a point where it may feel invincible and thus engage in violent conflict that will help it to continue to expand.
3. Neo-Marxist Views on Asymmetrical Trade

One of the most supported arguments against the notion that economic expansion promotes peace is that trade, brought about by
economic expansion, actually increases MIDs. Many authors have in fact argued that increased economic
interdependence and increased trade may have, in some ways, “cheapened war”, and thus made it easier
to wage war more frequently (Harrison and Nikolaus 2012).

5---Growth increases human-animal interfaces which causes outbreaks


Dr Kwasi Bowi 9, president of the Ghana Veterinary Medical Association, “Globalisation: Catalyst for the
spread of zoonotic diseases” 8 May 2009.
http://www.modernghana.com/news/215180/1/globalisation-catalyst-for-the-spread-of-zoonotic-.html

Zoonotic diseases or zoonoses are those diseases and infections which are naturally transmitted from animals to man. Zoonosis is a
concept primarily useful to public health and veterinary disease control authorities. It defines an area of cooperative activity in both research
and in control. Mutual concern for zoonoses furnishes an opportunity for communication between the physician interested in disease in non-human animals and the veterinarian interested in disease in man. Central to the complex and close relationship linking humans and animals,
infectious diseases, especially zoonoses,
have played a part in shaping the destiny of [hu]mankind. Today, the
world's human and animal populations, in ever-increasing numbers and in a perpetual state of movement and interaction, have never been so close together: through nature, agriculture and livestock; through
the growth of trade in animals and animal products, and through our food. Yet globalisation also encourages the
circulation of pathogens and helps to make them more aggressive. The worldwide upsurge in animal
diseases, especially zoonoses, is a dangerous reality. Dramatic events and spectacular crises serve as a constant reminder of the devastating consequences of these emerging and re-emerging diseases and the fear they can arouse. Today, the natural or deliberate spread of these diseases is a threat without precedent in the history of mankind. There are over 134 zoonotic diseases of viral, bacterial, fungal, parasitic or rickettsial origin which can be transmitted from animals to man. These include rabies, anthrax, brucellosis, tuberculosis, clostridial diseases like tetanus, ringworm, Echinococcosis, Fish tapeworm, Pork Tapeworm, Salmonellosis and Highly Pathogenic Avian Influenza virus (HPAI), which has already caused outbreaks in 62 countries and caused the
death of about 140 million birds, 407 human infections with 254 deaths as at March 20, 2009. Modern
modes of transportation allow
more people and animal products to travel around the world at a faster pace. They also open airways to the
transcontinental movement of infectious diseases. One example of this is the West Nile virus. It is believed that this disease reached the United States
via "mosquitoes that crossed the ocean by riding in airplane wheel wells and arrived in New York City in 1999. With the use of air travel, people are able to go to
foreign lands, contract a disease and not have any symptoms of illness until they get home, having exposed others to the disease along the way.
Globalisation, the flow of goods, capital and people across political and geographic boundaries, has also helped to spread some of the deadliest infectious diseases known to humans, especially zoonotic ones. The spread of diseases across wide geographic scales has increased through history. In the current era of globalisation
the world is more interdependent than at any other time. Efficient and inexpensive transportation has
left few places inaccessible, increased global trade in agricultural products and brought more and more people in
contact with animal diseases that have subsequently jumped species barriers.

Growth causes zoonotic diseases.


Jeffrey D. Sachs 14, Professor of Sustainable Development, Health Policy and Management @ Columbia
University (Director of the Earth Institute @ Columbia University and Special adviser to the United
Nations Secretary-General on the Millennium Development Goals) “Important lessons from Ebola
outbreak,” Business World Online, August 17, 2014, http://tinyurl.com/kjgvyro

Ebola is the latest of many recent epidemics, also including AIDS, SARS, H1N1 flu, H7N9 flu, and others. AIDS is the
deadliest of these killers, claiming nearly 36 million lives since 1981. Of course, even larger and more sudden epidemics are
possible, such as the 1918 influenza during World War I, which claimed 50-100 million lives (far more than the war
itself). And, though the 2003 SARS outbreak was contained, causing fewer than 1,000 deaths, the disease was on the verge of deeply disrupting
several East Asian economies including China’s. There
are four crucial facts to understand about Ebola and the other
epidemics. First, most emerging infectious diseases are zoonoses, meaning that they start in animal populations,
sometimes with a genetic mutation that enables the jump to humans . Ebola may have been transmitted from bats;
HIV/AIDS emerged from chimpanzees; SARS most likely came from civets traded in animal markets in southern China; and influenza strains such
as H1N1 and H7N9 arose from genetic re-combinations of viruses among wild and farm animals. New
zoonotic diseases are
inevitable as humanity pushes into new ecosystems (such as formerly remote forest regions); the food industry
creates more conditions for genetic recombination; and climate change scrambles natural habitats and
species interactions. Second, once a new infectious disease appears, its spread through airlines, ships, megacities, and
trade in animal products is likely to be extremely rapid. These epidemic diseases are new markers of
globalization, revealing through their chain of death how vulnerable the world has become from the pervasive movement
of people and goods. Third, the poor are the first to suffer and the worst affected. The rural poor live closest to the infected animals that first transmit the
disease. They often hunt and eat bushmeat, leaving them vulnerable to infection. Poor, often illiterate, individuals are generally unaware of how infectious diseases
-- especially unfamiliar diseases -- are transmitted, making them much more likely to become infected and to infect others. Moreover, given poor nutrition and lack
of access to basic health services, their weakened immune systems are easily overcome by infections that better nourished and treated individuals can survive. And
“de-medicalized” conditions -- with few if any professional health workers to ensure an appropriate public-health response to an epidemic (such as isolation of
infected individuals, tracing of contacts, surveillance, and so forth) -- make initial outbreaks more severe.

It says disease---extinction
Thomas Such 21, Writer for the Glasgow Guardian, “How to Wipe Out Humanity”, The Glasgow
Guardian, 2/9/2021, https://glasgowguardian.co.uk/2021/02/09/how-to-wipe-out-humanity/

Not only is a virus the key to an unpredictable disease which may one day wipe out humanity, but the origins of said virus
are key. My “research” concluded that the most efficient way of killing humanity is to have the disease spread in a semi-developed country such
as China; it is far easier for viruses and other diseases to spread in less developed countries - but in order to successfully achieve global
transmission, a host country must have the substantial international infrastructure to spread a pandemic worldwide. Using this strategy in my
“research”, I was able to play a game of Plague Inc. wherein the spread and threat of Covid-19 seemed minimal by using China as a perfect
starting point to wipe out humanity. The key, as it turns out, is the spreading of the virus - something that Covid-19 has proved can
be done very easily in our modern globalised society. The journey from China to Italy spreading the plague across Europe
took almost 40 years back in the middle ages: in 2020, it took a mere 40 days for widespread outbreaks to begin in Italy
spreading across much of Europe.

My research into the historical spread of pandemics and my attempts to create my destroyer of humanity illustrates a significant disparity
between what happens in the real world and the measures one may find in a simulator. Political realities and measures many may consider
“draconian” or simply unrealistic heavily impact how a pandemic may spread and the eventual impact it will have on humanity. Using Plague
Inc., I was able to effectively kill off humanity in around a year using a fast transmission of virus, which also became gradually more
lethal. This is essential to ensure the near-universal transmission of the virus, and to penetrate measures such as increased border security
and isolated regions once the knowledge of the pandemic spreads. Whilst it is unlikely that humanity will be wiped out any time soon, it
became clear to me that, instead of nuclear war, alien invasion or the looming threat of global warming, the real threat
to humanity and the eventual destruction of our species may come from something as simple as
bacteria. While we like to revel in our scientific advancements of the modern era and the ascension of humans as Earth’s alpha species, it
becomes clear that, when we adapt, our environment and the threats it poses adapt with us. Plague Inc. showed a clear pathway
to killing off humanity - though, luckily, the Covid-19 threat in the program was combated due to immense political pressure,
restrictions and record-making vaccine paces. The real threat, however, remains clear: if humanity becomes increasingly lax with preventive
and managerial measures, it is obvious what the future may hold for us.
AT: Innovation
1) FOURTH INDUSTRIAL REV---innovations will be dual-use, democratizing the means
of violence---only degrowth solves
Michael J. Albert 20, doctoral candidate in Political Science at Johns Hopkins University, “The Dangers
of Decoupling: Earth System Crisis and the ‘Fourth Industrial Revolution,’” Global Policy, vol. 11, no. 2,
2020, pp. 245–254
Infinite growth on a finite planet: the decoupling challenge

As both its critics and defenders agree, global capitalism as a system relies on continuous compound growth (about 3 per cent per year) for its stability and survival (Lynas, 2011; Smith, 2016). Without growth (and by extension the expectation of future profit), investment dwindles, interest on debt cannot be repaid, unemployment rises, and consumer spending falls, thereby catalyzing a reinforcing spiral of economic contraction. The problem for global capitalism in a context of earth system crisis, then, is how to make this compound growth compatible with climate stabilization and ecological regeneration. This has clearly been a challenge thus far. As Roger Pielke explains: ‘If
there is an iron law of climate policy, it is that when policies focused on economic growth confront policies focused on emission reductions, it is
economic growth that will win out every time’; therefore, any successful policy ‘must be designed so that economic growth and environmental
progress go hand in hand’ (quoted in Lynas, 2011, p. 68). The philosophy known as ‘ecomodernism’, which can be considered the dominant
approach to climate policy in the World Bank, OECD, and UNEP, believes these goals can be simultaneously attained by ‘decoupling’ economic
growth from resource use and environmental impact. In the words of the Ecomodernist Manifesto:

Intensifying many human activities – particularly farming, energy extraction, forestry, and settlement – so that they use less land and
interfere less with the natural world is the key to decoupling human development from environmental impacts … Together they
allow people to mitigate climate change, to spare nature, and to alleviate global poverty (Asafu-Adjaye et al., 2015, p. 7).

The ecomodernists distinguish between relative and absolute decoupling: relative decoupling means that ‘human environmental impacts rise at
a slower rate than overall economic growth’, whereas absolute decoupling would occur when ‘total environmental impacts … peak and begin to
decline, even as the economy continues to grow’ (Asafu-Adjaye et al., 2015, p. 11). Modern technology and urbanization are considered the
keys to achieving decoupling, which they claim enable humanity to ‘[use] natural ecosystem flows and services more efficiently’ (Asafu-Adjaye
et al., 2015, p. 17). In this way, the ecomodernists not only believe that it is possible to decouple economic growth from CO2 emissions, but
that all environmental impacts – including deforestation, biodiversity, soil depletion, air and water pollution, etc. – can decline even as the
global economy continues to grow.

There are a number of indicators that ecomodernists and other proponents of decoupling draw upon as evidence for their theoretical claims.
First, the ‘domestic material consumption’ indicator, which measures the total material and energy consumption in a given nation-state, shows
that GDP has grown faster than total material consumption in rich countries like the United States, with some European countries going further
towards absolute decoupling (Pearce, 2012). In particular, ecomodernists highlight trends in wealthier countries toward reforestation, reduced
air pollution, plateauing meat consumption, and saturating demand for material-energy intensive goods (e.g. cars) (Asafu-Adjaye et al., 2015).
This shift is often attributed to the transition from manufacturing to service-based economies in these countries, which are thought to promote
‘dematerialization’ by relying on less material and energy intensive services to create economic value (Asafu-Adjaye et al., 2015).
Ecomodernists also point to steady improvements in the carbon intensity of the global economy (roughly 1.4 per cent per year, though the rate
of improvement has slowed in the past 2 years), which has enabled global growth to relatively decouple from CO2 emissions (IEA, 2016).
Ecomodernists therefore conclude: ‘taken together, these trends mean that the total human impact on the environment, including land-use
change, overexploitation, and pollution, can peak and decline this century’ (Asafu-Adjaye et al., 2015, p. 15).

Unfortunately for the ecomodernists, degrowth scholars and ecological economists have begun to poke holes in their optimistic
assessments. Their response can be summarized according to three key counter-arguments: (1) the evidence that ecomodernists provide for
relative decoupling is flawed and limited at best; (2) their evidence for the possibility of absolute decoupling is even weaker; and (3) even if
absolute decoupling was possible in principle, there is even weaker evidence that this could occur with the necessary speed to stabilize the
earth system before reaching irreversible tipping points.

First, claims that rich countries have seen relative or even absolute decoupling of economic growth from domestic
material consumption have been shown to focus solely on correlations between national GDP and material throughput while
ignoring the material-energetic costs embodied in imported consumer goods. For example, Thomas Wiedmann and colleagues show
that while the EU, the US, and Japan have grown economically while stabilizing or even reducing domestic material consumption, a broader
analysis of their material footprint embedded in their imports shows that it has kept pace with GDP growth. They conclude that ‘no decoupling
has taken place over the past two decades for this group of developed countries’ (Wiedmann et al., 2015, p. 6273). Focusing on the global
economy as a whole, Krausmann et al. show that its resource intensity improved over the course of the 20th century, though the early 21st
century has seen a faster rate of growing resource consumption than global economic growth (cited in Hickel and Kallis, 2019). Thus, as Kallis
and Hickel (Kallis and Hickel, 2019, p. 4; italics added) explain: ‘Global historical trends show relative decoupling but no evidence of absolute
decoupling, and twenty-first century trends show not greater efficiency but rather worse efficiency, with re-coupling occurring’.

Second, given the limited evidence for even relative decoupling, it is little surprise that the evidential basis on which claims for the possibility of absolute decoupling rest is even flimsier. In the most comprehensive summary of the
modeling evidence to date, Hickel and Kallis (2019) show that even the most optimistic scenarios fail to prove the possibility of absolute
decoupling. For example, a modeling study by Schandl et al. (2016) shows that in a ‘high efficiency’ scenario, one that combines a high and
rising carbon price plus a doubling in the rate of material efficiency improvement, global resource use grows more slowly (about a quarter the
rate of GDP growth) but steadily to reach 95 billion tons in 2050, while global energy use grows from 14,253 million tons of oil equivalent in
2010 to 26,932 million in 2050. The authors therefore conclude: ‘While some relative decoupling can be achieved in some scenarios, none
would lead to an absolute reduction in … materials footprint’ (Schandl et al., 2016, p. 8). A high efficiency scenario modeled by the UNEP comes
to even less optimistic conclusions (with global resource use rising to 132 billion tons in 2050), since it incorporates the ‘rebound effect’ in
which efficiency improvements lead to increased consumption due to resulting price reductions (Hickel and Kallis, 2019). In short, as they
conclude, these ‘models suggest that absolute decoupling is not feasible on a global scale in the context of continued economic growth’ (Hickel
and Kallis, 2019, p. 6).

Third, the critics show that even if absolute decoupling (from both emissions and total environmental impact) were possible in principle, this would need to occur fast enough to prevent transgression of ecological tipping points. Just
focusing on the climate problem, the 2018 IPCC report claims that emissions must be reduced 7 per cent annually to reach net zero by 2050 in
order to achieve the 1.5 C target, whereas they must reduce 4 per cent annually to reach net zero by 2075 for a shot at the 2 degree target
(IPCC, 2018, p. 15). However, even under optimistic assumptions (e.g. a near-term implementation of a high and rising carbon price, alongside
heroic carbon intensity improvements), studies suggest that annual declines of 3–4 per cent might be the fastest rate possible assuming
continued economic growth (Hickel, 2019). Thus, it would most likely be impossible to meet the 1.5 C target in a context of continuous
compound growth. While the 2 degree target might be feasible in this context (assuming implementation of a globally coordinated program
starting in 2020), many argue that the IPCC’s estimates downplay the existence of positive feedbacks in the earth system (e.g. Steffen et al.,
2018), and thus more rapid emissions cuts might be needed even for 2 degrees. On top of this, economic growth must also be decoupled from
impacts on other ‘planetary boundaries’ that may have already been overshot, especially land-use change and biodiversity loss (Raworth,
2017). A number of ecologists believe that to bring humanity back into a ‘safe operating space’, total resource consumption should be reduced
from roughly 70 to 50 gigatons per year (Hoekstra and Wiedmann, 2014), while a ‘half earth strategy’ should be implemented that protects 50
per cent of the planet’s surface from direct human interference (up from roughly 18 per cent today) (Wilson, 2017), possibly by 2050 to prevent
tipping points in biodiversity loss and land-use change (Hickel and Kallis, 2019). Even if these claims are exaggerated, the magnitude of the
overall decoupling challenge remains clear. It would mean that total resource consumption and land use needs to shrink, remain stable, or only
increase moderately (depending on our assumptions regarding the further stress (if any) that planetary boundaries can handle) even as the
total output of the global economy triples by 2060. It is thus not hyperbole to say, as Boris Frankel puts it, that this goal of absolute decoupling
is ‘overwhelmingly staggering in its ambition and historical novelty’ (Frankel, 2018, p. 127).
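For reference, the decoupling arithmetic implied by the figures Albert cites can be checked directly. The short Python sketch below is an illustrative back-of-the-envelope calculation, not part of the original evidence: it assumes global output roughly triples between 2020 and 2060 while total resource use falls from about 70 to 50 gigatons per year, and asks what annual improvement in the resource intensity of GDP that would require.

```python
# Back-of-the-envelope check of the decoupling arithmetic quoted above.
# Assumed inputs (taken from the figures cited in the card, not computed by Albert):
#   - global output roughly triples between 2020 and 2060
#   - total resource use must fall from ~70 to ~50 gigatons per year over that period
gdp_ratio = 3.0            # output in 2060 relative to 2020
resource_ratio = 50 / 70   # target resource use relative to today
years = 40

# Resource intensity of GDP (tons per unit of output) must shrink by this factor:
intensity_ratio = resource_ratio / gdp_ratio

# Implied constant annual rate of intensity improvement:
annual_improvement = 1 - intensity_ratio ** (1 / years)

print(f"Intensity must fall to {intensity_ratio:.2f}x its current level by 2060")
print(f"That is roughly {annual_improvement:.1%} improvement every year for {years} years")
# ~3.5% per year, sustained for four decades -- versus the ~1.4%/yr historical
# carbon-intensity trend and the re-coupling of material use noted earlier in the card.
```

Under those assumptions, resource intensity would have to fall roughly 3.5 per cent every year for four decades, several times faster than any sustained historical trend cited in this file.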

Given the magnitude of the decoupling challenge and limited evidence for even relative decoupling so far,
what arguments could believers in the possibility of absolute decoupling in the future possibly turn to?
Some would claim that we simply need to ramp up government regulations and planning to accelerate efficiency improvements. However, the
Schandl et al. (2016) study cited above shows that even under highly optimistic scenarios in which such policies are globally implemented,
absolute decoupling still fails to occur. Others point to the potential of the ‘circular economy’ in which wastes are converted into inputs for
other industrial processes across the global economy (e.g. Rockstrom and Klum, 2015). However, only a fraction of total throughput (roughly 29
per cent) can be converted to a circular economy, since agricultural and energy inputs (44 per cent of the total) are irreversibly degraded, while
buildings and infrastructure (27 per cent) involve net additions that cannot be recycled until the end of their lifespan (Hickel and Kallis, 2019).
Even for the 29 per cent of the economy that is convertible to the circular economy, the reality of entropy means that total recycling is likely to
be physically impossible, while additional constraints on re-using other materials (particularly the rare earth minerals in electronic goods) may
lower this potential even further (Frankel, 2018).
The best hope for advocates of absolute decoupling, therefore, appears to be a technological revolution that would render
projections of potential material-energy efficiency improvement rates obsolete. Indeed, the Schandl et al. (2016, p. 4) study makes ‘very
conservative assumptions regarding the development of new technologies’, and thus significantly faster rates of efficiency improvement are
possible (at least in principle) via technological breakthroughs. And as Kallis and Hickel acknowledge, ‘we cannot rule out substitutions or
technological breakthroughs that will push such limits [to efficiency improvements] so far into the future as to render them irrelevant’ (Hickel
and Kallis, 2019, p. 13). The belief that future innovations will in fact enable such breakthroughs is likely responsible for the fact that
ecomodernists and other advocates of decoupling remain undeterred by limited evidence to date. Is there any basis for their optimism?

The fourth industrial revolution

While it remains to some extent speculative, there is a wildcard in the pocket of ecomodernists that lends at least a
degree of plausibility to their confidence in future decoupling. This is the Fourth Industrial Revolution (FIR):
the convergence of technological developments in the fields of nanotechnology, biotechnology, information
technology, AI, and 3D printing among others. As noted earlier, it is the convergent and reinforcing nature of
these technological trends that lead many to believe that they will deliver exponential breakthroughs in
all fields of science and engineering, even catalyzing a transformation that will be ‘unlike anything humankind has
experienced before’ (Schwab, 2017, p. 1). Klaus Schwab, the founder and executive chairman of the World Economic Forum, effectively
captures the hope that many place in these converging technologies:

We have yet to grasp fully the speed and breadth of this new revolution … think about the staggering confluence of emerging
technology breakthroughs, covering wide-ranging fields such as artificial intelligence (AI), robotics, the Internet of things (IoT),
autonomous vehicles, 3-D printing, nanotechnology, biotechnology, materials science, energy storage, and quantum computing.
Many of these innovations are in their infancy, but they are already reaching an inflection point in their development as they build
on and amplify each other in a fusion of technologies across the physical, digital, and biological worlds (Schwab, 2017, p. 1).

Given the immensity of the decoupling challenge, it seems likely that to sustain economic growth in the coming decades while stabilizing the
earth system would require such a technological revolution. And indeed, this is what many ecomodernists anticipate. Stewart Brand (2012, p.
19), for example, affirms the need for environmentalists to embrace these ‘self-accelerating’ technologies, which he claims can be ‘deployed
against the self-accelerating problems of world industrialization and against the positive feedbacks in climate itself’. In particular, both Brand
(2012) and Lynas (2011) envision an important role for biotechnology and synthetic biology, which they claim will enable the production of
more resilient crops with higher yields, clean and renewable biofuels, and microbes engineered to cleanse polluted environments and
sequester carbon. Recent breakthroughs in gene editing and DNA synthesis have enabled new techniques for restoring damaged ecosystems,
conserving endangered species, improving biological fixation of carbon, developing bio-based materials, and boosting crop yields by enhancing
the efficiency of photosynthesis (Maxmen, 2015; Wintle et al., 2017), thereby raising hopes among environmentalists and governments that the
emerging ‘bioeconomy’ can help solve sustainability challenges (Synthetic Biology Leadership Council, 2016).

Others focus on the promise of emerging developments in information technology, particularly AI, big data, and IOT – the global network of
online devices, sensors, and databases forming a ‘world-spanning information fabric’ (Goodman, 2016, p. 284). For example, a recent report
commissioned for the 2018 Global Climate Action Summit highlights the importance of these ‘exponential technologies’ for accelerating the
transition to a low-carbon economy. It places particular emphasis on the power of the IOT and machine learning to ‘enable next generation
mobility and electric vehicle breakthroughs, improvements in energy and space efficiency for buildings, and electricity generation and storage’,
while making cities orders of magnitude more efficient through traffic, energy, and infrastructural optimization (Falk et al., 2018, p. 80). It also
highlights the potential of 3D printing to ‘democratize production’ by enabling local communities to print their material and infrastructural
needs, thereby making them ‘far less dependent on global supply chains’ (Falk et al., 2018, p. 33). Overall, the authors
believe these
technologies can fuel a rapid decarbonization and dematerialization of the economy, with IOT and AI-driven efficiency
gains alone enabling 15 per cent emissions reductions by 2030, without sacrificing economic growth or rising material
standards of living (Falk et al., 2018).
While its technological flowering may not occur for at least another decade or two, nanotechnology may further revolutionize the above fields.
For example, inventor and futurist Eric Drexler claims that nanotech:

will increase energy efficiency across a wide range of applications and sometimes by large factors…In ground and air transportation,
the accessible improvements include ten-fold reductions in vehicle mass and a doubling of typical engine efficiencies…reductions in
the costs of physical capital will lower the cost of new installations of all kinds, facilitating replacement of capital stock at rates that
could surpass any in historical experience. (Drexler, 2013, p. 229)

Combined with 3D printing, nanotechnologists claim that ‘personal nanofactories’ will enable any product to be assembled locally, atom by
atom, which would bypass energy-intensive supply chains, reduce energy consumption by an ‘order of magnitude’ (Ramsden, 2016, p. 288),
‘essentially eliminate waste’ and overcome scarcity by disassembling and reassembling any atomic assemblage into novel material compounds
(Ramsden, 2016, p. 296), and may even enable the rapid creation of a carbon sequestration and storage infrastructure that would ‘return the
Earth’s atmosphere to its pre-industrial composition in a decade, and at an affordable cost’ (Drexler, 2013, p. 234).

Whatever the actual potential of these technologies, it is clear that a powerful technological imaginary exists among policy makers,
technologists, and economists that contributes to an unshakeable faith in innovation and human ingenuity to solve the decoupling challenge.
Degrowth proponents have so far mainly challenged this optimism by emphasizing the limited potential
of renewable energy due to its intermittency and high land and raw material demands (e.g. Kallis, 2018). However, this may
downplay the (at least theoretical) potential for convergent breakthroughs in nanotechnology, synthetic biology, and AI
to vastly improve renewable energy efficiency and storage systems while designing new materials to substitute for depleting minerals
(Diamandis and Kotler, 2014). More broadly, while degrowthers have to some extent considered individual FIR technologies (particularly AI and
biotechnology) (e.g. Kallis, 2018; Kerschner et al., 2018), they have yet to address their convergent and mutually amplifying character, which
leaves them vulnerable to the arguments of techno-optimists.

Of course, the revolutionary promise of these technologies may fail to materialize, and, given the magnitude of the decoupling challenge,
degrowth advocates are right to be skeptical. However, due to irreducible uncertainty combined with the ‘exponential’ and ‘revolutionary’
potential of the FIR (Schwab, 2017), even more rigorous critical assessments would always be insufficient in the eyes of the techno-optimists.
Therefore, an alternative line of response should also be pursued: what if the FIR does succeed in decoupling economic growth from total
environmental impact? What unintended consequences then might this give rise to?3

Dual-use technologies and the democratization of violence

First, we must consider that all these are ‘dual-use technologies’, or technologies with potential both for economic productivity
and violence. As Blum and Wittes (2015, p. 2) explain, these technologies are driving a trend referred to as the ‘democratization of
violence’ in which the ‘destructive power once reserved to states is now the potential province of individuals’. Rather than simply a
matter of creating new individual weapons, Blum and Wittes (2015, pp. 39, 7–8) emphasize that convergent FIR technologies are
generating ‘whole technological fields – a series of breakthroughs in basic science and engineering’ that ‘generate
creativity in their users to build and invent new things, new weapons, and new modes of attack’. And to
compound the problem, while FIR technologies empower individuals to kill and provoke systemic chaos
unlike any other time in history, they also empower states to monitor the minute details of private and public life
and potentially constrict individual and collective freedoms, while the unprecedented threats enabled by these
same technologies will likely reinforce governmental efforts to intensify securitization as deeply as is
technologically feasible. Blum and Wittes summarize the emerging predicament as follows:

How should we think about the relationship between liberty and security when we both rely on governments to protect us from
radically empowered fellow citizens around the globe and also fear the power those same technologies give to governments? (Blum
and Wittes, 2015, p. 13)

Blum and Wittes do not consider how the earth system crisis will intersect with these
threats, either as a positive or negative feedback. But
it should be clear that, in a world of FIR-driven sustainability solutions, they would inevitably intensify, and it is thus
necessary to consider what new problems and governmental responses they would engender.4

Without claiming to exhaustively describe the security risks created by the FIR, I will focus on three emerging areas of concern: biosecurity,
cybersecurity, and state securitization, and will then discuss how they may collectively generate a spiral of insecurity and securitization.

Biotechnology and the emerging terrain of biosecurity

To begin with biosecurity, both the promise and peril of biotechnology – particularly the still nascent field of synthetic biology – is its immense
creative potential. As a recent report from the National Academies of Sciences (NAS) describes:

synthetic biology is expected to (1) expand the range of what could be produced, including making bacteria and
viruses more harmful; (2) decrease the amount of time required to engineer such organisms; and (3) expand the range of actors
who could undertake such efforts. (NAS, 2018, p. 4)

For example, manipulating DNA structures in microorganisms can make certain agents more virulent, improve their resistance
to antibiotics and vaccines, make them less detectable by already limited surveillance systems, transform harmless
microorganisms into deadly ones, and make pathogens more resilient to diverse atmospheric conditions, thus increasing their lifespan
(Charlet, 2018; NAS, 2018). At present these capabilities remain limited and dependent on highly advanced techniques and laboratory
equipment, which is why most experts believe there have to date been no mass casualty bioterror attacks (NAS, 2018). However, the NAS notes
that improvements in synthesis technology have followed a ‘Moore’s Law–like curve for both reductions in costs and increases in the length of constructs that are attainable’, and that ‘these trends are likely to continue’ (NAS, 2018, pp. 18–19). Moreover, automated DNA synthesis techniques remove much of the time-consuming and technically difficult aspects of manipulating DNA, further reducing barriers to access (Wintle et al., 2017). And in the future, experts warn that ‘convergent capabilities’ between synthetic biology, information technology, nanotechnology, and 3D printing may enable ‘sudden’ breakthroughs in bio-weaponization (e.g. by improving bio-agent stability and delivery, providing advances in aerosolization capability, and accelerating the
‘Design-and-Build’ cycle) (NAS, 2018, p. 87).

The possibilities of bio-weaponization will expand as these techniques diffuse, which are already enabling the formation of a ‘DIYbio’ movement
in which amateur scientists, inventors, and others are increasingly ‘capable of doing at home what just a few years ago was only possible in the
most advanced university, government or industry laboratories’ (Bennett et al., 2009, p. 1109). The new CRISPR/Cas9 gene editing technique
further expands the range of genomic tinkering available to individuals, which has been widely embraced by the DIYbio community as a
powerful tool that ‘makes it easy, cheap, and fast to move genes around – any genes, in any living thing’ (Maxmen, 2015). The capacities of
DIY biohackers remain limited in important ways, though the trends described above suggest they will continue to increase as barriers
to advanced bio-weaponization fall (NAS, 2018). And while the risks are evident, the democratization of these techniques may also
facilitate the diffusion and customization of local solutions to environmental and health challenges while enhancing popular participation in the
direction of biotechnological evolution away from transnational corporate dominance (Bennett et al., 2009).

We can therefore say that these emerging technologies pose a unique kind of ‘security dilemma’: while their development and diffusion may
strengthen local and global capacities to solve environmental challenges, they may also imperil global security by unleashing uniquely powerful
and complex violence capabilities. Synthetic biology is only in its early stages, and governments from the UK to China aim to ‘accelerate [its]
industrialization and commercialization’ in order ‘to drive economic growth’ and ‘develop solutions to key challenges across the bioeconomy,
spanning health, chemicals, advanced materials, energy, food, security and environmental protection’ (Synthetic Biology Leadership Council,
2016, pp. 13, 4). If calls for emergency action to exponentially expand the green economy indeed accelerate these trends (Falk et al., 2018),
then by 2030 (and more so by 2040) we will live in a world where genetically engineered biofuels dramatically increase, genetic tinkering with
crop varieties is normalized to enhance agricultural resilience, and gene drives are deployed to control old and new disease vectors intensified
by climate change (among other potential applications), which would exponentially expand the number of individuals with biotech expertise
and access to the needed equipment. Therefore, while we have yet to experience a catastrophic bioterror attack, rapid advances in synthetic
biology are nonetheless creating
a ‘black swan waiting to happen’ (Bennett et al., 2009, p. 1110), and the risk is that such black
swans could become increasingly ‘normal’ if this technology
becomes a key engine of economic growth and
green technological innovation.
Cybersecurity in an age of ‘smart everything’

The second key problem with the FIR is that ‘exponential technologies’ deployed to decouple growth from environmental impact will
also intensify ongoing cybersecurity threats. Cybercrime has increased to the point of costing the global economy an estimated $500–600
billion per year, while new vulnerabilities in civilian infrastructures continue to be discovered and exploited more quickly than they can be
secured (Goodman, 2016). We are thus dealing with an already significant problem, though it remains important to consider how it will deepen
in a world reliant on FIR-dependent solutions to the earth system crisis, especially once we take into account the cyber vulnerabilities posed by
next generation information systems (Goodman, 2016).

In particular, we must consider the risks associated with the incipient IOT, which is a key component of the solution-set offered by
techno-optimists for decoupling economic growth by dramatically improving efficiencies in energy, transportation, and agriculture (Falk et al.,
2018; World Economic Forum, 2018). One of the prerequisites of a future renewable energy system capable of providing at least 80 per cent of
growing electricity demand would be the creation of national or regional ‘smart grids’ in which energy surpluses in areas with lots of wind and
sun at a given time can be transmitted to areas with energy deficits. While this system would itself increase cyber vulnerabilities relative to
more modular systems, the efforts of Cisco and others to enhance the efficiency of smart grids via the IOT would intensify these vulnerabilities
even more. In this vision, the smart grid would form ‘an intelligent network of power lines, switches, and sensors able to monitor and control
energy down to the level of a single lightbulb’, which would be enabled by IOT connected sensors that ‘monitor energy use and manage
demand, time shifting noncritical applications like delaying the start of your dishwasher to the middle of the night, when energy is cheaper’
(Diamandis and Kotler, 2014, pp. 169–171). In this way, every connected device – from iPhones and laptops to dishwashers and microwaves –
would become a possible point of entry for hackers to the overall network (Goodman, 2016). The IOT is also envisioned as a possible solution to
traffic congestion and fuel efficiency for the future fleet of self-driving electric vehicles that are set to (potentially) transform the market over
the next decade. While advocates of ‘smart’ cars and ‘smart’ cities are enthusiastic regarding the possibilities for improved energetic and
economic efficiency, it would also leave vehicles vulnerable to remote hijacking, as researchers Chris Valasek and Charlie Miller demonstrated
in 2014 by taking control of a 2014 Jeep Cherokee (Markey, 2015). Adding further to the IOT-hype, a recent World Economic Forum report
proposes deploying it to create ‘precision agriculture’ systems, which could link farms with global positioning systems and weather data
collection to monitor water and soil conditions while enabling farms to automatically optimize inputs (World Economic Forum, 2018).

If these IOT powered energy, urban, and agricultural systems come into being, this would constitute an exponential expansion of
attack vectors for would-be hackers, whether they come from states, criminal organizations, or non-state terrorist networks.
Cybersecurity analyst Marc Goodman effectively captures the scale of the problem:

The IoT will be a global network of unintended consequences and black swan events … we cannot
even adequately protect the standard desktops and laptops we presently have online, let alone
the hundreds of millions of mobile phones and tablets we are adding annually. In what vision of
the future, then, is it conceivable that we will have any clue how to protect the next fifty billion
things to go online? (Goodman, 2016, pp. 301–302).
In short, while the expansion of cyber vulnerabilities is already stressing if not overwhelming the defense capacities of governments,
corporations, and public utilities, it is also practically assured that these vulnerabilities will expand significantly if
the global
economy relies on smart energy grids and the IOT to maximize energy efficiency and decouple growth from
growing resource use.

State securitization and totalitarian dangers

The third key risk domain involves the securitization powers of states. FIR technologies may not qualitatively transform state power individually,
though their convergent character could offer immense power to states that are able to systematically harness these capabilities for
surveillance and militarization purposes. Unsurprisingly, such capacities are being intensively pursued by leading states. In particular, the US
and China appear to be engaged in an AI arms race, with China aiming to create a $150 billion AI industry by 2030 and the Pentagon seeking to
triple its AI warfare budget to match China’s ambition (Ashizuka, 2019). Military robotics is also a key field of competition, with worldwide
spending tripling between 2000 and 2015 from $2.4 to $7.5 billion, and which some estimate will double again by 2025 (Allen and Chan, 2017).
The US has also spent $29 billion on nanotechnology research since 2001, with about 20 per cent of its investments involving military
applications (National Nanotechnology Initiative, 2019). A short list of potential military applications includes powerful and lightweight body
armor, microscopic and networked nano-bots with capacities for ‘swarm intelligence’, and more compact and powerful chemical and nuclear
weapons (Drexler, 2013; National Nanotechnology Initiative, 2019).

The full extent of the capabilities these technologies may unleash cannot be known in advance, though it seems possible that they could
become an ‘axial’ capability of states. As Deudney (2007) describes, an axial capability is one that can dominate an entire system due to its
unique character. While FIR technologies may not offer axial capabilities individually, their convergent character is such that they could
collectively offer an axial advantage to states able to systematically harness their potential. This could take the form of a globally networked
and nano-IOT-AI powered system harnessing vast capacities for force mobilization and information gathering and processing. By integrating
nanotechnology, the IOT, big data, and robotics while harnessing the processing power and flexibility of advanced AI, states may in this way be
in the midst of unleashing technological capabilities that will enable them to informationalize and monitor human populations while mobilizing
destructive power with an unprecedented degree of precision and sophistication.

Of course, without speculating on the future, we can already see how states are taking advantage of the global information infrastructure to
enhance control over the security environment. In particular, the metastasizing US security state is already in process of forging an incipient
Techno-Leviathan – a ‘global-surveillance-state-in-the-making’ – whose drive for informational omniscience is pushing it beyond territorial
boundaries in an effort to control the global infosphere and erode all pretense of legality and democratic oversight (Engelhardt, 2014, p. 107).
And we are seeing comparable developments in China, where advances in AI, the IOT, and big data are being used to construct a ‘citizen score’
system that incentivizes ‘good’ (i.e. regime-friendly) behavior and punishes citizens for critical thinking (Mitchell and Diamond, 2018). Thus,
while securitization trends in the US and China should already give us pause, they will only become more extensive and intensive by integrating
increasingly advanced FIR technologies over time, which would likely be the case if the latter are relied upon to achieve decoupling.

The spiral of insecurity and securitization

Overall, due to the combination of democratized violence capacities and totalitarian state powers that it
would create, the FIR would likely generate a reinforcing spiral of insecurity and securitization that
produces a qualitatively new kind of techno-authoritarianism on a global scale. To understand how this may come
about, it is first important to recognize that even if the FIR enables the global economy to grow while stabilizing the
climate at 1.5 or 2 degrees C (a highly optimistic assumption), this would still (according to one study) leave 16
to 29 per cent of the world’s population (mostly in the Global South) vulnerable to lethal climate impacts (Byers et
al., 2018). Technological advance could certainly improve adaptation capacities even amidst such environmental changes,
but poverty and deprivation will remain difficult to reverse, and deep grievances felt towards the Global
North – due to its primary responsibility in creating the problem whose consequences are primarily suffered in the Global South – will
make militant and/or terrorist violence a likely response. Second, we can see that the increasing dependence of
the global economy on FIR technologies would create an exponential expansion of possible bio and
cyber attack vectors. In conjunction with steady advances in technologies of securitization and rising fear among policy makers and
populations, it may only require a relatively ‘minimal’ attack (e.g. something comparable to 9/11, rather than
the kind of million or even billion casualty attack feared by some bioterror experts) to catalyze a further
threshold of intensified global securitization.

What might this threshold entail? Abstractly, it could be understood as a shift from a predominant ‘liberal’ security apparatus to an
‘authoritarian’ mode that establishes a permanent ‘state of emergency’ on a global scale (Opitz, 2011). While we can only speculate on what
this might look like in practice, especially as technologies of securitization advance, it would likely involve a conjoined transformation in and
integration of both technological-surveillance and institutional-legal assemblages, with the former being intensified and extended while the
latter sheds all pretext of democratic oversight to become an increasingly absolutist form of sovereign authority on a global scale.
Surveillance would reach from the planetary to the molecular scale through a network of satellites, distributed
environmental sensors, and AI-facilitated data collection and processing techniques; military force mobilization capacities of
nearly absolute speed and global reach could be created through a combination of space-based and
networked AI-robotic weapons systems; and the right of the planetary sovereign to detain individuals,
mobilize force without legal pretext, and constrict the mobility of people and goods to more tightly
regulated territories, would be enshrined. While such an apparatus may seem far-fetched, philosopher and futurist Nick
Bostrom envisions a similarly totalitarian global surveillance system as the necessary prerequisite of global security in an age
of democratized weapons of mass destruction (Bostrom, 2018). And he notes that ‘thanks to the falling price of cameras, data
transmission, storage, and computing, and the rapid advances in AI-enabled content analysis, [it] may soon become both technologically
feasible and affordable’ (Bostrom, 2018, p. 25).

In sum, while
techno-authoritarian trends are already evident in the US and China, FIR technologies would
further enhance their capabilities while ‘democratizing’ WMD capacities among non-state actors (Blum and
Wittes, 2015). This would incentivize states to extend and deepen surveillance as far as possible while making democratic populations more
willing to accept intensified securitization, therefore making it difficult to avoid an authoritarian global security apparatus.

Conclusions

To return to the question that opened this essay: can global capitalism solve the earth system crisis? I have shown that the
answer is an ambiguous maybe: the FIR may enable economic growth to decouple sufficiently rapidly from CO2
emissions and broader environmental impacts to stabilize the earth system, though these technological solutions would then
intensify risks in the domains of biosecurity, cybersecurity, and state surveillance, thereby unleashing a spiral
of insecurity and securitization that will push global capitalism towards a new kind of techno-
authoritarianism. It is thus worth showing, in a way that differs from, yet complements the arguments of degrowth advocates, that
even if global capitalism can succeed in stabilizing the earth system in a context of endless growth, then it
would likely create security threats and totalitarian dangers that would undermine the desirability of
such a system.

This conclusion reinforces the need for a set of global policies that break decisively from the growth-oriented status
quo. On one hand, to dampen these technological trends and improve the prospects of earth system stabilization, the pursuit of GDP growth
should be replaced by alternative goals based on new metrics (e.g. the Genuine Progress Indicator or Index of Sustainable Economic Welfare)
that more accurately represent social welfare (Kallis, 2018). The European Commission’s Beyond GDP project shows that steps are being taken
in this direction, though they should go further by explicitly ending reliance on growth by placing hard caps on material-energy throughput
while restructuring economies so that livelihoods are not dependent on increasing GDP (Hickel, 2019; O’Neill et al., 2018). On the other hand,
many FIR technologies (especially open source synthetic biology) offer great promise for improving human welfare through
advances in sustainable energy, agriculture, and medicine. Thus, transitioning beyond growth should not necessarily
entail abandoning these technologies, and strong global regimes for regulating and monitoring their use would therefore be
necessary. However, rather than simply strengthening existing regimes like the Biological Weapons Convention (Charlet,
2018) or relying on private sector-led initiatives to regulate emerging risks ‘without impeding the capacity of research to deliver innovation and
economic growth’ (Schwab, 2017, p. 90), more
far-reaching changes are needed to enhance democratic control
over the pace and direction of technological innovation, thereby counter-balancing the influence of
multinational firms and militaries. In particular, ‘citizens assemblies’ should be empowered to debate the relative benefits and risks
posed by FIR technologies (from synthetic biology to IoT, nanotechnology, and AI) and set mandates regarding investment levels and priorities,
the direction of research, and the pace of deployment, while also having the right to ‘relinquish’ certain technological trajectories if their risks
are perceived to outweigh the benefits.5

Overall, a ‘post-growth’ economy based on more democratized ownership of common wealth, reduced
overall material-energetic throughput, decelerated and democratically controlled technological
innovation, and prioritization of production for meeting essential human needs rather than profit (Hickel,
2019; Kallis, 2018; Raworth, 2017), has the potential to create a global political-economy that meets all human
needs within planetary boundaries without shifting problems into the realms of biosecurity,
cybersecurity, and state securitization. While the obstacles it confronts are of course formidable, the
alternatives may be ecological collapse and civilizational breakdown (if the FIR fails to decouple economic growth
from environmental impacts) or global techno-authoritarianism (if it succeeds).
1AR
Case
1) BURDEN OF PROOF---innovation has to work every time to create sustainability---
failing just once causes collapse.
Jason Hickel 21, economic anthropologist, Fulbright Scholar and Fellow of the Royal Society of Arts,
“Three - Will Technology Save Us?,” Less Is More: How Degrowth Will Save the World, Random House,
02/25/21, pp. 115–149

This narrative relies heavily on the claim that technology will save us, in one way or another. For some, it is a simple
matter of switching the global economy to renewable energy and electric cars; once we do that, there’s no reason we can’t
keep growing for ever. After all, solar and wind power are getting cheaper all the time, and Elon Musk has shown
that it’s possible to mass-produce storage batteries at a rapid clip. For others, it’s a matter of ‘negative-
emissions technologies’ that will pull carbon out of the atmosphere. Still others bank on the hope of enormous geo-
engineering schemes: everything from blocking out the sun to changing the chemistry of the oceans. Of course, even if these
solutions succeed in stopping climate change, continued growth will still drive continued material use,
and continued ecological breakdown. But here too some insist that this is not a problem. Efficiency
improvements and recycling technologies will allow us to make growth ‘green’.

These hopes have been touted by some of the richest and most powerful people in the world, including presidents and
billionaires. The ecological crisis is no reason to start questioning the economic system, they say. It’s a
comforting narrative, and one I myself once clung to. But the more I have explored these claims, the more it has become clear to me
that to take this position requires accepting an extraordinary risk. We can choose to keep shooting up
the curve of exponential growth, bringing us ever closer to irreversible tipping points in ecological
collapse, and hope that technology will save us. But if for some reason it doesn’t work, then we’re in
trouble. It’s like jumping off a cliff while hoping that someone at the bottom will figure out how to build
some kind of device to catch you before you crash into the rocks below, without having any idea as to whether they’ll
actually be able to pull it off. It might work … but if not, it’s game over. Once you jump, you can’t
change your mind.

If we’re going to take this approach, the evidence for it had better be rock-solid. We’d better be dead
certain it will work.

--“Inductivist Turkey” quote by Lord Bertrand Russell – “This turkey found that – on his first morning at
the turkey farm – he was fed at 9 a.m. However, being a good inductivist, he did not jump to
conclusions. He waited until he had collected a great number of observations on the fact that he was fed
at 9 a.m., and he made these observations under a wide variety of circumstances, on Wednesdays and
Thursdays, on warm days and cold days, on rainy days and dry days. Each day, he added another
observation statement to his list. Finally, his inductivist conscience was satisfied and he carried out an
inductive inference to conclude, “I am always fed at 9 a.m.”. Alas, this conclusion was shown to be false
in no uncertain manner when, on Christmas eve, instead of being fed, he had his throat cut. An inductive
inference with true premises has led to a false conclusion”.

--added “enough” for readability

Umberto Mario Sconfienza 20, Goethe University Frankfurt, “Incomplete Ecological Futures,” World
Futures, vol. 76, no. 1, Routledge, 01/02/2020, pp. 17–38
After problems. Often efficiency savings resulting from productivity growth “rebound” and are put toward
consumption (Alcott, 2005), not reduced use of resources, because the demand for material goods and services can never saturate in a
capitalist economy. Even the service economy, which looks lighter, requires a massive amount of energy to
power internet servers. According to recent research, Bitcoin emissions alone could push global
warming above 2C in the next two decades (Mora et al., 2018).

Ecomodernist scholars urge that the processes of modernization should be accelerated. Urbanization should
be encouraged because cities use fewer resources per unit of product and are overall more productive. It makes sense for ecomodernist to
push in this direction: the economies of scale afforded by the urbanization process reduce the ecological footprint of city dwellers while the
positive returns to scale, for example in terms of patents, economic growth, numbers of encounters between people fuel the process of
innovation necessary to decouple the economy. Whether this is a fair and theoretically sound analysis of the processes of modernization or
whether the ecomodernist project is feasible at all is beyond the scope of the present section – Szerszynski (2015) and Kallis (2018) provides a
sharp rebuke of these aspects of ecomodernist thinking –, however, even if received uncritically and flatly implemented,
the ecomodernist project has important implications for its social stability and environmental
sustainability. According to Bettencourt, Lobo, Helbing, Kuhnert, and West (2007), the fact that the system of wealth
creation and innovation of cities produce positive returns to scale – i.e. they scale superlinearly –
requires the pace of life “to increase with size at a continually accelerating rate to avoid stagnation and
potential crisis” (Bettencourt et al., 2007, p. 7306). In other words, innovations need to be introduced at an always
faster rate to stave off what is called a finite time singularity, i.e. the situation in which a certain output –
GDP, population, the number of patents – becomes infinite in a finite amount of time. This is an
unsustainable situation which would require an infinite amount of energy; if not [enough] innovations are
introduced, the system collapses (West, 2017). Coming back to more prosaic manifestations of these phenomena, the pace of
life accelerates in bigger cities – e.g. people walk faster in bigger cities (Wirtz & Ries, 1992) – and we now witness more
than one round of innovations within our lifetime, to the point that often the workforce of a company
needs to be retrained throughout their active working years to perform a similar job. The fact that human
ingenuity has so far succeeded to stave off the finite time singularity does not mean that it will
continue to do so in the future (especially considering that innovations need to be introduced at an always
faster rate); to believe so would be to commit the well-known fallacy of the inductivist turkey.
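Sconfienza’s “finite time singularity” point can be made concrete with a toy model. The sketch below follows the superlinear growth equation described by Bettencourt et al. (dY/dt = k·Y^β with β > 1); the parameter values are invented purely for illustration and are not taken from the card.

```python
# Toy illustration of the "finite time singularity" described by Bettencourt et al.:
# if an output Y feeds back on its own growth superlinearly, dY/dt = k * Y**beta with
# beta > 1, the trajectory diverges at a finite time t* instead of growing forever.
# The parameter values below are invented for illustration only.
k, beta, Y0 = 0.02, 1.15, 1.0

# Time of the singularity, from integrating dY/dt = k * Y**beta:
t_star = Y0 ** (1 - beta) / (k * (beta - 1))

def Y(t):
    """Closed-form trajectory before the singularity."""
    return (Y0 ** (1 - beta) - (beta - 1) * k * t) ** (1 / (1 - beta))

print(f"Singularity reached at t* = {t_star:.0f} time units")
for t in (0, 100, 200, 300, int(t_star) - 1):
    print(f"t = {t:3d}   Y = {Y(t):,.1f}")
# Only an innovation that "resets" k or beta postpones t*, and each reset has to
# arrive sooner than the last -- which is the inductivist-turkey worry in the card.
```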

While the metaphor of the rocket pushing through the atmosphere is alluring (see section 2) – after all it
requires us to simply continue to power the engines to exit the condition of unsustainability – it is also a
misleading one. Contrary to rocket science, it is difficult to predict how long the phase of unsustainability
will last if we pursue the ecomodernist vision ; the increasing pace of life and the faster rate of innovation might actually
stretch this period.

2) UNIQUENESS---innovations are slowing down


Parrique et al. 19, Timothée Parrique, Centre for Studies and Research in International Development
(CERDI), University of Clermont Auvergne; Jonathan Barth, ZOE.Institute for Future-Fit Economies;
François Briens, Independent, Informal Research Centre for Human Emancipation; Christian Kerschner,
Department of Sustainability, Governance, and Methods, MODUL University Vienna, Austria, and the
Department of Environmental Studies, Masaryk University, Brno, Czech Republic; Alejo Kraus-Polk,
University of California; Anna Kuokkanen, Lappeenranta-Lahti University of Technology; Joachim H.
Spangenberg, Sustainable Europe Research Institute (SERI Germany), “Decoupling Debunked,” July 2019,
European Environmental Bureau, https://mk0eeborgicuypctuf7e.kinstacdn.com/wp-
content/uploads/2019/07/Decoupling-Debunked.pdf
Many more ambitious scenarios can be imagined,39 but the message is already clear: relying only on
technology to mitigate climate change implies extreme rates of eco-innovation improvements, which current
trends are very far from matching, and which, to our knowledge, have never been witnessed in the history of
our species. Such an acceleration of technological progress appears highly unlikely, especially when
considering the following elements:

First, global carbon intensity improvement has been slowing down since the turn of the century, from an average yearly
1.28% between 1960 and 2000 to 0% between 2000 and 2014 (Hickel and Kallis, 2019, pp. 8–9). Narrowing the scope to high-income OECD
countries only, where most innovations are developed, the improvement rate of CO2 intensity still declines from 1.91% (1970-2000) to 1.61%
(2000-2014), which is a long way from matching appropriate levels to curb emissions to a 2°C target, let alone to 1.5°C.
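A rough consistency check ties this card back to the IPCC figures cited earlier in the file: with roughly 3 per cent compound global growth and the ~7 per cent annual emissions cuts associated with 1.5°C, the required rate of carbon-intensity improvement can be computed directly. The rates in the sketch below are drawn from elsewhere in this file and are illustrative, not from Parrique et al.

```python
# Rough consistency check, using figures cited elsewhere in this file (illustrative
# only; these rates are not computed by Parrique et al.).
gdp_growth = 0.03      # ~3% compound global growth assumed by growth-oriented scenarios
emissions_cut = 0.07   # ~7% annual cut the IPCC ties to the 1.5 degree C pathway

# Emissions = GDP x carbon intensity, so intensity must shrink fast enough that
# (1 + gdp_growth) * (1 - required_decline) = (1 - emissions_cut)
required_decline = 1 - (1 - emissions_cut) / (1 + gdp_growth)

print(f"Required carbon-intensity improvement: {required_decline:.1%} per year")
# ~9.7% per year, against the observed 1.28%/yr (1960-2000) and ~0%/yr (2000-2014)
# globally, and 1.61%/yr in high-income OECD countries, cited in the card above.
```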

This empirical observation is nothing like a surprise with regards to the theory. Technological innovation
is limited as a long-term
solution to sustainability issues because it itself exhibits diminishing returns (Reason 1). Tracking the number of
utility patents per inventor in the US over the 1970-2005 period, Strumsky et al. (2010) provide evidence that the
productivity of invention declines over time, including in the sectors such as solar and wind power as
well as information technologies (which are often acclaimed for their innovative potentials). “ Early work
[…] solves questions that are inexpensive but broadly applicable. [Then] questions that are increasingly
narrow and intractable. Research grows increasingly complex and costly […]” (ibid. 506). Looking at total factor
productivity changes from 1750 to 2015, Bonaiuti (2018) argues that humanity has entered an overall phase of decreasing
marginal returns to innovation.
Warming
Err NEG on every level of the turn---you are cognitively predisposed to neglect
warming impacts, exaggerate decline impacts, AND underestimate alternatives.
David Wallace-Wells 19. National Fellow at New America, deputy editor of New York Magazine.
02/19/2019. “III. The Climate Kaleidoscope.” The Uninhabitable Earth: Life After Warming,
Crown/Archetype.

The scroll of cognitive biases identified by behavioral psychologists and fellow travelers over the last half
century is, like a social media feed, apparently infinite, and every single one distorts and distends our
perception of a changing climate—a threat as imminent and immediate as the approach of a predator,
but viewed always through a bell jar. There is, to start with, anchoring, which explains how we build
mental models around as few as one or two initial examples, no matter how unrepresentative—in the
case of global warming, the world we know today, which is reassuringly temperate. There is also the ambiguity
effect, which suggests that most people are so uncomfortable contemplating uncertainty, they will
accept lesser outcomes in a bargain to avoid dealing with it. In theory, with climate, uncertainty should
be an argument for action—much of the ambiguity arises from the range of possible human inputs, a quite concrete prompt we
choose to process instead as a riddle, which discourages us. There is anthropocentric thinking, by which we build our
view of the universe outward from our own experience, a reflexive tendency that some especially
ruthless environmentalists have derided as “human supremacy” and that surely shapes our ability to apprehend
genuinely existential threats to the species—a shortcoming many climate scientists have mocked: “The planet will survive,” they say; “it’s the
humans that may not.” There is automation bias, which describes a preference for algorithmic and other kinds of nonhuman [decision-making—including a faith that] economic systems unencumbered by regulation or restriction would solve the problem of global warming as naturally, as surely as they had solved the problems of pollution, inequality, justice, and conflict. These biases
are drawn only from the A volume of the literature—and are just a sampling of that volume. Among the
most destructive effects that appear later in the behavioral economics library are these: the bystander effect, or our
tendency to wait for others to act rather than acting ourselves; confirmation bias, by which we seek evidence
for what we already understand to be true, such as the promise that human life will endure, rather than endure the cognitive pain of reconceptualizing our world; the default effect, or tendency to choose the present option over alternatives,
which is related to the status quo bias, or preference for things as they are, however bad that is, and to the endowment effect, or
instinct to demand more to give up something we have than we actually value it (or had paid to acquire or
establish it). We have an illusion of control, the behavioral economists tell us, and also suffer from overconfidence
and an optimism bias. We also have a pessimism bias, not that it compensates—instead it pushes us to see
challenges as predetermined defeats and to hear alarm, perhaps especially on climate, as cries of fatalism. The
opposite of a cognitive bias, in other words, is not clear thinking but another cognitive bias. We can’t see
anything but through cataracts of self-deception. Many of these insights may feel as intuitive and familiar as folk wisdom,
which in some cases they are, dressed up in academic language. Behavioral economics is unusual as a contrarian intellectual movement in that
it overturns beliefs—namely, in the perfectly rational human actor—that perhaps only its proponents ever truly believed, and maybe even only
as economics undergraduates. But altogether the field is not merely a revision to existing economics. It is a thoroughgoing contradiction of the
central proposition of its parent discipline, indeed to the whole rationalist self-image of the modern West as it emerged out of the universities
of—in what can only be coincidence—the early industrial period. That is, a map of human reason as an awkward kluge, blindly self-regarding
and self-defeating, curiously effective at some things and maddeningly incompetent when it comes to others; compromised and misguided and
tattered. How did we ever put a man on the moon? That climate change demands expertise, and faith in it, at precisely the moment when
public confidence in expertise is collapsing, is another of its historical ironies. That climate change touches each of these biases is not a
curiosity, or a coincidence, or an anomaly. It is a mark of just how big it is, and how much about human life it touches—which is to say, nearly
everything.
Feedbacks are overwhelmingly positive
David Wallace-Wells 19, National Fellow at New America, deputy editor of New York Magazine, “I.
Cascades,” The Uninhabitable Earth: Life After Warming, 02/19/2019, Crown/Archetype
Some climate cascades will unfold at the global level—cascades so large their effects will seem, by the curious legerdemain of environmental
change, imperceptible. A warming planet leads to melting Arctic ice, which means less sunlight reflected back
to the sun and more absorbed by a planet warming faster still, which means an ocean less able to absorb
atmospheric carbon and so a planet warming faster still. A warming planet will also melt Arctic
permafrost, which contains 1.8 trillion tons of carbon, more than twice as much as is currently suspended in the earth’s
atmosphere, and some of which, when it thaws and is released, may evaporate as methane, which is thirty-four times as
powerful a greenhouse-gas warming blanket as carbon dioxide when judged on the timescale of a century; when judged on the timescale of
two decades, it is eighty-six times as powerful. A hotter planet is, on net, bad for plant life, which means
what is called “forest dieback”—the decline and retreat of jungle basins as big as countries and woods that sprawl for so many miles
they used to contain whole folklores—which means a dramatic stripping-back of the planet’s natural ability to absorb carbon and turn it into oxygen, which means still hotter temperatures, which means more dieback, and so on. Higher
temperatures means more forest fires means fewer trees means less carbon absorption, means more carbon in the atmosphere, means a
hotter planet still—and so on. A warmer planet means more water vapor in the atmosphere, and, water vapor being a greenhouse
gas, this brings higher temperatures still—and so on. Warmer oceans can absorb less heat, which means more stays in
the air, and contain less oxygen, which is doom for phytoplankton—which does for the ocean what plants do on land,
eating carbon and producing oxygen— which leaves us with more carbon, which heats the planet further. And so on. These are the
systems climate scientists call “feedbacks”; there are more. Some work in the other direction, moderating
climate change. But many more point toward an acceleration of warming, should we trigger them. And just how these
complicated, countervailing systems will interact— what effects will be exaggerated and what
undermined by feedbacks—is unknown, which pulls a dark cloud of uncertainty over any effort to plan ahead for the climate future.
We know what a best-case outcome for climate change looks like, however unrealistic, because it quite
closely resembles the world as we live on it today. But we have not yet begun to contemplate those
cascades that may bring us to the infernal range of the bell curve.
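To make the feedback arithmetic concrete, the sketch below (not from the card; the feedback fractions are purely illustrative) shows how successive rounds of self-reinforcing warming behave like a geometric series, converging to direct warming divided by (1 − f) when the feedback fraction f is below one, with the multiplier growing quickly as f approaches one.

```python
# Stylized feedback illustration (assumed values, not from the card): if each
# unit of warming triggers feedbacks that add a further fraction f of warming,
# the rounds of amplification form a geometric series converging to
# direct / (1 - f) when f < 1.

def amplified_warming(direct, f, rounds=60):
    total = 0.0
    increment = direct
    for _ in range(rounds):
        total += increment
        increment *= f  # each round of extra warming triggers f times more
    return total

for f in (0.2, 0.4, 0.6):
    print(f, round(amplified_warming(1.0, f), 2), round(1.0 / (1.0 - f), 2))
# f=0.2 -> ~1.25, f=0.4 -> ~1.67, f=0.6 -> ~2.5 degrees per degree of direct warming
```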
AI Impact---1NC
Growth-oriented AI ensures extinction---BUT, degrowth orientation solves.
Salvador Pueyo 18, Department of Evolutionary Biology, Ecology, and Environmental Sciences,
Universitat de Barcelona, “Growth, Degrowth, and the Challenge of Artificial Superintelligence,” Journal
of Cleaner Production, 10/01/2018, vol. 197, pp. 1731–1736

The challenges of sustainability and of superintelligence are not independent. The changing fluxes of energy, matter, and information can be interpreted as different faces of a general acceleration. More directly, it is argued below that superintelligence would deeply affect production technologies and also economic decisions, and could in turn be affected by the socioeconomic and ecological context in which it develops. Along the lines of Pueyo (2014, p. 3454), this paper presents an approach that integrates these topics. It employs insights from a variety of sources, such as ecological theory and several schools of economic theory.

The next section presents a thought experiment, in which superintelligence emerges after the technical aspects of goal alignment have been resolved, and this occurs specifically in a neoliberal scenario. Neoliberalism is a major force shaping current policies on a global level, which urges governments to assume as their main role the creation and support of capitalist markets, and to avoid interfering in their functioning (Mirowski, 2009). Neoliberal policies stand in sharp contrast to degrowth views: the first are largely rationalized as a way to enhance efficiency and production (Plehwe, 2009), and represent the maximum expression of capitalist values.

The thought experiment illustrates how superintelligence perfectly aligned with capitalist markets could have very undesirable consequences for humanity and the whole biosphere. It also suggests that there is little reason to expect that the wealthiest and most powerful people would be exempt from these consequences, which, as argued below, gives reason for hope. Section 3 raises the possibility of a broad social consensus to respond to this challenge along the lines of degrowth, thus tackling major technological, environmental, and social problems simultaneously. The uncertainty involved in these scenarios is vast, but, if a non-negligible probability is assigned to these two futures, little room is left for either complacency or resignation.
2. Thought experiment: Superintelligence in a neoliberal scenario

Neoliberalism is creating a very special breeding ground for superintelligence, because it strives to reduce the role of human agency in collective affairs. The neoliberal pioneer Friedrich Hayek argued that the spontaneous order of markets was preferable over conscious plans, because markets, he thought, have more capacity than humans to process information (Mirowski, 2009). Neoliberal policies are actively transferring decisions to markets (Mirowski, 2009), while firms' automated decision systems become an integral part of the market's information processing machinery (Davenport and Harris, 2005). Neoliberal globalization is locking governments in the role of mere players competing in the global market (Swank, 2016). Furthermore, automated governance is a foundational tenet of neoliberal ideology (Plehwe, 2009, p. 23).

In the neoliberal scenario, most technological development can be expected to take place either in the context of firms or in support of firms. A number of institutionalist (Galbraith, 1985), post-Keynesian (Lavoie, 2014; and references therein) and evolutionary (Metcalfe, 2008) economists concur that, in capitalist markets, firms tend to maximize their growth rates (this principle is related but not identical to the neoclassical assumption that firms maximize profits; Lavoie, 2014). Growth maximization might be interpreted as expressing the goals of people in key positions, but, from an evolutionary perspective, it is thought to result from a mechanism akin to natural selection (Metcalfe, 2008). The first interpretation is insufficient if we accept that: (1) in big corporations, the managerial bureaucracy is a coherent social-psychological system with motives and preferences of its own (Gordon, 1968, p. 639; for an insider view, see Nace, 2005, pp. 1-10), (2) this system is becoming techno-social-psychological with the progressive incorporation of decision-making algorithms and the increasing opacity of such algorithms (Danaher, 2016), and (3) human mentality and goals are partly shaped by firms themselves (Galbraith, 1985).
The type of AI best suited to participate in firms' decisions in this context is described in a recent review in Science: “AI researchers aim to construct a synthetic homo economicus, the mythical perfectly rational agent of neoclassical economics. We review progress toward creating this new species of machine, machina economicus” (Parkes and Wellman, 2015, p. 267; a more orthodox denomination would be Machina oeconomica).

Firm growth is thought to rely critically on retained earnings (Galbraith, 1985; Lavoie, 2014, pp. 134-141). Therefore, economic selection can be generally expected to favor firms in which these are greater. The aggregate retained earnings RE of all firms in an economy can be expressed as:

RE = F_E(R, L, K) − w⋅L − (i + δ)⋅K − g.  (1)


Bold symbols represent vectors (to indicate multidimensionality). F is an aggregate production function, relying on inputs of various types of natural resources R, labor L and capital K (including intelligent machines), and being affected by environmental factors E; w are wages, i are returns to capital (dividends, interests) paid to households, δ is depreciation and g are the net taxes paid to governments.
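A minimal numerical sketch of Eq. (1), using made-up values and collapsing the paper's vector-valued inputs to single numbers, shows how retained earnings fall out of the identity:

```python
# Minimal sketch of Eq. (1) with illustrative numbers (not from Pueyo 2018),
# treating the paper's vector inputs as scalars:
# RE = F_E(R, L, K) - w*L - (i + delta)*K - g

def retained_earnings(output, w, L, i, delta, K, g):
    return output - w * L - (i + delta) * K - g

RE = retained_earnings(
    output=100.0,                 # F_E(R, L, K): aggregate production
    w=0.6, L=80.0,                # wages times labor
    i=0.03, delta=0.05, K=400.0,  # returns plus depreciation, times capital
    g=10.0,                       # net taxes paid to governments
)
print(RE)  # 100 - 48 - 32 - 10 = 10: what is left for firms to reinvest, i.e. to grow
```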

Increases in retained earnings face constraints, such as trade-offs among different parameters of Eq. 1. The present thought experiment explores the consequences of economic selection in a scenario in which two sets of constraints are nearly absent: sociopolitical constraints on market dynamics are averted by a neoliberal institutional setting, while technical constraints are overcome by asymptotically advanced technology (with extreme AI allowing for extreme technological development also in other fields). The environmental and the social implications are discussed in turn. Note that this scenario is not defined by some contingent choice of AIs' goals by their programmers: the goals of maximizing each firm's growth and retained earnings are assumed to emerge from the collective dynamics of large sets of entities subject to capitalistic rules of interaction and, therefore, to economic selection.

Outweighs the AFF.


Alexey Turchin & David Denkenberger 18, Turchin is a researcher at the Science for Life Extension
Foundation; Denkenberger is with the Global Catastrophic Risk Institute (GCRI) @ Tennessee State
University, Alliance to Feed the Earth in Disasters (ALLFED), “Classification of Global Catastrophic Risks
Connected with Artificial Intelligence,” AI & SOCIETY, 05/03/2018, pp. 1–17

According to Yampolskiy and Spellchecker (2016), the probability and seriousness of AI failures will increase with
time. We estimate that they will reach their peak between the appearance of the first self-improving AI and the moment that an AI or group
of AIs reach global power, and will later diminish, as late-stage AI halting seems to be a low-probability event.

AI is an extremely powerful and completely unpredictable technology, millions of times more powerful
than nuclear weapons. Its existence could create multiple individual global risks, most of which we can
not currently imagine. We present several dozen separate global risk scenarios connected with AI in this article, but it is likely that
some of the most serious are not included. The sheer number of possible failure modes suggests that there are
more to come.
No China AI Impact---AT: Arms Race
AI arms race impact is fake
Louise Lucas 18, 11-14-2018, "China’s artificial intelligence ambitions hit hurdles," Financial Times,
https://www.ft.com/content/8620933a-e0c5-11e8-a6e5-792428919cee

China’s once-hot artificial intelligence sector is in a funk: spurned by investors, failing to deliver on
cutting-edge technology and struggling to generate returns. It is a far cry from last year, when Beijing issued plans to lead the
world in AI by 2030, venture capital investors were pumping up valuations and China’s tech giants peppered their earnings calls liberally with their AI ambitions.
Disillusionment with the progress of AI is not unique to China. In the US, IBM laid off engineers at its
flagship AI IBM Watson in the summer. Earlier Gary Marcus, a psychology professor at New York
University and longtime sceptic lamented that “six decades into the history of AI, our bots do little
more than play music, sweep floors and bid on advertisements”. But in China, where the hype — and funding — went into overdrive last year, the reversal has cut more deeply. China last year overtook the US in terms of private sector investment, pulling in just shy of $5bn, but the
$1.6bn invested in the first six months of this year is less than one-third of US levels, according to ABI Research. “[We’re] at a juncture where the generic use cases
have been addressed,” said Lian Jye Su, principal analyst at the consultancy. “And building generic general purpose chatbots is much easier than specific algorithms
for industries like banking, construction, or mining because you need industry knowledge and buy-in from the industry.” That
inflection point has
combined with a shortage of computing capacity to power algorithms and machine learning. What is left
is familiar ground for tech investors: inflated valuations, over-hyped pitches and threadbare
monetisation models. “We feel it’s a little bit over-invested,” said Nisa Leung, managing partner at Qiming Venture Partners, a big investor in China
tech. “Many companies are unable to ramp up their monetisation or they are over-promising their ability.”

AI is in its super-infancy and everything is open source.


Vivek Wadhwa 18, distinguished fellow at Carnegie Mellon University’s College of Engineering, 10-4-
2018, "Commentary: The AI Wars Have Not Even Begun," Fortune,
http://fortune.com/2018/10/04/artificial-intelligence-war-us-china/

There is no doubt that AI has incredible potential . But the technology is still in its infancy; there are no
AI superpowers. The race to implement AI has hardly begun, particularly in business. As well, the most advanced AI tools are open
source, which means that everyone has access to them. Tech companies are generating hype with cool
demonstrations of AI, such as Google’s AlphaGo Zero, which learned one of the world’s most difficult board games in three days and could easily defeat its
top-ranked players. Several companies are claiming breakthroughs with self-driving vehicles. But don’t be fooled: The games are just special cases, and the self-
driving cars are still on their training wheels. AlphaGo, the original iteration of AlphaGo Zero, developed its intelligence through use of generative adversarial
networks, a technology that pits two AI systems against each another to allow them to learn from each other. The trick was that before the networks battled each
other, they received a lot of coaching. And, more importantly, their problems and outcomes were well defined. Unlike board games and arcade games, business
systems don’t have defined outcomes and rules. They work with very limited datasets, often disjointed and messy. The computers also don’t do critical business
analysis; it’s the job of humans to comprehend information that the systems gather and to decide what to do with it. Humans can deal with uncertainty and doubt;
AI cannot. Google’s Waymo self-driving cars have collectively driven over 9 million miles, yet are nowhere near ready for release. Tesla’s Autopilot, after gathering
1.5 billion miles’ worth of data, won’t even stop at traffic lights. Today’s
AI systems do their best to reproduce the functioning
of the human brain’s neural networks, but their emulations are very limited. They use a technique called
deep learning: After you tell an AI exactly what you want it to learn and provide it with clearly labeled examples, it analyzes the patterns in those data and
stores them for future application. The accuracy of its patterns depends on completeness of data, so the more examples you give it, the more useful it becomes.
Herein lies a problem, though: An AI is only as good as the data it receives, and is able to interpret them
only within the narrow confines of the supplied context. It doesn’t “understand” what it has analyzed, so
it is unable to apply its analysis to scenarios in other contexts. And it can’t distinguish causation from
correlation. The larger issue with this form of AI is that what it has learned remains a mystery: a set of
indefinable responses to data. Once a neural network has been trained, not even its designer knows
exactly how it is doing what it does. They call this the black box of AI. Businesses can’t afford to have their systems
making unexplained decisions, as they have regulatory requirements and reputational concerns and must be able to understand, explain, and
prove the logic behind every decision that they make. Then there is the issue of reliability. Airlines are installing AI-based facial-recognition
systems and China is basing its national surveillance systems on such systems. AI is being used for marketing and credit analysis and to control cars, drones, and
robots. It is being trained to perform medical data analysis and assist or replace human doctors. The
problem is that, in all such uses, AI can
be fooled. Google published a paper last December that showed that it could trick AI systems into
recognizing a banana as a toaster. Researchers at the Indian Institute of Science have just demonstrated
that they could confuse almost any AI system without even using, as Google did, knowledge of what the
system has used as a basis for learning. With AI, security and privacy are an afterthought, just as they were early in the development of
computers and the Internet. Leading AI companies have handed over the keys to their kingdoms by making their tools open source. Software used to be considered a trade secret, but developers realized that having others look at and build on their code could lead to great improvements in it. Microsoft, Google, and Facebook have released their AI code to the public for free to explore, adapt, and improve. China’s Baidu has also made its self-driving software, Apollo, available as open source. Software’s real value lies in its implementation: what you do with it.
Just as China built its tech companies and India created a $160 billion IT services industry on top of tools
created by Silicon Valley, anyone can use openly available AI tools to build sophisticated applications.
Innovation has now globalized, creating a level playing field—especially in AI.
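A toy illustration of the card's point that today's AI "is only as good as the data it receives" and cannot interpret inputs outside its supplied context. This is a generic nearest-centroid classifier on invented data, not any of the systems Wadhwa describes:

```python
# Hypothetical data and a deliberately simple learner: a nearest-centroid
# classifier only knows the patterns in its narrow training set, so an
# out-of-context input is still forced into one of the known labels --
# it has no notion of "I don't know".

def centroid(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

def train(examples):
    # examples: {label: list of feature vectors}
    return {label: centroid(rows) for label, rows in examples.items()}

def predict(model, x):
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda label: dist(model[label], x))

# Narrow, clearly labeled training data: [roundness, shininess]
examples = {
    "banana":  [[0.10, 0.20], [0.20, 0.10], [0.15, 0.25]],
    "toaster": [[0.80, 0.90], [0.90, 0.80], [0.85, 0.95]],
}
model = train(examples)

print(predict(model, [0.12, 0.20]))  # "banana" -- inside the training context
print(predict(model, [0.50, 3.00]))  # an input unlike anything it has seen is
                                     # still confidently labeled "toaster"
```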
No Hegemony Impact---2NC
No hegemony impact---empirics and political psychology prove US posture is
unrelated to great power peace
Christopher Fettweis 17, associate professor of political science at Tulane University. 5/8/17,
“Unipolarity, Hegemony, and the New Peace”
http://www.tandfonline.com/doi/pdf/10.1080/09636412.2017.1306394?needAccess=true

After three years in the White House, Ronald Reagan had learned something surprising: “Many people
at the top of the Soviet hierarchy were genuinely afraid of America and Americans ,” he wrote in his
autobiography. He continued: “Perhaps this shouldn’t have surprised me, but it did … I’d always felt that from our deeds it must be clear to
anyone that Americans were a moral people who starting at the birth of our nation had always used our power only as a force for good in the
world…. During
my first years in Washington, I think many of us took it for granted that the Russians, like
ourselves, considered it unthinkable that the United States would launch a first strike against them .” 100
Reagan is certainly not alone in believing in the essential benevolent image of his nation. While it is
common for actors to attribute negative motivations to the behavior of others, it is exceedingly difficult
for them to accept that anyone could interpret their actions in negative ways. Leaders are well aware of
their own motives and tend to assume that their peaceful intentions are obvious and transparent .

Both strains of the hegemonic-stability explanation assume not only that US power is benevolent, but
that others perceive it that way. Hegemonic stability depends on the perceptions of other states to be
successful; it has no hope to succeed if it encounters resistance from the less powerful members of the
system, or even if they simply refuse to follow the rules . Relatively small police forces require the
general cooperation of large communities to have any chance of establishing order. They must perceive
the sheriff as just, rational, and essentially nonthreatening. The lack of balancing behavior in the system, which has been
puzzling to many realists, seems to support the notion of widespread perceptions of benevolent hegemony.101 Were they threatened by the
order constructed by the United States, the argument goes, smaller states would react in ways that reflected their fears. Since internal and
external balancing accompanied previous attempts to achieve hegemony, the absence of such behavior today suggests that something is
different about the US version.

Hegemonic-stability theorists purport to understand the perceptions of others, at times better than
those others understand themselves. Complain as they may at times, other countries know that the United States is acting in the
common interest. Objections to unipolarity, though widespread, are not “very seriously intended,” wrote Kagan, since “the truth about
America’s dominant role in the world is known to most observers. And the truth is that the benevolent hegemony exercised by the United
States is good for a vast portion of the world’s population.” 102 In
the 1990s, Russian protests regarding NATO expansion
—though nearly universal—were not taken seriously, since US planners believed the alliance’s
benevolent intentions were apparent to all. Sagacious Russians understood that expansion would actually be beneficial, since it
would bring stability to their western border.103 President Clinton and Secretary of State Warren Christopher were
caught off guard by the hostility of their counterparts regarding the issue at a summit in Budapest in
December 1994.104 Despite warnings from the vast majority of academic and policy experts about the
likely Russian reaction and overall wisdom of expansion itself, the administration failed to anticipate
Moscow’s position.105 The Russians did not seem to believe American assurances that expansion would
actually be good for them. The United States overestimated the degree to which others saw it as
benevolent.

Once again, the culture of the United States might make its leaders more vulnerable to this
misperception. The need for positive self-regard appears to be particularly strong in North American
societies compared to elsewhere.106 Western egos tend to be gratified through self-promotion rather than
humility, and independence rather than interdependence . Americans are more likely to feel good if they
are unique rather than a good cog in society’s wheel, and uniquely good. The need to be perceived as
benevolent, though universal, may well exert stronger encouragement for US observers to project their perceptions onto others.

The United States almost certainly frightens others more than its leaders perceive. A quarter of the
68,000 respondents to a 2013 Gallup poll in sixty-five countries identified the United States as the
“greatest threat to world peace,” which was more than three times the total for the second-place
country (Pakistan).107 The international community always has to worry about the potential for police
brutality, even if it occurs rarely . Such ungratefulness tends to come as a surprise to US leaders. In 2003, Condoleezza Rice was
dismayed to discover resistance to US initiatives in Iraq: “There were times,” she said later, “that it appeared that American power was seen to
be more dangerous than, perhaps, Saddam Hussein.” 108 Both liberals and neoconservatives probably exaggerate the extent to which US
hegemony is everywhere secretly welcomed;
it is not just petulant resentment, but understandable disagreement
with US policies, that motivates counterhegemonic beliefs and behavior.

To review, assuming for a moment that US leaders are subject to the same forces that affect every
human being, they overestimate the amount of control they have over other actors, and are not as
important to decisions made elsewhere as they believe themselves to be. And they probably perceive
their own benevolence to be much greater than do others. These common phenomena all influence US
beliefs in the same direction, and may well increase the apparent explanatory power of hegemony
beyond what the facts would otherwise support. The United States is probably not as central to the New
Peace as either liberals or neoconservatives believe.
In the end, what can be said about the relationship between US power and international stability? Probably not much that will satisfy partisans,
and the pacifying virtue of US hegemony will remain largely an article of faith in some circles in the policy world. Like most beliefs, it will remain
immune to alteration by logic and evidence. Beliefs rarely change, so debates rarely end.

For those not yet fully converted, however, perhaps it will be significant that corroborating evidence for the
relationship is extremely hard to identify. If indeed hegemonic stability exists, it does so without leaving
much of a trace. Neither Washington’s spending, nor its interventions, nor its overall grand strategy seem to
matter much to the levels of armed conflict around the world (apart from those wars that Uncle Sam starts). The
empirical record does not contain strong reasons to believe that unipolarity and the New Peace are
related, and insights from political psychology suggest that hegemonic stability is a belief particularly
susceptible to misperception. US leaders probably exaggerate the degree to which their power matters, and could retrench without
much risk to themselves or the world around them. Researchers will need to look elsewhere to explain why the world
has entered into the most peaceful period in its history.

The good news from this is that the New Peace will probably persist for quite some time, no matter how dominant the
United States is, or what policies President Trump follows, or how much resentment its actions cause in the periphery.
The people of the twenty-first century are likely to be much safer and more secure than any of their
predecessors, even if many of them do not always believe it.
Heg Bad---China

Attempting to maintain primacy guarantees a China war that goes nuclear---biggest war in history---interdependence doesn’t check and no impact defense applies
Hugh White 10, professor of strategic studies at Australian National University, Canberra, and visiting
fellow at the Lowy Institute of International Policy in Sydney, has served as a senior Defense official in
the Australian government, September 2010, “Australia's future between Washington and Beijing,”
Quarterly Essay, No. 39, p. 36

Those shrugs express America's third option: escalating competition with China. As China grows, America
faces a choice of Euclidian clarity. If it will not withdraw from Asia, and if it will not share power with
China, America must contest China's challenge to its leadership. That choice carries great costs--much
greater, I think, than most Americans yet realise. Those costs would be justified if China tried to misuse its power to subjugate Asia. There
is a risk, however, that America will slide into conflict with China, not to prevent Chinese hegemony but to
preserve its own. Would it be worth making an adversary of China to avoid surrendering primacy and
joining a Concert of Asia? If questions like this are soberly examined, the answer is almost certainly no , but sober examination
is hard to arrange when one country challenges another. Emotions become engaged, and antagonism becomes the default setting.

The optimists push back against this gloom, arguing that Washington and Beijing both understand that their
relationship is too important to both of them to allow strategic issues to upset it. As we have noted, economic
interdependence does provide a huge incentive to keep the relationship positive and stable. But
powerful forces push the other way, and America's hard choice between withdrawing from Asia, sharing power or
competing with China must still be faced. The economic incentives will shape this choice, but not make it any easier. Economics will
push both sides towards a power-sharing deal of some sort, but they will both still have to make big
political sacrifices to get there. It is far from clear that they will make those sacrifices , especially if--in
America's case, particularly--they do not yet clearly realise why they are worth making. The US risks drifting into strategic
rivalry against China without weighing the costs .
What are those costs? What would rivalry between the US and China mean? We cannot be sure precisely, but some things are clear.
China is already too powerful to be contained without intense and protracted pressure from America .
That means committing more forces to Asia, an intensifying nuclear confrontation and building a
bigger, more intense anti-China alliance in the region. Even if America does all this, China is unlikely simply to succumb. It
would mount a determined and sustained resistance. The resulting antagonism could soon develop its own
momentum, as each country reacted to the other. Military capabilities on both sides would grow
quickly. Competition for influence and military bases in third countries would intensify , and it would be
harder and harder for other countries to avoid taking sides. Asia would again face the prospect of a deep division between camps aligned
with one or other of the two strongest powers. The
conflict between these camps would inhibit trade, investment and
travel, with immense economic costs. And there would be a real and growing risk of major war--even
nuclear war--between them.
All of this sounds rather gloomy and surprising, because we do not have recent experience of serious strategic competition between really
strong states. We have to go back to the last century for examples of how it might develop--the Cold War confrontation between the US
and the Soviet Union, the European maelstroms of the first half of the twentieth century, or Asia's wars up until the 1970s. It would be
wrong to assume that any increase in tension must lead to this kind of disaster, but it would be equally wrong to assume that Asia could
never get that bad again. Any conflict between the US and China has a real chance of going nuclear. Nuclear
war between the US and China would not be as bad as the holocaust we feared in the Cold War, but it could still quickly
become the most deadly war in history. The stakes in Asia are very high indeed.

Only decline solves this---attempting to maintain heg in Asia guarantees instability and
U.S.-China conflict
Hugh White 10, professor of strategic studies at Australian National University, Canberra, and visiting
fellow at the Lowy Institute of International Policy in Sydney, has served as a senior Defense official in
the Australian government, September 2010, “Australia's future between Washington and Beijing,”
Quarterly Essay, No. 39, p. 1-36

Asia's security and Australia's future depend not just on the choices China might make, but on America's
choices too. Even if China overtakes it economically over the next few decades, the US will remain the second-strongest country in the
world for a long time to come, and by far the most serious constraint on Chinese power. The way America chooses to use its
power is as important as anything China decides , and America's choices may be harder than China's. A peaceful
new order in Asia to accommodate China's growing power can only be built if America is willing to
allow China some political and strategic space. Such concessions do not often happen. History offers few examples of a rising
power finding its place in the international order without a war with the dominant power. Conflict is only avoided when the dominant
power willingly makes space for the challenger, as Britain made way for America in the late nineteenth century. Will America do the same
for China? Should it?

As America confronts these questions, it too faces a choice between influence and order. Like China, it wants as much influence as it can
get, with as little disorder as possible, so it has
to balance its desire for Asia to remain peaceful against its desire
to remain in charge. Washington has not faced this choice before. Since Nixon went to China, US primacy has
been synonymous with order, and the more influence America has had, the more stable Asia has
been. Now China's rise means that the region might be more peaceful if America settles for a more
modest role. If instead America tries to retain primacy in the face of China's power, it will provoke a
struggle that upsets the region. It would be sacrificing Asia's peace to preserve its own primacy.
America could easily find itself doing just that. After being in charge for forty years, many Americans cannot imagine that Asia can be
peaceful except under American leadership. Conceding even a share of power to another country looks risky, and especially conceding
power to China. It is easy to see any desire by China to expand its influence as inherently threatening, and the more repressive and
authoritarian China's government appears, the more threatening it looks. No one can be comfortable about a regime that represses dissent
at home exercising more power abroad. But what is the alternative? Forty years ago Washington--and Canberra--decided to accept the
Chinese Communist Party as the legitimate government of China. Since then, and partly as a result, China has grown to become a very
powerful country indeed. As America continues to deal with China and to benefit from its growth, it faces the consequences of those
decisions. Some of those are unpalatable. While continuing to accept the communists as the legitimate government of China internally,
many Americans would now prefer to deny that China's government can legitimately exercise its power internationally.

Unfortunately, Americans do not get to make that kind of choice now. They cannot separate China's internal government from the exercise of
its international power. China's power, controlled by China's government, must be dealt with as a simple fact of international politics. If
Americans deny China the right to exercise its power internationally within the same limits and norms
that they accept for themselves, they can hardly be surprised if China decides not to accept the
legitimacy of American power and starts pushing back. These days it can push back pretty hard.
Sustainability---Warming Turns---Ext
Warming wrecks GDP.
David Wallace-Wells 19. National Fellow at New America, deputy editor of New York Magazine.
02/19/2019. “II. Elements of Chaos.” The Uninhabitable Earth: Life After Warming, Crown/Archetype.

You do not have to believe that economic growth is a mirage produced by fossil fumes to worry that
climate change is a threat to it—in fact, this proposition forms the cornerstone around which an entire
edifice of academic literature has been built over the last decade . The most exciting research on the economics of
warming has come from Solomon Hsiang and Marshall Burke and Edward Miguel, who are not historians of fossil capitalism but who offer some
very bleak analysis of their own: in a country that’s already relatively warm, every degree Celsius of warming reduces
growth, on average, by about one percentage point (an enormous number, considering we count growth in the low single digits as
“strong”). This is the sterling work in the field. Compared to the trajectory of economic growth with no climate
change, their average projection is for a 23 percent loss in per capita earning globally by the end of this
century. Tracing the shape of the probability curve is even scarier. There is a 51 percent chance, this
research suggests, that climate change will reduce global output by more than 20 percent by 2100, compared with a
world without warming, and a 12 percent chance that it lowers per capita GDP by 50 percent or more by then, unless
emissions decline. By comparison, the Great Depression dropped global GDP by about 15 percent, it is estimated
—the numbers weren’t so good back then. The more recent Great Recession lowered it by about 2 percent, in a
onetime shock; Hsiang and his colleagues estimate a one-in-eight chance of an ongoing and irreversible
effect by 2100 that is twenty-five times worse. In 2018, a team led by Thomas Stoerk suggested that these estimates
could be dramatic underestimates. The scale of that economic devastation is hard to comprehend. Even within the postindustrial
nations of the wealthy West, where economic indicators such as the unemployment rate and GDP growth circulate as though they contain the
whole meaning of life in them, figures like these are a little bit hard to fathom; we’ve become so used to economic stability and reliable growth
that the entire spectrum of conceivability stretches from contractions of about 15 percent, effects we study still in histories of the Depression,
to growth about half as fast—about 7 percent, which the world as a whole last achieved during the global boom of the early 1960s. These are
exceptional onetime peaks and troughs, extending for no more than a few years, and most of the time we measure economic fluctuations in
ticks of decimal points—2.9 this year, 2.7 that. What climate change proposes is an economic setback of an entirely different category. The
breakdown by country is perhaps even more alarming. Thereare places that benefit, in the north , where warmer temperatures
can improve agriculture and economic productivity: Canada, Russia, Scandinavia, Greenland. But
in the mid-latitudes, the
countries that produce the bulk of the world’s economic activity—the United States, China—lose nearly
half of their potential output. The warming near the equator is worse, with losses throughout Africa,
from Mexico to Brazil, and in India and Southeast Asia approaching 100 percent. India alone, one study
proposed, would shoulder nearly a quarter of the economic suffering inflicted on the entire world by
climate change. In 2018, the World Bank estimated that the current path of carbon emissions would sharply diminish the living conditions
of 800 million living throughout South Asia. One hundred million, they say, will be dragged into extreme poverty by climate change just over the
next decade. Perhaps “back into” is more appropriate: many of the most vulnerable are those populations that have just extracted themselves
from deprivation and subsistence living, through developing-world growth powered by industrialization and fossil fuel. And
to help
buffer or offset the impacts, we have no New Deal revival waiting around the corner, no Marshall Plan
ready. The global halving of economic resources would be permanent, and, because permanent, we would
soon not even know it as deprivation, only as a brutally cruel normal against which we might measure
tiny burps of decimal-point growth as the breath of a new prosperity . We have gotten used to setbacks on our
erratic march along the arc of economic history, but we know them as setbacks and expect elastic recoveries. What climate change has in store
is not that kind of thing—not
a Great Recession or a Great Depression but, in economic terms, a Great Dying. —
How could that come to be? The answer is partly in the preceding chapters— natural disaster, flooding,
public health crises. All of these are not just tragedies but expensive ones, and beginning already to accumulate at an
unprecedented rate. There is the cost to agriculture: more than three million Americans work on more than two million farms; if
yields decline by 40 percent, margins will decline, too, in many cases disappearing entirely, the small
farms and cooperatives and even empires of agribusinesses slipping underwater (to use the oddly apposite
accountant’s metaphor) and drowning under debt all those who own and work those arid fields, many of them old enough to remember the
same plains’ age of plenty. And then there is the real flooding: 2.4 million American homes and businesses, representing more
than $1 trillion in present-day value, will suffer chronic flooding by 2100, according to a 2018 study by the Union of Concerned Scientists.
Fourteen percent of the real estate in Miami Beach could be flooded by just 2045. This is just within America, though it isn’t only South Florida;
in fact, over the next few decades, the real-estate impact will be almost $30 billion in New Jersey alone. There
is a direct heat cost to
growth, as there is to health. Some of these effects we can see already—for instance, the warping of train tracks or the
grounding of flights due to temperatures so high that they abolish the aerodynamics that allow planes to take
off, which is now commonplace at heat-stricken airports like the one in Phoenix. (Every round-trip plane ticket from New York
to London, keep in mind, costs the Arctic three more square meters of ice.) From Switzerland to Finland, heat waves have
necessitated the closure of power plants when cooling liquids have become too hot to do their job. And in India, in 2012,
670 million lost power when the country’s grid was overwhelmed by farmers irrigating their fields
without the help of the monsoon season, which never arrived. In all but the shiniest projects in all but the
wealthiest parts of the world, the planet’s infrastructure was simply not built for climate change, which
means the vulnerabilities are everywhere you look. Other, less obvious effects are also visible—for instance,
productivity. For the past few decades, economists have wondered why the computer revolution and the internet have not brought
meaningful productivity gains to the industrialized world. Spreadsheets, database management software, email—these innovations alone
would seem to promise huge gains in efficiency for any business or economy adopting them. But those gains simply haven’t materialized; in
fact, the economic period in which those innovations were introduced, along with literally thousands of similar computer-driven efficiencies,
has been characterized, especially in the developed West, by wage and productivity stagnation and dampened economic growth. One
speculative possibility: computers have made us more efficient and productive, but at the same time climate
change has had the opposite effect, diminishing or wiping out entirely the impact of technology. How could this be? One
theory is the negative cognitive effects of direct heat and air pollution, both of which are accumulating more
research support by the day. And whether or not that theory explains the great stagnation of the last several decades, we do know that,
globally, warmer temperatures do dampen worker productivity. The claim seems both far-fetched and intuitive, since,
on the one hand, you don’t imagine a few ticks of temperature would turn entire economies into zombie markets, and since, on the other, you
yourself have surely labored at work on a hot day with the air-conditioning out and understand how hard that can be. The bigger-picture
perspective is harder to swallow, at least at first. It may sound like geographic determinism, but Hsiang, Burke, and Miguel have identified an
optimal annual average temperature for economic productivity: 13 degrees Celsius, which just so happens to be the historical median for the
United States and several other of the world’s biggest economies. Today, the U.S. climate hovers around 13.4 degrees, which translates into
less than 1 percent of GDP loss—though, like compound interest, the effects grow over time. Of course, as the country has warmed over the
last decades, particular regions have seen their temperatures rise, some of them from suboptimal levels to something closer to an ideal setting,
climate-wise. The greater San Francisco Bay Area, for instance, is sitting pretty right now, at exactly 13 degrees. This is what it means to suggest
that climate change is an enveloping crisis, one that touches every aspect of the way we live on the planet today. But the world’s suffering will
be distributed as unequally as its profits, with great divergences both between countries and within them. Already-hot countries like India and
Pakistan will be hurt the most; within the United States, the costs will be shouldered largely in the South and Midwest, where some regions
could lose up to 20 percent of county income. Overall, though it will be hit hard by climate impacts, the United States is among the most well-
positioned to endure—its wealth and geography are reasons that America has only begun to register effects of climate change that already
plague warmer and poorer parts of the world. But in
part because it has so much to lose, and in part because it so
aggressively developed its very long coastlines, the U.S. is more vulnerable to climate impacts than any
country in the world but India, and its economic illness won’t be quarantined at the border . In a globalized
world, there is what Zhengtao Zhang and others call an “economic ripple effect.” They’ve also quantified it, and
found that the impact grows along with warming. At one degree Celsius, with a decline in American GDP of 0.88 percent,
global GDP would fall by 0.12 percent, the American losses cascading through the world system. At two degrees, the economic ripple effect
triples, though here, too, the effects play out differently in different parts of the world; compared to the impact of American losses at one
degree, at two degrees the economic ripple effect in China would be 4.5 times larger. The
radiating shock waves issuing out
from other countries are smaller because their economies are smaller, but the waves will be coming
from nearly every country in the world, like radio signals beamed out from a whole global forest of
towers, each transmitting economic suffering. For better or for worse, in the countries of the wealthy West we have settled
on economic growth as the single best metric, however imperfect, of the health of our societies. Of course, using that metric, climate change
registers—with its wildfires and droughts and famines, it registers seismically. The costs are astronomical already, with single hurricanes now
delivering damage in the hundreds of billions of dollars. Should the planet warm 3.7 degrees, one assessment suggests, climate change
damages could total $551 trillion—nearly twice as much wealth as exists in the world today. We are on track for more warming still. Over the
last several decades, policy consensus has cautioned that the world would only tolerate responses to climate change if they were free— or,
even better, if they could be presented as avenues of economic opportunity. That market logic was probably always shortsighted, but over the
last several years, as the cost of adaptation in the form of green energy has fallen so dramatically, the equation has entirely flipped: we now
know that it will be much, much more expensive to not act on climate than to take even the most aggressive action today. If you don’t think of
the price of a stock or government bond as an insurmountable barrier to the returns you’ll receive, you probably shouldn’t think of climate
adaptation as expensive, either. In 2018, one paper calculated the global cost of a rapid energy transition, by 2030, to be negative $26 trillion—
in other words, rebuilding the energy infrastructure of the world would make us all that much money, compared to a static system, in only a
dozen years. Every day we do not act, those costs accumulate, and the numbers quickly compound. Hsiang,
Burke, and Miguel draw their 50 percent figure from the very high end of what’s possible—truly a worst-case scenario for economic growth
under the sign of climate change. But in 2018, Burke and several other colleagues published a major paper exploring the growth consequences
of some scenarios closer to our present predicament. In it, they considered one plausible but still quite optimistic scenario, in which the world
meets its Paris Agreement commitments, limiting warming to between 2.5 and 3 degrees. This is probably about
the best-case
warming scenario we might reasonably expect; globally, relative to a world with no additional warming, it would cut per-capita
economic output by the end of the century , Burke and his colleagues estimate, by between 15 and 25 percent.
Hitting four degrees of warming, which lies on the low end of the range of warming implied by our current emissions trajectory,
would cut into it by 30 percent or more. This is a trough twice as deep as the deprivations that scarred
our grandparents in the 1930s, and which helped produce a wave of fascism, authoritarianism, and
genocide. But you can only really call it a trough when you climb out of it and look back from a new peak, relieved.
There may not be any such relief or reprieve from climate deprivation, and though, as in any collapse, there will be
those few who find ways to benefit, the experience of most may be more like that of miners buried permanently at the bottom of a shaft.
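One way to see how numbers like these compound: a small, persistent drag on annual growth opens a large gap against the no-warming baseline by 2100. The 2% baseline growth rate and 0.3-percentage-point drag below are illustrative assumptions, not figures from the studies the card cites:

```python
# Illustrative compounding with assumed numbers (not the studies' model):
# a modest yearly growth drag accumulates into a large end-of-century gap
# versus a no-warming baseline path.

baseline_growth = 0.02  # 2% per year without climate change (assumption)
drag = 0.003            # 0.3 percentage points of growth lost per year (assumption)
years = 80

baseline = (1 + baseline_growth) ** years
damaged = (1 + baseline_growth - drag) ** years
print(f"output gap vs. no-warming path after {years} years: {1 - damaged / baseline:.0%}")
# prints roughly 21%, the same order of magnitude as the card's central ~23% estimate for 2100
```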
*Yes transition
Framing issue---use a low threshold---we only need to win a transition’s more likely
than a phase shift in physical limits, NOT that it’s absolutely likely
Lorenz T. Keyßer & Manfred Lenzen 21, Keyßer, ISA, School of Physics A28, The University of Sydney;
Lenzen, Department of Environmental Systems Science, Institute for Environmental Decisions, ETH
Zürich, “1.5 °C Degrowth Scenarios Suggest the Need for New Mitigation Pathways,” Nature
Communications, vol. 12, no. 1, 1, Nature Publishing Group, 05/11/2021, p. 2676
Political and economic feasibility

Compared with technology-driven pathways, it is clear that a degrowth transition faces tremendous
political barriers9,49. As Kallis et al.9 state, currently (p. 18) ‘[a]bandoning economic growth seems politically impossible’, as it implies significant changes
to current capitalist socioeconomic systems in order to overcome its growth imperatives9,19,49. Degrowth, moreover, challenges deeply embedded cultures, values, mind-sets21 and power structures9,19. However, as Jewell & Cherp state, political feasibility is
softer than socio-technical feasibility25, with high actor motivation potentially compensating for low action
capacity and social change being complex, non-linear and essentially unpredictable50. Political feasibility further
depends to a large extent on social movements formulating and pushing for the implementation of political programs,
changing values, practices and cultures and building alternative institutions49,51 as well as scientists pointing the way to
alternative paradigms49. Consequently, degrowth implies modifications to the strategies for change, with a stronger focus on bottom–up social
movements9,19,49. As many research questions on degrowth remain open9,19 and the state of political feasibility can change with better knowledge about and awareness of alternative paradigms, strengthened social movements and a clearer
understanding about transition processes49,50,51, it is even more crucial to investigate degrowth pathways.
‘Economic feasibility’ usually refers to the monetary costs of a mitigation pathway, reported as share of GDP4,24. Here, many IAMs follow a cost-minimisation
approach in order to maximise economic welfare4,41, measured in GDP, by progressively implementing only the mitigation measures with the lowest marginal
abatement costs. From this perspective, degrowth is often considered economically inefficient, as the GDP loss is considered a cost and when weighted with the
avoided CO2 appears to be overly expensive compared to technological measures52. However, this reasoning presupposes a fictitious ‘optimal’ GDP growth path,
any negative deviation from which is a priori defined as a ‘cost’41. Importantly, GDP is not a neutral construct9. Thus, one needs to ask to whom costs occur, who
profits, whose contributions are in- or excluded and finally who should decide this9. So, even if this GDP loss is accepted as a ‘cost’, this reasoning compares two
categories that have very different, and partly incommensurable, welfare implications. For instance, the costs of replacing a coal plant with wind turbines (a
technological measure: creating jobs and reducing CO2, but using land and materials) are not directly monetarily comparable to the costs of producing, consuming
as well as working less (a GDP loss: polluting less, while using less resources and potentially leading to further positive social consequences, if well managed). To
have a more valid comparison between the two categories, one would need to monetise the whole variety of ecological and social impacts on different groups of
people and ecosystems, which is impossible at least without strong value judgements4,9. A more suitable perspective when dealing with climate justice issues in a
wellbeing context is a human needs provisioning approach16,20,53. The crucial question then becomes how, if GDP were to shrink as a result of the required
reductions in material and energy use and CO2 (the degrowth hypothesis), e.g., through stringent eco-taxes and/or caps9, this GDP decrease could be made socially
sustainable, i.e. safeguarding human needs and social function9,21. Here, research shows that in principle it is possible to achieve a high
quality of life with substantially lower energy use and GDP9,16,17,20. As noted in the introduction, however, substantial
socioeconomic changes would be necessary to avoid the effects of a recession. Moreover, the reductions and limits would need to be democratically
negotiated9,21,49 and consider potential ‘sufficiency rebound effects’54 (reduced consumption by some being compensated through increases by others), e.g., by
international coordination.

To summarise, as indicated by Fig. 5, the 1.5 °C degrowth scenarios have the lowest relative risk levels for socio-technical feasibility and sustainability, as they are
the only scenarios relying in combination on low energy-GDP decoupling, comparably low speed and scale of renewable energy replacing fossil fuels as well as
comparably low NETs and CCS deployment. When excluding any NETs and CCS deployment, the degrowth scenarios still show the lowest levels of energy-GDP
decoupling as well as speed and scale of renewable energy replacing fossil fuels, compared to the ‘IPCC’ and ‘Dec-Extreme’ pathways. As a drawback, degrowth
scenarios currently have comparably low socio-political feasibility and require radical social change. This conclusion holds as well for the 2 °C scenarios, albeit with
less extreme differences. Here, the ‘Degrowth-NoNNE‘ scenario, with ~0% p.a. global GDP growth, is almost aligned with historical data, in stark contrast to the
technology-driven scenarios without net negative emissions (see Supplementary Fig. 4).

Discussion

The results indicate that degrowth pathways exhibit the lowest relative risks for feasibility and sustainability when
compared with established IPCC SR1.5 pathways using our socio-technical risk indicators. In comparison, the higher the technological reliance of the assessed mitigation pathways, the higher the risks for socio-technical feasibility and sustainability. The reverse is likely the case for socio-political feasibility, which, however, is softer than socio-technical feasibility. This result contrasts
strongly with the absolute primacy of technology-driven IAM scenarios in the IPCC SR1.5. In what follows, we discuss limitations of our modelling approach and risk
indicators, implications for the IAM community and further research.

Our results face several limitations. Note that we use the carbon budget for a 50% chance to stay below 1.5 °C (ref. 2), which can be argued to be too low based on the precautionary principle, especially when considering that such scenarios still include a 10% chance of reaching catastrophic warming of 3 °C (ref. 55). Already increasing the chance for 1.5 °C to 66% lowers the available carbon budget by 160 GtCO2, while including earth system feedbacks lowers it by an additional ~100 GtCO2 (ref. 2). In addition, note that we do not consider CH4 and N2O emissions, for which technological mitigation is more problematic than for CO2. Including all these factors would substantially increase the mitigation challenges. Any such increase further strengthens the case for considering degrowth scenarios, since it becomes even more risky to solely rely on technology to accomplish the higher mitigation rates. Thus, complementing technology with far-reaching demand reductions through social change becomes even more necessary for 1.5 °C to remain feasible. This is especially the case when considering the softer nature of social feasibility compared with socio-technical feasibility. We nevertheless stress that feasibility is a highly complex concept that can be interpreted differently and, in the case of individual scenarios, remains at least in part subjective24. Therefore, a larger variety of indicators than ours is certainly necessary to arrive at a more complete picture of feasibility. However, we maintain that such research should explicitly consider degrowth scenarios, e.g., along the lines of the ‘Societal Transformation Scenario’ by Kuhnhenn et al.56 or the ‘SSP0’ scenario proposed by Otero et al.39. Especially in view of socio-political feasibility, we argue that not exploring them actually leads to a self-fulfilling prophecy: with research subjectively judging such scenarios as infeasible from the start, they remain marginalised in public discourse, thus inhibiting social change, thus letting them appear as even more infeasible to the scientist and so on. As McCollum et al.57 and Pye et al.58 argue, modellers have a collective responsibility to evaluate the full spectrum of future possibilities, including scenarios commonly deemed politically unlikely.
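[Editor's note: to make the budget adjustments quoted above concrete, here is a rough worked example. The starting value of ~580 GtCO2 (the IPCC SR1.5 headline estimate, from 2018, for a 50% chance of staying below 1.5 °C) and the symbols B are assumptions added for illustration; the card itself states only the two downward adjustments.]

\begin{align*}
B_{50\%} &\approx 580\ \mathrm{GtCO_2} \\
B_{66\%} &\approx 580 - 160 = 420\ \mathrm{GtCO_2} \\
B_{66\%,\ \text{with Earth-system feedbacks}} &\approx 420 - 100 = 320\ \mathrm{GtCO_2}
\end{align*}

[On these assumed figures, the usable budget shrinks by roughly 45%, which is the sense in which the authors say the mitigation challenge would "substantially increase".]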

A further limitation of this study is our simplified quantitative model, which only addresses the fuel-energy-emissions nexus top–down. This enhances transparency and understanding and is suited for the purpose of this study by allowing relative feasibility to be assessed. Moreover, it enables modelling pathways currently excluded by the IPCC IAMs, avoiding the difficulties and complexities with modelling degrowth (see below and Methods). Nevertheless, our model neglects the monetary sector22, connections between energy and material availability and economic growth23,34 as well as the bottom–up energy and material requirements for decent living standards20. This potentially renders some scenarios infeasible, despite our efforts to qualitatively include these factors in our above treatment of feasibility. Therefore, our simplified modelling approach can only be a first step towards exploring degrowth scenarios and needs to be complemented by more complex modelling.
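[Editor's note: to show what a "top-down fuel-energy-emissions" calculation of this kind can look like, here is a minimal Python sketch. It is not the authors' model: the function name kaya_cumulative_co2, the starting values, the scenario labels and every rate below are illustrative assumptions chosen only to expose the structure of such a calculation.]

# Toy top-down fuel-energy-emissions sketch (Kaya-identity style).
# NOT the model used in the study: all starting values and rates are
# illustrative assumptions, not figures taken from the paper.

def kaya_cumulative_co2(years, gdp_growth, energy_intensity_decline, carbon_intensity_decline,
                        gdp=85.0, energy_per_gdp=6.8, co2_per_energy=0.064):
    """Cumulative CO2 (GtCO2) over `years`.

    gdp: trillion US$; energy_per_gdp: EJ per trillion US$; co2_per_energy: GtCO2 per EJ.
    The three rate arguments are fractional changes per year.
    """
    total = 0.0
    for _ in range(years):
        total += gdp * energy_per_gdp * co2_per_energy  # annual emissions, GtCO2
        gdp *= 1.0 + gdp_growth
        energy_per_gdp *= 1.0 - energy_intensity_decline
        co2_per_energy *= 1.0 - carbon_intensity_decline
    return total


budget = 580.0  # assumed remaining budget (GtCO2) for a 50% chance of 1.5 degrees C
scenarios = {
    "tech-driven growth (hypothetical)": dict(gdp_growth=0.02,
                                              energy_intensity_decline=0.03,
                                              carbon_intensity_decline=0.07),
    "degrowth (hypothetical)":           dict(gdp_growth=-0.005,
                                              energy_intensity_decline=0.015,
                                              carbon_intensity_decline=0.05),
}
for name, rates in scenarios.items():
    total = kaya_cumulative_co2(years=30, **rates)
    print(f"{name}: {total:5.0f} GtCO2 cumulative vs. an assumed budget of {budget:.0f} GtCO2")

[Even a sketch this small makes the trade-off the authors describe visible: the lower the assumed GDP growth, the less steeply energy and carbon intensity must fall for cumulative emissions to stay inside a given budget.]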

To our knowledge, no in-depth study examining the reasons for the omission of degrowth scenarios in mainstream IAM modelling exists (but see ref. 4). Such modelling
is highly challenging, partly because a degrowth society would function differently compared to the current society. Thus, model parameters and structures based
on past data could no longer be valid59. Furthermore, it would need to recognise that GDP is an inadequate indicator for societal wellbeing, at least in affluent
countries. Instead, the focus needs to be oriented directly at multidimensional human needs satisfaction9,18,53. This is especially important given that many
degrowth proposals include a strengthening of non-monetary work, such as care work and community engagement, as well as decommodification of economic
activity towards sharing, gifting and commons9,59. This also implies revisiting the widespread, neoclassical economic optimisation approach in IAMs4,23,59. More
plural economic perspectives would need to be taken into account to gain a fuller picture of socioeconomic reality22,59,60, e.g., post-Keynesian, ecological and
Marxian economics. Such modelling would also need to broaden the considered portfolio of demand-side measures and behavioural changes4,61,62. Lastly, it is
clear that the biophysical foundation of economic activity and energy efficiency rebound effects need to be considered in much greater detail8,23,27. The necessary
detailed discussion of how exactly IAMs would need to change to incorporate some of these features is beyond the scope of this paper, but such discussions are
already under way in the literature8,27,58,61,62 and could be further inspired by current efforts in ecological macroeconomic modelling59. Promising
developments in these directions are put forward by the MEDEAS IAM modelling framework, which connects biophysical economic insights, system dynamics and
input–output analysis23,34. Another recent example is the EUROGREEN model, combining post-Keynesian and ecological economics in a system dynamics stock-
flow consistent framework to assess socio-ecological consequences of national degrowth and green growth scenarios22.

In light of the optimism of IAM mitigation scenarios regarding technological change3,8,27, NETs2,46 as well as the
neglect of the wider ecological crisis5,6,7,39 and equity issues33,40,46, it should be a priority to explore
alternative scenarios. Clearly, degrowth would not be an easy solution, but, as indicated by our results, it would
substantially minimise many key risks for feasibility and sustainability compared with established, technology-
driven pathways. Therefore, it should be as widely and thoroughly considered and debated as are comparably risky technology-driven pathways.

Several specific warrants:


1) LATENT MOVEMENTS---dozens of examples are gathering steam
They can build on the foundation of the AFF to generate broad buy-in.
Samuel Alexander 15, lecturer at the Office for Environmental Programs, University of Melbourne,
“10 The Deep Green Alternative: Debating Strategies of Transition,” Sufficiency Economy, Simplicity
Institute, 2015, pg 270-272
In many ways this final ‘pathway’ could be built into all of the previous perspectives, because none of the theorists considered above (especially
the DGR camp) would think that the transition to a deep green alternative could ever be smooth, rational, or painless. Even many radical
reformers, whose strategy involves working within the institutions of liberal democracy rather than subverting or ignoring them, clearly expect
political conflict and economic difficulties to shape the pathway to the desired alternative (Gilding, 2011). Nevertheless, for those who are
deeply pessimistic about the likelihood of any of the previous strategies actually giving rise to a deep green alternative (however coherent or
well justified they may be), there remains the possibility that some such alternative could arise not by design so much as
by disaster. In other words, it is worth considering whether a crisis situation – or a series of crises – could either (i) force an alternative way of life upon us; or (ii) be the provocation needed for cultures or politicians to take radical alternatives seriously. Those two possibilities will now be considered briefly, in turn.

As industrial civilisation continues its global expansion and pursues growth without apparent limit, the possibility of economic, political, or ecological crises forcing an alternative way of life upon humanity seems to be growing in likelihood (Ehrlich and Ehrlich, 2013). That is, if the existing model of global development is not stopped via one of the pathways reviewed
above, or some other strategy, then it seems clear enough that at some point in the future, industrial civilisation will
grow itself to death (Turner, 2012). Whether ‘collapse’ is initiated by an ecological tipping point, a financial breakdown of an overly
indebted economy, a geopolitical disruption, an oil crisis, or some confluence of such forces, the possibility of collapse or deep global crisis can
no longer be dismissed merely as the intellectual playground for ‘doomsayers’ with curdled imaginations. Collapse is a prospect that ought to
be taken seriously based on the logic of limitless growth on a finite planet, as well as the evidence of existing economic, ecological, or more
specifically climatic instability. As Paul Gilding (2011) has suggested, perhaps it is already too late to avoid some form of ‘great disruption’.

Could collapse or deep crisis be the most likely pathway to an alternative way of life? If it is, such a scenario must not be idealised or
romanticised. Fundamental change through crisis would almost certainly involve great suffering for many, and quite possibly significant
population decline through starvation, disease, or war. It is also possible that the ‘alternative system’ that a crisis produces is equally or even
more undesirable than the existing system. Nevertheless, it may be that this is the only way a post-growth or post-
industrial way of life will ever arise. The Cuban oil crisis, prompted by the collapse of the USSR, provides one such example of a deep societal transition that arose not from a political or social movement, but from sheer force of circumstances (Piercy et al., 2010). Almost overnight Cuba had a large proportion of its oil supply cut off, forcing the nation to move away from oil-dependent, industrialised modes of food production and instead take up local and organic systems – or perish. David
Holmgren (2013) published a deep and provocative essay, ‘Crash on Demand’, exploring the idea that a relatively small anti-consumerist
movement could be enough to destabilise the global economy, which is already struggling. This presents one means of bringing an end to the
status quo by inducing a voluntary crisis, without relying on a mass movement. Needless to say, should people adopt such a strategy, it would
be imperative to ‘prefigure’ the alternative society as far as possible too, not merely withdraw support from the existing society.

Again, one must not romanticise such theories or transitions. The Cuban crisis, for example, entailed much hardship. But it does expose the mechanisms by which crisis can induce significant societal change in ways that, in the end, are not always negative. In the face of a global crisis or breakdown, therefore, it could be that elements of the deep green vision (such as organic agriculture, frugal living, sharing, radical recycling, post-oil transportation, etc.) come to be forced upon humanity, in which case the question of strategy has less to do with avoiding a deep crisis or collapse (which may be inevitable) and more to do with negotiating the descent as wisely as possible. This is hardly a reliable path to the deep green alternative, but it presents itself as a possible path.
Perhaps a more reliable path could be based on the possibility that, rather than imposing an alternative way of life on a society through sudden
collapse, a deep crisis could provoke a social or political revolution in consciousness that opens up space for
the deep green vision to be embraced and implemented as some form of crisis management strategy.
Currently, there is insufficient social or political support for such an alternative, but perhaps a deep
crisis will shake the world awake. Indeed, perhaps that is the only way to create the necessary mindset.
After all, today we are hardly lacking in evidence of the need for radical change (Turner, 2012), suggesting that shock and response
may be the form the transition takes, rather than it being induced through orderly, rational planning, whether from ‘top down’ or
‘from below’. Again, this ‘nonideal’ pathway to a post-growth or post-industrial society could be built into the other strategies discussed above,
adding some realism to strategies that might otherwise appear too utopian. That is to say, it may be that only deep crisis will create
the social support or political will needed for radical reformism, eco-socialism, or ecoanarchism to emerge as social or
political movements capable of rapid transformation. Furthermore, it would be wise to keep an open and evolving mind
regarding the best strategy to adopt, because the relative effectiveness of various strategies may change over time, depending on how
forthcoming crises unfold.

It was Milton Friedman (1982: ix) who once wrote: ‘Only a crisis – actual or perceived – produces real change. When that
crisis occurs, the actions that are taken depend on the ideas that are lying around.’ What this ‘collapse’ or
‘crisis’ theory of change suggests, as a matter of strategy, is that deep green social and political
movements should be doing all they can to mainstream the practices and values of their alternative vision. By
doing so they would be aiming to ‘prefigure’ the deep green social, economic, and political structures, so far as that is possible, in the hope that deep green ideas and systems are alive and available when the crises hit. Although
Friedman obviously had a very different notion of what ideas should be ‘lying around’, the relevance of his point to this discussion is that in
times of crisis, the politically or socially impossible can become politically or socially inevitable (Friedman, 1982: ix); or, one might say, if not
inevitable, then perhaps much more likely.

It is sometimes stated that every crisis is an opportunity – from which the optimist infers that the more crises
there are, the more opportunities there are. This may encapsulate one of the most realistic forms of hope we
have left.

2) CULTURE SHIFT---deep crisis forces behavioral changes that are self-perpetuating and make the transition desirable.
Samuel Alexander 17, lecturer with the Office for Environmental Programs, University of Melbourne,
and research fellow, Melbourne Sustainable Society Institute, 2017, “Frugal Abundance in an Age of
Limits: Envisioning a Degrowth Economy,” in Transitioning to a Post-Carbon Society, p. 159-161
Introduction

This chapter considers whether, or to what extent, different forms of “austerity” exist, or could exist, in relation to material standards of living.
Could an austerity externally imposed be experienced very differently from an austerity voluntarily embraced? The analysis seeks to show,
somewhat paradoxically, perhaps, that although reduced consumption and production within existing capitalist economies tends to impact
negatively on social well-being—representing one form of “austerity”—reduced consumption and production within
different economic frameworks, and within different value systems, could open up space for a positive, enriching
form of austerity. This latter form of austerity, it will be argued, has the potential to increase social and ecological
well-being in an age of environmental limits (Meadows et al. 2004; Jackson 2009; Turner 2014). It is extremely important, of
course, that these two austerities are not confused, and the present inquiry into the potential for enriching forms of austerity must not be
interpreted as defending the neoliberal or capitalist forms of austerity being implemented in many economies today (see e.g. Hermann 2014;
Pollin 2013). A distinction will be made, therefore, between an austerity of degrowth—which will be the focus of this analysis—and a capitalist
austerity.

Even a cursory inquiry into the definition of austerity highlights the various ways this term can be understood. In
recent years this
notion has been used almost exclusively to refer to a macro-economic policy of crisis management
provoked by the global financial crisis, where governments cut social services in an attempt to reduce budget
deficits and stimulate growth (see e.g. Ivanova 2013). One online dictionary defines austerity as a “severe and rigid economy”, and
that is certainly how many people would experience austerity under capitalism today. Note how austerity in this sense is oblivious to the limits
to growth critique. Far from trying to move beyond the growth paradigm, austerity under capitalism is defended on the grounds that it will help
get the engine of growth started again.

But this is a relatively new way of understanding austerity. Prior to the global financial crisis, austerity did not refer primarily to a strict macroeconomic policy that cut social services. Instead, online dictionary
definitions define austerity as “simple or plain”, “not fancy”, “unadorned”, or “a situation where money
is spent only on things that are necessary”. In this very different sense of austerity, the term can be understood as a
synonym for frugality or simplicity of living (see Alexander and McLeod 2014), and it is this second form of austerity that will
be the focus of this chapter. It is a form of austerity that is arguably necessary in an age of limits—necessary, that is,
if we are to turn current economic and environmental crises into opportunities by way of a degrowth
transition (Latouche 2009; Schneider et al. 2010; Kallis 2011; Alexander 2015).
Among other things, a degrowth transition will involve examining or reexamining what is truly necessary to live a dignified life, as well as letting
go of so much of what is superfluous and wasteful in consumer societies today (Vale and Vale 2013; Hamilton and Denniss 2005). A strong but
perhaps counter-intuitive case can be made that the wealthiest regions of the world can get by with a far lower material standard of living and
yet increase quality of life (Alexander 2012a; Trainer 2012; Schor 2010; Wilkinson and Pickett 2010), and this is the paradox of simplicity that
lies at the heart of what I am calling an “austerity of degrowth”. A degrowth economy may be “austere” (but sufficient) in a material sense,
especially in comparison to the cultures of consumption prevalent in developed regions of the world today. But such austerity could also
liberate those developed or over-developed societies from the shackles of consumerist cultures (Kasser 2002),
freeing them from materialistic conceptions of the good life and opening up space for seeking prosperity
in various non-materialistic forms of satisfaction and meaning.
Serge Latouche (2014) writes of degrowth as being a society of “frugal abundance”, but what would this look like and how would it be
experienced in daily life? The degrowth movement to date has focused a great deal on the macro-economic and political dimensions of
“planned economic contraction” (Alexander 2012b), but less attention has been given to the implications such contraction would have on our
lives, at the personal and community levels. Consequently, this area of neglect calls for closer examination, because it is at the personal and
community levels where degrowth would be experienced, first and foremost. Indeed, an inquiry into the lived reality of degrowth may be one
of the best ways of describing and understanding what we mean by degrowth, moving beyond vague abstractions or “top down” macro-
economic and political perspectives. In other words, we might gain a clearer understanding of degrowth by imagining someone mending their
clothes or sharing their hammer or bicycle in conditions of scarcity, than by imagining a new financial system or political framework.

Whatever the case, this chapter focuses on the former perspective and explores how an austerity of degrowth may be experienced at the
personal and social levels. This inquiry follows coherently from the various arguments in favour of degrowth that have been developing in
recent years, which have offered many compelling reasons why we should “degrow” (see generally, Latouche 2009; Alexander 2015). But it is
also important to explore more closely what degrowth would actually look like and how it might be experienced. After all, if
people
cannot envision the degrowth alternative with sufficient clarity, and see it as desirable, it is unlikely that a
large social movement will arise to bring a degrowth economy into existence.
No Impact---2NC
Economic downturn doesn’t cause war
Stephen M. Walt 20, the Robert and Renée Belfer professor of international relations at Harvard
University, 5/13/20, “Will a Global Depression Trigger Another World War?,”
https://foreignpolicy.com/2020/05/13/coronavirus-pandemic-depression-economy-world-war/

For these reasons, the pandemic itself may be conducive to peace. But what about the relationship between broader
economic conditions and the likelihood of war? Might a few leaders still convince themselves that provoking a crisis and going
to war could still advance either long-term national interests or their own political fortunes? Or are there other paths by which a deep and sustained
economic downturn might make serious global conflict more likely?

One familiar argument is the so-called diversionary (or “scapegoat”) theory of war. It suggests that leaders who are worried
about their popularity at home will try to divert attention from their failures by provoking a crisis with a foreign power and maybe even using
force against it. Drawing on this logic, some Americans now worry that President Donald Trump will decide to attack a country like Iran or
Venezuela in the run-up to the presidential election and especially if he thinks he’s likely to lose.

This outcome strikes me as unlikely, even if one ignores the logical and empirical flaws in the theory itself.
War is always a gamble, and should things go badly—even a little bit—it would hammer the last nail in the
coffin of Trump’s declining fortunes. Moreover, none of the countries Trump might consider going after pose an imminent threat
to U.S. security, and even his staunchest supporters may wonder why he is wasting time and money going after Iran or Venezuela at a moment
when thousands of Americans are dying preventable deaths at home. Even a successful military action won’t put Americans
back to work, create the sort of testing-and-tracing regime that competent governments around the world have been able to implement
already, or hasten the development of a vaccine. The same logic is likely to guide the decisions of other world leaders
too.

Another familiar folk theory is “military Keynesianism.” War generates a lot of economic demand, and it can
sometimes lift depressed economies out of the doldrums and back toward prosperity and full employment. The obvious
case in point here is World War II, which did help the U.S. economy finally escape the quicksand of the Great Depression. Those
who are convinced that great powers go to war primarily to keep Big Business (or the arms industry) happy are naturally drawn to this sort of
argument, and they might worry that governments looking at bleak economic forecasts will try to restart their economies through some sort of
military adventure.

I doubt it. It takes a really big war to generate a significant stimulus, and it is hard to imagine any country
launching a large-scale war—with all its attendant risks—at a moment when debt levels are already soaring. More
importantly, there are lots of easier and more direct ways to stimulate the economy—infrastructure spending,
unemployment insurance, even “helicopter payments”—and launching a war has to be one of the least efficient methods
available. The threat of war usually spooks investors too, which any politician with their eye on the stock market would be loath to do.

Economic downturns can encourage war in some special circumstances, especially when a war would enable a
country facing severe hardships to capture something of immediate and significant value. Saddam Hussein’s decision to seize
Kuwait in 1990 fits this model perfectly: The Iraqi economy was in terrible shape after its long war with Iran; unemployment was
threatening Saddam’s domestic position; Kuwait’s vast oil riches were a considerable prize; and seizing the lightly armed emirate was
exceedingly easy to do. Iraq also owed Kuwait a lot of money, and a hostile takeover by Baghdad would wipe those debts off the books
overnight. In this case, Iraq’s parlous economic condition clearly made war more likely.

Yet I
cannot think of any country in similar circumstances today. Now is hardly the time for Russia to try
to grab more of Ukraine—if it even wanted to—or for China to make a play for Taiwan, because the costs of
doing so would clearly outweigh the economic benefits. Even conquering an oil-rich country—the sort of greedy
acquisitiveness that Trump occasionally hints at—doesn’t look attractive when there’s a vast glut on the market. I might be worried if some
weak and defenseless country somehow came to possess the entire global stock of a successful coronavirus vaccine, but that scenario is not
even remotely possible.
