
E1 (The dangers of AI farming)

Imagine a long black shipping container, packed with living animals. You tip in some human food waste and walk away.
AI does the rest, controlling feeding and growth ‘so the
farmer does not have to’, as the company blurb puts it. What
are these animals inside – your animals? It’s not important.
You don’t need to know anything about them or have any
experience handling them. If problems arise, engineers can
troubleshoot them remotely. And when it’s time for
‘harvesting’, no need for a slaughterhouse: AI handles that
too. The animals live and die in a literal black box, only
leaving as a ready-made product.

The future of farming? No, the present: this is a description of the ‘X1’ insect farm developed by the UK startup Better
Origin. Of course, the farming of large animals, like pigs,
chickens and fishes, is usually a lot less high-tech than this.
Farms are not yet fully automated. But with the technology
advancing rapidly, trends towards increasing automation are
clear to see.

How much do we want AI to be involved in farming? The time for that conversation is now, before these trends are
irreversibly locked in. Now is the time to set reasonable
ethical limits.

What is AI used for now? Several different applications are starting to gain traction. All share the same basic vision of
placing AI at the centre of a control network, using it to
intelligently manage the data that flows in from an array of
automated sensors. The sensors may be placed on various
animal body parts and track body temperature, respiration,
heart rate, sound, even rectal temperature and bowel
movements. Other sensors monitor activities such as grazing,
ruminating, feeding and drinking, picking up signs of
lameness or aggression. Smart ear-tags allow farmers to
recognise animals individually and are sold on the promise of
more personalised care. AI can crunch the readings, images
and sounds to diagnose health problems and predict whether
they are likely to get better or worse. Meanwhile, other AI
products monitor and control environmental factors, such as
temperature and carbon dioxide levels. These tools aim to
predict and prevent disease outbreaks, with a special focus on
dangerous diseases like African swine fever. GPS trackers put
on animals and satellite images provide real-time location
information. This information, when handled by AI, allows
farmers to predict their cows’ grazing behaviour, manage
their pastures, and maintain soil vitality.

Put like this, these new developments may sound like great
news for animal welfare. Indeed, we want to present the case
for AI optimism as charitably as we can – before turning to
the problems. The optimists’ argument is simple. Farmed
animals are sentient beings, capable of feeling pleasure and
pain. Their wellbeing matters, and it can be positively or
negatively impacted by the way we treat them. Yet traditional,
AI-unassisted farming systematically misses many welfare
problems because human detection is not vigilant enough. AI
takes vigilance to the next level, helping farmers give their
animals good lives. In the dairy and beef industry, automated
sensors could spare cattle from undergoing intrusive and
unpleasant interventions at the hands of humans, like body
temperature measurement. Real-time location systems could
allow them to graze and explore their environment more
freely instead of living at the end of a tether. In the poultry
and pork industries, AI could help ensure that the average
chicken or pig is well fed and has enough water. Individual
health monitoring tools could also enable farmers to take care
of sick or injured animals quickly or euthanise those in pain.
Environmental sensors designed to predict disease outbreaks
would indirectly prevent the suffering and early death of
many animals. And all this can be sold to farmers as an
investment that is economically beneficial, since high levels of
death and disease are bad for business (think of how a disease
epidemic can rip through a flock of birds or a herd of pigs,
destroying profit margins along with lives). Defenders of
animal welfare should support investment in agricultural AI,
say the optimists.

Are they right? Some of these benefits are probably overhyped. Claims that a new era of personalised AI care for
individual animals is just around the corner should certainly
be viewed with scepticism. On broiler farms, which raise chickens for meat, the birds are slaughtered by around six weeks of age,
whereas turkeys and pigs are usually killed by the age of five
or six months. It is hard to imagine individualised AI-assisted
care taking off in industries in which the individuals are so
quickly replaced, and even harder to envisage this in fish
farming. AI products in these industries will monitor large
groups, tracking averages. In the dairy and beef industries, in
which animals are raised or kept for several years, providing
tailored care to individuals may be more plausible.

The optimists’ claim that animal welfare goals and business goals are in alignment looks incredibly dubious

More fundamentally, it’s crucial to look not only at the
immediate, short-term selling points of AI in animal
agriculture. We also need to think about the foreseeable
long-term consequences. Farming is all about trade-offs:
farmers care about animal welfare, but they also need to
maintain a viable business in a competitive market, leading to
compromises. Intensive farming, called ‘factory farming’ by
critics, already involves compromises that are a widespread
source of ethical concern, and we need to think about the
potential of AI to exacerbate many existing problems.

We should think, in particular, about the kinds of farming AI can integrate with best. What sort of system will AI most help
to make more profitable? In the case of broiler chickens,
evidence suggests that cage-based systems are worse for
welfare than large indoor barns, which are in turn worse than
free-range systems. Yet cage-based systems are likely to
benefit most from automated welfare monitoring. Currently,
sick, injured and dead broilers usually have to be identified by
manual inspection, a constraint considered ‘time-consuming
and laborious’ within the industry. In a ‘stacked-cage’ system,
where four tiers of cages are stacked on top of each other,
these inspections can even be dangerous for workers, who
must climb to the top, all the while inhaling the
ammonia-rich, foul-smelling atmosphere. It’s no surprise to
see that manufacturers of stacked-cage systems are already
advertising the benefits of shifting to ‘high-tech poultry cages’
equipped with monitoring and control systems for feeding,
watering and (for laying hens) egg collection. AI can collect
data in real time, analyse it, detect health issues, and make
predictions about the flock’s overall ‘productivity’.

Once you see this, it becomes harder to be optimistic about
the alleged welfare benefits of AI. The optimists’ claim that
animal welfare goals and business goals are in alignment (so
that systems primarily designed to boost efficiency will, at the
same time, drive up welfare) starts to look incredibly dubious.
Yes, the welfare of individual animals within cage-based
systems might improve, relative to the horrendous status quo,
if their health is monitored by AI. But these inherently
low-welfare systems may take over a larger and larger share
of the market, as AI turbocharges their economic efficiency in
multiple ways: reducing unwanted mortality, controlling
disease outbreaks, and enabling corporations to hire fewer
employees and give them less training. The result would
surely be a decline in the welfare of the average farmed
animal. The scope for a global race to the bottom on welfare,
as the competitive advantage of the lowest-welfare systems
becomes ever greater, is easy to see.

Might the risk be mitigated by tough animal welfare laws? That is more plausible in some countries than others. In the
European Union, there are legal limits on stocking densities
(the number or weight of animals per unit of space), and
much talk about the idea of banning cage-based systems, yet
progress seems to have stalled recently in the face of
aggressive industry lobbying. In other countries, the
development of AI could allow corporations to leave the
conditions in which animals are raised largely unaddressed.
In the United States, for instance, there is no federal law
limiting stocking densities, even though figures from 2017
show that 99 per cent of farmed animals are kept in industrial
farms. Similarly, Canada has no federal regulations directly
mandating the humane treatment of farmed animals,
although the federal government and provinces have broader
animal cruelty laws. And China, a major driver of the surging
interest in AI-assisted farming, has some of the world’s
weakest animal welfare laws.

Our focus, so far, has been on the risks that AI-assisted farming poses to farmed animals. This was a deliberate
choice: we think the interests of the animals themselves often
get forgotten in these discussions, when they should be at the
centre. But we should not forget the interests of farmers. In
the age of AI, we can expect farmers to have less and less
autonomy over their own farms. AI will maintain crucial
parameters, like temperature or humidity, within certain
ranges, but who will control these ranges? If the goals and
parameters are set remotely by company bosses, there is a
risk of eroding the dignity of the farming profession, turning
humans into mere instruments of corporations.

At the same time as driving up stocking densities, we can expect AI to lead, as in other industries, to fewer and fewer
jobs for human workers. Moreover, the nature of these jobs is
likely to change for the worse. One of the deepest threats
posed by AI is the way it may distort the relationship between
farmers and the animals in their care. AI technologies, in
effect, are sold as a way of outsourcing caring responsibilities
traditionally fulfilled by humans. But can a duty of care be
outsourced to a machine?

Care is a relation between two sentient beings: a carer and a recipient. It is not a relation between an animal and a
machine: this is, at best, a simulacrum of care. The animals in
our care are vulnerable: they rely on us for food, water and
shelter. To truly care for them, we might need to cultivate
empathy for them. To do this, we need to interact with them
as individuals, come to know their individual capacities and
personalities, gain some insight into their emotional lives,
and become sensitive to their welfare needs. Now, even
traditional, pastoral farming often fails to live up to this
idyllic image, and modern intensive farming has already
moved a long way from that. But, by introducing yet more
distance between farmers and their animals, AI threatens to
make genuine care even more difficult to achieve.

AI opens up new ways for people to use animals as mere means for financial ends

A critic may fire back: this way of thinking about care is ethically dubious. Caring relationships, they might argue, are
valuable only because of the good consequences they bring
about. In other words, they are instrumentally valuable. For
example, feeling empathy for farmed animals may allow
farmers to be more attentive to their suffering and act more
quickly to alleviate their pain. This could be good for both the
animals, who would feel less pain, and the farmers, who
would feel a greater sense of dignity and pride in their work.
But if AI monitoring can generate the same consequences
without direct caring relationships between farmers and their
animals, says the critic, we should not worry about the loss of
those relationships. This debate hinges on some of the
deepest disagreements in animal ethics: utilitarians are likely
to side with our imagined critics, whereas those of us
sympathetic to care ethics will tend to see caring relationships
as valuable in themselves, even if the same consequences
could be produced another way.

We don’t think using AI to take care of animals is problematic
in all possible circumstances. Imagine a high-tech animal
sanctuary, with no goal other than to care for animals as well
as possible. In this imaginary sanctuary of the future, AI is
only ever used to facilitate caring relationships between
people and other animals, never to replace them. Residents
roam free but are tagged with collars. The collars track their
location and allow individual recognition and care.
Meanwhile, AI analyses livestreams from CCTV cameras,
monitoring for signs of bullying, aggression and poor health,
all while optimising the animals’ food and water intake, and
administering individualised doses of medication where
needed. Welfare is always the priority – there is never any
need to compromise with economic goals. Would it still be
wrong to use AI to monitor for emerging welfare risks?

On the whole, we think not. Some interventions, such as rectal sensors, might still be too extreme. Proponents of
animal rights might argue that such sensors fail to respect the
animals’ right to bodily integrity. But purely external
monitoring seems less problematic. Admittedly, concerns
about privacy may remain. Think here of a ‘human sanctuary’,
where humans must put up with monitoring of their every
movement: this would lead to some privacy concerns. Yet it is
not obvious that nonhuman animals have an interest in
privacy. It may be that how they appear in the eyes of human
observers – or AI – is of no concern to them, and it’s not clear
why their flourishing would depend on not being watched.

This thought experiment suggests that the ethical problems in this area are not intrinsic to AI. The problem is rather that AI
opens up new ways for people to use animals as mere means
for financial ends, failing to respect their interests or inherent
value, and the duties we have towards them. AI risks locking
in and exacerbating a tendency to see farmed animals
instrumentally – as units to be processed – rather than as
sentient beings with lives of their own, in need of our care.
This is the likely result when AI is put to work in service of
greater economic efficiency, unchecked by ethical constraints.
But AI doesn’t have to be used that way.

Let’s return to the present moment. How should governments regulate the use of AI in farming right now? One option is to
ban it completely, largely pre-emptively. But while this may
sound appealing, it would lead to serious, probably
insurmountable difficulties on the ground. It would require
legal definitions of what counts as ‘AI-assisted farming’, as
opposed to just assistance by regular computers, which will
gradually come to have more and more AI products installed
on them. It’s hard to imagine a realistically enforceable ban
that targets only the products with possible farming
applications, leaving everything else intact. The AI genie is
out of the bottle.

A more realistic way forward is to come up with a code of practice for this emerging industry – a set of ethical principles
tailored to farming applications. Here, too, there are pitfalls.
Recent years have seen many attempts to draw up ethical
principles for the AI sector as a whole. Yet principles aiming
to cover all uses of AI are extremely high level and vague,
allowing the industry to claim it takes ethics seriously while,
by and large, continuing to act as it wishes. For example, an
EU working group proposed in 2019 that AI systems ‘should
take into account the environment, including other living
beings’, but this is so broad it implies no meaningful limits at
all on the use of AI in farming. A review of 22 sets of AI ethics
guidelines concluded – brutally – that AI ethics, so far,
‘mainly serves as a marketing strategy’.

We need to do better. We need Goldilocks principles for AI in farming: detailed enough to provide real ethical constraints
and to steer the sector in the right direction, yet still general
enough to cover a wide range of future applications of the
technology. The goal should be a set of principles strong
enough to ensure AI is used in a way that improves rather
than erodes animal welfare standards. We don’t claim to have
all the answers, but we do want to make four proposals to get
the discussion started.

Principle 1: Advances due to AI must not be used as a reason to increase maximum stocking densities, and must not be allowed to drive a shift towards greater use of cage-based intensive systems.

As we noted earlier, AI assistance is already helping companies using cage-based methods to increase their
efficiency and reduce their reliance on human labour. This
raises the spectre of these inherently low-welfare methods
forming an ever-larger share of the global market.
Responsible development of AI in farming must take a clear
stand in opposition to this grim prospect.

We must be able to hold companies to account if they fail to act on welfare problems detected by their own systems

Principle 2: When AI systems monitor welfare problems, data about how many problems are being detected, what the problems are, and what is being done about them must be made freely available.

‘Transparency’ is a major theme of AI ethics guidelines, but it can mean many things, some more helpful than others.
Merely stating on a label that AI has been involved in the
production process says very little. Meaningful transparency
– the kind we advocate – is achieved when the public can
access key facts about the welfare problems AI is actually
detecting and how they are being dealt with.

Principle 3: Companies should be held to account for welfare problems that are detected by their AI systems but not investigated or treated. Companies must not be allowed to dial down the sensitivity of welfare risk sensors to reduce false alarms.

Some welfare problems are more costly, in economic terms, than others. An avian flu outbreak could be extremely costly,
whereas lameness in a single chicken will cost little. Part of
the economic potential of AI detection systems is that they
can take this into account. They allow the user a degree of
control over their performance parameters, especially their
sensitivity (how many true cases they detect versus how many they miss) and their specificity (how well they avoid false alarms). Accordingly, they will allow companies to be
hypervigilant about the costliest risks while remaining more
relaxed about less costly problems.
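To see how these dials interact, here is a minimal sketch in Python – ours, with entirely made-up numbers, not taken from any vendor's system. A hypothetical monitoring model assigns each animal a 'welfare risk score', and the operator chooses the threshold above which an alert is raised. Raising the threshold cuts false alarms but also misses more genuine cases, which is exactly the trade-off Principle 3 is meant to police.

# Minimal illustrative sketch (not from the essay): how a detection threshold
# trades sensitivity against specificity. All numbers are invented.
import random

random.seed(0)

# Hypothetical "welfare risk scores": affected animals tend to score higher
# than healthy ones, with some overlap between the two groups.
healthy_scores = [random.gauss(0.3, 0.15) for _ in range(1000)]
affected_scores = [random.gauss(0.6, 0.15) for _ in range(100)]

def sensitivity_specificity(threshold):
    """An animal is flagged when its score exceeds the threshold."""
    true_pos = sum(s > threshold for s in affected_scores)
    true_neg = sum(s <= threshold for s in healthy_scores)
    sens = true_pos / len(affected_scores)  # share of real cases flagged
    spec = true_neg / len(healthy_scores)   # share of healthy animals not flagged
    return sens, spec

for threshold in (0.4, 0.5, 0.6, 0.7):
    sens, spec = sensitivity_specificity(threshold)
    false_alarms = (1 - spec) * len(healthy_scores)
    print(f"threshold={threshold:.1f}  sensitivity={sens:.2f}  "
          f"specificity={spec:.2f}  expected false alarms≈{false_alarms:.0f}")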

Is this a good thing? Not in the absence of meaningful transparency and accountability. Suppose a company finds it
unprofitable to treat some health problem, such as keel-bone
fractures in hens. They tell regulators and the public,
correctly, that they have a state-of-the-art AI system
monitoring for that problem. What they don’t say is that, to
cut costs, they have dialled the system’s sensitivity right down
and only ever treat the most severe cases. We need to be able
to hold companies to account if they fail to act on welfare
problems detected by their own systems, and they need to be
prevented from dialling down the sensitivity of their
detectors.

Our last proposal is intended to protect the dignity and autonomy of farmers:

Principle 4: AI technologies should not be used to take autonomy and decision-making power away from frontline farmers. Decisions currently under the farmer’s control should remain under their control.

Imagine, then, a world where Principles 1-4 are adopted and enforced. Would it be an ideal world? Of course not: many
problems would remain. In an ideal world, we would relate to
other animals very differently, and might not raise them for
food at all. But in our far-from-ideal real world, our proposals
would at least make it a whole lot harder to use AI to drive
down welfare standards. With these principles in place, AI
might even be a friend of animal welfare – raising public
awareness of welfare issues rather than hiding them in black
boxes, and increasing the accountability of farming
companies for the welfare problems they create.

E2 (The cell is not a factory)

Scientific narratives project social hierarchies onto nature. That’s why we need better metaphors to describe cellular life

When you think about it, it is amazing that something as tiny as a living cell is capable of behaviour so complex. Consider
the single-cell creature, the amoeba. It can sense its
environment, move around, obtain its food, maintain its
structure, and multiply. How does a cell know how to do all of
this? Biology textbooks will tell you that each eukaryotic cell, the kind of cell that makes up organisms from humans to amoebae, contains a control centre within a structure called
the nucleus. Genes present in the nucleus hold the
‘information’ necessary for the cell to function. And the
nucleus, in turn, resides in a jelly-like fluid called the
cytoplasm. Cytoplasm contains the cellular organelles, the
‘little organs’ in the cell; and these organelles, the narrative
goes, carry out specific tasks based on instructions provided
by the genes.

In short, the textbooks paint a picture of a cellular ‘assembly line’ where genes issue instructions for the manufacture of
proteins that do the work of the body from day to day. This
textbook description of the cell matches, almost word for
word, a social institution. The picture of the cytoplasm and its
organelles performing the work of ‘manufacturing’,
‘packaging’ and ‘shipping’ molecules according to
‘instructions’ from the genes eerily evokes the social hierarchy
of executives ordering the manual labour of toiling masses.
The only problem is that the cell is not a ‘factory’. It does not
have a ‘control centre’. As the feminist scholar Emily Martin
observes, the assumption of centralised control distorts our
understanding of the cell.

A wealth of research in biology suggests that ‘control’ and ‘information’ are not restricted to the ‘top’ but are present throughout the cell. The cellular organelles do not just form a
linear ‘assembly line’ but interact with each other in complex
ways. Nor is the cell as obsessed with the economically significant work of ‘manufacturing’ as the metaphor of a ‘factory’ would have us believe. Instead, much of the work
that the cell does can be thought of as maintaining itself and
taking ‘care’ of other cells.

Why, then, do the standard textbooks continue to portray the cell as a hierarchy? Why do they invoke a centralised
authority to explain how each cell functions? And why is the
imagery so industrially loaded?

Perhaps this view of the cell sounds ‘obvious’ and natural to us because it resonates with our stratified societies and their
centralised institutions. But the trouble with doubling down
on this kind of metaphor as a stand-in for science is that
assumptions about how a cell ought to function prevent us
from understanding how the cell really functions. What is
more, when science projects social hierarchies onto the cell, it
also reinforces the notion that social hierarchies are ‘natural’.

The projection of social hierarchies onto nature is often not deliberate. In the case of the cell, there is a long history of how this imagery emerged. One part of the story is that when biologists
began investigating chemical changes that happen in the cell,
they found the metaphor of a factory quite useful. The
19th-century German biologist Rudolf Virchow, for instance,
wrote that ‘starch is transformed into sugar in the plant and
animal just as it is in a factory’. As researchers investigated
the organelles, from the manufacture of protein in the
endoplasmic reticulum to the production of energy in the
mitochondria, his metaphor of ‘a factory’ guided how
scientists talked about these organelles.

Another part of the story involves a different field of biology, where scientists were trying to figure out how tiny cells give
rise to multicellular organisms like us. Some thought that the
sperm contained a homunculus, a tiny version of the body,
already fully formed. Others thought that the biological
mother provided all the material contribution to the embryo,
while the father lent only a ‘generative force’ to propel the egg
into development. Only when scientists could study the
process of fertilisation under the microscope could they see
that each parent contributes one cell to the next generation.
But the cells were not equal. The egg was huge compared with the sperm: in humans, almost 10 million times larger in
volume.

It seemed that the age-old mystery was solved, and the paternal contribution to the progeny was much less than the
maternal contribution. Unless, of course, what really
mattered was a tiny component present in both sperm and
egg. Microscopic observations in the late 19th century
revealed that when the sperm and the egg fused during
fertilisation, their nuclei fused as well. The nucleus of the
sperm and the egg were similar in size. Historians of science
like Hans-Jörg Rheinberger and Staffan Müller-Wille have
described how those early researchers began to think of the
nucleus that was created when egg and sperm merged as the
source of hereditary information. Biological research in the
20th century, consequently, focused much more on the
nucleus, noted the physicist and feminist scholar Evelyn Fox
Keller, giving short shrift to contributions from the rest of the
egg.

The glorification of the nucleus and its contents, of genes as ‘information’, still prevails in the scientific discourse. In line
with that, the metaphor of the cell as a ‘factory’ still
dominates today.

Science is often described as objective and value-free, but philosophers of science have pointed out that values can
guide the questions that scientists ask, the hypotheses they
make, and the way they interpret their results. The field of
feminist science studies, in particular, has called into
question the sole role of the nucleus where heredity is
concerned.

The nucleus, of course, does make some hereditary contribution, and we understand it in great detail. But the
nucleus is only a tiny subset of the hereditary material. If we
don’t even search for hereditary information in the egg cell –
if we never describe that information as hereditary – we will
keep propagating the idea that biological inheritance is
restricted to the nucleus alone.

In parallel with feminist scholars, challenges to the old way of
thinking have been mounting over the years. We now know
that several other kinds of hereditary information are spread
all over the cell. For instance, developmental biologists, who
study how an embryo develops from a single cell, have shown
that the spatial arrangement of various molecules in the
cytoplasm of the egg cell helps to determine where the head
and the tail of the growing organism will be, how the front
side will develop differently from the back side, and so on.
The cytoplasm of the egg doesn’t just ‘nourish’ the nucleus
but contains coded information passed down from
generations before.

These days, philosophers of biology like Marcello Barbieri are trying to understand what the word ‘information’ even means
in the context of the cell. In biology, the genetic code is the
only code we seem to hear about, but is that actually fair – or
is it a bias emerging from the hierarchical societies that
scientists are part of?

In his book The Organic Codes (2009), Barbieri writes about the assumptions that preceded the ‘discovery’ of the genetic
code in the nucleus as the pinnacle of it all. The idea of
information encoded in genes directing the construction of
proteins came first. And it was only following this prediction
that the code was experimentally identified in DNA and conceptualised as the ‘genetic code’.

The nuanced interaction between cellular organelles is a direct challenge to the top-down order of a factory

Barbieri calls this discovery a self-fulfilling prophecy. Since
scientists never made similar assumptions about ‘codes’ in
the cell’s cytoplasm, they weren’t as keen to look for them. We
are told that the genes contain blueprints to make proteins.
However, genes do not contain all the information needed to
make proteins. They only specify a one-dimensional protein
chain; the three-dimensional structure that the proteins take,
which is vital for their function, is determined by the cellular
environment as well. Further, the way proteins behave also
varies with where they are in the cytoplasm. The genetic
‘information’, on its own, is nowhere near enough for the cell
to function.

More insights about information in the cytoplasm come from biologists who study how the cellular organelles interact with
each other. We now know that the linear ‘assembly line’ that
textbooks construct does not remotely capture the many
functions of organelles in the cytoplasm or the many different
ways in which they ‘talk’ to each other and influence each
other’s behaviour. The nuanced interaction between cellular
organelles, in fact, stands as a direct challenge to the coercive,
top-down notion of order that a centralised factory suggests.
The ‘departments’ in the ‘factory’ seem to be communicating
with each other and giving each other orders without keeping
the ‘head office’ in the loop.

All of this coded information in the cytoplasm leads us to ask: why do modern textbooks, which are supposed to present the
standard, well-accepted knowledge of the day, continue to
portray the cell as hierarchical in structure? Why do science
journalists continue to refer to the codes and programs of
genes in the nucleus when discussing how life develops and
evolves?

I believe that the hold of the centralised view comes from how
it resonates with the human social order. The nucleus
providing instructions and the cytoplasm performing the
labour of ‘nurturing’ sounds ‘natural’ and even ‘obvious’ in a
patriarchal society. The central nucleus ordering its
‘underling’ cytoplasm to actually carry out tasks sounds
obvious in a class-stratified society.

Would scientists coming from different social situations come up with a different view of the cell?

Possibly. Think about how the biologist E E Just viewed the cell. Just worked on the peripheral cytoplasm of the early egg
cell. In his book The Biology of the Cell Surface (1939), he
held that cytoplasm was capable of ‘self-regulation and
self-differentiation’, and lamented the prominent view of
development that relegated the cytoplasm to a mere
nurturing shell. Just was also a Black scientist living in the
early 20th-century United States.

The developmental biologist Scott Gilbert has analysed Just’s science in the context of his social position. The standard view
of development holds that the instructions for development
are located in the central genes. Contrast this with Just’s view
that the cytoplasm had ‘potential’ for ‘development’ and that
the function of the nucleus was to add or remove ‘obstacles’
from its path.

Just’s cytoplasm is able to function without explicit
instructions from the nucleus. It can govern itself and develop
if only the government would remove the ‘obstacles’ in its
path. Historically, the majority of scientists have been male,
upper class, and belonging to the dominant castes and races.
It is possible that the social position of scientists helped them
relate to the notion of a nucleus that continues discharging
instructions while taking for granted the knowledge and skills
required in actually doing the work. The Nobel laureate David
Baltimore described genes as the ‘executive suite’ and the
cytoplasm as the ‘factory floor’. The executive suite appears
more valuable and deserving of more remuneration, while the
toiling masses on the factory floor are thought to be merely
executing the instructions, a view that undervalues the wealth of explicit and tacit knowledge and skill their work demands.

You might argue that ‘the cell as a factory’ is only a metaphor. You could say that scientific metaphors should be judged
based on how useful they are, and no metaphor is perfect. The
‘cell as a factory’ metaphor has undoubtedly been useful in
guiding the trajectory of cell biology. I completely agree with
all of this. What I wish to point out is the lack of other
metaphors. Precisely because no metaphor is perfect, we
should employ multiple metaphors, each explaining certain
aspects of the cell. Unfortunately, the centralised and
hierarchical metaphor, so pervasive in textbooks, is often the
only one for the internal workings of the cell.

One alternative metaphor for the cell nucleus, I tentatively suggest, could be a ‘collaborative notebook’. The cell keeps
this notebook, and all the cell’s components use it to keep
track of their activities and help maintain the cell. The cell
‘writes’ in the notebook, writes in the ‘margins’ and ‘refers’ to
its own notes. Cellular organelles sense each other’s needs
and take ‘care’ of each other. While the ‘factory’ metaphor
attributes control and information to the nucleus, the
‘nucleus as a collaborative notebook’ shows agency on the
part of the cell. While the factory metaphor makes the cell
seem obsessed with ‘production’, alternative metaphors can
highlight the mutual aid among the cellular components and
the labour of maintaining the cell.

Why do we find a lack of such metaphors in scientific discourse? Why does it seem like too much
anthropomorphism to talk about organelles taking ‘care’ of
each other but not when we talk about genes ‘instructing’
their underlings? Could this selective anthropomorphism
reinforce the ideology of centralised control through the
accepted scientific metaphors? If that is so, we will fail to
capture how the cell works until we check our assumptions. If
we want to comprehend the unruly structure that is the cell,
we need to change the lenses through which we view the
world.

Beyond how the cell works, this discussion has wider implications for science. The cell is not the only natural
system described using centralised metaphors. We talk of
insect societies having ‘queens’ and what is literally called a
‘caste’ structure. We have ‘alpha’ primates who ‘lead’ the
group and keep ‘harems’.

When values interfere with science, the quest for truth and accuracy is put at risk

The reason we find centralised functioning everywhere is not
necessarily because it is everywhere. It just appears to be
everywhere because of the lens through which we view the
world. When scientific narratives, using all the authority of
science, project the social hierarchy onto nature, they can
reinforce the same hierarchy as ‘natural’. The centralised
model from cells to animal social groups suggests that
everything in nature is centralised, and that centralisation
works. The ‘truth’ about nature is influenced by our values,
and this ‘truth’ can then play a role in doubling down and
reinforcing the same social values in the world.

Why should it matter, you might ask. After all, regardless of how nature is, what is considered moral in human society
should be distinct. Violence is present in nature, but that does
not make it ‘right’.

Nevertheless, the science historian Lorraine Daston in Against Nature (2019) has shown that arguments about what
is natural have always carried moral weight. What is natural
may not determine what is moral, but it can influence it.
Another important aspect, of course, is the concern about an
accurate depiction of nature. If the projection of social
inequalities onto the cell distorts our understanding of the
cell, we should try to be mindful of this projection – because
understanding the cell is vital to progress in the life sciences
and to human health.

How science conceptualises the cell also gives us insight into how we think of scientific objectivity. We often think that,
when values interfere with science, the quest for truth and
accuracy is put at risk. Scientists are supposed to leave their
values and beliefs outside their labs. However, research in
feminist science studies suggests otherwise. One does not
necessarily need to be free of values to do good science, but
denying their influence undermines the quality of scientific
work. Instead of denial, reflecting on values and biases would
help researchers steer clear of the pitfalls. Self-reflection can
help scientists identify how their values are shaping their
science, and think of better experimental designs that could
‘catch’ their assumptions before they compromise results.

Science is undoubtedly a human endeavour. The feminist philosopher Donna Haraway describes science as a
conversation between partial perspectives that each
individual gets from the vantage point of their position. As
Just’s science shows, people with different life experiences
might have different perspectives and may ask different
questions. Admittedly, the connections between scientists’
backgrounds and their work are not always so direct. But the
social position of scientists can still serve as one of the factors
that influence their work. We often say science is
self-correcting. We think that science changes its views when
new information comes to light. But this new information
doesn’t emerge from a vacuum. It doesn’t emerge only from
new techniques. It is also generated when people with
different perspectives take a look at the same data through
different lenses. While diversity and representation are important in their own right from the perspective of equity, diverse perspectives also benefit science itself.
Objectivity is not an individual burden but a collective one.

If we are unable to conceive of the cell, the basic unit of organisms like ours, without coercive hierarchies, we will
never fully appreciate the complexity of nature. If we fail to
imagine society without a centralised authority, we will find it
difficult to understand or empower the oppressed. Unless we
reflect on our assumptions, our science will be loaded with so
many landmines it may never unravel all the mysteries of life.

E3 (A reader’s guide to microdosing)

How to use small doses of psychedelics to lift your mood, enhance your focus, and fire your creativity

Psychedelics are resurging in the 21st century. The movement is frequently described as a psychedelic renaissance; Michael
Pollan, author, journalist and psychedelics advocate, writes
that ‘There has never been a more exciting – or bewildering –
time in the world of psychedelics.’ Spanning numerous
domains and branches of modern society – including
medicine, psychotherapy, pharmaceutical drug development,
self-improvement and spiritual transformation – individuals
and communities worldwide are evolving with psychedelics as
the conduit.

The loudest declaration to date of the psychedelics comeback may have been the Psychedelic Science 2023 conference in
Denver, Colorado, hosted by the Multidisciplinary
Association for Psychedelic Studies (MAPS) in June, attended
by some 12,000 people. Conference-goers heard talks about
psychedelics from a diversity of perspectives, from Rick
Doblin, the founder of MAPS and a decades-long champion of
psychedelics, to Rick Perry, who introduced himself as ‘the
dark, knuckle-dragging, Right-wing, Republican former
governor of the state of Texas’.

Despite unrestrained hype and valid criticisms of the psychedelic-assisted psychotherapy model (MAPS had to
rectify fallout after revelations of sexual misconduct by
research therapists were publicised in 2021 and 2022),
psychedelics are increasingly accepted and endorsed among
both the populace and institutions. In 2021, 8 per cent of
young adults in the US reported using hallucinogens in the
past year, compared with just 3 per cent in 2011. Federal
governments and industry, including venture capital firms,
are investing billions to research and develop psychedelic
treatments and pharmaceuticals. A US National Institutes of
Health grant is currently supporting research into psilocybin
treatments, for instance, while other governments have
opened up access to the drugs. In 2023, the Australian
Therapeutic Goods Administration gave psychiatrists approval to
prescribe MDMA and psilocybin for post-traumatic stress
disorder and depression, and a Special Access Program in
Canada, though still mired in red tape, similarly opens a pathway to treat serious medical
conditions with psychedelics.

Meanwhile, another vibrant subculture based on microdosing – consuming small amounts of psychedelic substances
semi-regularly over a period of time – has emerged in the
psychedelic landscape. The practice has been tantalising but
fraught. Indeed, while studies supported by government and
industry focus mainly on generating clinical trial evidence for
standard doses to treat mental health problems, most
research into microdosing falls on the lower end of the
scientific rigour pyramid, often exploratory and observational
in nature. In that sense, rather ironically, it resembles the
experience of psychonauts taking full and heroic doses to seek
enlightenment.

Of course, microdosers are more interested in enhancing cognitive and intellectual performance or filling gaps in a
hobbled mental healthcare infrastructure than plumbing the
depths of their psyches. Research surveys of adults who
microdose regularly find that the most common motivations
for microdosing relate to strengthening or mending the
psyche – improving mood, decreasing anxiety, or boosting
focus and creativity. Some have compared its impact with
stimulants for ADHD or SSRIs for depression. Even US
Marines are looking to microdosing to increase productivity,
creativity, problem-solving and flow.

Are these positive cognitive and psychological benefits real? From a strictly evidence-based perspective, the science is
young and undetermined. A sizeable slice of the microdosing
scientific literature debates whether the impact comes from
the placebo effect, in which expectation influences outcome.
But placebo effect or not, qualitative accounts have been so
positive and persuasive that scientists feel compelled to
understand the practice further.

Microdosing in the psychedelic world has a meaning unto itself – and is a bit different from how the term is used in pharmacology overall. A 2019 commentary in the Journal of
Psychopharmacology summarised guidelines from a handful
of drug and medicine regulatory agencies, and reported that a
microdose is ‘1 per cent of the pharmacologically active dose
[of any drug], up to a maximum of 100 µg [micrograms]’. A
microdose is a tiny dose of a pharmacological agent that is
likely not pharmacologically active, but is useful for studying
safety, side-effects and other properties of the substance.

The informal shared definition in psychedelic microdosing communities, on the other hand, is 5-10 per cent of a
standard or ‘trippy’ dose of a psychedelic. Given that many
consider 2-5 grams of magic mushrooms or 100-200
micrograms of lysergic acid diethylamide (LSD) the dosing
amounts to consume for a proper psychedelic trip,
microdosing of these substances comes out to be about
0.2-0.5 grams of magic mushrooms and 5-20 micrograms of
LSD. (Note that most controlled research studies use
psilocybin, the psychoactive molecule in magic mushrooms.
Therefore, a standard dose of 25 milligrams of psilocybin
equates to about 1-2 milligrams per microdose. However,
pure psilocybin is not typically accessible to the general
public. Pretty much the only legal way to obtain psilocybin in
the US is in highly regulated clinical research studies.
Therefore, it is more practical to consider microdosing
psilocybin in the context of magic mushrooms. And since the
psilocybin potency of magic mushrooms can be highly
variable, these dose calculations are no more than broad
estimates.)
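To make the arithmetic concrete, here is a small Python sketch – ours, not part of the original guide – that applies the community's 5-10 per cent rule to the standard doses quoted above. The ranges given in the text are rounded, so the edges differ slightly from the raw calculation.

# A sketch of the arithmetic behind the quoted microdose ranges (illustrative only).
STANDARD_DOSES = {
    "magic mushrooms (g)": (2, 5),      # 2-5 grams dried, per the text above
    "LSD (µg)": (100, 200),             # 100-200 micrograms
    "pure psilocybin (mg)": (25, 25),   # 25 mg standard dose used in most trials
}

LOW, HIGH = 0.05, 0.10  # a microdose is roughly 5-10 per cent of a standard dose

for substance, (lo_dose, hi_dose) in STANDARD_DOSES.items():
    micro_lo = lo_dose * LOW    # 5% of the low end of the standard dose
    micro_hi = hi_dose * HIGH   # 10% of the high end of the standard dose
    print(f"{substance}: microdose ≈ {micro_lo:g}-{micro_hi:g}")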

Many believe microdosing can yield benefits equal to a full-dose psychedelic trip

Given all this, the commonly accepted phrase ‘microdosing’
when it comes to psychedelics is a bit of a misnomer; the
actual amounts consumed are more analogous to a ‘low dose’
or ‘very low dose’ of the psychedelic itself. Though these
semantic details may seem tedious, it is important to
understand that microdosing is not simply a product of
subcultures in psychedelia; it is a method of studying and
understanding the effects of pharmacologically active
substances overall.

It is also important to understand that much of what is discussed among the general public and in online
communities about microdosing is implicit and inferred, with
standardised definitions, crucial for moving the enterprise
forward, still to come. Semantics aside, microdosing
psychedelics means merely consuming very small amounts of
the substance, usually 5-10 per cent of the standard
hallucinatory dose.

There’s another way to look at microdosing psychedelics that may be a bit more helpful. Microdosing refers to consuming
very small amounts of psychedelics so that psychoactive
effects are sub-perceptual. The aim of microdosing is to
obtain whatever psychological and neurological benefits the
psychedelic may bestow, but at a level that is perceptually
unnoticeable to the consumer. This flies in the face of what is
widely understood to be why humans use psychedelics – to
hallucinate, undergo a deeply psychologically transformative
experience, and expand consciousness. Psychonauts are
wholly uninterested in anything sub-perceptual; rather, they
seek to completely alter how they perceive the physical,
mental and spiritual realms. Nonetheless, despite limited
controlled research evidence, many believe that microdosing
can yield benefits equal to what one may gain from a full-dose
psychedelic trip.

This is especially true if the microdosing occurs on a regular schedule over a period of several weeks to several months.
Somewhat like taking a traditional antidepressant,
microdosing psychedelics requires adhering to a
predetermined dosing regimen, and remaining persistent
with it even if benefits are not immediately noticeable.
Microdosing psychedelics, however, does not occur daily but
once every couple of days, or perhaps once a week or even
every couple of weeks.

The reported effects of microdosing vary just as much as the reported effects of traditional consciousness-bending doses of
psychedelics. Let’s consider some scientifically derived
conclusions before delving into some of the more glowing
anecdotal reports.

A comprehensive systematic review of microdosing, published in 2022 in the journal Neuroscience and
Biobehavioral Reviews, may be the best peer-reviewed
empirical summary of the effects. The most striking finding
from this review is that numerous studies have shown that
psychedelic microdoses frequently cause mild noticeable
perceptual effects, including perceptual distortions, altered
mental states, somaesthesia, feelings of bliss, increased
vigilance, and experiences of unity. These perceptual effects
of course pale in comparison with a full-blown psychedelic
trip but diverge from the assumption that microdoses are
sub-perceptual. It may be that the customary 5-10 per cent of
a standard dose is far more than what is necessary for an
effective microdose, or, as the authors highlight, that mild
alterations in perception may indeed be a ‘prerequisite for the
beneficial effects of microdosing’. This is a quandary of
microdosing that will benefit from more experimental
research.

Other key findings of the review are summarised as follows:

- Microdosing is regularly associated with improvements in depression in uncontrolled observational studies, though a handful of controlled lab studies have shown no changes in depression after microdosing.
- The relationship between microdosing and anxiety is unclear – more studies have shown mixed or negative results on measures of anxiety than studies showing positive results.
- Several studies have shown improvements in areas such as wellbeing, self-fulfilment, self-efficacy, wisdom and physical health, though none of these studies were controlled laboratory experiments.
- Several studies have shown increases in creativity following ingestion of psychedelic microdoses. Other studies have shown mixed positive and negative effects of microdosing on other cognitive functions, including psychomotor vigilance, attention, mindfulness, ability to focus, and cognitive control.

Another systematic review of microdosing published in January 2024 looked at the effects on mental health. The
authors found favourable correlations between microdosing
and several variables of psychological health including
emotionality, mood and mental wellbeing. Several studies in
this review also found adverse associations such as increased
anxiety and substance use. The article’s Table 4 adroitly
depicts these mercurial findings across all studies reviewed.
The more important finding from this study is probably the
overall quality assessment of the research (see Table 3); each
study received an average quality score of 1 or less on a
quality assessment scale ranging from 0-2.

The sum of the research literature so far suggests microdosing may provide a useful boost in some cognitive functions, a moderate reprieve from depression (though possibly offset by an increase in anxiety), and a mild enhancement of general
wellbeing and daily functioning. And these benefits may just
be marginal, although it’s impossible to know for sure. The
studies done so far look at small sample sizes of healthy,
demographically homogeneous research participants, making
it difficult to generalise the results, especially for people
diagnosed with longstanding mental disorders.

Neither faction in the microdosing coterie can prove positive effects are more than placebo

Yet reports of microdosing in the media and online are often glowing and remarkable. In Silicon Valley, workers have
turned to microdosing to aid with intensive daily cognitive
tasks as a healthier alternative to ADD medications such as
Adderall, according to an article in Wired in 2016. In these
circles, which are driven by productivity hacks and technical
prowess, microdosing can alleviate a ‘bevy of disorders,
including depression, migraines and chronic-fatigue
syndrome, while increasing outside-the-box thinking’,
according to a report in Rolling Stone in 2015. (If that is true,
we all should be microdosing!) Some claim that microdosing
has helped them become better parents, and mothers are
turning to microdosing to counter post-partum depression.
The toolkit for treating trauma-related cognitive impairment
suffered by combat veterans is increasingly incorporating
psychedelics, and microdosing may become a component. A
qualitative analysis of YouTube self-reports of microdosing
experiences includes claims of solving Rubik’s Cubes faster
and performing 20 per cent better in sales and marketing
tasks. The author Ayelet Waldman states in the subtitle of one
of her books that microdosing made a ‘mega difference’ in her
mood, marriage and life. In a conference talk in 2018,
Waldman said that, upon microdosing for the first time, ‘in
that instant, I went from being suicidally depressed and
unable to experience joy, to looking out the window and
finding myself exhilarated by beauty.’

The benefits of microdosing tallied in the peer-reviewed literature are significantly tamer than the anecdotal
declarations, of course. Modern societies rely on science to
sift through facts and falsities, but some of the most profound
human experiences cannot (at least for now) be directly
measured with empirical tools: narratives and personal
accounts are a valuable source of understanding them.
Microdosing has become inextricably lodged between these
two paradigms, and we are still in the earliest of days.

Whether you side with the middling empirical findings or the exuberant anecdotes of believers, neither faction in the
microdosing coterie can prove its positive effects are anything
more than placebo. Scientists haven’t found an explanation
for the reported effects, suggesting a placebo response. And
anecdotal accounts must be qualified because many who
claim benefits from microdosing may actually be taking small
doses that produce some perceptual changes, rather than true
microdoses below the perceptual threshold.

Nonetheless, we live in a society where small advantages mean a lot – we maintain a ceaseless drive for minuscule
gains in work, hobbies and relationships. The cultural milieu
makes microdosing attractive, and the reality is that many
have microdosed, and many more will do so as acceptance of
psychedelics enters the mainstream. If you would like to try it
yourself, we suggest you proceed with an abundance of
caution, using this Aeonic guide.

Guidelines

To initiate a sensible microdosing programme, follow the steps below:

1. Complete a physical and mental health assessment

As with any psychotropic substance taken as medication, recreationally or otherwise, the benefits and risks to one’s
personal wellbeing should be evaluated and balanced
appropriately. Though consumed at very low, presumably
sub-perceptual doses, psychedelic microdoses are not without
potential harms. Some scientists are warning of potential
cardiovascular effects associated with repeated consumption
of the classic psychedelics, which affect neurotransmission
pathways that involve serotonin. Other research has shown
that microdosing LSD may elevate anxiety. Until larger, more
robust and rigorous experimental trials of microdosing are
completed, it is nearly impossible to determine how
microdosing may influence any individual person.

One’s experience during a psychedelic trip can be blissful and enlightening or terrifying and dark – the outcome is largely a function of the psychonaut’s a priori
physical and psychological characteristics. That said, anyone
considering microdosing should assess their physical and
mental wellbeing before entering a microdosing cycle. If
possible, completing a checkup with a primary care physician
can help you evaluate your physical health and uncover any
risk factors that psychedelics may exacerbate. Consulting with
a mental health professional can help you evaluate your
transient and persistent mental states and any cognitive,
emotional and behavioural risk factors susceptible to
psychedelic microdosing. If medical and mental health
services are not accessible to you, an online self-assessment
tool can be helpful.

They are amusing and pleasant for some but may be shocking and scary for others

Understandably, microdosers living in regions and territories where psychedelics are still illegal may choose to refrain from
disclosing drug use to physicians and therapists. Consulting
with clinical providers about specific healthcare questions
without disclosing microdosing intent or behaviour is a more
discreet tactic. There are two major attributes of microdosing
to consider when assessing for physiological and
psychological risk factors: hallucinogenic effects and
prolonged repeated exposure to psychedelics.

Hallucinogenic effects: psychedelics, especially high doses of the so-called ‘classic psychedelics’, can stimulate dramatic
alterations to perception, causing hallucinatory visions,
voices, and other sensory distortions. They are amusing and
pleasant for some but may be shocking and scary for others.
It’s all a part of the trip, some would say, and a necessary
feature of consciousness-expansion. However, the
hallucinatory effects of psychedelics can provoke a more
dangerous break from reality, such as psychosis or mania.
Consequently, clinical trials of psychedelic therapies
stringently screen out individuals with personal or family
histories of psychotic or manic-depressive disorders. The
adverse effect of even a single psychotic or manic episode
triggered by a psychedelic outweighs any potential gains.

Fortunately, a 2022 review article affirmed that ‘no psychotic episodes have been documented in modern clinical trials’,
suggesting that the risk for such events is low for those
without psychotic or manic psychiatric predispositions, and
when psychedelics are used in controlled, safe settings.

Prolonged and repeated exposure: the risks of mania and psychosis pertain mainly to single, high-dose consumption of
psychedelics. Studies and reviews of microdosing have not
associated it with the onset of severe mental impairment.
However, common sense suggests that repeated exposure to
hallucinogens, even in low doses, poses some risk to
psychological health. Indeed, in a 2020 systematic review,
upwards of 10 per cent of microdosers self-reported negative
mental health side-effects such as insomnia, anxiety and
worsening depression.

Potential cardiovascular side-effects of microdosing are a


growing concern in medical discourse. Though clinical
investigators are cognizant of cardiovascular effects of single,
high-dose psychedelics (eg, increased heart rate and blood
pressure), the associated risks of significant adverse events are
generally low.

Some, however, are not convinced the same can be said of
microdosing psychedelics. Given the neurochemical
mechanism whereby psychedelics mimic serotonin and
trigger various receptors in the brain, the effect of repeated
dosing is an important question.

Furthermore, because serotonin can constrict blood vessels
and increase blood pressure, the effects of repeated dosing on
the cardiovascular system require scrutiny and additional
research, at the very least. Over the past few years,
researchers have theorised that chronic microdosers may be
at risk of developing valvular heart disease (VHD). Since
psychedelics have a high affinity for 5-HT2B receptors, found
in the peripheral and central nervous system, the theory goes,
and since medications with similar neurotransmitter activity
have been linked to VHD, microdosing could be a culprit as
well. A 2023 article in the Journal of Psychopharmacology
analysed in vitro, animal, and clinical studies evaluating the
risk of psychedelic-induced VHD. Though the analysis did not
control for microdosing schedules, the authors concluded that
repeated exposure to full doses of MDMA could cause VHD
(they did not find such risk associated with four other
psychedelics, but this was mostly due to a lack of relevant
research). Though microdosing is, obviously, done at lower
doses and at non-daily schedules, caution is still warranted
until more clinical and laboratory evidence is generated.

To summarise, a physical and mental health assessment in
preparation for microdosing should, at the very least,
consider risk for psychiatric and cardiovascular adverse
events. As with most drugs, side-effects and problems are not
limited to just two arenas. Other, less dramatic side-effects,
from nausea to changes in body temperature to panic and
more, are summarised in the Journal of
Psychopharmacology; you should refer to the
aforementioned 2022 article for a more comprehensive
review of risk factors and adverse effects.

2. Select a substance for microdosing

The most commonly microdosed psychedelics are LSD and
magic mushrooms (psilocybin). A 2019 study that tracked the
practices and experiences of microdosers over six weeks
found that a little under half of participants microdosed
either LSD or psilocybin; the aforementioned 2020
systematic review of 17 quantitative and qualitative studies
reported that ‘drugs most frequently used in this research
were LSD and psilocybin’; the majority of placebo-controlled,
experimental studies of psychedelic microdoses have
administered either LSD or psilocybin to research
participants. Many other psychedelics are microdosed, albeit
much less commonly, including MDMA, mescaline, ketamine,
ibogaine, and dimethyltryptamine (DMT).

Prudence suggests that the safest and most effective options
for microdosers are LSD or magic mushrooms. The safety
profile and ubiquity of these substances imply that healthy
individuals will fare best when microdosing either of these, as
opposed to other psychedelic drugs. Furthermore,
microdosers who select LSD or magic mushrooms can review
dozens of research articles to explore how these substances
may affect them, and connect with online communities where
most members are likely microdosing these substances – the
aforementioned 2022 systematic review is a good place to
start. The more thoroughly documented effects of standard
doses of psychedelics in general are also informative – the
book Psyched (2022) by the journalist Amanda Siebert covers
seven different psychedelics in detail.

3. Determine how much to microdose

After choosing a psychedelic, you will want to determine how
much of it to microdose. If you aspire to a truly
sub-perceptual experience because you’re interested in seeing whether
microdosing can enhance your mental processes, then your
microdoses should be less than the 5-10 per cent standard,
perhaps 1-2 per cent. You can experiment and work your way
up from there (of course, people who are very sensitive to
psychedelics may feel substantial perceptual effects at 1-2 per
cent). If you want to mildly shift your mental states and
experiment with different levels of perception without going
on a full-blown trip, the 5-10 per cent standard is a good place
to start. At 15-20 per cent, chances are you will be flirting
with a light but very real psychedelic trip that stimulates an
elevation of consciousness. Previous experience with full
doses of psychedelics will be helpful for this step. Reflect on
how much of a psychedelic was necessary for you to have a
good trip, and go from there.
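
For those who like to see this arithmetic laid out, here is a minimal sketch in Python of the percentage bands described above. The 100-microgram figure for a standard dose of LSD comes from later in this guide; the 3-gram figure for dried mushrooms is only an illustrative assumption, not a recommendation.

# A minimal sketch of the percentage arithmetic described above.
standard_doses = {'LSD (micrograms)': 100, 'dried mushrooms (grams)': 3.0}
bands = {'sub-perceptual': (0.01, 0.02),
         'typical microdose': (0.05, 0.10),
         'light-trip territory': (0.15, 0.20)}

for substance, full_dose in standard_doses.items():
    print(substance)
    for label, (low, high) in bands.items():
        print(f'  {label}: {full_dose * low:g} to {full_dose * high:g}')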

4. Procuring your psychedelics

Regarding legal access to psychedelics, these substances are
entering the exceptionally grey regulatory area already
inhabited by cannabis. In the US, at the federal level, most
psychedelics are classified as Schedule I substances and
considered to have ‘no currently accepted medical use and
high potential for abuse’, meaning that possessing or
procuring psychedelics is a criminal offence. However, many
states and municipalities have moved to decriminalise some
psychedelics.

In 2020, Oregon became the first state in the US to
decriminalise possession of most drugs, including many
psychedelics, and is establishing a system for residents to
receive psilocybin services. At the same time, it’s worth noting
that microdosing was a contentious part of these reforms and
remains legal only under supervision, just like higher doses of
psychedelics. Colorado passed a similar measure in 2022 to
initiate a regulatory framework for residents to procure and
use psilocybin at licensed healing centres. Many other cities
and towns across the US have passed measures to
decriminalise psychedelics, most often those naturally
occurring such as mushrooms and cacti, and develop systems
to launch psychedelic-assisted mental health services. A
recent preprint underscores that none of these laws have
specifically addressed microdosing and, therefore, its legal
status may be uncertain in any particular jurisdiction.

Outright buying psychedelics for recreational purposes is still
prohibited in the US and much of the world. Even in Portugal,
where personal possession of all drugs has been
decriminalised since 2001, the manufacturing and selling of
drugs remains illegal. All of this is to say that, by and large,
procuring psychedelics is still an underground endeavour.

I do not endorse any specific underground method for
obtaining psychedelics for microdosing. Ideally, they should
be obtained legally and used in accordance with federal and
local laws. That said, the reality is that more and more people
are using psychedelics for microdosing or otherwise, and the
vast majority of consumers of psychedelics are likely
procuring them via underground methods. Readers should
use reasonable and sensible judgment in their attempts to
acquire psychedelics, and rely on their personal network and
known connections to source them rather than on social
media or shadowy delivery services. (If psychedelic
mushrooms are decriminalised in your city or state, you may
consider growing them yourself.)

5. Testing your psychedelics


Microdosers should opt to test their psychedelics before
consuming them as they would any other drug, at least any
substances sold in powder, pill or liquid form. (A 2020 survey
in the Journal of Psychopharmacology found that most
microdosers do not test the substances they use.)
Unfortunately, recreational drugs such as cocaine and MDMA
are regularly laced with adulterants such as fentanyl and
xylazine, and these laced drugs are contributing to an
intensifying overdose crisis in the US.

Fungi and plants (eg, cannabis and psychedelic mushrooms)
are not likely to be contaminated with fentanyl. However,
powder, pills and liquids can be laced with a potentially fatal
adulterant. Ensuring the safety of your drugs can be tricky
given the various levels of drug-testing measures. Fentanyl
test strips are relatively easy to access and use, and are a
quick-and-dirty way to detect fentanyl in street drugs, but
they cannot measure the potency of the fentanyl or confirm
that the drug you have acquired is actually what it was
advertised as.

Drug-specific testing is an option as well, but is more
expensive than fentanyl test strips, which are often freely
available in the US. Dancesafe.org sells testing kits to detect
specific drugs such as MDMA and LSD, though these kits
range from $20 to $120. DrugsData, an independent
drug-testing laboratory based in Sacramento, California, ‘tests
all psychoactive drugs including ecstasy tablets, powders,
research chemicals, novel psychoactive substances, and other
drugs through [their] DEA-licensed laboratory’. This
drug-testing option is likely the most comprehensive and
accurate available to recreational drug users, but there are
some noteworthy caveats. You will have to mail the drugs to
the lab, and for $100-$150 DrugsData will test the drugs and
post results publicly on their website 3-4 weeks later.

6. Storing your psychedelics

Storing your psychedelics properly is crucial, given that
these substances will be used over a period of several weeks
or months. Psychedelics are susceptible to degrading and
losing potency if stored in unfavourable conditions. The two
most commonly microdosed psychedelics – LSD and magic
mushrooms – are best stored in airtight containers and
placed in a cool, dry and dark environment. For long-term
storage, keep LSD in the fridge or freezer.

7. Design a microdosing protocol

The dosing schedule is crucial. The aforementioned 2020
survey in the Journal of Psychopharmacology reported
results from a study of the microdosing practices of 414
microdosers. The authors found that just over a third of
respondents used a one-day-on, two-days-off microdosing
protocol, and about one-fifth of respondents used a
one-day-on, three-days-off protocol. In other words, most
microdosers consume their preferred psychedelic one day and
refrain for 2-3 days before microdosing again. Some
respondents also reported using a once-a-week or
once-every-two-or-more-weeks microdosing schedule. A 2019
survey of more than 1,000 respondents found that almost half
of them designed their own microdosing protocol.
Many practitioners follow schedules invented by some key
thought leaders in psychedelia. The most popular is the
Fadiman protocol, initially proposed by the psychologist
James Fadiman in his book The Psychedelic Explorer’s
Guide: Safe, Therapeutic, and Sacred Journeys (2011), and
further specified in a 2019 article in the Journal of
Psychoactive Drugs. The Fadiman protocol is one day of
microdosing followed by two days off. In the 2019 article, the
authors stated that this protocol was informed by anecdotal
reports suggesting that the effects of microdosing last two
days. Another popular microdosing protocol is the so-called
‘Stamets stack’. The renowned mycologist Paul Stamets
advises combining microdoses of dried psychedelic
mushrooms with Lion’s Mane, another mushroom purported
to enhance cognitive function, and Vitamin B3 on a dosing
schedule of 4-5 days on and 2-3 days off. This cycle is
repeated for 4-6 weeks, followed by a break of 2-6 weeks.
These schedules are popular, but have not been tested in any
systematic way against others.
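
If it helps to see how such a cycle looks on a calendar, here is a minimal sketch in Python that lays out an on/off schedule. The one-day-on, two-days-off pattern is the Fadiman protocol described above; the start date and the four-week length are arbitrary choices made for illustration.

# Lay out a repeating 'dose'/'off' calendar for a microdosing cycle.
from datetime import date, timedelta

def schedule(start, weeks=4, days_on=1, days_off=2):
    # Yield (date, 'dose' or 'off') pairs for a repeating on/off cycle.
    cycle = ['dose'] * days_on + ['off'] * days_off
    for i in range(weeks * 7):
        yield start + timedelta(days=i), cycle[i % len(cycle)]

for day, status in schedule(date(2025, 1, 6)):
    print(day.isoformat(), status)

A Stamets-style pattern of four days on and three days off would be schedule(start, weeks=5, days_on=4, days_off=3), leaving aside the rest weeks he recommends between cycles.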

Microdosers should tailor their protocol to best fit their needs
and desired outcomes, though this can be difficult to gauge
the first time someone embarks on a microdosing cycle. Some
may consider using microdoses for specific personal and
professional activities, such as deep cognitive work or social
engagements. Such protocols may not adhere to a fixed
dosing schedule but may provide a psychedelic boost in
preferred settings and situations. Despite limited consensus
on the best microdosing protocol, dosing every day should be
avoided. Psychedelic microdosing is not without risks and any
negative side effects are more likely to occur when a
substance is consumed daily.

In his guide, Fadiman counsels readers to go slow:

By going slow, you give yourself a chance to really
know, to really observe what is different, why it’s
different, and how you can best take advantage of it.
The day you’re completely off is great as a reset day,
kind of like clearing the mind/body palate. Then
you’re fresh and ready to undertake the experiment
again.

Little has been written about the ideal length of a complete
microdosing cycle. In Fadiman’s 2019 article, he and his
co-author collected self-reports of experiences with
microdosing from research participants after one month of
microdosing using the Fadiman protocol. The one-month
cycle was chosen for research purposes, however, and
lengthier schedules (from 2-4 months) are very common.
Unless you experience serious negative side-effects after the
first few microdoses, at least a month of microdosing may be
necessary to obtain any potential benefits. Going on for too
long, though, isn’t recommended. Even if you experience
significant positive effects, microdosing consistently for six
months to a year or more is not advisable due to
cardiovascular risk.

The best approach to a microdosing protocol is consistent
with any recreational drug use – proceed with caution.

8. Carefully measure and divide microdoses

Precision goes a long way when using drugs recreationally.
Along with adulterated black-market substances, a great deal
of drug harm stems from users miscalculating how much they
are actually consuming. In his book Drug Use for Grown-Ups
(2021), the neuroscientist Carl Hart provides readers with
four ‘important lessons to facilitate their health and
happiness’ when using drugs recreationally, the first of which
is dose; the other three are route of administration, set, and
setting. Hart writes that the dose, the amount of the drug
consumed, ‘is perhaps the most crucial factor in determining
the effects produced by the drug’. The most delightful
psychedelic experience can quickly become harrowing if too
much of the drug is consumed. It is imperative to be as
accurate as possible when measuring individual microdoses.

Psychedelic plants and fungi can be measured with milligram
scales. Generally speaking, a microdose is considered
one-10th to one-20th of a standard psychedelic dose. Things
are trickier with psychedelics like LSD, a liquid typically sold
as tiny squares of blotting paper. It is basically impossible to
ascertain the potency of an LSD blotter without hi-tech
drug-testing hardware. One blotter is commonly understood
to be one recreational dose of LSD, about 100 micrograms.
One method for dividing LSD blotters is to cut a square into
nine equal microdoses, though this is slightly greater than the
one-10th to one-20th standard. Those new to microdosing
may consider cutting smaller squares given how potent LSD
is, though it may be difficult to cut the microdoses into equal
portions.

Finally, dividing the microdoses and storing them properly
before starting the cycle is prudent for a safe and effective
microdosing adventure.
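
As a sanity check on the blotter arithmetic above, here is a minimal sketch in Python. The 100-microgram figure is the one given in the text; the rest is simple division.

blotter_ug = 100   # one standard blotter, the figure given in the text
pieces = 9         # the nine-way cut described above

per_piece = blotter_ug / pieces
target_low, target_high = blotter_ug / 20, blotter_ug / 10

print(f'nine-way cut: {per_piece:.1f} micrograms per piece')
print(f'one-20th to one-10th standard: {target_low:.0f} to {target_high:.0f} micrograms')
# Roughly 11 micrograms per piece, just above the 5-10 microgram band,
# which is why the text calls a nine-way cut slightly greater than standard.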

9. Day 1 microdosing and beyond: track microdoses
and effects in a journal or spreadsheet

The first day of microdosing is exceptionally important,
especially if you have never previously used a psychedelic,
microdosing or otherwise. Though LSD and magic
mushrooms are considered relatively safe recreational
substances, there is always a chance you may experience a
bad reaction, as is true with any recreational or medicinal
drug. On day 1, jotting notes or journaling about how you
respond to the microdose is a good way to keep track of any
bad reactions. Be sure to track good reactions as well. The
goal is to just be mindful of how you respond to the
psychedelic, and ensure it is something you can manage on a
regular basis.

You may consider journaling and tracking other data points
regularly throughout a microdosing cycle. Given the varied
nature of how any individual person on any given day may
experience psychedelics, monitoring your day-to-day
experience may help you separate hype from reality. As I
detailed earlier, the anecdotal reports of microdosing soar far
higher than the scientific evidence. Reports of phenomenal
outcomes due to microdosing should not be dismissed, but
any positive effects you experience are likely to be less
dramatic. You can use tools like daily journaling, or
psychometric measures of depression, anxiety, flow, and
wellbeing to gain clarity about your experiences with
microdosing. You may consider N-of-1 citizen science, a
method based on a study population of one, currently gaining
ground in microdosing research, as a model for collecting and
interpreting data about yourself. Again, the goal is to use
more than personal intuition to delineate how microdosing
affects your mind, body and wellbeing.
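
For readers who prefer a spreadsheet to a notebook, here is a minimal sketch in Python of a day-by-day log written as a CSV file, which any spreadsheet program can open. The filename and the columns (mood, focus, sleep, notes) are illustrative assumptions on my part, not a validated psychometric instrument.

# Append daily microdosing observations to a simple CSV journal.
import csv
import os
from datetime import date

LOGFILE = 'microdose_log.csv'   # hypothetical filename
FIELDS = ['date', 'dosed', 'amount', 'mood_1to10', 'focus_1to10',
          'sleep_hours', 'notes']

def log_entry(dosed, amount='', mood=None, focus=None, sleep=None, notes=''):
    # Append one day's observations, writing a header row if the file is new.
    new_file = not os.path.exists(LOGFILE)
    with open(LOGFILE, 'a', newline='') as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({'date': date.today().isoformat(), 'dosed': dosed,
                         'amount': amount, 'mood_1to10': mood,
                         'focus_1to10': focus, 'sleep_hours': sleep,
                         'notes': notes})

log_entry(dosed=True, amount='10 micrograms', mood=7, focus=6, sleep=7.5,
          notes='day 1 - mild and manageable')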

It would be irresponsible to conclude this piece without the
cautionary note that microdosing suffers from a thorny
quandary: it is clearly influenced by prior expectations, so
we can’t be sure how much of its effects are due to expectation or
to inherent properties of the drugs. This is a complex
predicament for psychotropic substances, including
antidepressants, across the board.

In popular culture, microdosing has been associated with
transformative psychological and spiritual experiences. When
captivating and sparkling anecdotes are propagated in the
media, many people considering microdosing may inevitably
surmise that it will also be transformative for them. Hype
surrounding microdosing and psychedelics in general has
cultivated an air of exceptional optimism and enthralment
about their capabilities. The reality is that microdosing is
susceptible to expectancy effects, possibly more so than other
psychotropics. In fact, a 2021 study found no significant
difference between a microdosing group and a placebo group
on multiple psychological outcomes. There is a growing body
of academic literature and lay media discussing the relevance
of expectancy and placebo effects in microdosing, and it is
worth reviewing it before starting a microdosing cycle.

None of this means that microdosing should be written off –
placebo-controlled studies do show that microdosing has
positive (and sometimes negative) effects on mood, cognition
and wellbeing. And psychedelics across the board are
inherently difficult to investigate because they affect
fundamental features of consciousness, rendering one’s
response to them subject to the many vicissitudes of life itself.
This is not acceptable to scientists – scientists like to explain
how and why things happen, and in fine detail. They like to
quantify variables and generate equations. Psychedelic
experiences do not lend themselves to such precise
explanations.

Compounding the complexity is the fact that establishing
causal effects is difficult in microdosing research due to
inadequate blinding methods. Indeed, the most consistent
finding across the existing microdosing literature is that the
effects of microdoses are often subjectively noticeable, in
other words, microdosers can reliably tell they are ‘under the
influence’. (Some would argue that these slight alterations in
consciousness are necessary for microdosing to work.)
Nonetheless, blinding is a bedrock of establishing causality in
clinical trials and experimental research – scientists cannot
say microdosing is superior to placebo if participants can
reliably guess whether they have been given the actual drug or
not. In a recent review of placebo-controlled microdosing
studies, a duo of Australian researchers contend that ‘it is
likely that a substantial proportion of participants in these
studies were able to identify whether they had taken a
microdose.’
I believe that microdosing can have positive effects on
cognition, mental health and happiness, and can be extremely
useful for many when used correctly. I am also convinced that
controlled microdosing research will likely continue to
produce middling results, as has been the case so far.
Psychedelic experiences are esoteric phenomena that
humanity grapples with mercurially, as an individual does
with complicated emotions like fear and love. Neither
multitudes of subjective anecdotes nor scientific instruments
can capture the multidimensional value of psychedelics in
society. Historical evidence is more compelling in my view
and, historically, psychedelics are deeply intertwined with
some of the most ethereal elements of humanity – rituals,
healing, culture and consciousness. Psychedelics seem to vary
and respond in accordance with the society and era in which
they exist.

The question is: what is the role of microdosing in the context
of today? I obviously do not have the answer. However,
microdosing appears to agreeably integrate psychedelics with
21st-century society. In the modern era, genetic engineering,
brain-computer interfaces and digital biomarkers are possible
in humans. Those of us using digital and software
technologies for daily work are already augmenting our
productivity and our brains. And the need to gain an edge has
become more relentless than ever before. The microdosing
ethos aligns perfectly with these sentiments – just a little
more biohacking to drive results.

There may be a simpler interpretation. Psychedelics are
mainstream and widely accepted in the 21st century, possibly
more so than at any other time in the history of Western
societies. But at the same time, it’s not practical to enter the
altered state of the high-dose psychedelic experience in any
routine way. Microdosing may be that accessible and modest
psychedelic nudge we might all need in modern society to
expand our collective consciousness and see the world in new
and creative ways.

E4 (A man beyond categories)

Paul Tillich was a religious socialist and a
profoundly subtle theologian who placed
doubt at the centre of his thought

My recollections of my grandfather are mostly from
childhood visits to my grandparents’ summer cottage in East
Hampton in the 1950s and ’60s. The village, with its
magnificent Atlantic beaches on the South Shore of Long
Island, had already become an intellectual and artistic
summer gathering place for European academics, writers and
artists displaced by the Second World War. And my
grandfather, who had an active social life, counted many of
them as friends.

My grandmother, Hannah (or ‘Oma’ as we called her), made
it very clear that her husband’s sacred time for writing from
8am to 11am was inviolable, and she protected him from the
noise and childish distractions my younger sister and I
provided. In the evenings, he would preside over dinner or
cocktail parties for friends and acquaintances from the
academic and artistic community of what is now called the
Hamptons. Occasionally, my grandfather would engage me in
a game of chess that, inexplicably, I always lost. I never had
the chance to discuss philosophy with him as he died when I
was 13, but the conversation at dinner in the Tillich
household was rich with ideas, political events and the work
of writers and artists I would learn about only much later.

Tillich had been among the first group of professors and the
first non-Jewish professor to be dismissed by Hitler for
opposing Nazism. The Nazis suppressed his book The
Socialist Decision (1933), and consigned it to the flames in
Nazi book burnings. In late 1933, he fled Germany with his
family to the United States, where he became established as a
public intellectual, holding positions as professor of
philosophy at Union Theological Seminary in New York and
then as a university professor at Harvard, and finally as
professor of theology at the University of Chicago Divinity
School. During the Second World War, Tillich made radio
broadcasts against the Nazi regime for the US State
Department and assisted European intellectuals in
emigrating to the US. In the 1940s, he served as chairman of
the Council for a Democratic Germany. Due to the interest of
the magazine magnate Henry Luce and his wife Clare Boothe
Luce, Tillich was featured on the cover of Time magazine in
March 1959 and was the featured speaker at Time’s
star-studded 40th anniversary gala dinner.

Paul Tillich was raised in the 19th century by conservative
parents in a walled medieval village in Brandenburg,
Germany. His father was a Lutheran pastor and Church
administrator, born and educated in Berlin. His strict parents
tried to imbue the young Paul with traditional religious
values. They failed.

Tillich lived through great social, political and technological
change driven by two world wars, the wild freedom of the
Weimar Republic and the fateful beginnings of the Nazi
regime. After fleeing to the US, he lived through the Second
World War, the McCarthy era, and then the beginnings of the
American Civil Rights Movement, student unrest and the
emergence of psychedelic drugs. Turning down an
opportunity from Timothy Leary and his own assistant, Paul
Lee, to try LSD while at Harvard, Tillich told them he was
from the wrong era for such experimentation.

During the First World War, Tillich was awarded the Iron
Cross for courage and military contributions in battle, after
surviving a four-year stint as a chaplain in the German army.
His traumatic experiences at Verdun and elsewhere on the
Western Front led to two nervous breakdowns. These
experiences, along with his postwar life in Weimar Berlin, his
open marriage with Hannah Tillich, and his political and
philosophical engagements with socialist academic
colleagues, artists and writers, shattered the 19th-century
worldview and traditional religious conceptions of God and
faith taught by his conservative parents and drove him to
redefine his philosophical outlook.
While actively participating in intellectual circles, Tillich
cultivated friendships with other key thinkers. As a
philosophy professor at the University of Frankfurt, he helped
establish a chair in philosophy to bring Max Horkheimer to
the faculty. He also supervised Theodor Adorno’s habilitation
thesis. While not formally affiliated
with Horkheimer and Adorno’s neo-Marxist Institute for
Social Research, Tillich maintained lifelong relationships with
both men. Other friends and acquaintances included Mircea
Eliade, Erich Fromm, Adolph Lowe, Hannah Arendt, J Robert
Oppenheimer, Erik Erikson, Karen Horney and Rollo May.

Tillich considered himself a boundary thinker between
philosophy and theology, religion and culture, the Old World
and the New. His lectures ranged far beyond the usual
theological ones, and included secular topics such as art,
culture, psychoanalysis and sociology. His interdisciplinary
thinking encompassed the enormous social, political,
technological and intellectual change and conflict through
which he lived. He didn’t easily fit within existing categories.

In the 1920s and ’30s, while still in Germany, Tillich regarded
himself as a ‘religious socialist’ and a strong opponent of
Nazism. Written in 1932, his book The Socialist Decision
provided an alternative vision to the extremes of the
nationalist Right and the communist Left that were tearing
his homeland apart. He envisioned a unified harmonious
socialist community inspired by Christian ideals, justice and
political equality, believing, somewhat naively, that the
collapse of the Weimar Republic could be a ‘Kairos’ moment
(the ‘right time’ for a historic change) that might provide an
opportunity for this breakthrough.
According to Tillich, nationalist authoritarianism relies on the
origin myth of a culturally and racially pure society in some
idealised, romanticised past, a description that encapsulates
many modern populist movements. Tillich’s religious
socialism combined Christianity with politics and culture,
offering a Leftist humanistic conception of Christian
teachings. This was his attempt to unite Christian and Social
Democratic ideas against Nazism and its myths of origin,
blood and soil. Tillich wrote presciently that:

If … political romanticism and, with it, militant
nationalism proves victorious, a self-annihilating
struggle of the European peoples is inevitable. The
salvation of European society from a return to
barbarism lies in the hands of socialism.

By the time the book appeared in 1933, the Nazis had already
seized power and were rapidly eliminating all opposition.
Tillich’s attempt not only failed but made him a target. The
Nazis reviled The Socialist Decision and suppressed it soon
after publication. Tillich was lucky to escape Germany. Once
when the Gestapo knocked on the door looking for him, his
wife informed them that he was away. (He was actually out
for a walk.)

In April 1933, Hitler’s government suspended Tillich from his
Frankfurt University professorship. The dismissal stunned
Tillich and he was slow to react, not managing to leave
Germany until late October 1933. He even appealed his
dismissal to the German Ministry of Culture after he arrived
in New York. The appeal was curtly rejected.
His Frankfurt colleagues Horkheimer and Adorno were also
to leave Germany within a year. At Reinhold Niebuhr’s
invitation, Tillich joined the faculty of Union Theological
Seminary, where he became a professor of philosophical
theology at the age of 47. While he initially struggled to learn
English, he gradually developed into a charismatic and
sought-after lecturer.

Defying the traditional notion of a theologian, Tillich worked
across the fields of philosophy, theology and culture, focusing
on the personal search for answers to ultimate questions. He
examined the human quest for meaning, but without
theorising about the nature of God. God, he thought, could be
discussed only symbolically and never literally.

His thought was founded on the conception of humans as
finite and separate individual mortal beings. While limited by
their finitude, humans are always searching for meaning,
purpose and justification, concepts that refer to the infinite
Universe beyond themselves. As finite beings, humans can
never reach or grasp the infinite, but remain deeply
concerned with and driven by the ultimate questions of
meaning and purpose in their lives.

To understand Tillich’s theology, it is important to begin
with his two key concepts: faith and God. Tillich considered
faith not a belief in the unbelievable, but the ‘state of being
grasped by an ultimate concern’; and he conceived of God not
as a being, but as ‘the ground of being’. Both concepts are
consistent with secular humanist as well as religious
conceptions of the Universe.

Tillich’s thought fused religious and secular ideas of morality
by refusing any fixed moral ideology and by rejecting
traditional notions of an authoritarian, top-down ‘God rules
Man’ and ‘Man serves God’ religious approach. He conceived
of love and justice as the unifying social forces in the face of
the fundamental anxiety created by human mortality and
separateness. For Tillich, religion, morality and meaning
come from humans, not from God. He focused on the
experience and feelings of upward-looking humans searching
for meaning rather than on a religious superstructure of a
downward-looking God. This is as much a psychological
approach as a religious one. Tillich’s approach promotes
acceptance of our humanness, our mortality, our finite being,
and of the differing ideas, meanings and morals that we each
develop for ourselves. His openness to the existential
uncertainty confronting all people in considering questions of
ultimate importance was rare in theological circles during his
lifetime.

His key philosophical terms recast nominally religious
elements in a manner that expands their relevance beyond
Christianity. Tillich’s radical approach to faith as an
expression of ‘ultimate concern’ eliminates the importance of
narrow denominational religious orthodoxies. His idea of God
as not a being, but the ‘ground of being’ and Man as a finite
being, means that God is beyond the intellectual grasp of
humans, and religious statements about the nature of God
can never be taken literally. This broad conception brings
together religious faith and secular and scientific concerns
regarding the origin of Man and the Universe. For Tillich, the
‘ground of being’ could mean the Big Bang, the Universe itself
or a universal God. He rejected the traditional theistic notion
of God as a being that moves around the Universe doing great
things and worrying about, interfering with and scolding
human beings. Rather, Tillich conceived of God as a symbolic
object of the universal human concern for ultimate questions
of meaning and purpose. God is thus outside our Universe
and is a symbol for the answers to our deepest questions, but
the answers always elude our grasp.

One of the most difficult aspects of Tillich’s thought is the
ambiguity that characterises much of his writing. Among the
most puzzling and paradoxical ideas in his Systematic
Theology (1951) is his statement that ‘God does not exist’ and
that ‘to argue that God exists is to deny him.’ Tillich goes on to
state that the word ‘existence’ should never be used in
conjunction with the word ‘God’. These assertions fit with the
idea of God as a symbolic object that is a repository of
ultimate concern, but not a being. Tillich scholars have
disagreed on the meaning and significance of these passages.
Does Tillich mean that, since God is ‘beyond essence and
existence’ and exists outside of time and space, God is not part
of existence? Or is Tillich implying that God really doesn’t
exist and is not required to do anything in the Universe?
Certainly, in Tillich’s theology, God is an abstract and
somewhat inactive concept. The action all comes from the
human side through faith. God is the unreachable object of
our ultimate concern. This illustrates some of the difficulties
of interpreting Tillich’s intentionally paradoxical and
deliberately ambiguous assertions as he tries to avoid
discussing the literal nature of God.

Tillich describes faith as an ecstatic ‘centred act of the whole
personality’, but insists that faith always includes doubt and
can include demonic or idolatrous elements that are not
ultimate concerns. He wrote that uncertainty is inherent in
faith (and apparently sometimes in reading Tillich), and that
human courage is essential to overcoming the risks of the
unavoidable uncertainty and doubt regarding our ultimate
concern, ‘be it nation, success, a god, or the God of the Bible.’
The risk of this uncertainty is a loss of faith that breaks down
the meaning of one’s life. This loss of faith has happened
many times with the collapse of utopian ideologies, states and
empires, from those of communism and fascism to
monarchies and failed democracies.

The point here is simple. As humans, we share many varied
belief systems, any of which may be false or may ultimately
collapse, evolve or disappear entirely. This is the risk of faith.
Yet humans cannot live without it. We always have faith,
whether or not we acknowledge it, because we always have
ultimate concerns. Faith is a response to the finiteness of
human existence. Our faith in ultimate meaning takes us
beyond our finitude to something infinite that might answer
ultimate questions of meaning and purpose, yet answers
always lie beyond our grasp. Tillich’s humanistic and
existentialist theology analysed the path of each person in
their individual struggle with and community approach to
faith and God. And Tillich was above all else a humanist,
although he recognised that liberal humanism was also a
secular quasi-religion.

This radical redefinition of faith as a concern for ultimate
questions broadened the meaning of faith beyond religion to
the universally shared human effort to address spiritual,
social, political and aesthetic concerns – humanity’s search
for meaning. He recognised that any belief system could be a
source of ultimate or at least ‘preliminary’ concern. For
example, faith in extreme nationalism risks turning the
nation itself into an authoritarian god. But nationalism was a
false, idolatrous and demonic god. Tillich, of course, had in
mind the example of Nazi Germany as the embodiment of a
demonic national state.

He recognised that the ultimate concerns of humans are
many and varied, and are not necessarily religious in nature.
But he also warned that faith based on utopian or populist
political ideals could become an idolatrous quasi-religion
when directed at narrow concerns. The state in authoritarian
ideologies is always a false idol that encourages and absorbs
faith and presents a false object for worship; a utopian,
nationalistic ideology and myth that harms rather than
benefits humanity. Tillich applied this logic even to the
Church, writing that ‘no church has the right to put itself in
the place of the ultimate.’

Tillich was clear that God could not be a being in the
Universe, else God could not have created the Universe. As we
have mentioned, for Tillich, God was outside the Universe
and beyond space and time, existence and essence. This broad
and somewhat inchoate conception of God led other thinkers
to accuse him of atheism, pantheism and even panentheism.
But Tillich rejected all these labels. Instead, he continued to
assert that we can speak of God only symbolically. Humans
lack the knowledge and capacity to speak directly and literally
about what God is. Instead, Tillich interested himself in the
human relationship to the understanding of God as an object
of ultimate concern. His perspective is always that of the
finite human looking up toward the infinite. Religious
symbols for Tillich are earthly finite things, but they point
toward the infinite and the unreachable Universe beyond
human understanding. They cannot define or describe God,
but only point to Him, and thus to our most fundamental
concerns.

Faith always includes doubt and an element of courage to
sustain one’s faith in the face of such doubt. Tillich spoke of
the social expressions of faith as a community of faith. But a
community of faith (ie, shared ultimate concern) is always
vulnerable to authoritarianism and must be ‘defended against
authoritarian attacks’. By enforcing ‘spiritual conformity’,
Church or scholarly authorities can turn faith into
authoritarianism. The inclusion of doctrines of infallibility in
the Church is an example of the kind of authoritarianism that
Tillich resisted. For him, no one is infallible and all doctrines
are subject to uncertainty and doubt.

To avoid the almost inevitable tendency toward institutional
authoritarianism, Tillich writes that ‘creedal expressions’ of
faith (ie, specific denominational beliefs, rituals and
sacraments) must never be regarded as ultimate, but must
always make room for criticism and doubt. Analogously,
Tillich defined morality, not as a system of religion-inspired
rules, but as the free expression by an individual of who he
or she is, as a person. He rejected rigid ‘moralism’ as coming
from rules outside the individual. Instead, he conceived of
morality as arising in each of us based on our feelings of love
and justice for others in our world. Tillich believed that love
infused with justice, not ideology, was the source of all
morality and the unifying force to bridge the ontological
separation that we experience as individual beings. He rejected
the idea of a frozen concrete moral content: rules are merely a
collection of current social wisdom about how to live and are
not morality. Blind obedience to rules is mere submission to
an authoritarian master. Morality comes from within.

When I think of my grandfather’s work, I always come back to
the centrality of doubt to his thought. He concluded The
Courage to Be (1952) with a much misunderstood final
sentence perhaps inspired by his horrific experiences in the
First World War:

The courage to be is rooted in the God who appears
when God has disappeared in the anxiety of doubt.

When the God of theism has disappeared in the anxiety of
doubt, what appears is the God above God or the power of
one’s own being. I take this to mean that when you confront
the shock of the non-existence of the religious or theistic
conception of God, you become strengthened with the
knowledge of the power of your own being, a power that is
above and beyond theistic conceptions and is in fact the
source of all of those religious conceptions. This
interpretation has Tillich crossing the boundary of religious
faith into the existentialist belief in his own personal courage
and power as a being. And this, after all, is the purpose and
conclusion of all philosophical thought that must always
come back to the self, the human being trying to understand
the Universe, but always returning to itself and its own
subjective interests that ultimately create the only world we
can live in, that of our own being.

E5
In 1938, near the end of the Great Depression, the US
president Franklin Delano Roosevelt commissioned a ‘Report
on the Economic Conditions of the South’, examining the
‘economic unbalance in the nation’ due to the region’s dire
poverty. In a speech following the report, Roosevelt deemed
the South ‘the nation’s No 1 economic problem’, declaring
that its vast levels of inequality had led to persistent
underdevelopment.

Although controversial, Roosevelt’s comments were
historically accurate. The president’s well-read and highly
educated young southern advisors had convinced him that the
South’s political problems were partially a result of ‘economic
colonialism’ – namely, that the South was used as an
extractive economy for the rest of the nation, leaving the
region both impoverished and underdeveloped. Plantation
slavery had made the planters rich, but it left the South poor.

Unlike the industrialising North and, eventually, the
developing and urbanising West, the high stratification and
concentrated wealth of the 19th-century South laid the
foundations for its 20th-century problems. The region’s
richest white people profited wildly from various forms of
unfree labour, from slavery and penal servitude to child
indenture and debt peonage; they also invested very little in
roads, schools, utilities and other forms of infrastructure and
development. The combination of great wealth and extreme
maldistribution has left people in the South impoverished,
underpaid, underserved and undereducated, with the shortest
lifespans in all of the United States. Southerners, both Black
and white, are less educated and less healthy than other
Americans. They are more violent and more likely to die
young.

Now, 86 years after Roosevelt’s report, the South has
returned to historically high levels of economic inequality,
lagging behind the rest of the US by every measurable
standard. The plight of the South is a direct result of its long
history of brutal labour exploitation and its elites’ refusal to
invest in their communities. They have kept the South in dire
poverty, stifled creativity and innovation, and have all but
prevented workers from attaining any kind of real power.

With the rapid industrialisation spurred by the Second World
War, the South made great economic strides, but never quite
caught up with the prosperity of the rest of the US. While the
South’s gross domestic product has remained around 90 per
cent of the US rate for decades, the deindustrialisation of
the 1990s devastated rural areas. Since then, hospitals and
medical clinics have closed in record numbers, and deaths of
despair (those from alcohol, drugs or suicide) have
skyrocketed, as has substance abuse. Southerners in general
are isolated and lonely, and wealth and power are heavily
concentrated: there are a few thousand incredibly wealthy
families – almost all of them the direct descendants of the
Confederacy’s wealthiest slaveholders – a
smaller-than-average middle class, and masses of poor
people, working class or not. The South, with few worker
protections, prevents its working classes from earning a living
wage. It’s virtually impossible to exist on the meagre income
of a single, low-wage, 40-hour-a-week job, especially since
the US has no social healthcare benefits.

The American South is typically defined as the states of the
former Confederacy, stretching north to the Mason-Dixon
line separating Maryland from Pennsylvania, and west to
Texas and Oklahoma. Today, one-fifth of the South’s counties
carry the ‘persistent poverty’ designation, meaning
they have had poverty rates above 20 per cent for more than
30 years. Four-fifths of all persistently poor counties in the
nation are in the states of the former Confederacy. The data is
clear that most Southern states continue to be impoverished
and politically backwards. Whether measured in terms of
development, health or happiness, the region is bad at
everything good, and good at everything bad.

The popularity in liberal circles of the New History of
Capitalism (NHC) as a way to explain the region’s exceptionalism has
slowed in recent years. The NHC emerged in the 2000s and
2010s, as one historian wrote, by claiming ‘slavery as integral,
rather than oppositional, to capitalism.’ It seems likely that
during the post-Cold War triumph of capitalism, a subset of
historians began trying to tie much of the past to the term –
with the most extreme instance being the insistence that
slavery was the key to American capitalism. While the NHC
scholars rarely define terms like ‘capitalism’, the problems
with their theories are more than academic. Unfortunately,
presenting enslavers as cunning, profit-driven businessmen
not only obscures important features of the past but also
downplays immense regional differences in economic
development.

Thinking back over the NHC trends, it is important to note
how other scholars, both past and present, have presented the
problems of the region, and discuss issues that may have been
obscured by a heavy emphasis on business and ‘slavery’s
capitalism’. As the economic historian Gavin Wright has
pointed out, the NHC’s central claim, echoed in The New
York Times’ Pulitzer Prize-winning ‘1619 Project’ – that
slavery was essential for American economic growth –
ignored decades of accepted historiographical work on
capitalism and slavery. It also contradicted nearly everything
economists have argued regarding slavery’s impact on the
South’s (under)development.

Beginning in the 1960s, many historians classified the South,
from the days of slavery until the Second World War, as
distinctively precapitalist in significant ways. They saw the
region as having a type of ‘merchant’ or ‘agrarian’ capitalism,
and never considered the states of the Old Confederacy as
shrewdly ‘capitalist’ (the term itself without any modifiers).
Primarily due to the absence of a free labour society, but also
because of the lack of infrastructure and development within
the region – a place with few cities, little industrialisation,
and few social services – the South was often portrayed as
distinctly anti-capitalist: enslavers had to be dragged into
modernity against their wills.
After the Second World War industrialisation boom ushered
the South into a more fully capitalist society, it essentially
became a colonial economy for the North, as it courted
investment and corporations from the capital-rich
Northeastern US. Existing in a dependency-type
relationship, it was never really the South – or southern
labour, no matter how unfree or brutalised – driving the US
economy.

Finally, contrary to the position that American slavery
represents the key or essence of capitalism, the most recent
scholarship regarding economic analysis of slavery argues
that the institution was not economically efficient. All these
points highlight the need for studies on growth, and more
importantly, on underdevelopment. Slavery made the
planters very rich, but it made the South very poor. In the
19th century, capitalism, even industrial capitalism, did not
bring the South to the developmental standards of the rest of
the nation. The question remains: why is that so?

If we turn from looking at planters to studying labour, we see
that elite capture of the state is bad for democracy and worse
for development. It also helps us distinguish between growth
and development, highlighting the unevenness of both in
different areas of the US. The US is a large country and
awareness of the difference between growth and development
can help us see that perhaps it makes more sense to compare
the American South with places in the Global South rather
than the American North or West.

To begin with, southern militias proved an effective imperial
military tool during the brutal process of Indian Removal,
which lasted into the 20th century. The white colonialist push
westward robbed Native Americans of the greatest wealth the
region has: land. That land eventually became the richest
white people’s main source of power, owned by the few and
guarded like a religion. Elite white southerners were obsessed
with intermarriage and have kept their fortunes intact for
generations. While they hoarded riches and resources for
themselves, they neglected to invest in the communities in
which they lived. With few improvements in technology and
development, the South’s dependence on slavery enriched
enslavers and their descendants, but it left the rest of the
region, both people and resources, deeply and cyclically
impoverished.

Americans think of the US as having been at a crossroads in
1860, between slavery and freedom, but that impasse was
more than just political and ideological: it was also economic.
While the North had made fantastic gains over the previous
decades by investing in its people, from education to
infrastructure, the South lagged far behind. The wealthy
enslavers refused to invest in the poor and middling-class
whites surrounding them, finding no compelling reason to
put money into communities they would move away from as
soon as the western spread of slavery beckoned. In terms of
development, whether infrastructure, education, healthcare
or wealth distribution, the South remained woefully
underdeveloped in comparison with the rest of the country.

With one-third of the nation’s population in 1860, the South
was responsible for only 10 per cent of US manufacturing
output, and possessed only 10 per cent of the nation’s
manufacturing labour force and 11 per cent of its
manufacturing capital. Its transportation system, best
described as a ‘conveyor belt’, transported goods
effectively but did little for people. The northern and even
western US had been investing in building schools and
providing free public education, but the cotton South left its
people to fend for themselves: education was reserved for the
rich. The North built hospitals, asylums and places for the
invalid and indigent; the South built jails and prisons.

Far from a democratic region, the Deep South instead
functioned more like an oligarchy or aristocracy. As W E B Du
Bois wrote in 1935: ‘Even among the 2 million slaveholders,
an oligarchy of 8,000 really ruled the South.’ The wealthiest
slaveholders wielded immense and pervasive power as
lawmakers, law-enforcers, judges and even jury members.
They dominated the region’s politics and devised multiple
ways to disenfranchise their poorer fellow countrymen. The
oligarchic structure of the 19th-century South meant that the
men who controlled government also controlled everything
else in society, from rental properties and bank loans to arrest
warrants and vigilante violence.

In fact, the enormous cost of the South’s implementation of
its various forms of unfree labour still hasn’t been
adequately calculated. The ubiquitous, police-state-like
criminal justice system, complete with slave patrols and night
riders, the overseers and slave-drivers and catchers and other
middlemen who had to be hired to keep people working –
none of that has truly been accounted for yet. Recently, the
economists Richard Hornbeck and Trevon Logan challenged
decades of accepted scholarship concerning the
cost-effectiveness of slavery, arguing that slavery was
inefficient when ‘including costs incurred by enslaved people
themselves’. Under this view, emancipation produced major
economic gains.

The first years of Reconstruction brought immense changes
to the South: a free labour economy threatened to change the
entire social order as Black politicians courted poor white
voters for a cross-racial, class-based coalition. For the first
time, the region experienced democratic elections, open to all
men, Black and white. The new state legislators established a
system of public education. They began funding public works
and infrastructure. They started developing the region.

Despite these transformations, former slaveholding families
remained rich – and powerful – because they held a
near-monopoly of the only real capital left in the war-torn
South: land. The leaders of the Confederate insurrection were
never held accountable for treason, and the same few wealthy
white families who ruled the slave South remained
entrenched in power even after the war. Their enduring place
at the top of Southern society helped give rise to the
‘continuity thesis’, in which some scholars argue that, despite
the Civil War and Reconstruction, little changed in the US
South. In many rural areas, even today, their heirs still lord
over their little locales. The South’s ruling elite eventually
regained complete control of the region, disenfranchising the
masses, terrorising the leaders and the intellectuals and the
brave, and undergirding this shadow world of unfreedoms
with the ever-present threat of violence.

The Southern elite may have, eventually, emerged from
Reconstruction back on top of the South, but the region no
longer dominated US politics. The US South remained
overwhelmingly agricultural well into the 20th century and
long after the rest of the country had become more urban.
From the vital perspective of social and labour relations, the
South’s transition to capitalism must be considered late by
US standards. In the 1940s and ’50s, historians began arguing
that the type of capitalism in the South throughout the early
20th century was merchant capitalism (also known as
mercantile, or agrarian capitalism), not industrial capitalism.
Merchant capitalism is considered the earliest phase in the
transition to capitalism; it is more about moving goods to
market, and is characterised by the lack of industrialisation,
wage labour and commercial finance. A modified version of
the merchant capitalism model would be championed several
decades later by Eugene Genovese, whose southern ‘in but not
of’ capitalism theory, replete with semi-feudalistic social
relations, appeared in the 1960s and generated great interest
and debate, and eventually, in the early 2000s, came under
sustained attack by some US historians who emerged
following the end of the Cold War.

These scholars, who eventually came to be grouped under the
designation ‘the New History of Capitalism’, never truly
engaged with the consequences of the fact that the slave
South, in a strict sense, cannot be considered capitalist
because the enslaved are unfree, forced labourers who cannot
sell their labour power. Even if we sidestep the
slavery-as-labour issue, vital to Marxist assessments of
capitalist society, we must acknowledge that impoverished
southern whites also had little to no control over their
respective labour power. The power of enslaved labour
consistently reduced the demand for workers, lowered their
wages, and rendered their bargaining power weak to
worthless. Labourers in the South, regardless of race, worked
within a world ruled by degrees of freedom. Never a stark
dichotomy, freedom emerged from a give-and-take process of
political contestation and negotiation with the planter
aristocracy. Slavery was simply one form – albeit the harshest
– of a range of unfreedom, lasting well into the 20th century.

But these merchant capitalist labour market features were not simply rooted in the racism of southern white culture.
Instead, elite white southerners enforced a calculated and
well-codified social order, complete with exploitative labour
practices, debt peonage, and continuing forms of unfree
labour made possible by the burgeoning criminal justice
system. To maintain power and thus control of the region, at
times the elite chose to forgo higher profits in the short term
so that they could keep their labour force under tighter
control in the long term. This was certainly the case with
industrialisation during slavery, when enslavers could have
turned a much higher profit by industrialising but chose not
to; they did not want to disturb the fragile hierarchy.

Compared with the rest of the US, with only a brief
interruption during early Reconstruction, the South’s lack of
labour power, infrastructure and internal development
extended well into the 20th century. Between the violence of the Ku Klux Klan and other white supremacists, and the lack of
opportunities for poor and working-class southerners, by the
mid-1870s former slaveholders and their descendants were
back in complete control of the South. Due to the absence of land reform and reparations, and the failure to punish Confederate leaders or confiscate Confederate property, the
labour lords of the antebellum period merely became the
landlords of the postbellum period. Their primary source of
wealth changed, but they remained in power, controlling
everything in the South and reverting to their old ways of
undertaxing, underfunding and underdeveloping.

Given these facts, before ‘the New History of Capitalism’ gained popularity 20 years ago, many historians viewed the
transition from slavery to capitalism as a long process
because, even after emancipation, unfree labour continued to
dominate a significant portion of the labour market. Some
argued that forms of unfree labour persisted in the South as
late as the 1920s and ’30s, and others claimed they lasted
until the Second World War. Even when technically free,
African American workers generally lacked the labour power
necessary to be deemed a proletariat, able to effectively
negotiate on a labour market. Due to predatory sharecropping
contracts and debt peonage, an extremely punitive criminal
justice system, and the added layer of domestic terrorism
from white supremacists, the poorest people in the South,
both Black and white, still worked in a society in which labour
was not entirely free, that is, able to be brought to a labour
market.

The South’s shadowland of unfree labour rested entirely on an undemocratic government that was itself designed to control labour.
In the 1880s and ’90s, when Farmers’ Alliances and Populism
began mounting a serious political challenge to planter
domination, every southern state passed laws limiting the
vote. While primarily aimed at disenfranchising Black people,
the laws also disenfranchised poor white people, further
concentrating each state’s political power in the hands of
white elites in plantation districts. It was an effective strategy:
in 1880 there were 160,000 union members in the US, and
fewer than 6 per cent of them lived in the South. Today, while
a record low of just 10 per cent of Americans are union
members, the South’s numbers are roughly half of that, with
states like South Carolina – the birthplace of the Confederacy
– having the lowest union membership in the nation, just
over a measly 2 per cent.

During the last third of the 19th century, the value of output
rose and capital investment in the US increased tenfold.
Meanwhile, most of the Deep South (outside of a few large
cities like Atlanta and New Orleans) remained ‘capital
starved’ and ‘technologically laggard’, as the region’s elite
continued to baulk at infrastructure or other kinds of
developmental investment. To secure funding, the states of
the former Confederacy needed to court investment from
outside the region – first the North and West, later Europe
and Asia. Originally, southern politicians chased northern
capital by offering investors generous tax breaks and other
financial incentives. Without a strong tax base or an effective
bureaucracy, the region suffered further because most profits
were routed out of the South back to northern owners and
investors. Taken together, these things meant that, well into
the 20th century, the South remained overwhelmingly rural,
without a strong system of infrastructure or a good plan
for development. In 1900, the country was 40 per cent urban
versus the South’s 18 per cent, and 25 per cent of the US
labour force was involved in manufacturing versus the
South’s 10 per cent. Something had to drastically change.

Following the Great Depression, which hit the rural, already-impoverished South harder than it did the rest of the
nation, the New Deal influx of money, federal programmes,
jobs and infrastructure helped bring the region fully into the
20th century. Perhaps most importantly, in the 1930s
Roosevelt’s New Deal finally broke the stranglehold on power held by the big plantation owners. The South was then
able to evolve from an agricultural labour market that had
pre-capitalist characteristics, shifting to a much larger
industrial workforce during the Second World War. Some $4
billion in federal spending poured into the region, funding
military facilities and forever ending the isolated labour
market. Since 1940, the South has outperformed the rest of
the US in income, job and construction growth, finally
reaching about 90 per cent of national per-capita income
norms. Yet it has remained critically behind in multiple
important infrastructural and development measurements,
from education and transit to poverty levels and healthcare.

Without question, the Second World War changed the South for the better. War industry jobs pulled workers from rural
areas, forcing southern farms to finally mechanise. This
mechanisation meant the destruction of sharecropping and
tenant farming, as well as debt peonage. Workers in the South
would finally be paid in wages – in cold, hard cash. And that
fact was incredibly freeing.

Outside of extractive industries, the type of industry that came to dominate the South was reliant on intensive,
low-wage labour, in striking contrast to the rest of the US.
‘A low-wage region in a high-wage country,’ the South
industrialised in a way that preserved and reinforced the class
and racial status quo, even when corporations were owned by
men from outside the region. Instead of being an agent of
radical change, southern industrialisation entrenched the
region’s legacies of low taxes for the wealthy, heavy-handed
labour control, and little in the way of governmental oversight
or regulation.

Even as the South experienced a period of relative prosperity from the Second World War to the 1990s, with development
at its peak, it never quite caught up to the rest of the nation.
While there were myriad reasons for this persistent gap, historians have attempted to explain it with a type of
regional dependency theory. Referring to the old Confederacy
as a ‘colonial economy’, they argued that northern-owned
corporations controlled southern money and power,
extracting resources and exploiting cheap labour, siphoning
both profits and tax dollars away from the impoverished
region – all while maintaining racist practices. Adding insult
to injury, the southern economy became the domain
of men from outside the South, men with no stake in the local
communities their decisions ultimately affected (indeed, often
devastated).

Whether or not the South was truly a colonial economy, framing it as such highlighted that the region remained
impoverished, infrastructurally stunted, and underdeveloped.
Even the golden era of the sunbelt South came to a bitter end
by the close of the 20th century. Rural developmental
problems began as far back as the 1980s, as local banks began
shuttering and hospitals closed. Things worsened in the
1990s, as the economic growth the South had enjoyed for decades
came crashing to a halt when federal trade deals eviscerated
manufacturing. The racial tolerance and progress made
possible by labour unions and working-class solidarity began
to erode; deindustrialisation profoundly changed the region.
Never having invested much in public services, state
governments continued to slash budgets through the 2000s.
This not only stalled new development, it also let much of the states’ infrastructure, education and healthcare systems fall into neglect and disrepair.

To close the continuing developmental gap in the poorest areas of the US, the country’s staggering levels of inequality
must be addressed. Through policies that redistribute property, the South – and the nation – may finally be brought up to the standards of the rest of the developed world. From universal
healthcare and a thriving public education sector to
functional public transportation and reliable infrastructure,
development is, quite simply, an essential part of restorative
justice. With deeply progressive taxation coupled with
democratic reforms, the right to organise and bargain collectively could be preserved, the right to retire fully funded, and the historically racist criminal justice system replaced with a Nordic
model. Even the poorest rural Americans could lead lives with
dignity due to governmental programmes such as a universal
basic income (which would immediately lift more than 43
million people out of poverty) and a federal jobs guarantee –
a concept drawn from the best aspects of the New Deal.

Today, more than a lifetime after Roosevelt’s declaration of the South as ‘the nation’s No 1 economic problem’, nothing has changed. The South remains poor and underdeveloped, lagging behind the rest of the country by every measurable standard. It is a moral blight on the nation’s conscience, and it is far past time to truly lift the region out of poverty and into the 21st century.
