Feminist Media Studies
Journal homepage: https://www.tandfonline.com/loi/rfms20

To cite this article: Beverley Skeggs (2020): Algorithms for “hers”: in whose interests?, Feminist Media Studies, DOI: 10.1080/14680777.2020.1783798

Published online: 24 Jun 2020.

Algorithms for “hers”: in whose interests?


Beverley Skeggs
Sociology Department, Lancaster University, Lancaster, UK

We ran a sociological software project (ESRC ES/KO10786) between 2013 and 2016 on “A Sociology of Values and Value”1 (Beverley Skeggs and Simon Yuill 2016). It began as
a study of how the affects of friendship and faith were monetized, but it quickly—because
this is what our data identified—turned into a study of the tracking and trading of
people’s data. There was so much tracking on people’s browsers that, at first, we were overwhelmed. Then we struggled to make sense of the radically different amounts of tracking and trading that occurred between people. One case stands out: the difference we saw between two women, one a middle-class, highly educated global journalist (Lara), the other an older, disabled woman in receipt of social security (Belle). Our data showed how the trackers and traders were ever-present on Lara’s browsers, offering her browser use up for sale on digital advertising auction sites thousands of times per day. She was very valuable trade. Yet they were barely interested in Belle. At first we thought we had a software design fault or a server problem with capturing data, but then we looked more closely. Belle was being targeted and traded, but only by debt agencies and, occasionally, Asda. Belle was traded for her lack of buying power, her lack of economic value.
Who was responsible for this digital divide? Clearly, the value-identifying algorithms built to track the trading potential of people for advertisers. From
primitive accumulation to planned economies to computational capitalism, the key driver
in the formation of any algorithm is usually the search for value. We have seen this take
different shapes, such as the story of the drive for computational power in the formation
of the Soviet Union’s food distribution system,2 but mainly we see algorithms used and/or
appropriated to shore up the interests of the rich and the powerful as they seek to
constantly accumulate.
Yet whilst algorithms underpin a huge amount of our daily infrastructure, such as
assessing our value for trading purposes every time we open a browser, they do so by
stealth. We do notice some, especially when they carry the intersectional inheritances of
their designers,3 and when we see adverts sent to us as a result of our browsing history.
Yet we rarely pay attention to the way algorithms turn us into trade as our data persona points are de- and re-aggregated, put up for auction, sold, remaindered, and re-aggregated again. Trading one’s information happens in less than a millisecond (less than the time it takes to blink). There may be 100,000 bids for your data per second (especially if you are Lara). That means data are traded far faster than we can perceive, let alone track.
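To make this concrete, here is a minimal, illustrative sketch of the kind of real-time-bidding auction that prices a browser profile. It is a toy under stated assumptions: the bidder names, profile fields, and prices are all hypothetical, and this is not the software or data from our project.

```python
# Minimal sketch of a real-time-bidding (RTB) auction over a browser
# profile. All names, fields, and prices are hypothetical illustrations.
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class Bid:
    bidder: str    # who wants to buy this ad impression
    amount: float  # price offered for the impression

def run_auction(profile: dict, bidders: list) -> Optional[Bid]:
    """Ask every bidder to price the profile; the highest bid wins.
    Real exchanges resolve thousands of such auctions per second,
    each in a few milliseconds."""
    bids = [bid for bid in (b(profile) for b in bidders) if bid]
    return max(bids, key=lambda bid: bid.amount) if bids else None

# Two hypothetical bidders: one chases affluence, one targets "lack".
def luxury_brand(profile: dict) -> Optional[Bid]:
    if profile.get("income_band") == "high":
        return Bid("luxury_brand", 4.50)
    return None

def debt_agency(profile: dict) -> Optional[Bid]:
    if profile.get("income_band") == "low":
        return Bid("debt_agency", 0.05)
    return None

for name, profile in [
    ("Lara", {"income_band": "high", "occupation": "journalist"}),
    ("Belle", {"income_band": "low", "benefits": True}),
]:
    start = time.perf_counter()
    winner = run_auction(profile, [luxury_brand, debt_agency])
    ms = (time.perf_counter() - start) * 1000
    print(f"{name}: sold to {winner.bidder if winner else 'no one'} "
          f"for {winner.amount if winner else 0:.2f} in {ms:.3f} ms")
```

On Lara-like profiles the toy exchange attracts a high bid; on Belle-like profiles only the debt bidder responds, echoing the asymmetry described above.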

CONTACT Beverley Skeggs b.skeggs@lancaster.ac.uk Lancaster University, Lancaster, UK



Computational algorithms don’t just enable sped-up trade; they enable repeated trade. They are not based on a single form of exchange, such as the wage given for labour in a contract of employment and exploitation. Rather, they enable different combinations of our data points, ones we have freely given through our browsers, to make us a potential source of value. Unlike labour, there is no compulsion to give away information about ourselves; in fact, we often think we are just talking to friends. This is radically different from entering a labour contract where, in most places in the world, if we don’t work, we don’t eat. And we don’t just give away our information once; we cumulatively build up our information over time, through our friend networks, consumer patterns, TV viewing, step counting, indeed anything we do with a smartphone, laptop, or computer.4
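As a rough illustration of this recombination point (a hypothetical sketch, not the project’s analysis), even a handful of freely given data points can be packaged into dozens of distinct, separately sellable audience segments:

```python
# Hypothetical sketch: a few data points recombine into many sellable
# audience segments, so the same data can be traded again and again.
from itertools import combinations

data_points = ["friend_network", "consumer_pattern", "tv_viewing",
               "step_count", "location"]

# Every non-empty combination is a potential audience segment.
segments = [frozenset(c)
            for r in range(1, len(data_points) + 1)
            for c in combinations(data_points, r)]

print(len(segments))  # 31 distinct segments from only 5 data points
```

The count grows exponentially with each additional data point, which is one reason cumulative disclosure is so valuable to traders.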
Algorithms are not just put to use by advertisers. The next most frequent use and trade of one’s data points is by surveillance agencies of some kind. These range from national-level agencies such as the CIA and GCHQ; to state surveillance; to Palantir5; to global security company snooping, such as the recent breach of human rights activists’ WhatsApp accounts by the Israeli security company NSO; to local government checks on social media; to the use of algorithms in predictive policing6; to employment agencies. The separation between commodification and surveillance is porous. Many of the surveillance sites are private companies that sell your data between themselves, or, in the case of the Indian government, sell all their citizens’ data to other nations and to private security companies,7 actually to anyone.
Trackers, traders, and surveillance agencies insert themselves through apps, browsers, any computer use, and we let them in: did you click the “agree to use” button? Over 100 million Alexa devices had been sold by January 2019, devices powered by machine-learning algorithms that listen and learn our every move. They are “pre-emptive.” They insert themselves into time that was once protected from exploitation, time that is not labour time. They make the boundaries between labour, exploitation, and life porous and ambiguous, just as we do every time we use social media. They build on our history and behaviour, trying to predict our every move. It is the prediction that entraps us in our structural classifications of gender, race, and class, making different intersectional “hers.”
Remember Belle and Lara? Belle was targeted for her lack of value. She was sold for
financialization. Had Belle taken out the debt she was offered through advertising, she
would have been further locked into her impoverished position. We know of the massive
increases in female debt, that working-class women in the UK bear the burden of debt,8
and that Black women in the US bear an even larger debt burden (Louise Seamster 2019).
Belle was not just monetized, with her data points sold to debt providers and agencies,
but also commodified for financialization, i.e., turned into a product to be sold again and
again, to accrue profit for debt financing agencies through astronomical interest charges.
The algorithms are most definitely not in Belle’s interests; the effects of their stealthy operations are organized against her and would make her life much harder. The algorithms also made lack a key data signal to be aggregated into future tracking and trading of Belle, repeatedly concretizing her “lack.” These value-identifying algorithms (of which there are millions) are computational compound classifications of class, race, and gender (amongst others), constantly stratifying by stealth, operating as machine-learning abstractions of inequality and injustice.
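The concretizing of “lack” can be sketched as a feedback loop, again as a hypothetical illustration rather than the systems we observed: each time a low-scored profile is sold into a debt segment, the decision is written back as a new signal, dragging the score lower and making the next debt targeting more likely.

```python
# Hypothetical sketch of the feedback loop in which "lack" becomes a
# self-reinforcing data signal. Field names and thresholds are invented.
def buying_power(profile: dict) -> float:
    """Crude score built from accumulated signals: every past
    debt-targeting event drags the estimate further down."""
    penalty = 0.1 * profile["signals"].count("debt_targeted")
    return max(0.0, profile["income_score"] - penalty)

def target_and_record(profile: dict) -> str:
    """Choose an advertiser segment, then feed that choice back into
    the profile as a signal for all future auctions."""
    if buying_power(profile) < 0.4:
        profile["signals"].append("debt_targeted")  # lack becomes data
        return "debt_financing_segment"
    return "premium_segment"

belle = {"income_score": 0.35, "signals": []}
for auction_round in range(3):
    segment = target_and_record(belle)
    print(auction_round, segment, round(buying_power(belle), 2))
# Each round lowers the score, locking the profile into debt targeting.
```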
To conclude, algorithms produce very different “hers.” At the extremes, many women are trapped in debt bondage whilst others are free to play, experiment, and even capitalize upon their own networks. By making some “hers” visible to debt, they reproduce centuries-old systems of valuation and classification. “Hers” are being algorithmically produced through
different interests, mainly by those who pay to develop, use, and capitalize upon people for
profit, for surveillance, often both. Basic non-computational algorithms were used for accounting purposes within slave financing. Computational power extends and amplifies their function and potential. What is consistent is that although computational algorithms may be (relatively) new, they build on histories: histories of colonialism, capitalism, misogyny (Beverley Skeggs 2019). But in the present they are much more difficult to challenge: we can’t see them, they operate faster than we can perceive, through stealth, and we welcome them in; we even take pleasure in their effects. During our tracking project, we tried many tactics to confuse, obfuscate, torment, and game the algorithms that we saw. But we came to the conclusion that the only solution is “just say no!”, which, in a world where convenience has become an imperative, is difficult. But convenience for what, whom, and why? To be tracked and traded more effectively? To free up oneself for more inequality and injustice?

Notes
1. With Dr Simon Yuill, https://www.thesociologicalreview.com/value-and-values-interaction-infrastructures-and-accumulation/ and, for a full description, see the public lecture “You are being tracked, evaluated and sold: Digital Inequalities”, http://www.lse.ac.uk/International-Inequalities/Videos-Podcasts/You-Are-Being-Tracked
2. See the hilarious novel “Red Plenty” (Francis Spufford 2010) and the review by Nick Dyer-Witheford (2013).
3. See Sara Harrison (2019).
4. https://blogs.lse.ac.uk/equityDiversityInclusion/2017/09/wake-up-algorithms-are-trawling-your-phone-while-you-sleep/
5. See https://www.palantir.com/about/. Palantir was established by Peter Thiel, Donald Trump’s tech advisor, an early Facebook investor, co-founder of PayPal (to de-regulate finance across national borders), and a well-known libertarian who once argued in the Stanford Review that women should not be given the vote. As a key algo designer, one could argue that his interests do not align with A4H (algorithms for “hers”).
6. On predictive policing, see Mark Andrejevic (2013).
7. See Ranjan Balakumaran at https://realmedia.press/function-creep-fintech-india-aadhar-id-system-part-1-trading-faces/
8. http://www.lse.ac.uk/International-Inequalities/Videos-Podcasts/You-Are-Being-Tracked and Women’s Budget Group Report (2019), Household Debt and Gender, https://wbg.org.uk/wp-content/uploads/2019/10/DEBT-2019.pdf

Disclosure statement
No potential conflict of interest was reported by the author.

References
Andrejevic, Mark. 2013. Infoglut: How Too Much Information Is Changing the Way We Think and Know.
London: Routledge.
Dyer-Witheford, Nick. 2013. “Red Plenty Platforms.” Culture Machine 13. https://culturemachine.net/wp-content/uploads/2019/05/511-1153-1-PB.pdf.
Harrison, Sara. 2019. “Five Years of Tech Diversity Reports and Little Progress.” Wired. https://www.wired.com/story/five-years-tech-diversity-reports-little-progress/.
Seamster, Louise. 2019. “Black Debt, White Debt.” Contexts 18 (1): 30–35.
Skeggs, Beverley. 2019. “The Forces that Shape Us: The Entangled Vine of Gender, Race and Class.”
The Sociological Review 67 (1): 28–35. doi:10.1177/0038026118821334.
Skeggs, Beverley, and Simon Yuill. 2016. “The Methodology of a Multi-modal Project Examining How Facebook Infrastructures Social Relations.” Information, Communication & Society 19 (10): 1356–1372. doi:10.1080/1369118X.2015.1091026.
Spufford, Francis. 2010. Red Plenty. London: Faber and Faber.
Women’s Budget Group. 2019. Household Debt and Gender. https://wbg.org.uk/wp-content/uploads/2019/10/DEBT-2019.pdf.
