
Avaaz Response to Proposals for Initiative on Greater Transparency in Sponsored Political Content & Other Supporting Measures

At a time when we face so many pressing challenges -- the pandemic, climate change and increasing polarisation -- new technologies are developing at warp speed, and Europeans must be equipped to make informed political decisions free from interference or manipulation.

Avaaz is a global campaigning organisation with over 64 million members worldwide and 22 million in the EU. For the past three years, Avaaz has been one of the leading organisations in the EU and around the world investigating and campaigning on disinformation distributed at scale by AI on social media platforms. We have commissioned polls on its harmful effects, evaluated platforms' efforts to manage it, and identified their failures on a variety of platforms, including Facebook, YouTube and WhatsApp.

Specifically, we carried out extensive research into political advertising used by political campaigns during the 2020 US elections. Given Avaaz's experience, we welcome the opportunity to provide feedback on JUST's Initiative on Greater Transparency in Sponsored Political Content, and Other Supporting Measures ("the Proposals"), not only on specific proposals but also on the directions in which Avaaz believes the Roadmap must travel to fully address the issues it outlines.

Why new obligations on online political ad transparency are needed

Regulating advertising is nothing new. "Legal, decent and truthful" were the watchwords of previous generations of regulators in the print and broadcast advertising days. But as we will outline below, online advertising delivery systems, targeted precisely by our data profiles, require new approaches -- particularly when they are a source of intentional disinformation designed to skew the perceptions of voters.

The urgency of tackling online disinformation is now undisputed. It is a pervasive force in online
content, affecting advertising content as readily as organic content and often blurring the lines
between the two. The damage that disinformation can cause to our fundamental human rights is no
less real when disseminated through advertising. Any initiative seeking to improve the systems by which advertising content is served directly to an individual's social media account must not only
provide basic transparency about the content’s origins, but also, embed mechanisms for ensuring
the accuracy of the content shared. The Roadmap must explore what consequences should follow
when high standards are not met, and must also consider whether or not tracking and targeting
should be permitted.

However, one of the key difficulties we face in taking such steps is the complete lack of transparency on the nature, extent and mechanisms used by the platforms in targeting their advertising content at the user. This, coupled with the lack of transparency over the extent and reach of disinformation on online platforms and the efficacy of actions undertaken to combat it, means that users, civil society and indeed legislators need to make a series of assumptions about data and processes that are integral to the analysis of key issues such as the new techniques used to amplify and microtarget political advertising online.

Sound policy should be based on evidence. Online platforms that sell, host and target political advertisements must provide the raw data generated through their political advertising processes and information about how their algorithms operate to target users.

Despite the difficulties elaborated above, Avaaz's experience researching and combating disinformation and contributing to legal and policy proposals to eradicate online harms in the EU, US and Brazil can provide valuable insights into key areas outlined in the Roadmap.1 Avaaz welcomes this chance to think about the direction that the EU should follow in regulating online advertising technologies. Achieving measures to protect the integrity and resilience of political debate, including the prevention of abuse of the system through advertising -- whose only goal is to target disinformation at users -- and achieving greater transparency across all sponsored political content, would mean a huge step forward in the fight against disinformation.

This is what is at stake at this moment of pathfinding, and we have taken a wide view of the considerations we believe the Commission should keep in mind when looking at future regulation. Specifically, we will cover:

1) Section 1: The connection between online political advertising and disinformation during
election cycles
2) Section 2: How the AI used by online platforms to sell, host and target political ads amplifies
disinformation content
3) Section 3: Solutions, including:
a) Detoxing the algorithm
b) Implementing and enforcing transparent advertising policies
c) Ensuring that ad regulation efforts cover a sufficient period before and after the
election period
d) Incentivising preferred behaviors
4) Section 4: Conclusion

1 NPR, Report From Nonprofit Group Avaaz Says 'Europe Is Being Drowned In Disinformation' (24 May 2019), https://www.npr.org/2019/05/24/726784208/report-from-nonprofit-group-avaaz-says-europe-is-being-drowned-in-disinformation.

Section 1: The connection between online political advertising and disinformation during election cycles
An Avaaz investigation into Super PAC advertisements on Facebook during the US 2020 election highlighted the dangers inherent in allowing private actors, the social media platforms, to become de facto enablers and quasi-regulators of political ads. The platforms did not take down advertising containing known misinformation, and allowed it to build during a long run-up period before the actual elections. This is directly relevant to the Roadmap's consideration of the length of time for which its policies should be in place, ahead of and beyond the pending election period.

The evidence in detail

Facebook allowed the Donald Trump-supporting Super PAC America First Action to spend around $287,500 and run at least 451 ads on Facebook's platform between 19 August and 16 September 2020. The ads featured disinformation about presidential candidate Joe Biden. All of the disinformation featured had already been debunked by independent fact-checkers and reputable newspapers, and the ads collectively earned over 9 million impressions.

Avaaz conducted an analogous study of political ads run on Facebook during the 2020 Georgia Senate runoffs. This similarly showed that a total of 95 ads run between December 23 and December 29, 2020 had targeted Georgia voters with fact-checked political disinformation. This content violated company policies against misinformation and discrimination, and garnered over 5 million impressions. From this work we were able to draw the following observations:

i. Facebook violated its own advertising policies by allowing paid political ads containing
fact-checked disinformation to run on its platforms

The ads containing false and misleading content appeared to violate Facebook's own political advertising policies. In allowing paid political advertisements containing disinformation to run on its platforms, Facebook also undermined its explicit promise to "secure the integrity of the US elections, by encouraging voting, connecting people to authoritative information, and reducing the risks of post-election confusion."2

ii. Advertisers easily evaded Facebook’s own measures to address political ads containing
false or misleading claims

Even when Facebook removed political ads containing false or misleading claims, it left up other identical ads. We shared our Super PAC research with Facebook, and by the time we drafted a summary of our findings the platform had removed only 41 of the ads that we identified as containing false or misleading claims, leaving up other identical ads.
2 Facebook, New Steps to Protect the US Elections (3 Sept. 2020), https://about.fb.com/news/2020/09/additional-steps-to-protect-the-us-elections/.

iii. Facebook allowed Super PACs systematically spreading fact-checked disinformation in political ads to continue to run ads

The America First Action Super PAC, which systematically spread fact-checked disinformation in its political advertisements, was allowed to run versions of the same ad even after some of the ads were initially removed. In our Georgia-focused investigation, 48 of the 95 ads containing fact-checked political disinformation came from Super PACs, which are subject to Facebook's prohibition on false and misleading ads.

iv. Facebook allows politicians to run political ads containing disinformation and exempts
these ads from the fact-checking process

Forty-seven of the 95 ads documented in the Georgia election investigation containing fact-checked political disinformation were run by politicians -- whom Facebook exempts from fact-checking. According to Mark Zuckerberg, "We don't do this to help politicians, but because we think people should be able to see for themselves what politicians are saying. And if content is newsworthy, we also won't take it down even if it would otherwise conflict with many of our standards."

v. Super PACs, politicians and partisan groups are paying social media “influencers” to
create and post seemingly organic political content

As it is often difficult in the online environment to recognise paid-for political material and distinguish it from other political content, the Roadmap must provide proposals to address the broadest range of political advertising, including apparently organic content. Super PACs, politicians and other partisan groups have paid "influencers" and publishers on social media platforms to create and post what often deceptively appears to be organic political content. This tactic is a new form of non-transparent online targeting, documented and reported on during the US 2020 election and adopted by Michael Bloomberg during his presidential campaign, the Super PAC NextGen America, Turning Point USA, and the pro-Biden "The 99 Problems" PAC, to name a few.3

These influencers, especially those appearing as "regular people" with fewer than 10,000 followers, evoke trust and are well positioned to sway the behavior of their followers.4 As researchers at the University of Texas at Austin warn, these "influencers pose a threat to campaign transparency, accountability, and informational quality. Political influencer posts do not qualify for the stricter rules imposed by Instagram, Facebook, and Twitter on political advertising due to the fact that payment occurs off-platform. Without standardization of disclosure practices, differentiating between coordinated political campaigns and genuine grassroots political speech will continue to be difficult for the platforms."5

3 Reuters, From Facebook to TikTok, U.S. political influencers are paid for posts (Oct. 2020); Reuters, Paid social media influencers dip toes in U.S. 2020 election (Jan. 2020), https://www.reuters.com/article/us-usa-election-socialmedia-sponsored/from-facebook-to-tiktok-u-s-political-influencers-are-paid-for-posts-idUSKBN27E1T9.
4 University of Texas at Austin, Social Media Influencers and the 2020 U.S. Election: Paying 'Regular People' for Digital Campaign Communication (Oct. 2020), https://mediaengagement.org/research/social-media-influencers-and-the-2020-election/.

This kind of material currently falls outside of the Roadmap proposals and must be brought into any legislative proposals; otherwise it creates a clear and easy loophole for those seeking to sway an election.

vi. Facebook threatens free speech and faith in the democratic system when it is not transparent about why it removes advertisements

In terms of transparency, Facebook does not specify why an ad is removed, but simply notes the ad
in question “was taken down because it goes against Facebook Advertising Policies.” Those who
viewed the ad will not necessarily be aware that the ad was removed, let alone that the information
they were shown was false, and the damage will already have been done. This case study
illustrates that Facebook’s current measures are inconsistently applied, lack transparency, and do
not reduce the spread of verifiably false claims that have potentially serious effects on elections and
democracy at large.

Section 2: The amplification of disinformation content in online political advertising

i. Online platforms’ AI will amplify disinformation contained in political ads

As the Roadmap aptly notes, political advertising is one way that disinformation and other manipulated information, and divisive and polarising narratives, can be disseminated, directed and amplified, and through which interference can be achieved.

Online platforms' "curation algorithms" are responsible for deciding what users see, and in what order, and are designed to keep users glued to their screens. Our evidence detailed below shows that without an approach to algorithmic design that places higher value on trustworthy content than on known disinformation, these algorithms amplify hate and disinformation because such content generates more engagement, as lies travel six times faster than truth.6

For example, an April 2020 ​Avaaz investigation​ demonstrated that content from the top 10 websites
spreading health misinformation had 470 million estimated views on Facebook, which was almost
four times as many as equivalent content from the websites of 10 leading health institutions such as
the WHO and CDC. A separate Facebook-focused investigation revealed that between June and
September 2020 at least 50 pages and 27 public groups on the platform shared content glorifying
violence, praising a mass shooter, or spreading misinformation. Those pages and groups amassed
19.2 million followers in total and garnered over 114 million interactions.

5 Ibid.
6 PBS, False news travels 6 times faster on Twitter than truthful news (9 Mar. 2018), https://www.pbs.org/newshour/science/false-news-travels-6-times-faster-on-twitter-than-truthful-news.

Mandatory transparency and accountability measures must extend to the AI used by the platforms that increasingly dominate the online advertising space. Furthermore, platforms must be required to continually improve and invest in the AI that moderates their systems, ensuring it has adequate data sets and the predictive capacity to recognise hate speech and verified disinformation. Without such capacity, these systems will continue to target consumers with ads spreading disinformation. While some online platforms have made efforts to improve their algorithms, overall their self-guided efforts have been inadequate.

ii. Online platforms are extremely inconsistent in their application of their own policies in
different European territories

Labelling, and consequently slowing the spread of, fact-checked misinformation is not consistent between languages or territories. For example, in our 2020 report on COVID-19 disinformation in Europe, Avaaz found that Italian- and Spanish-speaking users may be at greater risk of misinformation exposure. Facebook had not issued warning labels on 68% of the Italian-language content and 70% of the Spanish-language content we examined, compared to 29% of the English-language content. We will be publishing a report in 2021 which further examines the percentages of unlabeled fact-checked misinformation on Facebook.

Applying fact-check labels to content containing misinformation is crucial in curtailing the viral spread of the misinformation and the rise of individuals or pages that repeatedly circulate it, as platforms have pledged to subject these serial misinformers to heightened scrutiny. This uneven application of existing rules, in combination with the platforms' failure to address the disinformation in the Super PAC advertising described above, indicates that formal statutory regulations and standards will be required and that a soft power approach alone cannot guarantee equal treatment of citizens' rights across Europe by the biggest global players.

iii. Surveillance advertising: online platforms collect and use personal data for ad targeting without transparency or control for users

Surveillance (or ‘behavioural’) advertising is the practice of using people’s personal data for digital
ad targeting. Platforms build comprehensive demographic and psychological profiles on each user
based on their behaviour online and offline. Advertisers use these profiles to place ads in front of
users in their desired audience via real-time bidding (RTB) in automated auctions hosted by ad tech
intermediaries. This issue goes far beyond political advertising of course, but as this Roadmap
relates to political advertising transparency we will focus on that aspect.
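To make the mechanism concrete, the following is a minimal, purely illustrative sketch in Python -- the names, fields and logic are hypothetical simplifications, not any platform's actual system -- of how an automated auction can use an inferred behavioural profile to decide which ad a user sees:

    from dataclasses import dataclass, field

    @dataclass
    class UserProfile:
        # Illustrative attributes a platform might infer from a user's behaviour.
        user_id: str
        inferred_interests: set = field(default_factory=set)
        inferred_location: str = ""

    @dataclass
    class Bid:
        advertiser: str
        price_eur: float                      # amount offered for this single impression
        target_interests: set = field(default_factory=set)
        target_locations: set = field(default_factory=set)

    def run_auction(profile, bids):
        """Return the highest bid whose targeting matches the user's inferred profile."""
        eligible = [
            b for b in bids
            if (not b.target_interests or b.target_interests & profile.inferred_interests)
            and (not b.target_locations or profile.inferred_location in b.target_locations)
        ]
        # The highest bidder wins the impression; the user never sees this selection step.
        return max(eligible, key=lambda b: b.price_eur, default=None)

    profile = UserProfile("u123", {"politics", "vaccines"}, "IT")
    bids = [Bid("advertiser_a", 0.80, {"politics"}, {"IT", "ES"}),
            Bid("advertiser_b", 0.40, {"cooking"})]
    winner = run_auction(profile, bids)
    print(winner.advertiser if winner else "no ad served")   # -> advertiser_a

The point of the sketch is simply that the selection happens inside an opaque, automated process driven by inferred profile data that the person seeing the ad never sees.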

Companies track our behaviour, profile us and determine the content we see based on what will keep us clicking, so that they can serve us more ads and extract more data. This lucrative practice is a key part of Google and Facebook's business models -- in 2020 advertising accounted for 98% of Facebook's $86 billion in revenue and 80% of Google's $182.5 billion. The problem is that, whether intended or not -- and we recognise that some platforms have taken steps to restrict the user characteristics that advertisers can use for microtargeting -- the advertising model can be used to target disinformation at users. Independent analyses show how these companies profit off of election lies and COVID-19 disinformation targeted at users, which they have allowed to circulate on their platforms. This is something the user is not consistently given the opportunity to meaningfully consent to in the way required by Europe's General Data Protection Regulation ("GDPR").

While it is possible that the lack of transparency and data privacy issues enumerated above might be addressed under existing GDPR provisions, it is clear that, on the ground, online platform users do not have sufficient clarity on the ads that are served to them or the trustworthiness of the factual claims made in them, nor do they consistently have any meaningful control to tailor the system to fit their needs. Under the status quo, the user's world view is tailored by the advertising revenue system of the platforms. This is a further example of how either new specific legislation or massive resourcing to enforce pre-existing laws is urgently required.

Section 3: The Solutions

Soft power regulatory approaches?

The Roadmap asks for input on non-statutory regulatory approaches to promote and clarify the currently applicable EU and national frameworks on the basis of recommendations and potentially professional and industrial codes and standards. For the reasons above, Avaaz does not believe that the track record of the global online social media giants, in terms of data handling or combatting disinformation, supports the idea that non-binding measures without a regulatory backstop are likely to be the fix the system so badly needs.

Our research shows that current non-binding measures, such as the EU Code of Practice and the platforms' own "community standards" that they devise and enforce themselves, are insufficient and inadequate when it comes to tackling the threat of online disinformation. Our Super PAC and Georgia elections investigations, which showed that platforms routinely failed to adhere to their own advertising policies during US elections, clearly illustrate this point. Moreover, efforts to create and build momentum around adopting and utilizing these soft law tools have been piecemeal and slow, and demonstrate the need for enforceable legislation. Finally, as noted above, there is a large deficit in the enforcement of existing GDPR rules on informed and meaningful consent to data use.

Therefore, we urge that the following solutions be implemented alongside any new obligations on
the transparency of sources of advertising and its targeting as part of the Roadmap towards greater
transparency and accountability.

1) Detoxing the algorithm

Increasing transparency in political advertising necessarily includes putting a stop to the spread of
disinformation that often appears in political ads and curbing the rise and influence of serial
misinformers. Implementing clear regulations that bind online platforms is an important step in the
fight to ​promote free and fair elections in the EU and to prevent interference in democracy​.

As Commissioner Vera Jourova stated, "New technologies should be tools for emancipation -- not for manipulation." The Vote Leave campaign, of course, offers a painful example of the latter. Since then, Facebook has increased transparency around its advertising. However, our recent research into the US elections showed that Facebook is not consistently implementing its own policies to tackle false information contained in ads targeted at key swing states. Holding platforms and creators of misinformation content in advertising accountable through enforceable EU-wide measures to ensure greater transparency, accuracy and responsible usage of data in sponsored political content is crucial to protecting our democracies and the integrity of the European project.

Adopting a three-step process to detox their algorithms will allow platforms to stop the
spread of misinformation while safeguarding the freedoms of speech and thought.

Fortunately, having created these algorithms, platforms can also “detox” them. In order to
successfully complete this process, platforms should undertake the following steps:

1) Platforms must detect and downgrade known pieces of misinformation and all content from systematic spreaders. This includes not accelerating any content that has been debunked by independent fact-checkers, as well as all content from pages, groups, or channels that systematically spread misinformation. They must do so transparently -- notifying users and actors and allowing a right of appeal against their actions.

2) Platforms must demonetize systematic spreaders by banning actors that have systematically posted fact-checked misinformation from advertising and from monetizing their content.

3) Platforms must use clear labels to inform users when they are viewing or interacting with
content from actors who repeatedly and systematically spread misinformation, and must
also provide users with links to additional information.

This three-step detox method safeguards free speech by requiring that all content remains available, and it also guarantees users due process -- the right to be notified and to appeal the platforms' decisions. It also protects freedom of thought by slowing the spread of harmful lies that change how our brains are wired.
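As a purely illustrative sketch of how the three steps could be expressed in a ranking pipeline -- the function, field names and downranking factor are all hypothetical, and this is not any platform's actual code -- assuming the platform already maintains lists of debunked items and of systematic misinformers:

    from dataclasses import dataclass

    DOWNRANK_FACTOR = 0.1   # assumed penalty; the real value would be a platform policy choice

    @dataclass
    class ContentItem:
        item_id: str
        author_id: str
        base_score: float        # engagement-driven ranking score
        debunked: bool = False   # True if debunked by independent fact-checkers

    def detox_rank(item, serial_misinformers):
        """Apply the three steps: downgrade (1), demonetize (2) and label (3)."""
        score = item.base_score
        labels = []
        from_misinformer = item.author_id in serial_misinformers
        if item.debunked or from_misinformer:
            score *= DOWNRANK_FACTOR          # step 1: downgrade, but the content stays available
            labels.append("misinformation-label-with-link-to-more-information")   # step 3
        monetizable = not from_misinformer    # step 2: demonetize systematic spreaders
        # Due process (notification and appeal) would be handled alongside these decisions.
        return score, labels, monetizable

    item = ContentItem("post1", "page42", base_score=0.9, debunked=True)
    print(detox_rank(item, serial_misinformers={"page42"}))   # (0.09, [label], False)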

2) Implementation and enforcement of transparent advertising policies

As the stewards of online advertising, online platforms must take steps to ensure complete
transparency of their policies and processes. Platforms can enhance the transparency of their
advertising operations by requiring advertisers seeking to run political ads to verify their identity and
location with proof of ID. The platforms should also provide public access to all ads the payee has
run in the preceding 12 months, including all the ads it is currently running.

It is also important that online platforms provide all users with an accessible, easily identifiable way
to access the history of all advertisements the user has come into contact with over the past 6
months. Platforms should also ensure that the user ad experience includes:

a) Advertisements that display a prominent "paid for by" label that makes it clear to users that they are seeing paid-for content and identifies who paid for the content. If the ad is paid for by a political party, the name of the party must appear in the label.

b) An easily recognisable button that directly links the viewer to information showing the name of the entity that is paying for the ad, along with the contact information of the advertiser and the targeting of the ad (a sketch of such a disclosure record follows this list).
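A minimal sketch, with hypothetical field names, of the disclosure record that points (a) and (b) imply every political ad impression would need to carry:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class AdDisclosure:
        """Illustrative per-ad record backing a 'paid for by' label and its detail button."""
        paid_for_by: str              # entity named on the label; party name if paid by a political party
        advertiser_contact: str       # contact information of the advertiser
        targeting_criteria: List[str] # how the ad was targeted, e.g. ["location: GA", "age: 18-34"]

    disclosure = AdDisclosure(
        paid_for_by="Example Party",                    # hypothetical payer
        advertiser_contact="ads@example-party.example",
        targeting_criteria=["location: GA", "interest: politics"],
    )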

Online platforms would also benefit from establishing a standardized, easy-to-use reporting system that allows users to flag content that they believe constitutes disinformation to the platform and/or an associated fact-checking organization. Platforms should immediately share with third-party independent fact-checkers any reported content receiving more than e.g. 10,000 views.
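An illustrative sketch of that escalation rule (the 10,000-view figure is the example threshold given above; the function and queue names are hypothetical):

    ESCALATION_VIEW_THRESHOLD = 10_000   # example threshold from the text ("more than e.g. 10,000 views")

    def handle_user_report(reported_content_id, view_count, fact_checker_queue, routine_queue):
        """Route a user's disinformation report; escalate widely seen content immediately."""
        if view_count > ESCALATION_VIEW_THRESHOLD:
            fact_checker_queue.append(reported_content_id)   # share with independent fact-checkers
            return "escalated_to_fact_checkers"
        routine_queue.append(reported_content_id)
        return "queued_for_routine_review"

    fact_checkers, routine = [], []
    print(handle_user_report("ad_451", 12_000, fact_checkers, routine))  # escalated_to_fact_checkers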

3) Ensuring that ad regulation efforts cover a sufficient period before and after the election period

Focusing ad regulation efforts on narrowly defined election periods will not achieve the initiative's goal of promoting free and fair elections and combatting disinformation in the EU.

While election periods may be times of particular vulnerability in which the online ecosystem is flooded with disinformation targeting susceptible voters, our research demonstrates that a considerable amount of election-related, democracy-damaging disinformation spreads outside of election periods. Our investigations have shown that "serial misinformers" steadily amass followers and expand their share of voice in the online ecosystem months ahead of election periods. By the time elections roll around, through the content acceleration possible on social media, the misinformers' posts get a disproportionate number of views and shares, on occasion rivalling the reach of mainstream media. This stretch of heightened activity then continues through the election to the post-election period, this time with the aim of sowing mistrust and dissent over the election result.

Post-election disinformation fueled the January 6, 2021 insurrection in the US.7 The individuals who stormed the US Capitol building to "stop the election from being stolen" from Donald Trump were inundated online with baseless post-election conspiracies about fraudulent voting practices.

An Avaaz investigation revealed that "Stop the Steal" (STS) groups promoting voter fraud narratives emerged and proliferated on Facebook immediately after the US 2020 election on November 3, 2020. By November 11 there were a total of 608,400 members across 111 public and private STS groups, and between November 5 and 12, 46 public groups garnered over 828,000 interactions and over 20.7 million estimated views.

7 New York Times, How Misinformation 'Superspreaders' Seed False Election Theories (23 Nov. 2020), https://www.nytimes.com/2020/11/23/technology/election-misinformation-facebook-twitter.html.

Despite Facebook's announcement on January 11 that it would remove "content containing the phrase 'stop the steal'", Avaaz identified that, as of January 13, there were at least 83 groups calling the 2020 election results into question, although they did not contain "stop the steal" in their names. These groups, which fell outside the scope of Facebook's narrow ban, had names like "stop the fraud" and "stop the rigged election," and, like the STS groups, were clearly promoting unsubstantiated electoral fraud claims.

4) Incentivising preferred behaviors

If soft law is to be used as an industry response pending a proper legislative framework, we would recommend adopting tactics such as:

a) Incentivising preferred behaviors (e.g. platforms are given a seat at the table in high
level discussions if targets are met; providing “traditional” incentives such as tax
breaks, subsidies, quotas or permits).

b) Disincentivising non-compliance with measures and targets (e.g. require enhanced reporting obligations for X number of months x Y number of targets that the platform did not meet, until it does meet these targets; a "three strikes and you're out" style rule for signatories who are non-compliant for a certain period of time or on a certain number of issues or key performance indicators).

c) Giving platforms periodic public ratings based on how many targets they have met and failed to meet.

d) Asking platforms to periodically publicly report on adherence to targets.

Section 4: Conclusion

Increasing transparency in political advertising could require that actors in the advertising sphere
invest, in some cases, substantial resources to come into compliance with new standards. While
these heightened obligations may be seen as burdensome, they are proportionate to the end that
they seek to achieve.

The evidence that we have shared in this response shows that the online advertising environment as it currently exists is capable, particularly when combined with disinformation, of manipulating citizens through algorithmic smoke-and-mirrors tactics that boost and disseminate disinformation. These practices have influenced electorates and undermined elections in the world's strongest democracies. For more information on the research on disinformation on social media undertaken by Avaaz in the last three years, please visit our Disinformation Hub.

The damage that disinformation can cause to our fundamental human rights is no less real when disseminated through advertising. We have attached as Appendix 1 a full review of the rights, freedoms and values impacted by disinformation disseminated through social media, collated for the UN Special Rapporteur's request for input into the annual thematic report on disinformation to be presented to the Human Rights Council at its 47th session in June 2021. We would argue that where disinformation is delivered through opaque targeting as political advertising, our rights to freedom of expression and freedom of thought are directly impacted.

Requiring accountability and transparency from participants in the political advertising process is a
small step when viewed in light of the rights impacted and the profound effect that adherence to
new standards could have on safeguarding democracy in the EU. Increasing transparency in
political advertising is a necessary and important step towards creating an environment in which EU
citizens are able to make informed decisions about political representation.

Do please contact us with questions or for further information:

Sarah Andrew - sarah.andrew@avaaz.org
Christine Vlasic - christine@disinfo.avaaz.org
Luca Nicotra - luca@avaaz.org

Appendix 1

The rights, freedoms and values impacted by disinformation disseminated through social media

The Right to Life


Allowing the proliferation of hate speech that incites harm to one's safety or security of person, or of misinformation that exposes individuals to significantly elevated risks to their health, arguably violates a corporate entity's duty to respect the human rights to life and health. Our analysis of hate speech in Assam revealed dangerous levels of inciteful lies and hate against Muslims in Assam, in several instances spread by state agents. We also found a massive amount of misleading or false information about COVID-19 and vaccines that could be, and in some instances has been, relied upon by individuals in ways that harm their health.

Freedom of Speech / Freedom of Expression

It is clear that laws that contain blanket bans on misinformation or untruthful speech infringe on the freedom of expression. But in certain instances, spreading misinformation can deny individuals' right to seek and receive accurate information through what is termed "censorship through noise". Disinformation has also been used to target or troll journalists and human rights defenders critical of governments or political movements. These online harassment campaigns can produce a chilling effect on the freedom of expression, association and assembly of the targeted individuals, who may refrain from publicly expressing their views and engaging in their normal activities for fear of further verbal and physical attacks.

Freedom of Thought

When we think about how information is sifted, parsed and restricted through the automated decision-making of AI, a new human rights paradigm emerges beyond consumer rights or data rights: that of the freedom of thought. Freedom of thought is an absolute right enshrined in the EU Charter of Fundamental Rights, the European Convention on Human Rights, the International Covenant on Civil and Political Rights, the Universal Declaration of Human Rights and the American Convention on Human Rights, which protects individuals from incursions on their innermost sanctum -- their forum internum -- and is the progenitor of many rights. For if we cannot guard our own thoughts, how can we exercise our right to freely express ourselves or to speak?

The rights to freedom of thought and the closely related rights to freedom of opinion and information were inscribed into human rights law following World War II, when the drafters of the initial international human rights corpus had a fresh memory of the role that large-scale propaganda played in perpetuating the horrors of Nazi Germany. What is different about new technological developments, however, is the manner in which they facilitate the deceptive amplification of propaganda and the microtargeting of users. So the risk and potential harm here is the scale and speed at which disinformation reaches us, the manner in which the platforms facilitate it, and how the disinformation is non-transparently tailored to influence each one of us individually. The EU Charter has developed the rights contained in earlier instruments to reflect the evolution of challenges and the understanding of particular rights, and the right to mental integrity included in the Charter can be viewed as an additional aspect of the right to freedom of thought in the modern context.

How are these rights engaged?

Fundamental to the platforms' business model, algorithms are taught to manipulate users' brain chemistry in order to maximize their time online. This often results in an alteration of users' worldviews and behaviors because, as we know, algorithms amplify content built on outrage, hate, and harmful material that generates more user engagement.

A clear example of the development of AI designed to alter individuals' emotional states through the delivery of information is Facebook's 2012 experiment on mood alteration through curation of news feeds.8 This is connected to its research on AI inferences about personality type through Facebook 'Likes'.9 The Cambridge Analytica scandal, with its use of behavioural microtargeting techniques to profile and target voters in a bid to influence voter behaviour, is an indication of the way this type of AI can have very serious societal consequences as well as an impact on individual rights. And the leak of Facebook documents in Australia in 2017,10 which showed Facebook was selling insights into teenagers' emotional states in real time for targeted advertising, is another indication of the way this kind of technology can impact vulnerable groups, including children, by trying to access their inner states.

Equality of treatment

Much has been said about the manner in which AI is flawed because of the inherent bias built into the algorithms, linked to the lack of diversity of participation and opportunity in the industry that designs them. Related to these are concerns about the lack of equal treatment in facilitating the inclusion of all users, and in monitoring for unequal impact on all users; this has been termed algorithmic bias or algorithmic determinism. Through Avaaz's own investigation into hate speech on Facebook against communities of poor Muslims in the northeast state of Assam in India, we learned that AI is not an equal-opportunity capability -- indeed, it actively discriminates against some of the most vulnerable populations in the world.11

8 A.D.I. Kramer, J.E. Guillory, and J.T. Hancock, Experimental evidence of massive-scale emotional contagion through social networks (2014), Proc Natl Acad Sci USA 111(24):8788-8790.
9 W. Youyou, M. Kosinski, D. Stillwell, Computer-based personality judgments are more accurate than those made by humans (2015), PNAS Vol. 112 No. 4, 1036-1040.
10 https://www.theguardian.com/technology/2017/may/01/facebook-advertising-data-insecure-teens
11 https://avaazpress.s3.amazonaws.com/FINAL-Facebook%20in%20Assam_Megaphone%20for%20hate%20-%20Compressed%20(1).pdf

How are these rights engaged?

Through our investigation, we found that machine learning is not sophisticated enough, without proactive human-led content reviews, to extract hate speech from platforms, particularly in languages that are not very widely spoken. The danger, of course, was that this was the case despite three UN letters sounding the alarm about an emerging humanitarian crisis in Assam. Translation tools did not extend to these languages. But more fundamentally, the deployment of AI tools in the domain of hate or dangerous speech rests on a faulty premise: that all users have equal access to the flagging mechanism on Facebook's platform. Automated detection can only begin to function when an adequate number of posts are flagged in the first instance, from which classifiers can be built; in simpler terms, humans need to flag content to train Facebook's AI tools to detect hate speech on their own. But the minorities most directly targeted by hate speech on Facebook often lack online access or an understanding of how to navigate Facebook's flagging tools, and no one else is reporting the hate speech for them. As a result, the predictive capacity of AI tools is not equally robust.
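To make this dependence concrete, the toy sketch below (placeholder data and hypothetical numbers, using scikit-learn only for illustration) shows why a classifier can only learn to recognise hate speech in a language if enough flagged examples exist in that language:

    # Toy illustration (hypothetical data): automated hate-speech detection depends on
    # having enough user-flagged examples per language to train a classifier at all.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    def train_detector(flagged_posts, benign_posts):
        """Train a simple bag-of-words classifier from flagged and benign posts."""
        texts = flagged_posts + benign_posts
        labels = [1] * len(flagged_posts) + [0] * len(benign_posts)
        vectorizer = CountVectorizer()
        features = vectorizer.fit_transform(texts)
        model = LogisticRegression().fit(features, labels)
        return vectorizer, model

    # A widely spoken language may yield thousands of flags to learn from ...
    major_language_flags = ["placeholder flagged post"] * 5000
    major_language_benign = ["placeholder ordinary post"] * 5000
    vec, model = train_detector(major_language_flags, major_language_benign)

    # ... while a minority language may yield almost none, so the classifier has
    # essentially nothing to generalise from for exactly the most targeted communities.
    minority_language_flags = ["placeholder flagged post"] * 3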

International corporate accountability principles require platforms to conduct human rights due diligence on all products, such as identifying their impact on vulnerable groups like women, children, linguistic, ethnic and religious minorities and others, particularly when deploying AI tools to identify hate speech, and to take steps to subsequently avoid or mitigate such harm. Ultimately, platforms need to be able to implement their policies equally for all populations, including vulnerable populations, so that hate speech can be accurately classified, identified, labelled, downgraded and removed quickly.

As the High-Level Expert Group on AI has stated, "Bias and discrimination are inherent risks of any societal or economic activity. Human decision making is not immune to mistakes and biases. However, the same bias when present in AI could have a much larger effect, affecting and discriminating many people without the social control mechanisms that govern human behaviour."

Data Rights

The AI Framework must keep up with and anticipate the rapid industry developments in the terrain of content delivery. The conceptual framework must expand beyond current data rights concepts of consent to user rights to understand, control and actively choose the degree to which they are microtargeted or surveilled through the use of their own data, as well as of data created or inferred during AI-automated decision making.

How are these rights engaged?

This data use creates repetitive patterns sending users down radicalization rabbit holes, draws users into filter bubbles and echo chambers that narrow their exposure, and promotes addictive behaviors, particularly in younger users who are more susceptible to the effects of disinformation. It thus becomes clear that the harm of the unregulated algorithm is its potential to interfere with human autonomy: our personal data is being extracted to draw hidden inferences about us, which then allows our thoughts and emotions to be manipulated.

We can see the tragic outcome of AI-driven content curation without regulation in the story of UK teenager Molly Russell. Molly was just 14 when she took her own life.12 After Molly died in 2017, her family looked into her Instagram account and found "bleak depressive material, graphic self-harm content and suicide encouraging memes". Her father believes this social media encouraged her desperate state, and described the process clearly: "Online, Molly found a world that grew in importance to her and its escalating dominance isolated her from the real world. The pushy algorithms of social media helped ensure Molly increasingly connected to her digital life while encouraging her to hide her problems from those of us around her, those who could help Molly find the professional care she needed."13

The Royal College of Psychiatrists has called on social media companies to share data with researchers to measure the mental health impacts of microtargeting, filter bubbles, and advertising on young people.14

12 https://www.bbc.co.uk/news/av/uk-46966009/instagram-helped-kill-my-daughter
13 Ian Russell, Molly Russell's father, in his foreword to the report on technology use and the health of children and young people from the Royal College of Psychiatrists (2019), https://www.rcpsych.ac.uk/docs/default-source/improving-care/better-mh-policy/college-reports/college-report-cr225.pdf
14 Ibid.
