
Slide 1

Harmful effects on people's health and wellbeing, which can instigate racism, discrimination and
violence, and have profoundly negative effects on democratic processes.

Slide 2
- Most importantly for us: European Parliament elections
- But also United States Presidential elections, UK House of Commons elections, and elections in massive
countries such as India, Indonesia and Mexico

Slide 3
- Elections: massive interest and potential for disinformation to spread

Slide 4 >> RQ
- I'll answer this question by first looking at self-regulatory measures and then at
legislation at the EU level

Slide 5 >> DisinfoCode

Slide 6
- 9 main pillars, 44 commitments

Slide 7
- To illustrate some commitments and to give you a better idea of what platforms are doing
concretely

Slide 8
- Not all illegal content will constitute disinformation, but there might be some overlap.
- For instance, fake news that might be of a racist character

Slide 9
- Socially harmful content is a lot harder to define, and more open to platforms'
interpretation
- Not all socially harmful content is disinformation! >> if you tell a small lie on Instagram,
that doesn't fall under the DSA
- Measures: take down content, ban users posting it, report to authorities etc.

Slide 10
- New rules adopted last month, on March 11th >> ahead of the European elections
- Entry into force autumn 2025


Extra info:
Disinformation Initiatives Around the World
● In the United States, creation of the Foreign Malign Influence Center - FMIC
● In Brazil, proposed legislation similar to the DSA but with a strict focus on the
dissemination of fake news.
○ Covers illegal and "possibly illegal" content
○ Doubts about who would oversee the legislation.
● In India, IT Rules = online intermediaries have to remove any information relating to the
business of the Central Government that is identified as “fake or false or misleading” by
fact checking units of the government.

Key provisions of the Brazilian bill:
● Prohibition on creating fake social media accounts that impersonate a person or entity;
● Prohibition on the use of 'bots', i.e. automated accounts run by robots;
● Limits on the reach of widely shared messages;
● Requires companies to keep records of mass-forwarded messages for three
months;
● Requires identification of users who sponsor published content, as a way to
prevent, for example, fake ads for financial scams;
● Prohibits official accounts of government organizations or public figures
(such as politicians) from blocking the accounts of ordinary citizens;
● Creation of the Conselho de Transparência e Responsabilidade na Internet, an
autonomous oversight body to regulate and supervise providers;
● Requires social media providers to establish headquarters in Brazil;
● Imposes sanctions or penalties, such as warnings or fines, on companies that
fail to comply with the measures set out in the law.

Code of Practice on Disinformation - Commitments:

1. SCRUTINY OF AD PLACEMENTS:
● DEMONETISATION OF DISINFORMATION
● TACKLING ADVERTISING CONTAINING DISINFORMATION
● COOPERATION WITH RELEVANT PLAYERS

2. POLITICAL ADVERTISING:
● A COMMON UNDERSTANDING OF POLITICAL AND ISSUE ADVERTISING
● EFFICIENT LABELING OF POLITICAL OR ISSUE ADS
● VERIFICATION COMMITMENTS FOR POLITICAL OR ISSUE ADS
● USER-FACING TRANSPARENCY COMMITMENTS FOR POLITICAL OR ISSUE ADS
● POLITICAL OR ISSUE AD REPOSITORIES AND MINIMUM FUNCTIONALITIES
● APPLICATION PROGRAMMING INTERFACES (APIS) TO ACCESS POLITICAL OR ISSUE AD DATA
● CIVIL SOCIETY COMMITMENTS
● ONGOING COLLABORATION

3. INTEGRITY OF SERVICES
● COMMON UNDERSTANDING OF IMPERMISSIBLE MANIPULATIVE BEHAVIOUR
● TRANSPARENCY OBLIGATIONS FOR AI SYSTEMS
● COOPERATION AND TRANSPARENCY

4. EMPOWERING USERS
● ENHANCING MEDIA LITERACY
● ‘SAFE DESIGN’ OF THE ARCHITECTURE OF THE SERVICES, TRANSPARENT POLICIES,
AND ACCOUNTABILITY OF RECOMMENDER SYSTEMS
● BETTER EQUIPPING USERS TO IDENTIFY DISINFORMATION
● FUNCTIONALITY TO FLAG HARMFUL FALSE AND/OR MISLEADING INFORMATION
● TRANSPARENT APPEAL MECHANISM
● MEASURES TO CURB DISINFORMATION ON MESSAGING APPS

5. EMPOWERING THE RESEARCH COMMUNITY
● DISCLOSURE OF AND ACCESS TO SIGNATORIES’ DATA FOR RESEARCH ON
DISINFORMATION
○ Automated access to non-personal data and anonymised, aggregated or manifestly made public
data
○ Governance structure for access to data for research purposes requiring additional scrutiny
● COOPERATION WITH RESEARCHERS
● TRANSPARENCY AND DATA SHARING FROM RESEARCH ORGANIZATIONS

6. EMPOWERING THE FACT-CHECKING COMMUNITY
● COOPERATION WITH THE FACT-CHECKING COMMUNITY
● USE AND INTEGRATION OF FACT-CHECKING IN SIGNATORIES’ SERVICES
● FACT-CHECKERS’ ACCESS TO RELEVANT INFORMATION
● FACT-CHECKERS’ STANDARDS

7. TRANSPARENCY CENTRE

8. PERMANENT TASK-FORCE

9. MONITORING OF THE CODE
