
February 2023 C4DT Focus N°5

design: Blaise Magnenat — photography: iStock

Written by Lionel Pousaz

Featuring interviews with:
Eric Sautedé, Professor at the French Center for Research in Contemporary China, Hong Kong
James Larus, Professor of Computer Science at EPFL

C4DT Focus

The digital world is evolving at high speed, and not a day goes by without the subject making headlines. With targeted interviews of international experts and a selection of the most relevant articles on the subject, the C4DT Focus offers you valuable insights into a digital topic that was recently in the news.


Center for Digital Trust


Housed at the Swiss Federal Institute of Technology
Lausanne (EPFL, www.epfl.ch), the Center for Digital Trust
(C4DT, c4dt.epfl.ch) brings together academia, industry,
not-for-profit organizations, civil society, and policy actors
to collaborate, share insights, and gain early access
to trust-building technologies, relying on state-of-the-art
research at EPFL. C4DT supports the public sector
by acting as an expert and facilitating technology transfer
in domains such as privacy protection and security,
democracy, humanitarian assistance, and critical infrastructures.

INTRODUCTION

How is China regulating big tech algorithms?


In the spring of 2022, the Cyberspace Administration of China (CAC) introduced the Algorithm Act, a law composed of 30 articles regulating the use of algorithms by tech companies. It has been praised by some as a welcome step to preserve users’ well-being, and criticized by others as government overreach that might further endanger innovation in China. How has it been implemented so far? What are Beijing’s underlying intentions? How can we implement transparency with black boxes such as deep learning-based systems? These are some of the societal, political, and technical questions addressed in this issue of C4DT Focus, with the exclusive insights of two experts.

A few months after the law was promulgated, Chinese companies such as Alibaba, Tencent, and Bytedance shared details about some of their algorithms with the government. Altogether, 30 algorithms were documented in about 500 characters on a public database. The brevity of these descriptions has led several analysts to conjecture that the authorities probably could not access the companies’ “secret sauce”. However, not all of the required information is publicly accessible, and such analyses remain speculative.

This new law expressly calls for better protection of citizens from abuses. It prohibits price discrimination, where users are charged differently for the same service depending on their online profile. It also forbids algorithm-driven strategies aimed at inducing addiction or exploiting gig workers. Another notable addition is the obligation for internet platforms to let users opt out of algorithmic recommendations.

Other articles are more consistent with Beijing’s authoritarian line. They introduce new legal duties for service providers, such as identifying “unlawful and harmful information” and marking algorithmically generated information prior to dissemination to facilitate tracing.

The Algorithm Act regulates a whole range of uses, from shopping recommendations to search filters
or personalized rankings. Western media often described Beijing’s initiative as “unprecedented”
and compared it to the EU AI Act, a proposed law that might even have served as an inspiration for
the CAC.

The Chinese Algorithm Act might be the first big move toward tighter regulation. In September 2022,
the CAC made clear its intention to set up new governance rules for algorithms in the next three years.
In a statement, it said that algorithms developed by technology firms should endorse the “core values
of socialism”. The 2022 law is likely to be just the beginning.

INTERVIEW

“The Algorithm Act pursues two aims: to protect citizens and to bring internet platforms to heel”

Eric Sautedé is an Associate Professor at the French Center for Research in Contemporary China, Hong Kong.


Eric Sautedé is an Associate Professor at the French Center for Research in Contemporary China, Hong Kong. He is also the founder of the consulting firm Chinexpert and a journalist specializing in the development of the Internet and artificial intelligence in China. He shares his insights to better understand the Algorithm Act from Beijing’s perspective.

Is the Algorithm Act really about protecting citizens from abuses, or is it about Beijing’s willingness to keep its tech giants in check?
Eric Sautedé: It is a little bit of both. Some elements truly aim at protecting citizens. In China, people use apps such as WeChat to do everything, and if your account gets suspended you literally cease to exist. WeChat is used to access a huge number of services, from entering physical stores to paying for a taxi, going to a doctor, booking a table at a restaurant, boarding a flight… Once you are blacklisted, you really run into difficulties. The Algorithm Act is an attempt at preventing these situations. It all started during the summer of 2021. At that time, several new regulations and official statements revealed the government’s willingness to take action against the abuses of tech companies. This is why the Algorithm Act includes a few articles to protect those who are most vulnerable. Of course, this regulation also aims at limiting Chinese tech companies’ ability to compete with the authorities with their own information resources. This is why many analysts consider the Algorithm Act as two separate laws: one to protect citizens, and one to bring internet platforms to heel.


This regulation has often been presented as a potential model for future initiatives in Europe or the USA, maybe because it was unprecedented. To what extent do you think it could serve as an inspiration for the Western world?
We could certainly use some of it. On Facebook, most German far-right group subscribers joined following an automated recommendation. This shows that Europe could also benefit from some level of transparency regarding algorithms. There is also the possibility of opting out of algorithmic recommendations. After all, it already exists with commercial emails. Things get less inspiring with the articles about positive information. The Chinese government wants internet platforms to promote cheerful content, with the underlying idea that it will somehow facilitate the establishment of a more harmonious society. That kind of governmental reach is clearly not in accordance with democratic values.

How does this regulation fit in the bigger picture of Beijing’s crackdown on its tech sector?
We are clearly dealing with a one-party state claiming a monopoly on all resources and information about citizens. It cannot let the private sector autonomously use such information or develop its own systems. These past few years, Beijing has cracked down on large tech companies. The Algorithm Act is part of this greater tidying up. Europe is still leading in the area of personal data and privacy, but China gives an idea of what a tougher regulation on the use of algorithms could look like.

Will the Algorithm Act impact the (fewer and fewer) Western internet companies still active on Chinese soil?
Indeed, almost all of them have left the Chinese market! To survive, those remaining have already separated their Chinese operations from the rest. This is how Apple manages to stay in China, and how it holds the first rank in market share for mobile phones. Apple does not hesitate to censor apps and regularly bends to the authorities’ wishes. Practically, I don’t think that the Algorithm Act will change much in this situation.

INTERVIEW

“Deep learning is not understandable, not even by its own creators.”

James Larus is a professor of Computer Science at EPFL.


James Larus is a professor of Computer Science at EPFL, formerly Dean of the I&C School and Director of Research at Microsoft. Recently, he was part of the group that developed the privacy-preserving protocol used by Apple and Google for their Covid exposure alert software.

We often hear that algorithms need to be transparent and even “auditable”. What do we mean exactly by that? What is an auditable algorithm?
Jim Larus: It is a really good question that does not have a good answer yet. There is a concept in machine learning called understandability. The idea is that one can look at a machine-learned system and understand why it made a particular decision. But there is a problem: to achieve understandability, you end up using less powerful machine learning techniques. It works with simpler approaches such as decision trees, where the result is produced by a series of statements like “if this parameter is greater or smaller than X, then the output is Y”, and so on. But most of today’s exciting stuff, like language processing, automated translation, and image recognition, relies on deep neural networks. With that technology, there is currently no good way to understand how and why the system makes a decision. The neural net is not understandable, not even by its own creators.
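The contrast Larus draws can be illustrated with a toy example. Below is a hand-written decision tree for a hypothetical loan-approval task (the feature names and thresholds are made up for illustration); the point is that every prediction can be traced to explicit, human-readable rules, exactly the property a deep neural network lacks:

```python
# A minimal sketch of an "understandable" model: a hand-written decision
# tree. Each branch is an explicit rule, so every output comes with a
# justification that a human (or an auditor) can read and check.

def approve_loan(income: float, debt_ratio: float) -> tuple[bool, str]:
    """Return a decision together with the rule that produced it."""
    if income < 30_000:
        return False, "rejected: income below 30,000"
    if debt_ratio > 0.4:
        return False, "rejected: debt ratio above 0.4"
    return True, "approved: income >= 30,000 and debt ratio <= 0.4"

decision, reason = approve_loan(income=45_000, debt_ratio=0.25)
print(decision, reason)  # True approved: income >= 30,000 and debt ratio <= 0.4
```

Replace this tree with a network of millions of learned weights and the prediction may improve, but the `reason` string has no equivalent: there is no rule to point to.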


You developed privacy-enabling technologies in your EPFL lab. Did the GDPR generate a stronger demand for such research?
It certainly oriented our activities. Technologies like homomorphic encryption or secure multi-party computation are tools to provide the guarantees required by the GDPR. It is much like protecting one’s home from intruders. The legislation offers a form of protection, in the sense that it is illegal to enter someone’s property without consent. But most of us still lock our homes, right? In other terms, legislation is not enough. Likewise, the GDPR provides a legal framework, but we need to develop tools to enforce it.

Are there current attempts at developing a deeper understanding of how neural networks function?
A lot of people are working on it. There are clear reasons for that. Companies would like to be able to improve and debug these systems. And from a policy point of view, there are things that you don’t want AI to do, such as taking racial factors into account.
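To give a flavor of the tools Larus mentions, here is a didactic sketch of the idea behind secure multi-party computation, using additive secret sharing. It is a toy illustration, not one of the protocols developed at EPFL, and the salary figures are invented: three parties learn the sum of their inputs without any single party revealing its own.

```python
# Toy additive secret sharing: a secret is split into random shares that
# individually look like noise, but sum to the secret modulo a large prime.
# Parties can add shares locally, so only the *sum* is ever revealed.

import random

MODULUS = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(secret: int, n_parties: int = 3) -> list[int]:
    """Split a secret into n random shares that sum to it modulo MODULUS."""
    shares = [random.randrange(MODULUS) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MODULUS)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % MODULUS

# Each participant secret-shares their salary and gives one share per party...
salaries = [52_000, 61_000, 48_000]
all_shares = [share(s) for s in salaries]

# ...each party locally adds the shares it received (one column per party)...
party_sums = [sum(col) % MODULUS for col in zip(*all_shares)]

# ...and combining the partial results reveals only the total.
print(reconstruct(party_sums))  # 161000
```

Real MPC protocols add machinery for multiplication, malicious parties, and network communication, but the privacy principle is the same: computation happens on shares, never on the raw data.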


Could we go as far as making deep learning illegal just because it is impossible to document how it really works?
If there is one place where it could more or less happen, I think it would be Europe. These past few years, several EU reports have advocated an entirely new form of AI, one that would be safe and ethical. It was even sold as a way for Europe to compete in a world market where it is a distant third, well behind the United States and China. But the goal is not connected to clear technical means of achieving it. It is mostly a top-down vision imposed on the industry.

With China’s recent Algorithm Act, pundits have often argued that Beijing, and not Brussels, has the lead when it comes to protecting citizens against AI abuses. What do you think about such interpretations?
While I am not an expert on China, I don’t see the Communist Party flipping 180 degrees to protect individual rights and privacy. In recent years, the Chinese government has mostly been trying to use AI and personal information collected by companies for its own political purposes. As far as I can tell from the West, authorities mostly want to make sure that they have a say over what tech companies do with the vast amount of information they collect. Demanding understandable algorithms, whatever it means in practice, seems like a way to exercise further control.
