C4DT_Focus n°5-3
design: Blaise Magnenat — photography: iStock
C4DT Focus
The digital world is evolving at high speed, and not a day goes by without the subject making headlines. With targeted interviews of international experts and a selection of the most relevant articles on the subject, the C4DT Focus offers you valuable insights into a digital topic that was recently in the news.
FOCUS N°5
INTRODUCTION
How is China regulating big tech algorithms?
A few months after the law was promulgated, Chinese companies such as Alibaba, Tencent,
and Bytedance shared details about some of their algorithms with the government. Altogether,
30 algorithms were documented in about 500 characters on a public database. The brevity of
these descriptions has led several analysts to conjecture that authorities could probably not access
the companies’ “secret sauce”. But not all of the required information is publicly accessible, and such analyses remain speculative.
This new law expressly calls for better protection of citizens from abuses. It prohibits price
discrimination, where users are charged differently for the same service depending on their online
profile. It also forbids algorithm-driven strategies aiming at inducing addiction or exploiting gig workers.
Another notable addition is the obligation for internet platforms to let users opt out of algorithmic
recommendations.
Other articles hew more closely to Beijing’s authoritarian approach. They introduce new legal duties for
service providers, such as identifying “unlawful and harmful information” and marking algorithmically
generated information prior to dissemination to facilitate tracing.
The Algorithm Act regulates a whole range of uses, from shopping recommendations to search filters
or personalized rankings. Western media often described Beijing’s initiative as “unprecedented”
and compared it to the EU AI Act, a proposed law that might even have served as an inspiration for
the CAC.
The Chinese Algorithm Act might be the first big move toward tighter regulation. In September 2022,
the CAC made clear its intention to set up new governance rules for algorithms in the next three years.
In a statement, it said that algorithms developed by technology firms should endorse the “core values
of socialism”. The 2022 law is likely to be just the beginning.
INTERVIEW
This regulation has often been presented as a potential model for future initiatives in Europe or the USA, maybe because it was unprecedented. To what extent do you think it could serve as an inspiration for the Western world?
We could certainly use some of it. On Facebook, most subscribers to German far-right groups joined following an automated recommendation. This shows that Europe could also benefit from some level of transparency regarding algorithms. There is also the possibility of opting out of algorithmic recommendations. After all, it already exists for commercial emails. Things get less inspiring with the articles about positive information. The Chinese government wants internet platforms to promote cheerful content, with the underlying idea that it will somehow facilitate the establishment of a more harmonious society. That kind of governmental reach is clearly not in accordance with democratic values.

How does this regulation fit in the bigger picture of Beijing’s crackdown on its tech sector?
We are clearly dealing with a one-party state claiming a monopoly on all resources and information about citizens. It cannot let the private sector autonomously use such information or develop its own systems. These past few years, Beijing has cracked down on large tech companies. The Algorithm Act is part of this broader tidying up. Europe is still leading in the area of personal data and privacy, but China gives an idea of what a tougher regulation on the use of algorithms could look like.
“Deep learning is not understandable, not even by its own creators.”
James Larus is a professor of Computer Science at EPFL.
James Larus is a professor of Computer Science at EPFL, formerly Dean of the I&C School and Director of Research at Microsoft. Recently, he was part of the group that developed the privacy-preserving protocol used by Apple and Google for their Covid exposure alert software.

We often hear that algorithms need to be transparent and even “auditable”. What do we mean exactly by that? What is an auditable algorithm?
Jim Larus: It is a really good question that does not have a good answer yet. There is a concept in machine learning called understandability. The idea is that one can look at a machine-learned system and understand why it took a particular decision. But there is a problem: to achieve understandability, you end up using less powerful machine learning techniques. It works with simpler approaches such as decision trees, where the result is produced by a series of statements like “if this parameter is greater or smaller than X, then the output is Y”, and so on. But most of today’s exciting stuff, like language processing, automated translation, and image recognition, relies on deep neural networks. With that technology, there is currently no good way to understand how and why the system makes a decision. The neural net is not understandable, not even by its own creators.
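The contrast Larus draws can be made concrete with a toy sketch (ours, not his). A decision tree is just a chain of readable if/else rules, so every prediction can be traced by hand; a deep network, by contrast, buries its "rules" in millions of numeric weights. The function and thresholds below are purely hypothetical illustrations.

```python
# A minimal sketch of an "understandable" model: a hypothetical
# two-rule decision tree written as plain code. Every branch a
# prediction takes is visible and explainable to a human auditor.

def loan_decision(income: float, debt_ratio: float) -> str:
    """Hypothetical rules, for illustration only."""
    if income > 50_000:          # rule 1: "if this parameter is greater than X..."
        if debt_ratio < 0.4:     # rule 2: a second readable threshold
            return "approve"     # "...then the output is Y"
        return "review"
    return "deny"

# Each outcome can be justified by quoting the rules that fired:
print(loan_decision(income=60_000, debt_ratio=0.2))
```

A deep neural network offers no analogous trace: its output is the result of layered matrix arithmetic with no individually meaningful rule to point to, which is the gap Larus describes.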
You developed privacy-enabling technologies in your EPFL lab. Did the GDPR generate a stronger demand for such research?
It certainly oriented our activities. Technologies like homomorphic encryption or secure multi-party computation are tools to provide the guarantees required by the GDPR. It is much like protecting one’s home from intruders. The legislation offers a form of protection, in the sense that it is illegal to enter someone’s property without consent. But most of us still lock our homes, right? In other terms, legislation is not enough. Likewise, the GDPR provides a legal framework, but we need to develop tools to enforce it.

Are there current attempts at developing a deeper understanding of how neural networks function?
A lot of people are working on it. There are clear reasons for that. Companies would like to be able to improve and debug these systems. And from a policy point of view, there are things that you don’t want AI to do, such as taking racial factors into account.
Could we go as far as making deep learning illegal just because it is impossible to document how it really works?
If there is one place where it could more or less happen, I think it would be Europe. These past few years, several EU reports have advocated an entirely new form of AI, one that would be safe and ethical. It was even sold as a way for Europe to compete in a world market where it is a distant third, well behind the United States and China. But the goal is not connected to clear technical means of achieving it. It is mostly a top-down vision imposed on the industry.

With China’s recent Algorithm Act, pundits have often argued that Beijing, and not Brussels, has the lead when it comes to protecting citizens against AI abuses. What do you think about such interpretations?
While I am not an expert on China, I don’t see the Communist Party flipping 180 degrees to protect individual rights and privacy. In recent years, the Chinese government has mostly been trying to use AI and personal information collected by companies for its own political purposes. As far as I can tell from the West, authorities mostly want to make sure that they have a say over what tech companies do with the vast amount of information they collect. Demanding understandable algorithms, whatever that means in practice, seems like a way to exercise further control.