Question 3

Research:

In this age of technological advancement, manpower is increasingly being replaced by machines and other automated systems. Humans, the creators of these AI and ML systems, are not yet psychologically mature enough to make them completely foolproof, and human biases and shortcomings tend to mirror themselves in these automated systems. In a recent example, an ML system deployed by U.S. law enforcement was found to be biased against people of color when deciding who was likely to be a criminal. The world has worked very hard, and endured a great deal of ugly violence, to reach a point where discriminatory attitudes are slowly receding. Deploying such ML and AI systems without proper scrutiny not only sets us back in time but could also deepen divisions among people.

Therefore, as machine learning systems advance in capability and increase in use, it becomes of utmost importance to examine the impact of these technologies on human rights. Ensuring the accountability of machine learning systems is a must. The Toronto Declaration is a step forward in this direction, calling on both governments and private technology companies to ensure that the algorithms they use respect human rights, particularly the basic principles of equality and non-discrimination.

Summarize the Toronto Declaration:

Preamble: The Preamble acknowledges the plethora of uses ML systems have, but is equally concerned about the harm they could cause, and therefore raises the question of how accountability should be fixed when things go wrong. It reaffirms the role of human rights in the world and opposes discrimination in any context. It urges states and private sector actors to promote the development and use of machine learning and related technologies where they help people exercise and enjoy their human rights, for example in healthcare and law enforcement.

Using the framework of international human rights law: This section states that the risks machine learning systems pose must be urgently examined and addressed at the governmental level and by the private sector actors who create and deploy these systems. It stresses that potential harms must be identified and addressed, and that mechanisms be put in place to hold those responsible for harms to account, if needed through binding agreements. It pushes for the participation of academic, legal, and civil society experts, and all other stakeholders, to critique and advise on the use of these technologies.

Duties of states, human rights obligations: States must mitigate and reduce the harms of discrimination from machine learning in public sector systems by identifying risks, ensuring transparency and accountability, and enforcing oversight.

Responsibilities of private sector actors, human rights due diligence: As part of fulfilling this responsibility, private sector actors need to take ongoing proactive and reactive steps to ensure that they do not cause or contribute to human rights abuses, a process called 'human rights due diligence'. There are three core steps to this process. One, identify potential discriminatory outcomes. Two, take effective action to prevent and mitigate discrimination and track responses. Three, be transparent about efforts to identify, prevent, and mitigate discrimination in machine learning systems.
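
To make the first of these steps concrete, here is a minimal, hypothetical sketch in Python of how a team might screen a system's decisions for discriminatory outcomes. The group labels, the audit data, and the 0.8 ("four-fifths rule") threshold are illustrative assumptions, not requirements of the Declaration.

# Sketch: screen a model's decisions for disparities in positive-decision
# rates across demographic groups (one possible reading of "identify
# potential discriminatory outcomes"). All names and data are hypothetical.
from collections import defaultdict

def selection_rates(decisions):
    # decisions: list of (group, approved) pairs -> positive rate per group
    totals, positives = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        positives[group] += int(approved)
    return {g: positives[g] / totals[g] for g in totals}

def flag_disparities(decisions, threshold=0.8):
    # Flag groups whose selection rate falls below `threshold` times the
    # highest group's rate (a simple disparate-impact screen).
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

# Hypothetical audit data: (demographic group, model decision)
audit_log = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
print(selection_rates(audit_log))   # group A ~0.67, group B ~0.33
print(flag_disparities(audit_log))  # group B falls below 80% of A's rate

A flagged group would not by itself prove discrimination, but it would trigger the second and third steps: acting on the finding and being transparent about it.
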
The right to an effective remedy: The Declaration mandates that effective remedies be provided to victims of discriminatory harms linked to machine learning systems used by public or private bodies, including reparation that, where appropriate, can involve compensation, sanctions against those responsible, and guarantees of non-repetition. This may be possible under existing laws and regulations, or may require developing new ones.

Second, analyze the trade-offs inherent to the declaration. In following the declaration,
what innovations or opportunities may be lost? If the declaration were discarded, what
risks would there be to citizens?

Though humanity has not been entirely successful in ensuring basic human rights for all of mankind, machine learning systems present an opportunity to do so: to look at people objectively, rather than through the various identities each of them assumes. But this comes with a trade-off. War, the pinnacle of human violence and greed, has no regard for human rights whatsoever; under the Toronto Declaration, therefore, it becomes impossible to use machine learning systems in situations of war.

If the declaration were discarded, and public and private institutions went ahead with deploying ML systems without a moral compass, the repercussions would be manifold. Machine learning systems can speed up the disposal of court cases, and courts are supposed to be objective and positively obligated to promote human rights; yet machine learning systems without the backing of the Toronto Declaration would replicate existing human biases in their verdicts. The result: complete mayhem. And this is just one of many such risks.

Third, determine your stance on the Toronto Declaration. What do you agree with? What
do you disagree with? What would you remove, what would you keep, and what would
you add?

I support the Toronto Declaration. As long as systems are developed by humans, discrimination and bias are unavoidable. Although we as a species may hold various ideals for ourselves, the reality is that all of us are selfish and biased; some are conscious of it, while others are entirely unaware of it. So ML systems should be reviewed through repeated iterations by many different minds to make them as foolproof and mishap-proof as possible.

Machine learning should draw not just on the STEM fields but also on the humanities. Bringing the humanities into machine learning helps ensure human rights in a well-understood and thought-out manner. In the end, the only way to make machine learning systems inclusive seems to be for humans, along with all the other stakeholders, to work together inclusively among themselves to foresee and mitigate the risks involved.

History suggests that humans mostly relied on cheating rather than altruism to survive and reproduce. Here is an opportunity for us to change that, by taking precautions so that such behavior is not repeated in our machines. The way humans have evolved indeed holds some lessons for machines.
https://www.theverge.com/2018/5/16/17361356/toronto-declaration-machine-learning-algorithmic-discrimination-rightscon

https://www.accessnow.org/the-toronto-declaration-protecting-the-rights-to-equality-and-non-discrimination-in-machine-learning-systems/