Philosophy of Science


1 Thinking may never submit itself, or may it?

What if I decided today to publish a tutorial on how to use artificial intelligence to create autonomous weapons? That knowledge would be captured on the internet, and its effects could never be reversed. A new door would be opened, giving everyone access to a powerful tool of terrorism, and once opened it could not be closed again.
As technologies are developed, doors open to new use cases. Some of these are beneficial, while others prove to be very destructive. For example, dynamite was initially intended to speed up mining, but it also paved the way towards more destructive weapons. Fundamental research in nuclear energy gave rise to nuclear weapons. Yet the responsibility for these negative side effects is rarely placed back on the academic field.
The academic field as we know it was shaped in the early 19th century, based on Wilhelm von Humboldt's theory of academic freedom. This theory holds that higher education must be free from economic, cultural, and governmental constraints, and must be free to share any knowledge. This was important in times when the Catholic Church blocked research that contradicted its explanations.
The theory still holds today: universities may freely choose what they research and publish. This approach places great trust in universities to choose ethically what is published. But in many fields, publishing means that anyone has access to the results, even though such open access may not always be necessary to achieve the positive outcomes. Often only the positive outcomes of publishing are considered; the negative outcomes are rarely seen as a reason not to publish. This is because academics' first priority is to advance their field quickly, and there is little incentive to slow down in order to control these negative outcomes.
In the past these negative outcomes almost always had to occur before regulations were put in place to control them. But as technology becomes more powerful, we are reaching a point where a single occurrence of a negative outcome may make it impossible ever to control again.
It is recognised that AI models can pose great danger when put in the wrong hands. Models that exploit the vast amount of privacy-sensitive data available have proven very effective at steering opinions. Deepfake technology uses AI to replicate someone's appearance and voice in a video almost perfectly. This technology has been made publicly available and poses a great threat to the trustworthiness of our communication.
While the risk is recognised by most machine learning experts, and some occurrences have already caused damage, regulations to control these negative outcomes have still not been made.
As the capabilities of a technology increase, the safety concerns of publicly releasing it also increase. In the field of biotechnology this has always been a prevalent topic. Recent developments have made it possible to modify the human genome for many health benefits. But even though it would allow for many positive use cases, it has been made illegal, since the negative outcomes would be uncontrollable. This is one of the few cases where regulations were actually put in place restricting what the academic field may research.
While these safety concerns have always been prevalent in biotechnology, little consideration is given to such aspects in computer science research. Computer scientists may not disrupt something as crucial as the human genome, but they can cause irreversible effects on society.
The recent rise of machine learning allows them to create far more powerful models than ever before: models that can be used for fraud, terrorism, and oppression. The prominent machine learning researcher Stuart J. Russell has also written in his book about the dangers that more powerful AI models may bring. Still, the field of computer science has always had a more open-source way of working, believing that all research should be publicly available to enable quicker innovation. This innovation-driven view is seen not only in computer science but in most scientific fields. How long will it take until this reckless chase for innovation causes a negative effect that cannot be controlled?
The responsibility to deal with the negative effects is put onto lawmakers, but only after such an event has occurred and public outcry demands it. This also requires the public to be informed enough to request the needed restrictions, an approach that has often failed to stop harmful uses of technology. As stopping these harmful uses becomes more important than ever, it may be time to consider other systems that better prevent these negative outcomes. For this we can no longer rely solely on the public being informed. Active involvement from different scientific disciplines is needed to consider the dangers of various technologies and to suggest appropriate measures to control them.
Multiple approaches can be thought of to steer the use of technology; one is to change the way information is published. Research that is known to allow destructive use cases could be shared only with those who will use it in ethical work.
For this to work, scientists need to consider ethically with whom they share their research. Along with this, there should be far more public awareness of new technologies, to promote discussion of what is ethical and what is not. Finally, the responsibility falls on lawmakers to consider laws before public outcry occurs, by cooperating closely with the academic field.