
I am writing to express my interest in the philosophy fellowship at the Center for AI Safety, as I see it as an ideal opportunity to develop and deepen my philosophical research.

I have been a Ph.D. student in Philosophy of Science at the University of Barcelona since 2018, where I have been working on scientific understanding and explanation.
More specifically, I investigate the methods, models, and theories with which
scientists try to understand the world around us. I argue for a pragmatic approach
highlighting the plurality of ways to gain understanding, including ways that do not
require the presence of explanations. As the first result of my thesis, I have published
the paper "Descriptive Understanding and Prediction in COVID-19 Modelling" (co-
authored with my colleague Javier Suárez), which coins the concept of descriptive
understanding. Our article examines how scientists obtained such descriptive
understanding in developing the statistical epidemiological IHME COVID-19 model, a
model that is explicitly non-causal and non-explanatory. Yet, as
we claim, scientists managed to extract relevant information about the dynamics of
the pandemic from it.

In light of my current and past research, the prospect of learning from and
working with leading researchers in machine learning and philosophy in the highly
relevant field of AI safety is truly exciting for me.

For one thing, I will be a visiting scholar at the University of Pittsburgh until the end
of November, where I am working on a project on the AI system AlphaFold under the
auspices of Prof. Sandra D. Mitchell. More precisely, my paper analyses how
AlphaFold provides scientific understanding of protein structure prediction with its deep
learning capabilities. I argue that the model's sophisticated methodology and the
scientists' decision to make it transparent to a broader audience are what grant its
status as a reliable instrument. Since starting this project, my philosophical interest in
the possibilities and opportunities, as well as in the limitations and risks, of AI models
has become even more profound. As AI models become increasingly indispensable
tools in science and technology but are still relatively underexplored in philosophy,
they are particularly fruitful objects for further research.
For another, the fellowship would be a great chance to revive my long-standing
philosophical interests in ethics and political philosophy and align them with my
recent research in philosophy of science. I acquired a taste for philosophy during my
early ethics courses in high school, which motivated me to pursue a master's degree in
political philosophy and ethics. My master's thesis analyzed certain types of
environmental dilemmas and how consequentialist and deontological arguments help
solve them. I believe there is a similar need for clarifying which ethical considerations
underlie the emerging safety challenges in machine learning.

I believe the Center for AI Safety fellowship would provide me with the exciting
opportunity to demonstrate my abilities and acquire new skills in a research field that
is part of my current and future professional goals. I especially value the possibility of
receiving intensive training in many aspects of AI safety in a highly qualified,
interdisciplinary academic environment.

I am aware of the dedication and perseverance needed to achieve significant results in
the seven-month duration of the program, and I believe that, with my background, I
could contribute fruitfully to the research at your center.

I remain at your disposal for any further information, and thank you in
advance for your attention.

Kind regards,

Johannes Findl
