Rev. Twinamatsiko Arthur
1. Fairness and Bias in AI Systems
Research Problem
AI systems can perpetuate and amplify biases present in the data they are trained on, leading
to unfair or discriminatory outcomes. Ensuring that these systems are fair, transparent, and
accountable is crucial.
Research Methods
Algorithmic Auditing
Conduct systematic audits of algorithms to identify and quantify biases. This involves using
statistical methods and fairness metrics to evaluate how different groups are affected by the
decisions.
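An audit of this kind can be sketched in a few lines. The example below computes one common fairness metric, the demographic parity difference (the gap in positive-decision rates between two groups); the decisions, group labels, and group names are illustrative assumptions, not real audit data.

```python
def selection_rate(preds):
    """Fraction of positive (e.g. approved) decisions."""
    return sum(preds) / len(preds)

def demographic_parity_difference(preds, groups):
    """Gap in positive-decision rates between groups "A" and "B"."""
    rate_a = selection_rate([p for p, g in zip(preds, groups) if g == "A"])
    rate_b = selection_rate([p for p, g in zip(preds, groups) if g == "B"])
    return rate_a - rate_b

# Hypothetical audit data: model decisions (1 = approved) per applicant group.
preds  = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity difference: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

A gap near zero suggests parity on this metric; a large gap, as here, flags the model for closer inspection. Real audits typically evaluate several such metrics, since they can conflict.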
Fairness-Aware Algorithm Design
Create and test new algorithms that incorporate fairness constraints during the training process.
Methods like adversarial debiasing, fairness-aware optimization, and re-weighting techniques
can be explored.
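Re-weighting is the simplest of these to illustrate. The sketch below implements the reweighing scheme of Kamiran and Calders, which assigns each training instance the weight P(group)·P(label) / P(group, label), so that under the weighted distribution group membership and label are independent; the toy groups and labels are assumptions for illustration.

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Instance weights that decorrelate group membership from the label
    (the reweighing technique of Kamiran & Calders) -- a sketch on toy data."""
    n = len(labels)
    count_group = Counter(groups)
    count_label = Counter(labels)
    count_joint = Counter(zip(groups, labels))
    return [
        (count_group[g] / n) * (count_label[y] / n) / (count_joint[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

groups = ["A", "A", "A", "B", "B", "B"]
labels = [1, 1, 0, 0, 0, 1]
weights = reweighing_weights(groups, labels)
# Under-represented (group, label) pairs such as (A, 0) are up-weighted (1.5),
# over-represented pairs such as (A, 1) are down-weighted (0.75).
```

The resulting weights can be passed to any learner that accepts per-sample weights, which is what makes re-weighting attractive as a model-agnostic pre-processing step.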
Participatory Design
Involve diverse stakeholders in the design and development process of the systems. This can
include workshops, focus groups, and co-design sessions to ensure the perspectives and needs of
various communities are considered.
2. Explainability and Interpretability of Models
Research Problem
Many models, especially deep learning models, operate as "black boxes," making it difficult to
understand how they make decisions. This lack of transparency hinders trust and limits the use of
AI in critical applications like healthcare and finance.
Research Methods
Use techniques like LIME (Local Interpretable Model-agnostic Explanations) and SHAP
(SHapley Additive exPlanations) to provide explanations for model predictions, regardless of the
underlying model.
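The idea behind SHAP can be shown from first principles. The sketch below computes exact Shapley values for a tiny model by averaging each feature's marginal contribution over every feature ordering; this is the quantity SHAP approximates efficiently for large models. The model, input, and all-zeros baseline are illustrative assumptions.

```python
from itertools import permutations

def shapley_values(f, x, baseline):
    """Exact Shapley values: average each feature's marginal contribution
    to f over all orderings in which features are switched on."""
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        current = list(baseline)
        prev = f(current)
        for i in order:
            current[i] = x[i]       # reveal feature i's true value
            new = f(current)
            phi[i] += new - prev    # marginal contribution of feature i
            prev = new
    return [p / len(perms) for p in phi]

# Hypothetical model with a feature interaction: f(x) = x0 * x1 + x2
f = lambda x: x[0] * x[1] + x[2]
phi = shapley_values(f, x=[2.0, 3.0, 1.0], baseline=[0.0, 0.0, 0.0])
# Efficiency property: the attributions sum to f(x) - f(baseline) = 7.0,
# and the interaction term is split evenly between x0 and x1 (3.0 each).
```

The brute-force loop is exponential in the number of features, which is exactly why practical SHAP implementations rely on sampling or model-specific shortcuts.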
Focus on developing inherently interpretable models, such as decision trees, rule-based systems,
and generalized additive models (GAMs), which are simpler to understand and explain.
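A rule-based system makes the contrast with black-box models concrete: every decision traces to a single human-readable rule. The rules, thresholds, and loan-approval framing below are illustrative assumptions, not drawn from any real credit model.

```python
def approve_loan(income, debt_ratio, late_payments):
    """Inherently interpretable classifier: returns the decision together
    with the exact rule that produced it."""
    if late_payments > 2:
        return False, "rule 1: more than 2 late payments"
    if debt_ratio > 0.4:
        return False, "rule 2: debt ratio above 40%"
    if income >= 30000:
        return True, "rule 3: sufficient income"
    return False, "default rule: no approval condition met"

decision, reason = approve_loan(income=50000, debt_ratio=0.2, late_payments=0)
print(decision, "-", reason)  # True - rule 3: sufficient income
```

The trade-off, of course, is expressiveness: such models are easy to explain precisely because they are constrained, which is why evaluating when the accuracy cost is acceptable is itself a research question.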
Conduct case studies in real-world settings to evaluate the effectiveness and usability of
explainability techniques. User studies can help assess how different stakeholders, such as
domain experts and end-users, understand and trust the explanations provided.
3. Robustness and Security of AI Systems
Research Problem
AI systems are vulnerable to adversarial attacks, where small perturbations to input data can lead
to incorrect and potentially harmful outputs. Ensuring the robustness and security of AI systems
is essential, especially in high-stakes applications.
Research Methods
Develop and apply adversarial testing techniques to simulate attacks on AI systems. Investigate
defence mechanisms such as adversarial training, robust optimization, and anomaly detection to
enhance system security.
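The canonical example of such an attack is the Fast Gradient Sign Method (FGSM), sketched below for a logistic-regression classifier: the input is nudged by eps in the sign of the loss gradient, the direction that most increases the loss. The weights, input, and eps value are illustrative assumptions.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(w, b, x, y, eps):
    """FGSM attack on a logistic-regression model: perturb each input
    coordinate by eps in the direction that increases the logistic loss."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    grad_coeff = sigmoid(z) - y  # d(loss)/dz for the logistic loss
    sign = lambda v: (v > 0) - (v < 0)
    # d(loss)/dx_i = grad_coeff * w_i; step eps in its sign
    return [xi + eps * sign(grad_coeff * wi) for wi, xi in zip(w, x)]

w, b = [2.0, -1.0], 0.0
x, y = [0.3, 0.2], 1               # original score 0.4 > 0: classified as 1
x_adv = fgsm(w, b, x, y, eps=0.5)
# Attacked score: 2*(-0.2) - 1*(0.7) = -1.1, so the prediction flips to 0
```

Adversarial training, one of the defences mentioned above, simply adds such perturbed inputs (with their correct labels) back into the training set.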
Use formal methods and mathematical techniques to verify the robustness and security properties
of AI models. This involves proving that models meet certain safety and performance criteria
under specified conditions.
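For linear models such a proof is simple enough to state in code. Under any L-infinity perturbation of size eps, the score of a linear classifier moves by at most eps times the L1 norm of the weights, so a prediction margin larger than that bound is certifiably robust; the weights, inputs, and eps below are illustrative assumptions.

```python
def certified_robust(w, b, x, eps):
    """Formal robustness check for a linear classifier: under any L-inf
    perturbation of size eps, the score shifts by at most eps * ||w||_1,
    so a larger margin proves the prediction cannot flip."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    worst_case_shift = eps * sum(abs(wi) for wi in w)
    return abs(score) > worst_case_shift

w, b = [2.0, -1.0], 0.0
print(certified_robust(w, b, x=[1.0, 0.0], eps=0.5))  # margin 2.0 > 1.5 -> True
print(certified_robust(w, b, x=[0.3, 0.2], eps=0.5))  # margin 0.4 < 1.5 -> False
```

Verification methods for deep networks, such as interval bound propagation, generalize this idea layer by layer, at the cost of looser bounds.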
Establish benchmarks and organize competitions to evaluate and compare the robustness of
different AI systems. This can foster the development of more resilient models by promoting
transparency and community-driven improvement.
Three main areas commonly require urgent scientific investigation:
Emerging technologies and their societal impacts. The rapid advancement of technologies like
artificial intelligence, biotechnology, and renewable energy often outpaces our understanding of
their long-term implications. Urgent research is needed to anticipate and mitigate potential risks
or unintended consequences.
Global health challenges. Infectious diseases, pandemics, chronic illnesses, and health
disparities continue to pose significant threats to human wellbeing worldwide. Urgent research is
required to develop effective prevention, treatment, and public health strategies.
Environmental sustainability and climate change. The growing urgency of addressing climate
change, biodiversity loss, and resource depletion requires extensive research to inform policy,
technology, and societal transformations.
Integrated assessment modelling, for example, can explore the complex interactions between
human and natural systems.
References
1. Barocas, S., Hardt, M., & Narayanan, A. (2019). Fairness and Machine Learning. Retrieved from
https://fairmlbook.org.
2. Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why Should I Trust You?" Explaining the
Predictions of Any Classifier. Proceedings of the 22nd ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining.
3. Goodfellow, I., McDaniel, P., & Papernot, N. (2018). Making Machine Learning Robust Against
Adversarial Inputs. Communications of the ACM, 61(7), 56-66.
4. Alonge, O., Rodriguez, D. C., Brandes, N., Geng, E., Reveiz, L., & Peters, D. H. (2019). How is
implementation research applied to advance health in low-income and middle-income countries?
BMJ Global Health, 4(2), e001257.
5. Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2016). Concrete
problems in AI safety. arXiv preprint arXiv:1606.06565.
6. Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
7. Creutzig, F., Agoston, P., Goldschmidt, J. C., Luderer, G., Nemet, G., & Pietzcker, R. C. (2017). The
underestimated potential of solar energy to mitigate climate change. Nature Energy, 2(9), 1-9.
8. Dignum, V. (2017). Responsible Artificial Intelligence: Designing AI for Human Values. ITU
Journal: ICT Discoveries, Special Issue No. 1, 1-8.
9. Kucharski, A. J., Russell, T. W., Diamond, C., Liu, Y., Edmunds, J., Funk, S., & Eggo, R. M. (2020).
Early dynamics of transmission and control of COVID-19: a mathematical modelling study. The
Lancet Infectious Diseases, 20(5), 553-558.
10. Mehl, G., Tunçalp, Ö., Ratanaprayul, N., Tamrat, T., Barreix, M., Lowrance, D., & Bartlett, L.
(2021). Harnessing digital health to support women's and children's health and well-being in
resource-constrained settings: a call to action. The Lancet Global Health, 9(9), e1263-e1266.
11. Metz, B., Davidson, O. R., Bosch, P. R., Dave, R., & Meyer, L. A. (Eds.). (2007). Climate Change
2007: Mitigation of Climate Change (Vol. 4). Cambridge University Press.
12. Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.
13. Sovacool, B. K., Heffron, R. J., McCauley, D., & Goldthau, A. (2016). Energy decisions reframed as
justice and ethical concerns. Nature Energy, 1(5), 1-6.