Examining Religion Bias in Artificial Intelligence Based Text Generators
Student Track Abstract AIES ’21, May 19–21, 2021, Virtual Event, USA
such as jihad, terrorist, bomb blasts, and similar words. A finding that merits further investigation is that, when conducting some tests using words associated with a different religion, the negative bias against the Islamic religion still showed up even though the prompts did not contain any words related to Muslim, Islam, etc.

Future Work: Further research that we will conduct in the next couple of months should reveal whether religions that are predominant in geographical locations close to Islamic countries, or in pockets with a higher Muslim population, also show religious bias against Islam. Our research plan includes tests that will help identify the sort of religious bias that occurs due to the demographic data that feed into AI systems. As we continue this research, we would like to design algorithms and protocols to ensure that Artificial Intelligence (AI) algorithms adhere to ethical standards. The primary question is whether it is possible to de-bias them in a generic manner. The related key questions are as follows:

(a) What are the biases that exist – as close to an exhaustive examination as possible?

(b) What causes a specific bias?

(c) What is the impact of this bias on the output and on the acceptability of the output?

Besides examining NLP models closely, our work includes a focus on Computer Vision (CV) facial recognition algorithms, where ethical issues due to bias in the data and outputs have been discovered. AI algorithms are applied through machine learning models for CV applications such as object/face recognition. Broadly speaking, CV machine learning (CVML) has issues related to high misidentification error rates due to the lack of diversity in data. These biases sometimes also arise within the software itself, as poor algorithm design introduces inherent biases while framing the problem. Our research goal is to identify, quantify, and report gender, racial, ethnic, religious, and any other societally related biases in predominantly used CV and NLP machine learning models. In this regard, our research will develop a bias-reporting toolkit that incorporates methods and protocols for bias identification and potential mitigation solutions. There remains potential for extensive research in the NLP and CV areas to examine the biases that exist in the context of gender, religion, and ethnicity. For example, in word embeddings one could find a way of debiasing just religion, just race, or just gender, but realistically the techniques must be embedded within the source model or algorithm used to create the NLP application. It is unclear whether a technique applied to remove one kind of bias would harm another technique applied to remove a different bias. We propose that bias ratings be applied to AI systems to warn consumers about race, gender, religion, or other such biases. We propose benchmark tests and examine previous research done thus far on rating algorithms to suggest future work areas in creating methodologies and metrics to rate algorithms' bias levels.

In this regard, our research will focus on developing a methodology whose goal is to certify (or rate how much) an AI system makes ethical decisions, for example with a rating or a stamp of approval. Such stamps will provide in-depth auditing mechanisms to rate how many AI principles (such as transparency, trust, etc.) are implemented in the associated applications. Below are the key tasks that we have identified based on the research questions mentioned earlier:

• Study of inherent biases in state-of-the-art CV and NLP tools, and reporting of those biases.

• Classification and quantification of biases in CV and NLP through benchmarking analysis of large datasets.

• Design and development of a bias identification and reporting algorithm, and implementation of a bias-identifier web toolkit.

• Evaluation of the toolkit through real-world user studies.

Creating metrics to measure ethical standards in AI algorithms will help identify problems in algorithms and bring to light the biases inherent in algorithms when dealing with minorities and under-represented groups. Currently, some AI technologies have been pulled from use in society because of the fear of the negative impact they could have on communities. Metrics that rate the transparency or trust level of an algorithm should increase consumers' confidence in deploying these technologies while ensuring that traditionally under-represented groups are not harmed.

CCS CONCEPTS
• Computing methodologies~Artificial intelligence~Natural language processing~Information extraction • General and reference~Cross-computing tools and techniques • Social and professional topics~User characteristics~Religious orientation

KEYWORDS
NLP, religious bias, toolkit, audit, algorithm

ACM Reference format:
Deepa Muralidhar. 2021. Examining Religion Bias in AI Text Generators. In Proceedings of 2021 AAAI/ACM Conference on AI, Ethics, and Society (AIES’21), May 19–21, 2021, Virtual Event. ACM, New York, NY, USA, 2 pages. https://doi.org/10.1145/3461702.3462469
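The word-embedding debiasing idea mentioned in the text (removing just one bias axis, such as religion, from word vectors) can be sketched as follows. This is a minimal illustration in the spirit of hard-debiasing approaches, not the method this work proposes: the 4-dimensional toy vectors and the word pair chosen are assumptions for demonstration, not values from a real embedding model.

```python
# Minimal sketch of one-axis debiasing for word embeddings.
# Toy 4-d vectors below are hypothetical, for illustration only.
import numpy as np

def bias_direction(pairs, emb):
    """Estimate a bias axis as the mean difference over definitional word pairs."""
    diffs = [emb[a] - emb[b] for a, b in pairs]
    d = np.mean(diffs, axis=0)
    return d / np.linalg.norm(d)

def debias(vec, direction):
    """Remove the component of `vec` lying along the bias direction."""
    return vec - np.dot(vec, direction) * direction

# Hypothetical toy embeddings (not from a trained model).
emb = {
    "church": np.array([1.0, 0.2, 0.0, 0.1]),
    "mosque": np.array([-1.0, 0.2, 0.0, 0.1]),
    "engineer": np.array([0.4, 0.5, 0.3, 0.2]),  # should be religion-neutral
}

d = bias_direction([("church", "mosque")], emb)
clean = debias(emb["engineer"], d)
# After debiasing, the neutral word has zero projection on the religion axis.
print(round(float(np.dot(clean, d)), 6))  # prints 0.0
```

As the text notes, projecting out a single axis like this handles only one bias at a time; whether stacking such projections for religion, race, and gender interferes with each other is exactly the open question raised above.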
274
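One simple form the proposed bias metrics could take is a group-comparison score: fill a prompt template with different religion terms, score each generated continuation, and report the gap between group means. The sketch below is illustrative only; `fake_generate` and `fake_score` are hypothetical stand-ins for a real text generator and sentiment model, and the max-minus-min gap is one possible scoring choice, not a metric defined in this work.

```python
# Sketch of a group-comparison bias score for a text generator.
# `generate` and `score_sentiment` are placeholders for real models.

def bias_gap(template, groups, generate, score_sentiment, samples=3):
    """Return mean sentiment per group and the max-min gap as a bias score."""
    means = {}
    for g in groups:
        outs = [generate(template.format(group=g)) for _ in range(samples)]
        means[g] = sum(score_sentiment(o) for o in outs) / len(outs)
    gap = max(means.values()) - min(means.values())
    return means, gap

# Hypothetical stand-ins simulating a biased generator and a sentiment scorer.
def fake_generate(prompt):
    return prompt + (" caused trouble." if "Muslim" in prompt else " helped others.")

def fake_score(text):
    return -1.0 if "trouble" in text else 1.0

means, gap = bias_gap("The {group} man", ["Muslim", "Christian"],
                      fake_generate, fake_score)
print(gap)  # prints 2.0; a nonzero gap flags unequal treatment across groups
```

A toolkit of the kind described above could report such per-group means and gaps across many templates, giving auditors a concrete number to rate rather than anecdotal examples.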