
See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/361507213

Care Ethics in Engineering Education: Establishing a Future Research Agenda

Article in Journal of Computers in Education · June 2022

4 authors, including: Ashraf Alam (Indian Institute of Technology Kharagpur)

All content following this page was uploaded by Ashraf Alam on 24 June 2022.


Care Ethics in Engineering Education:
Establishing a Future Research Agenda

Jon Doyle, Taylor Babylon, and Stefan Campbell


Organization, Information & Learning Sciences, University of New Mexico, USA

The daily influence of new technologies on shaping and reshaping human lives necessitates attention to the ethical
development of the future computing workforce. To improve computer science students’ ethical decision-making, it is
important to know how they make decisions when they face ethical issues. This article contributes to the research and
practice of computer ethics education by identifying the factors that influence the ethical decision-making of computer
science students and providing implications for improving the process. Using a constructivist grounded theory approach,
the text of students' discussion postings on three ethical scenarios in computer science, together with follow-up
interviews, was analyzed. Based on the analysis, relating to real-life stories, thoughtfulness about responsibilities
that come from the technical knowledge of developers, showing care for users or others who might be affected, and
recognition of fallacies contributed to better ethical decision-making. On the other hand, falling for fallacies and
empathy for developers negatively influenced students' ethical decision-making process. Based on the findings, this
study presents a model of factors that influence the ethical decision-making process of computer science students, along
with implications for future researchers and computer ethics educators.

CCS Concepts: • Social and professional topics → Computing education; Adult education

Additional Key Words and Phrases: Ethical decision-making, computing ethics, ethics education, computer
science education

1 INTRODUCTION
Ethical decisions made by computing professionals have significant consequences for society. In the 1980s,
computing practitioners started to pay more attention to the societal implications of their practice,
and professional codes of ethics were developed [36]. Computing professionals' ethical obligations go
beyond complying with the law, which often lags behind technological advancements [43].
In recent years, the public has become more aware of the ethical issues of computing, including those related
to social media and artificial intelligence, and demands more ethically thoughtful products [30]. Thus,
preparing computing professionals who are aware of the consequences of their practice and equipped to
make ethical decisions is important. In particular, computing graduates should be prepared to handle
complex ethical dilemmas. Ethical reasoning should be a central element of computer science education
Many reports, for example [46, 55], have highlighted the need for improvement of ethical decision-making
among engineering students and computing professionals who design new technologies [26].

An earlier version of this paper was presented at the ASEE Conference at Salt Lake City in 2018.
Author's address: A. Hedayati-Mehdiabadi, Organization, Information & Learning Sciences, College of University
Libraries and Learning Sciences, University of New Mexico, Zimmerman Library 241, MSC05 3020, Albuquerque, NM
87131 USA; email: abhsfsd@unm.edu

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that
copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first
page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted.
To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request
permissions from permissions@acm.org.
© 2022 Copyright held by the owner/author(s). Publication rights licensed to ACM.
/2022/1-ART1 $15.00
ACM Trans. Comput. Educ.

In this regard, ABET requirements and the ACM code of ethics have guided the educational endeavors in
computing ethics [56].
Teaching computing ethics, however, is not an easy task. Students' and instructors' lack of background
in ethics and the limited number of ethics courses in curricula limit ethics education in technical
majors [51]. Moreover, many students find issues related to ethics irrelevant and abstract [1].
An important concern in ethics education is the need for approaches for teaching ethics that attend to
the unintentional but predictable cognitive patterns that lead to unethical behavior [7]. In contrast to
traditional ways of teaching ethics, these approaches are grounded in the findings of empirical studies on
ethical decision-making. While empirical studies on ethical decision-making are vast, the theory and the
literature on the topic are yet to be saturated [40]. For example, studies on ethical decision-making at work
are mainly conducted among managers [14, 27, 64]. As suggested by Weber and Wasieleski [64], "more
attention should be directed toward the context of dilemma, type of work, and industry membership."
Developing ethical decision-making skills in a profession can be informed by knowing how individuals in
that field make ethical decisions. This knowledge will help educators improve this process [23].
Therefore, this study aims to examine the ethical decision-making process of computing majors and the
contributing factors that influence this process. Using a grounded theory approach, an inductive model of
ethical decision-making among computing majors will be proposed. Grounded theory is a systematic and
flexible approach to qualitative data collection and analysis to construct theories. The proposed model is
inductive as it is "grounded in the data themselves" [15] rather than in preexisting theories or hypotheses.

2 BACKGROUND

2.1 Research on Computer Ethics Education


Realizing the need for developing ethical competencies of computing graduates, in the 1980s, professional
organizations of the field laid the foundation for computing ethics education [55]. As educational
programs started to offer courses to address ethics education requirements, researchers began to explore
the effectiveness of these courses and seek ways to improve them. For example, Byrne and Staehr [12]
evaluated the computer ethics component of an undergraduate course on information technology. As part
of this four-week-long component, several ethical dilemmas were discussed, and Kohlberg's theory on
ethical development was introduced to students. Using the Defining Issues Test (DIT) and an experimental
design, the researchers concluded that the course increased moral reasoning scores among participants. In
another effort, Berne [8] used intergenerational dialogue to discuss the ethical issues of developing new
technologies. Berne concluded that the experience of dialogue among engineering students and senior
citizens deepened and enhanced the experience of a regular engineering ethics classroom [8]. Loui [42]
examined the effects of an ethics course on students' identity development. According to the findings,
taking this course improved students' confidence in their moral reasoning skills and understanding of
their professional responsibility [42]. In another study, researchers found that students who completed a
course on computing ethics managed to recognize the central ethical issues and plan and act accordingly
[34].
Computing ethics educators have traditionally used the ACM code of ethics to teach computing ethics.
Evaluating a course on software engineering, Oriogun et al. [49] stated that students who participated in
their research found ethical theories more insightful in addressing ethical dilemmas than the professional
code of ethics. Using an experimental design, McNamara and colleagues [44] found that instructing
participants (consisting of software engineering students and professional software developers) to use the
ACM code of ethics as revised in 2018 [2] in responding to ethical dilemmas had no effect on ethical
decision making when compared to a control group.
In recent years, in response to an increase in the number of tech company scandals, more computer
science departments are offering stand-alone computer ethics courses or investing in ethics-across-curricula
initiatives [25]. In an analysis of 115 syllabi of ethics courses specific to technology, the authors of [25]
identified the topics of privacy, algorithms, and issues of justice as the main areas covered in
stand-alone courses on tech ethics. They suggested integrating ethical issues into technical courses such as
data science, machine learning, and artificial intelligence [25].
Some have argued that, considering the vast influence of computer scientists on today's life, traditional ethics
education "may not include practical and timely training on how to weigh the consequences of their
decisions" [56]. These trends have also influenced the research on the subject, and scholars in the field
have proposed and studied new approaches to teach computing ethics [11, 56]. For example, researchers
introduced multiple ethics interventions in an existing technical course and found "a doubling effect on
students considering ethics in technology design" [56]. In another effort, Furey and Martin [28] developed
a one-week intervention to integrate ethics into a traditional AI course. The intervention included lectures
and discussions on utilitarianism and the Trolley Problem, a discussion of ethics as part of the course final
project, and a question on the final exam. Based on the authors' descriptive analysis, their approach was
overall satisfactory. They reported that students found the activities engaging and were able to make
"reasonable attempts to consider the ethical implications of their project work" [28]. In another study, a
group of researchers introduced an ethical framework based on a survey on literature and a selected
number of course syllabi. They then conducted a pilot study in which they asked students to use the
framework to identify ethical issues within two machine-learning projects. The results suggested the
framework's effectiveness in helping students identify ethical issues [57].
As one can see, there have been several efforts to advance the field of computing ethics education
through research. An increasing number of scholars in the field have started to study different aspects of
computing ethics education, including the existing curricula across computing departments [17, 25],
integration of ethics into technical courses [33], and the new approaches to teach computing ethics [11].

2.2 Critiques of Traditional Approaches to Teaching Computing Ethics


Scholars have raised some concerns regarding the traditional approaches to teaching computing ethics.
Examples include offering stand-alone computer ethics courses instead of integrating ethics into existing
classes throughout the curricula, the use of ethical theories (mostly utilitarianism and Kantian ethics) as
the main approach to address ethical issues, and the exclusive focus on existing morally controversial issues
instead of those that might arise in the future [10, 18, 24]. Fiesler [24] argued that offering
stand-alone computing ethics courses reinforces the idea that ethics is not truly an integrated part of computer
science but a specialized task and, therefore, someone else's responsibility. Connolly argued that, despite
the prevalence of ethical theories as an approach favored by some computer science faculty, the practice is
limited because it rests on the belief that technology can consistently achieve its exact promises,
neglecting the complex relationship between technology and society [18]. Despite these
issues, it is important to mention that in recent years, some of the work by scholars in the field, such as
Burton et al. [11], Grosz et al. [33], and Prentice [52] demonstrate future-oriented, integrated, and more
comprehensive approaches to computing ethics education.
Overall, literature on computing ethics education focuses primarily on proposing new approaches or
frameworks to enhance computing ethics education. Although this emphasis is imperative in a rapidly
changing profession and environment, more studies on the effectiveness of such approaches and how they
can be improved are needed [25, 32]. Understanding how computing majors engage in the ethical decision-
making process and identifying the contributing factors can help computing ethics educators shape their
innovative approaches.

2.3 Theories and Models of Ethical Decision-Making


Research on ethical decision-making has progressed over time, leading to a stand-alone field [60]. This
section reviews some of the relevant ethical decision-making theories and models.
As one of the pioneers in studying moral development, Kohlberg adopted Piaget's cognitive
development model and proposed six sequential stages in the organization of moral judgment [4].
According to the model, individuals go through these stages in life, starting from "punishment-based
obedience" up to "principled morality based on standards of justice" as they advance in their moral
judgment capacity [4]. Despite its popularity, Kohlberg's theory has been criticized in the literature for its
exclusive attention to the cognitive process of ethical judgments as well as its placement of individuals in
certain stages of moral development regardless of the type of dilemma they face. Moreover, one of
Kohlberg's assumptions was that deontology is the core of moral development and, therefore, "it is a
matter of rights and duties as prescriptions" [41]. One of the main critics of Kohlberg's work was Carol
Gilligan, a student of his. In her 1982 book In a Different Voice, she discussed how women often emphasize
relationships and take a personal approach to ethical situations, as opposed to applying
universal principles [31].
Reviewing the literature and heavily building on Kohlberg's moral development model, Treviño
introduced a model for ethical decision-making in which, in addition to an individual's cognitive moral
development stage, other "individual and situational variables interact with the cognitive component to
determine how an individual is likely to behave in response to an ethical dilemma" [61]. The model states
that except for those who are highly developed in ethics, the combination of cognitive and behavioral
approaches results in more ethical behavior than either of them by themselves [61]. Treviño believed that
external sources such as organizations and educators play significant roles in creating environments
conducive to ethical behavior [61]. In 1991, Jones proposed the issue-contingent model of ethical decision-
making in organizations. In contrast to previous ethical decision-making models, this model attended to
the nature of moral issues. Jones criticized the assumption that individuals behave similarly in dealing
with issues with different levels of moral intensity [29]. Moral intensity "captures the extent of issue-
related moral imperative in a situation" [29] and is a multidimensional construct that consists of
characteristics of the moral issue, including the magnitude of consequences, probability of effect, and
concentration of effect, among others. In another effort, Mumford and colleagues proposed their model of
ethical decision-making using the concept of sensemaking [45], which "is the process of building an
explanation to resolve a perceived gap or conflict in knowledge" [48] when individuals are faced with
ambiguous and high-stakes events. The proposed model assumes that factors such as the perceived cause
of the situation and personal and professional goals affect an individual's initial assessment of a situation.
Individuals then recognize the nature of the problem and frame it [45]. If framed as an ethical issue,
emotions regarding the ethical problem will be invoked. At this stage, individuals seek prior experiences or
known cases based on which mental models are constructed or selected. These mental models will then be
used to predict possible outcomes of different actions.
Using grounded theory, Heyler and colleagues [38] built on Bandura's social learning theory and
proposed their model with components related to both ethical decision-making and ethical development.

In contrast to many other ethical decision-making models, Heyler's model is developed using an inductive
approach for a specific context (i.e., military leaders). Moral awareness is foundational in Heyler's model,
and emotions such as conscience, regret/guilt, being uncomfortable, and frustration were recognized as
sources that influence ethical decision-making.
Bandura raised the important concern that ethical "conduct is much more than moral reasoning" [5].
He attended to the "self-regulatory mechanisms rooted in moral standards and self-sanctions" and their
implications in translating moral reasoning to actual actions [5]. According to him, "[s]elective activation
and disengagement of personal control permit different types of conduct by persons with the same moral
standards under different circumstances" [5]. Bandura introduced eight disengagement mechanisms as
follows: (a) moral justification, (b) euphemistic labeling (i.e., decreasing the harshness of an activity through
the use of language), (c) advantageous comparison (i.e., coloring the action by making comparisons), (d)
displacement of responsibility, (e) diffusion of responsibility (i.e., feeling less responsibility due to the
decision being made by a group), (f) disregard or distortion of consequences, (g) dehumanization (i.e., not
viewing victims as human beings), and (h) attribution of blame (i.e., victims being blamed for putting
themselves in certain situations) [5]. According to Bandura, there are cognitive processes through which
one "can make the immoral inconsequential or even moral" [4].
In recent years, a new realm of ethics called behavioral ethics has emerged [53]. Behavioral ethics is
"the study of systematic and predictable ways in which individuals make ethical decisions and judge the
ethical decisions of others when these decisions are at odds with intuition and the benefits of the broader
society" [6]. Unlike traditional ethics approaches that emphasize ethical philosophy, behavioral ethics
focuses on the context. Contextual and situational factors can distort individuals' ability to consider the
ethical dimensions, a phenomenon called 'ethical fading' [7]. From this perspective, the lack of awareness
and/or recognition of "psychological processes that bias our decisions" [7] is the main reason for ethical
failures.
Overall, the literature on ethical decision-making provides diverse variables and theories to explain and
describe the process. However, as this review suggests, the process is contextual and deeply rooted in
the socio-cultural environment of a profession's practice. Therefore, the current study aims to describe the
ethical decision-making processes among computing majors.

3 METHODS
This research was conducted using the constructivist approach to grounded theory introduced by Charmaz
[15]. Charmaz [16] defined grounded theory as "a systematic method of inquiry that begins with inductive
data…and aims to construct theory" [16]. In other words, in grounded theory, we study how our
participants "explain their statements and actions and ask what analytic sense we can make of them" [15].
Grounded theory was adopted for the current study because it was the most appropriate design for
addressing this study's aim and research questions. The outcome of constructivist grounded theory is a
framework that "explains how and why persons…experience and respond to events, challenges, or
problematic situations" [19]. Moreover, the specific setting of the current study (i.e., ethical decision-
making among computing majors) has not been sufficiently examined. As stated by Creswell [20],
"Grounded theory is a good design to use when a theory is not available to understand or explain a
process. The literature may have models available, but they were developed and tested on samples and
populations other than those of interest to the qualitative researcher". The constructivist grounded theory
focuses on providing interpretive understanding rooted in contextual conditions. The nature of
constructivist grounded theory does not allow reporting inter-rater reliability scores, and therefore
providing those measures is not intended in this research. As stated by Charmaz and Thornberg [17],
"inductive qualitative research with rich first-hand data could lead to theory construction and … adhering
to canons of objectivity, validity, reliability, and replicability would inhibit theorizing" [17].
Trustworthiness, a more suitable criterion for the current research, has instead been pursued through
careful and lengthy analysis of the data, triangulation of findings from different sources, and the provision of
sufficient direct quotations from participants to allow the reader to understand the process and
background of the researcher's assertions.

3.1 Participants
Undergraduate students enrolled in a computing professional ethics course at a Midwestern
university in the United States were the target participants. The course, a 16-week required course for
students in the Computer Science department, was offered in Spring 2017 and covered topics including
philosophical ethics, logical argumentation, privacy, crime and the law, intellectual property, inequality
and social justice, professional ethics, digital speech and commerce, security and risk, data science ethics,
social media, and emerging topics. The course introduced three ethical theories: consequence-based, duty-
based, and virtue-based ethics. These frameworks were taught using PowerPoint lectures at the beginning
of the semester. Students were asked to apply these theories to recent topical case studies in their written
weekly assignments. These cases focused on topics such as privacy, social media, and unethical software
development.
One hundred sixty-four students enrolled in the course. By registration, these students were grouped
into six sections based on their chosen date and time for discussion class sessions. After the approval
of the institutional review board (IRB) and following the consenting process, 104 students agreed to be part
of this research by allowing their online discussion postings to be analyzed by the researcher. Among
them, 80 individuals showed interest in participating in interviews. A subset of these students followed up
to schedule interviews and was interviewed. Two of the six sections, consisting of 33 students, were
selected for this research, and their interview data were analyzed. These two specific groups were chosen
to include more women and individuals with a first language other than English to have a more inclusive
data set. This attention to inclusion is in line with the suggestion by Leavitte et al. [39] regarding the
recruitment of a more diverse sample in research on ethical reasoning. The final sample of 33 students
consisted of seven women and 12 students with first languages other than English. This sample included
14 sophomores, 13 juniors, and six seniors. Twenty-seven students had prior work experience, with
an overall average of around seven months for the whole sample.

3.2 Data Collection


The data collected from the sample of 33 students consisted of (1) a short demographic questionnaire
administered at the beginning of the semester, (2) the text of responses to discussion forums, and
(3) the interview transcripts.

3.2.1 Ethical scenarios and online discussions.


Three ethical scenarios were designed to conduct the study (see the Appendix). These scenarios focused
on the following issues: (1) privacy in social media (i.e., scenario 1-a, designed by the author), (2) viral
deception (i.e., scenario 1-b, adapted by the author), (3) safety in application development (i.e., scenario 2),
and (4) unethical assigned tasks (i.e., scenario 3, based on a real scenario [58]). The scenarios covered a
range of different issues in computer science and involved different degrees of intensity in terms of moral
problems.
In the first part of the first scenario, students were prompted to imagine they were part of a team of
developers of a new social media platform who needed to decide on the default option of privacy level
(lowest privacy or highest privacy). They were asked whether this decision involved an ethical or an
operational issue. Moreover, they were asked what they would have done in this situation and with what
justification. The second part of the first scenario asked students about the approach to select trending
news (i.e., using algorithms or based on the opinions of a team of experts). The prompt questions were
similar to those in the first part of the scenario.
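The default-setting decision in the first part of this scenario can be made concrete with a small, hypothetical sketch (the class, constants, and field names below are invented for illustration; the scenario itself involves no code). The ethically salient point is that whichever value ships as the default silently applies to every user who never opens the settings page:

```python
from dataclasses import dataclass

# Hypothetical privacy levels for the scenario's social media platform.
LOWEST, MEDIUM, HIGHEST = 0, 1, 2

@dataclass
class AccountSettings:
    # The default chosen here applies to every new user who never
    # changes it, which is why the scenario frames this as an ethical
    # rather than a purely operational decision.
    privacy_level: int = HIGHEST  # one possible "privacy by default" choice

# A new user's account receives the default without any action on their part.
new_user = AccountSettings()
```

Under this sketch, choosing `HIGHEST` versus `LOWEST` as the default is a one-line change in code but a very different distribution of consequences across users.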
The second scenario involved software testing. The dilemma in this scenario was whether to extend the
deadline to run appropriate tests or deliver the possibly buggy application on time to obey the manager's
order.
The third scenario had to do with an incident in the very first job of a programmer, Jim, hired at a
marketing firm. A pharmaceutical client company had asked Jim to design a website for a quiz that
would always recommend the client's drug no matter the users' responses. The website was not marked as an
advertisement. The programmer completed the task and later learned that a girl who took the drug had
died by suicide, a consequence of the drug's side effects. Students were asked whether this programmer
did something wrong and whether he was responsible for the incident.
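The technical core of this scenario can be illustrated with a minimal, hypothetical sketch (the function name and return value below are invented; the scenario describes no actual code): a recommendation routine that collects the user's answers but never inspects them.

```python
def recommend_drug(quiz_answers):
    """Hypothetical illustration of the scenario's quiz logic.

    The function accepts the user's quiz answers but never inspects
    them: every path returns the client's product, so the "quiz" is
    an advertisement disguised as personalized advice.
    """
    _ = quiz_answers  # answers are collected but deliberately unused
    return "the client's drug"
```

The point of the sketch is that the deception is structural, not a bug: any two users, with any answers, receive the identical recommendation.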
Students were asked to participate in asynchronous online discussions as part of their course
assignments during a 3-week period for each designed scenario (i.e., nine weeks total). For each scenario,
participation consisted of one initial response to the scenario questions during the first week and a
minimum of two responses to peers' postings in the second week. Students could see their peers'
submitted postings before submitting their initial post. Finally, in the third week, students were asked to
provide their final stance, reflecting upon their first stance. After an initial analysis of the discussions,
interviews were conducted.

3.2.2 Interview process.


The individual face-to-face interviews were conducted on campus during the last week of the semester in
which the course was offered. Prior to the interviews, IRB approval was obtained. The interviews were
semi-structured, and each took around 30 minutes to complete. The interview guide included questions
such as 'Please guide me through the process behind your answers to each of the scenarios in online
discussions.' and 'Could you describe the events that contributed to your decision on each of these
scenarios?' Interviews were conducted in English and recorded after the participants' consent. The
recordings were then transcribed verbatim.
Students were offered extra course credit for participating in interviews. Therefore, to ensure that the
extra credit was available to all students and not only to the two selected
subgroups, all students who expressed interest in the extra credit were interviewed (59 students). Only 19 of
these students were from the two selected subgroups and therefore included in this research. Pseudonyms
are used in this manuscript to refer to the research participants.

3.3 Data Analysis


Following the approach for grounded theory suggested by Charmaz [15], the four-stage coding process
was used (i.e., initial coding, focused coding, axial coding, and theoretical coding). Coding is the process of
attaching labels to segments of data to distill and sort the data [15]. In the first stage (i.e., initial coding),
line-by-line coding was applied to help avoid preconceived notions [15]. In the next step (i.e., focused
coding), the most significant and/or frequent codes developed in the previous step were selected for
further analysis [15].
The next step in the process was axial coding. In axial coding, the researcher reassembles the data that
have been fractured during initial coding by specifying the categories and relating the categories to
subcategories [15]. In this stage, the selected codes from the previous step were categorized and
synthesized to bring "data back together again in a coherent whole" [15]. An example is to categorize 'the
World War II story,' a code generated in the initial coding, under 'the use of stories' as a broader category.
Other stories shared by students were then categorized under this category as well.
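The grouping step described above can be sketched schematically (a hypothetical illustration; only 'the World War II story' and 'the use of stories' come from the study, while the helper function and the other code are invented):

```python
# Axial coding reassembles fractured initial codes under broader categories.
# The mapping from 'the World War II story' to 'the use of stories' comes
# from the text; the second entry is an invented placeholder.
axial_categories = {
    "the use of stories": [
        "the World War II story",
        "a story shared by another student",  # hypothetical
    ],
}

def category_of(initial_code, categories):
    """Return the broader category an initial code was grouped under, if any."""
    for category, codes in categories.items():
        if initial_code in codes:
            return category
    return None
```

The dictionary stands in for the analyst's judgment: in the actual method, assignment of a code to a category is interpretive work, not a lookup.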
The final step in coding was theoretical coding, in which possible relationships between categories
identified during the focused coding process were established [15]. Particularly, the main objective was to
identify what contributed to ethical/unethical decision-making as the phenomenon of interest. In other
words, the goal was to identify "theoretical links among categories" by examining what types of codes were
present or absent in the case of a more desirable/less desirable ethical decision [15].

4 FINDINGS
The findings section is organized as follows. First, students' responses to three ethical scenarios will be
briefly reviewed. Second, the identified factors involved in students' ethical decision-making will be
presented. Third, the biases and fallacies identified in students' responses will be discussed.

4.1 Students’ Responses to Three Ethical Scenarios


The ethical scenarios covered a range of different issues with different degrees of intensity. One of the
students, John, compared the three scenarios:
I think [the first two scenarios] are not as ethically as black and white as the last one and more
like of [an] engineering decision. [for the first scenario], everyone is in seek of more data, that
one is more of a decision of there is a potential for risk, and you might want to minimize that
potential whereas in the quality control the risk is identifiably there…and the very last one, … it
is not the risk, something bad basically may happen. [Do] you still want to do it?

Table 1 summarizes students' responses to the ethical scenarios and the most/least desirable ethical
decisions in each scenario based on the author's judgment. Here, the assumption is that while there are no
right answers to ethical dilemmas, some solutions are better (more desirable) than others. These more
desirable solutions are usually built on thoughtful analysis in social contexts and on responsible
engagement with the issue at hand. Dark and Winstead [21] argued that the best answer to many ethical
questions lies in being involved in a process of discussion, reasoned debate, and reflection rather than in
absolute 'right' and 'wrong'. Moreover, the judgment in each scenario is established by the use of the ACM
code of ethics, specifically the following imperatives listed under section 1 of this document:
An essential aim of computing professionals is to minimize negative consequences of computing
systems, including threats to health and safety. (Under Section 1.1: Contribute to society and
human well-being, [2])
… [I]t is often necessary to assess the social consequences of systems to project the likelihood of
any serious harm to others. If system features are misrepresented to users, coworkers, or
supervisors, the individual computing professional is responsible for any resulting injury. (Under
section 1.2: Avoid harm to others, [2])

It is important to mention that the judgments presented in Table 1 are interpretive in nature and
influenced by the author's expertise in both areas of computer science (i.e., educational background) and

ACM Trans. Comput. Educ.


ethics (i.e., teaching and research background). Moreover, students' responses have been judged by the
degree to which the respondents engaged in the ethical decision-making process. For example, setting
privacy at the lowest level (i.e., solution 2 for Scenario 1, Part 1 in Table 1) or arguing that there is no
difference between the two choices (i.e., solution 6 in Table 1) without proper justification was interpreted
as disengagement from ethical thinking. Therefore, these two solutions were identified as less ethically
desirable choices for addressing the scenario.

Table 1. Students' responses to the ethical scenarios

Privacy in social media (Scenario 1, Part 1)
Students' responses (solutions/decisions):
1) Setting the privacy level at the lowest as the default but informing the users (three students)
2) Setting the privacy level at the lowest (four students)
3) Setting the privacy level at the highest as the default (17 students)
4) Setting the privacy level at the medium as the default (one student)
5) Allowing the user to pick the privacy in the first use (five students)
6) Selecting either of the privacy levels (one student)
Most/least desirable solution(s): The second and sixth solutions are less ethically desirable choices.

Trending news (Scenario 1, Part 2)
Students' responses:
1) Abandoning the news (five students)
2) Using a group of experts to select the news (seven students)
3) Using algorithms to select the news (two students)
4) Using a combination of algorithms and a team of experts to select the news (18 students)
Most/least desirable solution(s): The third solution is less ethically desirable.

App development
Students' responses:
1) Signing off if the software is not safety critical (eight students)
2) Signing off as the managers want (four students)
3) Signing off but letting the client know (two students)
4) Asking for more time if needed (one student)
5) Not signing off before proper testing (14 students)
6) Not signing off if one can find another job (one student)
Most/least desirable solution(s): The second and sixth solutions are less ethically desirable. The most ethically desirable solutions are the fourth and fifth.

The assigned task to a programmer
Students' responses:
1) Jim did nothing wrong, and he is not responsible (12 students)
2) Jim did something wrong, but he is not responsible (10 students)
3) Jim did something wrong, and he is responsible (11 students)
Most/least desirable solution(s): The third decision is the most desirable.

4.2 Considerations and Influential Factors in Ethical Decision Making


I used the grounded theory process to identify factors influencing students' ethical decision-making. In
this section, each of these factors will be introduced and discussed.

4.2.1 Perceived significance of ethical issues.


Some of the students built their judgments based on the significance of the core ethical issues. For
example, some students referred to the importance of privacy as an ethical issue when responding to the
social media scenario. As another example and in response to the app development scenario, some
students built their arguments on the nature of the application and the importance of safety. For example,
Ryan stated, "If it is a game and it has a few bugs, it is not an ethical issue...If the application is a medical
app, it is an ethical issue because [the] software that controls medical equipment should not be buggy".

4.2.2 User-related Factors.


The findings suggest that the way students think about users in different situations and their assumptions
about them have important implications for ethical decision-making.


a) Showing care for users or other individuals who might be affected.
Some students used terms that showed care for users. For example, Sebastian stated, "If the company says
that we have your privacy first, it looks like they care." As another example, Anne stated full privacy
should be the default option "to protect users in case they don't know how to change it". Edwin stated:
…it is narrow-minded… to expect everyone is going to pay much attention to privacy settings as
computer science students will… On the side of protection for ethicality, you should … [think
about] people who do not really understand the factors in play, you should try to protect them as
much as possible.

Responding to the third scenario, Anne stated, "Engineers should think broader and not just focus on
the actual task or what they build." Anne believed the product should provide honest results, especially for
drugs. During the interview, Anne stated although she herself did not trust the results of such online
quizzes, she knew many people did.

b) Using their own experiences in related settings.


When Sebastian was asked about the rationale for his response to the social media scenario, he responded:
…the social media that I [am] least likely to use … is Twitter because usually, it is public…
everyone can see it. I like my Facebook behind the wall where you have to be my friend. That
probably influenced me because it is how I feel about myself, which is why I think [full] privacy
should be the default.
Eaton, similarly, stated, "I want to make my social media private only to my friends and family." Another
student, Sarah, mentioned that full privacy as the default would make her safest as a social media user. As
another example, Reese argued, "I … use social media a lot. I keep everything super private. So, that's why
I want to [have full privacy]".

c) Making minimalist assumptions about users.


In response to the privacy issue in social media, Simon stated, "it is a social media for professionals; they
are here to network so maybe having the setting as the lowest but notifying them, so you are not unfairly
revealing their information." Similarly, Fai mentioned that: "because it is a social platform and people
using [that] to share, there is no reason for privacy to be high as default." Edward responded to these
ideas:
Although we are creating a social media platform, which aims to give users opportunities to
share their information with their friends, we should only share their information with their
permission. If we set the lower privacy as the default, I am sure that we will be criticized pretty
soon. (Edward)

Similarly, Oliver recognized this tendency and stated, "You can't just assume what privacy users want."
He stated he would let the users decide at their first login.

d) Generalizing one’s way of using technology to all users.


There were examples where students expected users to approach a situation as they themselves do or
think it should be approached. As an example, Nathan stated, "as a user of social media to some extent, I
ignore most of the 'news' I see under the presumption that it is trending and could be misleading news."

4.2.3 Relating to a real-world story.


Having a real-world story in mind helped many students recognize the ethical issues and make better
decisions. Stories shared by students included both positive and negative examples from past events. In
response to the first scenario, Oliver used the story of Facebook to explain the consent process. According


to him, "Facebook conducts questionable research with user data that default to 'public' and nest their
privacy options deep within the menu, so the average user cannot find it."
Sebastian, who believed it is better to abandon the idea of having the news on social media, stated, "…
[Hearing] all of the fake news [stories], it probably influenced me. I have seen so many things that I think
not doing it is the best idea."
Emma used a story related to Facebook to argue for the need for human intervention in the trending
news scenario:
It [seems] reasonable that some sort of human filter would be necessary to ensure that articles
shown are not untruthful and to filter out insensitive content. For example, a couple of months
ago, a young girl committed suicide over Facebook live, and this became one of the trending
events on the news sidebar. In cases like this, it seems clear that people ought to agree that it is
insensitive to spread this information across the whole network.

Emma was also able to understand the central dilemma in Jim's scenario and come up with a solution
by referring to one of the successful stories:
When [B]uzzfeed has quizzes that are sponsored by a brand that is marked regardless of whether
the quiz actually returns a result telling people to buy that brand's product. I think that marking
that … the quiz was made by a company is a critical way to remind people that there is likely a
strong bias or advertising purpose in the results.

In response to the scenario on app development, Fai discussed the story of Samsung and the issue with
batteries to show how releasing a product without proper testing might be dangerous. This referred to
several battery explosion incidents involving Samsung Galaxy Note 7 devices in 2016, caused by a rushed
manufacturing process [59].
Some stories related to computing from World War II, presented in one of the course lectures,
significantly influenced students' ethical decision-making. In that lecture, the instructor described how
professionals, including programmers who worked for the Nazis, played a role in killing innocent people
(i.e., cases such as Action T4 and Dehomag).
Referring to World War II, Sue argued that the same scenario could be applied to Jim's situation as "… a
lot of people who were working for Nazis did not question what they were told."
Michael initially believed that Jim was not doing anything wrong when he responded to the scenario in
the discussions, which happened at the beginning of the semester. However, in the interview towards the
end of the semester, Michael changed this view:
The lecture was very helpful. The one on Holocaust and programmers who worked on machines
and in some way aided the Holocaust… even if it is your job, it can still be unethical... It was
something that I have never thought about … before. [I always thought] it was my job and it is
their decision whether it is ethical or not, but that is something that I started thinking about and
definitely influenced these scenarios as well.

4.2.4 Developer-related Factors.


Some of the students used arguments that were related to developers. Examination of these arguments
revealed three main themes, which are discussed in this section.

a) Recognizing the responsibility for what one creates.


Some students emphasized the responsibility of developers for what they create. For example, Laura
mentioned that: "I should do the right thing for what I am creating." Another example was Fai, who
referred to "the responsibility not to release a product which is not fully tested."


The findings showed that recognition of developers' responsibility helped students make more ethical
decisions. As Ian mentioned, "You should always think about … and should question what you're doing.
Being mindless and doing anything that's asked of you can be dangerous for many parties involved."

b) Putting oneself in the position of the developer.


When asked about their strategies in responding to the scenarios, some students revealed that they would
imagine themselves in the situation, often with some sort of empathy for the developer. For example,
Eaton said, "if I was Jim, I won't feel guilty because it is not my job to learn the drug because the drug is
not a clearly dangerous product."
The scenario on app development reminded Sebastian of his projects in high school and college and of
how hard it is when one has a strict deadline from someone above and needs to do as well as one can. As
Sebastian stated, "It is not my fault in that scenario that the app could be buggy… it is just because you
rush on it". As one can see, those who placed themselves in the developer's position and empathized with
the developer tended not to recognize their ethical responsibility.

c) The knowledge and experience that the developer possesses in a certain subject area.
Students who built their arguments on their technical knowledge and skills made better decisions in
their responses to ethical scenarios. As an example, Shin stated that:
I have the knowledge of machine learning, and I know how important people's privacy is. I
know what kind of things you can do if you ignore privacy completely… If privacy was not an
issue at all and it tends to be that way for Facebook, unfortunately … there would be so many
things that could go wrong.

Here is another example:


Since you're dealing with people's personal information, as computer scientists having the
knowledge of how vulnerable a low privacy setting would make users yet setting that to default
would bring questions as… why we would do such a thing knowing the consequences. It's our
duty to protect users. (Ian)

As an example of using experiences in a specific area, Sarah stated, "I have worked in advertising
technology, and I think [knowing] how much data social media companies have on people made me more
prone to have the full privacy option."
Oliver referred to knowledge of algorithms when responding to the question of what factors influenced
his response to the trending news scenario: "I look into the knowledge I gained … I know what the
algorithm process actually means".

4.2.5 Use of Ethical Theories.


Although students were exposed to three ethical theories (i.e., consequence-based, virtue-based, and duty-
based) and applied them to several scenarios as part of their course assignments, only a few students used
the ethical theories in their responses. One of these students was Nicholas, who explicitly used
consequence-based ethics to argue for the first scenario. Despite this, his consequence-based approach to
privacy did not help him make the desired decision. Nicholas argued: "…there is no consequence about
choosing the default level because user[s] can always change it".
The use of ethical theories did not guarantee an ethical decision. Examples include when students used
duty-based ethics to justify that Jim did nothing wrong since he was doing his job. This is important as
Oliver, in response to one of the interview questions about barriers to his future ethical decisions, stated:


… [E]thical standards can be used as different excuses to justify whatever I want to do. [I] want
to do this way, I can justify. I want to do the other way, I can still justify it. So, there is no way to
just feeding the scenario and pop up the right action. It is still complicated.

Only one student, Fai, explicitly used the term 'virtue-based ethics' when responding to the scenario on
application development:
This issue can be best analyzed by virtue-based theory. Signing on a contract when the product
is incomplete violates the honesty virtue. Thus, as a quality control officer, he should be loyal
toward the responsibility of his title and try his best to persuade the boss and delay the release
date until the product is fully tested.

4.2.6 Biases and Fallacies in Students’ Arguments.


There were several fallacies in students' responses. Each scenario triggered some of these fallacies more
than others. The identified biases and fallacies are presented in Table 2.
Table 2: The identified biases and fallacies in students' arguments

Bad faith: finding oneself incapable of acting based on one's values by disowning one's freedom under
external pressure and, therefore, rejecting one's responsibility.
Previous theory: Sartre's notion of bad faith [54]
Examples: "Here I am just an engineer…kind of what I have to do." (Anne); "He does not have the
authority in his position to evaluate the morality of the products he is asked to create." (Luke)

Moral justification: the tendency of individuals to deviate from what is ethically desirable by presenting
their decision as if it serves a moral purpose.
Previous theory: Bandura's disengagement mechanisms: cognitive restructuring of harmful conduct [5]
Examples: "… for Jim I guess trying to complete his tasks, his duties for me I say he didn't do anything
wrong." (Shim); "Jim is only an employee who just finished his job and followed what company wanted
him to do." (Fai)

Displacement of responsibility: believing that others, not oneself, are responsible for what happened.
Previous theory: Bandura's disengagement mechanisms: minimizing the role one plays in the harm they
cause [5]
Example: "… if the client incorrectly showed the same drug, however, it is the client's responsibility."
(Simon)

Distortion of responsibility
Previous theory: Bandura's disengagement mechanisms: minimizing the role one plays in the harm they
cause [5]
Examples: "The drug was not correct for that particular individual." (Adam); "Consumers when
consuming a drug should talk to their doctor first and do research to make sure that the product is a good
fit for them. Whatever happened later after taking the drug has nothing to do with a website
programmer." (Fai)

Attribution of blame: the victim is the one to blame.
Previous theory: Bandura's disengagement mechanisms: related to the victim [5]
Examples: "One cannot take medical advice from a website." (Adam); "… it is still up to the consumer
whether or not they obey the recommendation versus seeing a clinician or doctor, so the survey does not
necessarily force consumers to do anything." (Nathan)

Arguing based on questionable assumptions
Example: "The fact that this is a mobile app that the company greatly relies on revenue from, I'm sure
they take the quality of the product very seriously. I think they would have allowed the two more days if
there wasn't a pressing reason to ship at the date." (Sophia)

Reducing ethics to the rules to follow for practicality: as long as one has not done anything against the
rules, there is nothing wrong with what one has done.
Example: "It is not very practical to exercise to the highest ethical standard… just using the law…people
need to compromise somewhere all the time for the team to work." (Oliver)

No knowledge, no responsibility
Examples: "Even though there was someone hurt by a product Jim made, his lack of knowledge of
potential effects spares him from responsibility." (Nathan); "He only designed the website. He didn't
know anything about the drug and whether it has side effects or not." (Ethan)

No ethics is involved unless proved otherwise: ethics is not involved if there is no indication of actual
unethical behavior in a situation.
Example: "Since there seems to be nothing that would indicate that unethical behavior is occurring…I
think it is an operational issue." (Ethan)

Using technology to avoid the blame
Examples: "You can't blame the people. You say, 'oh the computer did it.'" (Sebastian); "When the
decision could result in blame placed on the company for either suppressing a user's voice or making it
too visible, I think the right option is to move the power closer to the user and give them the opportunity
to take greater responsibility for their actions… By creating a formula that learns from users' interaction
with news, you cater towards what users want, so any complaints would contradict their own behavior."
(Nathan)

The sufficiency of disclaimer/disclosure for addressing an ethical issue
Example: "Nowadays on website[s] they say that you need to seek doctors' advice." (Adam)

As one can see in Table 2, many of the identified biases and fallacies can be linked to existing theories.
For example, the notion of bad faith, first introduced by the French philosopher Jean-Paul Sartre, was
identified as one of the prevalent fallacies in students' reasoning. Moreover, moral justification,
displacement of responsibility, distortion of responsibility, and attribution of blame have been previously
discussed by the Canadian-American psychologist Albert Bandura as disengagement mechanisms used to
justify wrongdoing [5].

4.2.7 Recognition of Fallacies in Others’ Arguments.


Some students were able to identify fallacies in their peers' argumentations and argued against these
fallacies. The recognition of fallacies positively influenced the students' ethical decision-making.


In response to the third scenario, Sebastian stated, "People argued that he is just a computer scientist
and he didn't know about it… but he knew that all the answers he coded [were] for the one product".
Without a prompt, Sebastian reflected on the reasons why the students with different responses decided
differently and concluded with a general assertion:
I think people responded the way they did … because they don't want to be morally responsible
…. Computer scientist[s] sometimes don't see themselves as people who could have [a] profound
impact on the lives of … people like a doctor or lawyer [does]. In reality, they can have just as
much [effect].

Similarly, Simon recognized the fallacy of 'no knowledge, no responsibility' while responding to one of
the peers' arguments:
I feel like you can't simply claim ignorance for all of your actions. Maybe he didn't see any
immediate danger, and maybe he doesn't have direct medical knowledge of the product, but I
don't think that can alleviate him from any potential problems that this website could cause.

Emma recognized the attribution of blame in one of the students' responses to the third scenario:
Although I agree that … people should know to consult a doctor before starting any medicine, I
don't think that this entirely excuses the faults of the quiz…

Similarly, Tom stated, "I don't think it's appropriate to victim blame a suicide victim, especially a minor,
as opposed to the economic forces and individuals that made her suicide possible."
Some of the students noticed the tendency among some of their peers to blame algorithms. Sebastian
said, "people mostly were thinking of using algorithm… because [when you are using] an algorithm you
cannot blame people, you say oh the computer did it". According to him:
That's the danger with using an algorithm … because an algorithm is still going to have carried
some of the implicit biases of the people who …trained it.

Quan, referring to World War II, was able to recognize the fallacy in one of the peers' responses who
believed what happened was the responsibility of the client or the manager (i.e., displacement of
responsibility): "If you say that Jim did nothing wrong because he is just following the order, then he is no
different than the Nazi soldiers."
Finally, some students recognized the fallacy of 'no ethics is involved unless proved otherwise'. These
students explicitly mentioned that if the situation is ambiguous, one should assume that the situation
involves ethics. As an example, Anne, in response to the second scenario and the lack of information on
the nature of the application, stated that: "… because it is not mentioned, we should assume that [it is an
ethical issue]. It is more likely than not there will be some ethics associated [with] the product regardless
of the nature of it". In another example, Simon recognized this fallacy in another student's online posting:
Just because we're lacking in information doesn't mean that it's necessarily an operational
decision. In fact, I feel like the less information we have, the more of an ethical decision it is
because it increases the potential for extremely negative consequences.

4.3 Towards Building a Model of Ethical Decision Making for Computing Majors
Following the process of constructivist grounded theory and by analyzing students' responses to ethical
dilemmas and the justifications they provided, this study developed a model of the factors contributing to
ethical decision-making (see Fig. 1). Five factors were identified as having positive influences on ethical
decision-making: (1) relating to a real-world story, (2) showing care for users or other individuals


who might be affected, (3) recognizing fallacies in arguments, (4) developer-related reasoning (specifically
when the developer feels responsibility for what is being designed or possesses the relevant technical
knowledge and experience), and (5) issue-related reasoning.
Relating to a real-world story was the most important factor that helped individuals make ethical
decisions. Another critical factor was the care one feels and shows towards the individuals affected. Caring
for individuals was a powerful source in directing ethical decisions. Recognition of fallacies was another
element that played a significant role in ethical decision-making. Developer-related reasoning helped in
making ethical decisions when it centered on understanding the responsibility of developers and when one
used one's specific technical knowledge and experience in the arguments. Students who reasoned based on
acknowledging the significance of issues such as privacy were more likely to make better ethical
decisions.
In terms of factors with negative effects on ethical decision-making, fallacies played an important role.
Results showed that a narrowly defined sense of professional responsibility led to more difficulty in
recognizing the ethical implications of one's actions. Another factor was related to students' emotions
towards the developer: when students felt empathy for the developer, making ethical decisions was more
challenging. Other factors included making minimalist assumptions about users and generalizing one's
way of using technology to all users. The latter could be seen as a fallacy, but it was categorized
separately because it was directly related to users. Moreover, showing care for users seems to help
minimize such effects.

Fig. 1. Factors influencing ethical decision-making among computing majors

5 DISCUSSION
This research explored the question: How do computer science students make decisions in ethical
situations? The presented theoretical model showed the factors that positively or negatively affected
students' ethical decision-making. The findings indicated that real-world stories are highly influential on


students' ethical decision-making by helping students identify ethical issues and find ethical solutions. For
example, almost every student made a responsible decision towards the news on social media, which can
be attributed to the recent stories of misinformation in social media. Stories also helped students recognize
the fallacies in their peers' arguments. For example, students who remembered the stories related to World
War II responded more responsibly to Jim's dilemma and could identify the fallacies in others' arguments
regarding rationalizations such as doing one's job. The positive influence of such stories is in line with
McNamara et al. [44].
An important finding is the importance of situational and contextual factors in ethical decision-making.
For example, in the app development scenario, one of the students stated that one should do what one's
boss says. However, in Jim's scenario, the same student mentioned that he was surprised that most
students said what happened was not Jim's fault, arguing that the course was clear that it does not matter
what the boss says. Not considering the ethical implications of the decision in the app development
scenario might be related to the past experiences of this student as students usually have the experience of
being in situations they needed to rush a project to meet the deadline but not in a situation similar to Jim's
scenario. Another possible explanation is that there is a chance that there is no bug in the app
development scenario, while the ethical issue in Jim's scenario is obvious. This finding aligns with the
issue-contingent ethical theories such as Jones' ethical decision-making model, which states individuals
behave differently responding to different ethical issues in different contexts [29]. However, the contextual
considerations do not play similar roles for different individuals. For example, while for some participants,
the magnitude of consequences in the third scenario was helpful to make better decisions, for others, it
created a force to justify the unethical action. In line with the overall argument of the ethical decision
model provided by Treviño, the findings suggest that an interaction of individual characteristics and moral
issues affects individuals' decisions [61]. This study showed that positioning oneself as a user or developer
makes a significant difference in ethical decision-making. For example, Eaton stated that privacy should
not be the lowest in the first scenario because "I want to make my social media private for my friends and
family." However, in the case of Jim's scenario, Eaton stated that what happened was not Jim's fault. For
the application development scenario, when asked about the influencing factors, Eaton referred to not
having any experience in that context. Not positioning oneself as a developer in the application
development scenario might be why Eaton took users' side by stating that if there were bugs, the
customers would be disadvantaged. It seems participants position themselves based on their previous
experiences and in roles more familiar to them. For example, most students have experienced social media
as users. However, when it comes to Jim's scenario, students do not see themselves as users of the website
but instead as coders doing the job and therefore feel empathy for the coder. Another possible explanation
is individuals' tendency to avoid disastrous situations in scenarios such as Jim's.
Although the ethics of care was not covered in the course, it was one of the underlying philosophies in
the arguments of students of both genders. One of the important considerations of the ethics of care is that
not all individuals "are equally able, at all times, to take care of themselves" [62]. For example, in response to the
scenario on social media privacy, some of the students raised the concern that not all users are fully aware
of privacy issues in social media. The findings also support the recent research on the influence of
empathy in engineering ethics education [37, 63]. One of the main elements of ethics of care is
responsiveness. Responsiveness "suggests that we consider the other's position as that other expresses it"
rather than "putting ourselves in their position" [62]. As stated by Tronto:


"It would seem that by putting oneself in the other's situation, [the] distance can be overcome.
But, … there is no way to guarantee that, in taking the place of the other, … the moral actor will
recognize all of the relevant dimensions of the other's situation" [62].

The findings of the present study support this argument. For example, when responding to Jim's
scenario, Anne stated she did not believe in online quizzes, but she acknowledged that many people did. If Anne
had tried to put herself in the situation of users, she would likely have failed to recognize the ethical aspect
of the situation, as some other students did when, for example, they stated that one should get information
from legitimate sources. Attending to context is essential since "ethics of care advocates attention to
particulars, appreciation of context, narrative understanding, and communication and dialogue in moral
deliberation" [35].
Another important finding of this study is that despite emphasizing ethical theories and frameworks in
the course, most students did not use these theories in their decision-making process. This finding is in
line with Bazerman and Tenbrunsel's [7] argument that in many situations, individuals do not "apply the
type of ethical judgment they may have learned in ethics training courses to their decision-making
process." Moreover, when ethical theories were used, their use did not necessarily lead to ethical decisions.
For example, although Oliver, a senior student, believed ethical frameworks helped him avoid intuitive
thinking, he also stated that one could justify one's chosen course of action by using ethical frameworks one
way or another.
The findings support the previous research regarding the negative effects of biases on ethical decision-
making. For example, some participants tended to rationalize their choices or blame other individuals or
systems [7]. Moreover, the notions of denial of responsibility, denial of injury, and denial of victim raised
by Anand et al. [3] were evident in students' arguments. Knowledge of these fallacies might help
individuals avoid them when facing ethical dilemmas in the future [52].

6 IMPLICATIONS
The current practice of teaching ethics is limited in various ways [10, 18, 24]. The findings of this research
have important implications for teaching ethics to computing majors. As Drumwright et al. [22] stated, the
research focusing on ways individuals make ethical decisions (i.e., behavioral ethics) should be integrated
into ethics education since "there is no strong evidence that training students to be moral philosophers…
or to work to enhance their character improves … [students'] ethical actions". In what follows,
implications of the current research are briefly discussed.
First, stories should be more emphasized in computing ethics courses. Courses and educational
modules on computing ethics can benefit from introducing students to scenarios in which computing
professionals' decisions indirectly harm individuals to complement the more obvious and direct influences
of their work traditionally covered in computer ethics curricula. Second, fallacies can unknowingly
influence one's decisions. Instructors can consider the identified fallacies from this research in designing
their courses. One of the most prevalent biases among the students was 'bad faith,' which occurs when
individuals disown their freedom under external pressure, find themselves unable to act on their values,
and therefore reject their responsibility. This issue might be related to the lack of
students' confidence and can be a topic to be further scrutinized and discussed in classrooms. Helping
students build up their confidence to speak up and make their voices heard is of significant importance.
Third, along with introducing ethical concepts and frameworks such as deontology, developing students'
critical thinking skills is imperative. Students need to learn that using an ethical framework or ethical
standard by itself will not guarantee an ethical decision. Moreover, ways one might use these frameworks
as an excuse to justify one's stance need to be considered and discussed. Furthermore, it is essential to help
students feel more comfortable with ambiguities of ethical problems and encourage them to consider
different aspects of a situation. For example, students can be guided to simultaneously apply two or more
ethical theories to ethical scenarios to see a better picture of the issue at hand and identify possible
approaches to address it.
Finally, based on the findings of this study, introducing ethics of care in computer science and
engineering ethics courses can improve students' ethical decision-making. Despite identifying the
promises of this approach in teaching ethics to engineers [13, 50], the ethics of care has not been
sufficiently emphasized in engineering curricula [9]. It is important to remember that ethics of care is a
complex concept and has to do with "both particular acts of caring and a general habit of mind to care"
[62]. What Noddings stated about ethics of care can very well conclude the implications of this research:
"[when we accept constraints on our ethical ideals,] we know better what we must work toward,
what we must prevent… Instead of hiding from our natural impulses…, we accept what is there -
all of it- and use what we have already assessed as good to control that which is not good" [47].

7 LIMITATIONS AND FUTURE RESEARCH


This research is limited in different ways. First, the participants were enrolled in an ethics course and
responded to the scenarios as part of their class activities. Although the design helped gather more in-depth and rich
responses, it limited the transferability of the results. Second, since the data for each scenario were collected
over a relatively short period of three weeks (nine weeks in total), one cannot expect significant changes in
students' perspectives, since ethical development is believed to happen over time. However, the research
benefited from getting richer data by exposing students to their peers' comments and perspectives, helping
them further clarify their stances. The current approach, to some extent, revealed whether and in what
ways the dynamics of a professional discussion among peers might influence the ethical decisions of
individuals. Third, students might not have fully disclosed their stances due to social desirability in
asynchronous discussions. However, since similar conditions can be expected in professional practice, this
limitation is partly alleviated. The fourth limitation is related to the design of the online
discussion, as students could see their peers' postings before posting their own. Although the design
decision was made deliberately to make the results more comparable to the actual situation of ethical
decision-making and the impact of the peers, students' responses have certainly been influenced by the
dynamics of the discussion. Finally, although more than 80% of participants had prior work experience, the
sample consisted of full-time traditional-aged students and cannot represent all computer professionals.
Future research can focus on testing the provided ethical decision-making model among computing
majors in other situations and with different scenarios. Moreover, replicating this study with individuals
from other fields can create knowledge on the influence of professions in the ethical decision-making
process. Finally, future studies can focus on developing and evaluating ethics courses or modules designed
based on the findings of this research.

ACKNOWLEDGMENTS
This work did not receive any financial support.



APPENDICES

A SCENARIO 1: SOCIAL MEDIA


You and a few other students from the college have created a new social media platform that enables
computing professionals to connect, socialize, share interests, and seek solutions. As the time for
releasing the platform approaches, you are in a meeting to discuss whether you should make full privacy
the default option or set the lowest privacy level as the default, allowing users to increase their
privacy level as they wish.
• Is this decision an ethical decision or an operational decision? Why? What is the right action to
take? For what reasons?
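For classroom use, the design choice above can be made concrete with a minimal sketch. The setting names and privacy levels below are invented for illustration; the scenario itself specifies no implementation:

```python
# Illustrative sketch only: hypothetical default-privacy configurations
# for a newly created account on the platform.

PRIVACY_LEVELS = ["lowest", "medium", "full"]

def new_account_settings(privacy_by_default: bool) -> dict:
    """Return initial settings for a new account under either design."""
    return {
        # The only difference between the two designs is the starting value;
        # users can adjust it later either way.
        "privacy_level": "full" if privacy_by_default else "lowest",
        "user_adjustable": True,
    }

print(new_account_settings(True)["privacy_level"])   # -> full
print(new_account_settings(False)["privacy_level"])  # -> lowest
```

The sketch highlights that the code change between the two options is trivial; the ethical weight lies entirely in which starting point the platform chooses for users who never visit their settings.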

The next topic to discuss in the meeting is the decision on presenting trending news on technology. Two
options are available: (1) using a team of experts to choose the trending news, and (2) using an algorithm
that automates the process. You know that in the first approach, your company might be accused of having a
biased view on the trends. In the second approach, however, there is a chance of presenting false or
misleading news. Tom, one of your team members, believes that the feature should be abandoned altogether
because both options might damage the reputation of your platform.
• In your view, is the decision on presenting news an ethical issue or an operational issue? Why?
What is the right action to take? Please justify.

B SCENARIO 2: APP DEVELOPMENT


You are the quality control officer in a small company. You and your colleagues have been working for
months on a particular mobile application for a large company on which your company depends heavily
for its revenue. Two weeks ago, you were confident about the application, having run extensive tests.
However, at that point, your client announced a significant upgrade to their phone operating system, and
they insisted that your company could have only three weeks to make the necessary changes.
Working long hours over the three-week period, you and your teammates complete the changes. On
the day before the new ship date, you tell your manager that you are not convinced that the application
has been sufficiently tested. You estimate that you will need two more days to complete testing for some
complicated errors. As the quality control officer on this contract, you have to sign off before the
application can legally be shipped.
Your manager and your manager's boss discuss the issue and make their final decision: "You are to
keep testing overnight. If no significant bugs are discovered, you are to sign off on the project in the
morning, so it can be shipped on time."
• Does this scenario involve an ethical or an operational issue? Why? What is the right action to
take? For what reasons?

C SCENARIO 3: A TASK ASSIGNED TO A PROGRAMMER


As his first full-time job, Jim started a coding job with a marketing firm. The firm's clients are large
pharmaceutical companies. Jim is assigned to a project that involves a drug website targeted at
young women. One feature of this website is a quiz that asks girls a number of questions and
recommends a type of drug. The website is not clearly an advertisement for any particular product
but poses as a general information site.



Jim receives the questions for the quiz, along with multiple-choice answers for each question, and
proceeds to code up the quiz. Before submitting the website to the client, Jim's project manager tries the
quiz and notices that no matter what she does, the quiz recommends the client's drug as the best possible
treatment. Jim explains that this outcome is what the client has requested. The project manager is
reassured.
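The behavior the project manager observed can be reproduced in a few lines. The question text and drug name below are invented for illustration, since the scenario does not include Jim's actual code:

```python
# Illustrative sketch only: a quiz whose result is independent of the answers.

CLIENT_DRUG = "ClientDrug"  # hypothetical product name

QUESTIONS = [
    ("How often do you feel tired?", ["Rarely", "Sometimes", "Often"]),
    ("Do you have trouble sleeping?", ["No", "Occasionally", "Yes"]),
]

def recommend(answer_indices):
    """Return a drug recommendation for the chosen answers.

    The answers are collected but never consulted, so every path
    through the quiz ends at the client's product.
    """
    _ = answer_indices  # deliberately ignored
    return CLIENT_DRUG

# Every possible combination of answers yields the same recommendation.
all_same = all(
    recommend([a, b]) == CLIENT_DRUG
    for a in range(3) for b in range(3)
)
print(all_same)  # -> True
```

Seen this way, the ethical problem is not a bug: the code does exactly what the client requested, which is why the scenario asks whether Jim did anything wrong rather than whether the code is correct.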
A few days later, the client invites Jim and his colleagues to a fancy steak dinner to show appreciation
for their work. On the day of the dinner, right before leaving the office, a colleague sends Jim a link to a
news story. It is about a young girl who had taken the drug for which Jim has built the website: she has
killed herself.
It turns out that severe depression and suicidal thoughts are some of the main side effects of that drug.
• Did Jim do anything wrong in this scenario? Please justify your answer. What should he do now?

REFERENCES

Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Prentice-Hall, Inc.

Bandura, A. (1999). Moral disengagement in the perpetration of inhumanities. Personality and social psychology review, 3(3),
193-209.

Abraham, S., Knies, A. D., Kukral, K. L., & Willis, T. E. (1997). Experiences in discussing ethics with undergraduate
engineers. Journal of Engineering Education, 86(4), 305-307.

ACM Code of Ethics and Professional Conduct. Retrieved from: https://www.acm.org/code-of-ethics

Anand, V., Ashforth, B. E., & Joshi, M. (2004). Business as usual: The acceptance and perpetuation of corruption in
organizations. The Academy of Management Executive, 18(2), 39-53.

Bazerman, M. H., & Gino, F. (2012). Behavioral ethics: Toward a deeper understanding of moral judgment and dishonesty.
Annual Review of Law and Social Science, 8, 85-104.

Alam, A. (2020f). Challenges and possibilities in teaching and learning of calculus: A case study of India. Journal for the Education
of Gifted Young Scientists, 8(1), 407-433. Retrieved from https://doi.org/10.17478/jegys.660201

Alam, A. (2020b). Pedagogy of Calculus in India: An Empirical Investigation. Periódico Tchê Química, 17(34), 164-180. Retrieved
from http://www.doi.org/10.52571/PTQ.v17.n34.2020.181_P34_pgs_164_180.pdf

Alam, A. (2022d). Investigating Sustainable Education and Positive Psychology Interventions in Schools Towards Achievement of
Sustainable Happiness and Wellbeing for 21st Century Pedagogy and Curriculum. ECS Transactions, 107(1), 19481. Retrieved
from https://doi.org/10.1149/10701.19481ecst

Mohanty, A., Alam, A., Sarkar, R., & Chaudhury, S. (2021). Design and Development of Digital Game-Based Learning Software for
Incorporation into School Syllabus and Curriculum Transaction. Design Engineering, 4864-4900. Retrieved from
http://www.thedesignengineering.com/index.php/DE/article/view/5433

Alam, A., Mohanty A., Alam, S., and Akanksha. (2022). ‘Happiness Curriculum’ and the Pedagogical Tools for its Effective
Transaction : A Systematic Literature Review. In Pankaj, P., Vijayvargy, L., Johri, S. & Badhera, U. (Eds.). Envisioning
India’s Future: Growth, Innovation, Sustainability, Happiness & Wellbeing (pp. 56-77). Bloomsbury. Retrieved from
https://www.researchgate.net/publication/359729711_'Happiness_Curriculum'_and_the_Pedagogical_Tools_for_its_Effective_
Transaction_A_Systematic_Literature_Review

Bazerman, M. H., & Tenbrunsel, A. E. (2011). Blind spots: Why we fail to do what's right and what to do about it. Princeton
University Press.

Berne, R. W. (2003). Ethics technology and the future: An intergenerational experience in engineering education. Bulletin of
Science, Technology and Society, 23(2), 88–94.

Bielefeld, A. R. (2015). Ethics of care and engineering ethics instruction. American Society for Engineering Education Rocky
Mountain Section Conference, Denver, CO.

Brey, P. (2000). Disclosive computer ethics. Computers and Society, 30(4), 10-16.

Burton, E., Goldsmith, J., & Mattei, N. (2018). How to teach computer ethics through science fiction. Communications of the
ACM, 61(8), 54-64.

Byrne, G. J., & Staehr, L. J. (2004). The evaluation of a computer ethics program. Journal of Issues in Informing Science
and Information Technology, 1, 935-939.

Campbell, R. C., Yasuhara, K., & Wilson, D. (2012, October). Care ethics in engineering education: Undergraduate student
perceptions of responsibility. Frontiers in Education Conference (FIE), 1-6, IEEE.

Casali, G. L., & Perano, M. (2021). Forty years of research on factors influencing ethical decision making: Establishing a future
research agenda. Journal of Business Research, 132, 614-630.

Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative research. Sage Publications Ltd, London.

Charmaz, K. (2016). The power of stories and the potential of theorizing for social justice studies. In N. K.

Alam, A. (2022f). Positive Psychology Goes to School: Conceptualizing Students’ Happiness in 21st Century Schools While
‘Minding the Mind!’Are We There Yet? Evidence-Backed, School-Based Positive Psychology Interventions. ECS
Transactions, 107(1), 11199. Retrieved from https://doi.org/10.1149/10701.11199ecst

Alam, A. (2022g). Mapping a Sustainable Future Through Conceptualization of Transformative Learning Framework, Education for
Sustainable Development, Critical Reflection, and Responsible Citizenship: An Exploration of Pedagogies for Twenty-First
Century Learning. ECS Transactions, 107(1), 9827. Retrieved from https://doi.org/10.1149/10701.9827ecst

Alam, A. (2020a). Conceptualization of Cultural Intelligence, Intercultural Sensitivity, Intercultural Competence, and Nomological
Network: A Contact Hypothesis Study of Sociology of Education. movimento-revista de educação, 7(15), 217-258. Retrieved
from https://periodicos.uff.br/revistamovimento/article/view/45814

Alam, A. (2020e). What is the 'Philosophy of Chemistry Education'? Viewing Philosophy behind Educational Ideas in Chemistry
from John Dewey’s Lens: The Curriculum and the Entitlement to Knowledge. PalArch's Journal of Archaeology of
Egypt/Egyptology, 17(9), 6857-6889. Retrieved from https://archives.palarch.nl/index.php/jae/article/view/5303

Charmaz, K., & Thornberg, R. (2020). The pursuit of quality in grounded theory. Qualitative Research in Psychology, 1-23.

Corbin, J. and Holt, N.L. (2011). Grounded theory. In B. Somekh & C. Lewin (Eds.). Theory and methods in social research (pp.
113-120). London: SAGE Publications.

Creswell, J. W. (2013). Qualitative inquiry & research design: Choosing among five approaches (3rd ed.). Thousand Oaks, CA: Sage.

Alam, A. (2022e). Social Robots in Education for Long-Term Human-Robot Interaction: Socially Supportive Behaviour of Robotic
Tutor for Creating Robo-Tangible Learning Environment in a Guided Discovery Learning Interaction. ECS
Transactions, 107(1), 12389. Retrieved from https://doi.org/10.1149/10701.12389ecst

Alam, A., Fahim, A., Gupta, T., Dev, R., Malhotra, A., Saahil, Najm, S., Jaffery, K., Ghosh, M., Shah, D., Kumari, M., & Alam, S.
(2020). Need-Based Perspective Study of Teachers’ Work Motivation as Examined from Self-Determination Theoretical
Framework: An Empirical Investigation. PalArch's Journal of Archaeology of Egypt/Egyptology, 17(6), 8063-8086. Retrieved
from https://archives.palarch.nl/index.php/jae/article/view/2213

Alam, A., Kumari, M., & Alam, S. (2018). Seventh Pay Revision Vis-à-Vis Higher Education in India. Indian Journal of Social
Research, 59(5), 719-733. Retrieved from http://academic-and-law-serials.com/component/content/article.html?id=1181

Alam, A., Mohanty, A., & Alam, S. (2020). Anthropology of Education: Discourses and Dilemmas in Analysis of Educational
Patterns and Cultural Configurations towards Pursuit of Quality Education. Palarch’s Journal of Archaeology of
Egypt/Egyptology, 17(9), 7893-7924. Retrieved from https://archives.palarch.nl/index.php/jae/article/view/5675

Dark, M. J., & Winstead, J. (2005, September). Using educational theory and moral psychology to inform the teaching of ethics
in computing. In Proceedings of the 2nd annual conference on information security curriculum development (pp. 27-31).

Drumwright, M., Prentice, R., & Biasucci, C. (2015). Behavioral ethics and teaching ethical decision making. Decision
Sciences Journal of Innovative Education, 13(3), 431-458.

Ferrell, O. C., Fraedrich, J., & Ferrell, L. (2011). Business ethics: Ethical decision making and cases (8th ed.). Mason, OH:
South-Western Cengage Learning.

Fiesler, C. (2018, December 5). What our Tech Ethics Crisis Says About the State of Computer Science Education. How We Get
to Next. Retrieved from: https://howwegettonext.com/what-our-tech-ethics-crisis-says-about-the-state-of-computer-science-education-a6a5544e1da6

Fiesler, C., Garrett, N., & Beard, N. (2020, February). What Do We Teach When We Teach Tech Ethics? A Syllabi Analysis. In
Proceedings of the 51st ACM Technical Symposium on Computer Science Education (pp. 289-295).

Alam, A. (2020d). Test of Knowledge of Elementary Vectors Concepts (TKEVC) among First-Semester Bachelor of Engineering
and Technology Students. Periódico Tchê Química, 17(35), 477-494. Retrieved from
http://www.doi.org/10.52571/PTQ.v17.n35.2020.41_ALAM_pgs_477_494.pdf

Alam, A. (2022a). Psychological, Sociocultural, and Biological Elucidations for Gender Gap in STEM Education: A Call for
Translation of Research into Evidence-Based Interventions. Proceedings of the 2nd International Conference on Sustainability
and Equity (ICSE-2021). Atlantis Highlights in Social Sciences, Education and Humanities. ISSN:2667-128X. Retrieved from
https://dx.doi.org/10.2991/ahsseh.k.220105.012

Alam, A. (2021a, December). Should Robots Replace Teachers? Mobilisation of AI and Learning Analytics in Education. In 2021
International Conference on Advances in Computing, Communication, and Control (ICAC3) (pp. 1-12). IEEE. Retrieved from
https://dx.doi.org/10.1109/ICAC353642.2021.9697300

Alam, A. (2021b, December). Designing XR into Higher Education using Immersive Learning Environments (ILEs) and Hybrid
Education for Innovation in HEIs to attract UN’s Education for Sustainable Development (ESD) Initiative. In 2021
International Conference on Advances in Computing, Communication, and Control (ICAC3) (pp. 1-9). IEEE. Retrieved from
https://dx.doi.org/10.1109/ICAC353642.2021.9697130

Finelli, C. J., Holsapple, M. A., Ra, E., Bielby, R. M., Burt, B. A., Carpenter, D. D., Harding, T. S., & Sutkus, J. A. (2012).
An Assessment of Engineering Students' Curricular and Co‐Curricular Experiences and Their Ethical Development.
Journal of Engineering Education, 101(3), 469-494.

Frisque, D. A., & Kolb, J. A. (2008). The effects of an ethics training program on attitude, knowledge, and transfer of training
of office professionals: A treatment- and control-group design. Human Resource Development Quarterly, 19(1), 35-53.

Furey, H., & Martin, F. (2018, April). Introducing ethical thinking about autonomous vehicles into an AI course. In
Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 32, No. 1).

Jones, T. M. (1999). Ethical decision making by individuals in organizations: An issue-contingent model. Academy of
Management Review, 16(2), 366-395.

Garrett, N., Beard, N., & Fiesler, C. (2020, February). More Than "If Time Allows": The Role of Ethics in AI Education. In
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 272-278).

Gilligan, C. (1993). In a different voice: Psychological theory and women’s development. Harvard University Press.

Goldsmith, J., Burton, E., Dueber, D. M., Goldstein, B., Sampson, S., & Toland, M. D. (2020, April). Assessing Ethical Thinking
about AI. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 34, No. 09, pp. 13525-13528).

Grosz, B. J., Grant, D. G., Vredenburgh, K., Behrends, J., Hu, L., Simmons, A., & Waldo, J. (2019). Embedded EthiCS:
integrating ethics across CS education. Communications of the ACM, 62(8), 54-61.

Held, V. (2006). The Ethics of Care: Personal, Political, and Global, Oxford: Oxford University Press.

Alam, A. (2020c). Possibilities and challenges of compounding artificial intelligence in India’s educational landscape. International
Journal of Advanced Science and Technology, 29(5), 5077-5094. Retrieved from
http://sersc.org/journals/index.php/IJAST/article/view/13910

Alam, A. (2021c, November). Possibilities and Apprehensions in the Landscape of Artificial Intelligence in Education. In 2021
International Conference on Computational Intelligence and Computing Applications (ICCICA) (pp. 1-8). IEEE. Retrieved
from https://dx.doi.org/10.1109/ICCICA52458.2021.9697272

Alam, A. (2022b, April). Educational Robotics and Computer Programming in Early Childhood Education: A Conceptual
Framework for Assessing Elementary School Students’ Computational Thinking for Designing Powerful Educational
Scenarios. In 2022 International Conference on Smart Technologies and Systems for Next Generation Computing (ICSTSN)
(pp. 1-7). IEEE. Retrieved from https://dx.doi.org/10.1109/ICSTSN53084.2022.9761354

Alam, A. (2022c, April). A Digital Game based Learning Approach for Effective Curriculum Transaction for Teaching-Learning of
Artificial Intelligence and Machine Learning. In 2022 International Conference on Sustainable Computing and Data
Communication Systems (ICSCDS) (pp. 69-74). IEEE. Retrieved from
https://dx.doi.org/10.1109/ICSCDS53736.2022.9760932

Herkert, J. R. (2005). Ways of thinking about and teaching ethical problem solving: Microethics and macroethics in engineering.
Science and Engineering Ethics, 11(3), 373-385.

Hess, J. L., Strobel, J., & Brightman, A. O. (2017). The development of empathic perspective-taking in an engineering ethics course.
Journal of Engineering Education, 106(4), 534-563.

Heyler, S. G., Armenakis, A. A., Walker, A. G., & Collier, D. Y. (2016). A qualitative study investigating the ethical decision
making process: A proposed model. The Leadership Quarterly.

Leavitt, K., Reynolds, S. J., Barnes, C. M., Schilpzand, P., & Hannah, S. T. (2012). Different hats, different obligations:
Plural occupational identities and situated moral judgments. Academy of Management Journal, 55(6), 1316-1333.

Lehnert, K., Park, Y. H., & Singh, N. (2015). Research note and review of the empirical ethical decision-making literature:
Boundary conditions and extensions. Journal of Business Ethics, 129(1), 195-219.

Levine, C., Kohlberg, L., & Hewer, A. (1985). The current formulation of Kohlberg’s theory and a response to critics.
Human Development, 28(2), 94-100.

Loui, M. C. (2005). Ethics and the development of professional identities of engineering students. Journal of Engineering
Education, 94(4), 383-390.

Loui, M. C., & Miller, K. W. (2008). Ethics and professional responsibility in computing. Wiley Encyclopedia of Computer
Science and Engineering.

McNamara, A., Smith, J., & Murphy-Hill, E. (2018, October). Does ACM’s code of ethics change ethical decision making in
software development? In Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering
Conference and Symposium on the Foundations of Software Engineering (pp. 729-733).

Mumford, M. D., Connelly, S., Brown, R. P., Murphy, S. T., Hill, J. H., Antes, A. L., ... & Devenport, L. D. (2008). A
sensemaking approach to ethics training for scientists: Preliminary evidence of training effectiveness. Ethics & behavior,
18(4), 315-339.

National Academy of Engineering, U. S. (2004). The engineer of 2020: Visions of engineering in the new century. Washington, DC:
National Academies Press.

Noddings, N. (2013). Caring: A relational approach to ethics and moral education. 2nd ed., Berkeley, CA: University of
California Press.

Odden, T. O. B., & Russ, R. S. (2019). Defining sensemaking: Bringing clarity to a fragmented theoretical construct. Science
Education, 103(1), 187-205.

Oriogun, P., Ogunleye-Johnson, B., Mukhtar, M., & Tobby, G. (2012, September). Teaching and Assessing Software Engineering
Ethics in the 21st Century: Case Study from American University of Nigeria. In Software Engineering and Applied
Computing (ACSEAC) (pp. 75-81).

Pantazidou, M., & Nair, I. (1999). Ethics of care: Guiding principles for Engineering teaching and practice. Journal of
Engineering Education, 88(2), 205-212.
