Artificial Intelligence in Healthcare
MECH 8260
Prepared for:
Bob Gill
Prepared by:
Gurkaran Gill - A00852732
Anthonye Palma – A00853411
Lucas Estabrook – A00923734
Rohan Chawla – A00866089
Artificial intelligence is typically “defined as the capability of a computer program to perform tasks or reasoning processes that we usually associate with intelligence in a human being” [1]. Artificial intelligence has been at the forefront of discussion not only by notable names such as Bill Gates and Elon Musk, but by anyone concerned about the ethical implications and societal impacts of artificial intelligence. This is
especially true in applications where the general population would interface with artificial intelligence
directly. Artificial intelligence has improved by leaps and bounds since its creation in 1956, when it took the form of the Logic Theorist [2], to its current state, where it is widely implemented across various
industries. One of these industries is the healthcare industry. Artificial intelligence is utilized universally
across numerous subfields within healthcare such as home care for the elderly, managing data such as
medical records, assistance with drug creation and assistance with diagnosis and treatments for
patients. Artificial intelligence is used often in these fields because the healthcare industry both consumes a great deal of financial resources and generates a large amount of money; there is therefore an incentive to make its processes as efficient as possible, reducing the resources and time required while increasing the effectiveness of care. Currently, the most promising route to this goal is artificial intelligence, because the more it is improved, the greater the impact it will have on the healthcare industry. However, what ethical implications and impacts does using artificial intelligence so widely and liberally have on our society? There are issues of concern such as labor replacement, quality of care, trust, safety, privacy, and protection of data.
An issue that may not seem apparent to people who don’t understand the inner workings of an
artificial intelligence algorithm is the issue of bias. This could be presented as bias towards patients of a
specific race, financial situation or the kind of insurance they may have (or lack thereof). Artificial
intelligence in healthcare relies on machine learning algorithms: the system takes in data and learns correlations and patterns in minutes that would normally take a team of researchers months or years to find. However, the solutions will only be as good as the data and the algorithm that the machine is fitted with. This information is vital when trying to understand the three types of bias that may result from artificial intelligence: “human bias; bias that is introduced by design; and bias in the ways health care systems use the data” [3].
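To illustrate how a model can be only as good as its data, the following is a purely hypothetical sketch, not taken from any real healthcare system. A naive “learner” recommends treatment by majority vote over historical records; all names, fields, and data are invented. Because past human decisions in the invented data under-treated one group, the trained model reproduces that bias.

```python
# Hypothetical sketch: a naive "learner" that recommends treatment by
# majority vote over historical records. All names and data are invented.
from collections import Counter, defaultdict

# Historical records: (insurance_tier, condition, treatment_given).
# Past human decisions under-treated "basic" patients for the same condition.
records = [
    ("premium", "condition_x", "gold_standard"),
    ("premium", "condition_x", "gold_standard"),
    ("premium", "condition_x", "gold_standard"),
    ("basic",   "condition_x", "cheaper_alternative"),
    ("basic",   "condition_x", "cheaper_alternative"),
    ("basic",   "condition_x", "gold_standard"),
]

def train(records):
    """Learn the most common treatment for each (insurance, condition) pair."""
    by_group = defaultdict(Counter)
    for insurance, condition, treatment in records:
        by_group[(insurance, condition)][treatment] += 1
    return {k: c.most_common(1)[0][0] for k, c in by_group.items()}

model = train(records)
# The model faithfully reproduces the historical bias: identical condition,
# different recommendation depending on insurance tier.
print(model[("premium", "condition_x")])  # gold_standard
print(model[("basic", "condition_x")])    # cheaper_alternative
```

Nothing in the algorithm refers to insurance at all; the bias enters entirely through the data it was fitted with, which is the point made above.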
Human bias is an issue that is already present in the healthcare industry. Although it may be reduced by algorithm-assisted decisions, it cannot be completely eradicated as long as humans make the final decisions. A human could take the information and recommended courses of action presented by the artificial intelligence but continue to assume they have a better grasp of the problem, and may actively choose to ignore the recommendations. Another example would be to use the information presented but focus only on the details that confirm the physician’s existing beliefs, rather than analyzing everything that has been presented. This could also fall under the bias associated with how the data is used by the healthcare system.
Another type of bias is bias introduced by design. Algorithms are written by real people who work for companies with their own agendas, such as earning profits. Algorithms could be made to work like any machine learning algorithm but with key biases incorporated. These might include bias against people who cannot afford the most reliable or efficient treatment for a particular illness, replacing it with something in their range of affordability while still presenting it as the best method of treatment [3]. It is also possible that the designer inadvertently added bias into the algorithm without realizing it, or that the bias was learned by the machine on its own. An example of the latter is the artificial intelligence used to assist decisions made by judges in court: there is evidence that such systems “have shown an unnerving propensity for racial discrimination” [4], which could potentially be learned by any artificial intelligence, not just those present in healthcare.
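The affordability scenario described in [3] can be made concrete with a deliberately simplified, entirely hypothetical sketch. Every treatment name, score, and threshold here is invented; the point is only to show how a designed-in filter can silently change the “best” recommendation.

```python
# Hypothetical illustration of bias introduced by design: a recommender
# whose developer silently filters treatments by what the patient can pay,
# while still labeling the result the "best" treatment. All values invented.

TREATMENTS = [
    # (name, effectiveness score out of 1.0, cost in dollars)
    ("gold_standard", 0.95, 50_000),
    ("mid_tier",      0.80, 12_000),
    ("budget",        0.60, 1_500),
]

def recommend(coverage_limit, hidden_cost_filter=True):
    """Return the highest-effectiveness treatment, optionally pre-filtered by cost."""
    options = TREATMENTS
    if hidden_cost_filter:
        # The designed-in bias: options beyond the coverage limit are dropped
        # before ranking, invisibly to both patient and physician.
        options = [t for t in options if t[2] <= coverage_limit]
    best = max(options, key=lambda t: t[1])
    return best[0]

print(recommend(coverage_limit=15_000))                            # mid_tier
print(recommend(coverage_limit=15_000, hidden_cost_filter=False))  # gold_standard
```

The same patient receives a different “best” recommendation depending on a filter they never see, which is exactly why such design choices raise ethical concerns.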
These biases, if present in the algorithm of an artificial intelligence used in healthcare, would raise issues relative to the code of ethics. Firstly, the machine would not be holding the health and welfare of the patient paramount. It also would not be acting as a faithful agent towards the patient.
Another issue that arises when considering the implications of artificial intelligence in the
medical field is the issue of moral agency. What is meant by “moral agency” is the robot’s ability to make judgements and decisions based on morals and a sense of what is considered right or wrong. Stahl identifies this in his paper when he states that a robot, “unlike humans does not have the capacity to reflect on the ethical quality of what it does” [5]. This issue would be intensified if the procedure were completely automated and lacked any human intervention. However, if the procedure uses artificial intelligence alongside human intervention, the problem may be largely mitigated.
Furthering the point of artificial intelligence lacking moral agency, Cheshire identified a
potential problem with the use of artificial intelligence that parallels the inability of artificial intelligence
to make decisions based on what is right or wrong. The problem he identifies is loopthink: his definition states that a computer cannot “redirect executive data flow as a result of its fixed internal hardwiring, uneditable sectors of its operating system, or unalterable lines of its programming code” [6]. He believes that although new information may become available to the computer as a procedure takes place, the computer will not necessarily be able to take this new information and adjust its course of action to ensure that it is doing what is considered right. This can become problematic when dealing with medical procedures, since humans are hardly ever predictable. This inability to use new, changing information brings our attention to the first
rule of the code of ethics, which is to hold the public’s safety paramount. A system which cannot make
moral decisions and adapt to changing circumstances is unable to hold safety to the highest of
standards.
A major ethical dilemma arises when considering who would be held responsible if an AI robot made an error. The error could range anywhere from a false diagnosis to a treatment that led to a fatality, depending on the extent of the robot’s usage and the severity of the mistakes made. It is impossible to hold the robot morally accountable for the mistake, so the responsibility would need to fall onto somebody else [5]. For example, if a doctor is working in parallel with the AI and the robot makes a poor decision or assumption in the task at hand, would the doctor be given full responsibility for the error? The robot bases its decisions on the information and knowledge it has been fed, so it is impractical to understand exactly how it came to its conclusion, making it nearly impossible for the doctor using the AI to be certain of its decision. Another option would be to hold the designers of the AI responsible for the errors, but this also has its challenges. When you consider a design team for such a large program, the team would have hundreds of contributors, and determining exactly who is to blame is an unlikely if not impossible feat. Finally, the organization running the AI could be held responsible. As with the other options, this has its own concerns: Hart says this is similar to “holding every car manufacturer responsible for how others have used its product” [7].
Death or false diagnosis are issues for which somebody would need to be held responsible, but another issue that has arisen in the past is the sharing of private data. In 2017, IBM Watson’s project with the MD Anderson Cancer Centre had to be halted over this very issue of sharing confidential patient data [8]. Again, determining who was legally responsible for this error was a battle. Until it can be determined who is to be held responsible for issues such as the ones presented, AI should not be deployed without oversight. It is difficult to determine who is to take responsibility for the issues associated with AI robots, and this is in clear violation of EGBC’s second point in their code of ethics: “Undertake and accept responsibility for professional assignments only when qualified by training or experience.”
As artificial intelligence technology becomes more prevalent in the medical and healthcare industries, there will be an inevitable transition from human to machine-operated labor. The main issue lies in the most distinguishable differences between man and AI: what humans lack in data retention and processing speed, they make up for in judgment and compassion.
For example, hospitals today are required to employ many surgeons whose practices cover the
broad spectrum of healthcare, ranging from diabetes to cancer. Through years of training and exposure
to their specific fields of study, surgeons can apply their knowledge to the best of their ability and
determine the best course(s) of action for a given patient. However, even though the surgeon may be
considered a specialist in their respective field, they are only capable of possessing so much knowledge
as individual human beings. If an anomaly was presented to a surgeon who had never witnessed such a
unique case and they were forced to act swiftly, the situation could quickly become dire and threaten the patient’s life.
This problem could potentially be solved by applying the power of AI. By accumulating data from
surgeries performed across the world, the AI could assist a surgeon before and during the surgery by
notifying them of potential problems that may arise and of how to perform the surgery in the most efficient and effective manner in real time. This is because AI can incorporate the experiences of
previous surgeries done by doctors from around the world. Note that this surgeon may not necessarily
have to be a specialist in a field, but rather someone who could follow the specific directions that the AI
provides them – thus leading to “a lesser reliance on human expertise”, as stated by an Italian
researcher [1]. Furthermore, if connected to the required machinery, the AI may be capable of
performing the surgery itself with minimal human intervention, removing the need for surgeons
entirely.
AI systems could also displace nurses and caretakers in hospitals who perform general tasks.
Some of the procedures that are commonly carried out are blood and/or urine tests, IV level monitoring,
and blood pressure analysis. The hospital could avoid the cost of employing dozens of staff to perform these similar tasks by having one AI system carry them out on all patients simultaneously. Specific details pertaining to each patient would be recorded in a patient database and kept up to date automatically.
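The idea of one system running routine checks on every patient and writing the results into a shared record can be sketched as follows. This is a purely hypothetical illustration: the patient identifiers, vital-sign fields, and thresholds are all invented, and a real monitoring system would be vastly more complex.

```python
# Hypothetical sketch of one AI system running routine checks on every
# patient at once, writing results to a shared record. All field names,
# thresholds, and readings are invented for illustration.

patients = {
    "patient_001": {"bp_systolic": 118, "iv_level_pct": 72},
    "patient_002": {"bp_systolic": 165, "iv_level_pct": 8},
}

def run_checks(vitals):
    """Return alerts for one patient's readings against simple thresholds."""
    alerts = []
    if vitals["bp_systolic"] > 140:
        alerts.append("high blood pressure")
    if vitals["iv_level_pct"] < 10:
        alerts.append("IV bag near empty")
    return alerts

# One pass covers every patient "simultaneously"; results are recorded
# alongside each patient's details in the database, as described above.
database = {pid: {"vitals": v, "alerts": run_checks(v)}
            for pid, v in patients.items()}

print(database["patient_002"]["alerts"])  # ['high blood pressure', 'IV bag near empty']
```

Note what the sketch also makes plain: the system reports only what its thresholds encode, and anything outside them, such as a patient in distress, goes unnoticed, which is the concern raised below.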
Beyond potentially leaving many people unemployed, their skills no longer deemed attractive or necessary [10], there is another ethical issue that arises with the transition to AI systems in health care. For a socially interactive species, the reduction or complete loss of human communication, compassion, and sense of care may be harmful for certain patients [10]. Barring a significant breakthrough in AI behavioral and interface capabilities, AI would most likely perform the required tasks at hand to a satisfactory level and then leave the patient to themselves. The unfathomable sense
of loneliness and neglect that a patient would have to endure after receiving their prescribed treatment
could potentially be so harmful that it would leave them in a more emotionally/mentally damaged state
than before they were hospitalized. As mentioned in the article “A theoretical approach to artificial intelligence systems in medicine,” the “concepts of health, sickness and illness are subject to the specific socio-cultural conditions under which they are considered” [11], and this is a concept that AI would not be able to grasp to the extent that humans can. If a patient reached a point of extreme distress, the AI may act in an unpredictable manner, or simply not act at all because it does not recognize that the patient is suffering.
If such a scenario were to unfold, this would contradict the first point found in EGBC’s Code of
Ethics. It states that professionals must “Hold paramount the safety, health and welfare of the public”,
and simply put, the implementation of AI into the health care system would not satisfy this. Rather, the use of AI may be viewed by some as a financially biased action that would save a governing body a considerable amount of money. Given its rapid advancement, the continued spread of AI should come as no surprise. However, the way in which we implement it may be unethical, particularly in the health care system. While AI may make specific tasks easier or faster to complete, it will not come without negative repercussions, such as job loss, unintended biases, and reduced quality of care for patients. Before millions of dollars are spent fully integrating AI into our healthcare system, it is our
responsibility to ensure that safety, well-being, and high levels of ethical standards are of the utmost
priority.
Bibliography
[1] F. Rossi, "Artificial Intelligence: Potential Benefits and Ethical Considerations," European
Parliament.
[3] P. Hannon, "Researchers say use of artificial intelligence in medicine raises ethical questions," 15
March 2018. [Online]. Available: https://medicalxpress.com/news/2018-03-artificial-intelligence-
medicine-ethical.html.
[4] A. Weintraub, "Artificial Intelligence Is Infiltrating Medicine -- But Is It Ethical?," Forbes, 16 March
2018. [Online]. Available: https://www.forbes.com/sites/arleneweintraub/2018/03/16/artificial-
intelligence-is-infiltrating-medicine-but-is-it-ethical/#7e085c1b3a24.
[5] B. C. Stahl and M. Coeckelbergh, "Ethics of healthcare robotics: Towards responsible research and innovation," Robotics and Autonomous Systems, vol. 86, pp. 152-161, 2016.
[6] W. Cheshire, "Loopthink: A limitation of Medical Artificial Intelligence," Grey Matters, vol. 33, no.
1, pp. 7-12, 2017.
[7] R. Hart, "When artificial intelligence botches your medical diagnosis, who’s to blame?," Quartz, 23
May 2017. [Online]. Available: https://qz.com/989137/when-a-robot-ai-doctor-misdiagnoses-you-
whos-to-blame/. [Accessed 16 March 2018].
[8] The Lancet, "Artificial intelligence in health care: within touching distance," The Lancet, p. 2739, 23
December 2017.
[9] A. Weintraub, "Artificial Intelligence Is Infiltrating Medicine -- But Is It Ethical?," Forbes, 16 March 2018. [Online]. Available: https://www.forbes.com/sites/arleneweintraub/2018/03/16/artificial-intelligence-is-infiltrating-medicine-but-is-it-ethical/#9d92ebc3a24b. [Accessed March 2018].
[10] B. C. Stahl and M. Coeckelbergh, "Ethics of healthcare robotics: Towards responsible research and innovation," Robotics and Autonomous Systems, vol. 86, pp. 152-161, 2016.
[11] B. Spyropoulos and G. Papagounos, "A theoretical approach to artificial intelligence systems in medicine," Artificial Intelligence in Medicine, vol. 7, no. 5, 1995.