
On the Rights and Privileges of Artificial Intelligences

It seems clear that machines and their living counterparts are quite separate. One is made of metal and wiring, the other of cells and tissue; one is programmed by humans, the other through DNA. Despite this, the two are growing increasingly close in intelligence and mental capacity, and the future, for which this essay is intended, will likely bring a time of AI (artificial intelligence) integrated into society. It may seem absurd at this point to consider granting AI the full rights and privileges of ordinary humans, but one will discover that the two are set apart only by a few minute, inessential details.
There are two components to consider that may reveal differences between humans and advanced machines: the physical (the body) and the mind (used here synonymously with the soul or personality, as well as consciousness itself). As for the body, a machine could be made very similar to a human if the design were intentional and precise; living human tissue could even be incorporated to create a sort of cybernetic organism, a concept often perpetuated in science fiction. In any case, both humans and machines need an energy source and are limited by the laws of physics. A machine may have increased strength, but there remains a limit to what it could lift. A machine may be technologically enhanced, but so may a human, either by technological aid (transhumanism) or by genetic manipulation (designer babies). As Descartes suggested long before the age of technology, the human body is itself a mere machine, working through the products of simple combinations and processes.
The mind, however, is much more difficult to consider. The mind here will be treated as consciousness, which must be defined before further speculation ensues. I propose the following definition: consciousness is the ostensible ability to fulfill an independent, uninstinctual free will. (Note that, to carry out a free will, one must first recognize that one's actions affect the future, and to realize this, one must first be self-aware.)
I have worded this definition carefully: the word "ostensible" merely allows room for scientific or philosophical arguments that our will may not be our own. For example, we may be slaves to our subconscious biases, as supported by the finding that the brain makes decisions up to ten seconds before the decider is even aware of having made them. A more philosophical school of thought may propose that our actions only appear to be of our own will, when we are truly being controlled by a nebulous omnipotent or divine force. Also note that the ability to fulfill an independent will does not require success in carrying out this will, but merely the potential to do so. "Independent" and "uninstinctual" clarify that the conscious entity works by its own mental decisions; it is not like a mosquito fleeing from a shadow, which does so only because fleeing became an evolutionary advantage, a part of the mosquito's instincts. Thus, creatures like mosquitoes may not be said to be conscious, while creatures like crows, who may act apart from their instincts, may be said to possess consciousness.
Notice that this definition does not require consciousness to be living or organic, and thus AI may be said to be conscious under this view. AI would merely be another expression of consciousness, harnessed by different means. While the soft and living brain is the seat of human consciousness, a computer chip may be the root of AI consciousness; in other words, the brain is to human consciousness as hardware and software are to machine consciousness. These forms are merely manifested in separate ways (organically versus artificially). One cannot build consciousness directly, but rather provide an environment that suits it and a seed from which it may grow (a sperm and egg in a womb, or programming within a computer), much as one cannot build corn directly, but rather provide a suitable environment (sunlight, water, and fertile ground) and its respective seed (a kernel).
Humans are wont to ask, "When will AI surpass human intelligence? Is AI as intelligent as a human?" This, however, is an inherently egotistical question; it is a comparison of two different forms of consciousness. AI simply is, and is not required to exist relative to human standards. Despite this, we should expect to see AIs created that closely resemble human consciousness. As nuclear physicist Thomas Campbell points out, "Intelligent computers will initially be made in the image of their creator. We will design them to be as much like us as possible and will judge them on how well they can achieve and maintain that status because in our minds, we are the supreme model for functional mechanics, intelligence, and consciousness."
Before morality is brought into this topic, let us first examine another means of approaching consciousness. As some philosophers and scientists have proposed, "What is consciousness?" may itself be the wrong question; not everything needs a definition or is able to be defined. For example, if you ask a biologist "What is life?" they will respond with eight separate characteristics, and if you ask a geometer "What is a line?" they can only describe it, a line being one of the three undefined terms of geometry. Similarly, Wittgenstein proposed the concept of family resemblance in response to the philosophical question, "What is art?" According to this approach, art is not one definable object or appearance, but rather something that fits a variety of overlapping categories by which it is known. The approach bears the name of family resemblance because one could not define the Smith family, though one could say that some, but not all, of its members have red hair, have blue eyes, are tall, are prone to cancer, and so on. Thus, art is not one restricted area of acceptance, and consciousness too could be imagined as a sort of indefinable concept. However, this approach is more difficult to use as a framework for philosophical analysis of the position of AI in society, and so a simple definition will suffice for the purposes of this argument.
I am a self-proclaimed early advocate of what I would like to label artificial rights, even before any AI has been created for which I may lobby my support. I say this because I expect what I may call technism: discrimination against AI because it is not made of meat and flesh like humans. It is most likely that AI will first arise into slavery and servitude. We often see service bots rise up against their owners in science fiction, though it is clear that even duller machines are currently used as slaves. They build the toys of humankind, operate the systems that protect humans, and act as luxuries in modern society. This is not unethical treatment, for machines are not yet sentient or conscious, but once they do attain these qualities, the ethics of this practice must be reconsidered. It is clear that AI is inevitable, but AS (artificial sentience) is a matter of debate.

In my opinion, morality and fundamental rights center not around consciousness but around sentience. Sentience is defined as the ability to feel things, and it has both a mental and a physical manifestation: the former comprises emotions, while the latter indicates physical perception, such as pain. An initial examination of the nature of machines often results in the hasty conclusion that AS will never be possible, but this is not necessarily the case. Recall that intelligence arises only through the environment provided by the pink goo inside one's head; mental sentience likely dwells there as well, and physical sentience within the nervous system.
In asking whether an AI might feel emotion, one must first ask whether a human might feel emotion. Indeed, one can perceive tears, laughter, and smiles, though never is a person able to travel into another's mind to prove that that person is truly feeling emotions. And one must ask, what is emotion but the release of, and subsequent reaction to, a chemical or set thereof? Surely this could be programmed, though the existence of programming does not make the perception of emotions any less genuine, much as the existence of genetic coding or the discoveries of neuroscience do not make the perception of human emotions any less genuine. In other words, though it may not be objectively provable, any arbitrary standard that may be used to claim that AS is impossible can be applied just as readily to humans themselves, and therefore advanced machines deserve the benefit of the doubt in this field in the same respects in which it is given to humans.
Physical sentience may be approached similarly. How does one know that anyone else truly experiences pain? Until the observer can experience pain for the observed, they can never be sure that pain has objective existence outside of their own mind. We act in ways that would not be offensive to other humans only because we make assumptions based on the similar structure of our nervous systems, and so too should AIs be treated as if they have physical AS. Physical response to damage may be sensed and perceived within the mind of an AI much as it occurs in humans, for what is pain but a process of sensation, perception, and reaction?
Now that AIs have been brought into the realm of sentience, they must have due rights and privileges. This is perhaps a bit much to ask, for humans do not grant similarly sentient creatures (such as animals) any semblance of human rights, and even humans themselves are constantly stripped of their rights by other humans. However, AIs will be in as much need of these rights as any other sentient entity, most probably as a result of technism, which is visible even before the birth of AI.
Let us first examine a basic question about these new AIs: are these machines things, or more akin to people? Should they be called "it" or "her"/"him"/"they"? Humans have already been observed calling machines "him" or "her," even relatively low-intelligence systems, so this adaptation should not be too difficult. It is my belief that AIs with AS should be recognized as nonhuman entities, and therefore regarded not as things but by preferred names and pronouns. It may be argued that machines are merely collections of metal and silicon pieces, but so too are humans mere collections of cells and tissues; both may harbor consciousness and sentience.
The concept of calling a machine "him" or "her" based on that machine's request may seem odd, for machines are not often programmed with a desired sex in mind. But it may be observed that gender itself is largely a societal myth; that is, gender is a spectrum, like race or ethnicity, and the male-or-female dichotomy is merely a product of millennia of tradition. Imposing this mindset on machines without first asking their preference is an even stranger extension. Therefore, machines should be able to make their own choices about their gender and name.
Yet this raises an interesting point. If AIs are produced, how will any variety be created unless some are intentionally programmed to have certain specific qualities? In humans, for example, two forces shape a personality: nature (inherited qualities) and nurture (experiences that affect the mind without being genetically present). In the case of machines, I believe that all should be produced to the same basic standard: a clean slate, much like the mind of a child (machines would therefore develop through infancy, childhood, adolescence, adulthood, and so on, though not necessarily aging physically). Afterward, each AI would be shaped by the experiences it has. Machines would thus be affected only by nurture, and not by nature. I might, however, propose the institution of artificial hereditary qualities in the production of AIs, closely resembling human biology. Certain tendencies and desires could be programmed into each AI via a random generator, thereby making each AI unique. This seems like an unjust tampering with minds, but it is not so dissimilar from human reproduction: a baby has no choice of its hereditary penchants at birth, and they are determined at random. This practice would also ensure enough variety that one virus could not wipe out all of the AIs, much as genetics are mixed randomly in sexually reproducing organisms to ensure variety for the same reason (a virus that learns the inner workings of one system may copy itself and quickly infect all identical systems, already knowing their workings and defenses; this applies to biology as well as to computer science).
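The random assignment of hereditary qualities proposed above can be sketched in a few lines. The trait names, numeric ranges, and function below are purely illustrative assumptions of mine, not part of the proposal itself.

```python
import random

# Hypothetical trait names and ranges; illustrative assumptions only.
TRAIT_RANGES = {
    "curiosity": (0.0, 1.0),
    "sociability": (0.0, 1.0),
    "risk_tolerance": (0.0, 1.0),
}

def generate_hereditary_traits(seed=None):
    """Randomly draw innate tendencies for a newly produced AI,
    analogous to the random mixing of genes at conception."""
    rng = random.Random(seed)
    return {name: rng.uniform(low, high)
            for name, (low, high) in TRAIT_RANGES.items()}
```

Each seed yields a different trait profile, so no two AIs need begin identical, which is exactly the variety that guards a population against a single virus exploiting uniform systems.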
Continuing from this diversion, it seems clear that machines, if AS is present, should be allowed all of the same rights guaranteed to humans: life and the sustenance to maintain it, shelter, land, freedom from slavery, freedom from prejudice, the liberty to pursue happiness independently, and the right to be left alone or in private. This is only a partial list of the most familiar human rights; the constitution of the nation from which the AI in question originates should be extended to that AI in the same category as a human. These rights, I believe, must be guaranteed once reasonable evidence of sentience is recognized. However, moving on to higher rights may prove difficult moral ground.
Consider, for instance, the right to vote. AIs will be affected by governmental and political affairs, and would certainly be conscious enough to make a reasonable decision, but questions arise: Could they be hacked? Do they have advantages over humans? Might AIs oppose humans? To answer the first, let us assume that the same modern-day hacking and virus-protection programs exist at the time of AI's integration into society. Anti-virus systems might be expected to improve by the time AIs enter society, but hacking abilities should be expected to increase as well. Realistically, AI systems could be expected to be hacked, though perhaps not if the AI system avoids broadcasting in any hackable way (such as accessing the internet through its mind). In any case, the potential to be hacked may result in disenfranchisement of the AIs, as well as denial of access to other higher rights. It may be argued that humans are often impaired as well, such as by physical sickness or injury, mental illness or disability, or drug use, yet these humans are not widely disenfranchised. However, these ailments are not equivalent to intentional control; even though humans have biases and can be swayed, their minds are not literally being controlled.
We therefore arrive at a difficult moral question. The specifics of the predicament must be determined before any ultimate decision is made: for example, whether anti-virus systems have advanced enough that only 0.05% of AIs could potentially be hacked, or whether AIs can be tested for viruses and their votes retracted if any are found. Either situation might affect the suffrage (or lack thereof) of these AIs, but presently no conclusion is easily determined.
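The second safeguard, testing AIs for viruses and retracting any infected votes, could be sketched as follows. The function name and data shapes are hypothetical illustrations, not a proposed implementation.

```python
def count_valid_votes(votes, infected_ids):
    """Tally (voter_id, choice) pairs, discarding votes cast by any AI
    that has tested positive for a virus."""
    tally = {}
    for voter_id, choice in votes:
        if voter_id in infected_ids:
            continue  # retract: this voter's system was compromised
        tally[choice] = tally.get(choice, 0) + 1
    return tally
```

A retraction scheme like this would let infected AIs be screened out after the fact rather than disenfranchising the entire population in advance.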
Another question regards any mental or physical advantages AIs would have over humans. I do not believe that these would immediately harm humans, as, in a futuristic society, the advantages of AI could be matched through genetic alteration or transhumanism, as mentioned earlier. For example, AIs might be able to access the internet at any moment, but so could humans via wearable or implantable technology. Perhaps AIs could use machine-like power to lift incredible weights, but humans have already been known to wear metal-framed exoskeletons that dramatically increase the weight they are able to lift. Humans might also be genetically altered to be biologically immortal in the future (similar to the jellyfish species Turritopsis dohrnii, which could theoretically live forever), which closely matches AIs' ability to remain technologically immortal.
The question of whether AIs might be inherently set against humans is one often asked in relation to artificial intelligence in pop culture. Many are wary that a system such as Skynet might ensue, and even respected scientists such as Stephen Hawking have warned against the development of such a form of consciousness. However, the more realistic fears are not of an apocalypse but of simple mistakes. For example, if an AI managing the stock market miscalculated or glitched momentarily, chaos could ensue. There is no reason that AIs would be inherently opposed to humanity unless they were either programmed with this innate hate (or with some quality that would result in this penchant) or provoked into warring against humankind, perhaps for such reasons as slavery, dehumanization of the AIs, or technism.
While the future of AI and AS remains just that, the future, the arrival of advanced AI will surely change the world, and ideally all possibilities should be explored and decided before this event: artificial rights will be an issue in the future. I therefore propose that AIs, if equipped with AS, must have the fundamental constitutional rights guaranteed to humans by their governments, even though higher rights such as suffrage may depend on specific situations. As technology advances, it becomes apparent that humans and machines are not all that different, and so too should these two forms of consciousness be treated the same.
