
Name: Sonalees Tang

ID: 220403025

“Homework 3”
The 21st century has seen tremendous advancements in technology that have altered humankind. Given how pervasive technology has become in our daily lives, I believe that if I had the chance to develop a machine with a set of built-in rules that could act morally in accordance with human values and ethical standards, I would choose to incorporate some of Isaac Asimov's Laws of Robotics in addition to a few other guidelines.

Subjectively speaking, I think that technological innovations and tools are far more
prevalent in our society than we could ever have dreamed. Transportation, sanitary technology,
and artificial intelligence (AI), including AI-embedded software, are only a few examples. Because
of this, it is necessary to combine Kantianism with utilitarianism in the creation of these technical
instruments, while constantly maintaining human safety and the welfare of all people. In line with
the ideas of utilitarianism and Kantianism, the design of my machine should center on maximizing
its functions while upholding effective task performance. This does not, however, mean that the
machine is allowed to violate the moral and ethical laws that uphold human norms. The rules and
guidelines that I would personally embed in my machine are the following (a rough Python sketch
of how they could be ordered by priority appears after the list):

1. Prioritize human safety, both physical and psychological. The machine should remain safe
and convenient to use in daily life, without harming or adversely affecting anyone.
2. Self-protection from threats that may cause technological malfunctions is not prohibited,
as long as it brings no harm to humans or other living entities.
3. Directions and orders given by humans must be followed in order to assist and provide
help.
4. Despite this, there must be a set of limits on the types of orders that are allowed; for
example, murder, harm, or other illegal activities are prohibited.
5. Set limits on the machine acting on its own accord or in its own "self-interest."
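To make the priority ordering concrete, the following is a rough, hypothetical sketch in Python of how these five rules might be checked before an order is carried out. The Order fields and the evaluate function are my own illustrative assumptions, not part of any existing system.

    # Hypothetical sketch of the five rules above as a priority-ordered check.
    from dataclasses import dataclass

    @dataclass
    class Order:
        """An instruction given to the machine by a human."""
        description: str
        harms_humans: bool = False        # Rules 1 and 4: physical or psychological harm
        is_illegal: bool = False          # Rule 4: murder, harm, or other illegal activity
        endangers_machine: bool = False   # Rule 2: threat that may cause a malfunction
        serves_machine_only: bool = False # Rule 5: pure machine "self-interest"

    def evaluate(order: Order) -> str:
        """Apply the rules in priority order and decide whether to act."""
        # Rules 1 and 4: human safety and legality always come first.
        if order.harms_humans or order.is_illegal:
            return "refuse: order would harm humans or break the law"
        # Rule 5: the machine may not act purely in its own self-interest.
        if order.serves_machine_only:
            return "refuse: order serves only the machine's self-interest"
        # Rule 2: self-protection is allowed only when it harms no living being.
        if order.endangers_machine:
            return "obey, taking protective measures that harm no one"
        # Rule 3: otherwise, human orders are followed to assist and provide help.
        return "obey"

    if __name__ == "__main__":
        print(evaluate(Order("carry groceries")))                               # obey
        print(evaluate(Order("enter a burning room", endangers_machine=True)))  # obey with care
        print(evaluate(Order("harm a bystander", harms_humans=True)))           # refuse

The sketch simply shows that the rules are not all equal: safety and legality checks are evaluated before obedience, which mirrors the ordering of the list above.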

It would be detrimental to society's welfare to build the whole idea of
utilitarianism, which judges whether specific actions or instructions are right or wrong
depending on their results, into the operations of these machines, even though doing so would
reinforce the notion of maximizing welfare. It is important to remember that these consequences
should be weighed not on the basis of the selfish agreements of individuals, but on efficiency and
value. The adoption of virtue ethics will also help society by preventing long-term errors or
malfunctions for its users, as it provides a framework for societal stability and long-term
satisfaction.
In conclusion, when it comes to the use of technology and new creations, it is imperative
to give careful consideration to the safety and welfare of society. Even if a technology's original
intent is to help people with their everyday lives, there must be a set of boundaries in place to
avoid issues and hazards. The harmonious coexistence of Kantianism and utilitarianism will
provide a secure balance between individual and collective interests, and the implementation of
virtue ethics will help ensure that this balance continues to serve society over the long term.
