
Challenges of AI and Data Privacy—And How to Solve Them
Artificial intelligence (AI) has developed rapidly in recent years. Today, AI and its applications are part of everyday life, from social media newsfeeds and spam filters to traffic management in cities, autonomous cars, and connected consumer devices such as smart assistants, voice recognition systems and search engines.

AI and Data Privacy Challenges


AI has the potential to revolutionize society; however, there is a real risk that the use of new tools by states or enterprises could have a negative impact on human rights. The following are some of the major data privacy risk areas and problems related to AI:

 Reidentification and deanonymization—AI applications can be used to identify and track individuals across different devices in their homes, at work and in public spaces. For example, facial recognition, a means by which individuals can be tracked and identified, has the potential to transform expectations of anonymity in public spaces.
 Discrimination, unfairness, inaccuracies and bias—AI-driven identification, profiling
and automated decision making can lead to discriminatory or biased outcomes. People
can be misclassified, misidentified or judged negatively, and such errors or biases may
disproportionately affect certain demographics.
 Opacity and secrecy of profiling—Some applications of AI can be obscure to individuals, regulators or even the designers of the system themselves, making it difficult to challenge or scrutinize outcomes. While there are technical solutions that can improve some systems’ interpretability and auditability, a key challenge remains wherever this is not possible and the outcomes can significantly affect people’s lives.
 Data exploitation—People are often unable to fully understand what kinds of—and how much—data their devices, networks and platforms generate, process or share. As consumers continue to introduce smart and connected devices into their homes, workplaces, public spaces and even bodies, the need to enforce limits on data exploitation has become increasingly pressing.
 Prediction—AI can utilize sophisticated machine-learning algorithms to infer or predict sensitive information from non-sensitive data. For instance, someone’s keyboard typing patterns can be analyzed to deduce their emotional state, such as nervousness, confidence, sadness or anxiety (a minimal sketch of this kind of inference follows this list). More alarming still, a person’s political views, ethnic identity, sexual orientation and even overall health status can be determined from activity logs, location data and similar metrics.
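To make this risk concrete, here is a minimal, hypothetical sketch of such an inference: a toy classifier that predicts an emotional state from keystroke-timing features. All feature names, values and labels are invented for illustration; this is not a description of any real system.

```python
# Minimal, hypothetical sketch: inferring an emotional state from
# keystroke-timing features. All data and labels here are invented;
# the point is only that innocuous metadata can drive sensitive inferences.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [mean key hold time (ms), mean pause between keys (ms),
#            backspace rate, typing speed (chars/sec)]
X = np.array([
    [92, 180, 0.02, 5.1],   # calm
    [88, 170, 0.03, 5.4],   # calm
    [130, 310, 0.11, 3.2],  # anxious
    [125, 290, 0.09, 3.5],  # anxious
])
y = ["calm", "calm", "anxious", "anxious"]

model = RandomForestClassifier(random_state=0).fit(X, y)

# A new, seemingly non-sensitive typing sample yields a sensitive prediction.
print(model.predict([[128, 300, 0.10, 3.3]]))  # likely "anxious"
```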

Solutions and Recommendations


A data protection principle that underpins all AI development and applications is
accountability. This principle is central to all data privacy laws and regulations, and places
greater responsibility on the data controller to ensure that all processing is compliant. Data
processors are also bound by the accountability principle.

The following are two requirements that are especially relevant for organizations using AI:

 Privacy by design—The data controller should build privacy protection into systems
and ensure that data protection is safeguarded in the system’s standard settings. The
tenets of privacy by design require that data protection be given due consideration in all
stages of system development, in routines and in daily use. Standard settings should be
as protective of privacy as possible and data protection features should be embedded at
the design stage.
 Data Protection Impact Assessment—Anyone processing personal data has a duty to assess the risk involved. If an enterprise believes that a planned process is likely to pose a high risk to natural persons’ rights and freedoms, it has a duty to conduct a Data Protection Impact Assessment (DPIA). There is also a requirement to assess the impact on personal privacy systematically and extensively whenever personal data are used in automated decision-making or when special categories of personal data (i.e., sensitive personal data) are processed on a large scale. The systematic and large-scale monitoring of public areas likewise requires documentation showing that a DPIA has been conducted. (A simple screening sketch follows this list.)
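As a rough illustration of the triggers above, the following is a hedged sketch of a pre-purchase screening check. The criteria names are assumptions drawn from the text, not an exhaustive or authoritative legal test.

```python
# Hypothetical screening check for whether a DPIA is likely required, based
# only on the triggers mentioned above. This is a sketch, not legal advice;
# real DPIA criteria depend on the applicable regulation.
from dataclasses import dataclass

@dataclass
class ProcessingActivity:
    automated_decision_making: bool       # decisions with significant effects on individuals
    special_category_data_at_scale: bool  # sensitive personal data processed at large scale
    systematic_public_monitoring: bool    # e.g., camera surveillance of public areas
    high_risk_to_rights: bool             # outcome of the initial risk assessment

def dpia_required(activity: ProcessingActivity) -> bool:
    """Return True if any of the listed triggers applies."""
    return any([
        activity.automated_decision_making,
        activity.special_category_data_at_scale,
        activity.systematic_public_monitoring,
        activity.high_risk_to_rights,
    ])

# Example: an AI system that screens job applicants automatically.
print(dpia_required(ProcessingActivity(True, False, False, True)))  # True
```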

The following are some recommendations for purchasing and using AI-based systems:

 Carry out a risk assessment and, if required, complete a DPIA before purchasing a
system.
 Ensure that the systems satisfy the requirements for privacy by design.
 Conduct regular tests of the system to ensure that it complies with regulatory
requirements.
 Ensure that the system protects the rights of the users, customers and other
stakeholders.
 Consider establishing industry norms, ethical guidelines or a data protection panel
consisting of external experts in the fields of technology, society and data protection.
Such experts can provide advice on the legal, ethical, social and technological
challenges–and opportunities–linked to the use of AI.

AI systems can be a tremendous benefit to both individuals and society, but organizations using
AI must address the risk to data subjects’ privacy rights and freedoms.

As artificial intelligence (AI) becomes increasingly important to society, experts in the field have
identified a need for ethical boundaries when it comes to creating and implementing new AI tools.
Although there's currently no wide-scale governing body to write and enforce these rules, many
technology companies have adopted their own version of AI ethics or an AI code of conduct.

AI ethics are the moral principles that companies use to guide responsible and fair development and
use of AI. In this article, we'll explore what ethics in AI are, why they matter, and some challenges and
benefits of developing an AI code of conduct.

What are AI ethics?


AI ethics are the set of guiding principles that stakeholders (from engineers to government officials)
use to ensure artificial intelligence technology is developed and used responsibly. This means taking
a safe, secure, humane, and environmentally friendly approach to AI.

A strong AI code of ethics can include avoiding bias, ensuring privacy of users and their data, and
mitigating environmental risks. Codes of ethics in companies and government-led regulatory
frameworks are two main ways that AI ethics can be implemented. By covering global and national
ethical AI issues, and laying the policy groundwork for ethical AI in companies, both approaches help
regulate AI technology.
More broadly, discussion around AI ethics has moved beyond academic research and non-profit organizations. Today, big tech companies like IBM, Google, and Meta have assembled teams to tackle the ethical issues that arise from collecting massive amounts of data. At the same time, government and intergovernmental entities have begun to devise regulations and ethics policy based on academic research.

Stakeholders in AI ethics

Developing ethical principles for responsible AI use and development requires industry actors to
work together. Stakeholders must examine how social, economic, and political issues intersect with
AI, and determine how machines and humans can coexist harmoniously.

Each of these actors plays an important role in reducing bias and risk in AI technologies.
 Academics: Researchers and professors are responsible for developing theory-based
statistics, research, and ideas that can support governments, corporations, and non-profit
organizations.
 Government: Agencies and committees within a government can help facilitate AI ethics in
a nation. A good example of this is the Preparing for the Future of Artificial Intelligence report
that was developed by the National Science and Technology Council (NSTC) in 2016, which
outlines AI and its relationship to public outreach, regulation, governance, economy, and
security.
 Intergovernmental entities: Entities like the United Nations and the World Bank are
responsible for raising awareness and drafting agreements for AI ethics globally. For
example, UNESCO’s 193 member states adopted the first ever global agreement on
the Ethics of AI in November 2021 to promote human rights and dignity.
 Non-profit organizations: Non-profit organizations like Black in AI and Queer in AI help
diverse groups gain representation within AI technology. The Future of Life Institute created
23 guidelines that are now the Asilomar AI Principles, which outline specific risks, challenges,
and outcomes for AI technologies.
 Private companies: Executives at Google, Meta, and other tech companies, as well as in banking, consulting, health care, and other private-sector industries that use AI technology, are responsible for creating ethics teams and codes of conduct. This often sets a standard for other companies to follow.

Why are AI ethics important?


AI ethics are important because AI technology is meant to augment or replace human intelligence—
but when technology is designed to replicate human life, the same issues that can cloud human
judgment can seep into the technology.

AI projects built on biased or inaccurate data can have harmful consequences, particularly for
underrepresented or marginalized groups and individuals. Further, if AI algorithms and machine
learning models are built too hastily, it can become difficult for engineers and product managers to correct learned biases. It's easier to incorporate a code of ethics during the
development process to mitigate any future risks.
AI ethics in film and TV

Science fiction—in books, film, and television—has toyed with the notion of ethics in artificial
intelligence for a while. In Spike Jonze’s 2013 film Her, a computer user falls in love with his
operating system because of her seductive voice. It’s entertaining to imagine the ways in which
machines could influence human lives and push the boundaries of “love”, but it also highlights the
need for thoughtfulness around these developing systems.

Examples of AI ethics

It may be easiest to illustrate the ethics of artificial intelligence with real-life examples. In December
2022, the app Lensa AI used artificial intelligence to generate cool, cartoon-looking profile photos
from people’s regular images. From an ethical standpoint, some people criticized the app for not
giving credit or enough money to artists who created the original digital art the AI was trained on [1].
According to The Washington Post, Lensa was being trained on billions of photographs sourced from
the internet without consent [2].

Another example is the AI model ChatGPT, which enables users to interact with it by asking questions. Trained on large amounts of text from the internet, ChatGPT can answer with a poem, Python code, or a proposal. One ethical dilemma is that people are using ChatGPT to win coding contests or write essays. It also raises similar questions to Lensa, but with text rather than images.

These are just two popular examples of AI ethics. As AI has grown in recent years, influencing nearly
every industry and having a huge positive impact on fields like health care, the topic of AI ethics
has become even more salient. How do we ensure bias-free AI? What can be done to mitigate risks in
the future? There are many potential solutions, but stakeholders must act responsibly and
collaboratively in order to create positive outcomes across the globe.

Ethical challenges of AI
There are plenty of real-life challenges that can help illustrate AI ethics. Here are just a few.

AI and bias

If an AI system doesn’t collect data that accurately represents the population, its decisions might be susceptible to bias. In 2018, Amazon came under fire for an AI recruiting tool that downgraded resumes featuring the word “women” (as in “Women’s International Business Society”) [3]. In essence, the AI tool discriminated against women and created legal risk for the tech giant.
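One common, if rough, way to quantify this kind of disparity is to compare selection rates across groups (the "four-fifths rule" heuristic). The sketch below uses invented data purely for illustration; it is not the method used in the Amazon case.

```python
# Hypothetical sketch: comparing selection rates across groups as a rough
# bias check (the "four-fifths rule" heuristic). The data below are invented.
def selection_rate(decisions):
    return sum(decisions) / len(decisions)

# 1 = resume advanced by the screening model, 0 = rejected
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # e.g., male applicants
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # e.g., female applicants

ratio = selection_rate(group_b) / selection_rate(group_a)
print(f"Selection-rate ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential disparate impact; investigate the model and training data.")
```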

AI and privacy

As mentioned earlier with the Lensa AI example, AI relies on data pulled from internet searches,
social media photos and comments, online purchases, and more. While this helps to personalize the
customer experience, there are questions around the apparent lack of true consent for these
companies to access our personal information.

AI and the environment

Some AI models are large and require significant amounts of energy to train on data. While research
is being done to devise methods for energy-efficient AI, more could be done to incorporate
environmental ethical concerns into AI-related policies.
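A rough back-of-the-envelope estimate shows why training energy matters; every number below is a hypothetical placeholder rather than a measurement of any real model.

```python
# Back-of-the-envelope estimate of training energy. Every value here is a
# hypothetical placeholder, not a measurement of any real model.
num_gpus = 256            # accelerators used for training
avg_power_kw = 0.4        # average draw per accelerator, in kW
training_hours = 24 * 14  # two weeks of continuous training
pue = 1.5                 # data-center overhead (power usage effectiveness)

energy_kwh = num_gpus * avg_power_kw * training_hours * pue
print(f"Estimated training energy: {energy_kwh:,.0f} kWh")
# 256 * 0.4 * 336 * 1.5 = 51,609.6, i.e. roughly 51,610 kWh
```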

How to create more ethical AI


Creating more ethical AI requires a close look at the ethical implications of policy, education, and
technology. Regulatory frameworks can ensure that technologies benefit society rather than harm it.
Globally, governments are beginning to enforce policies for ethical AI, including how companies
should deal with legal issues if bias or other harm arises.

Anyone who encounters AI should understand the risks and potential negative impact of AI that is
unethical or fake. The creation and dissemination of accessible resources can mitigate these types of
risks.

It may seem counterintuitive to use technology to detect unethical behavior in other forms of
technology, but AI tools can be used to determine whether video, audio, or text (hate speech on
Facebook, for example) is fake or not. These tools can detect unethical data sources and bias better
and more efficiently than humans.
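As a toy illustration of this kind of automated screening, the sketch below trains a tiny text classifier to flag problematic text. The examples are invented and far too small for real use; a production moderation system would need much more data and human review.

```python
# Toy sketch of automated content screening: a TF-IDF + logistic regression
# classifier that labels text as "flag" or "ok". The training examples are
# invented and tiny; this is an illustration, not a real moderation model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I hate this group of people and they should leave",
    "Those people are worthless and don't belong here",
    "Had a great time at the community picnic today",
    "Looking forward to the neighborhood meeting next week",
]
labels = ["flag", "flag", "ok", "ok"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

print(classifier.predict(["They are worthless and should leave"]))  # likely "flag"
```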

The following list enumerates all the ethical issues that were identified, totalling 39.

1. Cost to innovation
2. Harm to physical integrity
3. Lack of access to public services
4. Lack of trust
5. “Awakening” of AI
6. Security problems
7. Lack of quality data
8. Disappearance of jobs
9. Power asymmetries
10. Negative impact on health
11. Problems of integrity
12. Lack of accuracy of data
13. Lack of privacy
14. Lack of transparency
15. Potential for military use
16. Lack of informed consent
17. Bias and discrimination
18. Unfairness
19. Unequal power relations
20. Misuse of personal data
21. Negative impact on justice system
22. Negative impact on democracy
23. Potential for criminal and malicious use
24. Loss of freedom and individual autonomy
25. Contested ownership of data
26. Reduction of human contact
27. Problems of control and use of data and systems
28. Lack of accuracy of predictive recommendations
29. Lack of accuracy of non-individual recommendations
30. Concentration of economic power
31. Violation of fundamental human rights in supply chain
32. Violation of fundamental human rights of end users
33. Unintended, unforeseeable adverse impacts
34. Prioritisation of the “wrong” problems
35. Negative impact on vulnerable groups
36. Lack of accountability and liability
37. Negative impact on environment
38. Loss of human decision-making
39. Lack of access to and freedom of information

Three categories of ethical issues of artificial intelligence
