
Societal risks and rewards associated with generative AI

Physicist Richard Feynman once said, “I think I can safely say that nobody understands quantum mechanics.” Following in the same vein, it’s safe to say that nobody understands generative AI…yet. Generative AI tools such as ChatGPT, Playground AI, Midjourney, and Stable Diffusion have the potential to revolutionize businesses, but we must understand the risks and rewards they can create.
Generative AI refers to a category of AI algorithms that generate new outputs based on the data they have been trained on. It relies on deep learning techniques such as generative adversarial networks and transformer-based language models, and it has a wide range of applications, including creating images, text, and audio. In the case of ChatGPT and other text generators, the model “learns” from text data to understand context, relevance, and how to generate human-like responses to questions. Instead of just replicating existing text, generative AI algorithms identify patterns in text and then create something original. Generative AI can also transform data, such as turning an audio recording into text, or text into actual speech, as in a speaking video avatar. It can also be used to translate languages, improve the resolution of existing images, and even transform images from one medium to another, for example turning photographs into paintings in a specific artistic style.
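To make this concrete, here is a minimal sketch of how a text-generation model produces new text from a prompt. It assumes the open-source Hugging Face transformers library and the small GPT-2 model as illustrative stand-ins; the article itself does not name a specific toolkit.

# Minimal sketch: producing new text from a prompt with a pretrained model.
# Assumes the Hugging Face "transformers" library and the small "gpt2" model;
# these are illustrative stand-ins, not tools named in the article.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Generative AI can help businesses by"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The model continues the prompt with text shaped by patterns it learned
# during training, rather than copying any single source document.
print(outputs[0]["generated_text"])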
From a professional standpoint, generative AI puts us on
the brink of a new wave of software creativity and the
seemingly limitless business solutions that can result from
it. From a societal standpoint, generative AI has the
potential to alter civilization to the degree that the invention
of the wheel, the printing press, or power-generating
machines did. And as with any technological advancement, there are significant risks to consider.
1. Impact on the labor market
One major concern around generative AI is the near-term
effect it could have on the labor market. As economist Paul
Krugman recently wrote in The New York Times: “It’s
possible that in some cases, AI and automation may be
able to perform certain knowledge-based tasks more
efficiently than humans, potentially reducing the need for
some knowledge workers.” Krugman listed data analysis,
research, and report writing as examples of knowledge
work that could be performed by AI, which means workers
in those fields could eventually find themselves out of a job
or earning much less.
2. The war for reality
Every day, it’s becoming harder and harder to distinguish
between what’s real and what’s not. There are now serious
challenges for the public in assessing reality and trusting
that what they’re seeing is authentic. AI-generated text,
images, and videos only exacerbate these challenges,
requiring additional software that can flag AI-generated
content.
3. Who created what?
The person (or machine) doing the creating can be called into question too. Last year, over a period of just over three months, more than 10 million people used the Stable Diffusion text-to-image tool to generate images.4 Text-to-video models soon followed, allowing users to generate video clips, 3-D pictures, and animations. These images and videos are now out there to be scraped for future generative AI models, further diffusing (and confusing) who originally created what.
4. Existential crisis in education
Generative AI raises even more questions in schools and universities, where intellectual achievement depends on the student’s own thoughts, research, and writing.
Though derived from existing content, AI-generated
content is essentially original. This makes it difficult to
correctly distinguish between, say, academic papers that
were authentically produced by a human and text that was
generated by a machine. On the plagiarism side, software
that identifies and watermarks content from ChatGPT is
already being developed to alert educators to
cheating.5 But it’s still unclear if watermarking will work for
all text-generating tools that come to market.
5. AI in the hands of malicious actors
In the wrong hands, generative AI can be used for truly nefarious purposes. Malicious actors can use it to create everything from propaganda to phishing emails and malware, to fake websites and businesses, to text that’s meant to impersonate someone.6 It can even be used to create new forms of warfare and weapons.7 Because AI-generated text, images, and other outputs can seem so
authentic, they can be difficult to detect before real harm is
done. Business professionals and politicians alike have to
be prepared for the reputational fallout, social unrest, and
danger that can come from the unidentified and
unmitigated use of generative AI by threat actors.
Though there’s more to be done to address the risks
associated with generative AI, we also can’t deny its
benefits, especially in our connected world.
While connected enterprises need to approach generative
AI with caution, there’s no doubt it’s an exciting time to see
how this technology will shape the future—and the
positives that can come from it. Generative AI can be
used to enhance society and come up with important
solutions to some of our biggest problems today.
1. Maintaining productivity in a shrinking workforce
As older populations age out of the workforce and too few younger workers are available to fill open roles, the economic consequences could be severe. To maintain economic
stability and fuel growth, countries around the world need a
productivity boost. Conceding the bulk of information work
to generative AI algorithms and models can speed up,
increase, and even improve output without adding human
capital, introducing far greater efficiency in writing,
reporting, and analyzing. For example, the webMethods
ChatGPT API connector is already helping businesses
incorporate AI-generated text into their business processes
by integrating ChatGPT with existing software and
applications.
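The connector itself isn’t shown here, but as a rough illustration of what such an integration does under the hood, the sketch below calls OpenAI’s public Chat Completions API directly from Python to summarize a support ticket inside a business workflow. The endpoint, model name, and environment variable are assumptions based on OpenAI’s published API, not details taken from this article.

# Rough sketch: calling the ChatGPT API from an existing application.
# A generic illustration of OpenAI's public Chat Completions endpoint,
# not the webMethods connector; model name and endpoint are assumptions.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]  # hypothetical environment variable

def summarize_ticket(ticket_text: str) -> str:
    # Ask the model to summarize a customer ticket for a downstream workflow.
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [
                {"role": "system",
                 "content": "Summarize customer support tickets in two sentences."},
                {"role": "user", "content": ticket_text},
            ],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

print(summarize_ticket("The export button in the billing report stopped working after the last update."))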
2. Creating new opportunities for the workforce
It was long thought that generative AI would only automate jobs involving repetitive tasks. The reality, however, is that generative AI tools have the power to affect positions at all levels and unlock new opportunities for different types of jobs and job titles. This impact could change many industries, but at a basic level, generative AI still requires an actual person to interact with it. It is unclear how many new jobs generative AI will create; however, we are already seeing some appear, mostly centered on the ability to write good prompts for the AI.
3. Contributing to important scientific issues
Since generative AI is especially adept at consuming large,
complex datasets, it could become an invaluable player in
various scientific fields, contributing to breakthroughs in
cancer research, sustainable energy sources, climate and
environmental change, and other critical issues facing
humanity.
4. Unleashing software creativity
When we talk about the potential of generative AI, we’re
talking about models with hundreds of billions of
parameters—on par with the number of cells in the human
brain. It is a truly mind-blowing technology. Creative
professionals can develop domain-specific AI-based tools
for multitudes of niche use cases that stretch the
imagination, enabling new ways of connecting people,
technology, and processes along with new business
models.
Generative AI tools like ChatGPT are disrupting the world
as we know it. There are still many unknowns about how
generative AI will ultimately be used, by whom, and for
what purposes. But the technology offers as much promise
as it does risk. For enterprises seeking to create the connected experiences their employees, partners, and customers demand, generative AI has enormous potential to improve business processes and enable new business models. Even so, it is just one piece of the vast puzzle that is the connected enterprise.
In my opinion, AI is probably our most important invention
since electricity. We’re at a critical tipping point and if we
can power through the hurdles ahead, there’s an exciting
future that awaits us all. We’ll need to collaborate and
adopt a framework for responsible AI innovation, including data governance (quality, diversity, and security of the data used), model governance (transparent testing and validation for accuracy, reliability, bias, and fairness), content governance (ethical and legal use of content, attribution, and verification of authenticity and accuracy), and regulatory governance (alignment with law and ethics, protecting rights, and enforcing accountability). There’s plenty of work to be done. We’re standing at the cusp of a
new era. AI is a powerful and promising technology that
can help unlock new possibilities for human creativity and
innovation. However, the challenges and risks are very
real and need to be addressed responsibly. We’ll inevitably
hit dizzying highs and nauseating lows. By adopting a framework for responsible innovation, though, we can
ensure that this technology is largely used for good and
not evil, for benefit and not harm, for trust and not distrust.

Palak Shinghal
Modern School, Barakhamba Road, New Delhi, India
