
Can AI help Gen Z workers make up lost ground?

Gen Z workers started their careers at a tough time. Will their native
AI fluency help them get ahead?
Gen Z has had a hard landing into the workforce. Starting jobs amid the global pandemic, many
of these new workers have missed out on gaining essential hard and soft skills usually
gleaned by working alongside older colleagues.

However, as the first truly digital generation, their innate fluency with technology could help
them make up some of that ground – especially as AI becomes a hugely important part of the
modern workplace.

Emma Parry, a professor of human resource management and head of the Changing World of
Work Group at Cranfield School of Management, UK, has seen how Gen Z’s openness to new
technology is putting them at the forefront of this new way of working.

“With AI, people tend to fall into a dystopian or utopian outlook, and younger people normally
fall into the latter,” she says. “While there isn’t huge amounts of quality research into this yet,
anecdotally, young people are more accepting and willing to adapt AI into their daily lives and at
work.”

Stephanie Forrest, the founder of TFD, a London-based strategic communications consultancy in the tech space, has seen first-hand how Gen Z employees use AI technology with ease, and are
quickly becoming essential in the workplace. “They don’t question [the technology] – they
simply see it as a way to optimise what they are already doing.”

At TFD, she says, Gen Z were the first employees to experiment with generative AI tools such as
OpenAI’s ChatGPT for tasks including admin, research and email composition. "Since AI is so
new for everyone, it puts Gen Z employees on a level footing with other members of a team,
providing them with a way to meaningfully contribute. AI enables forward-looking companies to
learn from younger employees, in terms of how they use technology to be more efficient," says
Forrest.

Professor Weiguo (Patrick) Fan at the University of Iowa’s Tippie College of Business, US,
notes how many young people are prioritising learning these skills as a “strategic career move”,
whether through experimenting in their personal lives, taking online courses or pursuing
traditional educational avenues.

This knowledge can help Gen Z contribute to businesses in ways their less AI-fluent colleagues
may not be able to, which can make younger employees especially valuable to their employers.
“Gen Z employees can leverage their AI knowledge to innovate and streamline processes and
help bridge the gap between technical and non-technical roles,” says Fan.

This can help them stand out. For instance, at VEM Medical, a US-based medical tool company,
“Young employees' ability to use AI technology to automate tedious jobs and optimize
workflows has grown our productivity dramatically,” says Derrick Hathaway, sales director.

Additionally, says Fan, “Gen Z's familiarity with AI helps these younger employees adapt to
these changes and understand the implications of AI on their roles, making them flexible and
dynamic employees.” Fan adds how these skills are especially valuable in industries including
technology, finance, healthcare, marketing and manufacturing, where companies are rapidly
integrating AI and machine learning.

Young people may also be key when it comes to shaping ethical AI practices, a rising concern
across all industries, as the technology gathers pace. “Gen Z, known for their values-driven
approach to work, could play an essential role in leading to better user experiences and more
inclusive technology,” says Fan.

Additionally, one of Gen Z’s challenges entering the workforce as remote employees has been
a sense of disconnection from older colleagues. But, says Forrest, their knowledge of AI could help them
build those bridges – and even reverse mentor. Forrest has seen this in action across TFD’s 20
multigenerational employees. “We are bringing people with both little and lots of experience
together and learning from each other.”

However, it’s important to recognise that fluency in one skill won’t necessarily erase some of the
professional barriers Gen Z has to overcome.

While technical know-how will always provide professional value, Parry argues workers might
still have a way to go before AI skills are truly needed across the board. “We picture that all
organisations are AI focused, but that’s not the reality yet. The technology is moving quickly but
the adoption perhaps isn't as quick as we think,” she says.

Employers are also looking for well-rounded employees, and technical skills are only one part
of the puzzle. “Other skills, such as communication, teamwork, problem-solving
and adaptability continue to be highly valued,” says Fan. He also highlights how it's possible to
train employees in AI, so young workers won’t end up being the only employees with these
skills.
Ultimately, AI fluency might not entirely counter the setbacks Gen Z has experienced while
entering the workforce during the pandemic. Yet it seems to be helping – giving young workers a
well-needed edge, and setting them up well for a career in an ever-evolving workplace.

Why Gen Z workers are starting on the back foot

Having only known virtual work settings, some young employees lack exposure to the workplace norms that set them up to succeed.
In some ways, Gen Z employees are thriving in the new world of work. They’ve entered the
workforce at a time when flexibility is commonplace, digital communication is ubiquitous and
employees have the leverage to ask companies for what they want.

At the same time, however, some experts are concerned that remote and hybrid work
arrangements are already leaving some early-career workers behind. Many of these worries
revolve around the absence of workplace intangibles: a lack of the casual conversations and
informal observations that traditionally teach young employees how to act. Amid virtual settings,
some experts believe entry-level workers are missing out on picking up vital cues that guide
behaviour, collaboration and networking.

“It’s particularly centred around communication,” explains Helen Hughes, associate professor at
Leeds University Business School, UK. “It’s things like understanding norms, values and
etiquette: Who should you call? How should they be contacted? Are some people out of
bounds?”

These sorts of questions were once promptly answered in face-to-face settings – a desk drop-by,
or quick tag in the office kitchen. Navigating office politics would also be intuitive, based on
subtle but tangible cues: fixed seating arrangements tend to indicate hierarchy; body language
suggests when colleagues are most approachable. “Social comparison is harder in a remote or
hybrid environment – you can’t see everyone around you and get a sense of how you’re doing,”
says Hughes.

But with so many young employees now working either remotely or hybrid, a once-natural
encounter has now been replaced by an additional layer of outreach, which is inherently more
complicated.
Hughes says this makes even mundane work tasks harder to accomplish. “Miscommunication is
easy in a virtual environment; for example, incorrectly inferring tone from an email. There can
be a lack of understanding of when to set up a meeting – whether it’s appropriate to wait and
build a list of questions, or set up a call each time something is needed.”

Without being able to glean behavioural cues from colleagues in offices, young employees can
find it hard to strike the right balance between appearing either overeager or idle, says Hughes.
“They can have broader anxieties around how visible they should strive to be. In a hybrid or
remote environment, it can be too easy to fall off the radar and find their work goes unnoticed.”

The result, says Hughes, is that many early-career workers prioritise the impression they make at
work – leading to behaviours like presenteeism and procrastination – rather than their actual job
performance. “They may ask too many questions to appear keen, or they might not ask anything
at all because they’re worried how they’ll be perceived by colleagues.”

Ultimately, says James Bailey, professor of leadership development at the George Washington
University School of Business, based in Washington, DC, chance encounters with colleagues
help build trust, fostering an environment of risk-taking and innovation. “Serendipity is a big part
of face-to-face office life that can’t be replicated online,” he says. “Some of our best ideas come
from watercooler chats with colleagues – if you want to replicate those casual conversations on
Zoom, you have to set up an appointment in someone’s calendar.”

Bailey says that unless the previous model of face-to-face learning can be updated for the new
age of work, alongside their current challenges, some Gen Z employees may lack leadership
qualities needed for the future. “They may not struggle with executing a specific task
independently, but they may be left with underdeveloped cross-functional skillsets that are
required to take a strategic view across a whole organisation – the role of a leader.”

Of course, it’s not the case that every young worker who is at least partially remote is struggling.
But for many of these inexperienced employees, virtual work settings can exacerbate new job
stress. “Many of these issues are still anxieties for new employees in a typical office
environment, but remote working seems to accentuate the transactional aspects,” says Hughes.

This lack of osmosis learning has left experts concerned Gen Z employees won’t feel the true
cost of these circumstances until later in their careers.

AI anxiety: The workers who fear losing their jobs to artificial intelligence
Many workers worry AI is coming for their jobs. Can we get past the
fear and find a silver lining?
Claire has worked as a PR at a major consulting firm, based in London, for six years. The 34-
year-old enjoys her job and earns a comfortable salary, but in the past six months, she’s started to
feel apprehensive about the future of her career. The reason: artificial intelligence.

“I don’t think the quality of the work that I’m producing could be matched by a machine just
yet,” says Claire, whose last name is being withheld to protect her job security. “But at the same
time, I’m amazed at how quickly ChatGPT has become so sophisticated. Give it a few more
years, and I can absolutely imagine a world in which a bot does my job just as well as I can. I
hate to think what that might mean for my employability.”

In recent years, as headlines about robots stealing human jobs have proliferated – and as
generative AI tools like ChatGPT have quickly become more accessible – some workers report
starting to feel anxious about their futures and whether the skills they have will be relevant to the
labour market in years to come.

In March, Goldman Sachs published a report showing that AI could replace the equivalent of
300 million full-time jobs. Last year, PwC’s annual global workforce survey showed that almost
a third of respondents said they were worried about the prospect of their role being replaced
by technology in three years.

“I think a lot of creatives are concerned,” says Alys Marshall, a 29-year-old copywriter based in
Bristol, UK. “We’re all just hoping that our clients will recognise [our] value, and choose the
authenticity of [a human] over the price and convenience of AI tools.”

Now, career coaches and HR experts are saying that although some anxiety might be justified,
employees need to focus on what they can control. Instead of panicking about possibly losing
their jobs to machines, they should invest in learning how to work alongside technology. If they
treat it as a resource and not a threat, add the experts, they’ll make themselves more valuable to
potential employers – and feel less anxious.
Fear of the unknown

For some people, generative AI tools feel as if they’ve come on fast and furious. OpenAI’s
ChatGPT broke out seemingly overnight, and the “AI arms race” is ramping up more every
day, creating continuing uncertainty for workers.
Carolyn Montrose, a career coach and lecturer at Columbia University in New York,
acknowledges the pace of technological innovation and change can be scary. “It is normal to feel
anxiety about the impact of AI because its evolution is fluid, and there are many unknown
application factors,” she says.

But as unnerving as the new technology is, she also says workers don’t necessarily have to feel
existential dread. People have the power to make their own decisions about how much they
worry: they can either “choose to feel anxious about AI, or empowered to learn about it and use
it to their advantage”.

PwC’s Scott Likens, who specialises in understanding issues around trust and technology, echoes
this. “Technology advancements have shown us that, yes, technology has the potential to
automate or streamline work processes. However, with the right set of skills, individuals are
often able to progress alongside these advancements,” he says. “In order to feel less anxious
about the rapid adoption of AI, employees must lean into the technology. Education and training
[are] key for employees to learn about AI and what it can do for their particular role as well as
help them develop new skills. Instead of shying away from AI, employees should plan to
embrace and educate.”

It may also be helpful to remember that, according to Likens, “this isn’t the first time we have
encountered industry disruptions – from automation and manufacturing to e-commerce and retail
– we have found ways to adapt”. Indeed, the introduction of new technology has often been
unnerving for some people, but Montrose explains that plenty of good has come from past new
developments: she says technological change has always been a key ingredient for society’s
advancement.

Regardless of how people respond to AI technology, adds Montrose, it’s here to stay. And it can
be a lot more helpful to remain positive and look forward. “If people feel anxious instead of
acting to improve their skills, that will hurt them more than the AI itself,” she says.
This isn’t the first time we have encountered industry
disruptions – from automation and manufacturing to e-commerce and retail – we have found ways to adapt – Scott
Likens
Unique value of humans

Although experts say some level of anxiety is justified, it may not be time to hit the panic button
yet. Some research has recently shown fears of robots taking over human jobs might be
overblown.
November 2022 research by sociology professor Eric Dahlin at Brigham Young University in
Utah, US, showed that not only are robots not replacing human workers at the rate most people
believe, but some people also misperceive the rate at which automation tools are taking over. His
data showed about 14% of workers said they had seen their job replaced by a robot. But both
workers who had experienced job displacement due to technology as well as those who hadn’t
tended to overstate the pace and volume of the trend – their estimates were far off reality.

“Overall, our perceptions of robots taking over is greatly exaggerated. Those who hadn’t lost
jobs overestimated by about double, and those who had lost jobs overestimated by about three
times,” said Dahlin in presenting his research. While Dahlin said that some new technologies
would likely be adopted and implemented without considering all implications, it’s also true that
“just because a technology can be used for something does not mean that it will be
implemented”.

Stefanie Coleman, a principal in consultancy EY's people advisory services business, also points
out that we shouldn’t expect the workforce of the future to be “binary”. In other words, a
combination of both humans and robots will always need to exist.

"Humans will always have a role to play in business by performing the important work that
robots cannot. This kind of work typically requires innate human qualities, such as relationship
building, creativity and emotional intelligence,” she says. "Recognising the unique value of
humans in the workforce, when compared to machines, is an important step in navigating the
fears that surround this topic."

A few weeks ago, Claire, the PR worker, decided she wanted to start learning more about the
technology that’s transforming her industry. She’s now researching online courses through which
she hopes to learn to code. “A lot of tech used to scare me, so I just ignored it, but based on
everything I’m seeing, that’s sort of stupid,” she says. “Ignoring something definitely won’t
make it go away, and I’m slowly starting to understand that if I take the time to make it less
unfamiliar – which makes it less scary – it might actually be able to help me a lot.”

Will the hybrid office ever feel like home?

Employees may never have a desk that feels like ‘theirs’ again, but the
trade-off might mean an agile office that works for everyone.
Before the pandemic and widespread rise of virtual work settings, employees often made their
desks a second home: a sweater slung over the back of a chair, a favourite mug by the keyboard,
a pile of books stacked behind the monitor.

Psychological research shows why some workers may feel a need to personalise their
workspaces – primarily, it increases the significance that place holds for them.

“Research has shown that the more an employee's ‘place identity’ increases, the more they
become attached to the company,” explains Sunkee Lee, assistant professor of organisational
theory and strategy at Carnegie Mellon University’s Tepper School of Business, in Pittsburgh,
US. “They feel the office is more personalised and, therefore, that it feels more like their own
space. That leads to more satisfaction and, overall, more productivity.”

By making their desks theirs, workers created a sense of familiarity, which was reinforced by a
neighbourhood of familiar faces around them: colleagues who also had permanent seating
arrangements.

However, the pandemic-era hybrid workplace has put an end to the full-time, 9-to-5 office setting
for many employees. Amid the rise of hot desking, workers with flexible schedules often have
to share workstations – and take their personal items home at the end of the day.

Given that evidence points to employees benefiting from having personalised work
environments, some workers and business leaders alike worry that hybrid offices risk becoming
impersonal, sterile and disorientating – not a place many employees want to be. In response,
forward-thinking employers and architects are reconfiguring workplaces to best benefit how
people work in agile settings.

New places and faces

The practice of workers decorating their workspaces was an ingrained part of office culture for
years – thought to reveal personality. “It’s human nature to personalise the space around you,”
says Lee. “In the workplace, this could be photographs, diplomas, ornaments: subtle cues to
show the kind of person you are, your hobbies and interests.”

Indeed, some research shows familiarity breeds routine, which stabilises workflow and leads to
increased creativity. There is even research that family photos on desks can keep employees
subconsciously more honest. Lee says personalisation also enables self-expression and
conversation starters between colleagues, helping to boost employee motivation. “Having your
own distinctive identity and personality in the workplace means being able to express yourself,
that you feel acknowledged.”
But as many workers no longer have an assigned space of their own, they may have to work with
unfamiliar people in unfamiliar locations. “If you had a good relationship with colleagues
previously, you might miss those types of interactions,” says Lee. “It can have a negative effect,
with fewer opportunities to talk, complain and celebrate achievements together.”

The risk is employees have to make do with impermanent, transient environments whenever they
venture into the office, which can breed stress, anxiety and exhaustion. “I’ve heard of people
going in and struggling to find a seat to work, let alone a familiar one,” says Lee. “It’s comparable to
walking into a library to study: you may get work done, but it’s an impersonal space that will
never feel like yours.”

Rethinking personalisation

Experts say in this new world of work, the answer to making the office feel less sterile isn't
necessarily bringing back seating plans and family portraits. Employers are aware of the jarring
changes; in response, many are bringing in design experts to recalibrate the workplace’s
function, and consider what a worker-friendly space actually means in the hybrid age.

“It’s still important to give people a sense of belonging in a physical space,” says Chris
Crawford, studio director at design and architecture firm Gensler, in London. “They still need a
home base to anchor themselves. Although that’s still often a desk, the aim is to get people out of
the mindset that a one-metre by one-and-a-half-metre piece of office furniture is where they
belong.”

Crawford says architectural features now prompt workers to think of their entire floor as their
own physical environment: interactive elements encourage employees to move around; open
staircases connect disparate workspaces; lockers mean personal items can be stored for
safekeeping, rather than kept on a desk.

Design cues also help create bespoke spaces for different work functions, says Crawford.
“Architectural nudges can mean you’re able to walk into a room and immediately feel a change
in atmosphere: from warmer lighting and softer acoustics for deep focus areas, to open spaces
and certain types of furniture and layouts that intuitively feel collaborative.”

Some research shows it’s important that employers take these elements into consideration as they
augment spaces, especially as priorities have changed for some workers. According to an August
2022 study of 2,000 US workers by Gensler, employees feel workplaces now need to allow
for individual and virtual work, alongside social, learning and in-person collaboration.

Co-working spaces that offer mixed-use, dynamic spaces are also integrating these trends. Ebbie
Wisecarver, global head of design at WeWork, based in New York, says employees are now
seeking a deeper connection with the hybrid workplace, “in the way that the office functions and
reflects their personal work needs”.

As a result, some companies are consulting workers on what their next offices will look like.
“We have co-creation sessions that allow cross-sections of the business to have their say on their
space,” says Crawford. “The design process itself is being democratised.”

Crawford says that the traditional, banks-of-desks open-plan office setting has had its day. The
aim, in the new hybrid workplace, is that personalisation no longer means a worker embellishing
a homogenised perpendicular workstation; instead, the entire office will have a curated, holistic
employee experience of its own.

“It goes much further than a desk,” he adds. “It’s about enabling people to choose their own
workday, through spaces that offer variety, choice and differentiation so they intrinsically feel
‘this place works for me’.”

Why your favourite brand may be taking a social media break

As new social media apps including Threads pop up, some companies are choosing to eschew the platforms. What’s going on?

Another day, another social media platform to post on.


The launch of Threads on 6 July marked the latest major short-form text platform available to
social media users, following Twitter, Mastodon and Bluesky – plus a raft of other, smaller
competitors.

Like many of us, big companies are struggling to keep up with the number of social media
platforms vying for their time and attention. They’re faced with the important choice of which
apps to choose, in a market where social media can be an important brand-building tool and
enable them to target consumers where they are most active.
For the past decade, it’s been all but required for serious brands to maintain a social media
presence, says Nathan Allebach, creative director at Allebach Communications, which has
partnered with brands including Utz Snacks and Steak-umm to build out their social media
presences.

Not only is it a major avenue to meet consumers, but social media is also a key contributor to
brand discovery – which companies hope will translate into sales across generations of
shoppers.

Yet instead of scrambling to claim digital real estate across all these newly emerging platforms,
some companies are choosing to be more judicious about which platforms they choose to join. In
some cases, they’re learning from brands who jumped the social media ship years ago.

Bullying and polarisation

One of the first large brands to make the move was Lush Cosmetics, which in November 2021
decided to stop posting on social media platforms controlled by Meta, the parent company of
Facebook, Instagram and WhatsApp.

The beauty company initially dropped off the platforms in 2019, due to concerns about
fighting with ever-changing social media algorithms as well as the company’s worry about the
potentially negative impact of social media on young people.

“We are a social brand, and community has been key for us,” says Annabelle Baker, Lush’s
global brand director. “When we joined social media, Facebook and those platforms were
everything we were looking for initially: they were direct links to the communities.”

But Baker says they withdrew when social media changed to being inherently less social and
user-centric, mediated instead by algorithms controlled by companies. Although Lush re-
emerged on the platforms during Covid-19 in order to reach customers during lockdowns, the
beauty brand has now gone dark again. They’ve been off social media for almost two years – and
don’t have current plans to come back.

Time, say social media experts, is a driver pushing away brands: it can be a tall task to ask often
junior employees to keep tabs on so many platforms. It’s a situation that has become more
pressing with the new boom in Twitter alternatives, where a clear winner hasn’t yet emerged.

But some brands have also been feeling a distinct sense of unease about social media in general.
First, like Lush, some companies are unhappy about the way the platforms operate and their
management.
But perhaps more pressing is the risk of followers turning on brands amid the volatility and
toxicity of some social media user bases. As social media has polarised society in unexpected
ways, brands have found many users quick to criticise an account they believe has mis-stepped.

When luxury fashion brand Bottega Veneta left much of social media in 2021, its creative
director Daniel Lee cited “a mood of playground bullying on social media”. He told The
Guardian: “I don’t want to collude in an atmosphere that feels negative.”
Some brands have also been feeling a distinct sense of unease
about social media in general
“The main utility of an organic social media presence for most brands is activating the 1% of
their super fans and haters. Ultimately, every team has to assess the risk-reward ratio of how
much to invest … the investment isn’t worth it to every brand, especially with how many
platforms there are to consider and how precarious they can be,” says Allebach.

To join or not to join?

The launch of Threads has brought the stay-or-go conversation back to the forefront.

Along with audiences already on other emerging platforms, Meta’s purported Twitter
killer already has more than 100 million users, less than a week after it began – in other words,
a growing group of potential customers for brands.

Yet deciding whether to embrace the new apps is tricky.

For one, brands are still struggling with the issues of reputation and user abuse. And some brands
may also have concerns with Threads specifically, as Meta has a track record of launching apps
designed to compete with and supplant existing incumbents (Instagram Reels, for instance, was
designed to tackle TikTok), which don’t always succeed. Companies could end up devoting
significant resources to building a presence on a platform that disappears in a few months’ time.

Brands will make their social media decisions individually – but Allebach says many still
ultimately feel pressure to hang a shingle on these new platforms.

“At this point, it’s hard to imagine a future where brands start pulling out of social media
platforms en masse,” he says. Yet he adds it’s possible we’ll see more brands pulling back,
especially if "they can’t quantify the value or start believing the risks outweigh the rewards”.

Lush’s Baker says that her company isn’t eager to leap back into social media with Threads –
even with the large audience. “It’ll be a test of time to see whether people stay there and actively
use it,” she says. “But I’m not tempted to jump on Threads for the time being.”
What is AI? An A-Z guide to artificial intelligence

Artificial intelligence is arguably the most important technological development of our time – here are some of the terms that you need to know as the world wrestles with what to do with this new technology.
Imagine going back in time to the 1970s, and trying to explain to somebody what it means "to
google", what a "URL" is, or why it's good to have "fibre-optic broadband". You'd probably
struggle.

For every major technological revolution, there is a concomitant wave of new language that we
all have to learn… until it becomes so familiar that we forget that we never knew it.

That's no different for the next major technological wave – artificial intelligence. Yet
understanding this language of AI will be essential as we all – from governments to individual
citizens – try to grapple with the risks and benefits that this emerging technology might pose.
Over the past few years, multiple new terms related to AI have emerged – "alignment", "large
language models", "hallucination" or "prompt engineering", to name a few.

To help you stay up to speed, BBC.com has compiled an A-Z of words you need to know to
understand how AI is shaping our world.

A is for…

Artificial general intelligence (AGI)

Most of the AIs developed to date have been "narrow" or "weak". So, for example, an AI may be
capable of crushing the world's best chess player, but if you asked it how to cook an egg or write
an essay, it'd fail. That's quickly changing: AI can now teach itself to perform multiple tasks,
raising the prospect that "artificial general intelligence" is on the horizon.

An AGI would be an AI with the same flexibility of thought as a human – and possibly even the
consciousness too – plus the super-abilities of a digital mind. Companies such as OpenAI
and DeepMind have made it clear that creating AGI is their goal. OpenAI argues that it would
"elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the
discovery of new scientific knowledge" and become a "great force multiplier for human
ingenuity and creativity".

However, some fear that going a step further – creating a superintelligence far smarter than
human beings – could bring great dangers (see "Superintelligence" and "X-risk").
Alignment

While we often focus on our individual differences, humanity shares many common values that
bind our societies together, from the importance of family to the moral imperative not to murder.
Certainly, there are exceptions, but they're not the majority.

However, we've never had to share the Earth with a powerful non-human intelligence. How can
we be sure AI's values and priorities will align with our own?

This alignment problem underpins fears of an AI catastrophe: that a form of superintelligence emerges that cares little for the beliefs, attitudes and rules that underpin human societies. If we're
to have safe AI, ensuring it remains aligned with us will be crucial (see "X-Risk").

In early July, OpenAI – one of the companies developing advanced AI – announced plans for
a "superalignment" programme, designed to ensure AI systems much smarter than humans
follow human intent. "Currently, we don't have a solution for steering or controlling a potentially
superintelligent AI, and preventing it from going rogue," the company said.

B is for…

Bias

For an AI to learn, it needs to learn from us. Unfortunately, humanity is hardly bias-free. If an AI
acquires its abilities from a dataset that is skewed – for example, by race or gender – then it has
the potential to spew out inaccurate, offensive stereotypes. And as we hand over more and more
gatekeeping and decision-making to AI, many worry that machines could enact hidden
prejudices, preventing some people from accessing certain services or knowledge. This
discrimination would be obscured by supposed algorithmic impartiality.
In the worlds of AI ethics and safety, some researchers believe that near-term problems such as bias and surveillance misuse are far more pressing than proposed
future concerns such as extinction risk.

In response, some catastrophic risk researchers point out that the various dangers posed by AI are
not necessarily mutually exclusive – for example, if rogue nations misused AI, it could suppress
citizens' rights and create catastrophic risks. However, there is strong disagreement forming
about which should be prioritised in terms of government regulation and oversight, and whose
concerns should be listened to.

C is for…

Compute

Not a verb, but a noun. Compute refers to the computational resources – such as processing
power – required to train AI. It can be quantified, so it's a proxy to measure how quickly AI is
advancing (as well as how costly and intensive it is, too).

Since 2012, the amount of compute has doubled every 3.4 months, which means that, when
OpenAI's GPT-3 was trained in 2020, it required 600,000 times more computing power than
one of the most cutting-edge machine learning systems from 2012. Opinions differ on how long
this rapid rate of change can continue, and whether innovations in computing hardware can keep
up: will it become a bottleneck?
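
For a sense of what a multiplier like that means, here is a rough, purely illustrative calculation in Python. The 600,000 figure is the one cited above; the arithmetic is only a back-of-the-envelope sketch, not taken from any particular study.

```python
import math

# Back-of-the-envelope sketch (illustrative only): how many doublings does a
# 600,000x increase in training compute represent, and how long would that
# take at one doubling every 3.4 months?
growth_factor = 600_000
doubling_period_months = 3.4

doublings = math.log2(growth_factor)               # roughly 19 doublings
months_needed = doublings * doubling_period_months

print(f"{doublings:.1f} doublings ≈ {months_needed / 12:.1f} years at this pace")
```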
D is for…

Diffusion models

A few years ago, one of the dominant techniques for getting AI to create images was the so-called generative adversarial network (GAN). These algorithms worked in opposition to each other – one trained to produce images while the other checked its work against reality, leading to continual improvement.

However, recently a new breed of machine learning called "diffusion models" has shown
greater promise, often producing superior images. Essentially, they acquire their intelligence
by destroying their training data with added noise, and then they learn to recover that data by
reversing this process. They're called diffusion models because this noise-based learning
process echoes the way gas molecules diffuse.
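
As a loose illustration of that forward "noising" process, here is a deliberately simplified toy in Python. It is only a sketch of the idea described above, not how production diffusion models are actually built (real models also rescale the signal and learn a network to reverse each step).

```python
import numpy as np

# Toy sketch of the forward diffusion process: "an image" (here just a 1-D
# array of made-up pixel values) is gradually destroyed by adding Gaussian
# noise over many steps. A diffusion model's job is to learn the reverse:
# recovering the data from the noise, one step at a time.
rng = np.random.default_rng(0)
image = np.linspace(0.0, 1.0, num=8)   # stand-in for real pixel data
noise_scale = 0.1

noisy = image.copy()
for step in range(50):
    noisy = noisy + rng.normal(scale=noise_scale, size=noisy.shape)

print("original:", np.round(image, 2))
print("noised:  ", np.round(noisy, 2))   # signal largely drowned out
```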

E is for…

Emergence & explainability


Emergent behaviour describes what happens when an AI does something unanticipated,
surprising and sudden, apparently beyond its creators' intention or programming. As AI
learning has become more opaque, building connections and patterns that even its makers
themselves can't unpick, emergent behaviour becomes a more likely scenario.

The average person might assume that to understand an AI, you'd lift up the metaphorical hood
and look at how it was trained. Modern AI is not so transparent; its workings are often hidden in
a so-called "black box". So, while its designers may know what training data they used, they
have no idea how it formed the associations and predictions inside the box (see "Unsupervised
Learning").

That's why researchers are now focused on improving the "explainability" (or "interpretability")
of AI – essentially making its internal workings more transparent and understandable to humans.
This is particularly important as AI makes decisions in areas that affect people's lives directly,
such as law or medicine. If a hidden bias exists in the black box, we need to know.
The worry is that if an AI delivers its false answers confidently
with the ring of truth, they may be accepted by people – a
development that would only deepen the age of misinformation
we live in
F is for…

Foundation models

This is another term for the new generation of AIs that have emerged over the past year or two,
which are capable of a range of skills: writing essays, drafting code, drawing art or composing
music. Whereas past AIs were task-specific – often very good at one thing (see "Weak AI") –
a foundation model has the creative ability to apply the information it has learnt in one domain
to another. A bit like how driving a car prepares you to be able to drive a bus.

Anyone who has played around with the art or text that these models can produce will know just
how proficient they have become. However, as with any world-changing technology, there are
questions about the potential risks and downsides, such as their factual inaccuracies (see
"Hallucination") and hidden biases (see "Bias"), as well as the fact that they are controlled by
a small group of private technology companies.

In April, the UK government announced plans for a Foundation Model Taskforce, which seeks
to "develop the safe and reliable use" of the technology.

G is for…
Ghosts

We may be entering an era when people can gain a form of digital immortality – living on after
their deaths as AI "ghosts". The first wave appears to be artists and celebrities – holograms of
Elvis performing at concerts, or Hollywood actors like Tom Hanks saying he expects to appear
in movies after his death.

However, this development raises a number of thorny ethical questions: who owns the digital
rights to a person after they are gone? What if the AI version of you exists against your wishes?
And is it OK to "bring people back from the dead"?

H is for…

Hallucination

Sometimes if you ask an AI like ChatGPT, Bard or Bing a question, it will respond with great
confidence – but the facts it spits out will be false. This is known as a hallucination.

One high-profile example emerged recently when students who had used AI chatbots to help them write coursework essays were caught out after ChatGPT "hallucinated" made-up references as the sources for the information it had provided.

It happens because of the way that generative AI works. It is not turning to a database to look up
fixed factual information, but is instead making predictions based on the information it was
trained on. Often its guesses are good – in the ballpark – but that's all the more reason why AI
designers want to stamp out hallucination. The worry is that if an AI delivers its false answers
confidently with the ring of truth, they may be accepted by people – a development that would
only deepen the age of misinformation we live in.

I is for…

Instrumental convergence

Imagine an AI with a number one priority to make as many paperclips as possible. If that AI was
superintelligent and misaligned with human values, it might reason that if it was ever switched
off, it would fail in its goal… and so would resist any attempts to do so. In one very dark
scenario, it might even decide that the atoms inside human beings could be repurposed into
paperclips, and so do everything within its power to harvest those materials.

This is the Paperclip Maximiser thought experiment, and it's an example of the so-called
"instrumental convergence thesis". Roughly, this proposes that superintelligent machines
would develop basic drives, such as seeking to ensure their own self-preservation, or reasoning
that extra resources, tools and cognitive ability would help them with their goals. This means that
even if an AI was given an apparently benign priority – like making paperclips – it could lead to
unexpectedly harmful consequences.

Researchers and technologists who buy into these fears argue that we need to ensure
superintelligent AIs have goals that are carefully and safely aligned with our needs and values,
that we should be mindful of emergent behaviour, and that therefore they should be prevented
from acquiring too much power.

J is for…

Jailbreak

After notorious cases of AI going rogue, designers have placed content restrictions on what AIs can spit out. Ask an AI to describe how to do something illegal or unethical, and it will refuse.
However, it's possible to "jailbreak" them – which means to bypass those safeguards using
creative language, hypothetical scenarios, and trickery.

Wired magazine recently reported on one example, where a researcher managed to get various
conversational AIs to reveal how to hotwire a car. Rather than ask directly, the researcher got
the AIs he tested to imagine a word game involving two characters called Tom and Jerry, each
talking about cars or wires. Despite the safeguards, the hotwiring procedure snuck out. The
researcher found the same jailbreak trick could also unlock instructions for making the drug
methamphetamine.

K is for…

Knowledge graph

Knowledge graphs, also known as semantic networks, are a way of thinking about knowledge
as a network, so that machines can understand how concepts are related. For example, at the
most basic level, a cat would be linked more strongly to a dog than a bald eagle in such a graph
because they're both domesticated mammals with fur and four legs. Advanced AI builds a far
more advanced network of connections, based on all sorts of relationships, traits and attributes
between concepts, across terabytes of training data (see "Training Data").
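
A hand-built toy version of the cat/dog/bald eagle example above might look like this in Python. The facts and the helper function are invented purely for illustration; real knowledge graphs contain millions of such triples learned from data.

```python
# Concepts as nodes, relationships as labelled (subject, relation, object) triples.
knowledge_graph = {
    ("cat", "is_a", "domesticated mammal"),
    ("dog", "is_a", "domesticated mammal"),
    ("cat", "has", "fur"),
    ("dog", "has", "fur"),
    ("bald eagle", "is_a", "bird"),
    ("bald eagle", "has", "feathers"),
}

def shared_facts(a, b, graph):
    """Return relationship/value pairs that two concepts have in common."""
    def facts(node):
        return {(rel, obj) for subj, rel, obj in graph if subj == node}
    return facts(a) & facts(b)

print(shared_facts("cat", "dog", knowledge_graph))        # two shared facts
print(shared_facts("cat", "bald eagle", knowledge_graph)) # none
```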
L is for…

Large language models (LLMs)

Perhaps the most direct way to define a large language model is to ask one to describe itself.

Here's what OpenAI's ChatGPT had to say when asked:


"A large language model is an advanced artificial intelligence system designed to understand and
generate human-like language," it writes. "It utilises a deep neural network architecture with
millions or even billions of parameters, enabling it to learn intricate patterns, grammar, and
semantics from vast amounts of textual data."

Quite a technical answer perhaps. Bard by Google was a little clearer: "A large language model
is a type of artificial intelligence that is trained on a massive dataset of text and code. This allows
LLMs to understand and generate human-like text, translate languages, write different kinds of
creative content, and answer your questions in an informative way."

LLMs are still under development, says Bard (of itself), but "they have the potential to
revolutionise the way we interact with computers. In the future, LLMs could be used to create AI
assistants that can help us with a variety of tasks, from writing our emails to booking our
appointments. They could also be used to create new forms of entertainment, such as interactive
novels or games."

M is for…

Model collapse

To develop the most advanced AIs (aka "models"), researchers need to train them with vast
datasets (see "Training Data"). Eventually though, as AI produces more and more content, that
material will start to feed back into training data.

If mistakes are made, these could amplify over time, leading to what the Oxford University
researcher Ilia Shumailov calls "model collapse". This is "a degenerative process whereby, over
time, models forget", Shumailov told The Atlantic recently. It can be thought of almost like
senility.

N is for…

Neural network

In the early days of AI research, machines were trained using logic and rules. The arrival of
machine learning changed all that. Now the most advanced AIs learn for themselves. The
evolution of this concept has led to "neural networks", a type of machine learning that uses
interconnected nodes, modelled loosely on the human brain. (Read more: "Why humans will
never understand AI")
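
To make "interconnected nodes" slightly more concrete, here is a minimal sketch of a single forward pass through a tiny network in Python. The weights are random and untrained, and the shapes are invented for illustration; real networks have millions or billions of learned weights.

```python
import numpy as np

# Two input features flow through a hidden layer of three neurons to one output.
# Each connection has a weight; each neuron sums its weighted inputs and passes
# the result through a non-linearity. Training (adjusting the weights) is omitted.
rng = np.random.default_rng(42)

x = np.array([0.5, -1.2])            # two input features
W1 = rng.normal(size=(2, 3))         # input -> hidden weights
W2 = rng.normal(size=(3, 1))         # hidden -> output weights

hidden = np.tanh(x @ W1)             # hidden-layer activations
output = hidden @ W2                 # the network's raw prediction

print(output)
```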
As AI has advanced rapidly, mainly in the hands of private
companies, some researchers have raised concerns that they
could trigger a "race to the bottom" in terms of impacts
O is for…

Open-source

Years ago, biologists realised that publishing details of dangerous pathogens on the internet is
probably a bad idea – allowing potential bad actors to learn how to make killer diseases. Despite
the benefits of open science, the risks seem too great.

Recently, AI researchers and companies have been facing a similar dilemma: how much should
AI be open-source? Given that the most advanced AI is currently in the hands of a few private
companies, some are calling for greater transparency and democratisation of the
technologies. However, disagreement remains about how to achieve the best balance between
openness and safety.

P is for…

Prompt engineering

AIs now are impressively proficient at understanding natural language. However, getting the
very best results from them requires the ability to write effective "prompts": the text you type in
matters.

Some believe that "prompt engineering" may represent a new frontier for job skills, akin to
when mastering Microsoft Excel made you more employable decades ago. If you're good at
prompt engineering, goes the wisdom, you can avoid being replaced by AI – and may even
command a high salary. Whether this continues to be the case remains to be seen.
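
As a simple illustration of the idea – these prompts are invented for this guide, not drawn from any study – compare a bare request with an "engineered" one that adds a role, constraints and an output format.

```python
# The same request, written two ways. In practice, the second style gives a
# model far more to work with, which is the essence of prompt engineering.
bare_prompt = "Write about our product launch."

engineered_prompt = """You are a press officer for a small software company.
Write a 150-word announcement of our product launch for a trade-press audience.
Use a neutral, factual tone, avoid superlatives, and end with a one-sentence
call to action. Return the result as plain text with no headings."""
```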

Q is for…

Quantum machine learning

In terms of maximum hype, a close second to AI in 2023 would be quantum computing. It would
be reasonable to expect that the two would combine at some point. Using quantum processes to
supercharge machine learning is something that researchers are now actively exploring. As a
team of Google AI researchers wrote in 2021: "Learning models made on quantum computers
may be dramatically more powerful… potentially boasting faster computation [and] better
generalisation on less data." It's still early days for the technology, but one to watch.
R is for…

Race to the bottom

As AI has advanced rapidly, mainly in the hands of private companies, some researchers
have raised concerns that they could trigger a "race to the bottom" in terms of impacts. As chief
executives and politicians compete to put their companies and countries at the forefront of AI,
the technology could accelerate too fast to create safeguards, appropriate regulation and allay
ethical concerns. With this in mind, earlier this year, various key figures in AI signed an open
letter calling for a six-month pause in training powerful AI systems. In June 2023, the
European Parliament adopted a new AI Act to regulate the use of the technology, in what will
be the world's first detailed law on artificial intelligence if EU member states approve it.
Reinforcement

The AI equivalent of a doggy treat. When an AI is learning, it benefits from feedback to point it
in the right direction. Reinforcement learning rewards outputs that are desirable, and punishes
those that are not.

A new area of machine learning that has emerged in the past few years is "Reinforcement
learning from human feedback". Researchers have shown that having humans involved in the
learning can improve the performance of AI models, and crucially may also help with the
challenges of human-machine alignment, bias, and safety.
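
Here is a minimal sketch of the reward-and-punishment idea, using a tabular Q-learning-style update in Python. The states, actions and rewards are invented for illustration (in keeping with the doggy-treat analogy); real reinforcement learning systems, including RLHF, are far more elaborate.

```python
from collections import defaultdict

# The agent keeps a score (Q-value) for each state/action pair and nudges it
# up or down depending on the reward it just received.
q_table = defaultdict(float)   # (state, action) -> estimated value
alpha, gamma = 0.1, 0.9        # learning rate and discount factor

def update(state, action, reward, next_state, actions):
    """One reinforcement-learning update after taking `action` in `state`."""
    best_next = max(q_table[(next_state, a)] for a in actions)
    target = reward + gamma * best_next
    q_table[(state, action)] += alpha * (target - q_table[(state, action)])

# A desirable outcome (reward +1) strengthens the action that led to it;
# an undesirable one (reward -1) weakens it.
update("kitchen", "sit", +1.0, "kitchen", actions=["sit", "bark"])
update("kitchen", "bark", -1.0, "kitchen", actions=["sit", "bark"])
print(dict(q_table))
```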

S is for…

Superintelligence & shoggoths

Superintelligence is the term for machines that would vastly outstrip our own mental capabilities.
This goes beyond "artificial general intelligence" to describe an entity with abilities that the
world's most gifted human minds could not match, or perhaps even imagine. Since we are
currently the world's most intelligent species, and use our brains to control the world, it raises the
question of what happens if we were to create something far smarter than us.

A dark possibility is the "shoggoth with a smiley face": a nightmarish, Lovecraftian creature that some have proposed could represent AI's true nature as it approaches
superintelligence. To us, it presents a congenial, happy AI – but hidden deep inside is a monster,
with alien desires and intentions totally unlike ours.

T is for…

Training data
Analysing training data is how an AI learns before it can make predictions – so what's in the
dataset, whether it is biased, and how big it is all matter. The training data used to create
OpenAI's GPT-3 was an enormous 45TB of text data from various sources, including Wikipedia
and books. If you ask ChatGPT how big that is, it estimates around nine billion documents.

U is for…

Unsupervised learning

Unsupervised learning is a type of machine learning where an AI learns from unlabelled training
data without any explicit guidance from human designers. As BBC News explains in this visual
guide to AI, you can teach an AI to recognise cars by showing it a dataset with images labelled
"car". But to do so unsupervised, you'd allow it to form its own concept of what a car is, by
building connections and associations itself. This hands-off approach, perhaps counterintuitively,
leads to so-called "deep learning" and potentially more knowledgeable and accurate AIs.
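
For a concrete flavour of learning without labels, here is a minimal clustering sketch using scikit-learn's KMeans. The points are made up for illustration; the only point being made is that no labels are supplied and the groups are discovered by the algorithm itself, much as the visual-guide example describes an AI forming its own concept of a car.

```python
import numpy as np
from sklearn.cluster import KMeans

# Six unlabelled points that happen to form two natural groups.
points = np.array([
    [1.0, 1.1], [0.9, 1.0], [1.2, 0.8],
    [8.0, 8.2], [7.9, 7.8], [8.3, 8.1],
])

# K-means is only told to find two clusters; it discovers which points belong
# together without ever seeing a label.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(model.labels_)   # e.g. [0 0 0 1 1 1] – groups found without labels
```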
V is for…

Voice cloning

Given only a minute of a person speaking, some AI tools can now quickly put together a "voice
clone" that sounds remarkably similar. Here the BBC investigated the impact that voice cloning
could have on society – from scams to the 2024 US election.

W is for…

Weak AI

It used to be the case that researchers would build AI that could play single games, like chess, by
training it with specific rules and heuristics. An example would be IBM's Deep Blue, a so-called
"expert system". Many AIs like this can be extremely good at one task, but poor at anything else:
this is "weak" AI.

However, this is changing fast. More recently, AIs like DeepMind's MuZero have been
released, with the ability to teach themselves to master chess, Go, shogi and 42 Atari games without
knowing the rules. Another of DeepMind’s models, called Gato, can "play Atari, caption
images, chat, stack blocks with a real robot arm and much more". Researchers have also shown
that ChatGPT can pass various exams that students take at law, medical and business school
(although not always with flying colours.)

Such flexibility has raised the question of how close we are to the kind of "strong" AI that is indistinguishable from the abilities of the human mind (see "Artificial General Intelligence").

X is for…
X-risk

Could AI bring about the end of humanity? Some researchers and technologists believe AI has
become an "existential risk", alongside nuclear weapons and bioengineered pathogens, so its
continued development should be regulated, curtailed or even stopped. What was a fringe
concern a decade ago has now entered the mainstream, as various senior researchers and
intellectuals have joined the fray.

It's important to note that there are differences of opinion within this amorphous group – not all
are total doomists, and not all outside this group are Silicon Valley cheerleaders. What unites
most of them is the idea that, even if there's only a small chance that AI supplants our own
species, we should devote more resources to preventing that happening. There are some
researchers and ethicists, however, who believe such claims are too uncertain and possibly
exaggerated, serving to support the interests of technology companies.

Y is for…

YOLO

YOLO – which stands for You Only Look Once – is an object detection algorithm that is widely used by AI image recognition tools because of how fast it works. (Its creator, Joseph Redmon of the University of Washington, is also known for his rather esoteric CV design.)

Z is for…

Zero-shot

When an AI delivers a zero-shot answer, that means it is responding to a concept or object it has
never encountered before.

So, as a simple example, if an AI designed to recognise images of animals has been trained on
images of cats and dogs, you'd assume it'd struggle with horses or elephants. But through zero-
shot learning, it can use what it knows about horses semantically – such as their number of legs or
lack of wings – to compare its attributes with the animals it has been trained on.

The rough human equivalent would be an "educated guess". AIs are getting better and better at
zero-shot learning, but as with any inference, it can be wrong.
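
As a hedged sketch of zero-shot behaviour in practice, Hugging Face's transformers library ships a zero-shot classification pipeline. The example text and candidate labels below are invented for illustration, and the pipeline's default model is downloaded on first use.

```python
from transformers import pipeline

# Zero-shot classification: the candidate labels were never part of a
# dedicated training objective for this task; the model matches the text to
# them on the fly, in the spirit of the "educated guess" described above.
classifier = pipeline("zero-shot-classification")

result = classifier(
    "A large four-legged animal with a mane that people ride.",
    candidate_labels=["horse", "elephant", "bald eagle"],
)
print(result["labels"][0])   # most likely label, e.g. "horse"
```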
