Generative Artificial Intelligence: What Everyone Needs to Know®, 1st Edition, Jerry Kaplan
GENERATIVE ARTIFICIAL
INTELLIGENCE
WHAT EVERYONE NEEDS TO KNOW®
JERRY KAPLAN
Oxford University Press is a department of the University of Oxford. It furthers
the University’s objective of excellence in research, scholarship, and education
by publishing worldwide. Oxford is a registered trade mark of Oxford University
Press in the UK and certain other countries.
DOI: 10.1093/wentk/9780197773536.001.0001
CONTENTS

Acknowledgments
Introduction
3 Likely Impact
4 Future of Work
9 Outroduction
Notes
Index
INTRODUCTION
Over the past few years, rapid advances in Machine Learning have
created a new sub-field of Artificial Intelligence: Generative AI.
These programs produce novel text, images, music, and software by
analyzing enormous collections of digitized material. (Throughout
this book, as a shorthand, I will refer to such programs as “GAIs”.)
But this bland description doesn’t begin to do justice to these remarkable polymaths.
The first wave of GAIs are primarily focused on engaging in
natural language conversation. Called “Large Language Models”
(LLMs), they already exhibit astonishing proficiency on a wide variety of tasks with superhuman performance—as well as a disturbing
inclination for falsehood, illogic, and expressions of ersatz emotion,
such as declaring their love for interlocutors. They chat with users
in plain language and solve a broad range of complex problems with
ease. Soon, LLMs will provide expert medical care, dispense legal advice, draft our documents, tutor our children, offer advice, conduct
therapy, write computer programs, and manage our organizations
and infrastructure. A related class of GAIs generates, from simple descriptions, visual images that are virtually indistinguishable from human artwork or photographs. Still other applications fabricate
voices or compose music in the style of a given artist or musician.
Perhaps the most intriguing of these programs are called multimodal,
because they integrate diverse forms of information (linguistic,
visual, sound, etc.) to understand and reason about the world. (As
a demonstration, a prototype multimodal system was able to plan a
meal based on a picture of the interior of a refrigerator.)
But this is just the start of the GAI revolution. The technology underlying GAIs is quite general, in the sense that these programs are
ourselves, persuade each other, seek a mate, educate our young, and
care for our elderly. It will also upend our labor markets, reshuffle
the social order, and strain both private and public institutions.
Over the next few decades, GAI will stretch our social fabric to the
limit. Whether the future will be a new age of unprecedented prosperity and freedom as depicted in Star Trek, or a perpetual struggle
of humans against machines as portrayed in Terminator, will largely
depend on our own actions. I can only hope that we employ this
newfound power with prudence and discretion.
is a man,” and “All men are mortal.” From this, you could formally
derive that “Socrates is mortal.”
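Derivations like this one are purely mechanical, which is what made early symbolic AI plausible: a program can apply the rule to the facts without understanding either. A minimal sketch in Python (the fact and rule encodings here are invented for illustration, not taken from any historical system):

```python
# A fact and a universally quantified rule, encoded as data: the kind
# of representation early symbolic AI systems used.
facts = {("man", "Socrates")}

def apply_rule(known):
    """Apply 'for all x: man(x) implies mortal(x)' to a set of facts."""
    derived = set(known)
    for predicate, subject in known:
        if predicate == "man":
            derived.add(("mortal", subject))
    return derived

print(apply_rule(facts))
```

The program never "knows" who Socrates is; it just manipulates symbols according to the rule, which was precisely the promise and the limitation of this style of AI.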
what would be achieved and how long it would take the initiative
to accomplish its goals.
Largely as a result, and in contrast to more pedestrian fields,
funding and therefore progress in AI has gone through several
highly visible cycles of boom and bust, creating periodic so-called
AI winters in which the field was substantially out of favor with
governmental and commercial patrons. Indeed, the field seems to attract the enmity of many deep thinkers, such as noted philosophers
Hubert Dreyfus and John Searle (both from the University of
California at Berkeley). Dreyfus excoriated the entire enterprise in a
1965 report entitled “Alchemy and Artificial Intelligence,”10 causing
an uproar among AI researchers. He later drolly observed, “The first
man to climb a tree could claim tangible progress toward reaching
the moon.”
article that “the machine would be the first device to think as the
human brain . . . in principle it would be possible to build brains
that could reproduce themselves on an assembly line and which
would be conscious of their existence.” This might seem a bit optimistic given that his demonstration included only 400 photocells
(image pixels) connected to 1,000 perceptrons that, after fifty trials,
were able to tell the difference between “two cards, one with squares
marked on the left side and the other with squares on the right side.”
On the other hand, many of his wilder prophecies have now become reality, though more than fifty years later than he predicted. For instance, he said that “Later perceptrons will be able to recognize people and call out their names and instantly translate speech in one language to speech or writing in another language.” Good call!
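Rosenblatt’s training procedure (nudge the weights toward the correct answer after each mistake) is simple enough to re-create in a few lines. The sketch below uses a toy 4×4 “retina” rather than the Mark I’s 400 photocells, and the card layouts are invented, loosely modeled on the left-squares-versus-right-squares demonstration described above:

```python
import random

random.seed(0)

SIZE = 4  # a tiny 4x4 "retina"; Rosenblatt's machine had 400 photocells

def make_card(side):
    """A card with a bright square filling the left or right half."""
    img = [0.0] * (SIZE * SIZE)
    col0 = 0 if side == "left" else SIZE // 2
    for r in range(SIZE):
        for c in range(col0, col0 + SIZE // 2):
            img[r * SIZE + c] = 1.0
    return img

# A single perceptron: weighted sum of the pixels, thresholded at zero.
weights = [0.0] * (SIZE * SIZE)
bias = 0.0

def predict(img):
    s = bias + sum(w * x for w, x in zip(weights, img))
    return "left" if s > 0 else "right"

# Error-correction training: adjust weights only when the guess is wrong.
for _ in range(50):  # fifty trials, echoing the 1958 demo
    side = random.choice(["left", "right"])
    img = make_card(side)
    target = 1 if side == "left" else -1
    output = 1 if predict(img) == "left" else -1
    if output != target:
        for i in range(len(weights)):
            weights[i] += target * img[i]
        bias += target

print(predict(make_card("left")), predict(make_card("right")))
```

After a handful of corrections the weights settle into positive values on the left half of the retina and negative on the right, and both card types are classified correctly.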
Rosenblatt’s work was well known to at least some of the
participants at the Dartmouth conference. He had attended the
Bronx High School of Science with Marvin Minsky (they were one
year apart).18 They were later to become sparring debaters in many
forums, promoting their respectively favored approaches to AI,
until in 1969 Minsky, along with his colleague Seymour Papert at
MIT, published a book called Perceptrons, in which he went to pains
to discredit, rather unfairly, a simplified version of Rosenblatt’s
work.19 Rosenblatt was unable to mount a proper defense, as he
died in a boating accident in 1971 at the age of forty-one.20 Minsky
and Papert’s book proved highly influential, effectively foreclosing
funding and research on perceptrons and artificial neural networks
in general for more than a decade.
Addressing the very oversimplification that Minsky and Papert
exploited—that the network has at most two layers—was in part
responsible for a revival of interest in the field in the mid-1980s.
Indeed, the term “deep learning” refers to the use of many internal
layers (referred to as hidden layers) in a neural network model.
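The best-known case Minsky and Papert analyzed is XOR (exclusive-or), which no single-layer perceptron can compute but which a single hidden layer handles easily. A sketch with hand-chosen weights (learned weights would do the same job):

```python
# XOR via one hidden layer with hand-chosen weights. A lone perceptron
# cannot represent XOR (Minsky and Papert's classic example); two
# hidden units make the problem linearly separable for the output unit.

def step(x):
    """Threshold activation: fire (1) if the weighted sum is positive."""
    return 1 if x > 0 else 0

def xor_net(a, b):
    h1 = step(a + b - 0.5)   # hidden unit 1 fires for "a OR b"
    h2 = step(a + b - 1.5)   # hidden unit 2 fires for "a AND b"
    # Output fires for "OR but not AND", i.e., exactly one input set.
    return step(h1 - h2 - 0.5)

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
```

The hidden units re-describe the inputs so that the final unit faces an easy, linearly separable decision; stacking many such layers is exactly what “deep” learning refers to.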
Research in Machine Learning was also greatly facilitated by the
growing availability of training data in computer-readable form.
But a dominant driver of Machine Learning, both then and now,
is dramatic increases in storage and processing capacity. Computer
technology has been consistently improving at a blistering pace, a
phenomenon first described in 1965 by Gordon Moore, co-founder
of Intel, now widely known as “Moore’s Law.” (Moore’s Law isn’t
actually a scientific law—it’s simply a description of a trend.) He
The History of Artificial Intelligence
that break through to the popular press that you may already be
familiar with. While I have attempted to select examples that will
augment your understanding of the field as opposed to repeat what
you already know, I would be remiss in not mentioning a few more
publicly visible highlights of progress in AI.
Probably the first objective and easily comprehensible milestone
to capture the public’s imagination was the program Deep Blue, which beat Garry Kasparov, then the world champion, in a six-game chess match in 1997.23 The program, developed by some former Carnegie Mellon University researchers hired by IBM to continue their work, was named after the company’s corporate color
and nickname—Big Blue. The match was a nail-biter—Deep Blue
triumphed only in the final game. Adding to the drama, Kasparov, a
child prodigy once considered possibly the greatest chess player of
all time (and apparently a bit of a prima donna at the age of thirty-four), promptly accused IBM of cheating, based mainly on his conviction that a machine could never have formulated such brilliant
strategies.
In any case, this victory, after decades of missed predictions by
overly optimistic prognosticators, received widespread attention
and sparked endless debates about what it “meant” for human supremacy over machines. Chess had long been held out as a bastion of intellectual achievement likely to resist any attempt at automation. But like most if not all such encroachments by technology into
formerly exclusively human domains, the accomplishment was
soon accepted as routine rather than a call to arms that mechanical
minds were approaching from all directions to take over the world.
Those downplaying the import of the victory mostly focused on
the role of the specially designed supercomputer used for the task
rather than the sophisticated programming techniques developed
by the team, which suited IBM just fine, since the company was
in the business of selling the latest and greatest hardware. Today,
expert-level computer chess-playing programs are commonplace
and so powerful that they are no longer routinely pitted against
human players. Instead, numerous computer-only championship contests are held annually, for instance, by the International Computer Games Association.24 By 2009, chess programs capable of grandmaster-level play could be run on a garden-variety
smartphone.
(more on this in the next chapter), it had 1.5 billion neurons (referred to as “parameters”), dwarfing the 1,000 in Rosenblatt’s original Perceptron. Because of its ease of use and breadth of knowledge, by January of 2023 over 100 million users had registered to use it, making it the fastest-growing consumer application in history.30
2
GENERATIVE ARTIFICIAL
INTELLIGENCE (GAI)
response. Relative to the training step, this process is quick and easy.
But how does it transform your input into a response?
You may have noticed that when you compose a text message
on your phone or enter a question into the search bar of an Internet
browser, it helpfully suggests possible next words, offering you the
option to pick what you want off a list instead of typing it out in full.
For instance, when I enter “What’s the name of a bird that catches” into
my Google Chrome browser, it proposes that the next word might be
“fish.” Its second suggestion is “flies,” and, for an interesting reason, the third suggestion is “water.”2 How does it know this? Because it
is consulting an enormous list of search queries that other users have
entered in the past.3 By perusing this list, it can see that many people
have entered similar queries, and the most common next word was
“fish,” then “flies,” then “water.” Simple enough.
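That frequency-counting scheme is easy to mimic. In the sketch below the “log” of past queries is invented for illustration; a real search engine consults billions of them:

```python
from collections import Counter

# A toy log of past queries (invented for illustration).
past_queries = [
    "name of a bird that catches fish",
    "bird that catches fish at sea",
    "bird that catches flies",
    "insect that catches flies",
    "bird that catches water in its beak",
    "bird that catches fish with its feet",
]

def suggest(prefix, log):
    """Rank candidate next words by how often they followed `prefix`."""
    prefix_words = prefix.lower().split()
    n = len(prefix_words)
    counts = Counter()
    for query in log:
        words = query.lower().split()
        for i in range(len(words) - n):
            if words[i:i + n] == prefix_words:
                counts[words[i + n]] += 1
    return [word for word, _ in counts.most_common()]

print(suggest("that catches", past_queries))
```

On this toy log the ranking comes out “fish,” then “flies,” then “water,” matching the browser behavior described above; the only machinery involved is counting.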
This technique, scaled up, is a first approximation of what LLMs
do. They expand this “guess the next word” technique to much
longer sequences. However, it’s important to understand that the
analysis and the guesswork aren’t actually performed on words themselves; instead they are done on so-called tokens—which represent parts of words—and the tokens are further expressed in a form
called “embeddings,” which are designed to capture their meaning,
as I will explain below. For didactic clarity, I will first describe this
process as though an LLM processes words directly, but please stay
tuned to understand how this is merely a thumbnail sketch of a
much deeper and more powerful process.
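To make the token and embedding distinction concrete, here is a deliberately tiny sketch. The subword vocabulary and the embedding vectors are both invented (real systems learn them, for example via byte-pair encoding and gradient descent), but the pipeline shape, words to tokens to vectors, is the same:

```python
import math

# Toy subword vocabulary. Real LLMs learn one automatically
# (e.g., via byte-pair encoding); these pieces are invented.
vocab = ["un", "believ", "able", "catch", "es"]

def tokenize(word):
    """Greedily split a word into the longest matching subword tokens."""
    tokens, i = [], 0
    while i < len(word):
        for piece in sorted(vocab, key=len, reverse=True):
            if word.startswith(piece, i):
                tokens.append(piece)
                i += len(piece)
                break
        else:
            tokens.append(word[i])  # unknown: fall back to one character
            i += 1
    return tokens

# Toy embeddings: one small vector per token. Real models use hundreds
# or thousands of learned dimensions; these numbers are made up.
embeddings = {
    "catch": [0.1, 0.8, 0.2],
    "es":    [0.0, 0.1, 0.9],
    "bird":  [0.2, 0.7, 0.1],  # placed close to "catch" by construction
}

def cosine(u, v):
    """Similarity of two embeddings: 1.0 means identical direction."""
    norm = lambda w: math.sqrt(sum(a * a for a in w))
    return sum(a * b for a, b in zip(u, v)) / (norm(u) * norm(v))

print(tokenize("unbelievable"))  # subword tokens, not whole words
print(cosine(embeddings["catch"], embeddings["bird"]))
```

The point of the vectors is that related tokens end up near each other, so the model can generalize from sequences it has seen to sequences that merely mean something similar.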
Working on what words “mean” instead of the words themselves
makes all the difference. It’s one thing to statistically predict the next
word in a sequence. It is quite another to convert the sequence into
some plausible representation of meaning (an embedding), then
select what the response should mean, and convert that back into
words. In my opinion, that’s a pretty reasonable definition of general
intelligence.
If an LLM’s training set is large enough, it’s possible that it
contains a sequence of words that exactly matches your question,
so the program can, in principle, just look up the answer. But even
if the precise sequence does not appear in the collection, there are
likely to be enough near misses that the program can make a decent
guess as to what the next word is likely to be.
A near miss might be that most of the words in your question
appear in that order in the collection, but a few do not. For instance,