Digital Bullshit: Artificial Intelligence That’s All too Human

James A. Montanye

The philosopher Harry Frankfurt made ripples two decades ago with the publication of a thin volume
titled On Bullshit (Princeton, 2005). The gist of Frankfurt’s exegesis was that bullshit entails assertions
that lack a “connection to a concern with truth,” reflect an “indifference to how things really are,” and
yet are not grounded “in a belief that [what is said] is not true, as a lie must be.” When “convinced that
reality has no inherent nature, which [the individual] might hope to identify as the truth about things, he
devotes himself to being true to his own nature. It is as though he decides that since it makes no sense
to try to be true to the facts, he must therefore try instead to be true to himself” (pp. 33–34; 61–66).

Cutting-edge information technology known as “generative artificial intelligence” (GAI)—ChatGPT is a prominent example—has demonstrated the all-too-human penchant for committing bullshit.
CBS’s 60 Minutes news program recently reported that a GAI application, which may have been insufficiently trained to write comprehensively on a particular subject, not only fabricated pseudo-facts but also created pseudo-authoritative citations to non-existent books and articles in order to support those fabrications.

The GAI application neither “knew” that it was bullshitting, nor was it programmed to do so. Its
developers characterized their creation’s autonomous excursions as digital “hallucinations.” The
application evidently behaved as it did (as if led by an invisible hand) as an unintended result of having
digested vast quantities of human communications.

GAI applications emerged from “expert” data-processing systems that reduce the design and coding requirements for transforming large databases into formatted results. These systems are strictly “defined data in, defined data out,” with bogus results aptly characterized as “garbage in, garbage out.”

GAI systems, by contrast, “learn” by digesting information from a potentially open-ended universe of digitized sources; e.g., the Internet. The links forged by a GAI application capture implicit knowledge about the structure of communications as well as their substance. The broader the exposure to diverse information, the greater the likelihood that digital hallucinations will occur.

The information universe studied by GAI applications contains the results of pragmatic human quests for
truth and wisdom. However, that universe also embodies rhetorical techniques concealing strategic
falsehoods. Plato noted that rhetoric can be wholly inconsistent with the quest for truth. The
hallucinations experienced by GAI applications closely resemble the wayward Sophists’ “just win, baby!” attitude towards rhetoric, a perfunctory result of having digested information indiscriminately.

The prominent evolutionary biologist Richard Dawkins, in The Selfish Gene (Oxford, 1989 [1976]),
observed that communicative behavior is inherently manipulative: “Whenever a system of
communication evolves, there is always the danger that some will exploit the system for their own ends.
... we must expect lies and deceit [and bullshit], and selfish exploitation of communication to arise
whenever the interests of ... different individuals diverge” (p. 65). GAI’s hallucinations resemble
manipulative human behavior, as manifested through persuasive rhetoric.

GAI knowledge is a mix of truths, falsehoods, wisdom, and deception, qualities representing the totality
of pragmatic reason and human experience. Reason and experience gathered indiscriminately will yield
unequivocal truths and wisdom only by grand coincidence. The upshot is that GAI outputs must be read
and interpreted with the same caution that is appropriate for all self-interested communication. The
overarching hazard is that GAI’s digital hallucinations can make truth and wisdom on the one hand indistinguishable from concentrated lies and bullshit on the other.

News organizations often rate lies on scales such as “Pinocchios” and “pants on fire.” Bullshit, by comparison, is better characterized on a scale of one to five Minotaurs, or “Minnies”—the Minotaur being a mythological creature that is part man, part bull. GAI outputs can be rated in similar fashion, only with the icons’ lower extremities represented by black boxes.
