
27

Mis- and dis-information

Don Fallis

Information is a powerful tool. It allows us to cure diseases. It allows us to send rockets into
space. More generally, it allows us to carry out a whole variety of joint projects of immense
value. Work in philosophy can help us to understand and improve this process. According to
Luciano Floridi (2011: 15), the philosophy of information focuses on “how information should
be adequately created, processed, managed, and used.”
But as Floridi (1996: 509) emphasizes, we should also be concerned with when “the
process of information is defective.” Kay Mathiesen (2014) provides a useful classification
of the various things that can go wrong. Sometimes the problem is that the information that
we need does not exist. It may have been destroyed (as with the Watergate tapes) or it may not
have been collected in the first place. Sometimes the information exists, but we cannot get
access to it. It may have been censored by our government. It may be too expensive for us to
purchase, or we may simply lack the tools and/or skills to find it. Sometimes we can access
the information, but it is of insufficient quality to serve our purposes. It may not be legible. It
may not be comprehensible. It may not be up-to-date. It may not have the precision that we
need, or it may actually be misleading. This chapter focuses on this final issue of inaccurate
and misleading information.
Whenever an information process is defective, it can have profound epistemological and
practical consequences. If we are unable to access high quality information, we may fail to
acquire knowledge that it would be useful for us to have. But misinformation and disinformation
can actually put us into an even worse epistemic state than we started from. Moreover, the
acquisition of false beliefs can potentially cause serious financial and physical harm. For instance,
errors in databases have cost companies and consumers millions of dollars (see English 1999:
7–10). Misinformation about the risks of vaccination has cost lives (see Poland and Jacobson
2011). Indeed, disinformation about the existence of weapons of mass destruction has arguably
led to an unnecessary and “disastrous war” (see Carson 2010: 212–21).
In addition to being a pressing practical problem of the Information Age, misinformation
and disinformation raise important epistemological and ethical questions. How can we acquire
knowledge from the information that others provide if it might be inaccurate and misleading
(cf. Hume 1977; Fallis 2004)? Also, is the intentional dissemination of inaccurate and
misleading information ever morally justified (cf. Kant 1959; Bok 1978)? In order to answer
such questions, we need to understand what misinformation and disinformation are (cf.
Carson 2010: 13).
Philosophers have proposed several analyses of both of these concepts. That is, they have
tried to identify a concise set of necessary and sufficient conditions that correctly determines
whether or not something falls under the concept in question. This chapter provides a survey
of these analyses of misinformation and disinformation. In addition, it discusses the main
problems that each of these analyses face.
When philosophers raise worries about conceptual analyses, these worries usually take the form of
proposed counter-examples. The most serious proposed counter-examples purport to show
that an analysis is too broad. For instance, Edmund Gettier (1963) famously argued against
the justified true belief analysis of knowledge by giving examples of justified true beliefs that we
would definitely not want to classify as knowledge. Likewise, an analysis of misinformation
(or disinformation) might be criticized on the grounds that it includes cases that should not be
classified as misinformation (or disinformation). It is also problematic though if an analysis is too
narrow. For instance, an analysis of misinformation (or disinformation) might be criticized on
the grounds that it excludes cases that should be classified as misinformation (or disinformation).
This chapter ends with a brief discussion of the epistemology and ethics of misinformation
and disinformation.

Misinformation
Misinformation comes in many different varieties. The headline of The Chicago Tribune
on November 3, 1948, which reported that “Dewey Defeats Truman,” is an example of
misinformation that results from an honest mistake. Websites on treating fever in children
sometimes recommend that parents administer aspirin, despite the fact that it could lead to
a dangerous disorder known as Reye’s syndrome (see Impicciatore et al. 1997: 1876). This is
(probably) an example of misinformation that results from ignorance. Even when politicians
believe what they say about an issue, such as climate change, they often believe it because
it fits with what they already believe rather than because the evidence supports it. Thus,
they can be a source of misinformation that results from unconscious bias (see Edelman 2001:
5–8). Finally, the shepherd boy’s cry, “There is a wolf chasing my sheep!” is an example of
misinformation that results from intentional deception. He intended to mislead the villagers
about the presence of a wolf so that they would come running. This sort of misinformation
is commonly referred to as disinformation. In other words, disinformation is the species of
misinformation that is intended to mislead people.1

All of these types of misinformation have the potential to mislead people. That is, they can
cause false beliefs. But what exactly makes something misinformation?

Misinformation as false information


Christopher Fox (1983: 201) makes the plausible suggestion that misinformation is simply
information that is false (cf. Floridi 2011: 260). But what exactly does this analysis amount to?
Many different analyses of information have been proposed (see Fox 1983: 39–74; Floridi
2010; Floridi 2011: 81–82). Fortunately, a discussion of misinformation does not require that
we settle on a specific analysis of information. It does require, however, the assumption that
information has semantic content (see Scarantino and Piccinini 2010: 324; Floridi 2011: 80).2
Basically, information has to be something that represents some part of the world as being a
certain way. For instance, the text “The cat is on the mat” represents the cat as being on the
mat. A piece of information is true if it represents the world as being a way that it is, and it
is false (i.e., it is misinformation on Fox’s analysis) if it represents the world as being a way
that it is not.3 For instance, “The cat is on the mat” is true if the cat is on the mat and false if
the cat is not on the mat.
The main problem that has been raised for this analysis is actually not a counter-example.
Despite its intuitive appeal, some philosophers (e.g., Dretske 1983: 57; Floridi 2011: 93–104)
suggest that Fox’s analysis is simply incoherent. They claim that there is no such thing as
false information (that it is a contradiction in terms). As Fred Dretske (1983: 57) puts it, “false
information, misinformation, and (grimace!) disinformation are not varieties of information
– any more than a decoy duck is a kind of duck.” According to these philosophers, something
can only count as information if it is true. Fred Dretske (1981: 46) writes that

information is, after all, a valuable commodity. We spend billions on its collection,
storage, and retrieval. People are tortured in attempts to extract it from them.
Thousands of lives depend on whether the enemy has it. Given all the fuss, it would
be surprising indeed if information had nothing to do with truth.

Even if information must be true, however, we can still give an analysis of misinformation
that is in the same spirit as Fox’s analysis. Namely, as Floridi (2011: 260) suggests, we might
say that misinformation is “semantic content that is false.”
Alternatively though, we might stick with Fox’s analysis by adopting a broader notion of
information. While the term ‘information’ is sometimes used in a sense that implies truth, it
is also commonly used in a sense that does not (see Fox 1983: 157; Fetzer 2004a; Scarantino
and Piccinini 2010). Andrea Scarantino and Gualtiero Piccinini (2010: 323–26) point out
that computer scientists and cognitive scientists use the term information in a way that does
not require that it be true. Similarly, when information scientists say that a library is full
of information, they do not mean to be referring to just that subset of the collection that
happens to be true.
For the sake of simplicity of expression, this chapter adopts the broad notion of
information such that information can be false. But even setting aside the controversial
question of whether or not information must be true, Fox’s analysis faces several potential
counter-examples. First, there are two respects in which it seems to be too broad. That is, it
is not clear that all instances of false information count as misinformation.
As Graham Oddie (2014) points out, different pieces of false information can be closer to
or further from the truth. Consider the following two claims:

The distance from New York City to Los Angeles is 2,500 miles.
The distance from New York City to Los Angeles is 25,000 miles.

Both claims overestimate the actual distance between the two cities.4 Thus, they are
both false. In most contexts though, we would only want to count the second claim as
misinformation. The first claim only overestimates the distance by fifty miles or so. Thus, it
is sufficiently close to the truth for most practical purposes.
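To make the talk of being closer to or further from the truth concrete, here is a minimal arithmetic sketch in Python. The “true” great-circle distance of roughly 2,450 miles is an assumed figure used only for illustration.

```python
# Toy illustration of "closeness to the truth" for the two distance claims.
# The true distance of ~2,450 miles is an assumed figure, for illustration only.
TRUE_DISTANCE = 2450  # miles (assumed great-circle distance, New York to Los Angeles)

for claim in (2500, 25000):
    relative_error = (claim - TRUE_DISTANCE) / TRUE_DISTANCE
    print(f"Claimed {claim} miles: off by {claim - TRUE_DISTANCE} miles "
          f"({relative_error:.0%} overestimate)")

# Output:
#   Claimed 2500 miles: off by 50 miles (2% overestimate)
#   Claimed 25000 miles: off by 22550 miles (920% overestimate)
```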
In addition to counting such simplified information as misinformation, Fox’s analysis
counts many jokes and sarcastic comments as misinformation. For instance, the headline
of the satirical newspaper The Onion on June 30th, 2008, which reported that “Al Gore
Places Infant Son in Rocket to Escape Dying Planet,” is certainly false. Even so, it is not
misinformation. It is just a joke and is not likely to mislead people.5


Second, there are two respects in which Fox’s analysis seems to be too narrow. That is, it
is not clear that all misinformation is false.
As noted above, a few websites on treating fever in children actually advise parents to
administer aspirin. But several websites simply fail to warn parents not to administer aspirin
(see Impicciatore et al. 1997: 1876). Even if all of the information that is actually posted
on such a website is perfectly true, the website as a whole still seems to be an example of
misinformation. There is the implication that all of the important stuff about treating fever
that you should know is being presented.6 As a result, incomplete information can be as
misleading (and as dangerous) as false information (see Fallis 2004: 468).
In addition to failing to count incomplete information as misinformation, Fox’s analysis
may fail to count inaccurate and misleading images as misinformation. Images as well as
propositions can have semantic content (see Chen and Floridi 2013). For instance, a drawing
can represent the cat as being on the mat. Also, a map can represent one island as being
much larger than another island. Moreover, such images can be misleading. Despite how it
is portrayed, the cat might not actually be on the mat. Also, the island might not be nearly as
large as it is depicted to be (see Monmonier 1991: 45–46). However, it is not clear that such
images are false period and, thus, would count as misinformation on Fox’s analysis.
While propositional information (such as “The cat is on the mat”) is always either true
or false, it makes more sense to say that visual information is simply more or less accurate
(cf. Floridi 2010: 50). For instance, a map might misrepresent the relative sizes of certain
geographical features to a greater or lesser degree. Also, a map might get the relative sizes
right, but get the relative locations wrong, or vice versa.
We do say that a conjunction is false even if just one of its conjuncts is false. Similarly, we might
say that a map is false if there is any way in which it represents the world as being a way that it is
not. This move would allow Fox to count inaccurate and misleading images as misinformation.
But it would also create another respect in which his analysis is too broad. Maps often include
small inaccuracies that are intended to enhance comprehensibility. For instance, if roads were
really drawn to scale, they would be too small to see (see Monmonier 1991: 30). In a similar
vein, subway maps frequently sacrifice “geometric accuracy” in order to more effectively convey
the information that riders need (see Monmonier 1991: 34–35). As with the numerical example
above, despite the small misrepresentations, such simplified information is not misinformation.

Misinformation as inaccurate information


Peter Hernon (1995: 134) suggests that misinformation is information that is inaccurate. This
analysis is in the same spirit as Fox’s analysis. In fact, it subsumes Fox’s analysis because false
information is a type of inaccurate information. However, Hernon’s analysis avoids some of
the problems with Fox’s analysis.7
First, whereas incomplete information can be perfectly true, there is an important sense in
which incompleteness can be a type of inaccuracy. For instance, a website that fails to warn parents
not to administer aspirin diverges significantly from the truth about treating fever in children.
Thus, unlike Fox’s analysis, Hernon’s analysis can count such a website as misinformation.
Even so, Hernon’s analysis may still fail to count some cases of incomplete information
as misinformation. Incompleteness need not always be a type of inaccuracy. For instance,
when pursuers – who did not recognize him – asked Saint Athanasius, “Where is the traitor
Athanasius?,” he replied, “Not far away.” Even though Athanasius’s statement succeeded in
misleading his pursuers about his identity (precisely as it was intended to do), Athanasius’s
statement was perfectly accurate. Yet it still seems like an example of misinformation.8


Second, unlike Fox’s analysis, Hernon’s analysis clearly applies to visual information
as well as propositional information. As it stands though, Hernon’s analysis still counts
simplified information, such as a geographically inaccurate subway map, as misinformation.
However, since inaccuracy is something that comes in degrees, Hernon’s analysis might
easily be modified to include a threshold of inaccuracy that misinformation must meet. It
could be some set level of inaccuracy (e.g., 5 percent, 10 percent, or 25 percent). Or it could
be a level of inaccuracy determined by the purpose for which the information is intended to
be used. Either way, very few maps will count as misinformation because, despite their small
inaccuracies (intentional or otherwise), most maps fail to exceed this threshold.9
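A minimal sketch of how such a modified analysis might be expressed is given below. The specific thresholds and purposes are hypothetical values chosen only to illustrate that the cutoff can be fixed or purpose-relative; nothing here comes from Hernon himself.

```python
# Sketch of Hernon's analysis with an inaccuracy threshold (hypothetical values).
# Inaccuracy is taken as a fraction in [0, 1]; the thresholds are illustrative only.
PURPOSE_THRESHOLDS = {
    "navigating the subway": 0.25,   # riders tolerate large geometric distortion
    "surveying property":    0.01,   # surveyors tolerate almost none
}

def is_misinformation(inaccuracy: float, purpose: str, default_threshold: float = 0.10) -> bool:
    """Count information as misinformation if its inaccuracy exceeds the threshold
    appropriate to the purpose for which the information is intended to be used."""
    threshold = PURPOSE_THRESHOLDS.get(purpose, default_threshold)
    return inaccuracy > threshold

# A geometrically distorted subway map (say, 20 percent inaccurate) passes for riders
# but would count as misinformation for a surveyor.
print(is_misinformation(0.20, "navigating the subway"))  # False
print(is_misinformation(0.20, "surveying property"))     # True
```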
But whatever threshold we adopt, Hernon’s analysis still seems to be too broad. For
instance, the headline in The Onion about Al Gore is certainly inaccurate in the extreme, but
(as noted above) it is not misinformation.

Misinformation as misleading information


Brian Skyrms (2010: 80) suggests that misinformation is “misleading information.” That is,
it is information that has the propensity to cause false beliefs. According to Skyrms, what
makes something misinformation is not a purely semantic property (such as being false or
inaccurate). Rather, what makes something misinformation is its likely effect on someone’s
epistemic state (viz., causing her to have a false belief).
Skyrms’s analysis has the advantage of focusing on the property of misinformation that
is most troubling. After all, inaccurate information is not a big deal as long as we are able to
recognize it for what it is and avoid being misled by it. Moreover, Skyrms’s analysis avoids
some of the problems with the preceding analyses. For instance, since visual information as
well as propositional information can certainly be misleading, it clearly applies to drawings
and maps. Also, since Skyrms’s analysis does not require falsity or inaccuracy, it counts
incomplete information that is misleading as misinformation even if it is perfectly true.
As it stands though, Skyrms’s analysis arguably counts jokes and simplified information as
misinformation. Jokes and simplified information are certainly not particularly misleading.
But that does not mean that no one will ever be misled. There are going to be rare cases where
people do not get the joke or where they take some simplified information too literally. For
instance, a significant number of people (including a few serious journalists) have actually
been fooled by the satirical stories published in The Onion (see Fallon 2012). Similarly, a
novice subway rider might mistakenly assume that the map of the London Underground
is drawn to scale. Thus, just as we needed to set a threshold for inaccuracy in Hernon’s
analysis, we need to set a threshold for misleadingness in Skyrms’s analysis. In other words,
we need to specify how likely it must be that a piece of information will cause false beliefs in
order for it to count as misinformation.10
But whatever threshold we adopt, Skyrms’s analysis faces another difficulty. Whether a piece
of information meets that threshold may depend on who receives the information. For instance,
a piece of information that is likely to mislead Dr. John Watson may be unlikely to mislead
Sherlock Holmes, Esq. Thus, if we want to analyze misinformation in terms of misleadingness
as Skyrms suggests, we also need to specify who a piece of information must be likely to mislead.
One obvious suggestion is that something is misinformation for a particular person if it is likely
to mislead that person. However, this introduces a large degree of contextual dependence
into the analysis of misinformation. For instance, there may very well be children, and other
highly gullible individuals, for whom the headlines in The Onion are misinformation. Also,
if misinformation is person-relative in this way, it will sometimes be possible to change the
ontological status of a piece of information simply by altering the person who receives it. For
instance, a person might gather sufficient evidence on a topic, or she might develop sufficient
critical thinking skills, that she is unlikely to be misled by information that previously would
have been very likely to mislead her. Since this information no longer has a propensity to
cause her to acquire false beliefs, it no longer counts as misinformation for this person.
In order to avoid this sort of contextual dependence, we might utilize a “reasonable
person” standard, along the lines of the Federal Trade Commission’s definition of deceptive
advertising (see Fallis 2009: §4.5; Carson 2010: 187). That is, we might say that a piece of
information is misinformation period if it is likely to mislead a reasonable person.11 On this
modification of Skyrms’s analysis, the headlines in The Onion are not misinformation even if
a few extremely credulous individuals are regularly taken in.12 Also, prototypical instances of
misinformation are still misinformation even if they would never fool Sherlock Holmes. In
a similar vein, iocane powder is still poison even though the Dread Pirate Roberts has “spent
the last few years building up an immunity.”
Finally, even with this further modification, since Skyrms’s analysis of misinformation does
not require it to be false or inaccurate, there may be a respect in which it is too narrow. For
instance, suppose that everyone is well aware that island A is about the same size as island B. In
that case, no one is likely to be misled by a map that depicts it as being much larger. If such a map
is not intended as a joke (i.e., if it is seriously intended to represent the way that the world is, but
fails to do so), we might consider it to be misinformation despite its not being at all misleading.

Disinformation
Most philosophers have focused on the one specific type of misinformation known as
disinformation. Other types of misinformation can sometimes be just as dangerous as
disinformation (e.g., those websites that advise parents to administer aspirin to children with
fever). But in the same way that acts of terrorism tend to be more troubling than natural
disasters, disinformation is a particularly problematic type of misinformation because it
comes from someone who is actively engaged in an attempt to mislead (see Piper 2002: 8–9;
Fetzer 2004b). Moreover, in addition to directly causing harm, disinformation can harm
people indirectly by eroding trust and thereby inhibiting our ability to effectively share
information with each other.
Disinformation is commonly associated with government or military activity. As George
Carlin quipped, “the government doesn’t lie, it engages in disinformation.” A standard
example is Operation Bodyguard, a World War II disinformation campaign intended to hide the
planned location of the D-Day invasion. Among other deceits, the Allies sent out fake radio
transmissions and created fraudulent military reports in a successful attempt to convince the
Germans that a large force in East Anglia was ready to attack Calais rather than Normandy
(see Farquhar 2005: 72).13
However, many non-governmental organizations, such as political campaigns and
advertisers, also disseminate intentionally misleading information. In fact, single individuals
are often the source of disinformation. For instance, individual reporters, such as Jayson
Blair of the New York Times and Janet Cooke of the Washington Post, have made up news
stories that have misled their readers (see Farquhar 2005: 25–29). Also, there are several
high-profile cases of purported memoirs that turned out to be fiction, such as James Frey’s
A Million Little Pieces.
Disinformation is often distributed very widely (to anyone with a newspaper subscription,
to anyone with a television, to anyone with internet access, etc.). This is typically the case
with government propaganda and deceptive advertising. But disinformation can also be
targeted at specific people or organizations. This is humorously illustrated in a cartoon by
Jeff Danziger (of the Los Angeles Times) that shows a couple working on their taxes. The
caption is “Mr. and Mrs. John Doe (not their real names) hard at work in their own little
Office of Strategic Disinformation.” Such disinformation is presumably aimed directly at the
Internal Revenue Service.
In addition, while the intended victim of the deception is usually a person or a group of
people, disinformation can also be targeted at a machine. For instance, as Clifford Lynch
(2001: 13–14) points out, managers of websites sometimes try to fool the automated
“crawlers” sent out by search engines to index the internet. Suppose that you have just started
selling a product that competes with another product Y. When an automated crawler asks
for your webpage to add to its index, you might send it a copy of the webpage for product Y.
That way, when someone uses the search engine to search for product Y, the search engine
will return a link to your webpage.
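The cloaking trick that Lynch describes can be sketched in a few lines of Python. The page contents and the crawler identifiers are assumptions made for illustration, not details drawn from Lynch.

```python
# Minimal sketch of the "cloaking" trick described above: serve one page to
# search-engine crawlers and a different page to ordinary visitors.
# The page texts and the crawler user-agent substrings are hypothetical.
COMPETITOR_PAGE = "<html>...copy of the page describing product Y...</html>"
REAL_PAGE = "<html>...the page actually selling our own product...</html>"

KNOWN_CRAWLERS = ("googlebot", "bingbot")  # illustrative user-agent substrings

def page_for(user_agent: str) -> str:
    """Return the competitor's page to crawlers (so that searches for product Y
    lead here), and the real page to everyone else."""
    ua = user_agent.lower()
    if any(bot in ua for bot in KNOWN_CRAWLERS):
        return COMPETITOR_PAGE
    return REAL_PAGE

print(page_for("Googlebot/2.1") == COMPETITOR_PAGE)  # True: the index is misled
print(page_for("Mozilla/5.0") == REAL_PAGE)          # True: the visitor sees our page
```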
Like misinformation in general, all of these types of disinformation have the potential to
mislead people (or machines). With disinformation, however, it is no accident that people are
misled. But what exactly makes something disinformation?

Disinformation as lies
James Fetzer (2004b: 231) makes the plausible suggestion that disinformation “should be
viewed more or less on a par with acts of lying. Indeed, the parallel with lying appears to be
fairly precise.” In other words, disinformation is a statement that the speaker believes to be
false and that is intended to mislead.
However, Fetzer’s analysis faces several potential counter-examples. First, there are
two respects in which it seems to be too broad. That is, it is not clear that all lies count
as disinformation. Someone who intends to spread disinformation with a lie might not
succeed in doing so. Even though she believes that what she says is false, it might actually
(unbeknownst to her) be true (see Mahon 2008: §1.2). While such accidental truths are lies,
they are not disinformation because they do not have the potential to mislead anyone.14
In addition to counting accidental truths as disinformation, Fetzer’s analysis counts
implausible lies as disinformation. Even if she says something that actually is false, someone
who intends to spread disinformation still might not succeed in doing so. For instance, even
though they are (unrealistically) intended to be misleading, lies that no one will believe are
not disinformation because they do not have the propensity to cause false beliefs.

Second, there are two respects in which Fetzer’s analysis seems to be too narrow. That is,
it is not clear that all instances of disinformation are lies. Lies are linguistic expressions, such
as “A wolf is chasing my sheep!” (see Mahon 2008: §1.1). However, doctored photographs
and falsified maps are also common instances of disinformation (see Monmonier 1991: 115–
18; Farid 2009: 98). It is no accident when people are misled by such visual disinformation,
because that is precisely what the source of the information intended.
In addition to failing to count intentionally misleading images as disinformation, Fetzer’s
analysis fails to count half truths as disinformation. Although prototypical instances of
disinformation are false (and believed by the source to be false), disinformation (just like
misinformation) can sometimes be true. As Thomas Carson (2010: 57–58) explains, “half-
truths are true statements or sets of true statements that selectively emphasize facts that
tend to support a particular interpretation or assessment of an issue and selectively ignore or
minimize other relevant facts that tend to support contrary assessments.” Athanasius’s truthful
statement to his pursuers is an example of disinformation of this sort. In addition, politicians
often use spin to mislead the public without saying anything false (see Manson 2012). Like
prototypical instances of disinformation, such true disinformation is intentionally misleading.15

Disinformation as inaccurate information that is intended to mislead


Floridi (2011: 260) suggests that disinformation is “misinformation purposefully conveyed
to mislead the receiver into believing that it is information.”16 Recall that Floridi thinks that
information is accurate semantic content whereas misinformation is inaccurate semantic
content. So, if we adopt the broad notion of information (such that information can be
inaccurate) when we state Floridi’s analysis, the idea is simply that disinformation is
inaccurate information that the source intends to mislead the recipient. This analysis is in
the same spirit as Fetzer’s analysis. However, Floridi’s analysis avoids some of the problems
with Fetzer’s analysis.
First, since Floridi requires that disinformation be inaccurate, accidental truths do not
count as disinformation. Second, since Floridi’s analysis applies to visual information as well
as propositional information, intentionally misleading images count as disinformation.
Even so, Floridi’s analysis still seems to be too broad. An implausible lie is inaccurate
information that the source intends to mislead the recipient. Thus, it counts as disinformation
on Floridi’s analysis. But as noted above, an implausible lie does not have the propensity to
cause false beliefs. Also, Floridi’s analysis still seems to be too narrow. A half truth is not
inaccurate information. Thus, it does not count as disinformation on Floridi’s analysis.
In fact, there is another respect in which Floridi’s analysis may be too narrow. Although
disinformation is always misleading (i.e., it always has the propensity to cause false beliefs), it
is not always intended to mislead. For instance, inaccurate information has been intentionally
placed on the internet for purposes of education and research (see Hernon 1995; Piper 2002: 19).
A fake website advertising a town in Minnesota as a tropical paradise was created to teach people
how to identify inaccurate information on the internet. In such cases, while the educators and
researchers certainly foresee that people might be misled by their inaccurate information, they
do not intend that anybody actually be misled. Even so, such side effect disinformation probably
should count as disinformation.17 Just as with prototypical instances of disinformation, it is
no accident when people are misled. Although the educators and researchers do not intend to
mislead anyone, they do intend their inaccurate information to be misleading. For instance, a
fake website would not be a very effective tool for teaching people how to identify inaccurate
information on the internet if it was clear to everyone that it was a fake.18

Disinformation as misleading information that is intended to be misleading


Don Fallis (2009: §5) suggests that disinformation is “misleading information that is intended
to be (or at least foreseen to be) misleading.” This analysis avoids many of the problems with
Fetzer’s and Floridi’s analyses. First, since Fallis explicitly requires that disinformation be
misleading, accidental truths and implausible lies do not count as disinformation. Second,
since Fallis does not require that disinformation be inaccurate, true disinformation counts as
disinformation. Third, Fallis does not require that disinformation be intended to mislead. It
is sufficient that it be intended to be misleading. Thus, side effect disinformation counts as
disinformation on Fallis’s analysis.
Even so, Fallis’s analysis faces several potential counter-examples. First, there is one
respect in which it seems to be too broad. Strictly speaking, Fallis does not even require
that disinformation be intended to be misleading. It is sufficient that it be foreseen to be
misleading. As a result, Fallis’s analysis incorrectly counts some subtle forms of humor as
disinformation. For instance, as the editors of The Onion are no doubt aware, their articles
have the propensity to cause false beliefs in some (highly gullible) readers. However, since
the editors do not intend these articles to be misleading to these people, it is an accident if
anyone is misled.
However, Fallis’s analysis can easily be modified so that it does not count such satire as
disinformation. We can simply leave off the “foreseen to be misleading” clause and say that
disinformation is misleading information that is intended to be misleading.
Second, there are a couple of respects in which Fallis’s analysis seems to be too narrow.
Someone can clearly spread disinformation even if she does not intend what she says to be
misleading. For instance, a press secretary might innocently pass along disinformation on
behalf of her boss. But in that sort of case, there is someone (namely, the boss) who does
intend that people be misled. So, this sort of case does count as disinformation on Fallis’s
analysis. However, as I discuss in the following section, there are yet other cases that do
indicate that Fallis’s analysis is too narrow.

Adaptive disinformation
All three of the analyses of disinformation discussed so far require that the source of the
information intend that it be misleading. Indeed, that is how disinformation was characterized
at the outset of this chapter. However, even if the source of a piece of information does not
intend that it be misleading, it may be no accident that it is misleading.
Many species of animals give fake alarm calls (see Skyrms 2010: 73–75). When animals
give alarm calls, there is usually a predator in the vicinity. However, about 10 to 15 percent
of the time, animals give alarm calls even when there is no imminent threat. In such cases,
the call causes other animals of the same species to run away and leave food behind which
the caller can then eat.
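The payoff structure behind such fake calls can be illustrated with a toy simulation. The numerical payoffs and the 12 percent fake-call rate are assumptions chosen only to roughly match the 10 to 15 percent figure cited above; only the qualitative point matters.

```python
import random

# Toy model of fake alarm calls. All payoff numbers are illustrative assumptions,
# not data: the point is only that occasional fake calls can yield a positive
# expected benefit, i.e. there is a mechanism reinforcing the misleading signal.
FAKE_CALL_RATE = 0.12   # assumed fraction of calls made with no predator present
FOOD_GAINED = 1.0       # assumed benefit when conspecifics flee and leave food
CALL_COST = 0.05        # assumed energetic cost of giving any alarm call

def extra_benefit_from_faking(trials: int = 100_000) -> float:
    """Average benefit per call that the caller gets from fake calls alone."""
    total = 0.0
    for _ in range(trials):
        if random.random() < FAKE_CALL_RATE:   # no predator, but call anyway
            total += FOOD_GAINED - CALL_COST   # the others flee; the caller eats
    return total / trials

print(f"Expected extra benefit per call: {extra_benefit_from_faking():.3f}")
# Roughly FAKE_CALL_RATE * (FOOD_GAINED - CALL_COST) ~ 0.11 > 0, so the deceptive
# signal is systematically reinforced even without any intention to mislead.
```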
There is some evidence that primates understand that other primates can have false beliefs
(see Fitzpatrick 2009). Thus, it could be that primates do intend to mislead conspecifics
into believing that a predator is nearby with their fake alarm calls. However, when less
sophisticated animals who lack this understanding, such as birds and squirrels, give fake
alarm calls, they do not intend their calls to be misleading. Even so, these animals (or at least
their genes) systematically benefit from giving such deceptive signals.19 In other words, there
is a mechanism that reinforces the dissemination of this sort of misleading information.
Namely, the caller gets a free meal. Thus, like prototypical instances of disinformation, such
adaptive disinformation is not misleading by accident.
If the three preceding analyses (Fetzer’s, Floridi’s, and Fallis’s) simply failed to count the
deceptive signals of animals as disinformation, it might not be a serious objection. After all, it
is not clear that alarm calls have semantic content and, thus, count as information. However,
humans can also disseminate adaptive disinformation. For instance, many of the people who
disseminate conspiracy theories (e.g., that the President was not born in the United States
or that the United States government was behind the 9/11 terrorist attacks) believe that what
they are saying is true. They do not intend what they say to be misleading. Even so, just as
with prototypical instances of disinformation, these false claims can mislead people and it is
no accident that people are misled. There is a mechanism that reinforces the dissemination
of these false claims. By promoting these false claims, certain websites and media outlets are
able to attract more readers and viewers who find these claims convincing.20


Disinformation as misleading information that systematically benefits the source


Recent work in biology on deceptive signaling in animals suggests another possible analysis
of the concept of disinformation. According to Skyrms (2010: 80), “if misinformation is sent
systematically and benefits the sender at the expense of the receiver, we will not shrink from
following the biological literature in calling it deception.” Although Skyrms and the biologists
that he cites use the term “deceptive signal” rather than the term “disinformation,” they are
trying to capture essentially the same concept. Thus, we might say that disinformation is
misleading information that systematically benefits the source at the expense of the recipient.
This analysis avoids the main problem with the three analyses that have been discussed so
far. Although people who disseminate conspiracy theories may not intend to mislead others,
they do systematically benefit from others being misled. Thus, adaptive disinformation
counts as disinformation on Skyrms’s analysis.
Even so, Skyrms’s analysis seems to be too narrow. Most of the time, disinformation
imposes a cost on the recipient, as when the villagers waste their time running to the shepherd
boy’s aid. However, disinformation need not always impose a cost on the recipient. In fact, it
is sometimes intended to benefit the recipient. For instance, when a friend asks you how he
or she looks, you might very well say, “You look great!” in order to spare his or her feelings,
even if it is not true. Admittedly, such altruistic disinformation does not pose the same risk of
harm to the recipient that prototypical instances of disinformation do. But like prototypical
instances, altruistic disinformation can be intentionally misleading.
Skyrms’s analysis can easily be modified though so that it counts altruistic disinformation
as disinformation. We can simply leave off the “at the expense of the recipient” clause and
say that disinformation is misleading information that systematically benefits the source.21
However, Skyrms’s analysis still seems to be too narrow because it rules out the possibility
of disinformation that does not benefit the source. Most of the time, disinformation does
systematically benefit the source. However, it need not always do so. For instance, in order
to avoid embarrassment, people often lie to their doctors about their diet, about how much
they exercise, or about what medications they are taking (see Reddy 2013). If their doctors
are misled, it can lead to incorrect treatment recommendations that can harm the patient.
Admittedly, this particular example of detrimental disinformation may not pose the same
risk of harm to the recipient that prototypical instances of disinformation do. But as with
prototypical instances, it is no accident that people are misled by such disinformation.

A disjunctive analysis of disinformation



An analysis of disinformation in terms of an intention to be misleading (or an intention to
mislead) seems to be too narrow. Also, an analysis in terms of a systematic benefit to the
source seems to be too narrow. But it might be possible to come up with an adequate analysis
of disinformation by combining the two. That is, it might be suggested that disinformation
is misleading information that is intended to be misleading or that systematically benefits
the source.
Such an analysis avoids all of the problems raised for the preceding analyses of
disinformation. For instance, it counts visual disinformation, true disinformation, side effect
disinformation, adaptive disinformation, and detrimental disinformation as disinformation.
Also, it does not count accidental truths or implausible lies as disinformation.
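Expressed as a predicate, the disjunctive analysis might look like the following sketch. The feature assignments for the example cases are illustrative readings of the cases discussed above, not classifications given in the chapter itself.

```python
from dataclasses import dataclass

# Sketch of the disjunctive analysis: disinformation = misleading information
# that is intended to be misleading OR that systematically benefits the source.
@dataclass
class Info:
    misleading: bool            # has the propensity to cause false beliefs
    intended_to_mislead: bool   # the source intends it to be misleading
    benefits_source: bool       # the source systematically benefits from its being misleading

def is_disinformation(x: Info) -> bool:
    return x.misleading and (x.intended_to_mislead or x.benefits_source)

# Illustrative readings of the chapter's examples (assumptions, not the author's verdicts).
half_truth       = Info(misleading=True,  intended_to_mislead=True,  benefits_source=True)   # Athanasius
conspiracy_claim = Info(misleading=True,  intended_to_mislead=False, benefits_source=True)   # adaptive
doctor_lie       = Info(misleading=True,  intended_to_mislead=True,  benefits_source=False)  # detrimental
implausible_lie  = Info(misleading=False, intended_to_mislead=True,  benefits_source=False)

for name, case in [("half truth", half_truth), ("conspiracy claim", conspiracy_claim),
                   ("lie to doctor", doctor_lie), ("implausible lie", implausible_lie)]:
    print(name, "->", is_disinformation(case))
# half truth -> True, conspiracy claim -> True, lie to doctor -> True, implausible lie -> False
```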
Such an analysis of disinformation has one major flaw though. It is disjunctive. If an analysis
requires two independent criteria, it suggests that we are really dealing with two separate
phenomena rather than just one (see Kingsbury and McKeown-Green 2009: 578–81).

Fallis (2014), however, suggests that there is something that unifies all of the cases of
disinformation discussed above. He claims that disinformation is misleading information that
has the function of misleading someone. It is just that a piece of information can acquire this
function in at least two different ways. For instance, in the case of detrimental disinformation,
it has this function because someone intended it to have this function. By contrast, in the case
of adaptive disinformation, it has this function because someone systematically benefits from
its having this function.
However, it is not clear that this move really keeps the analysis of disinformation from
being disjunctive. People certainly treat the design functions of artifacts (such as chairs) and
the etiological functions of biological organisms (such as hearts) as being two species of
the same genus (see Krohs and Kroes 2009). But it is a difficult task (and one yet to be
completed) to give a unified account of functions that subsumes one type of function under
the other (see, e.g., Vermaas and Houkes 2003).

The epistemology and ethics of mis- and dis-information


Much of our knowledge about the world comes from information that we receive from
other people (see Hume 1977: 74). If a significant amount of this information is misleading,
we may easily acquire false beliefs instead of knowledge. Moreover, we may (out of an excess
of caution) fail to acquire true beliefs that we might have acquired. Thus, the existence of
misinformation and disinformation is an important problem for the epistemology of testimony.
One response to this problem is to give people better tools for identifying inaccurate
and incomplete information so that they will be less likely to be misled. For instance, David
Hume (1977: 75) recommends that

we entertain a suspicion concerning any matter of fact, when the witnesses
contradict each other; when they are but few, or of a doubtful character; when
they have an interest in what they affirm; when they deliver their testimony with
hesitation, or on the contrary, with too violent asseverations.

In other words, when evaluating a piece of testimony, we are advised to consider things like
the authority of the source and the degree to which other sources corroborate what she says.
Along the same lines as Hume, researchers in information science have attempted to identify
features of websites that are indicative of accurate and inaccurate information on the internet
(see Fallis 2004: 472–73).

Unfortunately, information science research on indicators of accuracy and inaccuracy often
fails to differentiate among the various different types of misinformation. This is a serious
oversight as the clues that suggest that someone is lying are unlikely to be the same clues that
suggest that she just does not know what she is talking about. Conceptual analysis in this
area can help to fill this lacuna. For instance, since disinformation is very closely related to
lying, existing research on lie detection can potentially be applied to disinformation detection.
Such research often focuses on physiological indicators of deception, such as perspiration and
high pulse rate (see Vrij 2008). However, research is also being done to identify indicators of
deception in both text (see Newman et al. 2003) and images (see Farid 2009: 100–06).
But making people better evaluators of information quality is not the only way to respond
to the problem of misinformation and disinformation. As Mathiesen (2014) points out with
respect to low quality information in general, we can take steps on the production side as well
as on the receiver side. For instance, we can address the comprehensibility of information by
creating information that is easier to read as well as by creating better readers. Similarly, we
can produce information that is easier to verify as well as train people to be better verifiers.
For instance, providers of high quality information can incorporate “robust” indicators of
accuracy in much the same way that the United States Treasury utilizes security features that
make it easier for people to determine that currency is genuine (see Fallis 2004: 474–80).
In addition to making information more verifiable, we can simply try to reduce the amount
of misinformation and disinformation that is out there. One obvious strategy is to restrict
access to information that is inaccurate and incomplete. However, as John Stuart Mill (1978:
15–52) famously pointed out, such censorship is epistemically problematic since it can
just as easily keep people from accessing and/or recognizing accurate information. Thus,
a better strategy is to give people incentives to disseminate information that is accurate and
complete (see English 1999: 401–19). Toward this end, work in philosophy can potentially
help us to devise policies that will deter people from disseminating misinformation and
disinformation. For instance, a few philosophers (e.g., Sober 1994: 71–92; Skyrms 2010:
73–82) have developed formal models of the creation and spread of disinformation.
Finally, as noted at the outset, when people are misled by inaccurate and misleading
information, it can sometimes have dire practical (as well as epistemic) consequences. So, it can
be important to understand the moral wrongness involved in the dissemination of the various
forms of misinformation. For instance, even though they do not intend to mislead anyone, the
authors of websites that advise parents to administer aspirin to children with fever (or even just
fail to warn them not to) are probably guilty of negligence. But of course, information that is
intentionally misleading is particularly problematic from a moral perspective.
Although disinformation is not exactly the same as a lie, much of what philosophers (e.g.,
Bok 1978; Carson 2010) have written about the ethics of lying almost certainly applies to
the ethics of disinformation as well. It is prima facie wrong to disseminate disinformation (cf.
Bok 1978: 50). But pace Kant (1959), there are some circumstances in which it is morally
permissible. For instance, the Allies were presumably justified in using disinformation to
prevent Nazi world domination. Of course, we have to keep in mind that people who spread
disinformation may easily overestimate the likelihood that they are in such circumstances
(cf. Bok 1978: 26).

Notes
1 Misinformation and disinformation are sometimes treated as mutually exclusive categories.
For instance, Peter Hernon (1995: 134) says that “inaccurate information might result from
either a deliberate attempt to deceive or mislead (disinformation), or an honest mistake
(misinformation).” However, it is more common to treat disinformation as a subcategory of
misinformation as this chapter does (see Skyrms 2010: 80; Floridi 2011: 260).
2 Not all analyses of information assume that it is something that is true or false (see Floridi 2010:
44–45).
3 This way of describing things is suggestive of a correspondence theory of truth. But just as many
different analyses of information have been proposed, many different theories of truth have
been proposed (see Floridi 2011: 184). Nothing in this chapter requires that we settle on a
specific theory.
4 They overestimate the distance even further if we have in mind the straight-line distance
through the planet rather than the distance along the surface.
5 While articles in The Onion are not intended to mislead anyone, it should be noted that some
jokes (e.g., April Fools’ Day jokes) are intended to mislead, at least for a short time. Such jokes
arguably are misinformation.
6 Of course, such websites would be false information if they explicitly stated that “this website
includes all of the important stuff about treating fever that you should know.”


7 Although Floridi characterizes misinformation as “semantic content that is false,” his analysis
of misinformation is probably closer to Hernon’s than to Fox’s. Floridi (2010: 50) suggests that
“we speak of veridical rather [than] true data,” and, like accuracy, veridicality comes in degrees.
8 Of course, Hernon might stand his ground here and insist that Athanasius’s statement is not
misinformation because it is not inaccurate.
9 It might even be suggested that simplified information is not inaccurate at all. It just has a low
level of precision (see Hughes and Hase 2010: 3) or it is at a high level of abstraction (see Floridi
2011: 46–79).
10 It is probably not necessary to give a precise numerical value. In a similar vein, epistemologists
rarely specify the exact degree of justification that is required for a belief to count as knowledge.
Most simply say that “very strong justification” is required (see Feldman 2003: 21).
11 Even with the reasonable person standard, it would still be possible (at least in principle) to
change the ontological status of a piece of information. But it would be much more difficult to
make the necessary alteration to an entire population than to make it to a single individual.
12 The famous editorial published in the New York Sun in 1897, which said, “Yes, Virginia, there
is a Santa Claus,” was not likely to mislead a reasonable adult. But it might have been likely to
mislead a reasonable child under eight years of age. So, even if we adopt a “reasonable person”
standard, we still might want to say that whether or not something counts as misinformation is
relative to the group.
13 Not everything that is created in order to mislead someone counts as disinformation. For
instance, in addition to sending out fake radio transmissions and creating fraudulent military
reports, the Allies built tanks and airplanes out of rubber and canvas to give the false impression
that a huge force was preparing to attack Calais (see Farquhar 2005: 73). These fake tanks and
airplanes are not examples of disinformation because they are simply material objects rather
than something that has semantic content.
14 Like lying, disinformation is not a “success” term. It may not actually mislead the person who
receives it. But it would be strange to count something as disinformation if it does not at least
have the propensity to mislead.
15 The two categories of disinformation that Fetzer’s analysis excludes can sometimes overlap. For
instance, a television commercial that pitted Black Flag Roach Killer against another leading brand
misled viewers about the effectiveness of Black Flag without saying or showing anything that
was literally false. According to Carson (2010: 187), “the demonstration used roaches that had
been bred to be resistant to the type of poison used by the competitor.”
16 Floridi had previously proposed two other analyses of disinformation. Floridi (1996: 509)
originally claimed that “disinformation arises whenever the process of information is defective.”
However, this would incorrectly count honest mistakes, such as the headline in The Chicago
Tribune, as disinformation. Floridi (2010: 50) later claimed that “if the source of misinformation
is aware of its nature … one speaks of disinformation.” However, this would incorrectly count
jokes, such as the headline in The Onion, as disinformation. The editors of The Onion were well
aware that the story about Al Gore was inaccurate.
17 Even though they are not intended to mislead, such websites arguably are lies. Thus, they count
as disinformation on Fetzer’s analysis.

18 Floridi (2012: 306–07) claims that it is not a problem that his analysis excludes true disinformation
and side effect disinformation. He wants to suggest that these are “peculiar cases” of only
philosophical interest. However, true disinformation and side effect disinformation are fairly
common, and potentially dangerous, real-world phenomena.
19 Birds and squirrels probably just learn to associate (a) making a certain vocalization and (b)
conspecifics running away and leaving food behind. Other species, such as fireflies, have evolved
to send deceptive signals.
20 There is also a mechanism that reinforces the dissemination of the false claims made in The
Onion. However, in this case, more readers are attracted to the website just because they find
these claims amusing.
21 Skyrms (2010: 76) himself notes that his analysis might be modified in this way. He just failed
to see that this sort of modification was actually necessary.


Related topics
Semantic Information, The Decisional Value of Information, The Epistemic Value of
Information.

References
Bok, S. (1978) Lying, New York: Random House.
Carson, T. L. (2010) Lying and Deception, Oxford: Oxford University Press.
Chen, M. and L. Floridi. (2013) “An Analysis of Information Visualisation,” Synthese 190: 3421–3438.
Dretske, F. I. (1981) Knowledge and the Flow of Information, Cambridge, MA: MIT Press.
Dretske, F. I. (1983) “Précis of Knowledge and the Flow of Information,” Behavioral and Brain Sciences 6:55–
90.
Edelman, M. (2001) The Politics of Misinformation, Cambridge: Cambridge University Press.
English, L. P. (1999) Improving Data Warehouse and Business Information Quality, New York: Wiley.
Fallis, D. (2004) “On Verifying the Accuracy of Information: Philosophical Perspectives,” Library Trends
52:463–87.
Fallis, D. (2009) “A Conceptual Analysis of Disinformation,” iConference Proceedings, http://hdl.handle.
net/2142/15205
Fallis, D. (2014) “A Functional Analysis of Disinformation,” iConference Proceedings, http://hdl.handle.
net/2142/47258
Fallon, K. (2012) “Fooled by ‘The Onion’: 9 Most Embarrassing Fails,” Daily Beast, http://www.
thedailybeast.com/articles/2012/09/29/fooled-by-the-onion-8-most-embarrassing-fails.html
Farid, H. (2009) “Digital Doctoring: Can We Trust Photographs?” in Deception, B. Harrington (ed.),
Stanford, CA: Stanford University Press, pp. 95–108.
Farquhar, M. (2005) A Treasury of Deception, New York: Penguin.
Feldman, R. (2003) Epistemology, Upper Saddle River, NJ: Prentice Hall.
Fetzer, J. H. (2004a) “Information: Does It Have to Be True?” Minds and Machines 14:223–29.
Fetzer, J. H. (2004b) “Disinformation: The Use of False Information,” Minds and Machines 14:231–40.
Fitzpatrick, S. (2009) “The Primate Mindreading Controversy: A Case Study in Simplicity and
Methodology in Animal Psychology,” in The Philosophy of Animal Minds, R. Lurz (ed.), Cambridge:
Cambridge University Press, pp. 258–77.
Floridi, L. (1996) “Brave.Net.World: The Internet as a Disinformation Superhighway?” Electronic
Library 14:509–14.
Floridi, L. (2010) Information – A Very Short Introduction, Oxford: Oxford University Press.
Floridi, L. (2011) The Philosophy of Information, Oxford: Oxford University Press.
Floridi, L. (2012) “Steps Forward in the Philosophy of Information,” Etica & Politica 14:304–10.
Fox, C. J. (1983) Information and Misinformation, Westport, CT: Greenwood Press.
Gettier, E. L. (1963) “Is Justified True Belief Knowledge?,” Analysis 23:121–23.
Hernon, P. (1995) “Disinformation and Misinformation through the Internet: Findings of an
Exploratory Study,” Government Information Quarterly 12:133–39.
Hughes, I. and Hase, T. (2010) Measurements and their Uncertainties, Oxford: Oxford University Press.
Hume, D. (1977) An Enquiry Concerning Human Understanding, Indianapolis, IN: Hackett.
Impicciatore, P., Pandolfini, C., Casella, N. and Bonati, M. (1997) “Reliability of Health Information
for the Public on the World Wide Web: Systematic Survey of Advice on Managing Fever in Children
at Home,” British Medical Journal 314:1875–79.
Kant, I. (1959) Foundations of the Metaphysics of Morals, New York: Macmillan.
Kingsbury, J. and J. McKeown-Green. (2009) “Definitions: Does Disjunction Mean Dysfunction?”
Journal of Philosophy 106:568–85.
Krohs, U. and P. Kroes (eds.). (2009) Functions in Biological and Artificial Worlds, Cambridge: MIT Press.
Lynch, C. A. (2001) “When Documents Deceive: Trust and Provenance as New Factors for Information
Retrieval in a Tangled Web,” Journal of the American Society for Information Science and Technology 52:12–
17.
Mahon, J. E. (2008) “The Definition of Lying and Deception,” Stanford Encyclopedia of Philosophy, http://
plato.stanford.edu/entries/lying-definition/
Manson, N. C. (2012) “Making Sense of Spin,” Journal of Applied Philosophy 29:200–13.


Mathiesen, K. (2014) “Facets of Access: A Conceptual and Standard Threats Analysis,” iConference
Proceedings, http://hdl.handle.net/2142/47410
Mill, J. S. (1978) On Liberty, Indianapolis, IN: Hackett.
Monmonier, M. (1991) How to Lie With Maps, Chicago, IL: University of Chicago Press.
Newman, M. L., J. W. Pennebaker, D. S. Berry, and J. M. Richards. (2003) “Lying Words: Predicting
Deception from Linguistic Styles,” Personality and Social Psychology Bulletin 29:665–75.
Oddie, G. (2014) “Truthlikeness,” Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/entries/
truthlikeness/
Piper, P. S. (2002) “Web Hoaxes, Counterfeit Sites, and Other Spurious Information on the Internet,”
in Web of Deception, A. P. Mintz (ed.), Medford, NJ: Information Today, pp. 1–22.
Poland, G. A. and Jacobson, R. M. (2011) “The Age-Old Struggle against the Antivaccinationists,” New
England Journal of Medicine 364:97–99.
Reddy, S. (2013) “‘I Don’t Smoke, Doc,’ and Other Patient Lies,” Wall Street Journal, http://online.wsj.
com/article/SB10001424127887323478004578306510461212692.html
Scarantino, A. and Piccinini, G. (2010) “Information without Truth,” Metaphilosophy 41:313–30.
Skyrms, B. (2010) Signals, New York: Oxford University Press.
Sober, E. (1994) From a Biological Point of View, Cambridge: Cambridge University Press.
Vermaas, P. E. and Houkes, W. (2003) “Ascribing Functions to Technical Artefacts: A Challenge to
Etiological Accounts of Functions,” British Journal for the Philosophy of Science 54:261–89.
Vrij, A. (2008) Detecting Lies and Deceit, Chichester: John Wiley & Sons, Ltd.