
Logical Positivism and Scientific Theories

Pablo Acuna

The following paper presents an examination of the conception of scientific theories originally supported by LP, and of its further development. The course of that development is quite interesting, because it shows how important concepts introduced by LP authors had to be modified and weakened as difficulties arising from the actual application of the model came out. In the second half of the paper I also offer an overview of several criticisms made of the LP view of theories, which accused it of being inadequate or too simplistic. Those criticisms are interesting not only because they show the weak points of the model and why it was generally abandoned or rejected, but also because they point out the general lines of subsequent ways of thinking in the Philosophy of Science, and show how those ways are still connected to some of the core ideas of LP.1

The Original Formulation of the LP Conception of Theories

The first main feature of the LP view of scientific theories is its thorough empiricism. Theories are the expression and systematization of phenomenal regularities. Such regularities are expressed via theoretical terms, which were originally conceived by LP as conventions that “summarize” phenomenal information. Consequently, theoretical terms were supposed to be definable by means of observational terms, which refer directly to empirical phenomena. Theoretical terms were a sort of abbreviation of phenomena.
In this semantic description of the concepts in a theory, LP’s criterion of meaning is easily recognizable.
The second feature of LP’s view lies in the formal structure they assigned to a theory. Theories were to be axiomatized in first order logic with identity. The terms in this formal expression were divided into three classes: logical, theoretical and observational. The axioms of the system are the main laws of the theory, and they specify the interrelations among the theoretical terms. As was already said, the latter are abbreviations of phenomenal descriptions, and they were to be defined in terms of observational terms. Such definitions were originally conceived as explicit definitions of the form Tx≡Ox, where T is a theoretical term and O an observational one. These definitions were called correspondence rules, because they coordinate theoretical terms with their defining observational concepts; and the only interpretation assigned to the former was given by such rules.
This general description of the original proposal of LP regarding scientific theories shows their strong empiricist spirit and inheritance: all cognitively meaningful discourse could be expressed in terms of observational talk, so that the introduction of anything metaphysical was blocked as meaningless from the outset. On
the other hand, and what distinguishes LP from classical empiricism, the formal structure they assigned to
a theory shows the impact of the development of formal logic by Frege, Russell and Whitehead, and of the axiomatization of arithmetic carried out by Hilbert. Such works led LP to think that the precision and clarity achieved in the formal sciences could be achieved in the natural sciences as well.

1 In the following exposition I follow articles by Hanson, Hesse, Shapere and Mormann, and especially the article by Suppe. Full references are given in the Bibliography at the end of the paper.
In a more precise and formal way, the original conception of scientific theories proposed by LP was
thus stated:
1. The theory is formulated in a first order logic with identity, L.
2. The vocabulary of L is divided into three classes: logical, theoretical (VT) and observational (VO).
3. Terms in VO are interpreted as referring to directly observable phenomenal objects or to directly observable properties of phenomenal objects.
4. There is a set of theoretical postulates T whose only nonlogical terms belong to VT.
5. Terms in VT are given an explicit definition in terms of VO through correspondence rules C of the form (x)(Tx≡Ox), where T belongs to VT and O to VO.

The set T is the set of axioms of the theory, and C is the set of correspondence rules which stipulate the allowable applications of the theory to empirical phenomena. The theory is conceived as the conjunction of T and C, TC. It follows from this characterization that all theoretical terms are cognitively meaningful, insofar as they satisfy the verification criterion of meaning.
When this original model was applied to actual scientific discourse, several modifications had to be made. Even though such changes were the outcome of difficulties with the original formulation of the model, those difficulties were not conceived as essential flaws in its general spirit and motivation (solid empirical foundations and clear formal expression). However, they led to interesting and deep modifications of some core principles of LP. In the following sections I will address the nature and motivations of these changes.

Correspondence Rules

The main difficulty concerning the sentences in C came from the requirement that they be formulated as explicit definitions. This requirement could not be met by a very important and common type of scientific concept: dispositional terms. These terms refer to a property of an object which is not directly observable, and which is a sort of tendency of the object. For example, in order to give an explicit definition of the term fragile via observational ones, one would stipulate something like (x) [Fx ≡ (Sx→Bx)], where F stands for is fragile, S for is struck, and B for breaks. But given that any material conditional with a false antecedent is true, it follows that any object whatsoever which is not struck is fragile! Explicit definitions do not work for dispositional terms.
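The problem can be checked mechanically. The following sketch (my own illustration, not part of the original text; function names are invented) evaluates the proposed definition with the material conditional and shows that an unstruck, unbreakable object comes out fragile:

```python
def implies(p, q):
    """Material conditional: false only when p is true and q is false."""
    return (not p) or q

def fragile_by_explicit_definition(struck, breaks):
    # The proposed explicit definition: x is fragile iff (x is struck -> x breaks)
    return implies(struck, breaks)

# A steel ball that is never struck (and would not break if struck):
print(fragile_by_explicit_definition(struck=False, breaks=False))  # True
```

Because the antecedent is struck is false, the conditional is vacuously true, so the definition classifies the object as fragile.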
An attempted way out of this difficulty was to “relax” the requirement that the sentences in C be explicit definitions, that is, expressions of necessary and sufficient conditions for the applicability of the theoretical terms they define. Instead, reduction sentences were to be used. These sentences only
establish sufficient conditions for the theoretical term to be connected to empirical data. For instance, fragile would be specified by (x) [Sx→ (Bx ≡ Fx)]. In this case, if a is a nonfragile object which is never struck, it does not follow that Fa is true; but Sa→(Ba ≡ Fa) is still true. This sentence does not completely define what it is to be fragile; it only states a sufficient condition for an object to be fragile.
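The contrast with the explicit definition can be made concrete in the same toy setting (again an illustrative sketch, not from the original text): the reduction sentence is satisfied for an unstruck object no matter how we classify it, which is exactly the sense in which F receives only a partial interpretation.

```python
def implies(p, q):
    """Material conditional: false only when p is true and q is false."""
    return (not p) or q

def reduction_sentence(struck, breaks, fragile):
    # Carnap-style reduction sentence: Sx -> (Bx ≡ Fx)
    return implies(struck, breaks == fragile)

# An unstruck object satisfies the sentence whether or not we call it fragile:
print(reduction_sentence(False, False, True))   # True
print(reduction_sentence(False, False, False))  # True
# A struck object pins F down: it is fragile exactly if it breaks.
print(reduction_sentence(True, True, False))    # False
```

Only when the test condition S is realized does the sentence constrain F.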
Since a certain property can obtain under several different conditions, reduction sentences are not able to provide a definition of theoretical terms. A set of reduction sentences would count as an explicit definition only if the disjunction of all the sufficient conditions they state is a logical truth, or else a logical consequence of the axioms of the theory. Since these conditions are rarely, if ever, met, collections of reduction sentences are not typically enough to define a theoretical term. Consequently, condition number 5 of the LP view of theories had to be replaced by 5’: terms in VT are given a partial interpretation in terms of VO by means of reduction sentences. It follows from this modification that the criterion of meaning for theoretical terms has to be altered as well. Now every theoretical term with cognitive meaning must be capable of being introduced through a reduction sentence or through a chain of reduction sentences (a chain of reduction sentences is a sequence of them such that every nonlogical term in a given reduction sentence is either in VO, or occurs in a previous reduction sentence in the sequence).2
However, this new formulation of the nature of C was not completely successful. Not all theoretical terms can be introduced by reduction sentences. Metrical concepts, such as length, do not fit the model for the simple reason that their values range over the continuum of the real numbers; and to express an observational condition for a certain body to have a length of an irrational value, for example, would be quite problematic, to say the least. Moreover, very important theoretical concepts such as force, Carnot process, or the ψ function are introduced not by a specific correspondence rule:

Terms of this kind are not introduced by definition or reduction chains based on observables; in fact they are
not introduced by any piecemeal process of assigning meaning to them individually. Rather, the constructs
used in a theory are introduced jointly, as it were, by setting up a theoretical system formulated in terms of
them and by giving this system an experimental interpretation, which in turn confers empirical meaning on
the theoretical constructs.3

It is not a reasonable option, then, to require that theoretical terms be defined one by one by correspondence rules. It seems to be the case that many of them are introduced as part of a whole block. Consequently, it becomes unreasonable to require that the criterion of empirical meaning be conceived in terms of individual correspondence rules. Therefore, condition 5’ had to be modified once again and replaced by 5’’: correspondence rules C constitute an interpretative system for T. But in this case, how can such a system guarantee the empirical significance of each and every theoretical term? None of the attempts to specify how C could do that was successful. Of the three functions assigned to C in the original conception of a theory, namely, defining theoretical terms, assuring their cognitive meaning, and specifying the procedures for applying the theory to phenomena, only the last remained after the modifications made upon 5.

2 Carnap tackles the issue of reduction sentences in his article “Testability and Meaning”, in Philosophy of Science
3, 420-68, and 4, 1-40.
3 Hempel, C. G. Fundamentals of Concept Formation in Empirical Science. Quoted in Suppe, 23-4.

The Ontological Status of Theoretical Terms

Another issue where further reflection showed that things were not as easy and straightforward as the original conception of theories indicated is the ontological status of the referents of theoretical terms. Since C contains the specifications for applying the theory to empirical phenomena, it was possible to logically derive from TC a class of observational statements: predictions. And taking for granted the view that explanations and predictions have the same logical form, if an observation statement referring to an already known phenomenon could be derived from TC, then TC could also provide an explanation of the phenomenon at issue. In general terms, TC provides, as a logical conclusion derived from it, a class of observational statements of the form [(O1 & O2 & … & On) → O’],4 where the antecedent contains the initial conditions, and the consequent refers to the predicted fact or to the phenomenon to be explained. Call TO this class of observational statements derived from TC.
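The derivation mentioned in footnote 4 is just the deduction theorem. In a toy propositional setting (the theory and atom names below are invented for illustration, not taken from the paper) it can be verified by brute force over valuations:

```python
from itertools import product

# Toy theory TC over a theoretical atom T and observational atoms O1, O2:
# TC says: T, and T -> (O1 -> O2).
atoms = ["T", "O1", "O2"]

def tc_holds(v):
    return v["T"] and ((not v["T"]) or ((not v["O1"]) or v["O2"]))

def entails(premise, conclusion):
    """premise |= conclusion, checked over all valuations of the atoms."""
    return all(
        conclusion(v)
        for vals in product([False, True], repeat=len(atoms))
        for v in [dict(zip(atoms, vals))]
        if premise(v)
    )

# TC & O1 |= O2  is equivalent to  TC |= (O1 -> O2):
lhs = entails(lambda v: tc_holds(v) and v["O1"], lambda v: v["O2"])
rhs = entails(tc_holds, lambda v: (not v["O1"]) or v["O2"])
print(lhs, rhs)  # True True
```

So whenever TC together with initial conditions entails a prediction, TC alone entails the corresponding conditional, which is the form of the sentences in TO.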
Consider now the truth value of TC. Is the truth of TO a necessary and sufficient condition for TC to be true? The answer depends on the ontological status one assigns to the referents of the terms in VT. If one considers them as referring to actually existing but nonobservable objects or properties, then the truth of TO is a necessary condition for TC to be true. But it is not a sufficient condition, since the laws in T would have to be true as well, i.e., they must express empirically adequate universal regularities of phenomena; and on the other hand C must contain true statements about how the behavior of the nonobservable existing entities connects to observable data. This is the realist interpretation of scientific theories. Given the problem of induction, it is quite difficult to see how the truth of T could be supported. It is interesting to notice that the whole tradition of inference to the best explanation and no-miracle arguments is motivated by these considerations.5
However, it is a logical possibility that all the statements in TO be true while TC is false. This possibility opens the door to an alternative interpretation. Terms in VT could be considered not as referring terms; that is, they do not denote any entities or properties. They only work as predictive devices. On this view, TC is nothing but a set of rules for making observable predictions, just a device to logically specify TO, so that TC has no truth value. Rather, it is adequate or inadequate insofar as the TO statements it specifies are empirically true or false. This alternative is the instrumentalist interpretation of theories.
From a thorough empiricist point of view, this latter alternative looks more compelling. It does not
commit itself to the problematic truth of T or to the task of specifying how some unobservable entities are
linked to observable phenomena –neither issue can be settled on empirical grounds–.

4 Since the conjunction of TC and a set of observational sentences Oi expressing initial conditions entails an observational sentence O' expressing the predicted fact, it follows that TC logically entails sentences of the form (Oi→O').
5 Inference to the best explanation arguments state that the fact that a certain scientific explanation of a phenomenon provided by a specific theory works better than all of its rivals, and so defeats them, is an indication of the (approximate) truth of that theory. No-miracle arguments assert that the only acceptable explanation of the success of good scientific theories is that they are (approximately) true; otherwise their success would be a miracle or the outcome of mere chance.

However, the instrumentalist position must face a deep problem. Taking for granted that terms in VT can be translated via
observational concepts, why not specify TO directly from VO? Why are theoretical terms necessary? If their only function is to serve as devices for logically deriving observational sentences, they could be left out from the outset and replaced by their observational translations. Hempel stated this problem as a dilemma: theoretical terms either serve their purpose or they do not; if they do not, they are unnecessary; but if they do, they are unnecessary anyway (as was just pointed out). If in either case they are unnecessary, why does the actual practice of science still use them? Or, more to the point, why are they in science at all?
This dilemma depends, of course, on the real possibility of dispensing with VT in order to specify TO. Two main attempts were made to show this. First, Craig’s theorem says that if the nonlogical vocabulary of a logical system S is split into two classes A and B, and if T’ is the class of theorems of S all of whose nonlogical terms belong to A, then there exists a logical system S’ whose only nonlogical terms belong to A and whose theorems are exactly the sentences in T’. Taking S to be TC, A to be VO, B to be VT, and T’ to be TO, Craig’s theorem says that there is a system S’ whose theorems are just the sentences in TO, but whose nonlogical terms all belong to VO. That is, S’ is an empirically equivalent theory with respect to TC, but it does not contain any VT term.
However, S’ is not finitely axiomatizable. It will have an infinite number of axioms which cannot be specified from a finite number of axiom schemata. Actually, every sentence in TO will be an axiom of S’ or logically equivalent to an axiom of S’; so the axioms of S’ would be a set of sentences in TO, and the only nonaxiom sentences will be logically equivalent to axioms in TO. That is, S’ is no more than a list of the sentences in TO, and a list which cannot be written down completely. What is worse, S’ can be specified only by showing how to build it from TC. Craig’s theorem, even though applicable, cannot show that VT can be dispensed with in deriving TO.
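The intended content of TO can be pictured in a toy propositional setting (an illustrative sketch of mine, not a rendering of Craig's construction, which applies to first order systems): the observational subtheory is just the set of consequences of TC statable in O-vocabulary alone.

```python
from itertools import product

# Toy theory TC over a theoretical atom T and observational atoms O1, O2:
# TC says: T, and T -> (O1 -> O2).
atoms = ["T", "O1", "O2"]

def tc(v):
    return v["T"] and ((not v["T"]) or ((not v["O1"]) or v["O2"]))

valuations = [dict(zip(atoms, vals)) for vals in product([False, True], repeat=len(atoms))]

def entailed(f):
    """Is the O-vocabulary sentence f(O1, O2) a consequence of TC?"""
    return all(f(v["O1"], v["O2"]) for v in valuations if tc(v))

candidates = {
    "O1 -> O2": lambda o1, o2: (not o1) or o2,
    "O1 & O2": lambda o1, o2: o1 and o2,
    "O2 -> O1": lambda o1, o2: (not o2) or o1,
}
print({name: entailed(f) for name, f in candidates.items()})
# {'O1 -> O2': True, 'O1 & O2': False, 'O2 -> O1': False}
```

In this finite setting the O-only consequences are easy to enumerate; Craig's theorem guarantees that an O-only axiomatization exists in the general case, but, as the text notes, not a finite or illuminating one.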
The second attempt consisted in using the Ramsey sentence. Let α1, …, αn be the theoretical terms contained in the axioms ψ1, …, ψn of T. Then TC can be presented as a system with only one axiom, ψ1 & … & ψn. Now take ψ1’, …, ψn’ to be formulas just like ψ1, …, ψn but in which each term αi has been replaced by a predicate variable βi of second order logic –in which quantification over predicates and relations is allowed–. Then the Ramsey sentence for TC has the form (∃β1) … (∃βn) (ψ1’ & … & ψn’), and let TCR be the theory obtained by replacing ψ1 & … & ψn with the mentioned Ramsey sentence for TC. It then follows that all the sentences in TO will be theorems both of TC and of TCR, but in the latter there will be no theorem containing terms from VT, for they have been replaced by existentially quantified predicate variables.
But this move does not do the trick either. TCR avoids theoretical terms, but only by not explicitly mentioning them. Moreover, just as in the case of Craig’s theorem, the Ramsey sentence can only be formulated relative to TC; so even accepting that it does avoid theoretical terms, it does not show that they are dispensable in building the theory from which TO is to be derived.
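On a finite domain, a Ramsey sentence can be checked by brute force, replacing the existential predicate quantifier by a search over all candidate extensions (a toy sketch with invented names, not from the paper). Take the theory with axioms (x)(Tx → Ox) and T(a); its Ramsey sentence is (∃β)[(x)(βx → Ox) & β(a)], which entails the observational sentence O(a).

```python
from itertools import chain, combinations

domain = [0, 1, 2]
a = 0  # the individual named in the axiom T(a)

def all_extensions(domain):
    """Every subset of the domain, as a candidate extension for β."""
    return chain.from_iterable(combinations(domain, r) for r in range(len(domain) + 1))

def ramsey_sentence_true(O):
    # (∃β)[(x)(x in β -> Ox) & a in β]
    return any(
        all(O[x] for x in beta) and a in beta
        for beta in all_extensions(domain)
    )

O_good = {0: True, 1: False, 2: True}   # O(a) holds
O_bad = {0: False, 1: True, 2: True}    # O(a) fails
print(ramsey_sentence_true(O_good))  # True
print(ramsey_sentence_true(O_bad))   # False
```

The Ramsey sentence holds exactly when the observational consequence O(a) holds, illustrating how TCR preserves the observational content while never mentioning the theoretical predicate T.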
One could say that these failures are just particular failures. Some other logical technique could indeed
show that theoretical terms are avoidable. However, it was also recognized that scientific theories must not only provide empirically adequate predictions and explanations. They must also provide an account of empirical regularities in a conceptually economical way, and in a way such that their formulation is heuristically fruitful, in the sense of being able to suggest relevant experiments to test them and of permitting further extensions to new phenomena. Therefore, the dilemma would have to show not only that theoretical terms can be avoided in deriving observational statements, but also that this can be done in a way that theories can still be formulated in a systematically and conceptually economical manner, and in a way that they retain their heuristic fruitfulness. However, the history of science seems to show that systematic economy and heuristic usefulness are acquired mainly through the use of theoretical concepts. Thus, if theoretical terms are not used only for deriving observational sentences, if they are necessary for the mentioned pragmatic virtues, then they are indeed necessary for the formulation of theories.6
To recognize the indispensable function of theoretical terms is quite uncomfortable for the instrumentalist. He would be accepting terms whose empirical meaning is not clear at all: they are more than mere logical instruments, and yet they do not refer to anything real. It would be natural to think that these difficulties lead to a realist position. But this interpretation must face its own problems. Besides the difficulty of formulating criteria of truth for T and C, the realist must explain the semantic content of theoretical terms. Realists assert that they do refer; but since they are not wholly semantically characterized by the correspondence rules, and since the only interpretation of such terms comes from C, what is the rest of their meaning and where does it come from? The typical answer to these questions was that the rest of the semantic content of the terms in VT is determined by a richer metalanguage. But the nature of that metalanguage, and how it determines the extra meaning of theoretical terms, was not satisfactorily explained.7

Counterfactuals and Extensional Logic

Requirement 1 in the original formulation of the LP view of scientific theories states that theories are to be axiomatized in first order logic. Given that the logic chosen is extensional, further difficulties arose. It was already pointed out that dispositional terms cannot be specified via explicit definitions. The main reason is that dispositional terms have not only an indicative factual content, but a subjunctive counterfactual one as well.
This problem becomes much more general when one notices that scientific laws also have a counterfactual content. They not only describe what is actually the case, but what would have been the case if, or what would be the case if. That is, they express regularities among phenomena by means of counterfactual conditionals. The counterfactual nature of scientific laws becomes a difficulty for the LP view insofar as the extensional conditional of first order logic does not correspond to the counterfactual
conditional. The extensional conditional is always true when its antecedent is false, so that any counterfactual conditional would be true from the extensional standpoint. For example, both (x) [(Sx & Fx)→Bx] and (x) [(Sx & Fx)→¬Bx], where S means is struck, F means is fragile, and B means breaks, would be true of an object made of glass which is never struck. But the counterfactual conditional states that the former is true whereas the latter is false.

6 This attempted dissolution of the dilemma, including the analysis of Craig's theorem, Ramsey's sentence, and the pragmatic function of theoretical terms, was originally presented by Hempel in his article “The Theoretician's Dilemma”, in Aspects of Scientific Explanation and Other Essays in the Philosophy of Science. Free Press. New York, 1965.
7 For example, in Carnap's Foundations of Logic and Mathematics. University of Chicago Press. Chicago, 1939.
However, just as in the case of reduction sentences and dispositional terms, it could be suggested that extensional logic can represent counterfactuals in a more indirect way. But the solution of specifying sufficient conditions can grasp only one part of the counterfactual content of dispositions and laws. For example, to present the term fragile through a reduction sentence (x) [Sx→(Bx ≡ Fx)] is more accurate from a counterfactual point of view; but it leaves a zone of indeterminateness for the term F: when the sufficient conditions are not realized, this sentence does not specify what it is to be F. Yet the meaning of a counterfactual conditional does not depend on whether or not the conditions in the antecedent are met.8
Nelson Goodman attempted to describe counterfactual conditionals in a more complex way. He first observed that such statements establish a connection between antecedent and consequent which is not a feature of pure logical consequence. Such a connection seems to depend on a background scientific law which determines a set of relevant conditions, implicitly contained in the antecedent, for the connection. For instance, the counterfactual conditional if that match had been scratched, it would have lighted is true by virtue of a natural law connecting the predicate in the consequent not only with the explicit antecedent, but also with some other elliptic conditions, such as the match is dry, there is oxygen present, and so on. Considering this, Goodman thought that counterfactual conditionals could be extensionally represented by a conditional which explicitly includes the relevant conditions in its antecedent (as a conjunction of all of them), together with some requirements on such a conjunction. A counterfactual could thus be presented as (A & S)→C, where A is the antecedent of the counterfactual, S states the relevant conditions, and C is the consequent of the counterfactual.
One of the requirements on the conjunction of A and S is that they be cotenable, meaning that it is not the case that S would not be true if A were true. But in order to know whether this requirement is met we should be able to know the truth value of a prior counterfactual: if A were true, then S would not be true. And in Goodman’s proposal that means being able to find out whether there is an S1 which is cotenable with A and leads to ¬S, and so on. That is, this option leads to an infinite regress.
Since counterfactuals connect a set of conditions with a consequent on the grounds of a natural law, maybe they can be characterized in terms of scientific laws. But this move does not work either. Laws have the form of universal generalizations, but accidental generalizations have the same logical form (all planets move in ellipses around the sun versus all the objects in my pocket are made out of silver, for example). The problem is to determine the feature which distinguishes the two classes of generalizations. Goodman stated that it had to be done in terms of the predictive capability of laws, which in its turn is
grounded on their capability to be inductively confirmed. But there is no satisfying account of what it is to be inductively confirmed.

8 This point is made by Roderick Chisholm in “The Contrary-to-Fact Conditional”, in Mind 55, 289-307.

Then, any attempt to specify counterfactuals must involve either the notion of
cotenability or of lawlike generalizations. But the former cannot be specified without entering into an infinite regress; and the latter requires developments in confirmation theory which are not entirely satisfactory.9
Beyond the specific correctness of Chisholm’s and Goodman’s arguments, the problem they point out led to the idea that counterfactual conditionals are essentially nonextensional, so that scientific theories were not suitable for axiomatization in extensional logic. The intensional connection established by counterfactual conditionals between their antecedents and consequents was typically causal; so it was thought that causal modal operators were needed to perform the logical axiomatization. The result was that requirement 1 was programmatically modified into 1’: theories are formulated in first order mathematical logic with identity, possibly augmented by modal operators. I say programmatically because fully developed modal logic systems were not available at the time.

The Precise Status of LP’s View of Scientific Theories

If one reads the five requirements of the LP view it is immediately apparent that actual theories are not formulated in an axiomatic way, nor are their correspondence rules explicitly established. Therefore, one could hardly say that such a view stands as a description of the nature of actual theories; so one should ask what its real nature and aim are.
The most appealing answer is that the LP model of theories intends to be an explication of them. That is, this view aims at a general model which “lies behind” all actual theories, even though they are not explicitly presented in that way, so that it can make clear their empirical foundations and their logical structure. The view of scientific theories presented by LP

begins by specifying a canonical formulation for theories in terms of an axiomatic calculus and rules of
correspondence. This canonical formulation is claimed to stand in the following relation to any scientific
theory: any given scientific theory could be reformulated in this canonical manner, and such a canonical
formulation would capture and preserve the conceptual and structural content of the theory in such a manner
as to reveal most clearly and illuminatingly the conceptual or structural nature of the theory.10

Adequacy and Criticism of the LP View

Now that the main features of the LP view of scientific theories have been described, and its general status as an epistemological model has been outlined, it is possible to ask whether such a view can really achieve its purpose. In this section I will offer an overview of the main criticisms which attempted to show that the LP model of theories is essentially inadequate or incorrect.

9 Goodman, N. “The Problem of Counterfactual Conditionals” in Fact, Fiction and Forecast. Harvard University Press. Cambridge, MA, 1955.
10 Suppe, 60.

a) Axiomatization and actual theories

Concerning the requirement of expression in a formal, axiomatized way: a small number of the concepts of the preaxiomatic theory must be selected as basic; axioms are introduced in order to make explicit the relationships among them; and definitions are stated in order to express the meaning of the remaining concepts of the theory. This presentation does not typically capture the whole content of the theory; but if the axiomatization is fruitful and adequate, it will be possible to logically derive the remaining content of the theory from the basic axioms. From this characterization of an adequate axiomatization it follows that the theory must contain a well-developed body of knowledge and, especially, that its systematic interconceptual relations must be well understood.
The first accusation of inadequacy says that, even though some important examples of axiomatization have been carried out in the field of physics, other paradigms of scientific theory, such as the theory of evolution, cannot be axiomatically presented. If that is the case, then this conception of theories does not hold as a universal structure. However, it is clear that LP has an easy way out here. It could be quickly replied that, in spite of their success in the realm of science, theories like biological evolution are not yet clear enough with regard to their conceptual apparatus, but that further analysis and clarification will finally show that they can be axiomatized.

b) Observational-Theoretical

A more serious general objection concerns the adequacy of the distinction between the observational and the theoretical. On the one hand, some philosophers have questioned the distinction between the analytic and the synthetic which lies at the bottom of it. A typical formulation of this second distinction, in the context of LP thought, states that analytic sentences are those which are true just in virtue of the meanings of the terms they involve, whereas synthetic ones are true or false depending on factual information about the world –and cognitively meaningful sentences are either analytically true or false, or synthetic–. It is important to notice that since TC partially determines the meaning of VT terms, some of the sentences in it are analytic, since they are true only in virtue of the meaning of the terms involved. But not all of those sentences can be analytic, since in that case the whole theory would be analytic. Therefore, TC contains a subclass of meaning postulates, analytic sentences which play a semantic role with regard to VT terms, and a subclass of factual sentences. Given adequate meaning postulates M in C, all sentences containing only nonlogical terms from VT, and all sentences other than those in M containing mixed terms, will have logical consequences which are subject to direct observation and which are therefore synthetic. The essential link between the two conceptual distinctions is that the observational-theoretical semantic structure of the theory assures that all the nonanalytic sentences in it –that is, all sentences except logical truths and the sentences in M– are synthetic. Therefore, if the analytic-synthetic distinction falls, so does the observational-theoretical one.
Maybe the most widely known criticism of the analytic-synthetic distinction is the one offered by Quine. He first aims to show that the notion of analyticity, considered as the property of statements being true only by virtue of the meaning of their terms, involves the notion of synonymy. Since none of the attempts he undertakes to clarify the latter concept works, he then argues that the option of characterizing analyticity by means of the verification principle connects the dogma of the distinction at issue with the dogma of the reduction of synthetic sentences to statements about immediate experience, and concludes that both dogmas are in the end one and the same. However, this argument does not show that the distinction is untenable; Quine has only shown that some attempts to clarify it do not work –in the case of the verification principle, he states that it does not work under the assumption of a holistic approach to the grounds of empirical knowledge–.11
Another criticism of the distinction has been proposed by Putnam. Characterizing analytic statements
as those which cannot turn out false unless some change in the meaning of their terms occurs, he
intends to show that scientific theories contain statements which are neither analytic nor synthetic. He states
that science uses law-cluster concepts: concepts whose identity is determined by a cluster of laws. They
form a cluster in the sense that, for an object to fall under such a concept, it need not satisfy all of the
properties which define it, but only a good number of them. A typical example of such a
scientific concept is energy. An important feature of these terms is that if one of the laws which constitute
them is abandoned or altered, it does not follow that their meaning has been altered. Hence principles
involving law-cluster concepts can be denied without changing the meaning of their terms; they can turn
out false even though the meanings have not changed, so that in Putnam's view they are not analytic. The
second part of the argument shows that, considering that principles like energy conservation are so
centrally employed as auxiliaries to make predictions, without themselves being experimentally threatened,
they are not synthetic either. If important statements of science are neither analytic nor
synthetic, then the analytic-synthetic distinction falls, and with it the observational-theoretical one12.
Another problem regarding the distinction at issue lies in the line which LP drew between what is
observable and what is theoretical. For example, for Carnap, a property P is observable if a subject is able,
under suitable circumstances and after few observations, to determine whether P(b) or ¬P(b). The main point is that
Carnap explicitly recognized that the few-observations requirement is quite vague, especially considering
that the line is drawn at an arbitrary point along a continuum of degrees of observability. Such vagueness is
even more apparent when Carnap states that the line distinguishing the theoretical and the observable changes its
place depending on whether we are engaged in scientific or in epistemological talk. However, such vagueness and
arbitrariness do not undermine, for Carnap, the usefulness and importance of the distinction with
respect to the empirical foundations of knowledge13.
An important challenge to the usefulness and importance of the distinction was introduced by
Achinstein. The criterion of the few observations does not work, even if we set aside its vagueness
and arbitrariness. For example, if being directly observable means that no instruments are involved, and
therefore not directly observable means that a certain property is observable only by means of
instruments, then it could be the case that charge should be included among observable properties, since
sometimes we can detect it without recourse to instruments. On the other hand, if we take not directly
observable as meaning properties whose observation generally requires instruments, then the name
of any complex chemical substance would be a theoretical concept. The requirement of few observations is a
problem as well: a skilled and well-trained physicist, after only a couple of observations in a cloud
chamber, can determine the presence of an α-particle emission.

11 Quine, W. V. O. “Two Dogmas of Empiricism”, in From a Logical Point of View. Harvard University Press.
Cambridge, 1953.
12 Putnam, H. “The Analytic and the Synthetic”, in Minnesota Studies in the Philosophy of Science, vol. 3. Edited
by H. Feigl and G. Maxwell. University of Minnesota Press. Minnesota, 1962.
13 This point is made in Carnap's Philosophical Foundations of Physics. Basic Books. New York, 1966.
Putnam offered another point of criticism. If observational terms can only be used to refer to observable
objects or properties, then there are no observational terms. Terms undoubtedly conceived as observational can
be used to refer to unobservable entities, as when Newton said that light is made out of red corpuscles. Suppose,
then, that observational terms are those which can sometimes be used to refer to observables and sometimes to
unobservables, whilst theoretical terms are those which can never refer to observables. But again we would have
a similar problem, since electric charge would be observational (just put your finger on an electric
terminal)14.
The LP could argue that these remarks only point out the vagueness and arbitrariness from another
point of view: even though the distinction does not correspond to what really happens in science, it
nevertheless illuminates important epistemological features. But we have already seen that if one of those
features is related to the distinction between analytic and synthetic statements in a theory, other problems
come up. The observational-theoretical distinction was also held to clarify the origin of knowledge:
science progresses from simple and directly observational concepts to abstract and theoretical ones. But
we have already mentioned that this genetic explanation does not work, either systematically or
historically. Therefore, introducing this artificial distinction brings more problems than solutions.

c) Correspondence rules and scientific practice

It was shown above that the main function that C plays in scientific theories is to determine the
conditions under which the theory is to be applied to phenomena. A problem with considering the theory
as the conjunction of T and C is that if new procedures and statements specifying new ways to apply
the theory are discovered or introduced, then a change will occur in C, so that the new class of statements
relating the theory to the empirical world will be C’. Instead of the same theory with a broader empirical
scope, what we have is a new theory TC’.
The way out for the LP, which is grounded on the fact that C is only a class of sentences offering
a partial interpretation of VT terms, not an explicit definition of them, consists in not considering the
correspondence rules as an essential part of the theory, but as a set of auxiliary hypotheses connecting it to
the facts. In this case, the theory gets reduced to T, a sort of formal calculus which becomes empirically
meaningful by means of the auxiliary hypotheses.
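The proposed shift can be sketched as follows (again a schematic reconstruction in the paper's notation):

```latex
% Original view: the theory is the conjunction TC; new correspondence
% rules yield a different class C' and hence a distinct theory TC':
\[
  TC = T \wedge C \quad\longrightarrow\quad TC' = T \wedge C',
  \qquad TC' \neq TC .
\]
% Modified view: the theory is T alone, and the correspondence rules
% are auxiliary hypotheses that may change without changing the theory:
\[
  \text{theory} = T, \qquad C\ \text{auxiliary and revisable.}
\]
```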

14 Achinstein, P. Concepts of Science. Johns Hopkins Press. Baltimore, 1968.

Even though this modification resolves the problem of having a whole new theory every time a
new correspondence rule is introduced, it leads to further difficulties. Is it really correct to say that what
determines a scientific theory is the formal and uninterpreted class of axioms T, and that C is only a sort of
empirical anchor? For example, Schaffner has argued that the VT terms in any theory have an independent
and antecedent theoretical meaning; that is, that they have an independent semantic content with respect to
C. This is not to deny that statements in C offer a partial interpretation of them; it only means that
when they are included in T it is misleading to characterize them as devoid of empirical meaning and as
playing just a logical role. For example, Lorentz’s concept of the electron already carries an antecedent
meaning, such as ‘tiny, electrically charged particle’, even before C15. Hanson makes the same point in
different words: is it really meaningful to speak of something like T as a scientific theory? The answer is
obviously negative; theoretical concepts and their empirical meaning constitute a unity from the
outset16. As can be seen, this difficulty related to the nature of correspondence rules is once again
connected with the problematic observational-theoretical distinction.
Another problem concerning C lies in the fact that, even if we consider the correspondence rules as
auxiliary hypotheses, the LP view is too simple to capture actual scientific practice when it comes to the
empirical use of a theory. For instance, Patrick Suppes pointed out that, instead of T plus a corresponding
set of auxiliary hypotheses, what is needed in order to apply a theory is a whole hierarchy of levels of theoretical
statements. At the top of the hierarchy lies the physical theory itself; but considering, for example, that
the formalisms it contains commonly deliver infinite or continuous functions, another level of auxiliary
hypotheses is needed to interpret them in a discrete and applicable way. Suppes calls this second level the
theory of the experiment, and it is commonly the case that this level borrows further
conceptual machinery from other theories. But at this second level things are still abstract. In order to
actually connect the theory with the empirical world a specific experiment must be designed, so that a new
level is introduced, the theory of experimental design, in which once again information from other
theories is borrowed (via technological knowledge, for instance). Finally, in order to understand the
results of the experiment in the light of the original physical theory, another level, the theory of data, is
introduced. As can be seen, Suppes has shown that the whole procedure of applying a theory is quite
complex and is not determined by only one set of ‘homogeneous’ statements: every level of the hierarchy
can borrow information from a different theoretical field17.
Summarizing, LP’s doctrine of correspondence rules is inadequate in three senses. First, it mistakenly
treats them as an essential part of theories. Secondly, if the correspondence rules are detached in order
to solve this problem, the remaining class of statements T can hardly be considered an
empirical theory. Finally, since C is conceived as determining the ways to connect the theory to empirical
phenomena, the description of this process is too simplistic to make sense of actual scientific
practice.
15 Schaffner, K. F. “Correspondence Rules”, in Philosophy of Science 36 (1969), 280-290.
16 Hanson, 73-82.
17 Suppes, P. “What is a Scientific Theory?”, in Philosophy of Science Today, edited by S. Morgenbesser. Basic
Books. New York, 1967.

d) Is formalization useful?

Formalization of theories, in the specific form of syntactic axiomatization, was conceived by LP
philosophers as a source of epistemological elucidation. In some situations it has indeed been so; an
example is von Neumann’s axiomatization of quantum theory, which showed that Schrödinger’s and
Heisenberg’s theories were equivalent. However, this does not mean that axiomatization is an
epistemological virtue per se. Actually, Hempel denied that it is. Axiomatization is a logical device in which the
relationships among concepts can be clearly displayed. But a wide variety of axiomatic presentations of the same
logical system are possible, so the fact that a certain axiomatization treats a certain concept as more
basic than another cannot be held as definitive. Hempel’s general point is that in order to
achieve the clarity that axiomatization can provide, one must already have carried out an adequate
philosophical analysis of the system to be axiomatized. Whatever philosophical elucidation is achieved via an
axiomatic presentation depends on that previous analysis18.
In spite of Hempel’s remarks, formal axiomatization can still be regarded as an epistemologically
important achievement, insofar as a high degree of clarity becomes available when it is done hand in hand
with philosophical analysis. However, the problems with the syntactic approach to formalization attempted
by LP have shown that a new way has to be found. All the problems of presenting theoretical laws
in extensional logic which LP had to face (the problems of dispositional terms and of counterfactual content
have been pointed out, and many problems in the field of confirmation could be added, the paradox of the
ravens, for instance) could be avoided if a more semantic approach is chosen. Actually, that is what later
conceptions of scientific theories have done: the structuralist program of Stegmüller and others has made
use of a wider conception of formalization in which model theory and set theory play an important role. That is,
formalization is generally held to be a source of philosophical insight, but the axiomatic-syntactic conception
of it which LP supported is too narrow to achieve the desired insight.

Concluding Remarks

Today, in the field of the Philosophy of Science, LP has been mostly abandoned as a plausible position.
However, it is quite obvious that the later development of this field has been, in a sense, determined by it.
Its thoroughgoing intention of establishing a firm empirical ground for scientific knowledge still remains. The
observational-theoretical distinction, which was originally formulated in order to illuminate such grounds,
led to too many problems and was finally rejected as inadequate. Nevertheless, the distinction lies at the
basis of the whole debate between realist and antirealist conceptions of science. On the other hand, the
historicist approach of Kuhn, Hanson and Feyerabend departed from a criticism of the distinction in order to
formulate the important notion of the theory-ladenness of observation, a concept that in a certain way
depends on the difference between pure observation and theoreticity in order to be meaningful. From a general
standpoint, it could be said that the failure of the distinction at issue, as formulated by LP, led to LP’s ruin in
the field of Philosophy of Science, even though the distinction still remains at the basis of the field’s further
development.

18 This point is made in Hempel's “On the Standard Conception of Scientific Theories”, in Minnesota Studies in the
Philosophy of Science vol. 4, edited by M. Radner and S. Winokur. University of Minnesota Press. Minnesota,
1970.
Concerning the quest for formal clarification, it has also been abandoned as originally conceived and
attempted by the Vienna Circle. Nevertheless, the development of modal logics and of the structuralist
approach has been directly motivated by the will to achieve a formal clarification which avoids the
problems that LP had to face. Stegmüller, Balzer and Sneed, for example, have developed a model of
formal presentation of theories which makes sense of the logico-mathematical structure of theories and, at
the same time, takes into account the Kuhnian notion of the theory-ladenness of every scientific concept, as
well as the diachronic dynamics of theories.
Beyond this evaluation in perspective, an examination of the course of development of the LP view of
scientific theories shows interesting features and changes in some of the core and more general principles
of this philosophical school. That is, an exposition of their view of scientific theories illuminates some
aspects of the 'internal' course of LP. A remarkable example lies in the verificationist criterion of
empirical meaning. The development of the doctrine of correspondence rules for theoretical terms finally led
to the view that scientific theories as wholes are connected to empirical data, whilst individual concepts
could be empirically meaningful only as parts of that whole, not by themselves. This is a radical retreat
from the original spirit of the formulation of the criterion.

Bibliography

Hanson, N. R. “Logical Positivism and the interpretation of Scientific Theories”, in Achinstein, P. and S.
Barker (eds.) The Legacy of Logical Positivism (57-86) The Johns Hopkins Press. Baltimore, 1969.

Hesse, M. “Positivism and the Logic of Scientific Theories”, in Achinstein and Barker (85-114).

Mormann, T. “The Structure of Scientific Theories in Logical Empiricism”, in Richardson, A. and Thomas
Uebel (eds.) The Cambridge Companion to Logical Empiricism. Cambridge University Press. Cambridge,
2007.

Shapere, D. “Toward a Post-Positivistic Interpretation of Science”, in Achinstein and Barker (116-162).

Suppe, F. “The Search for Philosophic Understanding of Scientific Theories”, in Suppe, F. (ed.) The
Structure of Scientific Theories (3-241). University of Illinois Press. Urbana, 1974.
