

Science and Engineering Ethics (2004) 10, 531-542

The Morality of Weapons Research


John Forge
School of Science, Griffith University, Queensland, Australia

Keywords: weapons research, engineer, tool, purpose, responsibility, professional, moral rules,
justifiable exception

ABSTRACT: I ask whether weapons research is ever justified. Weapons research is identified as the business of the engineer. It is argued that the engineer has responsibility for the uses to which the tools that he designs can be put, and that this responsibility extends to the use of weapons. It is maintained that there are no inherently defensive weapons, and hence that there is no such thing as ‘defensive’ weapons research. The issue then is what responsibilities the engineer, as a professional, has in regard to such research. An account is given that grounds the injunction not to provide the means to harm as a duty for the engineer. This account is not, however, absolutist, and as such it allows justifiable exceptions. The answer to my question is thus not that weapons research is never justified, but rather that there must be a strong assurance that the results will only be used as a just means in a just cause.

In March 2003, the Secretary of Defence of the United States, Donald Rumsfeld, called for more scientific research into ‘mini-nukes’.a He stressed that this did not amount to an announcement that the United States was going to build these weapons, merely that it was important to know if they could be built, hence the need for research. Mini-nukes are normally understood to be low-yield (0.1-5 kilotonne) fission weapons, such as

a. And this is now official US policy, as President Bush included provision for funding it in his November 2003 arms appropriation bill. Moreover, this appropriation overturned the Spratt-Furse amendment, which allowed research on small nuclear weapons but did not allow development that could lead to testing and deployment. There is a good discussion of this amendment, and much else of interest in regard to US policy on nuclear weapons, at the Union of Concerned Scientists site, www.ucsusa.org. Another useful source on such matters is the American Institute of Physics Science Policy section, at, for example, www.aip.org/enews.

Address for correspondence: John Forge, School of Science, Faculty of Science and Technology,
Griffith University, Nathan, Queensland, 4111 Australia; email: J.Forge@sct.gu.edu.au.

Paper received, 13 August 2003: revised, 21 May 2004: accepted, 8 June 2004.
1353-3452 © 2004 Opragen Publications, POB 54, Guildford GU1 2YF, UK. http://www.opragen.co.uk



might have use in bunker-busting bombs or on the battlefield.b,1 The people who would
undertake this research are scientists and engineers. In fact, since a great deal is known,
in the United States at least, about nuclear weapons, determining the possibility of
making one with, say, a yield of 0.5 kilotonnes, might well look like a problem for
design engineering. Given certain demands about size, weight and strength, is it
possible to make a nuclear device that will produce the desired yield? However, if we
trace the development of nuclear weapons research back to the Manhattan Project and
beyond, we see that it has exemplified the full spectrum of scientific research,
engineering design, testing and manufacture.c Nuclear weapons are but one category,
albeit the most terrifying, of weapons created by scientists and engineers. The discussion that follows is intended to cover all such categories, though the examples will mainly refer to nuclear weapons.
How should members of the scientific and engineering communities respond to
such requests, requests for them to do weapons research? Is it sometimes or always
morally wrong to undertake such research? Kenneth W. Kemp has even claimed, to the
contrary, that weapons research is a civic duty.d, 2 I will have something to say about
this position, though my main concern is in the opposite direction, with the moral limits
of weapons research. The presumption that weapons research (WR) is morally wrong
derives from the fact that it assists killing and harming, as weapons are used to kill.
Thus the idea that WR is always morally wrong is supported by moral systems which
hold that both killing and enabling killing are blameworthy under all conditions.
However, other systems of morality, including the most widely accepted, allow killing
in self-defence: that is, while killing is wrong in itself, it can be justified in certain
special circumstances. Unfortunately, and I say this as I should like to find moral
grounds for banning all WR, I do not think it is possible to appeal successfully to a
system of first-order morality, as they are called, to rule out WR entirely.e, 3 And even
if advocating a first-order morality like pacifism is not begging the question, it is far

b. While the feasibility of these uses of mini-nukes is not directly relevant to this paper, I note that
there appear to be good reasons to believe that the prospects of using nuclear bunker busters to
flush out terrorists and leaders of rogue states and other such missions, without creating
unmanageable amounts of fallout, look dim. See Robert W. Nelson “Low-Yield Earth-
Penetrating Nuclear Weapons” in Science and Global Security.1
c. Before the Manhattan Project, the discovery of the neutron and of nuclear fission can be classed
as discoveries of basic research, while the Frisch and Peierls memorandum was applied research.
Even during the Manhattan Project we can identify episodes of ‘strategic basic research’ such as
Teller’s work on the ‘super’. I will distinguish basic, applied, etc., research in terms of the
intentions of the researchers and the aims of the institutions in which they work, i.e. the context
of the research.
d. See his “Conducting Scientific Research for the Military as a Civic Duty” in Ethical Issues in Scientific Research.2
e. We recall that first-order morality is concerned with the appraisal of actions as right or wrong, as
opposed to second-order morality, which is concerned with persons who perform such actions
and with whether they (therefore) are of good character, live up to their responsibilities, etc.
These two ‘levels’ of morality are not of course independent, because the character,
responsibilities, etc. of agents are functions of their actions. For more see Jonathan Bennett, The Act Itself,3 pp. 46-49.




from clear that pacifism forbids WR, especially if it is done with the intention of
deterring killing.
The justification, if there is a justification, of WR is not however a straightforward
consequence of the justification of self-defence. This is because, unlike killing, WR
cannot be classified as defensive or aggressive. I assume that it is possible, at least in
some cases, to decide whether a person has killed in self-defence and likewise whether
a state has defended itself against aggression.f Presumably one looks at the facts of the
matter and comes to a considered opinion. But, as I shall argue in the second section of
what follows, one cannot distinguish ‘defensive’ WR with reference to the weapon or
system that is the object of the research in question. One cannot ‘look at the facts’ in
connection with a proposed weapon and decide on its intrinsic character as defensive or
otherwise. I will argue that, because one and the same weapon can play defensive or offensive roles in different military operations, there is no such thing as inherently defensive WR. If the only justification for doing WR is that its outcome will be used for
defensive purposes, then it may appear that scientists and engineers engaged in that
research must have good grounds to believe that it will only play such a role. And were
it never possible to have such grounds, then we would have found a way to argue
against all instances of WR.
To reach that position it will be necessary to argue that scientists and engineers do
indeed need to justify themselves if they engage in WR. And to do that we need to
respond to those who would concede that killing is wrong but deny that doing WR is
wrong. The objection here is that designing, making, and providing weapons is not wrong, and hence requires no justification, while using them is, or can be. While this might not seem convincing, I think it needs to be dealt with. My reply will appeal to
second-order morality, namely the responsibilities of the engineer qua engineer. I say
engineer here, and not scientist and engineer, because the account of WR that I am
going to give in the next section is primarily an account of weapons design, and that is
mainly the business of the engineer. But, as we shall see in a moment, my ‘engineer’ is
to be understood in a wider sense to include some ‘applied scientists’ as well. I shall
claim in the third section of the paper that engineers, like all professionals, have
responsibilities informed by the duty of non-maleficence. That is, no professional
should use his special position in relation to others to do or enable harm, and that is the
basis of my reply to the objection at issue.g I shall then ask how this duty is to be
discharged in regard to WR: should engineers be obliged to make sure that their work
will only be used for defence, or only that they do not have reason to believe that it will be used to some other end? Although it is still possible, within the limits of this argument,
to impose a ban on all instances of WR, I think this is not the conclusion we should
adopt.

f. I shall assume throughout that defensive wars are the only ones that are ‘just’. Just War Theory
does allow for a wider class of just causes for war, but I will not appeal here to that theory.
g. I will not appeal to any received account of the responsibilities of the engineer from the
literature, partly because this is a contested area, but mainly because I want to start from a broad
view of the responsibility of any professional and then work towards the responsibilities of the
engineer from that perspective.




Weapons Research, Scientists and Engineers


Roughly speaking (and I will not need a less rough characterisation), weapons research
is research conducted with the intention of designing or improving weapons systems
or designing or improving the means for carrying out activities associated with the use
of weapons, such as command, control and communications. So, not only does
research into better mini-nukes count as weapons research, so does research into better
ways to encrypt communications between headquarters and commanders in the field. In
general, WR is supposed to provide and improve the means for fighting wars, and
whatever other aims it might have depend on this primary aim. These activities can be
seen as the business of the state if war is understood in accordance with Clausewitz’s classic definition as “merely the continuation of policy by other means”.4
The best way, in my view, to distinguish between the types of activities that make
up the research, engineering and development spectrum is in terms of the intention of
the practitioners and the aims of those who control the context in which they work, an
approach in accord with our definition of weapons research. The activities themselves do not usually bear marks that allow us to make judgements about what is basic, what is applied, and so on. That is, the distinction must be drawn on contextual
grounds, and not with reference to ‘content’. For instance, Fermi’s scientific work on
reactor design was in fact WR, and hence applied science, but it could have been basic
research intended to test Hahn and Strassmann’s theory of nuclear fission, were it done in a different time and place – a different context – and conversely Hahn and Strassmann’s research would have been classed as applied had it been done with the intention of providing the Germans with a super weapon. I want to set aside basic
science here and ask whether there is a difference between the general nature of the
work that (applied) scientists and engineers undertake when they do WR such that we
need to consider the two separately when it comes to making a case against WR. This
is a preliminary issue but important nonetheless.
The work of the engineer is closer to the final outcome than that of the applied
scientist, and it is of a different kind, but I do not think that either marks a significant
moral difference here. Engineers at the Lawrence Livermore Laboratory would not try
to design mini-nukes unless certain applied scientific research had already been carried
out. The details of this work are not well-known, but they will resemble the pioneering
work done at Los Alamos in certain respects. Unless the critical mass of the fissile
material under consideration, say plutonium, is within certain limits for a given range
of geometries, limitations would be imposed on atomic (and hence all nuclear)
weapons that may make them worthless or unusable, or, in the case of mini-nukes, hard
to make. For instance, suppose that, for any geometry and configuration whatsoever of a piece of plutonium, the critical mass is more than 10 kg; then the only way to make a mini-nuke will be a design that has pre-detonation characteristics.h We can say

h. This would be true if the critical mass of a sphere of plutonium is more than 10kg. An implosion
device of that magnitude assembled in the usual way would produce a much greater yield than a
mini-nuke, as happened at Nagasaki. It would therefore be necessary to come up with a design
that would ‘disassemble’ more rapidly than normal.




that the investigation of the relationship between critical mass and geometry is applied
research while actual bomb design is engineering.
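
To make concrete the claim that critical mass depends on geometry, here is the standard one-group diffusion sketch from introductory reactor physics; it is textbook material, offered only as an illustration, not anything drawn from actual weapons design work. A bare assembly of fissile material is critical when its geometric buckling equals the material buckling of the medium:

$$B_g^2 \;=\; B_m^2 \;=\; \frac{\nu\Sigma_f - \Sigma_a}{D},$$

where $\nu$ is the mean number of neutrons per fission, $\Sigma_f$ and $\Sigma_a$ are the macroscopic fission and absorption cross-sections, and $D$ is the diffusion coefficient. For a bare sphere of radius $R$, $B_g^2 = (\pi/R)^2$, so the critical radius and critical mass are

$$R_c = \frac{\pi}{B_m}, \qquad M_c = \frac{4}{3}\pi R_c^3\,\rho.$$

Other shapes have larger geometric buckling for the same volume (a bare cube of side $a$ has $B_g^2 = 3(\pi/a)^2$), which is why the sphere minimises critical mass and why, as claimed above, critical mass is a function of geometry.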
The realisation of engineering design is an artefact or tool.i, 5 While artefacts can
be used for different purposes, they are usually designed with a given purpose in mind
and that, in more advanced settings, can be read off the design specifications. A mini-
nuke is designed to release 1 kilotonne of energy and that is why it is made. But that is
not the whole story. It is designed to release a kilotonne of energy when used on the
battlefield or in a bunker-busting mission. So when we speak of the purpose of an
artefact, it is not what it does defined in terms of purely physical quantities that is at
issue. A jet engine is designed to deliver a given thrust, but that is not the whole story
in regard to its purpose: it is supposed to be attached to an aeroplane and provide rapid
transit for people and goods. One might say that tools and artefacts are designed to do a
job (or jobs). Spier makes an important point about this relationship between engineers
and the tools they make. He notes that tools are made with an intention on the part of
the engineer, namely to do a job – what he calls the intentionality of tools – but then
points out that they can also be used in untoward and unintended ways (as a
consequence of their physical characteristics).j Another way to make the same point is
to say that artefacts, and the know-how from which they issue, are value-laden and that
their designers are aware of the values in question.
I can think of (at least) three uses for a mini-nuke. One is as a battlefield weapon to
kill large concentrations of troops; another is to deter aggression by an enemy with a
large standing army, and the third is as submarine ballast. I designate these the
primary, a derivative and a secondary purpose respectively. A weapon cannot deter if it
cannot ‘do its job’ (unless there is elaborate bluff involved, which I shall discount),
hence deterrence uses presuppose that the weapon would go off if fired at enemy
troops. But the latter purpose does not depend on the former; the bombs dropped on
Hiroshima and Nagasaki were never used for deterrence. The use as ballast, hardly a
serious suggestion, is a consequence of the physical characteristics that the weapon
happens to have in order to fulfil its primary purpose (being made of fissile material
and hence being very dense). I maintain that taking part in making an artefact commits
the engineer to the primary purpose in the sense that Spier has in mind when he talks
about the intention in the tool: that is, that designing a weapon commits the engineer to
taking responsibility for the realisation of its primary purpose. We should think of this
as a prima facie commitment: unless there are good reasons for him to believe that an
artefact is being commissioned for a derivative purpose, then he, among others, is
responsible for what ensues when the primary purpose is realised. I see this claim as
requiring no argument. If S deliberately makes A which does P, and if A is used to do
P, then S is (one of those) responsible for whatever effects P has. If space permitted,
we could discuss S’s commitment to derivative and secondary purposes.
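
The claim can be displayed as a schema (informal notation, offered only to make its structure explicit):

$$\big(\mathrm{Makes}(S, A) \,\wedge\, \mathrm{Purpose}(A, P) \,\wedge\, \mathrm{UsedFor}(A, P)\big) \;\rightarrow\; \mathrm{Responsible}\big(S, \mathrm{effects}(P)\big).$$

The antecedent requires both that S deliberately made A to do P and that A was actually used to do P; the consequent attaches responsibility to S for the effects of P, with S being one among possibly several who bear it.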

i. The word “tool” is in the title, and is the focus, of Spier’s Ethics, Tools and Engineers.5 I prefer
to use “artefact” but I think I use it in the same sense as Spier uses “tool”.
j. Spier, p. 42.5 I part company, I think, with Spier because he thinks that the engineer is not
responsible for these untoward uses whereas I think he can be, but this will not be relevant in
what follows.




I have assumed that it is not the task of the applied scientist to actually design
weapons but to set some limits on such designs. But if it is true that an engineer who
designs a mini-nuke is responsible for the use of that weapon because he deliberately
sat down in the Lawrence Livermore Laboratory and designed it, then the physicist
who works in the next office and who provided him with parameters for the design is
also responsible. The engineer is responsible for a use of this weapon, where the
scientist is responsible for a use of a weapon of this kind, where the kind in question is
the kind of weapon to which his applied research was directed. I don’t think that it is
difficult to make a case for attributing responsibility in these sorts of examples either.
And I don’t see that there is really much difference between the roles of the engineer
and the scientist in regard to the morality of weapons research. This is not to say that
there may not be other forms of applied research more remote from the artefact, such
that the job it does may be less clear in some relevant detail. There may be questions in
such cases about whether the applied scientist has the same responsibility as the
engineer. I will set such cases aside here and henceforth talk of “engineers” understood
in a wide (and slightly non-standard) sense to include all those who both make the
weapon and know about what it is supposed to do.

Defensive Weapons

It may appear that the classification of artefacts given above allows for purely
defensive weapons: a gun battery that is designed to shoot down aircraft looks like a
defensive weapon, given that its role will be to defend an asset that the aircraft is
attacking. We cannot, however, infer that it is a purely defensive weapon, one that only plays a defensive role and repels an aggressor. And that is because the battery
could serve to free up a squadron of fighter aircraft to escort attack aircraft that would
otherwise be unescorted and hence less effective, in the absence of the battery, because
the fighters had to defend the asset. It is therefore how a particular kind of weapon fits
into the overall scheme of operations, what the aims of the conflict are and how they
are to be achieved, that decides its role as defensive, and this in turn is a function of the
aims of the war. To reply that in fact that was not how things were planned in this
instance, that there were no attack aircraft in the arsenal, and hence that it could not
undertake aggressive action, is to move beyond the ‘inherent nature’ of the system
itself and take account of the other forces, equipment, operational plans, strategy,
politics, etc. The point here is that how a particular system or kind of system is
classified on a given occasion depends on just these relationships, and not any inherent
nature.
An episode of considerable historical importance, also about air defence, that took
place in the years following President Reagan’s infamous ‘Star Wars’ speech supports
this claim.6 In 1983 Reagan called upon the scientific community to exploit new
technologies, such as lasers, to provide the basis for a defensive shield for the United
States and its allies that would render nuclear weapons “impotent and obsolete”. The
scientific community subsequently took huge amounts of money for the Strategic
Defence Initiative, even though a perfect, namely leak-proof, area




defence of the continental United States seemed impossible from the very beginning.k, 7
Doubts have been expressed about Reagan’s grasp of the issues, though it is likely that
he sincerely believed star wars was simply about defence. But since the system was
never going to be perfect, the Soviets interpreted an imperfect missile defence shield as
part of the means for an offensive first strike capability: after a first strike at the
enemy’s offensive systems, the defensive shield would mop up (most of) the surviving
missiles launched in retaliation.l, 8 Notice that what the Soviets were interpreting here
were the future intentions and plans of the United States. It was clear to all that the SDI
was to be an attempt to set up a defensive shield to intercept as many incoming
warheads as possible – everyone understood what the job was supposed to be. But the
Soviets and the Americans stated diametrically opposed views on the strategic role it
would play.m, 9 It is also worth mentioning that in the aftermath of the Cold War there
remain large numbers of nuclear weapons that were originally produced for the purpose
of deterrence. The large ‘strategic’ systems look increasingly like dinosaurs, relics of a
forgotten age, but the smaller systems could be adapted for battlefield use, were the
occasion to arise.
What this tells us is that weapons (and some other artefacts) are flexible and the
jobs they can do and the purposes they enable can be fitted into grand schemes like war
in more than one way. And it is not that some of these ways are aberrant or that they go
beyond the ‘intentionality’ of the artefact. Spier notes that scissors can be used to stab
as well as cut household items, but he does not think the designer can be held
responsible for stabbings as well as cuttings because the former is a ‘misuse’ (see
footnote j, and note also that on my taxonomy stabbings etc. are secondary purposes of
scissors). However, the various roles that weapons have in war are not necessarily
different uses: star wars still shoots down incoming warheads, whether these be a first-
strike or the remnants of the enemy force that has itself suffered a first-strike from the
state with the star wars defence. The conclusion that I draw from this is that there is no
such thing as a weapon that is inherently defensive in the sense that the only role it can
play is to defend a country against an aggressor who unjustly attacks it. Therefore, if
defensive WR is understood to be WR involving weapons that have this character, then
there is no such thing as defensive WR because no weapon is inherently defensive.
Since there are no inherently defensive weapons, there can be no inherently ‘defensive

k. That this was a call for a research effort was emphasised by Richard Perle in his article, “The
Strategic Defence Initiative”, Journal of International Affairs 39.7
l. See Richard Ned Lebow “Assured Strategic Stupidity”, Journal of International Affairs 39.8 Had
a perfect defensive shield been possible, the same inference could have been made, namely that it
was to support a nuclear strategy of aggressive intent. The argument here does not therefore
require the shield to have been assessed as leaky; that was just a matter of the historical record.
m. This was an enduring theme of the Cold War: differing inferences and perceptions made on the
basis of evident capabilities. From the 1950s on, the United States inferred that the large numbers
of tanks stationed in Eastern Europe were poised for a conventional attack, only deterred by US
nuclear forces, the tank being a mobile gun platform ideal for crossing borders in aggressive
operations. The Soviet position, however, was that these were purely defensive, designed to
prevent anything like a re-enactment of the Great Patriotic War. See, for example, M. MccGwire, Perestroika and Soviet National Security,9 p. 43.




deterrent’ WR; that is, WR designed to build formidable defensive systems to make an
attack pointless.

The Responsibilities of the Engineer

According to (all) first-order morality no moral agent should kill, or harm, without
justification – the pacifist is an absolutist here and thinks that there can be no
justification for killing, not even self-defence. What is much more difficult is to decide
under what circumstances harming may be justified and what other duties and
obligations moral agents have, for instance whether they have ‘positive’ duties to do
good as well as ‘negative duties’ not to harm. Preventing harm is a positive duty in the
sense that it is something the agent should do as opposed to refrain from doing. There
have been times in the past when the case for doing WR has been so strong as to
resemble a positive duty; for instance, in the last war when Britain stood alone in
1940, it could be said that scientists and engineers had a duty to conduct research into
radar, code-breaking machines, and so forth, in order to prevent harm to their fellow
citizens. It could even be said that Szilard and others had a duty to mobilise their
companions to do research into nuclear weapons, given what was known about the
capacity of German science and industry, in order to respond to a possible German
nuclear threat.n, 10 If we propose that the injunction “Never do war research” be
included among the responsibilities of the engineer, and if this is understood as
absolute, then we have to deal with the objection that, in times of extraordinary
emergency, the engineer should be free to do WR. So must we treat the injunction as
one that could be over-ridden in extraordinary circumstances? And can it in any case be
defended as the norm? In peacetime, when there are no enemies bent on conquest and
genocide, can the injunction be imposed? I would answer “yes” to all these questions.
What is wrong with doing WR is that its products can kill and harm, and indeed
they are intended to do these things. Non-maleficence, the generalised prohibition on
harming, is part of every system of morality. I suggest that part of the responsibility of
professionals like medical practitioners and engineers, the most important part, is to
abide by non-maleficence in relation to their ‘special position’. Non-maleficence arises
in a direct way in medicine. Since the aim of medicine is to prevent and undo harm
caused by disease and poor health, actually causing harm strikes at the very heart of the
practice – hence the ‘above all do no harm’ of the Hippocratic Oath. The special position
of the engineer is constituted by the fact that he is able to provide tools and artefacts
that perform certain jobs. Consistent with non-maleficence, though not quite entailed
by it, is the demand that these jobs not be such as to harm or provide the means to
harm. A generalised duty of non-maleficence applied to professionals across the board
will demand that they not misuse their special position to cause harm or provide the

n. An excellent account of the genesis of the Manhattan Project is Richard Rhodes, The Making of the Atomic Bomb.10 He has a good deal to say about Szilard’s role in that project. There is a
difference between saying that WR is a duty in such circumstances and saying that it is a
justifiable breach of a moral rule. The former needs more argument, but, as mentioned above, I
do not take seriously the suggestion that doing WR can be a duty.




means for causing harm. This is plausible because, to repeat, all forms of first-order
morality forbid harming and hence, given that second-order moralities, like the
responsibilities attached to professionals, require some content, it is natural to include a
duty not to use the special abilities of the professional in question to cause harm.
Second-order morality is then consistent with first-order morality, as it must be in view
of the fact that they are not independent. If we repudiate absolutism at the level of first-
order morality and allow the agent to cause harm, either in self-defence or to prevent
greater harm, should we not do so at the second-order level? That is, should we not
allow the engineer justifiable breaches of “do not provide the means to harm”? And
what of positive duties?
To answer these questions properly it would be necessary to work out our account
of the responsibilities of the engineer in much more detail, a task that cannot be done
here. However, we can see the outline of the account a little better by actually adopting
a moral system and looking at how to move from that. I stress that this is just one way
to proceed, given our starting point, and that there are other possibilities. So, suppose
we begin with a moral system such as that of Bernard Gert, which gives priority to rules that forbid harming but which is not absolutist.o, 11 Gert proposes a system of moral rules, each of which prohibits a particular form of harm, like killing, disabling, depriving
of freedom, etc. Gert also identifies a set of complementary moral ideals that are such
as to urge the prevention of the harms referred to in the rules. Thus, the complement of
the rule “do not kill” is the ideal “prevent killing”. However, the agent has no moral
obligation to obey the ideals; preventing killing is a supererogatory act. At the level of
ordinary morality, the agent is required not to break the moral rules which forbid his doing harm, and that is all that morality requires. If he conforms to the ideals as well, then he deserves our admiration but morality does not require it. Gert’s system
therefore clearly differs from utilitarianism.p
When it comes to professional ethics it is possible for a norm or principle that has
the status of a moral ideal in ordinary morality to be raised to the status of a moral rule
for the professional. This is what happens in medical ethics with regard to ideals that
concern the prevention of physical harm. Medical practitioners are not merely to abide
by non-maleficence, they must also prevent killing, disabling, etc. The special position,
as we have called it, of the medical practitioner so to speak embodies these ideals. The
implications of Gert’s system for engineering have yet to be worked out, but I see no
such ‘embodiment’ of any moral ideals in the practice of engineering that would be
such as to raise them to the level of a rule for the engineer as professional. Rather, it
seems that the injunction mentioned above “do not provide the means for harm”, which
implies “never do war research”, should be the centrepiece of the responsibilities of

o. See Gert’s Morality. Oxford University Press.11


p. At this stage it is necessary to make a commitment to some form of first-order morality, and
whatever commitment is made, we will not be able to please everyone. However, depending on
how it is implemented, I think the conclusions of a utilitarian-based account of the
responsibilities of the engineer, when coupled with our account of the commitment of the
engineer to the uses of the artefacts that he designs, may not be too different from those reached
here.




the engineer derived from a system of moral rules like Gert’s, in view of its emphasis
on the prevention of harm. If Gert’s system did give rise to positive duties – perhaps
“where possible, provide tools that help people” – this would not conflict with the
negative duty not to provide the means to harm. However, Gert’s system allows exceptions to its rules, so when might an exception to the rule not to engage in WR be
justified?
Exceptions to the rules are justified if they are ‘publicly allowed’.q This means that
a rational and impartial person would advocate that the rule be broken in the
circumstance under consideration. The emphasis on both public and impartial is to
ensure that the exception has a universally acceptable character.r For instance, if S is
allowed to break his promise here and now, then this is justified only if everyone can
accept that anyone in a relevantly similar circumstance could break his promise. For
instance, everyone can agree that S should drive his seriously ill father to hospital
rather than keep his promise to phone a friend at midday – except perhaps an irrational
person, but Gert has built in the requirement of rational assessment – and anyone in a
similar position should be allowed to do the same. This leaves room for debate and
discussion about individual cases, though I think some are clear-cut. For example, the
engineers and scientists who did war research into radar and code-breaking in 1940
were surely allowed to do so. Any impartial observer would agree that Britain had to defend itself against Nazi Germany, and radar and code-breaking were just means.s
The same conclusion could perhaps be reached in regard to the setting up of the
Manhattan Project: as far as the Allies knew, Nazi Germany could have been planning
to build a bomb and so they needed to develop their own as a deterrent. However, by
the same token, by 1944, when it was clear that Germany was in no position to get
atomic weapons, the impartial observer would have advocated the abandonment of the
project.
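
The structure of the ‘publicly allowed’ test can be displayed schematically (an informal rendering, not Gert’s own formulation):

$$\mathrm{Justified}\big(\mathrm{break}(R, C)\big) \iff \text{a rational, impartial person would publicly advocate that } \forall S\,\forall C' \sim C:\ \mathrm{Allowed}\big(S, \mathrm{break}(R, C')\big),$$

where $R$ is the moral rule, $C$ the actual circumstances, and $C' \sim C$ ranges over circumstances relevantly similar to $C$. The universal quantification over agents and similar circumstances is what gives the exception its public, impartial character: it must be a kind of violation that anyone could accept anyone else’s making in like circumstances.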
Abandoning the Manhattan Project was not, of course, a practical alternative at that
stage, as the atomic scientists found out – they also found out later that they (or at least the majority of them) had no say in the decision to use the bombs on Japan. One of the
lessons of the Manhattan Project is that scientists and engineers relinquish the power to
make decisions about their creations when they work for governments and the military,
and that must be kept in mind when they consider whether to undertake WR. Justifiable
exceptions to the rule not to do WR should therefore be made in the light of knowledge
of what has happened in the past. Engineers should consider exceptions in the light of a
precautionary principle according to which they need to have (very) strong grounds for
believing that their work will only be used as a just means for a just cause.

q. See Gert, Morality, pp. 121-3.11


r. One might ask, in the case of war research, about the political allegiance of our impartial
observer: in a case of proposed WR in Australia, for instance, should he be Australian? My own
inclination is that the observer should be neutral and have no political allegiance, as this would
make exceptions in favour of WR much harder to justify.
s. Notice that radar is by no means an inherently defensive weapon, as its uses in the invasion of
Europe showed.




There is surely no such assurance for the proposed work on mini-nukes. Rumsfeld
called for such work in the context of the war on terror, with the suggestion that they
could be used to make bunker-busting bombs – to kill terrorists hiding in caves. But
while the war on terror presumably has just cause, it is surely unlikely that it will be
won with new kinds of military hardware. What is needed is information: who the
terrorists are, what they are going to do next, and so forth. These people do not live in
caves any more. In my opinion, these are matters that engineers need to consider before
they undertake the work in question, whether it be designing mini-nukes or means of
intelligence gathering. The latter looks more promising as a tool to combat terrorism,
but in the light of the position adopted here, it should be acknowledged that gathering
intelligence can be used in other situations for quite different ends, aggressive and not
defensive.t But the engineers who would design such things must bear responsibility for
any such uses, because they would have designed devices that enable all those
missions, and that, I have argued, entails commitment.

Conclusion

I have argued that engineers have a duty not to provide the means to harm, whatever
other duties they may have. This does not lead to an absolute prohibition on WR but it
does lead to considerable restriction on the scope of justified WR. This is, I believe, the
best we can do by way of arriving at a moral stance about WR. To try for absolute
prohibition would leave the account an easy target for counter-examples about Nazi
invaders bent on genocide, terrorists determined to kill civilians, and the like. The
argument was informed by a view of engineers as designers of tools or artefacts that have certain purposes and do given jobs, and by the claim that the engineer is committed to the realisation of these purposes and the doing of the jobs. Without this, there would be no
connection between the work of the engineer and the outcomes of the tools doing their
jobs. Also, the idea that there could be purely defensive weapons was dismissed. Were
there such weapons, then there could be a general justification for doing WR on the
ground that they would only be used in defensive roles, whatever else was the case, and hence WR, though an infringement of the duty not to provide the means for harm, would always be justified. Finally, I see no grounds at all for research into mini-nukes
and I believe engineers are duty-bound to ignore Rumsfeld’s call and not take the
federal funds just made available for it.

t. Although in the light of my definition of WR, it might be argued that research into better means
of intelligence-gathering, etc, does not count as WR. I think it does.




REFERENCES

1. Nelson, Robert W. (2002) Low-Yield Earth-Penetrating Nuclear Weapons, Science and Global
Security 10: 1-20.
2. Kemp, Kenneth W. (1994) “Conducting Scientific Research for the Military as a Civic Duty”, in
E. Sherwin et al. (eds) Ethical Issues in Scientific Research. Garland, New York.
3. Bennett, Jonathan (1995) The Act Itself. Oxford University Press, Oxford, pp. 46-49.
4. von Clausewitz, C. (1984) On War. Trans. by Michael Howard and Peter Paret. Princeton
University Press, Princeton, NJ, p. 87.
5. Spier, Raymond E. (2001) Ethics, Tools and Engineers. CRC Press, Boca Raton, Florida.
6. The New York Times, March 24, 1983, p. 24.
7. Perle, Richard (1985) The Strategic Defence Initiative, Journal of International Affairs 39: 23.
8. Lebow, Richard Ned (1985) Assured Strategic Stupidity, Journal of International Affairs 39: 68.
9. MccGwire, M. (1991) Perestroika and Soviet National Security. Brookings Institution,
Washington, D.C., p. 43.
10. Rhodes, Richard (1986) The Making of the Atomic Bomb. Penguin, Harmondsworth, UK.
11. Gert, Bernard (1998) Morality: Its Nature and Justification. Oxford University Press, New York.
