Approved for public release; distribution is unlimited.

Published in Aerospace Power Journal, Summer 2000


Politics, Death, and Morality in US Foreign Policy

Dr. Karl P. Mueller

Editorial Abstract: In this companion piece to Dr. Record’s article on “Force-Protection Fetishism,” Dr. Mueller provides a balanced perspective on casualty aversion and its potential implications for military operations and for national security policy. He argues that casualty aversion has become “cultish” largely because of technological changes in warfare that make less brutish combat more feasible and, therefore, a moral imperative. Yet, he points out that moral obligation may just as well dictate dying for the right cause and that such morality, rather than politically expedient doctrines, should drive our policy.

AMERICAN NATIONAL LEADERS, both military and civilian, appear to be held in thrall by a cult of casualty avoidance, as Jeffrey Record compellingly argues in slightly different words in his article “Force-Protection Fetishism” (this issue). To call it a cult is not mere hyperbole. Many statesmen and generals believe, with absolute and unquestioning conviction, that the United States can no longer use force successfully unless American military casualties are virtually nil, even though there is little evidence to support this belief and in spite of its pernicious effects on US foreign and defense policy.1

The belief that the United States will avoid risking the lives of its troops, and will capitulate if they are killed in quantity, encourages America’s enemies by offering an apparent means to defeat the numerically and technologically superior superpower. It also divides the United States from allies who do not share this belief about themselves. So buying into the myth is an act of pessimism—even of defeatism—although, of course, statesmen have often held erroneously pessimistic beliefs before. What is more surprising is that the casualty-avoidance cult is so powerful among military leaders when, as Record notes, it threatens the very existence of the US Army (and arguably the Marine Corps as well) as we know it. It also holds the potential to transform the combat arms of the US Air Force into mere deliverers of standoff munitions and operators of uninhabited aircraft. Such a transition might conceivably make military sense, but one certainly would not expect it to appeal to traditional fighter or bomber generals.

Of course, like most myths, the belief in American casualty intolerance is constructed around a kernel of truth. US public support for wars that seem inordinately costly relative to their objectives—or that appear to offer little prospect of success—has indeed disintegrated as body counts have risen, most visibly in Korea, Vietnam, Lebanon, and Somalia—although this pattern is neither unique to the United States nor a product of the television age, as is often suggested.2 However, historical experience offers no reason to believe that the American public will fail to support costly wars in which the lives of US troops are not apparently being wasted. Moreover, public-opinion evidence indicates that Americans have been largely indifferent to loss of life among allied forces, enemy troops, and civilian populations, although, again, US leaders often believe the opposite to be true.

Behind the Cult

Why, then, do the myths of casualty and collateral-damage intolerance hold such sway? In fact, there are many reasons for the cult. In part, it grows out of paying too much attention to a small number of high-profile cases without placing them in proper context. And, in part, it has to do with many politicians, military leaders, and journalists being undereducated in history and social science. But it also reflects larger historical and technological trends: the increasing potential cleanliness of warfare and the West’s slow, ongoing shift away from barbarism.

Although the idea that warfare is becoming less gruesome may seem counterintuitive at first glance, it is generally true. During the last two hundred years, both conventional land and naval combat have grown progressively (though not always steadily) less horrible for their participants in the developed world, thanks to factors such as improved medical care and casualty evacuation, mechanization, and refinements in some classes of weapons. Air warfare, too, has become a far less bloody activity over its 90 years of development. In short, the lives of soldiers have, on the whole, become less nasty, brutish, and short since the beginning of the industrial revolution, as have the peacetime lives of civilians. Warfare has also tended to become less brutal for noncombatants, except of course when they are deliberately targeted; particularly in recent years, the ability of armed forces to minimize harm to civilians when attacking their enemies has improved dramatically as a result of the revolution in precision-guided weapons. Of course, none of this means that a particular war will be less horrible than those that preceded it—only that it can be.

Along with this increasing potential for the human costs of warfare to decline has come a normative belief that they should do so and that war, widely considered a morally uplifting entertainment as recently as a century ago, is something that ought in general to be avoided—or at least controlled.3 The more casualties can and should be avoided, the more justification they require and the more unacceptable the profligate waste of soldiers’ lives becomes.

Thus, in some ways, a faulty or exaggerated belief in total casualty intolerance can be seen as something hopeful—as giving Americans credit for even greater aversion to death and killing than they actually deserve. However, it has a far less laudable side as well, representing the dominance of political expediency over morality, assuming moral cowardice on the part of the American people, and shifting blame onto the public for the military and political failures of statesmen and generals.

Making a Virtue of Timidity

Jeffrey Record attributes many of the failures of Operation Allied Force—most notably the failure to halt the expulsion of the Albanian Kosovars—to the unwillingness of the United States and the North Atlantic Treaty Organization (NATO) to place the lives of ground troops at risk, and to the air campaign’s emphasis on minimizing alliance losses by operating at medium and high altitudes. These are reasonable charges, although it is not certain that a less cautious air campaign would have achieved better political results, even if it had been more effective at destroying Serbian ground forces. Nor can we yet be sure that the “no ground forces” pledge actually lengthened the war, although it may well have—Slobodan Milosevic probably would have doubted NATO’s will to invade Serbia until Anglo-American intentions to do so were made clear late in the war, regardless of the ill-advised rhetoric coming from the White House and Brussels in the early weeks of the conflict. And an early combined-arms attack into Kosovo might have produced a far greater bloodbath for the Kosovars than actually occurred. Nevertheless, a pervasive fear of casualties, along with efforts to avoid causing civilian deaths, certainly dominated both the air campaign and Milosevic’s strategy to make NATO call off the war.

Next door to Serbia, in Bosnia, the effects of the force-protection mania are also visible in a way that is less dramatic but at least as disturbing. As Record describes, American troops often appear afraid to emerge from their compounds except in heavily armed, multivehicle convoys, in spite of Bosnia’s low-threat environment; troops so constrained can contribute little to real peacekeeping. The US military stands poised to cross the line from being the world’s slightly uneasy sheriff to its downright nervous Barney Fife.

However, in both the Serbian and Bosnian cases, among others, it may not be the effects of casualty-averse US policies that are the most troubling, but their motivations. In one briefing and press conference after another, both military and civilian leaders explain their efforts to protect the lives of American troops in terms of the political unpopularity of suffering casualties, painting a picture of an American public that is too craven to make noble sacrifices on its own and too ignorant to grasp leaders’ explanations of why it should. Similarly, NATO’s Herculean efforts to avoid causing collateral damage during Operations Deliberate Force and Allied Force were usually justified on the grounds that they were required to keep the international media and the allied powers happy. Among other effects, emphasizing the political rather than the moral imperatives to avoid killing noncombatants threatens to create a litigious mind-set among air campaign planners that assumes that if a target is legal to attack, it must be worth attacking.

Does the American public really demand that the lives of US troops and those of civilians not be wasted? Will the press have a field day if civilians are killed by US bombing? At the most fundamental level, it should not matter. We certainly ought to protect our forces and protect noncombatants, insofar as we can, regardless of popular opinion—not because doing so is politically prudent but because it is morally right.

Conversely, however, there are objectives that are worth dying—and killing—in order to achieve; in such cases, it is morally wrong not to risk or take lives when necessary. To shy away from casualties under these circumstances strikes at the very heart of American soldiers’ solemn oath to defend their country from all enemies. Moreover, to blame such a lack of national courage on the imaginary squeamishness of the electorate calls into question the philosophical foundation of the Republic itself.

Reassessing the Morality of War

Record rightly savages the Weinberger-Powell Doctrine over its prescriptions to use military force only when the most vital national interests are at stake and only when public and legislative opinion favor the use of force. As he argues, these criteria would have supported the disastrous Anglo-French appeasement of Hitler at Munich in 1938, and they probably would have suggested that US intervention in Vietnam was a good idea.4 (Moreover, although Weinberger himself disagrees, a good case can be made that all of his doctrine’s criteria were eventually fulfilled during Operation Allied Force.)5 One could add that if the Weinberger Doctrine had held sway in the 1770s, the American Revolution—initially supported by only a third or so of the colonists—would never have been undertaken. Endorsing the use of overwhelming force to protect vital interests while prohibiting the use of limited force for more modest ends does indeed tie the hands of statesmen both unnecessarily and inappropriately, subordinating pursuit of the national interest to protection of the government’s popularity.

The last of Weinberger’s six criteria also merits reexamination: the widely accepted rule that commitment of US forces to combat should be a policy of last resort. Although the “last resort” mantra has a certain absolutist appeal, it is in fact a fatally flawed principle. If the reason for making force a last resort is simply to avoid suffering casualties unless there is no alternative, then American statesmen should consider using military force in many situations in which it can be effectively employed without risk of harm to US forces, a potentially common circumstance in the post-cold-war world of weak enemies and powerful standoff weapons. Moreover, putting US forces in harm’s way is almost never truly a last resort—there are always alternatives for the world’s only superpower. The fact that for 50 years the United States has opted to suffer casualties in a number of conventional conflicts that could easily have been settled by using nuclear weapons is but one clear indication that we do not actually believe that spilling American blood must be avoided at all costs short of surrender.

On the other hand, if the last-resort rule is based on the moral premise that military force is too destructive to employ unless all else has failed, it provides poor guidance in cases in which military force has the potential to inflict less harm than alternative policies. For example, in some circumstances, as was true in the 1990–91 confrontation with Iraq over Kuwait, using force sooner rather than later can be less costly than trying everything else first. Moreover, it is important to recognize that in this era of discriminate weapons, the use of force can be far less destructive than employing some other, supposedly milder, instruments of power—most notably wide-spectrum economic sanctions. This is strikingly illustrated by Western policy towards Iraq in the 1990s, when United Nations trade restrictions indirectly led to the deaths of hundreds of thousands of Iraqi civilians, in the wake of a far more effective air war that killed only thousands of them.6 As airpower continues to develop its precision-targeting and -attack capabilities, and as nonlethal weapons enter the military inventory, the traditional association of military force with maximum destruction will become increasingly outdated, and the last-resort principle will eventually have to be abandoned.

Making Moral Strategy

If the American public is conditionally tolerant of casualties and consistently indifferent to collateral damage, and if the central principles of the Weinberger Doctrine are little more than a list of excuses for avoiding political risk, what should guide US decisions about when and how to use military force? Inconveniently for national decision makers, the answer is that these choices call not for simple rules of thumb but for actual wisdom. Deciding which causes are worth risking American lives to pursue and what amount of risk is appropriate ultimately requires a moral, not simply a political, compass.

This is not to say that public opinion is irrelevant—in a sound democracy it cannot be. However, national leaders are obligated to lead. When they do so, they generally find that the populace is quite tolerant of their foreign-policy decisions. In fact, the American people will even support military actions that are ill advised, requiring statesmen and generals to provide their own restraints on adventurism, although these ought to be more sophisticated and well founded than those embodied in the cult of the defensive or the Weinberger-Powell Doctrine.

The best defense against losing public support for military actions once casualties begin to occur is popular conviction of their compelling moral value. To a considerable extent, this can be shaped by effective leaders, although history also teaches that the American people are not amoral dupes who will credulously accept anything they are told. Expensive wars are often acceptable, while apparently pointless or disproportionately expensive wars are not. In the end, however, the assumption that the public will not support doing that which is right is simply unacceptable as a basis for national policy. If it were consistently true, the United States would not deserve the protection of those who have pledged their lives to defend it. 


1. On the reality of US casualty intolerance, see, in addition to the sources cited by Record, Troy E. DeVine, “The Influence of America’s Casualty Sensitivity on Military Strategy and Doctrine” (master’s thesis, School of Advanced Airpower Studies, June 1997); and John Mueller, “Public Opinion as a Constraint on U.S. Foreign Policy: Assessing the Perceived Value of U.S. and Foreign Lives” (paper presented at the International Studies Association National Convention, Los Angeles, Calif., 14 March 2000). On the historically more common military tendency towards cultish beliefs in the omnipotence of the offense, see Maj John R. Carter, Airpower and the Cult of the Offensive (Maxwell AFB, Ala.: Air University Press, 1998).

2. The relationship between the decline in US public support for the televised Vietnam War and the accumulation of casualties in the conflict was roughly the same as occurred in Korea, during the age of radio and newsreels. See John E. Mueller, War, Presidents, and Public Opinion (New York: Wiley, 1973).

3. See John E. Mueller, Retreat from Doomsday: The Obsolescence of Major War (New York: Basic Books, 1989).

4. At least early in the conflict. After the fall of Sukarno and the Indonesian communists in 1965–66, the argument that keeping South Vietnam noncommunist was vital to US national interests became less tenable.

5. See Caspar W. Weinberger, “The Use of Force—The Six Criteria Revisited,” speech at the Air Force Association National Convention, Washington, D.C., 14 September 1999; on-line, Internet, 14 March 2000, available from http://www.aef.org/wein999.html.

6. See John Mueller and Karl Mueller, “Sanctions of Mass Destruction,” Foreign Affairs, May/June 1999, 43–53.


Dr. Karl P. Mueller (BA, University of Chicago; MA, PhD, Princeton University) is an associate professor of comparative military studies at the School of Advanced Airpower Studies, Maxwell AFB, Alabama. He has written articles on a variety of national security topics, including deterrence theory, economic sanctions, and the coercive use of military power in such journals as Security Studies and Foreign Affairs, and has contributed chapters to The Paths of Heaven: The Evolution of Airpower Theory and Deliberate Force: A Case Study in Effective Air Campaigning, both published by Air University Press. He is now working on projects dealing with space weaponization, the role of airpower in twenty-first-century warfare, and the strategy and results of Operation Allied Force. Dr. Mueller is completing a book about the security strategies of European middle powers.


The conclusions and opinions expressed in this document are those of the author cultivated in the freedom of expression, academic environment of Air University. They do not reflect the official position of the U.S. Government, Department of Defense, the United States Air Force or the Air University.
