Sam Seitz

Toward the end of the 1960s, NATO planners were once again forced to contend with the possibility of Warsaw Pact forces achieving overwhelming conventional superiority in Central Europe. While improved qualitative and quantitative assessments earlier in the decade suggested that NATO conventional forces enjoyed rough parity with their Warsaw Pact adversaries, these same measures pointed to a growing disparity in Moscow’s favor by the early 1970s.[1] To counter this deleterious trend, American policymakers began to seek technological offsets that would grant their forces significant qualitative advantages over Soviet forces. Specifically, the U.S. sought to use technological advances to link and magnify the capabilities of different units and platforms. This approach eventually came to be known as the Second Offset. Because any such assessment is inherently counterfactual, it is difficult to gauge how well the forces that resulted from the Second Offset would have fared in a conflict with the Soviets. Coalition dominance in the 1991 Gulf War, however, suggests that the pursuit of technological supremacy meaningfully improved the ability of American forces to prevail over forces employing Soviet equipment and doctrine.

Nevertheless, progress under the framework of the Second Offset was frequently slowed by institutional opposition. While this seems perplexing in hindsight, the fundamental changes in doctrine and force composition that the Offset entailed were fairly radical, and in a conservative institution like the military, one would expect skepticism toward such departures from the status quo. After all, while every organization has its sacred cows, the military can be particularly risk averse.[2] Fundamental innovations can yield huge advantages, but a miscalculation can result in catastrophic failure. Indeed, critics presented a range of concerns, including fears that technological offsets would be too expensive to field in sufficient numbers and too complex to employ effectively, especially in the heat of battle.[3] Many senior military officials therefore preferred to err on the side of well-understood existing methods rather than pursue high-risk technologies that might yield a significant advantage but might also fail to deliver, leaving the U.S. military dangerously exposed. If not for the dedicated efforts of certain visionaries and growing uncertainty over the sustainability of American conventional deterrence in Europe, the transformational developments of the Second Offset might never have occurred.

The Origins of the Second Offset

The initial impetus for the Second Offset was a worsening of trends in the conventional balance in Europe. Perhaps the clearest such measure was the burgeoning size of Soviet forces: from the mid-1960s to 1975, the Red Army grew from 3.15 to 3.9 million personnel, and the number of Soviet divisions increased from 136 to 170 (though much of this expansion occurred along the Soviet Union’s border with China).[4] Although Soviet divisions remained smaller than their American counterparts, during this period they grew substantially both in soldiers per division and in vehicles and support equipment allocated per unit. This was particularly true in East Germany, the most likely flashpoint, where intelligence revealed that Soviet forces were composed of elite “Guards” units.[5]

But beyond expanding their manpower, the Soviets were also outproducing the U.S. across a wide range of vehicles and equipment while rapidly growing the Soviet Air Force. At the same time, NATO force levels along the Inner German Border had declined slightly. By the early 1970s, these developments had left NATO clearly outnumbered by the Warsaw Pact in the European theater.[6] The problem was compounded by the improving quality of Soviet vehicles and equipment: new platforms like the T-72 tank, the BMP infantry fighting vehicle, and certain surface-to-air missile (SAM) systems were judged to be as good as or better than their NATO equivalents.[7] This was not an immediate concern at the time, because the USSR had yet to replace most of its legacy systems with these newer models, but it did suggest that NATO’s qualitative edge was waning. There was thus increasing evidence that the Soviets were beginning to reclaim clear conventional superiority, re-establishing the strategic problems of the early Cold War. NATO, it was believed, might once again face the prospect of large Soviet armored thrusts into Central Europe, and due to the growing conventional imbalance, there were fears that Warsaw Pact forces could not be halted long enough for reinforcements to be moved into position.

To European allies already skeptical of the Flexible Response doctrine and its prioritization of conventional forces, these developments were deeply concerning.[8] American analysts were more divided: many maintained that the qualitative advantages of American and NATO forces created sufficient doubt about the likelihood of a Warsaw Pact victory to deter Soviet aggression. But there were also pessimists in the American camp, primarily in the military and Congress, who felt that the ever-growing size and technological capacity of the Red Army rendered effective deterrence and defense of Europe increasingly untenable.[9] It was not obvious which position more accurately reflected reality at the time, with proponents of each side challenging the metrics and methods of the other. What was clear, though, was that cracks were growing in the consensus around the efficacy of Flexible Response.

Undertaking the Second Offset

As the 1960s gave way to the 1970s and internal disagreement over NATO’s conventional strength relative to the Warsaw Pact grew, U.S. policymakers became increasingly interested in strategies to overcome a potentially deteriorating strategic balance. A return to a strategy of massive retaliation was untenable: the Soviets by this point possessed a survivable nuclear arsenal and the ability to launch a crippling attack on the U.S. homeland, undermining the credibility of American extended deterrence. Indeed, the situation was even more perilous than that of the mid-1960s because the Soviets’ development of MIRVed nuclear warheads and early ballistic missile defense systems had further eroded America’s nuclear edge.[10]

With a return to massive retaliation impractical and ill-advised, the U.S. began to leverage technology to improve its position relative to the Soviets and their allies. These efforts eventually coalesced into what came to be known as the Second Offset. As former Secretary of Defense William Perry put it, the Second Offset “sought to use technology as an equalizer or ‘force multiplier.’”[11] The goal was not simply to build platforms that were qualitatively superior to their Soviet counterparts but to develop a “system of systems” whose components complemented one another in ways that improved the tactical effectiveness of U.S. forces in the aggregate. Over time, these systems came to focus on command, control, communications, and intelligence; defense suppression; and precision strike.[12] Each reinforced the others: improvements in intelligence platforms and command and control allowed for more rapid detection and targeting of enemy units, while standoff weapons and stealth made those attacks far more potent. Importantly, these technologies were all force multipliers, since they did not require NATO to match the Warsaw Pact unit for unit. Instead of building an air force large enough to absorb high losses from Soviet SAMs, for example, the U.S. chose to build smaller numbers of stealth and electronic warfare aircraft that rendered Soviet air defenses far less effective. Precision weapons were particularly valuable in this regard, as they allowed individual aircraft to strike targets that previously would have required tens or hundreds of sorties to destroy.
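The arithmetic behind that last claim can be made concrete with a simple, purely illustrative calculation (the probabilities used here are hypothetical and are not drawn from the sources cited in this essay). If each weapon destroys a hardened target independently with single-shot kill probability $p$, then the number of weapons $n$ required to achieve a cumulative probability of destruction of at least $P$ satisfies

$$n \geq \frac{\ln(1 - P)}{\ln(1 - p)}.$$

For $P = 0.9$, an inaccurate unguided bomb with $p = 0.01$ requires roughly 230 weapons, whereas a precision-guided munition with $p = 0.5$ requires only about 4. Under these assumed values, a single aircraft carrying a handful of guided weapons does the work of the dozens or hundreds of sorties needed to deliver the unguided equivalent.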

Much of the early work occurred through DARPA, which, under the leadership of George Heilmeier, initiated work on a range of technologies, including improved command and control, standoff and anti-armor weapons, stealth, and sensor and communications platforms like AWACS and JSTARS.[13] These efforts enjoyed the support of successive administrations from Richard Nixon to George H. W. Bush and benefitted from the growing military budgets of the late Carter and early Reagan administrations.

Despite this broad support, there were influential skeptics in both the military and the broader policy community. Some of these holdouts doubted the more optimistic claims made by advocates of certain technologies. For example, Air Force General Robert Dixon was initially reluctant to support DARPA’s work on the Doppler radar system that would eventually enable JSTARS because of an underwhelming experience working with DARPA on an earlier project.[14] He was consequently skeptical that such a “magical radar” was even feasible and initially withheld his support.[15] There were similar doubts about the efficacy and necessity of GPS, which required extremely precise timekeeping and which some deemed redundant.[16] In both cases, clever advocacy and personal relationships eventually greased the wheels of change, but doubts persisted throughout the 1980s for a host of reasons, including skepticism about the costs and effectiveness of such novel systems and the doctrinal changes they implied.

Evaluating the Second Offset

Given the resistance the Second Offset strategy encountered, it is worth considering how effective it proved to be. Perhaps the greatest test of the systems and doctrine that emerged from the Second Offset was the First Gulf War. By that point, many of the aforementioned technologies – stealth aircraft, precision strike munitions, and C4ISR aircraft like JSTARS and AWACS – were being fielded to varying degrees, making it possible to evaluate their effectiveness in a real-world battlefield environment. As former Secretary of Defense Perry put it, “[the technology developed through the Second Offset] gave American forces a revolutionary advance in military capability.” Indeed, he went on to argue that “An army with such technology has an overwhelming advantage over an army without it, much as an army equipped with tanks would overwhelm an army with horse cavalry.”[17]

The effectiveness of these technologies is difficult to deny. Coalition forces eviscerated Iraq’s military – then the fourth largest in the world and equipped with high-end Soviet vehicles and cutting-edge air defense systems – while suffering few casualties and historically low aircraft attrition. Perry convincingly demonstrates the ways in which the technology developed during the Second Offset enabled this unprecedented victory. For example, he argues that AWACS and JSTARS (along with America’s well-developed array of intelligence and communications satellites) gave U.S. forces near-instantaneous information on the location and disposition of enemy aircraft and ground forces, which enabled commanders to adjust quickly to changing conditions. He also highlights the role stealth aircraft played in allowing the U.S. to penetrate, suppress, and destroy Iraqi air defenses and clear the way for non-stealthy aircraft to bomb high-value strategic targets. Precision and standoff weapons were valuable in this regard as well, since they permitted strike platforms to remain largely out of range of Iraqi defenses. The result was a crippled Iraqi air defense system reduced to World War Two-style tactics of firing blindly into the sky, while Coalition aircraft losses remained remarkably low.[18]

While these results suggest that the Second Offset bore significant fruit, it is important not to overstate the role of technology. As Perry himself acknowledged, much of the Coalition’s success was attributable to the excellent training of its soldiers and the skill of its commanders. Moreover, the American buildup was only possible because of Saddam Hussein’s inexplicable decision to allow U.S. and allied forces to surge into the region unopposed. The U.S. also benefited logistically from the large number of well-maintained Saudi airbases and the plentiful oil resources available in theater.[19]

More recent work also suggests that there was more to the Coalition victory than the technology developed through the Second Offset. Caitlin Talmadge, for example, argues that Iraq’s army was poorly organized for conventional warfare because of Saddam’s efforts to “coup-proof” his military by centralizing control, pitting units and services against one another, and failing to adequately train his forces.[20] Another view, advanced most notably by Kenneth Pollack, holds that certain Arab cultural attributes limit Arab militaries’ ability to prosecute modern conventional wars successfully; in particular, Pollack argues that hierarchical patterns of social organization discourage initiative among lower-level officers.[21] Finally, some argue that the very idea of a “revolution in military affairs” is overstated, as it focuses too heavily on technology at the expense of training and weapons employment.[22]

It is also worth highlighting that certain components of the Second Offset were underdeveloped. In particular, the number of precision weapons (especially on warships) available to Coalition forces was relatively small. While this ultimately did not prove to be a major handicap, one can imagine that this small magazine depth would have posed a serious problem in a major conventional conflict in Central Europe. Another noteworthy consideration is the permissiveness of the operating environment during the First Gulf War. For example, the Iraqis failed to meaningfully disrupt any of the intelligence, communications, or command and control infrastructure of the Coalition, but a more formidable adversary (such as the Soviets) likely would have.[23] Thus, it is difficult to accurately measure the extent to which Coalition victory was attributable to the Second Offset, and it is even more difficult to predict how effective it would have been in a conflict against the Warsaw Pact.

What is clear is that the Second Offset was an innovative solution to a vexing problem. It offered a means of offsetting Soviet conventional superiority by leveraging American economic and technological might, and it produced several truly transformative technologies that remain core elements of the modern American military. While the skepticism that certain figures in the national security establishment felt toward these technologies is understandable and, in hindsight, not completely unjustified, many of their concerns proved overstated or rested on false premises. The changing security landscape of the early 1970s required some evolution in American force structure and doctrine, and the Second Offset was a realistic and largely successful strategy for achieving it: it retooled the American military to capitalize on U.S. strengths in technological innovation and built a force able to meet American objectives.

_______________________________________

Notes:

[1] For a discussion of the development of earlier estimates, see Alain Enthoven and K. Wayne Smith, How Much Is Enough? (New York, N.Y.: Harper and Row, 1971).

[2] Paul Musgrave, Yu-Ming Liou, and J. Furman Daniel, “The Imitation Game: Why Don’t Rising Powers Innovate Their Militaries More?,” The Washington Quarterly 38, no. 3 (2015): 157-174.

[3] William Perry, “Desert Storm and Deterrence,” Foreign Affairs, Fall 1991, https://www.foreignaffairs.com/articles/iraq/1991-09-01/desert-storm-and-deterrence.

[4] Richard Bitzinger, Assessing the Conventional Balance in Europe (Santa Monica, Ca.: Rand Corporation, 1989), 24-25; 27.

[5] Ibid., 25.

[6] Ibid., 27.

[7] Ibid., 28.

[8] For example, French General Andre Beaufre labelled Flexible Response “incompetent.” Andre Beaufre, NATO and Europe, trans. Joseph Green (New York, N.Y.: Vintage Books, 1965).

[9] Bitzinger, 29-32.

[10] Ibid., 24; and David Alan Rosenberg, “U.S. Nuclear War Planning, 1945-1960,” in Strategic Nuclear Targeting, Desmond Ball and Jeffrey Richelson, eds. (Ithaca, N.Y.: Cornell University Press, 1986), 46-47.

[11] Robert Tomes, “The Cold War Offset Strategy: Origins and Relevance,” War on the Rocks, November 6, 2014, https://warontherocks.com/2014/11/the-cold-war-offset-strategy-origins-and-relevance/.

[12] Perry.

[13] Tomes.

[14] Glenn A. Kent, Thinking About America’s Defense: An Analytical Memoir (Santa Monica, Ca.: Rand Corporation, 2008), 185.

[15] Ibid., 186.

[16] Ibid., 189.

[17] Perry.

[18] Ibid.

[19] Ibid.

[20] Caitlin Talmadge, The Dictator’s Army: Battlefield Effectiveness in Authoritarian Regimes (Ithaca, N.Y.: Cornell University Press, 2015).

[21] Kenneth Pollack, Armies of Sand: The Past, Present, and Future of Arab Military Effectiveness (Oxford, England: Oxford University Press, 2018).

[22] Stephen Biddle, “Victory Misunderstood: What the Gulf War Tells Us about the Future of Conflict,” International Security 21, no. 2 (1996): 139-179.

[23] Perry.