Desert Storm ushered in what would be called “the new American way of war.” Precision-guided munitions, combined with reconnaissance and battle networks, were pivotal technological developments that contributed to Desert Storm’s sweeping success. Ironically, these weapons and systems were developed for an adversary and battlefield nothing like the one faced in the Persian Gulf 30 years ago. The history of their development and their successful — and sometimes unsuccessful — use should inform the development of the next generation of defense systems.

In the 1991 Gulf War, precision systems — and a well-led, well-trained force using them — made for a conflict unlike any before it. This was largely an air war: The first 90% of the operation was devoted to shaping the battlefield and destroying targets, including air defenses, Iraqi forces and facilities, and strategic targets. With precision weapons available, smaller but valuable targets became feasible objectives that unguided, “dumb” bombs could not reliably strike. The U.S. Air Force F-117A Nighthawk stealth aircraft played a key role in the air campaign. The existence of the secret attack aircraft had been acknowledged only three years earlier. It struck 40% of the targets hit within the first three days. These strikes were critical in degrading air defense and communications networks, allowing greater freedom of action for the rest of the campaign. Laser-guided bombs also proved highly effective. CNA analysis calculated a 60% hit rate for these precision weapons against bridges. In contrast, fewer than 1 in 14 unguided bombs hit bridge targets.

Importantly, the new way of war contributed to unusually low casualties for both coalition troops and civilians compared to other 20th century wars. Certainly Saddam Hussein’s unmotivated forces and other unusual conditions helped, but success from the air also greatly reduced the riskier role of ground troops. The month-long air war was followed by a lightning-fast, 100-hour ground war. Precision munitions also reduced the number of bombs needed; previously, many munitions had to be dropped to achieve high confidence that a target would be struck. This helped keep civilian casualties to a fraction of those seen in the Korean and Vietnam Wars, when U.S. bombardments depended heavily on area weapons.

This is what military planners call an “offset strategy.” Offsets address what appears to be a competition that is unwinnable — or winnable only at unacceptable cost — by changing the balance through the application of different strengths. Since World War II, the U.S. has announced three such offset strategies. The First Offset, near the tail end of the Korean War, related to nuclear deterrence.

The Second Offset proposed advanced technology to provide better information on the battlefield and to enable precision strikes, improving combat effectiveness. This involved the development of new precision weapons — both laser-guided and GPS-guided munitions — combined with new surveillance capabilities and new battle networks for quickly delivering targeting information to the forces that needed it. The Second Offset effort was remarkable for its singular and sustained focus. It was not a general call for a broad application of technology. Specific enabling capabilities for particular operational requirements were pursued and developed consistently over the course of two decades to help deter a Soviet invasion.

Thankfully, these weapons were never used against their intended target; that is exactly the point of a deterrent. In Desert Storm, and later in Afghanistan and Iraq, however, the Second Offset systems proved themselves ideally suited to striking traditional military objectives. That does not mean they have been a panacea for every military requirement. U.S. forces relied on similar precision airpower to try to drive ISIS from the Syrian city of Raqqa in 2017. By the time ISIS had been defeated, “precision” weaponry had rendered 80% of the city uninhabitable and killed many civilians. Clearly, weapons systems designed to combat the Soviet army on the plains of Central Europe were not ideally suited to fighting non-state armed groups in an urban environment.

What can these successes and failures of the Second Offset teach us as the Department of Defense tries to implement the Third Offset? This strategic turn began in 2014. Acknowledging that great power competitors had caught up in much Second Offset technology and held other advantages of size and geography, the Third Offset broadly pursues artificial intelligence and autonomy, seeking to leverage them for military advantage.

The first lesson is that Second Offset weapons development succeeded precisely because it was not a broad pursuit of technology. The Third Offset would benefit from a similar effort to maintain a sustained, patient focus on a limited number of operational imperatives and enabling technologies within the vast fields of AI and autonomy.

A second lesson: History has taught us that however vital it may be to prepare for the most formidable threat, U.S. forces are more likely to end up engaging lower-end threats. And the U.S. has repeatedly found itself lacking some of the capabilities needed for that set of threats. As the nation progresses in Third Offset technology, it is not too soon to begin analyzing how these capabilities might — and might not — be adapted to the adversaries we are more likely to face. Thirty years ago, success in Desert Storm was partially an unintended consequence of preparing for “the big one.” As a military strategy, it would be better to plan for intended consequences.

Larry Lewis is the Vice President and Director of the Center for Autonomy and Artificial Intelligence at CNA. His areas of expertise include lethal autonomy, reducing civilian casualties, identifying lessons from current operations, security assistance, and counterterrorism.

Don Boroughs is a Communications Senior Advisor at CNA.