The U.S. Navy achieved many successes in Desert Storm. But I am a big believer that we should never miss an opportunity to reexamine every decision and every performance metric of man and machine so that we can do better the next time. I discovered early in my career that drawing out those lessons takes the right metrics and the right analytical partner.

I discovered the value of an analytical partnership as a young officer. I was ordered to VX-1, a squadron dedicated to evaluating weapons and systems for antisubmarine warfare. My first assignment as a project officer was to determine the suitability of the AS-12, a French wire-guided missile, for use by patrol aircraft against Soviet Komar-class boats. The first step was to develop a test plan that would determine the missile’s effectiveness against the threat. This was to be an expedited study, with only a limited number of live-fire missiles provided.

In VX-1 we had an assigned CNA Operations Evaluation Group (OEG) field representative who had to endorse each test plan to ensure that it provided a solid basis for any conclusions and recommendations. When I explained that we were going to calculate the AS-12’s effectiveness with our limited number of missiles, he told me what I didn’t want to hear: There was no possibility of establishing a valid probability of hit and kill with so few data points. But after encouraging me to set my sights on a more realistic performance metric, he was most helpful in developing a test plan.
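The arithmetic behind his objection is easy to reproduce. Here is a minimal sketch, using hypothetical shot counts rather than the actual AS-12 allocation, of the exact Clopper-Pearson confidence interval you would get on a hit probability estimated from a handful of live firings:

```python
# Hypothetical illustration: how wide the uncertainty on a hit
# probability remains after only a few live firings.
from scipy.stats import beta

def clopper_pearson(hits: int, shots: int, alpha: float = 0.05):
    """Exact (Clopper-Pearson) confidence interval for a hit probability."""
    lo = 0.0 if hits == 0 else beta.ppf(alpha / 2, hits, shots - hits + 1)
    hi = 1.0 if hits == shots else beta.ppf(1 - alpha / 2, hits + 1, shots - hits)
    return lo, hi

# Suppose 3 hits in 4 live shots (made-up numbers, purely for illustration).
lo, hi = clopper_pearson(3, 4)
print(f"point estimate {3/4:.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")
# -> roughly [0.19, 0.99]: consistent with a near-worthless weapon
#    or a near-perfect one. The field rep was right.
```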

Preparing for that test, I learned another potent lesson about using the right metric. My team went to France for factory training on how to properly handle and use the missile. We spent hours in the simulator room, where a projector presented a launch sequence and a tracking light with which to guide the missile. The four or five of us being trained were carefully observed, and the result of each simulated launch was recorded, yielding for each shooter a median miss distance, commonly called “circular error probable,” or CEP. In the back of the room was an extra observer, who I assumed was there to keep an eye on our instructor.

At the end of our week of training, the factory instructor pulled me aside and explained the purpose of the other observer. Whatever CEP score each of our shooters achieved on the simulator, the observer could predict with great certainty who could hit with the real thing. Over years of using the missile, the French had determined that certain operators could be aces on the simulator, with small miss distances, and still be a bust when guiding the real thing. It was critical to understand that results from simulations, tests and actual operations were not necessarily the same. For this missile and this target, a close miss had little chance of producing a kill. The important metric was the probability of hit, not CEP; the French trained to probability of hit.
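The distinction is easy to make concrete. CEP is just the median miss distance: half the shots land inside it, half outside. It says nothing about the shape of the miss distribution, and the probability of landing within a small lethal radius depends on exactly that. A minimal Monte Carlo sketch, with dispersion figures invented purely for illustration (no AS-12 data here), shows two shooters with essentially the same CEP and very different hit probabilities:

```python
# Hypothetical illustration: identical CEP, very different probability
# of hit against a target where only a direct hit counts.
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
LETHAL_RADIUS = 0.5  # a "hit" must land within this radius (arbitrary units)

def miss_distances(sigma: float, n: int) -> np.ndarray:
    """Radial miss distance for a circular Gaussian aim error."""
    return np.hypot(rng.normal(0, sigma, n), rng.normal(0, sigma, n))

# Shooter A: steady dispersion, tuned so the median miss (CEP) is ~1.0.
a = miss_distances(0.85, N)
# Shooter B: half the shots very tight, half wild -- same median miss as A.
b = np.concatenate([miss_distances(0.30, N // 2), miss_distances(11.3, N // 2)])

for name, d in (("A", a), ("B", b)):
    print(f"shooter {name}: CEP = {np.median(d):.2f}, "
          f"P(hit) = {np.mean(d < LETHAL_RADIUS):.2f}")
# Both CEPs come out near 1.0, yet A hits ~16% of the time and B ~38%.
# Ranking shooters by CEP says nothing about who actually hits.
```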

Our live-fire tests confirmed that probability of hit was indeed the right metric for this missile-target combination. With the right metric and the OEG field rep overseeing the test plan, our evaluation gave the best possible indication of whether the system would be a viable option against the threat.

So well before Desert Storm, I had learned two important lessons that I continued to apply as I rose through the ranks: Get the OEG field representative into the picture as soon as possible, and with any guided weapon, know which matters more, CEP or probability of hit. The warhead and the target will dictate.

Knowing those two things created a valuable opportunity for the Navy to learn from our experience employing the Tomahawk missile in Desert Storm. About a month before the war began, I had to sell skeptical leadership on a plan to give the Navy’s cruise missile a role in the critical first night of the campaign. The Tomahawk had never been used in combat. With Generals Norman Schwarzkopf and Colin Powell around the table, I displayed a diagram of a baseball field. I explained that years of operational test firings showed that the Tomahawk would not land somewhere in the outfield, or even at some random spot in the infield. It would hit the pitcher’s mound.

The argument was sufficiently convincing, and the Navy launched 118 Tomahawks in the first 24 hours of Desert Storm. We quickly learned that we needed to review how we staged the entry of multiple launches into the same target area. Sending four missiles down the same street hands the adversary an opportunity to shoot down the last in line.

But the enduring insight I carried with me about targeting metrics convinced me that there were more important lessons to be learned, and I knew the right partner to pursue them. I thought there might be a way to reconstruct each missile’s prescribed track and determine whether it reached its intended target. This would require access to very sensitive observations of the target area and the route of flight. I asked my embarked CNA representative to inquire whether CNA would be interested in doing such a study. We received plenty of pushback from the Tomahawk program office and from the intelligence agencies that held some of the information we needed, but Christine Fox and her team at CNA did a marvelous job of making the study happen. In the end, we learned that even with a launch-and-forget system, there are many places in the planning cycle and delivery instructions where improvements could be made. Desert Storm showed that we were doing a lot of things right. More importantly, it showed how we could do even better.


Adm. Stanley R. Arthur, as Commander, U.S. Naval Forces Central Command, led U.S. and coalition naval forces in Desert Storm. For more information about CNA’s analysis in Desert Storm, please visit our Analysis in Combat page.