Warning Decision Making: The Relative Roles of Conceptual Models, Technology, Strategy, and Forecaster Expertise on 3 May 1999

Abstract
This paper examines concepts related to warning decision making for the 3 May 1999 tornado outbreak in central Oklahoma. Sixty-six tornadoes occurred during the outbreak, 58 of them within the Norman, Oklahoma, National Weather Service Weather Forecast Office (WFO) area of responsibility. Verification statistics for the event revealed that the WFO issued 48 tornado warnings, with a median lead time of 23 min, a false-alarm rate of 0.29, and a probability of detection of 0.89. WFO Norman meteorologists utilized a warning decision-making methodology that relied upon 1) scientifically based conceptual models of storm types and their environments, 2) Doppler radar data, 3) ground-truth observations, 4) technology, 5) strategy, and 6) human expertise. This methodology was compared with the ability of radar algorithms [e.g., the Weather Surveillance Radar-1988 Doppler (WSR-88D) Mesocyclone Algorithm (MA) and Tornado Detection Algorithm (TDA)] to identify the tornado threat. Although the steady-state nature of the isolated, long-lived tornadic supercells presumably presented an ideal case for algorithm performance, shortcomings were identified. The most significant finding was the difference in median lead times between the WFO's subjective, human-issued tornado warnings and signature detection by the TDA for the first tornado associated with each supercell. The first tornado is especially significant because ground truth is not yet available and radar signatures are less well defined at this early stage. Median lead times were 2 min for the TDA and 29 min for the WFO. The MA and TDA proved most useful as a safety net, or check, against the WFO warnings. The initial tornado warning for one supercell storm would have been delayed had the TDA not alerted the meteorologist to investigate the storm.
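
For context, the probability of detection and false-alarm statistics quoted above are conventionally derived from the 2 x 2 warning verification contingency table; the abstract does not detail the verification procedure used here, so the following is only the standard formulation, a minimal sketch in which a denotes warned tornado events (hits), b unwarned events (misses), and c warnings that verified no event (false alarms):

\mathrm{POD} = \frac{a}{a + b}, \qquad \mathrm{FAR} = \frac{c}{a + c}

Under these definitions, a POD of 0.89 and a false-alarm statistic of 0.29 indicate that nearly nine of every ten tornado events were warned, while roughly three of every ten warnings did not verify.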