Abstract
At PerMIS 2002, the Air Force Research Laboratory presented a paper entitled "Metrics Schmetrics", which reported the results of a research study on how to determine the autonomy level of an unmanned aerial vehicle (UAV). The striking point made by the author, given the obvious importance of effectively evaluating and classifying autonomous algorithms, was the general dearth of effective metrics and taxonomies for determining a UAV's autonomy level. To fill this void, the author presented a framework, referred to as the AFRL Autonomy Framework, which identified 10 autonomy control levels (ACLs) and described the characteristics that differentiate the various levels. The purpose of this paper is to examine how those metrics have been applied and how they have evolved and expanded, in theory and practice, as a result of lessons learned during those applications. Specifically, this evolution in metrics development has culminated in an evaluation technique that blends emerging simulation technologies to create new UAV autonomy assessment methods. These methods take full advantage of visual virtual environments and statistical constructive simulations to examine an autonomy algorithm along the four dimensions of the Observe, Orient, Decide, Act (OODA) loop commonly applied by military aviators during the decision-making process. This approach has been used over the last three years to evaluate emerging autonomy technologies from the Army's Unmanned Autonomous Collaborative Operations (UACO) Science and Technology program. This unique technique for testing UAV autonomy effectiveness represents a significant advance for the UAV community. As autonomy algorithms proliferate to the point where multiple candidate algorithms are available for each platform, the ability to characterize the effectiveness of each algorithm will be critical to the successful implementation of autonomous capability.