Approved for public release; distribution is unlimited.
Published: 1 December 2009
Air & Space Power Journal - Winter 2009
Views and Analyses
Maj Aaron Tucker, USAF*
Equipping, including research and development, is a primary responsibility of the Air Force.1 Yet, a loss of expertise during acquisition-reform initiatives and a lack of immediate and continuous involvement of test professionals have caused the service to struggle in its attempts to execute this critical task properly. Within the defense acquisition corps, these individuals contribute critical capabilities and expertise to the mission of supporting the materiel needs of the war fighter. To be fully effective, they must become involved in this acquisition process at the earliest stages. A proposed cadre of test professionals strikes a balance between system/mission experts and developmental test experts. These groups are developed along separate career paths that provide both recent operational experience and profound technical expertise to decision makers in the acquisition arena. A cadre of deliberately developed test professionals also seeds the ranks of senior officers with direct experience in acquisition. The result is a full integration of such professionals across a system’s life cycle, from initial definition of requirements through development and initial operating capability to sustainment of war-fighting capability in our nation’s defense.
Report after report has shown that there are fundamental problems with the way we buy major weapons systems.
—Senator Carl M. Levin, 6 May 2009
The relationship between the government’s and industry’s conduct of flight test has always involved a constructive tension that serves the requirements of the war fighter while pushing the leading edge of existing technology. Industry offers innovative, quality solutions to the war fighter’s requirements while government testers ensure that the products meet those requirements. The military has recognized the need to develop its own standards and perform an independent evaluation of commercially produced aircraft since their initial use in World War I. The Air Corps Act of 1926, however, reduced military flight test and evaluation to brief acceptance-test programs. By the end of World War II, so many deficiencies were detected late in the procurement process that an independent Flight Test Division was established to conduct test and evaluation independent of the contractors and project offices. To meet the need for practitioners of this independent testing, the military established a test pilot school to improve technical competencies and standardize flight-test methodologies.2
By the end of the twentieth century, advances in technology, political shifts in acquisition policy and funding levels, and mission requirements had affected the balance of roles, responsibilities, and authority between government and industry testers. A series of acquisition-reform initiatives in the 1990s generally decreased government involvement in test planning, execution, and reporting. At best, government testers became partners in the conduct and analysis of tests. At worst, they simply evaluated test results for the program office, resulting in a significant reduction of experienced government test personnel and a veritable freeze in accessing, training, and educating the next generation of test professionals.3 “The lack of skilled oversight is costing the government,” notes Sue C. Payton, the previous assistant secretary of the Air Force for acquisition. “I could save millions of taxpayer dollars . . . but I have to have the workforce with the domain knowledge that could be able to oversee it and manage it.”4
Senators Carl Levin (D-MI) and John McCain (R-AZ) of the Senate Armed Services Committee introduced the Weapon Systems Acquisition Reform Act of 2009 in order to “remedy a fundamentally broken defense acquisition system.”5 The defense acquisition program suffered from a loss of resident expertise in the 1990s and a lack of involvement of test professionals early in the process. This, along with other political, fiscal, and technical factors, has resulted in a series of major acquisition programs that cannot be executed either on budget or on time, thus degrading the ability of the war fighter to respond rapidly to emerging threats and maintain superiority in a turbulent world. “I can’t tell you how many programs have come to me that aren’t signable because they are improperly structured or funded,” says John J. Young, the previous undersecretary of defense for acquisition, technology, and logistics.6
The Air Force’s acquisition workforce declined from 57,000 personnel 20 years ago to 24,000 at the end of 2008.7 According to Payton, “If you look at the workforce, we were up around 500,000 people in acquisition in all of the Defense Department. It is down to about 200,000 now. . . . What we are managing is scarcity.”8 This scarcity refers not only to the total workforce but also to the proportion of government testers, which has declined compared to contractor personnel. The latter comprised 20 percent of the acquisition workforce in 1994, a share that more than doubled to 50 percent by 2003, thereby creating a dependence of inexperienced government officials on contractors. In the last 15 years, many programs have been adversely affected by poor judgment that can be attributed to an inexperienced acquisition/test workforce and funding reductions.9 The Air Force is not alone in its predicament; all of the services produced underfunded programs, offered poorly built budgets, and underestimated requirements as preludes to seeking a cash infusion from the Office of the Secretary of Defense.10 The problems seen in the defense acquisition corps in general are also felt in the developmental test and evaluation enterprise:
• A large number of the most experienced management and technical personnel in government and industry were lost with no adequate replacement pipeline.11
• Major personnel reductions strain the pool of the government’s experienced test personnel. A significant amount of developmental testing occurs without an appropriate degree of government involvement or oversight and, in some cases, with limited government access to contractor data.12
• The number of Air Force test personnel has declined by approximately 15 percent, and engineering personnel in supporting program offices have been reduced by as much as 60 percent in some organizations. Moreover, these reductions occurred during a time when programs have become increasingly complex.13
Upon taking office as the 19th chief of staff of the Air Force in August 2008, Gen Norton A. Schwartz identified acquisition excellence as one of his top initiatives.14 A critical part of any proposed solution to General Schwartz’s challenge is the deliberate development of a cadre of test professionals. As a subset of the larger defense acquisition corps, these professionals deliver capabilities and value critical to an effective acquisition program. The skills of the test professional must be applied across the acquisition process, from the initial generation of requirements to the sustainment of weapons systems.
Test professionals’ dedication to the needs of the war fighter is critical to their ability to translate needed war-fighting capabilities into a set of requirements. These needs serve as the genesis of a reliable system that functions effectively and efficiently in the intended operational environments against known and conceivable threats. The test professional’s early involvement in the acquisition process can help focus research efforts, define test assets, assess technical risks, determine test resources, and scope the test program. It is critical that such professionals become involved in the generation of requirements before the Joint Requirements Oversight Council locks them in. Several acquisition programs (e.g., the Joint Air-to-Surface Standoff Missile and the Space-Based Infrared System) significantly exceeded their budgets partly due to poorly written, unrealistic requirements.15 Test professionals are particularly suited to aligning operational requirements with test-related evaluations that verify and validate a system design. That process is often heuristically based and heavily influenced by their military judgment and prior test experience.
The current trend in industry to protest source-selection decisions serves as an added impetus for developing well-defined, verifiable requirements. Poorly articulated metrics have contributed to embarrassing bid protests, such as the $35 billion Air Force KC-X tanker-replacement debacle.16 Such protests are “dragging us down to the nth degree,” Payton observes. “Acquisition folks have not taken adequate measures to make sure requirements are testable and verifiable in contract award.”17 The acquisition community and test professionals are now held to the standard of writing requirements that are of practical use to a source-selection authority and unassailable in court. Anything less will delay needed capability to the war fighter.
Test and evaluation is perfectly situated to significantly affect the life-cycle cost of a system—at the crossover of cost and risk (see figure).18 The economies of detecting design deficiencies and implementing solutions on only a handful of test articles, compared to implementing a solution on a fielded system, support the cost of maintaining a developmental test capability. Roughly 75 percent of a system’s life-cycle costs are set in the initial design process, so an early, rigorous test program will save time and money over the life cycle of the system.19 In both the development of a new system and the long-term sustainment efforts that follow, test professionals are critical to ensuring that the system is fully and accurately tested and evaluated. Payton observes that “it’s more beneficial in the long run to spend an additional 20 percent on a program in the development phase (including prototypes or flyoffs) than to pay for 58 percent overruns in the future when a project is found to be lacking in technology or test procedures.”20 As test articles are designed and built, programmatic risk begins to decrease because design choices have been bounded or selected, technology has matured, and cost and schedule uncertainties come into focus.
Figure. Test at the crossover of cost and risk. (Reprinted from Aaron A. Tucker and Cihan H. Dagli, “Design of Experiments as a Means of Lean Value Delivery to the Flight Test Enterprise,” Journal of Systems Engineering 12, no. 3 [forthcoming], 203.)
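The arithmetic behind Payton’s comparison can be sketched with a short calculation. The baseline cost and development-phase share below are assumed figures for illustration only; they do not come from the article or from Payton’s remarks.

```python
# Illustrative arithmetic only: Payton's comparison of paying an extra
# 20 percent during development versus absorbing a 58 percent overrun later.
# The program figures below are assumptions, not data from the article.
baseline_total = 100.0   # assumed total program cost, in $M
dev_share = 0.40         # assumed fraction of cost spent in development

# Option A: spend 20 percent more during the development phase.
extra_dev = 0.20 * (dev_share * baseline_total)

# Option B: discover problems late and pay a 58 percent overrun on the program.
late_overrun = 0.58 * baseline_total

print(f"Extra development spend: ${extra_dev:.0f}M")    # $8M
print(f"Cost of a late overrun:  ${late_overrun:.0f}M")  # $58M
```

Under these assumed numbers, the early investment costs roughly one-seventh of the late overrun, which is the intuition behind front-loading rigorous test.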
Introducing the Weapon Systems Acquisition Reform Act of 2009, Senator McCain noted that the “key to defense acquisition programs performing successfully is getting things right from the start—with sound systems engineering, cost-estimating, and developmental testing early in the program cycle.”21 Integration of test professionals at the earliest stages of requirements generation is essential in order to realize the benefits of systems engineering by tracing measurable requirements through test to delivery of the capability. The skills that such individuals bring to the development team augment and focus the program manager’s task of managing the cost, schedule, and performance of a system. Tightly controlled performance metrics help rein in cost and schedule expansion. Excluding test personnel and their experience from the development phase is a short-sighted attempt to save money and results in increased life-cycle costs.22 While war fighters operate their equipment as established systems replete with the inertia that makes change difficult, test professionals can affect a system design when changes are still relatively cheap and easy.23 Further, each system must be considered as part of the larger, networked battlespace and integrated into a system of systems, which is most easily accomplished early in the process. Modern systems of systems fuse information from sensors across the battlespace, from ground to air to space. Fully testing such a capability substantially increases the complexity and expense of the test with each added sensor, which gives further impetus for the early involvement of test professionals.
Just as air systems demand thorough testing to ensure their safe and effective operation, space equipment also requires rigorous testing, which is encumbered by peculiar challenges. Space systems in orbit are unique pieces of hardware that are subject to a particularly unforgiving environment and generally cannot be directly accessed once placed into service. These systems are exposed to thermal shock and atmospheric extremes that are difficult, if not impossible, to test accurately before launch. Few, if any, identical systems are produced, and no ability exists to correct hardware discrepancies discovered after launch. Thermal/vacuum testing, one of the final evaluations of orbital systems, offers the best approximation of the hard vacuum of space. Such fidelity, however, remains extremely expensive and takes weeks to execute in one of a handful of facilities in the country. The availability of thermal/vacuum chambers that can accommodate large satellites is particularly limited. Integration testing of the orbital system and ground control is also very important. These system-level tests account for 35 to 50 percent of nonrecurring costs.24 Test professionals with operational experience are particularly critical in space acquisition programs because they occupy the best position for discovering discrepancies and correcting them before a system is placed in orbit.
Software is one of the few elements of a space system that can be developed and maintained after launch. In the last two decades, systems have become increasingly software intensive. In order to manage the complexity of software-intensive systems, many programs have adopted a block-upgrade strategy whereby each upgrade drives its own developmental test program, which merges into almost continuous test programs (e.g., F-16 Block Upgrades, C-17 Follow-on Flight Test Program, and Global Positioning System Blocks I through III). Sustainment test programs maintain a system’s relevancy and require the continuous involvement of test professionals with a steady focus on requirements and test discipline. These personnel must ensure delivery of the new capability in a block upgrade and prevent the degradation of baseline capabilities through regression testing.
The value of test professionals rests on systems-engineering principles, which hold that programmatic risk and uncertainty are probabilities that can be mitigated or eliminated. Their value lies in the independent evaluation of system performance, which supports fielding decisions. Test professionals help generate requirements, evaluate acquisition proposals, and offer their expert insight into technology and performance risk rather than simply select the lowest-cost proposals. If these individuals fail to perform their duties properly, the needed change may prove technically impossible or fiscally prohibitive.25 Similarly, the time for the system’s effectiveness may have passed, resulting in a defeat on the battlefield, the fielding of an enemy countermeasure, or a paralyzing war of attrition. Test professionals with an operational focus can break through crippling limitations by questioning assumptions and applying technology to provide new capabilities.
Test professionals are in continual demand for efficient programmatic practices: risk management, test planning, mission relevance, deficiency reporting, and programmatic wherewithal.26 Test professionals must develop an ability to understand and balance cost, schedule, performance, and their attendant risks and uncertainties. An understanding of the needs of the war fighter is critical to decisions about performance risk. Which capabilities can be cancelled, delayed, or modified, and which are not negotiable? Test professionals have a unique perspective that allows them to find problems or deficiencies before a fielding decision is made, to evaluate design fixes, and to prevent rework on production systems. Even within a single developmental test program, a skilled, experienced test team can save time and money by reducing the fly-fix-fly cycle. Developmental test is expensive, but not nearly as costly as the absence of skilled, experienced test professionals. The price of finding a deficiency late in a system’s life cycle and then implementing a design change can be quite high.27 For instance, space system programs spent 10 percent of the development schedule and 10 percent of their profit margins fixing problems not discovered until the final system-level thermal/vacuum test.28
Maj Gen David J. Eichhorn, commander of the Air Force Flight Test Center, believes that the “government’s role can’t be allowed to degrade into nothing more than deep pockets / check writers.”29 Complete information informs the decisions of acquisition authorities as they continually balance cost, schedule, and performance while steering a direct course to deliver combat capability to the war fighter. Test professionals have the responsibility of collecting and interpreting rigorous technical data from the earliest analyses of materiel solutions and technology-development efforts through sustainment. They should then educate acquisition decision makers on the underlying assumptions and probabilities associated with the system. Even before actual test data is available for a system, test professionals can advise decision makers using judgment born of education, training, and experience as practical testers. Source-selection teams can leverage the judgment of these professionals to evaluate proposed test programs.30
Tools such as design of experiments (DOE) and theory of constraints have been applied to overcome the debilitating need for absolute surety and the distractions of false dilemmas. Both tools employ a statistically rigorous analysis to determine the probability that a particular reality actually exists, based on a finite number of observations. DOE-based test plans enable the development of analyses and conclusions couched in terms of statistical confidence and power intervals. These statistical measures of the quality of test data are critical to sound, objective acquisition decisions. Further, test professionals can present decision makers with discrete levels of test resources required to answer a particular question—essentially buying increments of statistical confidence and power.31 One case study proposes that a DOE-based flight-test experiment can save 70 to 84 percent of the cost of traditional, one-factor-at-a-time approaches.32
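The run-count economics of a DOE-based test can be sketched briefly. The flight-test factors below are hypothetical examples, and the comparison shows only the basic mechanism (estimating interactions from combinations rather than replicating one-factor-at-a-time settings); it is not the cited case study’s analysis.

```python
# Hypothetical sketch: test-point counts for a one-factor-at-a-time (OFAT)
# approach versus a two-level full-factorial design of experiments (DOE)
# over the same flight conditions. Factors and levels are illustrative.
from itertools import product

factors = {
    "airspeed":     ("slow", "fast"),
    "altitude":     ("low", "high"),
    "gross_weight": ("light", "heavy"),
    "cg_position":  ("fwd", "aft"),
}

# Full factorial: every combination of factor levels, so interactions
# (e.g., airspeed x altitude) can be estimated, not just main effects.
doe_runs = list(product(*factors.values()))

# OFAT: vary one factor at a time from a baseline, replicating each
# setting (here, 3 times) to build confidence without a statistical model.
replicates = 3
ofat_runs = replicates * sum(len(levels) for levels in factors.values())

print(f"DOE runs:  {len(doe_runs)}")  # 2**4 = 16 combinations
print(f"OFAT runs: {ofat_runs}")      # 3 * 8 = 24 single-factor settings
```

Even this toy comparison shows the DOE covering the full factor space in fewer sorties than the replicated OFAT sweep, while also supporting statements of statistical confidence and power; larger factor spaces and fractional-factorial designs are where savings on the order cited in the case study become plausible.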
Test professionals, who have a variety of technical skill sets, include operators, engineers, and program managers trained and educated in the art and science of test. Each career path should be developed within a cadre of test professionals comprised of a balance of two types of experts:
1. System/mission experts who have depth, recency, and career focus in operations coupled with firsthand test experience.
2. Developmental test experts who may have a background in operations and maintain a career path focused on developmental test.
Both types of experts serve as the operator, engineer, and program-manager members of a combined test force (CTF), which can focus on a system (e.g., an F‑35 CTF) or a capability (e.g., a Global Reach CTF or a Global Power CTF). All members of a CTF contribute to the developmental test and evaluation program to develop capabilities for the war fighter. System/mission experts provide extensive system expertise to evaluate new capabilities and support the CTF’s training, standardization, and operations functions. Developmental test experts act to ensure that systems are evaluated safely, effectively, and efficiently through test and safety planning and reporting. Both share in the execution of test missions according to their specific skill sets—by exchanging ideas and experience, they enhance the CTF mission of providing decision-quality data for acquisition programs.
System/mission experts should be closely identified with the operational community. The Defense Science Board’s report on developmental test and evaluation recommends, as a minimum, making available a cadre of operational personnel to support developmental test and evaluation for Acquisition Category I (total procurement of more than $2.19 billion) and special-interest programs.33 System/mission experts can ensure that evaluations are conducted in the context of the mission, which can be evolving with emerging threats and new tactics, techniques, and procedures. They would evaluate the system in terms of mission capability and report the results in terms of operational significance to the user.34 This cadre brings operational considerations such as the utility of new capabilities to the developmental test program and seeds the future ranks of senior leaders with officers who have working-level experience in test and acquisition. A National Research Council study of 2008 characterizes inexperienced government and industry personnel in key leadership positions as the largest driver of cost, development time, and performance risk.35 A continuous flow of recent operational expertise to the test enterprise is justified by considering the benefits to the acquisition programs and the professional development of the individuals.
System/mission experts’ professional development broadens from a concentration on operations to include an acquisition perspective. After one or two operational assignments, an operator with a technical background and experience as an instructor in a major weapons system is eligible to join the cadre of test professionals. For individuals with solid operational credentials, an assignment in test and evaluation could become an alternative to a tour as a schoolhouse instructor or air liaison officer. Weapons school graduates would be particularly valuable to a test organization. The Defense Acquisition University’s online courses in acquisition and test and evaluation would serve as an entrée for novice test professionals, and training at the National Test Pilot School or an Air Force Test Pilot School short course could prepare operators and flight test engineers for operational and developmental test assignments. Moreover, flight-test assignments that include program management could provide staff officer experience for senior captains or junior majors, coupled with flying duties. Although out of the air and space expeditionary force’s deployment cycle for their weapons systems, test professionals could support individual deployment taskings commensurate with their skill sets, enabling them to stay connected with current operations and shoulder their fair share of the deployed mission.
The breadth of acquisition experience gained by a system/mission expert depends largely on the program, but most test professionals would become familiar with and have the opportunity to affect several programs in different stages of the acquisition process before returning to the war-fighting commands. Along with taking Defense Acquisition University courses, this experience would qualify the individual for an Acquisition Professional Development Program Level II or III certification in test and evaluation.36 The courses and training that lead to these certifications would help system/mission experts understand the capabilities and limitations of operational and developmental test and evaluation. Additionally, acquisition certifications and test experience would expand their eligibility for higher-level staff assignments in test, acquisition, plans, programs, and operational tactics and training. Finally, due to their involvement with next-generation systems, these experts would become very familiar with the newest system capabilities and would be uniquely qualified to deliver a system to the war-fighting command as part of the initial cadre in a leadership capacity. The Air Force should emphasize the value of a test and evaluation tour to ensure that system/mission experts are promoted, seeding the ranks of senior leaders with individuals who can draw on their direct experience with acquisition as they progress to roles of increasing responsibility.
Acquisition programs benefit from the valuable, recent operational experience of system/mission experts. Furthermore, these personnel can be drawn from the general pool of operators, engineers, and program managers, thus providing a flexible, responsive source from which to quickly increase or decrease manning according to the needs of the particular test program. The inclusion of system/mission experts in a cadre of test professionals also greatly enhances the amount of operational expertise organic to the acquisition program. Finally, system/mission experts who are operators can participate in the vast majority of test missions because only medium- and high-risk test missions (12 percent of test sorties) require graduates of a test pilot school to execute the mission.37 The fact that that requirement may be met by contractors or waived by the test leadership further increases the opportunity for system/mission experts to execute test missions.38
Drawing on their extensive knowledge of systems and tactics in major weapons systems, operator system/mission experts can serve as instructors or evaluators for the CTF and as command chief pilots for Air Force Materiel Command. They must take care, however, to overcome the philosophy of rigid training and standardization rules necessary in operational units. The developmental-test mission demands flexibility in order to execute tests safely and efficiently. This flexibility is enabled by test discipline, technical judgment, and outstanding airmanship of highly experienced aircrews. Test is not executed by inexperienced copilots or basic wingmen. The learning curve is always very steep, test professionals are rarely comfortable, and each person must carefully manage operational risk as it relates to the specific test mission. The risk of realizing a hazard is also carefully mitigated by the operating environment (e.g., daytime, good weather, sanitized airspace, and very long runways), a mission profile that has been vetted through multiple levels of technical and safety reviews, and the diverse team of experts charged with planning, executing, and monitoring highly instrumented test vehicles.
System/mission experts complement developmental test experts within a CTF. The system/mission expert’s career is weighted heavily toward operational assignments, whereas the developmental test expert starts with a technical background, adds operational experience, and continually builds momentum with assignments in test and acquisition in order to mature as an acquisition professional. The developmental workforce tends to be relatively static due to the extremely long lead time needed to select and train developmental test experts. To be effective, they should start developmental test assignments early in their careers after beginning with a base of operational expertise upon which to develop skills and experience. Operators, engineers, and program managers who are growing as developmental test experts need to learn their craft through a combination of education, training, and experience while undertaking a series of increasingly difficult tasks. Their professional development includes honing critical-thinking skills, technical acumen, and engineering judgment. The challenge involves developing their ability to move flexibly among developmental test programs and provide effective, system-generic test expertise while remaining operationally relevant. Balanced experience across major weapons systems is a critical skill for developmental test experts to possess.
The value of the dedicated test professional becomes evident when designing or executing a critical test point. A system must demonstrate its capabilities near the edge of the operating envelope when significant resources are at stake. Examples include a maximum-performance braking event when tire and wheel damage is expected, maximum weight operations on a dirt landing zone, or the release of an expensive weapon at the edge of the operating envelope. Graduates of test pilot school are the best candidates for assessing technical and safety risks in order to ensure that the test is designed and executed properly the first time. Their training allows them to design the test from theory, tempered by a sense of what is actually practical. When executing the test, operator and engineer developmental test experts approach the test point with situational awareness honed by controlling dynamic, multivariate systems. This enables them to observe the test as well as overall system performance and report on the test with the benefit of years of trained observation. Developmental test experts can meet the challenge of maintaining operational relevance by reserving time for participation in major exercises or operational deployments.
The common thread among the syllabi at all test pilot schools is that theoretical expertise supports safe, effective, and efficient flight test and accurate reporting. Each school strikes its own balance of instruction in performance, handling qualities, and systems. They all, however, attempt to surpass the simple goal of training a skill set by also educating a test professional’s critical thinking and judgment. For example, the US Air Force Test Pilot School’s curriculum received approval to begin granting a master of science degree in flight-test engineering, starting in May 2008. Intermediate Developmental Education in-residence credit as well as Defense Acquisition University equivalency (up to Level III Test and Evaluation coursework) had already been approved. This trend toward strategic education supports the progression of a developmental test expert. Test pilot school selection boards consider demonstrated officership as well as strong academic performance in the applied sciences. They don’t simply select a test pilot school student but a future developmental test professional. Test professionals progress to command test and development centers; hold senior acquisition, planning, and programming positions; or step into research to provide operational and test perspectives to technology-development efforts. In addition, the military test pilot schools are considered strategic assets because they provide a flow of expertise into industry as well as into the government test establishment.39
The years of technical development and training in the test skill set produce a developmental test expert whose decisions and data are well worth the cost of training. A flight-test engineer can pay back those training costs by designing a test plan that safely and effectively validates a system’s capabilities. A test pilot can justify those training costs by executing the test point on the first attempt and by accurately reporting the results. A cadre of deliberately developed test professionals justifies its cost many times over by enabling acquisition decisions based on rigorous, accurate data from a source that protects the interests of the war fighter and taxpayer.
The chief of staff of the Air Force’s initiative to regain acquisition excellence recognized that Congress and the Department of Defense had lost confidence in the service’s acquisition decisions at a time when resources must be carefully conserved. Test professionals are critical to providing accurate information for those acquisition decisions. They perform the necessary function of translating needed capabilities to requirements, managing development programs, and accurately and fully testing systems. The value of test professionals is realized through independent evaluation that exposes system flaws early in development when they can be solved easily and quickly. They also produce decision-quality data for acquisition decision makers who must be able to rely on those data. Therefore, it is critical that a cadre of deliberately developed professional testers be fully integrated into acquisition from the earliest stages.
This cadre of test professionals includes a necessary balance of system/mission experts and developmental test experts. The former include operators, engineers, and program managers who come from operational assignments and contribute mission focus and system expertise to test programs before returning to operational assignments. They can gain acquisition experience that will prove critical later in their careers as senior leaders in operations, acquisition, plans, or programs. Developmental test experts develop core skills in operations, engineering, and program management that are critical to planning and executing safe, efficient, and effective test programs. Their career path remains in test and acquisition to take advantage of experience and judgment that have been sharpened by the challenges of developmental test.
Fixing the problems in test and evaluation represents a complex undertaking yet is only a small part of achieving acquisition excellence. Deliberate development and investment in the acquisition corps in general, and in the test professional in particular, are necessary for the Air Force to answer the chief of staff’s call. Acquisition excellence is based on properly navigating a series of programmatic decisions fraught with risks and assumptions. Test professionals reduce those risks and assumptions with data and educate the judgment of decision makers to deliver needed capability to the war fighter and secure the national defense.
College Station, Texas
*The author is a doctoral student in the Department of Aerospace Engineering at Texas A&M University as part of the Air Force Institute of Technology’s Civilian Institute Program. Recently, he served as the assistant operations officer and as a C-5 and C-17 experimental test pilot in the 418th Flight Test Squadron, Air Force Flight Test Center, Edwards AFB, California. He would like to acknowledge the assistance of several individuals for their review of and exceptional insight into this article: Dr. George Ka’iliwai III, Col Terry Luallen, Col Dave Fedors, Col Wade Smith, Maj Jack Fischer, Dr. Michelle Tucker, and Mr. Brian Ai Chang.
1. 10 United States Code, sec. 8013(a)(1)(b)(4).
2. Jeff Veselenak and Lori Beutelman, “The Evolving Role of Developmental Test and Evaluation at the Air Force Flight Test Center,” American Institute of Aeronautics and Astronautics (AIAA) 2004-6856 (paper presented at the US Air Force Developmental Test and Evaluation Summit, Woodland Hills, CA, 16–18 November 2004), 1.
3. Defense Science Board, Report on Developmental Test and Evaluation (Washington, DC: Office of the Undersecretary of Defense for Acquisition, Technology, and Logistics, May 2008), 6–7.
4. Amy Butler, “Truth and Consequences,” Aviation Week and Space Technology 170, no. 3 (19 January 2009): 22–23.
5. Senator John S. McCain, “The Weapon Acquisition Reform Act of 2009” (floor statement, US Senate, Washington, DC, 23 February 2009).
6. David A. Fulghum, “Moving Targets: ANG Budgets, Bases and Equipment Are in a Swirl as Hawaii Prepares for F‑22s,” Aviation Week and Space Technology 169, no. 3 (21 July 2008): 60.
7. Lt Col William A. McGuffey, military assistant, Office of the Assistant Secretary of the Air Force (Acquisition), Pentagon, Washington, DC, to the author, e-mail, 21 April 2009.
8. Butler, “Truth and Consequences,” 22.
9. Defense Science Board, Report on Developmental Test and Evaluation, 4.
10. John Bennett, “Acquisition Chief Slams Air Force 2010 Budget: Official Claims Space Program Underfunded,” Air Force Times 69, no. 18 (17 November 2008): 10.
11. Defense Science Board, Report on Developmental Test and Evaluation, 4.
12. Ibid., 7.
13. Ibid., 4.
14. Gen Norton Schwartz, “The CSAF’s Perspective” (Washington, DC: Headquarters US Air Force, [1 August 2008]), http://www.afa.org/grl/pdfs/SLOC-CSAFsPerspective_1-Aug-08_v5.pdf (accessed 17 January 2009).
15. Amy Butler, “Mending Fences: Overly Complex Requirements Add to Problems Buying Weapon Systems at the Pentagon,” Aviation Week and Space Technology 169, no. 5 (4 August 2008): 28.
16. Butler, “Truth and Consequences,” 23.
17. Sue C. Payton, undersecretary for acquisitions, US Air Force (address, Edwards AFB, CA, 6 November 2008).
18. Aaron A. Tucker and Cihan H. Dagli, “Design of Experiments as a Means of Lean Value Delivery to the Flight Test Enterprise,” Journal of Systems Engineering 12, no. 3 (forthcoming), 203.
19. Interim Defense Acquisition Guidebook (Fort Belvoir, VA: Defense Acquisition University, 15 June 2009), 454, https://akss.dau.mil/dag/ (accessed 15 March 2009).
20. Butler, “Mending Fences,” 28.
21. McCain, “Weapon Acquisition Reform Act of 2009.”
22. Defense Science Board, Report on Developmental Test and Evaluation, 5.
23. Lt Col E. John Teichert, “Testing Efficacy: The Substantial Influence of Test Professionals,” Air Force Print News Today, 30 September 2008.
24. Earll M. Murman et al., Lean Enterprise Value: Insights from MIT’s Lean Aerospace Initiative (New York: Palgrave, 2002), 226.
25. Teichert, “Testing Efficacy.”
26. “Test Professional Training Breakout Panel” (presentation at the 41st Symposium and Banquet, Society of Experimental Test Pilots, Anaheim, CA, 29 September 2008).
27. Benjamin S. Blanchard and Wolter J. Fabrycky, Systems Engineering and Analysis, 4th ed. (Englewood Cliffs, NJ: Prentice-Hall, 2006), 145.
28. Murman et al., Lean Enterprise Value, 226.
29. Brig Gen David J. Eichhorn, “General INFO #24—History,” to personnel at Edwards AFB, CA, e-mail, 13 April 2008.
30. Defense Science Board, Report on Developmental Test and Evaluation, 25.
31. Tucker and Dagli, “Design of Experiments,” 209.
32. Maj Aaron A. Tucker, Gregory T. Hutto, and Cihan H. Dagli, “Application of Design of Experiments to Flight Test: A Case Study,” AIAA 2008-1632 (paper presented at the US Air Force Test and Evaluation Days Conference, Los Angeles, 5–7 February 2008), 7, http://scholarsmine.mst.edu/post_prints/pdf/AIAA_09007dcc805c8187.pdf (accessed 1 September 2009). See also Maj Aaron A. Tucker, Gregory T. Hutto, and Cihan H. Dagli, “Application of Design of Experiments to Flight Test: A Case Study,” AIAA’s Journal of Aircraft (forthcoming).
33. DOD Instruction 5000.02, Operation of the Defense Acquisition System, 8 December 2008, 33, http://www.dtic.mil/whs/directives/corres/pdf/500002p.pdf; and Defense Science Board, Report on Developmental Test and Evaluation, 31.
34. Defense Science Board, Report on Developmental Test and Evaluation, 50.
35. Ibid., 27.
36. Defense Acquisition University 2009 Catalog (Fort Belvoir, VA: Defense Acquisition University, 2009), 144–46, http://icatalog.dau.mil/onlinecatalog/doc/Catalog2009.pdf (accessed 2 September 2009).
37. “Air Force Flight Test Center Test Missions, January 2006–February 2009,” Center Operations Online (COOL) database (accessed 16 March 2009).
38. Air Force Flight Test Center Instruction 99-105, Test Control and Conduct, 1 April 2008, 2.
39. “Test Professional Training Breakout Panel.”
The conclusions and opinions expressed in this document are those of the author, cultivated in the freedom-of-expression academic environment of Air University. They do not reflect the official position of the US government, the Department of Defense, the United States Air Force, or Air University.