Coordinated contractor testing can help accelerate the acquisition process and improve the quality of equipment and programs.
by Mr. Harry H. Jenkins III
Test and evaluation (T&E) is a perennial target of criticism for the time and cost it adds to acquisition programs. But there are ways to minimize this impact. One way is to use contractor-generated test data.
As the acquisition community strives to “shift left”—to accelerate acquisition timelines and thus support earlier decision-making—using data derived from contractor testing could make testing more efficient, reduce testing costs and speed the fielding of equipment. The project manager (PM) for the Armored Multi-Purpose Vehicle (AMPV), the replacement for the M113 family of vehicles, is exploring the use of contractor testing and its impact on the acquisition process, especially when resources are constrained.
Typically, contractors test an article in accordance with their own test plan to determine broadly whether their design meets intended requirements. This testing is done in isolation with minimal input from the government, generally at the contractor’s own facilities. Contractual language added to the statement of work created the conditions for the AMPV contractor to successfully demonstrate the required performance specifications and for the government to obtain valid data to support the evaluation in one test, versus separate tests, saving time and money. The key is for the government and the contractor to share a common cause with the testing, creating advantages for each.
The PM AMPV’s effort dates to June 2016, when the program office described not only how it would conduct contractor-driven developmental and reliability testing, but also the potential for the U.S. Army Test and Evaluation Command (ATEC) to use those test data for evaluation purposes.
PM AMPV’s 2016 briefing on the subject gave rise to a white paper, coordinated between the Program Executive Office for Ground Combat Systems (PEO GCS), the PM’s parent organization, and ATEC, in collaboration with the Office of the Deputy Assistant Secretary of Defense for Developmental Test and Evaluation. The white paper explored approaches, guidelines, procedures and other considerations that would promote the acceptance of contractor test data to support ATEC evaluation efforts.
Contractor developmental and reliability testing became part of the T&E program for the AMPV in its approved Milestone B test and evaluation master plan. BAE Systems developed a detailed plan for its testing, which addressed design, engineering and production of the AMPV. The contractor test, conducted at the U.S. Army Aberdeen Test Center at Aberdeen Proving Ground, Maryland, was a two- to three-month test for each vehicle variant to “shake them down” and discover any design, quality and manufacturing issues early in the program. The contractor test was designed to use government test facilities and government testers. It followed internationally accepted test operating procedures and the AMPV system’s operational mode summary and mission profile.
The operational mode summary and mission profile describe the test conditions in which the vehicle is to operate and the amount of time that critical pieces of equipment are operational during the mission. For example, the AMPV general purpose vehicle must operate in conditions comprising 34 percent primary road surfaces, 38 percent secondary and 28 percent cross-country and hilly cross-country road surfaces. In a given combat day, the vehicle’s mission command equipment will operate for 22 hours, its primary weapon will fire 387 rounds and its engine will operate for 21 hours.
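To make the arithmetic concrete, the sketch below shows how a test planner might turn the terrain percentages cited above into a mileage allocation for a planned test. It is a minimal illustration in Python; the 2,000-mile total and the function name are hypothetical and are not drawn from AMPV program documents.

# Illustrative sketch: allocate planned test mileage across terrain types
# using the OMS/MP percentages cited in the article.

OMS_MP_TERRAIN = {
    "primary road": 0.34,       # fraction of total distance by surface type
    "secondary road": 0.38,
    "cross-country": 0.28,
}

def allocate_miles(total_test_miles: float) -> dict:
    """Split a planned test distance across terrain types per the OMS/MP."""
    return {terrain: round(total_test_miles * fraction, 1)
            for terrain, fraction in OMS_MP_TERRAIN.items()}

# Example: a hypothetical 2,000-mile contractor shakedown test
print(allocate_miles(2000))
# {'primary road': 680.0, 'secondary road': 760.0, 'cross-country': 560.0}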
Combining this detailed information on operations tempo with the use of government test facilities, testers and test procedures has enabled the contractor to support the design and development of the system. For the government, it provides the opportunity to use contractor test data to augment planned government testing, thus enhancing sample size, allowing for longer testing and broadening performance measurements.
A WIN-WIN EQUATION
To succeed with this expanded approach, test planning must create advantageous conditions for both the contractor and the government. This calls for contractually providing both the PM and ATEC the opportunity to review and comment on the contractor test plan so that they can shape it to fulfill the evaluation needs of the T&E community. The contractor wants to ensure that its equipment meets established performance specifications. The evaluator needs this verification to be performed a certain way for statistical validity.
There are additional conditions to be set in ATEC’s system evaluation plan as well, namely the T&E planning, execution and reporting guidelines to follow in order for ATEC to accept any program data provided by a contractor. For AMPV, these data covered primarily automotive performance, suitability and survivability, as the system has no offensive weapons.
The test planning also has to address where and how the testing is to be performed, under what conditions and for what duration (as described in the operational mode summary and mission profile), and how the data is to be collected and reported, among other factors.
Presenting, addressing and approving the concept of the government using contractor data in the program’s approved test and evaluation master plan allows the necessary acceptance by T&E stakeholders. For developmental testing, the stakeholders are the PM, the Office of the Deputy Assistant Secretary of Defense for Developmental Test and Evaluation, the Department of the Army, the Office of the Deputy Undersecretary of the Army for Test and Evaluation and ATEC.
The contractor test planning must address issues identified in ATEC’s system evaluation plan to justify reducing the government’s testing.
Contractually, the PM needs to ensure that the request for proposal and the subsequent contract describe the government’s expectations for conducting contractor testing and using contractor data. The contract must allow for review and approval of contractor test plans to enable the government to provide proper guidance.
The government uses various verification techniques (e.g., test, demonstration, inspection and analysis) to ensure that the systems or items being acquired meet performance requirements and the user’s needs. The type of verification techniques and the amount of T&E needed should be part of a contractor test plan. The government must ensure that proper procedures are part of the plan’s requirements verification section.
The contract also needs to address an ATEC inspection of any nongovernment test facilities and ATEC monitoring of test execution. ATEC needs to observe contractor testing to ensure that the system operates in the manner that Soldiers will use it.
Contractor testing typically takes one of two forms. In one, the contractor stresses its system in ways that can induce failures, and the resulting failure data can make the contractor reluctant to release test results to the government. In the other, the contractor treats its equipment with kid gloves because it is afraid to break it.
Then, when the system enters government testing, which replicates how the Soldier will use the system, testing uncovers a higher number of failures. This leads to delays to make time for redesigns, manufacturing and testing to ensure contractual compliance. Depending on the technology’s maturity—the technology readiness level—high failure rates may be acceptable. But if the technology readiness level is high (e.g., greater than 6 on the standard DOD readiness scale of 1 to 9), then high failure rates could indicate poor quality or design.
In short, bad news does not get better with time. It is always best to test in a robust, realistic way to identify failures early, providing time for correction if necessary, rather than hiding them by testing in unrealistic ways that pamper the system. Well-designed systems can operate as intended and do not induce delays in testing, thus satisfying requirements and saving test time and money.
KEEPING IT REAL
All parties involved must become comfortable with the risks of realistic testing. Contractors need to overcome their resistance to sharing data that may be critical of their designs, as this early feedback is exactly what the Army T&E community needs. The Army needs to be receptive to early discovery of issues and provide feedback to the contractor to mature the product and achieve the desired end state. Early discovery minimizes the expense of corrective actions or design changes to mature a concept.
The PM and ATEC can accept contractor data from nongovernment test facilities, but no single approved process, policy or overall guidance exists to fit every testing scenario. Audits of test sites and reviews of testing procedures and reporting requirements are necessary to assess each scenario on a case-by-case basis. In some cases in which test data already exist, ATEC will need to assess the pedigree of the data.
Combining government and contractor testing is also important in supporting reliability growth, the maturation of a system to achieve optimum reliability over its expected operating lifetime. Testing to the expected life of a system can identify “infant mortality,” or failures that occur very early in the life of a system and are associated with design shortcomings; steady-state failures, which occur randomly following the infant mortality phase; and wear-out failures, which come at the end of the life cycle.
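The three failure regimes described above are commonly modeled in reliability engineering with Weibull hazard rates, where a shape parameter below 1 gives the decreasing failure rate of infant mortality, a shape of 1 gives the constant rate of the steady-state phase and a shape above 1 gives the rising rate of wear-out. The short Python sketch below illustrates the idea with arbitrary parameters; it is a generic textbook model, not the AMPV program’s reliability methodology.

def weibull_hazard(t: float, beta: float, eta: float) -> float:
    """Instantaneous failure rate h(t) = (beta / eta) * (t / eta) ** (beta - 1)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

# Illustrative operating hours and an arbitrary characteristic life of 500 hours
for t in (10, 100, 1000):
    print(f"t={t:>5} h  "
          f"infant mortality={weibull_hazard(t, beta=0.5, eta=500):.5f}  "
          f"steady state={weibull_hazard(t, beta=1.0, eta=500):.5f}  "
          f"wear-out={weibull_hazard(t, beta=3.0, eta=500):.5f}")
# The first column falls with age, the second stays constant, the third rises.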
CONCLUSION
For AMPV, delays in contractor delivery significantly impacted the scheduled test execution. But because of the efforts of the T&E integrated product team in planning the contractor testing, the government was able to simply redesignate the executed contractor testing as government testing, saving several months of schedule and preventing a milestone slippage. Had this planning and these actions not taken place, it is unlikely that the program would have maintained the planned milestone schedule.
The bottom line is that using contractor data to address test and evaluation requirements for acquisition programs is possible, but it will require cooperation and planning by the acquisition and T&E communities. The T&E community needs more than an agreement about the testing and data. What is needed is an agreed-to process for resolving questions like those in the accompanying example quickly and easily.
In such cases, the T&E community will have to face the reality that its test, although combined with the contractor’s to reduce redundancy, must actually expand in scope to deal with the problems identified. Combined testing adds activities the contractor normally would not perform, but the overall benefit is the potential to reduce government testing on the back end. Additional testing may also be required to determine whether a fix effectively addressed the problem. This acceptance is key when combined testing is pursued for the sake of overall test or schedule reductions and efficiency.
Lastly, the test community must recognize that a combined test may gather more information, with greater cost or scope, than either of the two individually planned tests, as it is collecting data for two agencies. Nonetheless, the test can still reduce overall redundancy and create efficiency compared with two completely separate tests.
With the constant goal of streamlined acquisition and exercising better buying power, the use of contractor testing, with appropriate organizational coordination and planning, is a best practice to adopt.
For more information, contact the author at 443-861-9608 or DSN: 848-9608; or at harry.h.jenkins2.civ@mail.mil.
DISCLAIMER
While this paper was coordinated with PEO GCS, the views expressed herein are solely those of the author and do not necessarily reflect the views of PEO GCS, the U.S. Army Test and Evaluation Command, the U.S. Army or the Department of Defense.
MR. HARRY H. JENKINS III is a U.S. Army Test and Evaluation Command systems chair for the Mounted Systems Evaluation Directorate of the Army Evaluation Center. He holds an M.S. in engineering management from the University of Maryland, Baltimore County, and a B.S. in engineering from the University of Tennessee at Chattanooga. He has 26 years’ experience in acquisition, test and evaluation and is a member of the Army Acquisition Corps.
---------------
CLEAR AND COMMON EXPECTATIONS
Let’s say a fuel efficiency test requires operating a vehicle for three hours at a stable speed, on a defined road course, using defined driver procedures. During the test, a tire fails. Clearly, the test must stop to replace the tire.
However, conflicts can arise when trying to restart the test. One agency may want to change the procedure to gather more information about why the tire failed and choose not to complete the efficiency test. Another agency may want to restart the test from the beginning to ensure that it can gather the fuel efficiency data (even though a tire may fail again before completion).
To combine contractor test data with government test data, several fundamental criteria must match: decision support, the data, test procedures, test execution, reporting and test article configuration.
Decision support—Tests are planned for different reasons. Testing by the contractor supports its design, engineering and production decisions (adequacy of drawings, accuracy of output, quality, design performance, reliability, etc.), whereas government testing supports assessment ratings to meet requirements and satisfy mission capability, while also supporting risk assessments of the factors the contractor used to support its test decisions.
Combined testing from the two sources must support both organizations’ decision-making. The contractor’s decisions weigh the cost and benefit to its bottom line, which means it may benefit the contractor not to address or correct deficiencies, based on the cost. The government’s decision-making is based on a separate analysis of cost and benefit, weighing additional factors such as mission effect, attrition of equipment and loss of life.
Data—Data are defined by format, measurement, collection and system-unique requirements. To combine two sources, procedures must ensure that all four factors match and that the instrumentation can collect all data needed. This data authentication process should be relatively easy to establish: Set a standard for data and instrumentation that both agencies will use.
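A hedged sketch of what such a standard implies in practice appears below: before combining contractor and government records, confirm that every required data channel exists in both sources and is reported in the same units. The channel names and units are invented for illustration and do not come from any AMPV data standard.

REQUIRED_CHANNELS = {            # hypothetical shared data standard
    "vehicle_speed": "mph",
    "engine_hours": "hours",
    "fuel_consumed": "gallons",
}

def channel_mismatches(source_channels: dict) -> list:
    """Return a list of gaps or unit mismatches between a data source and the standard."""
    problems = []
    for channel, unit in REQUIRED_CHANNELS.items():
        if channel not in source_channels:
            problems.append(f"missing channel: {channel}")
        elif source_channels[channel] != unit:
            problems.append(f"{channel}: expected {unit}, got {source_channels[channel]}")
    return problems

# Example: a contractor feed that reports speed in km/h and omits fuel data
print(channel_mismatches({"vehicle_speed": "km/h", "engine_hours": "hours"}))
# ['vehicle_speed: expected mph, got km/h', 'missing channel: fuel_consumed']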
Test plans and procedures—Users of the data (for the AMPV, BAE Systems and, for the government, ATEC and PM AMPV) all should agree on a common test procedure and execution. Each agency has an objective to accomplish, and the test plans are tailored to meet the data and decision-making needs of all users. A single planning procedure is necessary to ensure that all decisions and data can be combined as well, so as not to mix apples and oranges. Procedures must also incorporate the decision-making process to account for test outcomes that will require modifying future steps in the test process.
Test execution—Both agencies must agree in advance what they will do while executing the testing and, most important, what they will do when testing reveals something unexpected (higher- or lower-than-expected performance, or a failure). For example, the vendor may want to demonstrate a capability such as top speed, whereas the government wants statistical assurance of the same metric, which may require more samples. Additionally, the government may want to look at the top speed as the system gets older to see how time and usage affect this capability.
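The difference between a one-shot demonstration and statistical assurance can be made concrete with a small example: from repeated top-speed runs, compute a one-sided lower confidence bound on the mean. The Python sketch below uses invented measurements and a standard t-table value; it illustrates the statistical point only and is not an AMPV test procedure.

import math
import statistics

def lower_confidence_bound(samples: list, t_critical: float) -> float:
    """One-sided lower bound on the mean: x_bar - t * s / sqrt(n)."""
    n = len(samples)
    return (statistics.mean(samples)
            - t_critical * statistics.stdev(samples) / math.sqrt(n))

# Hypothetical top-speed runs in mph; the t-table value for 95 percent
# one-sided confidence with n - 1 = 5 degrees of freedom is about 2.015.
runs_mph = [38.2, 39.1, 37.8, 38.6, 39.4, 38.0]
print(round(lower_confidence_bound(runs_mph, t_critical=2.015), 1))   # about 38.0
# A single demonstration run might show 39.4 mph, but the statistically
# supported claim from these six runs is a mean top speed of roughly 38 mph.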
Reporting—Reporting could be one of the easiest aspects to combine between organizations. But again, how data support the parties’ decisions may tailor the reporting of findings. It is possible that test planning does not have to address reporting at all, as long as there is agreement between both agencies. How is the information shared, for example? Is a formal report required, or is a briefing chart sufficient? A spreadsheet with results, or a database?
Test article configuration—This aspect should also be easy to combine. However, the reality is that configuration can change based on how the data support decisions. In particular, it may be desirable to change the configuration for design and engineering purposes, but to keep it stable or fixed for requirements and mission capability assessment.
Take software updates, a frequent example. There should be a plan as to when updates will occur. If testing reveals the need for an unplanned software update, the teams must come together to determine when to insert it into the schedule and how this unplanned “fix” impacts testing: Does it need to start from zero, or can it continue from the cut-in point? If the update adds capability, what is the impact on evaluation of the system?
This article is published in the October - December 2018 issue of Army AL&T magazine.