Defense Acquisitions Made Easy

Major Reviews

Test Readiness Review (TRR)

A Test Readiness Review (TRR) is conducted to determine whether the system under review is ready to proceed into formal testing by verifying that the test procedures are complete and comply with the test plans and descriptions. A TRR is normally conducted before testing each major configuration item, hardware and software alike, and provides management with assurance that the system has undergone a thorough test process and is ready for turnover to the next test phase. The Flight Readiness Review (FRR) is a subset of the TRR.

Checklist: Test Readiness Review (TRR) Risk Assessment

The TRR assesses test objectives, test methods and procedures, scope of tests, and safety, and confirms that required test resources have been properly identified and coordinated to support planned tests. The TRR verifies the traceability of planned tests to program requirements and user needs. The TRR also assesses the system under review for development maturity, cost/schedule effectiveness, and risk to determine readiness to proceed to formal testing.

The TRR should answer the following questions:

  1. Why are we testing?
  2. What is the purpose of the planned test?
  3. Does the planned test verify a requirement that is directly traceable back to a system specification or other program requirement?
  4. What are we testing (subsystem, system, system of systems, other)?
  5. Is the configuration of the system under test sufficiently mature, defined, and representative to accomplish planned test objectives and or support defined program objectives?
  6. Are we ready to begin testing?
  7. Have all planned preliminary, informal, functional, unit level, subsystem, system, and qualification tests been conducted, and are the results satisfactory?
  8. What is the expected result and how can/do the test evaluation results affect the program?
  9. Is the planned test properly resourced (people, test article or articles, facilities, data systems, support equipment, logistics, etc.)?
  10. What are the risks associated with the tests and how are they being mitigated?
  11. What are the hazards and ESOH risks associated with the specific testing?
  12. Have the necessary “Safety Releases” from the Program Manager (PM) been provided to developmental and operational testers prior to any test involving personnel?
  13. What is the fall-back plan should a technical issue or potential showstopper arise during testing?

The scope of the TRR is directly related to the risk level associated with performing the planned tests and the importance of the test results to overall program success. The associated risk level will vary as a system proceeds from component-level, to system-level, to system-of-systems-level testing. Early component-level tests may not require the same level of review as the final system-level tests. Sound judgment should dictate the scope of a specific test or series of tests.

Typical TRR success criteria include the following:

  • Completed and approved test plans for the system under test,
  • Completed identification and coordination of required test resources,
  • The judgment that previous component, subsystem, and system test results form a satisfactory basis for proceeding into planned tests, and
  • Identified risk level acceptable to the program leadership.
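As a rough illustration only (not from any DoD standard, and with invented criterion names), the success criteria above amount to a simple all-or-nothing checklist that a test team might track:

```python
# Hypothetical sketch: evaluating TRR success criteria as a checklist.
# Criterion names are illustrative, not drawn from any official source.
TRR_CRITERIA = [
    "test_plans_approved",
    "test_resources_coordinated",
    "prior_test_results_satisfactory",
    "risk_level_acceptable",
]

def trr_ready(assessment: dict) -> bool:
    """Return True only if every success criterion is satisfied."""
    return all(assessment.get(criterion, False) for criterion in TRR_CRITERIA)

assessment = {
    "test_plans_approved": True,
    "test_resources_coordinated": True,
    "prior_test_results_satisfactory": True,
    "risk_level_acceptable": False,  # leadership has not accepted residual risk
}
print(trr_ready(assessment))  # False: one criterion unmet
```

The point of the sketch is that the criteria are conjunctive: a single unmet criterion (here, an unaccepted risk level) means the system is not ready to proceed to formal testing.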

The Program Manager (PM) should address the scope of the TRR in the Systems Engineering Plan (SEP). Test and Evaluation (T&E) is an integral part of the Systems Engineering Processes of Verification and Validation.


Updated: 7/18/2017


System Functional Review (SFR)



The System Functional Review (SFR) is a technical review to ensure that the system’s functional baseline is established and can satisfy the requirements of the Initial Capabilities Document (ICD) or draft Capability Development Document (CDD) within the currently allocated budget and schedule. It also determines whether the system’s lower-level performance requirements are fully defined and consistent with the system concept and whether lower-level system requirements trace to top-level system performance requirements. The SFR is conducted during the Technology Maturation and Risk Reduction (TMRR) Phase of a program.

A critical component of an SFR is the development of representative operational use cases for the system. System performance and the anticipated functional requirements for operations, maintenance, and sustainment are assigned to sub-systems, hardware, software, or support after detailed analysis of the architecture and the environment in which it will be employed. The SFR determines whether the system’s functional definition is fully decomposed to its lower levels and whether the Integrated Product Teams (IPT) are prepared to start preliminary design.

The system’s lower-level performance requirements are evaluated to determine whether they are fully defined and consistent with the system concept, and whether traceability of lower-level system requirements to top-level system performance requirements and the CDD is maintained. This activity results in two major systems engineering products: the final version of the system performance specification and draft versions of the performance specifications describing the items below system level (item performance specifications).
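The traceability check described above can be sketched in a few lines: every lower-level requirement must point back to a top-level system performance requirement, and anything that does not is a gap to resolve. The requirement IDs below are invented for illustration:

```python
# Hypothetical sketch of a requirements-traceability check. All requirement
# IDs are invented; real programs track this in a requirements database.
top_level = {"SYS-001", "SYS-002"}  # top-level system performance requirements

lower_level = {
    "SUB-010": "SYS-001",  # parent requirement it traces to
    "SUB-011": "SYS-002",
    "SUB-012": None,       # orphan: no parent identified
}

# Any lower-level requirement whose parent is not a known top-level
# requirement represents a traceability gap.
orphans = [req for req, parent in lower_level.items() if parent not in top_level]
print(orphans)  # ['SUB-012']
```

A real review would run this kind of check over the full requirements database, but the logic is the same: the review passes only when the orphan list is empty.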

Completion of the SFR should provide the following:

  1. An established system functional baseline with traceability to lower-level performance requirements,
  2. An updated risk assessment for the Engineering, Manufacturing and Development (EMD) Phase,
  3. An updated Cost Analysis Requirements Description (CARD) or a CARD-like document based on the system functional baseline,
  4. An updated program development schedule including system and software critical path drivers, and
  5. A preliminary system level maintenance plan with updates applicable to this phase.


  • The SFR is the first review that begins to allocate requirements to separate sub-systems and organizational IPTs.
  • The SFR is where Interface Control Documents become necessary to define areas of responsibility and constraints requiring coordination across IPTs.


Updated: 9/19/2017


System Requirements Review (SRR)

A System Requirements Review (SRR) is a formal review conducted to ensure that system requirements have been completely and properly identified and that a mutual understanding between the government and the contractor exists. It ensures that the system under review can proceed into initial systems development and that all system and performance requirements derived from the Initial Capabilities Document (ICD) or draft Capability Development Document (CDD) are defined, testable, and consistent with cost, schedule, risk, technology readiness, and other system constraints.

Checklist: SRR Risk Assessment Pre-Award

Checklist: System Requirements Review Completion Checklist

An SRR assesses the system requirements captured in the system specification and ensures that they are consistent with the approved materiel solution, the ICD, enabling concepts, and the available technologies identified in the Materiel Solutions Analysis (MSA) phase. An SRR is important in understanding the performance, cost, and schedule impacts that the defined requirements will have on a system.

Completion of the SRR should provide the following:

  1. An approved system performance specification with sufficiently conservative requirements to provide for design trade space for the Engineering, Manufacturing and Development (EMD) phase,
  2. A preliminary allocation of system requirements to hardware, human, and software subsystems,
  3. A preliminary identification of all software components (tactical, support, deliverable, non-deliverable, etc.),
  4. A comprehensive risk assessment for EMD,
  5. An approved EMD phase Systems Engineering Plan (SEP) that addresses cost and critical path drivers, and
  6. An approved Life-Cycle Sustainment Plan (LCSP) defining the product support plan and sustainment concepts with the corresponding metrics.

The SRR reviews and evaluates the draft functional baseline and requirements analysis. All relevant documentation should be reviewed.

Note: IEEE 15288.2, “Standard for Technical Reviews and Audits on Defense Programs,” is the standard for technical reviews and audits performed throughout the acquisition life cycle for the US Department of Defense (DoD) and other defense agencies. This standard guides the DoD and the contractor on what is required during an SRR.


Updated: 4/11/2018


Physical Configuration Audit (PCA)



The Physical Configuration Audit (PCA) examines the actual configuration of an item being produced and is conducted around the time of the Full-Rate Production Decision. It verifies that the related design documentation matches the Configuration Item (CI) as specified in the contract and confirms that the manufacturing processes, quality control system, measurement and test equipment, and training are adequately planned, tracked, and controlled. The PCA validates many of the supporting processes used by the contractor in the production of the item and verifies other elements of the item that may have been impacted or redesigned after completion of the System Verification Review. The PCA is also used to verify that any elements of the CI that were redesigned after the completion of the Functional Configuration Audit (FCA) also meet the requirements of the CI’s performance specification. [1]

Detailed instructions and guidance for conducting PCA can be found in the:

Handbook: NAVAIR ILSM Configuration Management – Section 3

Checklist: PCA Risk Assessment

A PCA is normally conducted when the government plans to control the detail design of the item it is acquiring via the Technical Data Package. When the government does not plan to exercise such control or purchase the item’s Technical Data Package, the contractor should conduct an internal PCA to define the starting point for controlling the detail design of the item and establishing a product baseline. The PCA is complete when the design and manufacturing documentation match the item as specified in the contract. If the PCA was not conducted before the Full-Rate Production Decision (FRPD), it should be performed as soon as production systems are available. [1]

The PCA for a CI shall not be started unless the FCA has already been accomplished. After successful completion of the audit and the establishment of a Product Baseline (PBL), all subsequent changes are processed by formal engineering change action.
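The ordering rule above is a simple precondition, and can be sketched as such; the CI names and data structure here are invented for illustration:

```python
# Hypothetical sketch of the audit-ordering rule: a PCA for a Configuration
# Item may not start until that item's FCA has been accomplished.
# CI identifiers are invented.
completed_audits = {
    "CI-7": {"FCA"},  # FCA done; PCA may proceed
    "CI-9": set(),    # no audits completed yet
}

def can_start_pca(ci: str) -> bool:
    """Return True only if the FCA for this CI is on record as complete."""
    return "FCA" in completed_audits.get(ci, set())

print(can_start_pca("CI-7"))  # True
print(can_start_pca("CI-9"))  # False
```

After a successful PCA, the sketch would add "PCA" to the item's completed set and route all further changes through formal engineering change action rather than ad hoc edits.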

Software PCA [2]
The Software PCA is an examination of the as-coded total system software against its design or deliverable documentation. For Commercial off-the-Shelf (COTS) software this involves verification of correct documentation to support use of the software versions actually being delivered on the applicable media. Adequacy of identification and marking of all deliverable software media in accordance with contract requirements or best commercial practice is included in the PCA. The software PCA is normally conducted by government personnel at the trainer site after Government Final Inspection (GFI), as soon as all final software corrective actions have been implemented.

Hardware PCA [2]
The Hardware PCA is an examination of the as built system against its design documentation. A prerequisite for the PCA is the successful completion of an FCA. Preliminary Operation and Maintenance manuals and red lined preliminary engineering drawings may be used when appropriate. Audits of Commercial off-the-Shelf (COTS) equipment should be limited to verification of agreement of Model/Part Numbers with system drawing parts lists and correct vendor documentation supplied. The hardware PCA is conducted jointly by the contractor and the government at the contractor’s plant prior to shipment for new trainer acquisitions.

Preliminary PCA [2]
A Preliminary PCA is normally conducted by the government during new trainer acquisitions, typically in the Critical Design Review time frame, to review reference designator number assignment plans, drawing/list expectations, mutual understanding of contract specifications relative to the total hardware system, part identification and markings, in-process inspections, and the contractor’s CM/QA program and evidence thereof. The hardware PCA may also be conducted incrementally when the contract involves a large system; in that case, the final PCA should be conducted prior to the completion of GFI.



Post-Deployment Review (PDR)



A Post-Deployment Review (PDR) is used by a Program Manager (PM) to review a system, beginning at Initial Operational Capability (IOC), to verify whether the fielded system continues to meet or exceed thresholds and objectives for cost, performance, and support parameters approved at full-rate production. DoD policy requires that “The Services shall conduct periodic assessments of system support strategies vis-à-vis actual vs. expected levels of performance and support.” These reviews occur nominally every three (3) to five (5) years after IOC, or when precipitated by changes in requirements/design or performance problems, and should include, at a minimum:

  • Product Support Integrator/Provider performance;
  • Product improvements incorporated; and
  • Configuration control.
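The nominal three-to-five-year cadence after IOC can be sketched as a date window; the dates below are invented for illustration:

```python
# Hypothetical sketch of the post-deployment review cadence: reviews occur
# nominally every three to five years after IOC (or the last review).
# Uses naive year arithmetic; leap-day edge cases are ignored in this sketch.
from datetime import date

def next_review_window(last_review: date) -> tuple[date, date]:
    """Return the nominal (earliest, latest) dates for the next review."""
    earliest = date(last_review.year + 3, last_review.month, last_review.day)
    latest = date(last_review.year + 5, last_review.month, last_review.day)
    return earliest, latest

ioc = date(2020, 6, 1)  # invented IOC date
print(next_review_window(ioc))  # window opens 2023-06-01, closes 2025-06-01
```

In practice the window is advisory; as the policy text notes, a review can also be triggered early by requirement or design changes or by performance problems.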

PDRs continue as operational support plans execute (including transition from organic to contract support and vice versa, if applicable) and should be regularly updated depending on the pace of technology. The program manager should use existing reporting systems and operational feedback to evaluate the fielded system whenever possible.



Initial Technical Review (ITR)


The Initial Technical Review (ITR) is a multi-disciplined technical review to support a program’s initial Program Objective Memorandum (POM) submission in the Materiel Solutions Analysis (MSA) Phase. This review ensures a program’s Technical Baseline is sufficiently rigorous to support a valid cost estimate and enable an independent assessment. The ITR assesses the capability needs and materiel solution approach of a proposed program and verifies that the requisite research, development, test and evaluation, engineering, logistics, and programmatic bases for the program reflect the complete spectrum of technical challenges and risks.

Additionally, the ITR ensures that the historical and prospective drivers of system Life-Cycle Cost (LCC) have been quantified to the maximum extent possible and that the range of uncertainty in these parameters has been captured and reflected in the program cost estimates. The basic Cost Analysis Requirements Description (CARD) technical and programmatic guidance, tailored to suit the scope and complexity of the program, should be followed to ensure all pertinent design-related cost drivers are addressed.

Completion of the ITR should provide:

  1. A complete Cost Analysis Requirements Description (CARD)-like document detailing the operational concept, candidate materiel solutions, and risks,
  2. An assessment of the technical and cost risks of the proposed program, and
  3. An independent assessment of the program’s cost estimate, i.e., an Independent Cost Estimate (ICE).

Typical ITR success criteria include affirmative answers to the following exit questions:

  1. Does the CARD-like document capture the key program cost drivers: development costs (all aspects of hardware, human integration, and software), production costs, and operation and support costs?
  2. Is the CARD-like document complete and thorough?
  3. Are the underlying assumptions used in developing the CARD-like document technically and programmatically sound, executable, and complete?
  4. Have the appropriate technical and programmatic competencies been involved in the CARD-like document development, and have the proper SMEs been involved in its review?
  5. Are the risks known and manageable within the cost estimate?
  6. Is the program, as captured in the CARD-like document, executable?


  • Independent assessment is conducted by Subject Matter Experts (SMEs)



Functional Configuration Audit (FCA)



A Functional Configuration Audit (FCA) examines the functional characteristics of the configured product and verifies that the product has met the requirements specified in its functional baseline documentation approved at the Preliminary Design Review (PDR) and Critical Design Review (CDR). It has more to do with systems engineering and program management than official auditing. The FCA is a review of the configuration item’s test and analysis data to validate that the intended function meets the system performance specification. The FCA is normally performed prior to Low-Rate Initial Production (LRIP) and prior to, or in conjunction with, a Physical Configuration Audit (PCA). A successful FCA typically demonstrates that the Engineering and Manufacturing Development (EMD) product is sufficiently mature for entrance into LRIP. An FCA may also be conducted concurrently with the System Verification Review (SVR).

The issues addressed during an FCA are:

  • Readiness issues for continuing design, continuing verifications, production, training, deployment, operations, support, and disposal have been resolved.
  • Verification is comprehensive and complete
  • Configuration audits, including completion of all change actions, have been completed for all CIs
  • Risk management planning is/has been updated for production
  • Systems Engineering planning is updated for production
  • Critical achievements, success criteria and metrics have been established for production.

In large systems with complex Configuration Items (CI), the FCAs may be accomplished in increments. Each increment may address a specific functional area of the system and will document any discrepancies that are found in the performance capabilities of that increment. After all of the increments have been completed, a final (summary) FCA may be held to address the status of all of the action items that have been identified by the incremental meetings and to document the status of the FCA for the system or CI in the minutes and certifications. In this way, the audit is effectively accomplished with a minimum of complications. [1]

The Program Management Office (PMO) is ultimately responsible for the performance of audits. The Program Manager (PM) has overall disposition authority on audit results and reports. The PM’s designee, who may be the System Engineer (SE) or Logistics Management Specialist (LMS), will ensure audit requirements are properly delineated in the contract and that the FCA is properly executed.

– See Software Functional Configuration Audit





Integrated Baseline Review (IBR)



An Integrated Baseline Review (IBR) is a joint assessment conducted by the government Program Manager (PM) and the contractor to establish a mutual understanding of the Performance Measurement Baseline (PMB). This understanding provides for an agreement on a plan of action to evaluate the risks inherent in the PMB and the management processes that operate during program execution. PMs are required to conduct IBRs on all cost or incentive contracts that require the implementation of Earned Value Management (EVM) (contracts valued at or greater than $20 million). IBRs should be used to understand:

  • The scope of the PMB consistent with authorizing documents;
  • Management control processes;
  • Risks in the PMB associated with cost, schedules, and resources; and
  • Corrective actions where necessary.

Completion of the review should result in the assessment of risk within the PMB and the degree to which the following have been established:

  1. Technical scope of work is fully included and is consistent with authorizing documents,
  2. Key project schedule milestones are identified and supporting schedules reflect a logical flow to accomplish the work,
  3. Resources (budgets, facilities, infrastructure, personnel, skills, etc.) are available and are adequate for the assigned tasks,
  4. Tasks are planned and can be measured objectively relative to the technical progress,
  5. Rationales underlying the PMB are reasonable, and
  6. Management processes support successful execution of the project.

IBRs should be scheduled as early as practicable and the timing of the IBRs should take into consideration the contract period of performance. The process will be conducted not later than 180 calendar days (6 months) after:

  1. Contract award,
  2. Exercise of significant contract options, and
  3. Incorporation of major modifications.
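The 180-calendar-day rule above is straightforward to sketch as a due-date calculation; the contract dates below are invented for illustration:

```python
# Hypothetical sketch of the IBR timing rule: the review must be conducted
# no later than 180 calendar days after contract award, exercise of a
# significant option, or incorporation of a major modification.
# The trigger date is invented.
from datetime import date, timedelta

def ibr_due_date(trigger_event: date) -> date:
    """Return the latest allowable IBR date for a given trigger event."""
    return trigger_event + timedelta(days=180)

award = date(2024, 1, 15)  # invented contract award date
print(ibr_due_date(award))  # 2024-07-13
```

Note that the rule is stated in calendar days, so `timedelta(days=180)` is the right arithmetic; approximating with "6 months" would drift by a few days depending on the months involved.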

IBRs are also performed at the discretion of the PM or within a reasonable time after the occurrence of major events in the life of a program. These events may be completion of the Preliminary Design Review (PDR), completion of the Critical Design Review (CDR), a significant shift in the content and/or time phasing of the PMB, or when a major milestone such as the start of the production option of a development contract is reached. Continuous assessment of the PMB will identify when a new IBR should be conducted.




System Verification Review (SVR)



The System Verification Review (SVR) is a product and process assessment to ensure the system under review can proceed into Low-Rate Initial Production (LRIP) and Full-Rate Production (FRP) within cost, schedule, risk, and other system constraints during the Engineering, Manufacturing and Development (EMD) Phase.  It assesses the system functionality and determines if it meets the functional requirements in the Capability Development Document (CDD) and draft Capability Production Document (CPD) documented in the functional baseline. The SVR establishes and verifies final product performance and provides inputs to the CPD.

Checklist: System Verification Review (SVR) – 27 Sept 2010

Typical SVR success criteria include affirmative answers to the following exit questions:

  1. Does the status of the technical effort and system indicate operational test success (operationally effective and suitable)?
  2. Can the system satisfy the Capability Development Document (CDD) and draft Capability Production Document (CPD)?
  3. Are adequate processes and metrics in place for the program to succeed?
  4. Are the risks known and manageable?
  5. Is the program schedule executable within the anticipated cost and technical risks?
  6. Are the system requirements understood to the level appropriate for this review?
  7. Is the program properly staffed?
  8. Is the program’s non-recurring engineering requirement executable with the existing budget?
  9. Is the system producible within the production budget?

The SVR is often conducted concurrently with the Production Readiness Review (PRR) and Functional Configuration Audit (FCA). Product support IPT members should participate in the review to:

  • Address system supportability and, based on developmental testing or analysis, whether the sustainment features will satisfy the Capability Development Document/draft Capability Production Document and the Sustainment Key Performance Parameters (KPP) / Key System Attributes (KSA);
  • Verify that adequate processes are in place so the sustainment performance metrics can be used to help the program succeed in meeting user needs; and
  • Ascertain whether the system is supportable within the procurement, operations, and support budgets.

The SVR risk assessment checklist is designed as a technical review preparation tool and should be used as the primary guide for assessing risk during the review.




Flight Readiness Review (FRR)

A Flight Readiness Review (FRR) is a subset of the Test Readiness Review (TRR) and is applicable only to aviation programs. It assesses the readiness to initiate and conduct flight tests or flight operations. Typically, FRR approval requires that the aviation system be under Configuration Management (CM), that a flight clearance be issued by the technical authority, that flight test plan(s) be approved, and that discrepancy tracking and Risk Assessment processes be in place.

The FRR is a technical assessment establishing the configuration to be used in flight testing to ensure that the system has a reasonable expectation of being judged operationally effective and suitable.  This review assesses a system test environment to ensure that the system under review can proceed into flight test with airworthiness standards met, objectives clearly stated, flight test data requirements clearly identified, and an acceptable Risk Management Plan (RMP) defined and approved. [1]

An FRR shall be conducted prior to the first flight of any new air vehicle. For complex systems, an FRR shall be conducted with an assessment of each subsystem or Configuration Item (CI) prior to flight. An FRR is also required prior to the first flight following any major changes to hardware, software, envelope, or objectives not covered in a previous FRR. [1]


Updated: 2/17/2017