

Blog Archives

Business & Marketing

Wage Determinations Online Program (WDOL)

 

The Wage Determinations Online (WDOL) Program provides a single location for federal contracting officers to use in obtaining appropriate Service Contract Act (SCA) and Davis-Bacon Act (DBA) wage determinations (WDs) for each official contract action. The website is available to the general public as well. Guidance in selecting WDs is provided in the WDOL.gov User’s Guide.

 

Website: Wage Determinations Online Program (WDOL) Home Page

 

The WDOL Program provides contracting officers direct access to the Department of Labor’s (DOL’s) “e98” website to submit a request for SCA WDs for use on official contract actions. In some instances, the WDOL.gov Program will not contain the appropriate SCA WD, and contracting officers will be directed to use DOL’s e98 website in order to obtain the required SCA WD. DOL will provide the contracting officer with an SCA WD through the e98 system.

 

AcqLinks and References:

Updated: 6/22/2018

Schedule Development

'What if' scenario analysis

 

‘What if’ scenario analysis is a simulation method that compares and measures the effects of different scenarios on a project schedule. It uses Schedule Network Analysis to determine the effects of various scenarios, such as delayed activities, strikes, bad weather, or late resources, on the project schedule. This analysis is used to plan for the risks posed by these scenarios and allows project personnel to:

  • Evaluate the feasibility of completing the project under unfavorable conditions,
  • Prepare contingency and response plans for project risks, and
  • Mitigate the impact of any schedule risks.

 

The most common simulation technique used in ‘What if’ scenario analysis is Monte Carlo analysis. A Monte Carlo analysis runs many simulations, each sampling uncertain activity durations, to calculate a distribution of possible outcomes for the total project, including the early start/early finish and late start/late finish dates.
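
As a minimal sketch of the idea only, the fragment below runs a Monte Carlo simulation over a simple serial schedule; the activity names, durations, and triangular-distribution assumptions are hypothetical and are not drawn from any program schedule.

import random

# Hypothetical activities: (most likely duration, pessimistic "what if" duration) in days
activities = {
    "design":    (20, 35),
    "fabricate": (30, 55),
    "test":      (15, 30),
}

def sample_duration(likely, pessimistic):
    # Triangular distribution; the optimistic duration is assumed to be 80% of the most likely value
    return random.triangular(0.8 * likely, pessimistic, likely)

def simulate_once():
    # One pass through a simple serial network: design -> fabricate -> test
    return sum(sample_duration(likely, pess) for likely, pess in activities.values())

# Run many iterations and summarize the distribution of total project duration
results = sorted(simulate_once() for _ in range(10_000))
print("50th percentile finish (days):", round(results[len(results) // 2], 1))
print("80th percentile finish (days):", round(results[int(len(results) * 0.8)], 1))

Comparing the 50th and 80th percentile finish dates against the baseline schedule shows how much margin a given ‘what if’ scenario would consume.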

 

AcqLinks and References:

Updated: 6/20/2018

Program Management

Working Groups

 

A Working Group (WG) is an interdisciplinary collaboration of people working on a project or problem that would be difficult to address under a traditional organizational structure or funding mechanism. The lifespan of a WG can last anywhere from a few months to several years, and the group is often composed as a Cross-Functional Team. Such groups have a tendency to develop a quasi-permanent existence once the assigned task is accomplished; hence the need to disband the WG once it has provided solutions to the issues for which it was initially convened. [1]

 

A few of the common Working Groups within the acquisition process are:

  • Analysis of Alternatives
  • Computer Resource
  • Defense IA/Security Accreditation
  • DIA/Joint Information Operations Threat
  • Intelligence Community Metadata
  • Interface Control
  • Logistics Support Management Team (LSMT)
  • Methods and Processes
  • Process Action Team (PAT)
  • Requirements Development
  • Requirements Interface
  • Resource Enhancement Project
  • System Safety
  • System Security
  • Technology Assessment
  • Test & Evaluation

Updated: 7/16/2017

Modeling & Simulation

Verification, Validation, and Accreditation

 

Verification, Validation, and Accreditation (VV&A) are three interrelated but distinct processes that gather and evaluate evidence to determine whether a model or simulation should be used in a given situation and to establish its credibility. The decision to use the simulation will depend on the simulation’s capabilities and correctness, the accuracy of its results, and its usability in the specified application. [1]

 

The purpose of VV&A is to assure the development of correct and valid simulations and to provide simulation users with sufficient information to determine whether a simulation can meet their needs.  VV&A processes are performed to establish the credibility of models and simulations.  Credibility depends on both the correctness of a simulation and the accuracy of its results.  The decision on whether a simulation provides the necessary degree of accuracy depends not only upon the inherent characteristics of the simulation, but also upon how the simulation will be used and upon the significance of any decisions that may be reached on the basis of the simulation’s outputs. [1]

 

Credibility for a simulation depends on its correctness: the level of confidence that its data and algorithms are sound, robust, and properly implemented, and that the accuracy of the simulation results will not substantially and unexpectedly deviate from the expected degree of accuracy.  Credibility also depends on its usability: the factors related to the use of the simulation, such as the training and experience of those who operate it, the quality and appropriateness of the data used in its application, and the configuration control procedures applied to it. [1]

 

The official DoD definitions for these processes are: [2]

  • Verification:  The process of determining that a model implementation and its associated data accurately represent the developer’s conceptual description and specifications.
  • Validation:  The process of determining the degree to which a model and its associated data provide an accurate representation of the real world from the perspective of the intended uses of the model.
  • Accreditation:  The official certification that a model, simulation, or federation of models and simulations and its associated data is acceptable for use for a specific purpose.
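
For illustration only, the sketch below shows the kind of check performed during validation: comparing simulation output against referent (real-world) data and testing it against an agreed accuracy criterion. The data values and the 5% tolerance are illustrative assumptions, not DoD-prescribed figures.

# Referent (measured real-world) values and model outputs for the same conditions -- hypothetical data
referent  = [10.2, 11.0, 12.7, 14.1]
simulated = [10.0, 11.3, 12.4, 14.6]
tolerance = 0.05  # acceptable relative error agreed for the intended use

def within_tolerance(ref, sim, tol):
    # True if every simulated point is within the relative-error tolerance of its referent value
    return all(abs(s - r) / abs(r) <= tol for r, s in zip(ref, sim))

print("Validation accuracy criterion met:", within_tolerance(referent, simulated, tolerance))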

 

See the Verification, Validation & Accreditation Recommended Practice Guide for more detailed information.

 

The DoD VV&A Documentation Tool (DVDT) addresses the need to capture VV&A information in a consistent form with consistent content. [3]

Military Standards related to VV&A
Standardized DoD VV&A documentation templates are located in MIL-STD-3022 “Documentation of Verification, Validation, and Accreditation (VV&A) for Models and Simulations”.

  • DI-MSSM-81750 Department of Defense (DoD) Modeling and Simulation (M&S) Accreditation Plan
  • DI-MSSM-81751 Department of Defense (DoD) Modeling and Simulation (M&S) Verification and Validation (V&V) Plan
  • DI-MSSM-81752 Department of Defense (DoD) Modeling and Simulation (M&S) Verification and Validation (V&V) Report

 

AcqLinks and References:

Updated: 7/9/2018

Systems Engineering

Verification Process

 

The Verification Process confirms that Design Synthesis has resulted in a physical architecture that satisfies the system requirements. Throughout a system’s life cycle, design solutions at all levels of the physical architecture are verified to meet specifications.

 

The objectives of the Verification process include using established criteria to conduct verification of the physical architecture from the lowest level up to the total system to ensure that cost, schedule, and performance requirements are satisfied with acceptable levels of risk. Further objectives include generating data (to confirm that system, subsystem, and lower-level items meet their specification requirements) and validating technologies that will be used in system design solutions. A method to verify each requirement must be established and recorded during requirements analysis and functional allocation activities. The three (3) steps in the verification process include: [1,2]

  1. Planning
  2. Execution
  3. Reporting

Verification Flow Chart

1) Verification Planning: [1]
Verification planning is performed at each level of the system under development. The following activities describe the development of a verification plan:

  • Verification Method and Level Assignments: Defines the relationships between the specified requirements and the method and level of verification. This activity typically yields a Verification Cross Reference Matrix (VCRM) for each level of the architecture and serves as the basis for the definition of the verification tasks (a minimal sketch of a VCRM follows this list). The level of verification is assigned consistent with the level of the requirement (e.g., system level, subsystem level, etc.). Verification methods include Analysis, Inspection, Demonstration, and Test (see below). The choice of verification methods must be considered an area of potential risk, since use of inappropriate methods can lead to inaccurate verification.
  • Verification Task Definition: Defines all verification tasks, with each task addressing one or more requirements. Defining good verification tasks requires the test engineer to have a sound understanding of how the system is expected to be used and its associated environments. An essential tool for the test engineer is the integrated architecture, which consists of the requirements, functional, and physical architectures. The functional architecture is used to support functional and performance test development; in combination with the physical architecture, a family of verification tasks is defined that will verify the functional, performance, and constraint requirements.
  • Verification Configuration Definition: Defines the technical configuration, resources, including people, and environments needed to support a given verification task. This may also include hardware or software to simulate the external interfaces to the system to support a given test.
  • Verification Scheduling: Defines the schedule for the performance of the verification tasks and determines which verification tasks are in sequence or in parallel and the enabling resources required for execution of the verification tasks.
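
As a minimal sketch of a Verification Cross Reference Matrix (VCRM), the fragment below maps requirements to a verification method and level; the requirement IDs, methods, and levels shown are hypothetical.

# Hypothetical VCRM rows: (requirement ID, verification method, verification level)
vcrm = [
    ("SYS-001", "Test",          "System"),
    ("SYS-014", "Analysis",      "System"),
    ("SUB-103", "Demonstration", "Subsystem"),
    ("CMP-177", "Inspection",    "Component"),
]

def tasks_for_level(matrix, level):
    # Requirements to be verified at a given level of the physical architecture
    return [(req, method) for req, method, lvl in matrix if lvl == level]

print(tasks_for_level(vcrm, "System"))   # -> [('SYS-001', 'Test'), ('SYS-014', 'Analysis')]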

 

Typical verification methods include: [2]

  • Analysis – the use of mathematical modeling and analytical techniques to predict the compliance of a design to its requirements based on calculated data or data derived from lower-level component or subsystem testing. It is generally used when a physical prototype or product is not available or not cost-effective. The analysis includes the use of both modeling and simulation.
  • Inspection – the visual examination of the system, component, or subsystem. It is generally used to verify physical design features or specific manufacturer identification.
  • Demonstration – the use of system, subsystem, or component operation to show that a requirement can be achieved by the system. It is generally used for a basic confirmation of performance capability and is differentiated from testing by the lack of detailed data gathering.
  • Test – the use of system, subsystem, or component operation to obtain detailed data to verify performance or to provide sufficient information to verify performance through further analysis. Testing is the detailed quantifying method of verification and is ultimately required in order to verify the system design.

2) Verification Execution: [1]
The performance of a given verification task with supporting resources. The verification task results, whether from a test, analysis, inspection or simulation, are documented for compliance or non-compliance with data supporting the conclusion.

3) Verification Reporting: [1]
Reports the compiled results of the executed verification plan and verifies the materials employed in system solutions can be used in a safe and environmentally compliant manner.

 

AcqTips:

  • Verification can be viewed as the intersection of systems engineering and test and evaluation.

 

AcqLinks and References:

Updated: 5/11/2018

Science & Engineering

Value Engineering

 

Value Engineering (VE) (FAR Part 48) is an organized/systematic approach that analyzes the functions of systems, equipment, facilities, services, and supplies to ensure they achieve their essential functions at the lowest Life-Cycle Cost (LCC) consistent with required performance, reliability, quality, and safety. Typically the implementation of the VE process increases performance, reliability, quality, safety, durability, effectiveness, or other desirable characteristics. [1]
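
As a minimal sketch of the Life-Cycle Cost comparison that underpins a VE trade (the alternative names, dollar figures, and 20-year service life below are illustrative assumptions):

def life_cycle_cost(acquisition, annual_operating, years, disposal):
    # LCC = acquisition cost + operating and support cost over the service life + disposal cost
    return acquisition + annual_operating * years + disposal

alternatives = {
    "baseline design": life_cycle_cost(1_200_000, 150_000, 20, 50_000),
    "VE alternative":  life_cycle_cost(1_350_000, 110_000, 20, 40_000),
}

for name, lcc in alternatives.items():
    print(f"{name}: ${lcc:,.0f}")
# The VE alternative is preferred only if it delivers the same essential functions
# and required performance at the lower life-cycle cost.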

 

VE is a management tool that can be used alone or with other management techniques and methodologies to improve operations and reduce costs. For example, the acquisition reform efforts, the emphasis on performance-based specifications, the design-build project delivery process, and the use of Integrated Product Teams (IPT) can include VE and other cost-reduction techniques, such as life-cycle costing, Cost As an Independent Variable (CAIV), concurrent engineering, and design-to-cost approaches. These techniques are effective as analytical tools in process and product improvement. VE can be used with Lean-Six Sigma processes to challenge requirements and identify functions that cost more than they are worth. [1]

 

Website: DoD Value Engineering Program

Website: FAR Part 48 “Value Engineering”

 

The DoD Value Engineering Program is set up to guide and monitor VE initiatives in the DoD. It also helps monitor the Value Engineering Change Proposal (VECP) rules and procedures. The program has two (2) distinct components:

  1. An in-house effort performed by DOD military and civilian personnel; and
  2. An external effort performed by DOD contractors and applied to contracts after Department approval.

 

The VE Management Advisory Group (VEMAG) is established to act as the agent across Services and agencies promoting the use of value engineering. The chair of the VEMAG is the Office of the Secretary of Defense (OSD) VE Program Manager (PM). The membership consists of one primary voting member from each Service and agency. [1]

 

Public Law
In the United States, value engineering is specifically spelled out in Public Law 104-106, which states “Each executive agency shall establish and maintain cost-effective value engineering procedures and processes.”

 

Federal Acquisition Regulation
FAR Part 48 “Value Engineering” governs VE within the Federal Government. According to FAR 48.201(a), unless exempted by an agency head, a VE incentive clause must be included in all contracts exceeding $100,000 except those for research and development (other than full-scale development), engineering services from non-profit organizations, personal services, commercial items, or a limited specific product development. Furthermore, use of the VE incentive clause is encouraged in smaller-dollar-value contracts where there is a reasonable chance for acquisition savings. For supplies or services contracts, FAR 52.248-1 “Value Engineering” is the incentive clause that provides the basis for contractors to submit Value Engineering Change Proposals (VECPs). [2]

 

AcqTips:  

  • Program Managers (PM) should encourage contractors to submit Value Engineering Change Proposals as a way of sharing cost savings and should also ensure that implementation decisions are made promptly.

 

AcqLinks and References:

Intelligence & Security

Validated Online Lifecycle Threat (VOLT)

 

The Validated Online Lifecycle Threat (VOLT) Report is a regulatory document for Acquisition Category (ACAT) I-III programs. These programs require a unique, system-specific VOLT Report to support capability development and PM assessments of mission needs and capability gaps against likely threat capabilities at Initial Operational Capability (IOC).

 

VOLT Reports are required for all other programs unless waived by the Milestone Decision Authority (MDA). Programs on the Director, Operational Test and Evaluation (DOT&E) Oversight List require a unique, system-specific VOLT Report unless waived by both the MDA and the DOT&E. DoD Components produce the VOLT Report. DIA validates the VOLT Report for ACAT ID and IAM programs; the DoD Component validates the VOLT Report for ACAT IC and IAC programs and below. For ACAT ID and IAM programs, DIA contact information and the VOLT Report request form are available at the following location.

The VOLT Report is defined as the authoritative threat assessment tailored for, and normally focused on, one specific ACAT I, II, or III program and authorized for use in the Defense Acquisition Management process. VOLT Reports involve the application of threat modules and are written to articulate the relevance of each module to a specific acquisition program or planned capability. At the discretion of the responsible MDA, VOLT Reports can also be used to support multiple programs that address like performance attributes, share an employment CONOPS, and have a similar employment timeline.

 

AcqLinks and References:

Updated: 6/20/2018

Software Management

USAF Software Management Guide

Air Force Software Management Guidebook


 

The USAF Weapons System Software Management Guide is intended to help acquisition and sustainment organizations more rapidly and more predictably deliver capability by learning from the past, establishing realistic and executable plans, applying systems engineering processes in a disciplined manner, and engineering systems right the first time. The purpose of this guidebook is to provide concise guidance for organizations that acquire or sustain systems involving significant development, integration, or modification of embedded software. It should not be used as policy or referenced in contracts. Rather, it provides an overview of the activities necessary for a successful system/software acquisition.

 

This guidebook addresses these known software issues and sets top-level expectations for the development, acquisition, management, and sustainment of weapon systems software and software embedded in DoD systems so that software-related problems that are now too typical can be understood and avoided in the future. The principles and techniques in this guidebook generally apply to all software domains, but the targeted domains include aeronautical, electronics, weapons, and space systems. The intended audience includes Project Managers (PM), systems/software engineers, and other engineers who have a software acquisition element in their project. Software engineering is an integral part of system acquisition and Systems Engineering (SE), and the guidance offered herein is intended to fit within and support current management and systems engineering approaches in DoD systems and acquisition programs.

 

Table of Contents
1.0 Introduction
2.0 Background
3.0 Software Process Guidelines for Air Force Acquisition Organizations
3.1 Software Aspects of Acquisition Program Planning
3.2 Estimating Software Size, Effort and Schedule
3.3 Management of Software Related Risks
3.4 Source Selection Considerations
3.5 Applying Earned Value Management to Software
3.6 Establishing and Managing Software Requirements
3.7 Acquisition Insight and Involvement
3.8 Safety Critical Systems
3.9 Non-Developmental Software
3.10 Software Assurance and Anti-Tamper Protection
3.11 Configuration Management
3.12 Life-Cycle Support
3.13 Lessons Learned
Appendix A Software in the Integrated Master Plan
Appendix B Software Content for the Statement of Objectives (SOO) and Statement of Work (SOW)
Appendix C Example Software Content for RFP Section L
Appendix D Example Software Content for RFP Section M
Appendix E Software Contracting Considerations
Appendix F Computer Systems and Software Criteria for Technical Reviews
Appendix G Process Considerations for Safety-Critical Systems
Appendix H Air Force Core Software Metrics
Appendix I Software Development Plan
Appendix J Glossary of Supporting Information

 

AcqLinks and References:

Updated: 6/7/2018

Contracts & Legal

Uniform Commercial Code

 

The Uniform Commercial Code (UCC) is a uniform act that governs the law of sales and other commercial transactions in all 50 states within the United States of America. The UCC deals primarily with transactions involving personal property.

 

The Code, as the product of private organizations, is not itself the law, but only a recommendation of the laws that should be adopted in the states. Once enacted by a state, the UCC is codified into the state’s code of statutes. A state may adopt the UCC verbatim or a state may adopt the UCC with specific changes. [1]

 

The UCC is a starting point for understanding DoD contracting. Article 2 “Sales” covers contract types and methods that are similar to those in the Federal Acquisition Regulation (FAR).

 

The UCC is composed of nine articles:

  • Article 1: General Provisions
  • Article 2: Sales
  • Article 3: Negotiable Instruments
  • Article 4: Bank Deposits and Collections
  • Article 5: Letters of Credit
  • Article 6: Bulk Sales
  • Article 7: Documents of Title
  • Article 8: Investment Securities
  • Article 9: Secured Transactions

 

AcqTips:    

  • Understanding the UCC will allow you to understand DoD contracts and the FAR better.

AcqLinks and References:

Updated: 7/18/2017

Risk & Safety Management

Typical Risk Sources

 

Typical risk sources include: [1]

  • Threat: The sensitivity of the program to uncertainty in the threat description, the degree to which the system design would have to change if the threat’s parameters change, or the vulnerability of the program to foreign intelligence collection efforts (sensitivity to threat countermeasure).
  • Requirements: The sensitivity of the program to uncertainty in the system description and requirements, excluding those caused by threat uncertainty. Requirements include operational needs, attributes, performance and readiness parameters (including Key Performance Parameters), constraints, technology, design processes, and Work Breakdown Structure (WBS) elements.
  • Technical Baseline: The ability of the system configuration to achieve the program’s engineering objectives based on the available technology, design tools, design maturity, etc. Program uncertainties and the processes associated with the “ilities” (reliability, supportability, maintainability, etc.) must be considered. The system configuration is an agreed-to description (an approved and released document or a set of documents) of the attributes of a product, at a point in time, which serves as a basis for defining change.
  • Test and Evaluation: The adequacy and capability of the test and evaluation program to assess attainment of significant performance specifications and determine whether the system is operationally effective, operationally suitable, and interoperable.
  • Modeling and Simulation (M&S): The adequacy and capability of M&S to support all life-cycle phases of a program using verified, validated, and accredited models and simulations.
  • Technology: The degree to which the technology proposed for the program has demonstrated sufficient maturity to be realistically capable of meeting all of the program’s objectives.
  • Logistics: The ability of the system configuration and associated documentation to achieve the program’s logistics objectives based on the system design, maintenance concept, support system design, and availability of support data and resources.
  • Production/Facilities: The ability of the system configuration to achieve the program’s production objectives based on the system design, manufacturing processes chosen, and availability of manufacturing resources (repair resources in the sustainment phase).
  • Concurrency: The sensitivity of the program to the uncertainty resulting from the combining or overlapping of life-cycle phases or activities.
  • Industrial Capabilities: The abilities, experience, resources, and knowledge of the contractors to design, develop, manufacture, and support the system.
  • Cost: The ability of the system to achieve the program’s life-cycle cost objectives. This includes the effects of budget and affordability decisions and the effects of inherent errors in the cost estimating technique(s) used (given that the technical requirements were properly defined and taking into account known and unknown program information).
  • Management: The degree to which program plans and strategies exist and are realistic and consistent. The government’s acquisition and support team should be qualified and sufficiently staffed to manage the program.
  • Schedule: The sufficiency of the time allocated for performing the defined acquisition tasks. This factor includes the effects of programmatic schedule decisions, the inherent errors in schedule estimating, and external physical constraints.
  • External Factors: The availability of government resources external to the program office that are required to support the program, such as facilities, resources, personnel, government-furnished equipment, etc.
  • Budget: The sensitivity of the program to budget variations and reductions and the resultant program turbulence.
  • Earned Value Management System: The adequacy of the contractor’s Earned Value Management (EVM) process and the realism of the integrated baseline for managing the program.

 

Additional areas that are analyzed during program plan development, such as manpower, ESOH, and systems engineering, provide indicators of additional risk. The program office should consider these areas for early assessment, since failure to do so could cause significant consequences in the program’s later phases. [1]

 

AcqLinks and References:

Updated: 6/19/2018